US8107721B2 - Method and system for determining poses of semi-specular objects - Google Patents

Method and system for determining poses of semi-specular objects

Info

Publication number
US8107721B2
US8107721B2 US12/129,386 US12938608A US8107721B2 US 8107721 B2 US8107721 B2 US 8107721B2 US 12938608 A US12938608 A US 12938608A US 8107721 B2 US8107721 B2 US 8107721B2
Authority
US
United States
Prior art keywords
coordinates
pose
images
camera
silhouette
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/129,386
Other versions
US20090297020A1 (en
Inventor
Paul A. Beardsley
Moritz Baecher
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Research Laboratories Inc
Original Assignee
Mitsubishi Electric Research Laboratories Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Research Laboratories Inc filed Critical Mitsubishi Electric Research Laboratories Inc
Priority to US12/129,386 priority Critical patent/US8107721B2/en
Assigned to MITSUBISHI ELECTRIC RESEARCH LABORATORIES, INC. reassignment MITSUBISHI ELECTRIC RESEARCH LABORATORIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BAECHER, MORITZ, BEARDSLEY, PAUL A.
Priority to JP2009018171A priority patent/JP5570126B2/en
Publication of US20090297020A1 publication Critical patent/US20090297020A1/en
Application granted granted Critical
Publication of US8107721B2 publication Critical patent/US8107721B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10152Varying illumination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A camera acquires a set of coded images and a set of flash images of a semi-specular object. The coded images are acquired while scanning the object with a laser beam pattern, and the flash images are acquired while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source. 3D coordinates of points on the surface of the object are determined from the set of coded images, and 2D silhouettes of the object are determined from shadows cast in the set of flash images. Surface normals are obtained for the 3D points from photometric stereo on the set of flash images. The 3D coordinates, 2D silhouettes and surface normals are compared with a known 3D model of the object to determine the pose of the object.

Description

FIELD OF THE INVENTION
The invention relates generally to computer vision, and more particularly to determining poses of semi-specular objects.
BACKGROUND OF THE INVENTION
Sensors that acquire 3D data are useful for many applications. For example, a system for automated ‘bin-picking’ in a factory can acquire 3D data of a bin containing multiple instances of the same object, and compare the 3D data with a known 3D model of the object, in order to determine the poses of objects in the bin. Then, a robot arm can be directed to retrieve a selected one of the objects. The pose of an object is its 3D location and 3D orientation at the location. One set of vision-based techniques for sensing 3D data assumes that the objects have non-specular surfaces, such as matte surfaces.
Another type of sensor determines the silhouette of the object, and compares the silhouette with the known 3D model of the object, in order to determine pose. One set of techniques for determining the silhouette assumes that the objects cast shadows when illuminated.
Non-Specular Surfaces
Vision-based techniques for sensing 3D data for non-specular surfaces include structured light, time-of-flight laser scanners, stereo cameras, moving cameras, photometric stereo, shape-from-shading, and depth-from-(de)focus.
All of these techniques assume either that incident light on the surface is reflected diffusely, and hence, reflected light is visible at any sensor with a line-of-sight to the surface, or they assume that visible features are actually physical features on the object surface with a measurable 3D physical location, and are not reflected features. These techniques degrade as the surface becomes less diffuse and more specular, because the above assumptions are no longer true.
Specular Surfaces
Vision-based techniques for sensing the 3D pose and shape of specular surfaces assume that there are features in a surrounding scene that are reflected by the specular surface. The features may be sparse, such as specular highlights arising from point light sources in the scene. If the features are sparse, then the sensed 3D shape of the surface is also sparse. This is undesirable for many applications. For example, it is difficult to determine a reliable pose of an object when the sensed features are sparse. The problem can be ameliorated by moving the camera or the identifying features relative to the surface, but this increases the complexity of the system and is time-consuming.
Semi-Specular Surfaces
There are few vision-based sensors known in the art for objects with semi-specular surfaces, such as brushed metal, where the surface reflects some of the incident light in a specular way, and some of the light in a diffuse way. This is because techniques that sense 3D data by using diffuse reflection receive less signal from a semi-specular surface, so they are less reliable. The techniques that determine the object silhouette using cast-shadows are also less reliable because the shadow is less pronounced when it is cast on a semi-specular background, as occurs with a bin of semi-specular objects for example. Techniques that work on specular objects are inapplicable because sharp reflected features are not visible.
Thus, there is a need for a method and system for determining poses of semi-specular objects that performs well on varied surface shapes such as planar and curved semi-specular surfaces.
SUMMARY OF THE INVENTION
The embodiments of the invention provide a method for determining a pose of a semi-specular object using a hybrid sensor including a laser scanner and a multi-flash camera (camera). The scanner and camera have complementary capabilities.
The laser scanning acquires high-quality 3D coordinate data of fronto-parallel parts of the surface of an object, with quality decreasing as the surface becomes more oblique with respect to the scanner; the laser scanning cannot acquire any data at the occluding contour of the object.
In contrast, the camera can acquire 2D flash images that show cast-shadows of the object, which can be used to determine the silhouette, but it does not acquire data elsewhere on the object surface.
Both of these methods work best on diffuse surfaces. Both degrade as the object becomes more specular. In the case of the laser scanner, the reflected laser pattern becomes weaker and less detectable as the object becomes more specular, with failure on the most oblique parts of the surface first, and then covering more and more of the surface. In the case of the camera, the ability to identify cast-shadows in the flash images decreases as the background objects on which the shadows are being cast become more specular.
Thus, both the scanner and the camera produce lower-quality information on semi-specular objects than on diffuse-surface objects. However, the method combines the 3D data and the 2D silhouette information, so that even though the data are of poor quality when taken individually, it is still possible to obtain an accurate pose of a semi-specular object when they are taken together.
More particularly, a camera acquires a set of coded images and a set of flash images of an object. The coded images are acquired while scanning the object with a laser beam pattern, and the flash images are acquired while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source. 3D coordinates of points on the surface of the object are determined from the set of coded images, and 2D silhouettes of the object are determined from shadows cast in the set of flash images. Surface normals are obtained using photometric stereo with the flash images. The 3D coordinates, 2D silhouettes and surface normals are used to determine the pose of the object.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a system and method for determining a 3D pose of an object that includes specular surfaces according to an embodiment of our invention;
FIG. 2 is a schematic of a camera and a light source relative to a surface according to an embodiment of our invention; and
FIG. 3 is an image of occluding contours of an object surface according to an embodiment of our invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIG. 1 shows a system and method 100 for determining a 3D pose 101 of an object 130 that includes semi-specular surfaces according to an embodiment of our invention. The 3D pose as defined herein means the 3D location and the 3D orientation of the object.
The system includes a hybrid sensor including a laser scanner 110 and a camera 120. The laser scanner 110 emits a laser beam 111 in a pattern 112 that can be used to determine 3D range data in a set of coded images 126 acquired by the camera 120. The pattern can use Gray-codes so that the pattern at each point on the surface of the object is unique. Thus, the method determines 3D coordinate data at each point on the surface from the set of coded images 126.
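As an illustration of this step (not part of the patent disclosure), the following Python sketch shows one conventional way to decode a Gray-coded pattern sequence into per-pixel stripe indices; the function and variable names, the thresholding scheme, and the noise margin are our own assumptions.

```python
import numpy as np

def decode_gray_code(bit_images, img_on, img_off, noise_margin=10.0):
    """Decode a stack of Gray-coded pattern images into per-pixel stripe indices.

    bit_images : list of HxW grayscale images, one per projected bit plane
    img_on, img_off : fully-lit and dark reference images used for thresholding
    Returns (index, valid): stripe index per pixel and a mask of reliable pixels.
    """
    on = img_on.astype(np.float32)
    off = img_off.astype(np.float32)
    threshold = (on + off) / 2.0
    valid = (on - off) > noise_margin        # too little diffuse return -> unreliable

    # Binarize each bit plane against the per-pixel threshold.
    bits = [img.astype(np.float32) > threshold for img in bit_images]

    # Gray code to binary: b[0] = g[0]; b[i] = b[i-1] XOR g[i], MSB first.
    binary = bits[0].copy()
    index = binary.astype(np.uint32)
    for g in bits[1:]:
        binary = np.logical_xor(binary, g)
        index = (index << 1) | binary.astype(np.uint32)
    return index, valid
```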
The camera 120 also acquires light 121 reflected by the object. The camera includes multiple flash units 125, e.g., LEDs, arranged at different locations, e.g., in an octagon or circular pattern, around the camera. The LEDs are bright point light sources that cast sharp shadows. The camera also acquires a set of flash images of the object. The flash images are used to determine the 2D silhouette of the object.
The set of coded images 126 is used to determine the 3D coordinates of the points 102 identified by laser scanning as well as the 2D silhouette 103. The significance is that the 3D points and the 2D silhouettes are measured from a single camera 120 so they are in the same coordinate frame. This makes it possible to project the 3D points to the 2D camera image plane. Alternatively, it is also possible to ‘back-project’ any point on a 2D silhouette to a 3D ray in 3D space, where it is in the same coordinate frame as the 3D point coordinates obtained by the laser scanning.
The laser scanning projects the laser beam pattern onto the surface of the object to acquire ranges or ‘depths’ to points 301 on the object's surface. The laser scanning data are sometimes called a range or depth map. The range map can be converted to the coordinates 102 of 3D points 301 on the surface of the object.
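For illustration only (the names and conventions below are our assumptions, not taken from the patent), a range map expressed as per-pixel depth can be converted to 3D camera-frame coordinates using the camera intrinsics:

```python
import numpy as np

def range_map_to_points(depth, K):
    """Convert an HxW depth map (0 where no data) into 3D points in the camera frame.

    K is the 3x3 camera intrinsic matrix; depth is measured along the optical axis.
    Returns an Nx3 array of points and the HxW validity mask.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    pts = np.dstack([X, Y, depth])
    mask = depth > 0
    return pts[mask], mask
```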
The camera also acquires a diffuse component of light reflected by the surface of the object by acquiring the set of flash images 127, one for each point light source 125. The light sources cast shadows at occluding contours, which reveal the silhouette of the object. By combining 110 the 3D scanner coordinate points 102 and the 2D silhouettes 103, the pose 101 of the object can be determined.
The laser scanning is less effective if the surface is specular because the reflected laser light shifts from the diffuse component to a specular component. This makes it more difficult to detect the diffuse component at the camera. Thus, laser scanning is only effective for the parts of the surface that are most fronto-parallel to the sensor. It is difficult to extract data for oblique parts of the surface. On a curved object, only a small amount of surface data can be determined.
The object is also illuminated by the point light sources (flash units) 125 arranged near the camera. For each light source, the corresponding image 127 includes the shadows cast by the object onto a nearby background surface. The cast shadows are used to infer the 2D occluding and self-occluding silhouettes as observed from the viewpoint of the camera. The occluding contour is obtained most reliably when the shadows are being cast on a diffuse surface, which is the case for an isolated object of any material on a diffuse background, or for stacked diffuse objects.
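A rough, illustrative sketch of how cast shadows in the flash images could be turned into occluding-contour pixels is given below; it follows the general ratio-image idea of multi-flash imaging, but the thresholds, voting rule, and variable names are our own assumptions rather than the patent's procedure.

```python
import numpy as np

def depth_edges_from_flash(flash_images, shadow_dirs, ratio_drop=0.5, min_votes=2):
    """Heuristically locate occluding-contour pixels from a multi-flash image set.

    flash_images : list of HxW images, one per LED
    shadow_dirs  : list of unit 2D vectors (dx, dy); for each LED, the image-plane
                   direction in which that LED casts shadows (away from the LED)
    """
    imgs = np.stack([im.astype(np.float32) for im in flash_images])
    max_img = imgs.max(axis=0) + 1e-6            # shadow-free composite
    votes = np.zeros(max_img.shape, dtype=np.int32)

    for img, d in zip(imgs, shadow_dirs):
        ratio = img / max_img                    # ~1 when lit, small inside a cast shadow
        gy, gx = np.gradient(ratio)
        # A steep drop of the ratio when stepping in the shadow direction marks the
        # lit-to-shadow transition, i.e. a candidate depth edge for this LED.
        directional = gx * d[0] + gy * d[1]
        votes += (directional < -ratio_drop).astype(np.int32)

    return votes >= min_votes                    # require agreement from several LEDs
```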
Our goal is to determine the 3D pose for each of an arbitrarily stacked pile of semi-specular objects in a bin 135. It is assumed that the objects are identical and all have the same known shape. For a particular object shape, and a full spectrum of possible materials from Lambertian to mirror-surface, there will be some point, as the object material becomes more specular, beyond which the camera cannot extract sufficient data to determine the pose.
The idea behind the invention is that the coded images 126 produce high-quality 3D coordinate information at semi-specular surfaces fronto-parallel to the scanner but no information at the occluding contours, while the flash images 127 produce shape information only at occluding contours. Thus, the scanner and the camera are complementary and mutually supporting.
The coded images produce the 3D coordinates 102 of the points 131, while the flash images produce the silhouette data 103. Therefore, the acquired 2D and 3D data are heterogeneous.
Our hybrid sensor is an unusual combination, and to the best of our knowledge such a system is not described in the prior art. The laser scanning uses structured light based on Gray-codes as described by Scharstein et al., “High-accuracy stereo depth maps using structured light,” Proc. Conference on Computer Vision and Pattern Recognition, 2003. The camera is described by Raskar et al., “Non-photorealistic camera: Depth edge detection and stylized rendering using multi-flash imaging,” ACM Siggraph, 2004 and U.S. Pat. No. 7,295,720. The method of pose computation is based on range map matching described by Germann et al., “Automatic pose estimation for range images on the GPU,” Sixth Intl Conf on Digital Imaging and Modeling, 2007, and in U.S. patent application Ser. No. 11/738,642, “Method and System for Determining Objects Poses from Range Images” filed by Pfister et al. on Apr. 23, 2007, all incorporated herein by reference. The prior art methods are adapted to our unusual hybrid sensor.
Hybrid Sensor Calibration
Calibration is a one-time preprocessing step. Calibration of the sensors can use a second, temporary camera 140. This is not essential but simplifies the processing of the data. The calibration determines the intrinsic and extrinsic stereo parameters of the laser scanner 110 and the camera 120.
To determine the intrinsic parameters of the laser scanner, we project a Gray-code pattern 112 onto a blank (white) surface, and determine the 3D coordinates 102 of the pattern using stereo images 151 acquired by a stereo camera 150. We store the 2D coordinates of the pattern on the laser scanning image plane, along with the corresponding 3D coordinates determined in the previous step. We repeat the above for two or more positions of the plane. This information can then be used with conventional plane-based camera calibration.
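As a non-authoritative sketch of the ‘conventional plane-based camera calibration’ referred to above, the stored 3D/2D correspondences can be fed to a standard calibration routine such as OpenCV's calibrateCamera; the function name and argument packaging below are our own assumptions.

```python
import cv2
import numpy as np

def calibrate_scanner_intrinsics(object_points, image_points, image_size):
    """Plane-based intrinsic calibration of the laser scanner, treated as an inverse camera.

    object_points : list (one entry per plane position) of Nx3 float32 arrays, the 3D
                    pattern coordinates measured with the stereo camera
    image_points  : matching list of Nx2 float32 arrays, the same pattern points on the
                    laser-scanning image plane
    image_size    : (width, height) of the scanner image plane
    """
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return K, dist, rms
```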
Then, we determine the extrinsic parameters between the camera and the laser scanner. We project the pattern on the blank surface and store corresponding points in the camera image and on the laser scanning image plane. We repeat the above for two or more positions of the plane and determine a fundamental matrix F between the camera and scanner. In computer vision, the fundamental matrix F is a 3×3 matrix, which relates corresponding points in stereo images. We decompose the matrix F to determine the extrinsics between the camera and scanner making use of intrinsic parameters of the camera and laser scanner.
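Illustratively (this is not the patent's own code), the fundamental matrix and the scanner-camera extrinsics can be obtained from the pooled correspondences as follows; the function name and the use of RANSAC are our own assumptions.

```python
import cv2

def scanner_camera_extrinsics(pts_cam, pts_scan, K_cam, K_scan):
    """Estimate the camera-to-scanner extrinsics from Nx2 float32 point correspondences
    pooled over two or more positions of the blank calibration plane."""
    F, inliers = cv2.findFundamentalMat(pts_cam, pts_scan, cv2.FM_RANSAC, 1.0)
    # With the intrinsics known, F gives the essential matrix, which decomposes into
    # the relative rotation (two candidates) and a unit-norm translation direction.
    E = K_scan.T @ F @ K_cam
    R1, R2, t = cv2.decomposeEssentialMat(E)
    return F, (R1, R2, t)
```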
We determine the 3D positions of the LEDs 125 by placing a planar mirror, augmented with calibration marks near the camera. We determine the 3D coordinates of the mirror plane π using the calibration marks and the stereo camera. We determine the 3D coordinates of the virtual (reflected) LEDs. We reflect the virtual LED coordinates in the mirror plane π to obtain the 3D coordinates of the LEDs with respect to the camera.
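The reflection of a virtual LED position across the mirror plane π is a simple closed-form operation, sketched below with illustrative names and values that are not taken from the patent.

```python
import numpy as np

def reflect_point_in_plane(point, plane_normal, plane_point):
    """Reflect a 3D point across a mirror plane given by a normal and a point on it;
    maps a virtual (mirrored) LED position to the real LED position."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(point - plane_point, n)
    return point - 2.0 * d * n

# Illustrative values: the virtual LED triangulated through the mirror, and the
# mirror plane recovered from its calibration marks.
virtual_led = np.array([0.12, -0.03, 0.45])
plane_n = np.array([0.0, 0.0, 1.0])
plane_p = np.array([0.0, 0.0, 0.30])
real_led = reflect_point_in_plane(virtual_led, plane_n, plane_p)
```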
The above steps provide a complete calibration of all intrinsic and extrinsic parameters for all optical components. This calibration information is used to determine 3D surface points using Gray-codes.
Photometric Stereo
Our hybrid sensor combines 110 the data acquired from the laser scanning 110 and the flash images 127. The data are used to determine coordinates 102 of the 3D points on the object 130, and to determine the silhouettes 103 of the occluding contours 300 of objects in the bin, see FIG. 3 for an example object with complex contours. It also enables us to determine surface normals n 104 at the 3D points using photometric stereo. The normals 104 indicate the orientation of the object.
Our method differs from conventional photometric stereo in that there is no need for an assumption that the light sources 125 are distant from the object 130, which is an issue in practical applications, such as bin picking.
Surface Normals
As shown in FIG. 2, the camera 120 observes a 3D point X 131 on the surface 132 of the object 130, with coordinates 102 known from the laser scanning, and records intensity I0. The first LED 125 is illuminated, and the camera records intensity I1. This puts a constraint on the surface normal
I 1 −I 0 =k v·n,  (1)
where v is the unit direction from the point X toward the LED (known because X and the LED position are calibrated), and k is an unknown constant depending on the brightness of the LED and the surface albedo at the point X. Brightness is assumed to be constant for all the LEDs, and hence k is also constant. Each LED can be used to generate one equation, and three or more equations provide a linear solution for the orientation of the normal n, up to unknown scale, which can be normalized to obtain a unit vector. This scheme fails if the surface is specular at the point X 131. Therefore, we use a threshold check on I i −I 0 to eliminate specularities.
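A minimal sketch of the near-light photometric-stereo solve described by equation (1) is shown below; the least-squares packaging, the rejection thresholds, and all names are our own assumptions.

```python
import numpy as np

def normal_from_flash_intensities(X, led_positions, I0, I_flash, spec_threshold=200.0):
    """Estimate the unit surface normal at a 3D point X from multi-flash intensities.

    Each LED i contributes one constraint  I_i - I_0 = k * (v_i . n),  with v_i the
    unit direction from X toward LED i (known because X and the LED positions are
    calibrated).  Stacking the constraints gives a linear system in s = k * n.
    """
    A, b = [], []
    for led, Ii in zip(led_positions, I_flash):
        diff = Ii - I0
        if diff <= 0 or diff > spec_threshold:   # shadowed, or a specular spike
            continue
        v = led - X
        A.append(v / np.linalg.norm(v))
        b.append(diff)
    if len(A) < 3:
        return None                              # not enough reliable constraints
    s, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return s / np.linalg.norm(s)                 # unit normal (k > 0 preserves the sign)
```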
The laser scanning produces the coordinates 102 of the 3D points from which surface normals n can be inferred. However, photometric stereo produces a per-pixel measurement at the camera, whereas 3D points require local surface fitting to generate a normal, which is a non-trivial process. A more significant benefit of doing photometric stereo, in addition to laser scanning, is described below.
Pose Determination
Our hybrid sensor generates heterogeneous data, i.e., coordinates of 3D points 102 from the laser scanning, and silhouettes 103 in the 2D images 127 of the occluding contours 300 from the camera. FIG. 3 shows example occluding contours 300 for a complex object. The pose determination can be done in 3D or 2D. For computational efficiency, we perform all operations in 2D on the image plane. Because the data may be incomplete, our method assumes that the occluding contours can also be incomplete.
The input for the method by Germann is a 3D range map, and the pose determination is a minimization over the six DOF of pose of a 3D distance error, to bring the object model into close correspondence with the range data. However, we modify the distance error of Germann significantly to work on our 2D image plane, and to include an error associated with the occluding contours. Note that Germann only considers 3D range data and not 2D images.
Pose Cost Function
We determine the 3D pose of the object that is consistent with the sensed data and a 3D model of the object. The 3D model of the object can be obtained by computer-aided design (CAD). The 3D model of the object is matched to the 2D images 127, and consistency is measured in 2D on the image plane which has both the 3D laser data and the 2D contour data.
The issue of initializing the pose is described below. For a current pose estimate, the object model is projected onto the image plane. The projected information defines a silhouette and also provides depth and surface normal information for pixels inside the contour of the object.
Our cost function has two components: a position error D1 for the projected model and the laser scanning 3D coordinate data 102; and a shape error D2 for the projected model and the occluding 2D contours.
The Position Error D1
The set of pixels corresponding to the projected model of the object is P. For a particular pose, the depth and surface normal of the object model are known at every pixel in the set P. The set of pixels where the laser scanning has acquired coordinate data is L. The depth of the object is known at each pixel in the set L. The surface normal of the target object is typically known at each pixel in the set L, but may be absent if the photometric stereo failed.
The position error D1 is measured over the pixels in the intersection of the sets P and L. The error at each pixel is
e 1=(r 1 −r 2)·λ,  (2)
where r 1 and r 2 are the respective depths of the projected model and the scanned data at the pixel, and λ is unity if the scanning process failed to determine a surface normal at the pixel, else
λ=1.0/max(cos 45°, n 1 ·n 2 ),  (3)
where n1 and n2 are the surface normals of the object model and the object at the pixel.
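The per-pixel accumulation of the position error D1 over P ∩ L might look like the following sketch; the array layouts, the use of an absolute value in the sum, and all names are our own assumptions.

```python
import numpy as np

def position_error_D1(model_depth, model_normals, scan_depth, scan_normals, P, L):
    """Sum e1 = (r1 - r2) * lambda over the pixels in the intersection of P and L.

    model_depth, scan_depth : HxW depth maps; model_normals, scan_normals : HxWx3
    unit-normal maps (NaN where unavailable); P, L : boolean HxW masks.
    """
    D1 = 0.0
    cos45 = np.cos(np.deg2rad(45.0))
    for y, x in zip(*np.nonzero(P & L)):
        r1, r2 = model_depth[y, x], scan_depth[y, x]
        n2 = scan_normals[y, x]
        if not np.all(np.isfinite(n2)):          # photometric stereo failed here
            lam = 1.0
        else:
            n1 = model_normals[y, x]
            lam = 1.0 / max(cos45, float(np.dot(n1, n2)))
        D1 += abs(r1 - r2) * lam                 # accumulated here as an absolute value
    return D1
```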
The Shape Error D2
The shape error D2 measures a consistency between the boundary of
the projected model and the occluding contours as imaged by the camera. The shape error D2 is a 3D error, so that it can be meaningfully summed with the position error D1.
The pixels b on a surface or boundary of the projected 3D model form a set B. The pixels m where the camera has detected an occluding contour form a set M. Each pixel m in the set M is paired with a closest pixel b in the set B. The set of pairs (b, m) is culled in two ways.
When there are multiple pairs with the same pixel m, we delete all pairs except the pair with a minimal distance between pixels b and m. For each pixel b and m, we indicate whether the pixel is inside or outside the object.
We also delete all pairs that contain pixels both inside and outside the object. The shape error D2 is summed over the resulting set of pairs (m, b). The error at each pair is
e 2 =d·tan θ,  (4)
where d is the distance to the object model at pixel b, and θ is the angle between the two camera rays through pixels m and b. For computational efficiency, the pixel-specific depth d can be replaced by a global value d0 that is the average distance to the 3D points 131.
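A sketch of the shape error D2 summed over the culled (m, b) pairs, using the global depth d0 and the angle between the two camera rays, is given below; all names are our own assumptions.

```python
import numpy as np

def shape_error_D2(pairs, d0, K):
    """Sum e2 = d0 * tan(theta) over pixel pairs ((mx, my), (bx, by)).

    m is a detected occluding-contour pixel, b its paired model-boundary pixel,
    d0 the average distance to the 3D points, K the camera intrinsic matrix.
    """
    K_inv = np.linalg.inv(K)
    D2 = 0.0
    for m, b in pairs:
        # Back-project both pixels to viewing rays and take the angle between them.
        rm = K_inv @ np.array([m[0], m[1], 1.0])
        rb = K_inv @ np.array([b[0], b[1], 1.0])
        cos_t = np.dot(rm, rb) / (np.linalg.norm(rm) * np.linalg.norm(rb))
        theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
        D2 += d0 * np.tan(theta)
    return D2
```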
Error Minimization
We minimize a combined error D=D1+D2 over the six degrees-of-freedom of the pose of the object. The pose estimate is initialized with multiple start-points around the view-sphere. The computation of the pairs (m, b) is potentially time-consuming. Therefore, we determine a distance map for the camera occluding contours before the minimization begins. Subsequently, each pixel b in the set B can use the distance map to identify its nearest pixel m in the set M.
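The precomputed distance map and the multi-start minimization could be organized as in the sketch below; the SciPy routines are one possible choice, and the cost callables, placeholder contour image, and names are our own assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import minimize

# Distance transform of the detected occluding contours, computed once.  With
# return_indices=True, every pixel also stores the coordinates of its nearest
# contour pixel m, so pairing a model-boundary pixel b with its closest m is a lookup.
contour_mask = np.zeros((480, 640), dtype=bool)       # placeholder contour image
contour_mask[200, 100:400] = True
dist_map, nearest_idx = ndimage.distance_transform_edt(~contour_mask,
                                                       return_indices=True)

def combined_cost(pose6, evaluate_D1, evaluate_D2):
    """D = D1 + D2 for a 6-DOF pose vector (3 rotation, 3 translation parameters)."""
    return evaluate_D1(pose6) + evaluate_D2(pose6, nearest_idx)

def minimize_pose(start_poses, evaluate_D1, evaluate_D2):
    """Run the minimization from multiple start points around the view-sphere and
    keep the best result, guarding against local minima."""
    best = None
    for start in start_poses:
        res = minimize(combined_cost, start, args=(evaluate_D1, evaluate_D2),
                       method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best
```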
Inter-Reflection
A problem in using a laser scanner on a specular object is caused by inter-reflection, whether between the object and the background, or between objects. For an inter-reflection, the detected signal is still a valid Gray-code, but the path of the light was not directly from the laser to the surface and back, so triangulation of the range data generates a spurious 3D point. To deal with this, we determine the consistency between the 3D coordinates of the points determined by the laser scanning and the surface normals determined by photometric stereo.
The two methods will be inconsistent in an area where there is an inter-reflection. Both methods may detect a signal for the inter-reflection, but their respective 3D computations are based on different light sources, i.e., the laser and the LEDs, so the spurious 3D points and spurious photometric surface normals generated for inter-reflection are not consistent. Inconsistent areas are eliminated from the pose determination.
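One way this consistency check could be implemented is sketched below: normals fitted locally to the laser 3D points are compared with the photometric-stereo normals, and pixels where the two disagree beyond an angular threshold are excluded. The threshold value and all names are our own assumptions.

```python
import numpy as np

def flag_interreflections(laser_normals, photo_normals, angle_thresh_deg=30.0):
    """Mark pixels where laser-derived and photometric normals disagree.

    Both inputs are HxWx3 unit-normal maps with NaN where no estimate exists.
    Returns a boolean HxW mask of pixels to exclude from the pose determination.
    """
    dot = np.sum(laser_normals * photo_normals, axis=2)
    valid = np.isfinite(dot)
    dot = np.clip(np.nan_to_num(dot, nan=1.0), -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(dot))
    return valid & (angle_deg > angle_thresh_deg)
```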
Effect of the Invention
A hybrid sensor system and method determines a pose of a semi-specular object. The method combines data from laser scanning and from multi-flash images. The method also deals with inter-reflections when scanning specular objects.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.

Claims (10)

1. A method for determining a pose of an object, comprising:
acquiring a set of coded images of an object by a camera while scanning the object with a laser beam pattern, in which the object is semi-specular;
acquiring a set of flash images of the object by the camera while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source;
determining 3D coordinates of points on the surface of the object from the set of coded images;
determining 2D silhouettes of the object from shadows cast in the set of flash images;
determining surface normals of the points on the surface of the object using the 3D coordinates and photometric stereo of the 2D silhouettes; and
combining the 3D coordinates, the 2D silhouettes, and the surface normals to determine a pose of the semi-specular object, wherein the combining further comprises:
determining a position error by comparing the 3D coordinates with a 3D model of the object in a particular pose;
determining a silhouette error by comparing the 2D silhouette of the object with a projected silhouette of the 3D model in the particular pose; and
finding the pose of the 3D model that minimizes a sum of the position error and the silhouette error.
2. The method of claim 1, in which the laser beam pattern uses Gray-codes.
3. The method of claim 1, in which a set of pixels corresponding to the 3D coordinates is L, and a depth at each pixel in the set L is known, and a set of pixels corresponding to the projection of the known 3D model of the object is P, and a depth at each pixel in the set P is known, and in which the position error at each pixel is

e 1=(r 1 −r 2)·λ,
where ri corresponds to the depths, and

λ=1.0/max(cos 45°, n1·n2),
where n1 and n2 are respective surface normals from the known 3D model, and from either the 3D coordinates or the flash images.
4. The method of claim 1, in which the silhouette error measures a consistency between the sensed silhouette and a boundary of the projection of the known 3D model.
5. The method of claim 1, in which a surface normal obtained from the 3D coordinates is compared with the surface normal obtained from the flash images, and areas where the two surface normals are inconsistent are marked as being laser inter-reflections.
6. The method of claim 1, in which inter-reflections are ignored.
7. The method of claim 1, in which the object is located in a bin with a plurality of identical objects; and further comprising;
selecting the object according to the pose.
8. The method of claim 1, in which the set of coded images form a depth map.
9. The method of claim 1, in which the silhouettes include occluding and self-occluding silhouettes.
10. An apparatus for determining a pose of an object, comprising:
a camera configured to acquire a set of coded images of an object while scanning the object with a laser beam pattern, in which the object is semi-specular, and a set of flash images of the object while illuminating the object with a set of light sources at different locations near the camera, there being one flash image for each light source;
means for determining 3D coordinates of points on the surface of the object from the set of coded images;
means for determining 2D silhouettes of the object from shadows cast in the set of flash images;
means for determining surface normals of the points on the surface of the object using the 3D coordinates and photometric stereo of the 2D silhouettes; and
means for combining the 3D coordinates, the 2D silhouettes, and the surface normals to determine a pose of the semi-specular object, wherein a position error is determined by comparing the 3D coordinates with a 3D model of the object in a particular pose, a silhouette error is determined by comparing the 2D silhouette of the object with a projected silhouette of the 3D model in the particular pose, and the pose of the 3D model minimizes a sum of the position error and the silhouette error.
US12/129,386 2008-05-29 2008-05-29 Method and system for determining poses of semi-specular objects Active 2030-12-01 US8107721B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/129,386 US8107721B2 (en) 2008-05-29 2008-05-29 Method and system for determining poses of semi-specular objects
JP2009018171A JP5570126B2 (en) 2008-05-29 2009-01-29 Method and apparatus for determining the posture of an object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/129,386 US8107721B2 (en) 2008-05-29 2008-05-29 Method and system for determining poses of semi-specular objects

Publications (2)

Publication Number Publication Date
US20090297020A1 US20090297020A1 (en) 2009-12-03
US8107721B2 true US8107721B2 (en) 2012-01-31

Family

ID=41379882

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/129,386 Active 2030-12-01 US8107721B2 (en) 2008-05-29 2008-05-29 Method and system for determining poses of semi-specular objects

Country Status (2)

Country Link
US (1) US8107721B2 (en)
JP (1) JP5570126B2 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090080036A1 (en) * 2006-05-04 2009-03-26 James Paterson Scanner system and method for scanning
US20140081441A1 (en) * 2011-11-18 2014-03-20 Nike, Inc. Generation Of Tool Paths For Shoe Assembly
US20150125034A1 (en) * 2013-11-05 2015-05-07 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
US20160267801A1 (en) * 2013-10-24 2016-09-15 Huawei Device Co., Ltd. Image display method and apparatus
US9489765B2 (en) 2013-11-18 2016-11-08 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US9536362B2 (en) 2010-09-27 2017-01-03 Apple Inc. Polarized images for security
US9684941B2 (en) 2012-10-29 2017-06-20 Digimarc Corporation Determining pose for use with digital watermarking, fingerprinting and augmented reality
CN106885514A (en) * 2017-02-28 2017-06-23 西南科技大学 A kind of Deep Water Drilling Riser automatic butt position and posture detection method based on machine vision
US9939803B2 (en) 2011-11-18 2018-04-10 Nike, Inc. Automated manufacturing of shoe parts
US9990565B2 (en) 2013-04-11 2018-06-05 Digimarc Corporation Methods for object recognition and related arrangements
US10055881B2 (en) 2015-07-14 2018-08-21 Microsoft Technology Licensing, Llc Video imaging to assess specularity
US10152634B2 (en) 2013-11-25 2018-12-11 Digimarc Corporation Methods and systems for contextually processing imagery
US10194716B2 (en) 2011-11-18 2019-02-05 Nike, Inc. Automated identification and assembly of shoe parts
US10372191B2 (en) 2011-05-12 2019-08-06 Apple Inc. Presence sensing
US10393512B2 (en) 2011-11-18 2019-08-27 Nike, Inc. Automated 3-D modeling of shoe parts
US10402624B2 (en) 2011-05-12 2019-09-03 Apple Inc. Presence sensing
CN111006615A (en) * 2019-10-30 2020-04-14 浙江大学 Flat surface feature scanning imaging device and method
US10817594B2 (en) 2017-09-28 2020-10-27 Apple Inc. Wearable electronic device having a light field camera usable to perform bioauthentication from a dorsal side of a forearm near a wrist
US11317681B2 (en) 2011-11-18 2022-05-03 Nike, Inc. Automated identification of shoe parts

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2458927B (en) * 2008-04-02 2012-11-14 Eykona Technologies Ltd 3D Imaging system
US8670606B2 (en) * 2010-01-18 2014-03-11 Disney Enterprises, Inc. System and method for calculating an optimization for a facial reconstruction based on photometric and surface consistency
US9317970B2 (en) 2010-01-18 2016-04-19 Disney Enterprises, Inc. Coupled reconstruction of hair and skin
US9245375B2 (en) * 2010-09-16 2016-01-26 Siemens Medical Solutions Usa, Inc. Active lighting for stereo reconstruction of edges
US8165403B1 (en) * 2010-11-19 2012-04-24 Mitsubishi Electric Research Laboratories, Inc. Method and system for determining poses of specular objects
US8811767B2 (en) * 2011-03-15 2014-08-19 Mitsubishi Electric Research Laboratories, Inc. Structured light for 3D shape reconstruction subject to global illumination
GB2490872B (en) 2011-05-09 2015-07-29 Toshiba Res Europ Ltd Methods and systems for capturing 3d surface geometry
JP2013101045A (en) * 2011-11-08 2013-05-23 Fanuc Ltd Recognition device and recognition method of three-dimensional position posture of article
KR101918032B1 (en) 2012-06-29 2018-11-13 삼성전자주식회사 Apparatus and method for generating depth image using transition of light source
JP6429772B2 (en) * 2012-07-04 2018-11-28 クレアフォーム・インコーポレイテッドCreaform Inc. 3D scanning and positioning system
US8913825B2 (en) * 2012-07-16 2014-12-16 Mitsubishi Electric Research Laboratories, Inc. Specular edge extraction using multi-flash imaging
US9036907B2 (en) * 2012-07-16 2015-05-19 Mitsubishi Electric Research Laboratories, Inc. Method and apparatus for extracting depth edges from images acquired of scenes by cameras with ring flashes forming hue circles
JP6250035B2 (en) * 2012-10-31 2017-12-20 サムスン エレクトロニクス カンパニー リミテッド Depth sensor-based reflective object shape acquisition method and apparatus
JP2014092461A (en) * 2012-11-02 2014-05-19 Sony Corp Image processor and image processing method, image processing system, and program
US20140192210A1 (en) * 2013-01-04 2014-07-10 Qualcomm Incorporated Mobile device based text detection and tracking
JP6184237B2 (en) * 2013-08-07 2017-08-23 株式会社東芝 Three-dimensional data processing apparatus, processing method thereof, and processing program thereof
WO2015112078A1 (en) * 2014-01-23 2015-07-30 Performance Sk8 Holding Inc. System and method for manufacturing a board body
JP6679289B2 (en) * 2015-11-30 2020-04-15 キヤノン株式会社 Processing device, processing system, imaging device, processing method, processing program, and recording medium
GB201607639D0 (en) 2016-05-02 2016-06-15 Univ Leuven Kath Sensing method
CN107121131B (en) * 2017-04-06 2019-06-25 大连理工大学 A kind of horizontal relative pose recognition methods of binocular camera
DE102017118767B4 (en) * 2017-08-17 2020-10-08 Carl Zeiss Industrielle Messtechnik Gmbh Method and device for determining dimensional and / or geometric properties of a measurement object
CN107562226A (en) * 2017-09-15 2018-01-09 广东虹勤通讯技术有限公司 A kind of 3D drafting systems and method
JP6886906B2 (en) * 2017-10-10 2021-06-16 東芝テック株式会社 Readers and programs
CN110827360B (en) * 2019-10-31 2022-07-12 华中科技大学 Photometric stereo measurement system and method for calibrating light source direction thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030038822A1 (en) * 2001-08-14 2003-02-27 Mitsubishi Electric Research Laboratories, Inc. Method for determining image intensities of projected images to change the appearance of three-dimensional objects
US7295720B2 (en) 2003-03-19 2007-11-13 Mitsubishi Electric Research Laboratories Non-photorealistic camera
US20080260238A1 (en) 2007-04-23 2008-10-23 Hanspeter Pfister Method and System for Determining Objects Poses from Range Images
US20090080036A1 (en) * 2006-05-04 2009-03-26 James Paterson Scanner system and method for scanning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3358584B2 (en) * 1999-03-30 2002-12-24 ミノルタ株式会社 3D information input camera
JP2001235314A (en) * 2000-02-22 2001-08-31 Minolta Co Ltd Photographic system and two-dimensional image pickup apparatus for use therein
JP3859574B2 (en) * 2002-10-23 2006-12-20 ファナック株式会社 3D visual sensor
JP4372709B2 (en) * 2005-03-25 2009-11-25 シーケーディ株式会社 Inspection device
JP4390758B2 (en) * 2005-09-08 2009-12-24 ファナック株式会社 Work take-out device
US7711182B2 (en) * 2006-08-01 2010-05-04 Mitsubishi Electric Research Laboratories, Inc. Method and system for sensing 3D shapes of objects with specular and hybrid specular-diffuse surfaces

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030038822A1 (en) * 2001-08-14 2003-02-27 Mitsubishi Electric Research Laboratories, Inc. Method for determining image intensities of projected images to change the appearance of three-dimensional objects
US7295720B2 (en) 2003-03-19 2007-11-13 Mitsubishi Electric Research Laboratories Non-photorealistic camera
US20090080036A1 (en) * 2006-05-04 2009-03-26 James Paterson Scanner system and method for scanning
US20080260238A1 (en) 2007-04-23 2008-10-23 Hanspeter Pfister Method and System for Determining Objects Poses from Range Images

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Germann et al, Automatic Pose Estimation for Range Images on the GPU, Sixth Intl Conf on Digital Imaging and Modeling, 2007.
Huber et al. "Using a Hybrid of Silhouette and Range Templates for Real-time Pose Estimation" IEEE Proceedings of Int. Conf. on Robotics and Automation, Apr. 2004, pp. 1652-1657. *
Raskar et al., "Non-photorealistic camera: Depth Edge Detection and Stylized Rendering Using Multi-Flash Imaging," ACM Siggraph, 2004.
Scharstein et al., "High-Accuracy Stereo Depth Maps Using Structured Light," Proc. Conference on Computer Vision and Pattern Recognition, 2003.

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8294958B2 (en) * 2006-05-04 2012-10-23 Isis Innovation Limited Scanner system and method for scanning providing combined geometric and photometric information
US20090080036A1 (en) * 2006-05-04 2009-03-26 James Paterson Scanner system and method for scanning
US9536362B2 (en) 2010-09-27 2017-01-03 Apple Inc. Polarized images for security
US10402624B2 (en) 2011-05-12 2019-09-03 Apple Inc. Presence sensing
US10372191B2 (en) 2011-05-12 2019-08-06 Apple Inc. Presence sensing
US11341291B2 (en) * 2011-11-18 2022-05-24 Nike, Inc. Generation of tool paths for shoe assembly
US9939803B2 (en) 2011-11-18 2018-04-10 Nike, Inc. Automated manufacturing of shoe parts
US11422526B2 (en) 2011-11-18 2022-08-23 Nike, Inc. Automated manufacturing of shoe parts
US20220245293A1 (en) * 2011-11-18 2022-08-04 Nike, Inc. Generation of tool paths for shoe assembly
US11346654B2 (en) 2011-11-18 2022-05-31 Nike, Inc. Automated 3-D modeling of shoe parts
US10552551B2 (en) * 2011-11-18 2020-02-04 Nike, Inc. Generation of tool paths for shore assembly
US11317681B2 (en) 2011-11-18 2022-05-03 Nike, Inc. Automated identification of shoe parts
US11641911B2 (en) 2011-11-18 2023-05-09 Nike, Inc. Automated identification and assembly of shoe parts
US11266207B2 (en) 2011-11-18 2022-03-08 Nike, Inc. Automated identification and assembly of shoe parts
US11763045B2 (en) * 2011-11-18 2023-09-19 Nike, Inc. Generation of tool paths for shoe assembly
US10667581B2 (en) 2011-11-18 2020-06-02 Nike, Inc. Automated identification and assembly of shoe parts
US10194716B2 (en) 2011-11-18 2019-02-05 Nike, Inc. Automated identification and assembly of shoe parts
US10671048B2 (en) 2011-11-18 2020-06-02 Nike, Inc. Automated manufacturing of shoe parts
US11879719B2 (en) 2011-11-18 2024-01-23 Nike, Inc. Automated 3-D modeling of shoe parts
US10393512B2 (en) 2011-11-18 2019-08-27 Nike, Inc. Automated 3-D modeling of shoe parts
US20140081441A1 (en) * 2011-11-18 2014-03-20 Nike, Inc. Generation Of Tool Paths For Shoe Assembly
US11238556B2 (en) 2012-10-29 2022-02-01 Digimarc Corporation Embedding signals in a raster image processor
US9684941B2 (en) 2012-10-29 2017-06-20 Digimarc Corporation Determining pose for use with digital watermarking, fingerprinting and augmented reality
US9990565B2 (en) 2013-04-11 2018-06-05 Digimarc Corporation Methods for object recognition and related arrangements
US10283005B2 (en) * 2013-10-24 2019-05-07 Huawei Device Co., Ltd. Image display method and apparatus
US20160267801A1 (en) * 2013-10-24 2016-09-15 Huawei Device Co., Ltd. Image display method and apparatus
US9639942B2 (en) * 2013-11-05 2017-05-02 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
US20150125034A1 (en) * 2013-11-05 2015-05-07 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
US9940756B2 (en) 2013-11-18 2018-04-10 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US9728012B2 (en) 2013-11-18 2017-08-08 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US9489765B2 (en) 2013-11-18 2016-11-08 Nant Holdings Ip, Llc Silhouette-based object and texture alignment, systems and methods
US10152634B2 (en) 2013-11-25 2018-12-11 Digimarc Corporation Methods and systems for contextually processing imagery
US10055881B2 (en) 2015-07-14 2018-08-21 Microsoft Technology Licensing, Llc Video imaging to assess specularity
CN106885514A (en) * 2017-02-28 2017-06-23 西南科技大学 A kind of Deep Water Drilling Riser automatic butt position and posture detection method based on machine vision
US11036844B2 (en) 2017-09-28 2021-06-15 Apple Inc. Wearable electronic device having a light field camera
US10817594B2 (en) 2017-09-28 2020-10-27 Apple Inc. Wearable electronic device having a light field camera usable to perform bioauthentication from a dorsal side of a forearm near a wrist
CN111006615A (en) * 2019-10-30 2020-04-14 浙江大学 Flat surface feature scanning imaging device and method

Also Published As

Publication number Publication date
JP2009288235A (en) 2009-12-10
JP5570126B2 (en) 2014-08-13
US20090297020A1 (en) 2009-12-03

Similar Documents

Publication Publication Date Title
US8107721B2 (en) Method and system for determining poses of semi-specular objects
US9392262B2 (en) System and method for 3D reconstruction using multiple multi-channel cameras
CN107525479B (en) Method for identifying points or regions on the surface of an object, optical sensor and storage medium
Sadlo et al. A practical structured light acquisition system for point-based geometry and texture
US7711182B2 (en) Method and system for sensing 3D shapes of objects with specular and hybrid specular-diffuse surfaces
US10041788B2 (en) Method and device for determining three-dimensional coordinates of an object
US9207069B2 (en) Device for generating a three-dimensional model based on point cloud data
US20110228052A1 (en) Three-dimensional measurement apparatus and method
KR102424135B1 (en) Structured light matching of a set of curves from two cameras
JP2007206797A (en) Image processing method and image processor
EP3069100B1 (en) 3d mapping device
US20190188871A1 (en) Alignment of captured images by fusing colour and geometrical information
US20080319704A1 (en) Device and Method for Determining Spatial Co-Ordinates of an Object
US11640673B2 (en) Method and system for measuring an object by means of stereoscopy
Ferstl et al. Learning Depth Calibration of Time-of-Flight Cameras.
Karami et al. Investigating 3D reconstruction of non-collaborative surfaces through photogrammetry and photometric stereo
US7430490B2 (en) Capturing and rendering geometric details
US11803982B2 (en) Image processing device and three-dimensional measuring system
JP4379626B2 (en) Three-dimensional shape measuring method and apparatus
CN110462688B (en) Three-dimensional contour determination system and method using model-based peak selection
Shi et al. Large-scale three-dimensional measurement based on LED marker tracking
JP6486083B2 (en) Information processing apparatus, information processing method, and program
Santo et al. Light structure from pin motion: Simple and accurate point light calibration for physics-based modeling
TWI480507B (en) Method and system for three-dimensional model reconstruction
Walter et al. Enabling multi-purpose mobile manipulators: Localization of glossy objects using a light-field camera

Legal Events

Date Code Title Description
AS Assignment

Owner name: MITSUBISHI ELECTRIC RESEARCH LABORATORIES, INC.,MA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BEARDSLEY, PAUL A.;BAECHER, MORITZ;SIGNING DATES FROM 20080529 TO 20080530;REEL/FRAME:021069/0255

Owner name: MITSUBISHI ELECTRIC RESEARCH LABORATORIES, INC., M

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BEARDSLEY, PAUL A.;BAECHER, MORITZ;SIGNING DATES FROM 20080529 TO 20080530;REEL/FRAME:021069/0255

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12