GB2312582A - Insertion of virtual objects into a video sequence

Insertion of virtual objects into a video sequence

Info

Publication number
GB2312582A
GB2312582A
Authority
GB
United Kingdom
Prior art keywords
frame
feature points
virtual object
points
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9601098A
Other versions
GB9601098D0 (en)
Inventor
Avi Sharir
Michael Tamir
Itzhak Wilf
Shmuel Peleg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orad Hi Tech Systems Ltd
Original Assignee
Orad Hi Tech Systems Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orad Hi Tech Systems Ltd filed Critical Orad Hi Tech Systems Ltd
Priority to GB9601098A priority Critical patent/GB2312582A/en
Publication of GB9601098D0 publication Critical patent/GB9601098D0/en
Priority to PCT/GB1997/000029 priority patent/WO1997026758A1/en
Priority to AU13873/97A priority patent/AU1387397A/en
Priority to EP97900282A priority patent/EP0875115A1/en
Publication of GB2312582A publication Critical patent/GB2312582A/en
Withdrawn legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/2723Insertion of virtual advertisement; Replacing advertisements physical present in the scene by virtual advertisement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/802D [Two Dimensional] animation, e.g. using sprites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • G11B27/32Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Processing Or Creating Images (AREA)

Description

2312582
METHOD AND APPARATUS FOR INSERTION OF VIRTUAL OBJECTS INTO A VIDEO SEQUENCE

The present invention relates to insertion of virtual objects into video sequences, and in particular to sequences which have already been generated.
Computer generated (CG) images and characters are widely used in feature films and commercials. They make possible special effects achievable only with CG content, as well as the distinctive look of a cartoon character. While in many instances the complete picture is computer generated, in other instances CG characters are to be inserted into a live image sequence taken by a physical camera.
The prior art describes how CG objects are inserted into a background photograph for the purpose of architectural simulation [E. Nakamae et al., A montage method: the overlaying of the computer generated images onto a background photograph, ACM Trans. on Graphics, Vol. 20, No. 4, 1986 (207-241)]. That method solves for the viewpoint from a set of image points matched with their geographical map locations. In other practical situations, no measured three-dimensional data can be associated with image points. Insertion is therefore done manually, using a modeller to transform the CG object until it is registered with the image.
Consider the automatic insertion of three-dimensional virtual objects into image sequences. While manual techniques are suitable for a single picture, they pose practical problems when processing a sequence of images:
A typical shot of a few seconds involves hundreds of images, making the manual work tedious and error-prone.
Independently inserting the CG objects at each image might introduce spatial jitter over time, although the insertion may look perfect at each frame.
In a real motion picture, the apparent motion of the objects and the characters is a combination of the objects' ego-motion in a 3D world and the motion of the camera.
For CG characters, the ego-motion is determined by the animator.
Then, camera motion has to be applied to the characters.
One possible solution is to use motion control systems in shooting the live footage. In such systems, the motion of the camera is computer controlled and recorded. These records are then used in a straightforward manner to render the CG characters in synchronization with camera motion.
However, in many practical cases the use of motion control systems is inconvenient.
If a known 3D object is present in the sequence, it may be used to solve for camera motion by matching image features to the object's model. If this is not the case, we may try to solve for the structure and the motion concurrently [J. Weng et al., Error Analysis of Motion Parameter Estimation from Image Sequences, First Int. Conf. on Computer Vision, 1987, pp. 703-707]. These non-linear methods are inaccurate, slowly converging and computationally unstable.
One may note that for the application at hand, we have no use for an explicit camera model other than for projecting the virtual object, at each view of the sequence, using the corresponding camera model. Thus, in the present invention we suggest merging the 3D estimation and projection stages into one process which predicts the image-space motion of the virtual object from the image-space motion of tracked features.
The present application provides a method and apparatus for insertion of CG characters into an existing video sequence, independent of motion control records or a known pattern.
According to the present invention there is provided a method of insertion of virtual objects into a video sequence consisting of a plurality of video frames, comprising the steps of:
i. detecting in one frame (Frame A) of the video sequence a set of feature points;
ii. detecting in another frame (Frame B) of the video sequence the set of feature points;
iii. detecting in each frame other than frame A or frame B at least a sub-set of the feature points;
iv. positioning a virtual object in a defined position in frame A;
v. positioning the virtual object in the defined position in frame B;
vi. selecting one or more reference points for the virtual object;
vii. computing the position of the reference points in each frame of the sequence; and
viii. inserting the virtual object in each frame in the position determined by the computation.
According to a further aspect of the present invention there is provided apparatus for insertion of virtual objects into a video sequence consisting of a plurality of video frames, said apparatus including:
i. means for detecting in one frame (Frame A) a set of feature points;
ii. means for detecting in another frame (Frame B) the set of feature points;
iii. means for detecting in each frame other than frame A or frame B at least a sub-set of the feature points;
iv. means for positioning a virtual object in a defined position in frame A;
v. means for positioning the virtual object in the defined position in frame B;
vi. means for selecting one or more reference points for the virtual object;
vii. means for computing the position of the reference points in each frame of the sequence; and
viii. means for inserting the virtual object in each frame in the position determined by the computation.
In a preferred embodiment of the present invention, the CG character is constrained relative to a cube or other regularly shaped box, the cube representing the virtual object. The CG character is thereby able to be animated.
The present invention will now be described, by way of example with reference to the accompanying drawings, in which:
Figure 1 shows an exemplary video sequence, illustrating in Figure 1A a first frame of the video sequence; in Figure 1B an intermediate frame (K) of the video sequence; in Figure 1C a last frame of the video sequence; and in Figure 1D a virtual object to be inserted into the video sequence of Figures 1A to 1C;
Figure 2 shows apparatus according to the present invention;
Figure 3 shows a flow diagram illustrating the selection and storage of feature points;
Figure 4 shows a flow diagram illustrating the positioning of the virtual object in the first, last and intermediate frames;
Figure 5 shows a cube (as defined) enclosing a three-dimensional moving virtual character; and
Figure 6 shows a flow diagram illustrating the solution of the camera transformation corresponding to a frame.
The present invention is related to the investigation of properties of feature points in three perspective views. As an example, consider the concept of the fundamental matrix (FM) [R. Deriche et al., Robust recovery of the epipolar geometry for an uncalibrated stereo rig, Lecture Notes in Computer Science, Vol. 800, Computer Vision - ECCV 94, Springer-Verlag, Berlin Heidelberg, 1994, pp. 567-576]. Given two corresponding points q and q' in two views (in homogeneous coordinates), we can write q'^T F q = 0, where the 3x3 matrix F which describes this correspondence is known as the fundamental matrix. Given 8 or more matched point pairs, we can in general determine a unique solution for F, defined up to a scale factor.
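By way of illustration only (the patent itself does not prescribe an estimation method), the following is a minimal numpy sketch of recovering F from eight or more matched point pairs using the standard normalized eight-point algorithm; the helper name estimate_fundamental is ours:

```python
import numpy as np

def estimate_fundamental(pts1: np.ndarray, pts2: np.ndarray) -> np.ndarray:
    """Estimate F such that q'^T F q = 0 (q in view 1, q' in view 2).

    pts1, pts2: (N, 2) arrays of corresponding pixel coordinates, N >= 8.
    """
    def normalize(pts):
        # Translate the centroid to the origin and scale so that the
        # mean distance from the origin is sqrt(2) (Hartley normalization).
        centroid = pts.mean(axis=0)
        scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
        T = np.array([[scale, 0.0, -scale * centroid[0]],
                      [0.0, scale, -scale * centroid[1]],
                      [0.0, 0.0, 1.0]])
        homog = np.column_stack([pts, np.ones(len(pts))])
        return (T @ homog.T).T, T

    x1, T1 = normalize(pts1)
    x2, T2 = normalize(pts2)
    # Each correspondence contributes one row of the linear system A f = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization; F is defined only up to scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

The rank-2 step is what makes the result a valid fundamental matrix: it guarantees that all epipolar lines F q pass through a single epipole.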
Now consider three images, with two corresponding pixels m1 and m2 in images 1 and 2. Where should the corresponding pixel m3 be in image 3? Let F13 be the fundamental matrix of images 1 and 3, and let F23 be the fundamental matrix of images 2 and 3. Then m3 is given by the intersection of the epipolar lines:
m3 = F13 m1 × F23 m2

[O. Faugeras and L. Robert, What can two images tell us about a third one?, Lecture Notes in Computer Science, Vol. 800, Computer Vision - ECCV 94, Springer-Verlag, Berlin Heidelberg, 1994, pp. 485-492]. The fundamental matrix is used later in the description of the embodiment of the invention. However, the invention is not limited to this specific implementation. Other formulations could be used, for example the concept of tri-linearity (the trilinear tensor, TT) [A. Shashua and M. Werman, Trilinearity of three perspective views and its associated tensor, IEEE 5th Int. Conf. on Computer Vision, 1995, pp. 920-925].
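A minimal sketch of this epipolar transfer, reusing the estimate_fundamental sketch above; pixels are represented as homogeneous 3-vectors, and the intersection of two image lines is their cross product in homogeneous coordinates:

```python
import numpy as np

def transfer_point(F13: np.ndarray, F23: np.ndarray,
                   m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """Predict the view-3 image of a point seen at m1 (view 1) and m2 (view 2).

    m1, m2: homogeneous 3-vectors, e.g. np.array([x, y, 1.0]).
    F13, F23: fundamental matrices satisfying m3^T F13 m1 = 0 and
    m3^T F23 m2 = 0, as returned by estimate_fundamental(pts_view1,
    pts_view3) and estimate_fundamental(pts_view2, pts_view3).
    """
    line1 = F13 @ m1             # epipolar line of m1 in view 3
    line2 = F23 @ m2             # epipolar line of m2 in view 3
    m3 = np.cross(line1, line2)  # homogeneous intersection of the two lines
    return m3 / m3[2]            # scale so the third coordinate is 1
```

The construction degenerates when the two epipolar lines are nearly parallel (the three camera centres and the point close to coplanar); a production implementation would detect and handle that case.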
Specific embodiments of the invention are now described with reference to the accompanying figures.
With reference now to Figure 1, Figure 1A shows a first video frame, which is assumed to be the first frame of a sequence selected as now described.
The sequence can be selected manually or automatically. For each sequence either the operator or an automatic feature selection system searches for a number of feature points in both a first frame (Frame 1, Figure 1A) and a last frame (Frame N, Figure 1C). In any intermediate frame, such as Frame K (Figure 1B), a sub-set of the points must be visible. In a preferred embodiment there should be at least 8 (eight) feature points in all intermediate frames, since eight points suffice both for the FM method, which requires at least 8 points along three frames, and for the TT method, which requires at least 7 points along three frames.
In frame 1 (Figure 1A) feature points A-L (12 points) are recognised. In Figure 1B, where the camera has tilted and possibly zoomed, point B is missing. In Figure 1C all 12 points are again visible.
It is noted that in Figure 1A a chair M,N is visible; it is also visible in Figure 1B but not in Figure 1C. This chair M,N is therefore not used for calculation.
An object (Figure 1D) is computer generated and in this example comprises a cube 12 (XYZW). The cube 12 is to be positioned on a shelf 14 of a bookcase 16.
In the first scene of the video sequence a chair 18 is shown; although the chair 18 is present in the intermediate frame (K) of Figure 1B, it is not present in the last frame of the sequence. Thus it is not used to define points. Similarly, cone 20 is present in the last frame but not in the first or Kth frame, so this cone 20 is not used; only the bookcase 16 is used.
In Figures 1A and 1C all corners of the shelves are visible (A-L).
In Figure 1B only 11 out of 12 corners are visible, since corner B is missing. However, in all video frames at least a minimum number of the feature points A-L are visible. In a preferred embodiment this minimum number is eight, and these must be visible in all frames.
With reference now to Figure 2, the VDU 22 receives a video sequence from VCR 24. The video controller 26 can control VCR 24 to evaluate a sequence of video shots, as in Figures 1A to 1C, to find a sequence having the desired number of feature points. Such a sequence could be very long for a fairly static camera or short for a fast panning camera.
The feature points may be selected manually, for example with mouse 28, or automatically. Preferably, as stated above, at least eight feature points are selected to appear in all frames of a sequence. When the controller 26, in conjunction with processor 30, detects that fewer than eight points remain, the video sequence is terminated. If further insertion of an object is required, a continuing further video sequence is generated using the same principles.
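A minimal sketch of such automatic selection and termination, here using OpenCV's corner detector and pyramidal Lucas-Kanade tracker purely as an example (the patent does not name a particular detector or tracker):

```python
import cv2
import numpy as np

def track_sequence(frames, min_points=8):
    """frames: list of grayscale images. Returns per-frame (ids, points)."""
    pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
    ids = np.arange(len(pts))              # stable identity per feature
    tracks = [(ids, pts.reshape(-1, 2))]
    prev = frames[0]
    for frame in frames[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        ok = status.ravel() == 1           # keep only successfully
        pts, ids = pts[ok].reshape(-1, 1, 2), ids[ok]  # tracked points
        if len(ids) < min_points:          # fewer than eight points left:
            break                          # terminate the sequence here
        tracks.append((ids, pts.reshape(-1, 2)))
        prev = frame
    return tracks
```

Carrying an identity array alongside the points keeps the correspondences ordered across frames, which the fundamental-matrix computation below relies on.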
Assuming therefore that the sequence of video frames 1 to N has been selected, a computer generated (CG) object 12 is created by generator 32. The CG object 12 is then positioned as desired in the first and last frames of the sequence. The orientation of the object in the first and last frames is set manually such that the object appears naturally correct in both frames. The CG object 12 is then automatically positioned in all intermediate frames by the processors 30 and 34, as follows, with reference to Figures 3 and 4.
From a start 40 the processor searches for feature points in a first frame - step 42 - and continues tracking these features until the sequence is lost - step 44. The feature positions are then stored in store 36 - step 46. The positions of these features in all intermediate frames are then stored in store 36 - step 48.
The CG object 12 is then generated - steps 50, 52 of Figure 4 - and positioned on the shelf 14 in the first frame of the video sequence - step 54.
One or more reference points are selected for the CG object - step 56. These could be four non-coplanar corners of the cube 12, or other suitable points on an irregularly shaped object.
The positions of the reference points in the first frame are stored in store 38 - step 58.
The CG object is then positioned in the last frame of the sequence - step 60 - and the position of the reference points for this position of the CG object is stored in store 38 - step 62.
Using both processor 30 and processor 34, the positions of the reference points for the object 12 are calculated for each intermediate frame i by calculating the FM or the TT using the triplets of feature points in the first frame, the last frame and frame i - step 64. The location of the reference points for the object in frame i is computed from the locations of the corresponding object points in the first and in the last frames, as well as the FM or the TT, as described before.
From these positions the virtual CG object 12 is inserted into each frame in accordance with the calculated positions of the reference points - step 66. The insertion is carried out by controller 26 under the control of inputs from processor 34 and from graphics generator 32, which is also controlled by processor 34. Alternatively, the TT of the first, last and intermediate frames can be computed using at least 7 corresponding feature points in the three frames.
The process described in Figures 3 and 4 comprises a virtual point prediction using the fundamental matrix or the TT.
In Figures 3 and 4 we:
1. Position the virtual object in the first frame (1) and the last frame (N).
2. For each frame K except the first and last frames:
2.1 Use at least 8 corresponding feature points to compute the fundamental matrix F1K between the first and the intermediate frame; use at least 8 corresponding feature points to compute the fundamental matrix FNK between the last and the intermediate frame.
2.2 For each reference point (to be predicted) whose location in the first frame (as determined by process 52) is m1 and whose location in frame N is mN, compute the lines F1K m1 and FNK mN. Intersect the lines to obtain mK.
Alternatively, the location mK of the reference point can be computed using the TT and its locations in the first and in the last frames. A sketch of the FM variant of this procedure is given below.
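A minimal sketch of steps 1 to 2.2, reusing the estimate_fundamental and transfer_point sketches from earlier; for simplicity it assumes the same ordered set of at least eight feature points is visible in every frame, whereas the patent only requires a sub-set in intermediate frames:

```python
import numpy as np

def predict_reference_points(feature_tracks, ref_first, ref_last):
    """Predict reference-point positions in every frame of the sequence.

    feature_tracks: list of (N_feat, 2) arrays, one per frame, holding the
        tracked feature points in a fixed order (at least 8 points).
    ref_first, ref_last: (N_ref, 3) homogeneous positions of the virtual
        object's reference points as placed manually in frames 1 and N.
    Returns a list of (N_ref, 3) predicted positions, one per frame.
    """
    first, last = feature_tracks[0], feature_tracks[-1]
    out = [ref_first]
    for k in range(1, len(feature_tracks) - 1):
        inter = feature_tracks[k]
        F1K = estimate_fundamental(first, inter)  # first frame -> frame K
        FNK = estimate_fundamental(last, inter)   # last frame  -> frame K
        # Step 2.2: transfer each reference point into frame K by
        # intersecting its two epipolar lines there.
        out.append(np.array([transfer_point(F1K, FNK, m1, mN)
                             for m1, mN in zip(ref_first, ref_last)]))
    out.append(ref_last)
    return out
```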
If, as shown by way of a preferred example, the CG object is a cube or other regular solid shape (hereinafter referred to as a cube), there is the possibility of providing an animated figure which is associated with the cube. The figure may be completely within the cube or may be larger than the cube but constrained in its movement in relation to the cube.
Since the cube is positioned relative to the video sequence, the animated figure will also be positioned. Thus, if for example the cube were made a rectangular box the size of shelf 14, a rabbit could be made to dance along the shelf.
It may be seen, therefore, that the example described in Figure 4 is a complete recipe for wire-frame virtual objects, since it allows the position of all vertices to be computed at each intermediate frame.
However, this solution is not complete for most practical cases, where surface rendering and object ego-motion are required. For these cases we must derive a three-dimensional virtual object description at each frame.
We now describe how we deal with surface rendering and ego motion.
In step 54, when we position the virtual object, the transformation applied to the model in step 52 can be stored; the inverse of this transformation constitutes a camera transformation, due to the duality between the camera and object motions.
Therefore, when we generate the virtual object in step 52, we prefer to generate it relative to a rectangular bounding box (see Figure 5); the vertices of this bounding box can then be used as the reference points in step 64.
Given the position of the reference points in the intermediate frames, the camera transformation corresponding to each frame can be solved as indicated in Figure 6: in step 68 the model coordinates of the virtual object's reference points from step 52 of Figure 4 are combined with the image coordinates of the reference points in the intermediate frame (step 70) to solve for the camera transformation (step 72), which is then stored in store 35 - step 74.
Solving for the camera transformation from the image coordinates of reference points is described in [C.K. Wu et al., Acquiring 3-D spatial data of a real object, Computer Vision, Graphics and Image Processing 28, 126-133 (1984)].
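By way of illustration (the cited Wu et al. method is not reproduced here), a minimal direct linear transform (DLT) sketch that solves for a 3x4 projection matrix from the model coordinates of the bounding-box reference points and their computed image coordinates; six or more non-coplanar correspondences suffice, so the eight box corners are enough:

```python
import numpy as np

def solve_camera(model_pts: np.ndarray, image_pts: np.ndarray) -> np.ndarray:
    """model_pts: (N, 3) object-space points; image_pts: (N, 2) pixels."""
    rows = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        # Each correspondence contributes two rows of the homogeneous
        # system A p = 0, where p is the flattened 3x4 projection matrix.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 4)   # solution, defined up to scale

# The animated character's model-space vertices can then be projected with
# the per-frame matrix P so that the character moves with the camera, e.g.:
#   uvw = P @ np.append(vertex, 1.0)
#   pixel = uvw[:2] / uvw[2]
```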
Now, with reference to Figure 5, this transformation is applied to the actual object, so that if we allow the virtual character 76 to move relative to the bounding box 78 in the object coordinate system, then we take the animated model (character) at each intermediate frame and further transform it by the camera transformation computed as described above.
The animated model will therefore move naturally, and the correct perspective etc. will be provided by the camera transformation calculated as described above.
An alternative method of inserting an object having ego-motion is to generate it manually only in the coordinate systems of frame A and frame B. It can then be manually adjusted by an animator for correct appearance in both images. The entire object can then be reprojected into all other frames by using its locations in frames A and B, and the FM or TT methods.

Claims (13)

1. A method of insertion of virtual objects into a video sequence consisting of a plurality of video frames, comprising the steps of:
i. detecting in one frame (frame A) of the video sequence a set of feature points;
ii. detecting in another frame (frame B) of the video sequence the set of feature points;
iii. detecting in each frame other than frame A or frame B at least a sub-set of the feature points;
iv. positioning a virtual object in a defined position in frame A;
v. positioning the virtual object in the defined position in frame B;
vi. selecting one or more reference points for the virtual object;
vii. computing the position of the reference points in each frame of the sequence; and
viii. inserting the virtual object in each frame in the position determined by the computation.
2. A method as claimed in claim 1 in which the computation of the position of the reference points (step vii) is carried out by calculation of the positions of the feature points in each intermediate frame and by geometric transformation of the position of the reference points in relation to the feature points.
3. A method as claimed in claim 1 or claim 2 in which the virtual object is represented by a box, the reference points being corners of the box.
4. A method as claimed in claim 3 in which a virtual character is positioned within or in fixed relationship to the box.
5. A method as claimed in claim 4 in which the virtual character is animated.
6. A method as claimed in any one of claims 1 to 5 in which the set of feature points is selected automatically.
7. A method as claimed in any one of claims 1 to 6 in which the computation of the position of the feature points is carried out by tracking of each feature point on a frame-by-frame basis.
8. Apparatus for insertion of virtual objects into a video sequence consisting of a plurality of video frames, said apparatus including:
i. means for detecting in one frame (frame A) a set of feature points;
ii. means for detecting in another frame (frame B) the set of feature points;
iii. means for detecting in each frame other than frame A or frame B at least a sub-set of the feature points;
iv. means for positioning a virtual object in a defined position in frame A;
v. means for positioning the virtual object in the defined position in frame B;
vi. means for selecting one or more reference points for the virtual object;
vii. means for computing the position of the reference points in each frame of the sequence; and
viii. means for inserting the virtual object in each frame in the position determined by the computation.
9. Apparatus as in claim 8 including means for representing the virtual object.
10. Apparatus as claimed in claim 9 including means for positioning a virtual character within a rectangular box.
11. Apparatus as claimed in claim 10 including means for animating the virtual character.
12. Apparatus as claimed in claim 8 including means for automatically selecting the set of feature points.
13. Apparatus as claimed in claim 8 in which the means for computation of the position of the feature points comprises means for tracking of each point on a frame-by-frame basis.
GB9601098A 1996-01-19 1996-01-19 Insertion of virtual objects into a video sequence Withdrawn GB2312582A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB9601098A GB2312582A (en) 1996-01-19 1996-01-19 Insertion of virtual objects into a video sequence
PCT/GB1997/000029 WO1997026758A1 (en) 1996-01-19 1997-01-07 Method and apparatus for insertion of virtual objects into a video sequence
AU13873/97A AU1387397A (en) 1996-01-19 1997-01-07 Method and apparatus for insertion of virtual objects into video sequence
EP97900282A EP0875115A1 (en) 1996-01-19 1997-01-07 Method and apparatus for insertion of virtual objects into a video sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9601098A GB2312582A (en) 1996-01-19 1996-01-19 Insertion of virtual objects into a video sequence

Publications (2)

Publication Number Publication Date
GB9601098D0 GB9601098D0 (en) 1996-03-20
GB2312582A true GB2312582A (en) 1997-10-29

Family

ID=10787260

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9601098A Withdrawn GB2312582A (en) 1996-01-19 1996-01-19 Insertion of virtual objects into a video sequence

Country Status (4)

Country Link
EP (1) EP0875115A1 (en)
AU (1) AU1387397A (en)
GB (1) GB2312582A (en)
WO (1) WO1997026758A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2351199A (en) * 1996-09-13 2000-12-20 Pandora Int Ltd Automatic insertion of computer generated image in video image.
US6525765B1 (en) 1997-04-07 2003-02-25 Pandora International, Inc. Image processing
US6965397B1 (en) 1999-11-22 2005-11-15 Sportvision, Inc. Measuring camera attitude

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7295752B1 (en) 1997-08-14 2007-11-13 Virage, Inc. Video cataloger system with audio track extraction
US6360234B2 (en) 1997-08-14 2002-03-19 Virage, Inc. Video cataloger system with synchronized encoders
US6463444B1 (en) 1997-08-14 2002-10-08 Virage, Inc. Video cataloger system with extensibility
US6567980B1 (en) 1997-08-14 2003-05-20 Virage, Inc. Video cataloger system with hyperlinked output
US7230653B1 (en) 1999-11-08 2007-06-12 Vistas Unlimited Method and apparatus for real time insertion of images into video
US7260564B1 (en) 2000-04-07 2007-08-21 Virage, Inc. Network video guide and spidering
US8171509B1 (en) 2000-04-07 2012-05-01 Virage, Inc. System and method for applying a database to video multimedia
US7206434B2 (en) 2001-07-10 2007-04-17 Vistas Unlimited, Inc. Method and system for measurement of the duration an area is included in an image stream
US10089550B1 (en) 2011-08-17 2018-10-02 William F. Otte Sports video display

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991015921A1 (en) * 1990-04-11 1991-10-17 Multi Media Techniques Process and device for modifying a zone of successive images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL108957A (en) * 1994-03-14 1998-09-24 Scidel Technologies Ltd System for implanting an image into a video stream
IL109487A (en) * 1994-04-29 1996-09-12 Orad Hi Tec Systems Ltd Chromakeying system
US5436672A (en) * 1994-05-27 1995-07-25 Symah Vision Video processing system for modifying a zone in successive images

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991015921A1 (en) * 1990-04-11 1991-10-17 Multi Media Techniques Process and device for modifying a zone of successive images

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2351199A (en) * 1996-09-13 2000-12-20 Pandora Int Ltd Automatic insertion of computer generated image in video image.
GB2351199B (en) * 1996-09-13 2001-04-04 Pandora Int Ltd Image processing
US6525765B1 (en) 1997-04-07 2003-02-25 Pandora International, Inc. Image processing
US6965397B1 (en) 1999-11-22 2005-11-15 Sportvision, Inc. Measuring camera attitude

Also Published As

Publication number Publication date
AU1387397A (en) 1997-08-11
EP0875115A1 (en) 1998-11-04
GB9601098D0 (en) 1996-03-20
WO1997026758A1 (en) 1997-07-24

Similar Documents

Publication Publication Date Title
Kanade et al. Virtualized reality: Concepts and early results
US6084979A (en) Method for creating virtual reality
Guillou et al. Using vanishing points for camera calibration and coarse 3D reconstruction from a single image
Pollefeys et al. Visual modeling with a hand-held camera
US6124864A (en) Adaptive modeling and segmentation of visual image streams
Pollefeys et al. From images to 3D models
US6266068B1 (en) Multi-layer image-based rendering for video synthesis
EP0903695B1 (en) Image processing apparatus
US7209136B2 (en) Method and system for providing a volumetric representation of a three-dimensional object
GB2312582A (en) Insertion of virtual objects into a video sequence
JP2000268179A (en) Three-dimensional shape information obtaining method and device, two-dimensional picture obtaining method and device and record medium
WO2003036384A2 (en) Extendable tracking by line auto-calibration
Rander A multi-camera method for 3D digitization of dynamic, real-world events
US6795090B2 (en) Method and system for panoramic image morphing
US5793372A (en) Methods and apparatus for rapidly rendering photo-realistic surfaces on 3-dimensional wire frames automatically using user defined points
Kanade et al. Virtualized reality: Being mobile in a visual scene
Kanade et al. Virtualized reality: perspectives on 4D digitization of dynamic events
Kang et al. Tour into the video: image-based navigation scheme for video sequences of dynamic scenes
Inamoto et al. Free viewpoint video synthesis and presentation of sporting events for mixed reality entertainment
KR100466587B1 (en) Method of Extrating Camera Information for Authoring Tools of Synthetic Contents
Kim et al. Digilog miniature: real-time, immersive, and interactive AR on miniatures
Mayer et al. Multiresolution texture for photorealistic rendering
JPH10111934A (en) Method and medium for three-dimensional shape model generation
Chan et al. A panoramic-based walkthrough system using real photos
Blanc et al. Towards fast and realistic image synthesis from real views

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)