CA2248721C - A computer system and process for defining and manufacturing images using structured objects with variable edge characteristics - Google Patents


Info

Publication number
CA2248721C
CA2248721C
Authority
CA
Canada
Prior art keywords
point
distance
boundary
anchor
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002248721A
Other languages
French (fr)
Other versions
CA2248721A1 (en)
Inventor
Perry S. Kivolowitz
Sze-Ping Wong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avid Technology Inc
Original Assignee
Avid Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/613,283 (US 5,754,180 A)
Application filed by Avid Technology Inc
Publication of CA2248721A1
Application granted
Publication of CA2248721C
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image representation using a structured object may be readily manipulated by a computer. This image representation can represent more complex images where the structured object defines a boundary for which image values near the boundary in a region are defined by a function called an edge characteristic function. The edge characteristic function is applied by determining, for each point in the region, a point on the boundary to which the given point is closest, herein called an anchor, and a distance to that point. Different edge characteristic functions may be assigned to different points along the boundary. In one embodiment, edge characteristic functions are assigned to selected characteristic points on the boundary. These edge characteristic functions may be interpolated to provide edge characteristic functions for remaining points along the boundary. Each point not on the boundary is processed according to the edge characteristic function of the point on the boundary to which it is closest and according to the distance to that point. An edge characteristic function may also be a function of other factors. This image representation provides many benefits including the ability to generate more complex images which can then be manipulated more easily or be converted from different image formats. One form of manipulation is interpolation over time.
In image compositing, this image representation also allows for easy attenuation of various effects over regions in a controlled way.

Description

A COMPUTER SYSTEM AND PROCESS FOR DEFINING AND
MANUFACTURING IMAGES USING STRUCTURED OBJECTS
WITH VARIABLE EDGE CHARACTERISTICS
Field of the Invention
The invention is related to computer systems and computer-implemented processes for defining and manipulating a region of interest in a coordinate space, such as for defining structured objects in two- or three-dimensional space using a computer. In particular, this invention is useful for generating a control image for use in compositing images.
Background of the Invention
In a computer system, there are many ways to represent still image data, which generally fall into one of two categories. In a first category of image representation, called a bit map, the value of each individual picture element, or pixel, of the image is stored. The image representation may be compressed or uncompressed. Bit-mapped image representations generally do not require much computation by a computer to display, but may occupy a substantial amount of memory space. Also, such image representations are generally not flexible and are difficult to scale or otherwise modify without generating significant visual artifacts. A
second category of image representation is a structured object. A structured object defines an image, typically in mathematical terms, according to the structure of objects contained within the image. For example, an image of a cube may be represented by the positions of the vertices of the cube with respect to some reference point. Such structured objects are commonly found in three-dimensional modeling systems, desktop publishing systems, etc.
One area where image representation is important is in image compositing. One type of image compositing, called alpha compositing, involves combining two images to generate a third image, where the extent of contribution of each of the combined images in the third image is defined by a control image. An example of an alpha compositing system is found in U.S. patent no. 4,602,286 to Paul Kellar.
As described in the Kellar patent, the control image is defined as a bit-mapped picture,
in which each pixel has a value. The value of each pixel represents an alpha value (α) on the scale of zero (0) to one (1). The alpha value (αx,y) for each pixel (x,y), herein called a control value, in the control image is used in a standard mapping function to combine the input images.
For example, for each pixel Ax,y in the first input image (A) and pixel Bx,y in the second input image (B), where A and B are images of x by y pixels, the pixel Cx,y in the output image C is given by the formula:
Cx,y = αx,y Ax,y + (1 - αx,y) Bx,y     (1)
Such compositing appears in video programs as special effects and transitions such as wipes, fades, etc. One difficulty with using a bit-mapped image as a control image is that alpha compositing often involves an interpolation over time, such as in transitions between cuts in a motion video sequence. That is, a control image used to combine one pair of frames to produce one frame in the motion video sequence is often different from control images used to produce other frames in the sequence. Additionally, if a bit-mapped control image created at a low resolution is then scaled to a higher resolution, quantization found in the low resolution image produces artifacts in the higher resolution image.
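As a concrete illustration of equation (1), a minimal per-pixel blend might look like the following C sketch; the single-channel, 8-bit image layout and the function name are illustrative assumptions, not anything prescribed by the patent.

```c
#include <stdint.h>

/* Hypothetical per-pixel alpha blend implementing equation (1):
 * C = alpha*A + (1 - alpha)*B, with alpha in [0, 1].
 * Images are assumed to be simple arrays of 8-bit grey values. */
void alpha_composite(const uint8_t *A, const uint8_t *B, const float *alpha,
                     uint8_t *C, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int i = y * width + x;
            float a = alpha[i];                      /* control value for pixel (x,y) */
            C[i] = (uint8_t)(a * A[i] + (1.0f - a) * B[i] + 0.5f);
        }
    }
}
```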
Another way to define a control image is by using a structured object. A
structured object defines a control image, typically in mathematical terms, by the structure of objects within the image. This definition may be a mathematical formula, such as an equation representing a curve, or geometric, such as coordinates of points and vectors representing directions and magnitudes. The structured object definition is interpreted to obtain control values for each pixel to be generated in a final image. When taken together, these control values are similar to the set of control values obtained using a bit-mapped image.
One benefit of structured definitions is that they can be scaled, animated or interpolated over time more easily than static definitions, such as a bit-mapped image.
Another advantage of using structured objects is that the computational determination of whether one part of the image is either inside or outside a boundary of an object is relatively easy.
Structured object definitions also are more amenable to scaling and simple manipulation like rotations,
which simplifies the process of making some effects. However, structured objects to date have been used generally for simple operations on simple objects.
In image compositing, both bit-mapped images and structured object definitions generally are used for generating a control image that typically has control values of either 0 or 1,
where the changes between pixels in the image define edges. Usually the edges or boundaries are processed to provide a blending characteristic such that changes between source content in the resultant image do not occur disjointly over only one pixel. Rather, the control image is processed such that all edges have a gradual change, typically a simple greyscale ramp. In the Kellar patent, this greyscale ramp is provided because of an artifact in processing input from a pen and tablet input device. Using structured objects, most prior art systems simply adjust values before and after an edge in a scanline.
A general aim of the invention is to provide an image representation, for a computer, which can represent more complex images and which is also readily manipulable.
Summary of the Invention
An image representation using a structured object may be readily manipulated by a computer. This image representation can represent more complex images where the structured object defines a boundary for which image values near the boundary in a region are defined by a function called an edge characteristic function. The edge characteristic function is applied by determining, for each point in the region, a point on the boundary to which the given point is closest, herein called an anchor, and a distance to that point.
Different edge characteristic functions may be assigned to different points along the boundary. In one embodiment, edge characteristic functions are assigned to selected characteristic points on the boundary. These edge characteristic functions may be interpolated to provide edge characteristic functions for remaining points along the boundary. Each point not on the boundary is processed according to the edge characteristic function of the point on the boundary to which it is closest and according to the distance to that point. An edge characteristic function may also be a function of other factors.
This image representation provides many benefits including the ability to generate more complex images which can then be manipulated more easily or be converted from different image formats. One form of manipulation is interpolation over time. In image compositing, this image representation also allows for easy attenuation of various effects over regions in a controlled way. This method is particularly useful in image compositing and for modeling data which may vary over a number of parameters.

Accordingly, in one aspect of the present invention, there is provided a method for generating an image, comprising the steps of: (a) defining a structured object having a boundary of a plurality of points; (b) assigning an edge characteristic function to each point along the boundary, wherein the edge characteristic functions of at least two points are different, and wherein each of the edge characteristic functions defines an edge characteristic which defines image values near the boundary;
(c) for each picture element in the image, determining an anchor on the boundary and a signed distance to the anchor wherein the anchor is a point on the boundary to which the picture element is closest; (d) for each picture element in the image, selecting the edge characteristic function assigned to the anchor determined for the picture element;
and (e) for each picture element in the image, applying the selected edge characteristic function to the signed distance from the picture element to the anchor to determine image values for the picture element.
In another aspect, there is provided a computer readable medium having computer readable instructions stored thereon for execution by one or more computers, that when executed implement the method of the previous aspect.
One embodiment of the present invention is a method for generating an image. This method involves defining a curve in the image as a plurality of points. For at least two points in the curve, a function is assigned. The function of one of the points is different from the function of another of the points. For each picture element in the image, an anchor on the curve and a signed distance to the anchor are determined. For each picture element, the function assigned to the anchor of the point is selected. For each picture element in the image, the selected function is applied to the signed distance from the picture element to the anchor to determine image data for the picture element. A signed distance is a distance measure which is negative or positive, rather than an absolute value, and is useful for indicating a direction, for example.
In one embodiment of this method, the step of assigning a function to a point involves selecting a plurality of points on the boundary. An edge characteristic is assigned to each selected point. The edge characteristics of each of the selected points are interpolated to obtain edge characteristics of other points on the boundary.
In another embodiment of this method, the step of applying the function involves calculating, for each point, a factor α according to the anchor and the distance, and wherein the image data for the point is αA + (1 - α)B, where A is image data from a corresponding point in a first image and B is image data from a corresponding point in a second image.
Another aspect of the invention is a computer-readable medium having a representation of a region of interest within a coordinate space stored thereon in a computer-readable form.
This representation includes an indication of a boundary defined by a plurality of points in the region, and, for at least two points of the boundary, an indication of an edge characteristic, wherein the edge characteristic of a first point and the edge characteristic of a second point are different. In one embodiment, the representation includes an indication, for each point in the coordinate space, of a point on the boundary to which the point is closest and a signed distance to the point on the boundary.
Another aspect of the invention is a computer-readable medium having a representation of a region of interest within a coordinate space stored thereon in a computer-readable form. The representation includes an indication of a boundary defined by a plurality of points in the region and an indication, for each point in the coordinate space, of a point on the boundary to which the point is closest and a signed distance to the point on the boundary.
Another aspect of the invention is a computer-implemented process for generating a representation of a region of interest in a coordinate space. The process involves defining a boundary in response to user input. User input is also received for at least two points on the boundary, and an edge characteristic is assigned according to the user input, wherein the edge characteristic of a first point and the edge characteristic of a second point are different.

Another aspect of the invention is a computer system for generating a representation of a region of interest in a finite coordinate space. This computer system has a mechanism for defining a boundary. Another mechanism is used for assigning an edge characteristic for at least two points on the boundary, wherein the edge characteristic of a first point and the edge characteristic of a second point are different.
Another aspect of the invention is a computer-implemented process for manipulating a definition of a region of interest in a coordinate space, wherein the definition is stored using computer-readable signals on a computer-readable medium, and wherein the definition includes an indication of a boundary defined by a plurality of points in the region and an indication, for each point in the coordinate space, of a point on the boundary to which the point is closest and a signed distance to the point on the boundary. This process involves applying, for each point in the coordinate space having an anchor and a signed distance to that anchor, the edge characteristic function assigned to the anchor to the signed distance stored to obtain a value. The point in the coordinate space is displayed according to the obtained value.
Brief Description of the Drawing
In the drawing, FIGS. 1A and 1B illustrate a bit-mapped image definition of a control image for use in alpha compositing in a prior art system, and an associated blending characteristic;
FIGS. 2A and 2B illustrate a control image defined using a structured object in accordance with a prior art methodology, and an associated blending characteristic; FIGS. 3A-3E illustrate a control image defined using a structured object representation in accordance with the present invention, and associated edge characteristic functions;
FIG. 4 is a schematic illustration of a representation of a structured object definition in accordance with the present invention;
FIG. 5 is a block diagram of a computer system with which the present invention may be used;
FIG. 6 is a schematic illustration of more details of a memory system shown in FIG. 5;
FIG. 7 is a block diagram of a system for processing structured object definitions in accordance with the present invention;
FIGS. 8A and 8B are pseudocode representing example edge characteristic definitions;

FIG. 9A is a flow chart describing a process for determining anchor and distance values performed by anchor and distance calculator in FIG. 7;
FIG. 9B is a schematic illustration of a structured object delineating a region within a coordinate space;
FIGS. 9C and 9D are flowcharts describing another process for determining anchor and distance measures;
FIG. 9E is a diagram illustrating how radial distances and orthogonal distances are defined;
FIG. 10 is a flow chart describing image processing using structured object definitions, variable edge characteristics and anchor and distance values as shown in FIG. 7;
FIG. 11 is a diagram of an example output generated by the image processing module shown in FIG. 7; and FIG. 12 is a block diagram of an alpha compositing system with which a control image generated using the present invention may be used.
Detailed Description
The present invention will be more completely understood through the following detailed description which should be read in conjunction with the attached drawing, in which similar reference numbers indicate similar structures.
The invention relates to the delineation of a region of interest in a coordinate space which is a subset of a larger region in the space. One type of delineation of a region of interest is the selection of a subset of an image which is treated differently from the rest of the image.
Treatment may mean any of a plurality of image processing functions including, but not limited to, painting, compositing, transitions and special effects. Another kind of delineation of a region of interest, without relation to imaging, is the delineation of a geographical region for which some demographic characteristic applies. An example in three-dimensional space involves description of the animation of a fireball in motion, such as a comet, which includes the body and tail of the comet.
Referring now to FIG. 1A, in a prior image processing system, a region is delineated using a bit-mapped image which is generated by a user using a pen and tablet input device (not shown). This input device allows a user to define a line such as shown at 20 in the image 22 of FIG. 1A. Referring now to FIG. 1B, by virtue of the sloped shape of the tip of the pen, an edge having a mathematical characteristic such as shown at 24 is provided. When viewed, the edge defined by line 20 appears to have a short greyscale ramp. The graph shown at 24 illustrates greyscale values with respect to a distance from the boundary.
Referring now to FIG. 2A, in prior systems using structured objects, an image 26 may include an object such as the circle defined at A. A circle may be defined by its radius, e.g., two,
and its center, e.g., coordinates (0,0), within the coordinate space of the image. Referring now to FIG. 2B, a function, such as shown at 28, may define a greyscale ramp in the x direction across the boundary defined by the structured object A to provide a desired edge smoothing. When processed, a point within the image 26 may be assigned a greyscale value according to how far it is from the boundary defined by the structured object A. In practice, given a point on the boundary defined by the structured object, this function 28 is used to identify other points close to the point on the boundary and to assign greyscale values to these identified points.
An advantage of a structured object definition is that interpolation of the structured object over time, e.g., changing its shape, is easy to perform. For example, an enlarged circle having an enlarged radius, e.g., three, and the same center coordinates is shown at B in the image 26. As can clearly be seen, the size of this circle can be modified easily by incrementing or decrementing the radius value. Its position can be changed easily by moving the center coordinate.
These solutions from image processing applications involve only indirectly the notion of "proximity" in the determination of the blending characteristic at the boundary defined by the structured object. It has been found that a direct measure and use of the proximity of a point to a boundary permits the representation of more complex objects.
Proximity is a signed measure of how far a point is from the edge of the region defined by the structured object. For example, the measure may be positive if the point is inside the region, or negative if the point is outside the region. If the color black (white) is assigned to all outside (inside) points which are not close to the edge, and a ramp of greyscales is assigned for those near the edge, the region is manifested as a white area on a black background with a ramped transition from white to black on the edge of the region. The shape of the greyscale ramp, which controls the shades near the edge, is called the edge characteristic of the region.
Different greyscale ramps give rise to different edge characteristics, and thus different delineations of the region. In the image representation shown in FIGS. 1A
and 1B, an edge characteristic is not defined explicitly. The blending characteristic near the edge is really a desirable side-effect of the input device. In the image representation shown in FIGS. 2A and 2B, only one blending characteristic is assigned per region. Additionally, the blending characteristic is processed with respect to a point on the boundary of the region in order to identify points in the coordinate space proximate to the boundary point. In contrast, in the present invention, proximity is defined with respect to a point in the region, in order to identify a point on the boundary to which the given point is closest.
Referring now to FIGS. 3A-3E, the present invention permits different edge characteristic functions at different points along a curve defined by a structured object. For example, a circular region C may be delineated by a structured object definition. Variable edge characteristic functions assigned around the edge of the circle represent a circular disk with a greyscale ramp with a different appearance in every direction. For example, a point E on the edge of the circle in FIG. 3A may be assigned a first edge characteristic function f1(E) such as shown at 32 in FIG. 3B. A second point F at the opposite side of the edge of the circle in FIG.
3A may be assigned a different edge characteristic function f2(F) such as shown at 34 in FIG.
3C. This combination of edge characteristics results in a shape that looks like an egg with an egg yolk.
Such an image representation may be interpolated easily over time. For example, the radius may be changed to produce a curve as shown at D in FIG. 3A.
Alternatively, edge characteristics assigned to points E and F on the curve may change, for example to be new functions f1'(E) and f2'(F) such as shown at 36 and 38.
Referring now to FIG. 4, a representation of a delineation of a region of interest within a coordinate space, such as a two-dimensional image, is illustrated. Such a representation generally includes an indication of a definition of a structured object which provides a boundary, such as indicated at 40. In the example given, the boundary of a circle is defined by a center point and a radius. The image representation also includes an indication of one or more edge characteristic functions, such as shown at 42, which are assigned, respectively, to characteristic points on the boundary.
This representation of a delineation of a region of interest is used by a computer system 50, for example, as shown in FIG. 5. The computer system 50 includes a processor 52 connected to a memory 54 via an interconnection mechanism 56. Input devices 58 and display devices 60 are also connected to the processor and memory via interconnection mechanism 56. The computer may be programmed using a high-level programming language, such as "C", or other
programming language, to create an application program, which is machine-executable code created by compiling a computer file containing a program in the high-level programming
language. An operating system (not shown) controls the execution of the application program and provides scheduling, debugging, input/output, memory management, data management and related services on the computer.
The computer system 50 may be any of a number of commercially available general purpose computing systems, for example, the Indigo computer from Silicon Graphics Inc. of Mountain View, California, using the IRIX operating system, similar computers available from Apple Computer, or other custom image processing workstations and other programming systems. Such computer systems employ commonly available processors, such as the Intel Pentium or x86 series processors or the Motorola 680x0 series processors, for example. Other suitably programmed microprocessors may be used. The interconnection mechanism 56 may be a simple bus, or may involve a more complex switching arrangement to allow the various elements of the computer system to communicate in parallel. It should also be understood that the invention is not limited to a single processor system. A multi-processor system, including parallel processing computers, may be used in the present invention.
Exemplary input devices which may be used with the present invention include a mouse, a trackball, a pen and tablet, or other pointing devices, as well as keyboards, keypads,
scanners and other commonly used input devices for general purpose computers.
The display device 60 may be a cathode ray tube (CRT) display, a liquid crystal display (LCD), of which there are a number of types, or other display devices such as a printer. In cases where this invention is used to generate motion picture sequences, the display device 60 may be an optical printing
device or a video tape recorder.
FIG. 6 shows more details about the memory system 54 of FIG. 5. The memory system includes a computer readable and writable, random access, nonvolatile recording medium 70,
such as a magnetic or magneto-optic disk. The disk may be removable, known as a floppy disk or compact disk, or permanent, known as a hard drive. Such a disk has a number of tracks such as indicated at 72 in which signals are stored, typically in binary form,
i.e., a form interpreted as a sequence of ones and zeros as shown at 76. Such a sequence of ones and zeros provides an indication of the boundary definitions, characteristic points and associated edge characteristic functions such as shown in FIG. 4. This information is stored on the disk 70. Typically, in operation, the processor causes the data to be read from disk 70 into an integrated circuit memory element such as indicated at 78. The integrated circuit memory 78 allows faster access to the information than does the disk 70. The processor generally manipulates the data within the integrated circuit memory and then copies the data to the disk 70 when processing is completed.
A variety of mechanisms are known for managing data movement between the disk 70 and integrated circuit memory 78, and the invention is not limited thereto. The invention is also not limited to the particular processor, input device, output device or interconnection mechanism.
One embodiment of a system in accordance with the invention will now be described in connection with FIG. 7. Such a system may be implemented using a high-level computer programming language on a computer system, such as described above in connection with FIGS.
5 and 6. One module of this system is the model generator 100, which receives user input 101,
typically via a graphical user interface, and in response generates the structured objects that are used to define a boundary within a coordinate space. The model generator 100 may be, for example, matte generation software, such as found in Matador from Avid Technology Europe Ltd., formerly Parallax, of London, England, Elastic Reality from Avid Technology, formerly from Elastic Reality in Madison, Wisconsin, and Photoshop from Adobe Systems, Inc., in Mountain View, California. Such systems allow a user to generate a matte using an input device.
The matte may be output to a file as a representation of a boundary. The boundary so generated, as indicated at 102, may be stored in the memory 54 of the computer.
Another module is the edge characteristic definition module 104, which receives user input 103 and which enables a user to define an edge characteristic function as indicated at 106.
An edge characteristic function may be associated with one or more characteristic points on a boundary defined by a structured object. An interpolation may be performed, or the edge characteristic may be defined, so as to provide a single definition which applies to the whole object as a function of a position on the boundary. The definition of the edge characteristic function also may be stored in the computer memory 54. The combination of edge characteristic function 106 and boundary 102 is sufficient to describe a delineation of a region of interest in the coordinate space in which the structured object is defined.
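The interpolation mentioned above is not spelled out in the text; one simple possibility, sketched below in C, is to blend the outputs of the functions assigned to the two neighboring characteristic points. The function type, the normalization of the anchor parameter and the blend-of-outputs scheme are all assumptions made for illustration.

```c
/* Hypothetical edge characteristic function type: maps a signed,
 * normalized distance in [-1, 1] to a control value in [0, 1]. */
typedef double (*edge_fn)(double dist);

/* Interpolate between the functions assigned to two characteristic
 * points at anchor parameters t0 and t1, with t0 <= anch <= t1. */
double interp_edge_characteristic(double anch, double dist,
                                  double t0, edge_fn f0,
                                  double t1, edge_fn f1)
{
    double w = (t1 > t0) ? (anch - t0) / (t1 - t0) : 0.0;  /* blend weight */
    return (1.0 - w) * f0(dist) + w * f1(dist);
}
```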
An anchor and distance calculation module 108 determines, in a manner described in more detail below in connection with FIGS. 9A-9D, for each point in a coordinate space, the point on the boundary to which it is closest (herein called an anchor), along with a distance measure to that boundary. The anchor and distance for each point, as indicated at 110, is stored in the memory 54 of the computer. The distance measure may be a signed distance, for example.
A signed distance is a distance measure which is negative or positive, rather than an absolute value, and is useful for indicating a direction, for example.
It is possible to provide a collection of structured object definitions in this invention. In some applications, it may be preferable to pre-compute the anchor and distance values for a given structured object. In such a case, the combination of boundary definitions 102 and anchor and distance values 110 are stored together in the memory 54 for later use.
Edge characteristic functions 106 can be defined at a later time. Additionally, a set of structured objects and corresponding edge characteristics 106 may be combined to provide, for example, a library of visual effects.
An image processor 112 receives the boundary definition or structured object definition 102, the anchor and distance values 110 and the edge characteristic functions 106 to produce an image 114. The process performed by the image processor is described in more detail below in connection with FIG. 10. It is possible to combine the functions of both the anchor and distance calculator 108 and the image processor 112 into one module that performs both operations on a per pixel, per scanline basis. The image 114 may be representative of data to be displayed, or may be used as a control image in an alpha compositing system 116. Other applications and uses of this image are described in more detail below.
FIGS. 8A and 8B, which illustrate example edge characteristic definitions in more detail, will now be described. These definitions can be made using a program editor such as EMACS to generate computer code defining the edge characteristic as a procedure receiving two arguments, the anchor and the distance.
In this implementation, shown in "C" pseudocode, the edge characteristic of the region is determined by a function Color PickColor(double dist, double anch), where dist is the signed distance of the current point P from the edge C of the region and anch is the anchor of P on C.
Anch is a normalized value between 0 and 1, with respect to a predetermined point on the boundary C. Hence the values 0 and 1 of anch both represent the same point on C when C is closed. Similarly, dist is a value between -1 and 1, with the convention that dist is negative if P
is outside the region and positive if P is inside the region defined by C. A
maximal footprint (MAXDIST) is used for the edge characteristic so that dist*MAXDIST gives the actual distance of P from C.
A simple edge characteristic function is shown in FIG. 8A. This function is independent of anch. The color this edge characteristic function returns is radially symmetric
The power and flexibility of this image representation is demonstrated ~by the definition of an edge characteristic function as shown in FIG. 813. In this edge characteristic function, MixColorQ is a simple blend function which mixes colorA with another~olor which is obtained by mixing colorB and colorC. Ramps is a smooth function which maps.[-l;lJ to [~,1], the domain of the function MixColorQ. FLampO returns 0 at -1. I on the interval yWHIT~OUT,1]
Io and it is a smooth cubic ramp on the interval.[-l, WhITIrOUT], where WI-IITBOUT is a constant, e.g., 0.5.
In the definition of Pici:Color{~, the color is chosen by an application of the MixColorO function with two carefully chosen arguments. The first argument, Ramp~(dist), ensures a radially smooth transition of colors in the region. ~Vit~h the flat potion on
3 5 [V~~iITEOUT, l J, the center of the region is assigned a uniform color.(colorA). Moving away from the center, the color changes to another color, which is controlled by ~e second argument:
(anch<0.5)? anch*2: (1-anch)*2 which means that if anch less than 0.5, the value is arch*2, e9se the value is (i-anch)*2. 3his argument defines a hat-shaped function which is zero at anch~ or I and rues to 1 at anch='J.S.
2o This function causes the color at the outermost rim of the delineated region to change from colorB (anch=0) to colorC (anch=0.5) then back to colorB ranch=1 ~.0). 3'he fact that the anchor value anch is periodic means that PickColor~ should also be periodic with respect to anch.
Given the definition of a structured object, the process of determining the signed 25 distance and anchor on the curve defined by the structured object for each ~picttu~e element in ~e coordinate space will now be described in connection with F'I~S. 9A through 917. ~fG. 9A is a flow chart describing how anchor and distance values may be calculated using the c~tured object definition 102. FIG. 9B is a schematic illustration of a~coordinate space'R with a region y CA 02248721 2005-03-21 -12a-delineated using a structured object. Examples of graphical representations of edge characteristic functions for points within the region are depicted to the right of the figure.
In this process, the curve is parametrized such that each point defining the edge of the region is assigned a value i. There are many ways to parametrize a curve. In the embodiment described herein, an initial point on the curve is assigned parameter i=0, and the final point on

the curve is assigned the parameter i=1. The purpose of the process defined in FIG. 9A is to determine, for each point P in the coordinate space R, 1 ) the point Q on the edge defining the region which is the nearest point on the edge to point P, and 2) the signed distance between point P and point Q.
The first step in this process is step 200, initializing x and y values, representing pixel coordinates for the points in the image to be scanned, as well as the parameter value i of the curve. Typically, this initialization sets corresponding variables to zero. Of course other initial values may be used. Next, the minimum distance value and an initial anchor are set in step 202 to initial values. For example, the minimum distance value may be set to the highest possible value which is expected using the given coordinate space, and the anchor is set to indicate the lowest parameter value, e.g., zero.
Given the initialized values, the signed distance δ between the current point (x,y) and the current point i on the curve is determined in step 204. If the distance δ so calculated is less than or equal to the minimum distance presently stored, as determined in step 206, the minimum distance for the current point is set to this calculated distance δ in step 208 and the anchor value for this point (x,y) is set to the current parameter value i. The parameter value i is then updated in step 210 by incrementing by a value N, which is 1/(M-1), where M is the number of points on the curve. If all of the points on the curve have not been analyzed, as determined in step 212, the loop of steps 204, 206, 208 and 210 is repeated.
When all of the points on the curve have been analyzed, the minimum distance for the present point (x,y), and its anchor, have been defined. The next point in the x direction is then selected by incrementing the x value in step 214 and resetting the parameter value i to its initial value, e.g., zero. All points in the scan line are then visited by continuing to perform steps 202 through 214 until the end of the scan line is reached, as determined by step 216. When the end of the scan line is reached, the y value is incremented in step 218 and the x value is initialized, e.g., to zero. Each scan line is processed in accordance with steps 202 through 218 until all scan lines have been processed, as determined by step 220. Upon the completion of step 220, the minimum distance for each point and its anchor is determined.
The process described in connection with Fig. 9A is illustrated for only two dimensions.
An extension of this process to three dimensions is relatively simple by adding an additional loop having steps similar to steps 218, 220 and a link back to step 202 for the third dimension to be processed.

In step 204, a distance metric is used to measure the distance between a point in the coordinate space and points on the curve. A simple distance metric to use is the Euclidean distance, or the squared Euclidean distance which avoids performing a square root operation.
The Euclidean distance ||P|| of a point (x1, y1) to another point (x2, y2) in a coordinate space is represented by the following equation:
||P|| = √((x1 - x2)² + (y1 - y2)²) in two-dimensional space; and
||P|| = √((x1 - x2)² + (y1 - y2)² + (z1 - z2)²) in three-dimensional space.
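In C, these metrics might be written as follows; the squared form mentioned above, which avoids the square root while preserving the ordering of distances, is included as well. The function names are illustrative only.

```c
#include <math.h>

/* Euclidean distance between two points in two-dimensional space. */
double euclidean_distance(double x1, double y1, double x2, double y2)
{
    double dx = x1 - x2, dy = y1 - y2;
    return sqrt(dx * dx + dy * dy);
}

/* Squared Euclidean distance: avoids the square root, and comparing squared
 * distances gives the same ordering as comparing the distances themselves. */
double squared_distance(double x1, double y1, double x2, double y2)
{
    double dx = x1 - x2, dy = y1 - y2;
    return dx * dx + dy * dy;
}
```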
Many other distance metrics are known for measuring a difference or a similarity between two points. The Euclidean distance is merely an example of distance metrics to be used.
This distance may be normalized to a range of [-1, 1] by dividing the Euclidean distance by a factor MAXDIST, indicating a maximum possible distance in the region. The distance calculated may be a signed distance. One way to select a sign for the distance is to consider the curve as dividing the coordinate space into two regions. This is done by viewing the curve as moving forward in the coordinate space as the parameter value i increases. In a plane, this curve then divides the plane into two regions, one region to the right of the curve, another region to the left of the curve. In such a case, the signed distance between a point P in the coordinate space and the curve is defined to be positive if point P is on the right hand side of curve C, and negative if point P is on the left hand side of curve C.
Concerning the identification of the anchor, i.e., the point on the curve which is closest to a given point on the coordinate space, it is possible that two or more points on the curve may have the same distance to any given point in the coordinate space. In such a situation, one point should be selected. The process of selecting this point can be performed in the comparison and selection steps in steps 206 and 208 in Fig. 9A. In the embodiment shown in Fig. 9A, the point on the curve having the highest parameter value i is selected as the anchor.
By making the comparison in step 206 only a "less than" comparison, the point on the curve with the lowest parameter value i could be selected. The same effect could be obtained by changing the orientation of the parametrization, or by changing the initialization and update steps 200 and 210, respectively.
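Putting steps 200 through 220 together, a brute-force computation of the anchor and signed distance might look like the following C sketch. The curve is assumed to be given as M sampled vertices, ties are broken in favor of the higher parameter value as just described, and the sign is derived from a 2D cross product against the local tangent, which is one common convention; the patent does not prescribe how the left/right test is implemented.

```c
#include <float.h>
#include <math.h>

/* Assumed convention for the side of the curve on which (px,py) lies,
 * judged from the local tangent at vertex i: +1 on one side, -1 on the other. */
static double side_sign(const double *cx, const double *cy, int M, int i,
                        double px, double py)
{
    if (M < 2) return 1.0;                          /* degenerate curve */
    int j = (i + 1 < M) ? i + 1 : i;                /* next vertex for the tangent */
    int k = (j == i) ? i - 1 : i;
    double tx = cx[j] - cx[k], ty = cy[j] - cy[k];  /* tangent direction */
    double cross = tx * (py - cy[i]) - ty * (px - cx[i]);
    return (cross < 0.0) ? 1.0 : -1.0;
}

/* Brute-force delineation map in the spirit of FIG. 9A: for every pixel,
 * scan all M curve vertices, keeping the smallest distance and its parameter. */
void delineation_map(const double *cx, const double *cy, int M,
                     int width, int height, double *dist, double *anchor)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            double best = DBL_MAX;
            int best_i = 0;
            for (int i = 0; i < M; i++) {           /* step 204 */
                double dx = x - cx[i], dy = y - cy[i];
                double d = sqrt(dx * dx + dy * dy);
                if (d <= best) {                    /* steps 206/208; "<=" keeps the
                                                       highest parameter on ties */
                    best = d;
                    best_i = i;
                }
            }
            int p = y * width + x;
            anchor[p] = (double)best_i / (double)(M - 1);
            dist[p] = best * side_sign(cx, cy, M, best_i, x, y);
        }
    }
}
```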
The result of the process of Fig. 9A is information stored in memory for every point P in the coordinate space, representing two unique scalars:
i, called the anchor of P: and 8, its signed distance.
This mapping of point P to ordered pair (i,8) is called a delineation map of the curve C.
The delineation map of a curve C can be represented by a variety of media, such as colors. For example, a color map may be defined which assigns a color based on the anchor and signed to distance with respect to the curve. Thus, a region is delineated via the delineation map.
The brute force approach to the calculation of the anchor and signed distance in Fig. 9A
involves order O(Mxy) computation time, where M is the number of parameters representing the curve, and x~y represents the number of pixels in the image. A more efficient calculation is desirable.
15 Such a process will now be described in connection with Figs. 9C and 9D.
Similar to the embodiment of FIG. 9A, in this embodiment, in two dimensions a curve C is parametrized by a parametric value i which represents a pair of coordinates (x,y). Since a computer processes discrete pixels, in essence the curve C is approximated by a piecewise linear curve where each piece is represented by parameter i and has two vertices - the point (x,y) corresponding to 2o parameter i and the point (x,y) corresponding to parameter i-1. The curve C
between successive vertices is assumed to be all the points lying on a straight line between the two vertices.
While computing the distance from a point P to a curve C may involve computing with infinitely many points on the curve, computing the distance from P to its piecewise linear approximant involves work only in the order of the number of vertices, as demonstrated by the 25 process of FIG. 9A. The distance from P to C is defined to be the shortest distance from P to any point on C. There are two kinds of distances - radial distances and orthogonal distances. A
radial distance is the distance, using some predefined metric such as the Euclidean metric, between P and any vertex of C. A radial distance can be visualized as defining a circular disk with a vertex as a center. Orthogonal distance is the distance, using the metric used to compute 3o the radial distance, between P and each straight line segment between two adjacent vertices of the piecewise linear curve. An orthogonal distance can be visualized by an unite line drawn through P and in the direction orthogonal to the line segment. If this infinite line intersects the .. . .

line segment at some point Q between the two vertices defining the line segment, then the orthogonal distance is defined and is equal to the distance between P and Q.
On the other hand, if the point Q is not between the two vertices defining the line segment, the orthogonal distance between P and the line segment is undefined.
As described below in connection with FIG. 9D, this process actually determines the reverse. That is, given a scanline (*,y), where * stands for any x value between the left and right limits of the image, this process determines the x values on the scanline which have a well-defined orthogonal distance with a line segment defined by two vertices of the curve C.
Additionally, the smallest orthogonal distance produced by these valid x-values is determined.
1o In the process of this embodiment, better performance is obtained by traversing the curve, vertex by vertex, and updating the distances and anchors in an array representing a scanline only as an update is needed between scanlines, instead of by computing a distance and anchor for each pixel in every scanline. This process takes advantage of the fact that the distance and anchor values depend on the maximum distance from the curve in which pixels are affected 15 by the edge characteristics, defined by a value MAXDIST, and that only a few pixels in a scanline can be affected by a given point on the curve. Thus, the process of updating distance and anchor values for each scanline only as needed is accomplished by two steps. Referring now to FIG. 9E, first, for radial distances from the vertices, only those pixels within a maximum distance MAXDIST -need to be updated. For example, only those points (between xo and x,) on 2o the scanline that are within a radial distance MAXDIST from a point P; on the curve need to be updated. Second, the x-intercepts on the current scanline of two lines which pass through the endpoints of each line segment (e.g., defined by P; and P;+1) and which are normal to the line segment are computed easily. The orthogonal distance between points on the scanline and the line segment is also computed easily. Only those pixels between the two intercepts (e.g., x, and 25 x3) need to be updated when the orthogonal distance is smaller than a maximum distance MAXDIST.
Referring now the Fig. 9C, in this embodiment, an image is processed one scanline at a time. Thus, the first step is setting a y value to zero, in step 230. Two arrays are then generated for the scan line, in step 232. A first array is called the distance table and holds the current 3o minimum distance to the curve for each point x along the scanline. The second array is called the anchor table and holds the current closest point on the curve for each point x along the scanline. Thus, the distance table and the anchor table are x-entry arrays, where x is the number S~~mE~,.~~_ of pixels in a scanline. The process of generating these arrays is described below in connection with FIG. 9D. Then, for each pixel in the scan line, a color is selected by steps 234 through 240.
In particular, x is initialized to zero in step 234. The color for the pixel defined by the current x value and the current scanline is determined according to the distance and anchor values in the distance table and the anchor table for the current x value (step 236). X is incremented in step 238 and these steps are performed until the end of the scanline is reached as determined in step 240. These steps 234 to 240 may be performed separately, as part of the image processing module described below in connection with Fig. 10 if the distance table and anchor table is stored for each scanline. The y value is then incremented in step 242 to process subsequent t o scanlines until the final scanline has been processed as determined in step 244.
Referring now to FIG. 9D, the process of generating the distance and anchor tables will now be described. The first step of this process is determining whether the current scanline is within a bounding box defining the region of the image affected by the curve and its associated edge characteristics (step 246). In step 246, the scanline value y is compared to the minimum 15 and maximum y values defining the bounding box. If the scanline is outside of the bounding box, the anchor and distance values are set to a default value, such as infinity, in step 248.
If the scanline is within the bounding box, a parameter value, e.g., i, is initialized in step 250 to an initial value, e.g. zero. The pixel coordinates (x,y) of a point P
corresponding to the current parameter i are then obtained in step 252. The distance along the y axis between this 2o point and the current scanline is then compared to the value MAXDIST in step 254 to determine if this point may affect the distance and anchor values of the current scanline. Thus, if the distance is less than MAXDIST, the distance table and anchor table arrays are updated by the radial distances from point P for only those points within MAXDIST along the x axis of the point P in step 256. Thus, a distance, such as the Euclidean distance, is calculated between the 25 point P and each point in the scanline between x - MAXDIST and x + MAXDIST, where x is the x coordinate of point P. This distance is compared to the current distance in the distance table for the point, and the distance table and anchor table is updated, using steps similar to steps 206 and 208 in FIG. 9A. Steps 252 through 256 are repeated for each parameter i, or each point along the curve, by incrementing i (step 258) until all points on the curve have been evaluated as 3o determined in step 260.
The parameter value, e.g., i, is again initialized in step 262 to an initial value, e.g. zero.
The pixel coordinates (x,y) of a point Q corresponding to the current parameter i, and pixel ~.-- ',:~:'r . .

coordinates (x,y) of a point P corresponding to the previous parameter (i-1) are then obtained in step 264. The orthogonal projection of the scanline onto the line segment defined by points P
and Q is then determined, and the orthogonal distance to points P and Q from the scanline is computed in step 266. The orthogonal distances are compared to the value MAXDIST in step 268. If both of these lengths are greater than MAXDIST, these points do not affect the distance and anchor values of the scanline. Otherwise, the distance table and anchor table are updated in step 270 between the minimum x point on the scanline and the maximum x point on the scanline corresponding to the orthogonal projections of the scanline on the line segment defined by points P and Q. That is, for each point along the scanline between the minimum and maximum x points, the orthogonal distance to the line segment defined by points P and Q is calculated. This distance is compared to the current distance in the distance table for the point, and the distance table and anchor table are updated, using steps similar to steps 206 and 208 in FIG. 9A. Steps 264
Having now described how the anchor and distance values are determined for points in a region, the use of the boundary definitions, edge characteristics and anchor and distance values by image processor 112 to produce an image 114 will now be described in connection with Fig.
10. Given predefined edge characteristic,functions for selected points on the boundary, the first step of image processing is to interpolate these edge characteristic functions (step 300) to 2o determine an edge characteristic function for each point on the boundary.
This step is not necessary where the edge characteristic function is independent of the anchor, such as in FIG.
8A, or is defined for the entire object as a function of the anchor. Given the parametrization of the curve, there should be an edge characteristic function for each value of the parameter. Next, x and y values are initialized, e.g., to zero and zero, respectively, to allow for scan line processing of the image. For each point (x,y), the edge characteristic function assigned to the anchor of that point is selected in step 304. This function is applied to the signed distance for that point in step 306 to obtain a value, herein called a. It should be understood that a boundary within the image may limit the area affected by the edge characteristics.
Accordingly, when the distance is greater than some maximum distance, a default color, such as black, should be 3o generated. The combination of all these values a for each point (x,y) defines the image 114.
Next, the x value is incremented in step 308 to further process the scan line.
The scan line of steps 304 through 308 are repeated until the end of the scan line is reached as determined in step ~~~s~-m -l g-310. At the end of the scan line, the y value is incremented in step 312, and the x value is reset to its initial value, e.g., zero. Each scan line is processed by repeating steps 304 through 312 until all the scan lines have been processed as determined in step 314. The image 114 is then complete upon the completion of step 314.
As discussed above, the image processing function may be combined with the anchor and distance calculation. This combination of processing functions can be implemented by inserting steps 304 and 306 between steps 212 and 214 in Fig. 9A for example.
FIG. 11 shows an example image that may be generated. This image is defined by an object or curve 320. The edge characteristic defined in a direction indicated by line 322 is a longer greyscale ramp than that indicated, for example, at 324. This figure illustrates that the use of variable edge characteristics which are a function of a point on the curve enables the illustrated effect to be created, which is not easily performed by prior art methods.
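The per-pixel application of the edge characteristic functions described in connection with FIG. 10 (steps 304 and 306) might be sketched in C as follows. Here edge_fn_for_anchor() stands in for whatever lookup or interpolation yields the function assigned to a given anchor, which the text leaves open, and the distances are assumed to be stored in pixels and normalized before the function is applied.

```c
#include <math.h>

typedef double (*edge_characteristic)(double dist, double anch);

extern edge_characteristic edge_fn_for_anchor(double anch);   /* hypothetical lookup */

/* dist[] holds signed distances in pixels; anchor[] holds normalized anchors. */
void apply_edge_characteristics(const double *dist, const double *anchor,
                                int width, int height, double max_dist,
                                double *alpha_out)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int p = y * width + x;
            if (fabs(dist[p]) > max_dist) {   /* outside the footprint: default value */
                alpha_out[p] = 0.0;           /* e.g. black */
                continue;
            }
            edge_characteristic f = edge_fn_for_anchor(anchor[p]);   /* step 304 */
            alpha_out[p] = f(dist[p] / max_dist, anchor[p]);         /* step 306 */
        }
    }
}
```

The resulting values can then serve as the control values used by the alpha compositing system described next.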
Referring now to FIG. 12, an application of this invention will now be described. FIG.
12 shows a block diagram of a system for alpha compositing of an image. Such a system includes a blender 80 which receives pixel data such as Ax,y and Bx,y from respective frame stores 82 and 84. A control image is received from frame store 86. The control image provides alpha values αx,y to the blender 80, which then performs the function of equation (1) provided above to output pixels Cx,y for the resulting image, which is placed in frame store 88.
The compositing system of FIG. 12 may be implemented to process continuous synchronized video signals, such as in the Kellar patent, or to process streams of digital video information which are intermittent and flow controlled, such as in systems available from Avid Technology and as described in PCT Publication No. 94/24815.
There are other applications of the invention in the representation of images and data.
In general, such applications involve representing values within a coordinate space as a function of the distance to a nearest point on a boundary in that space. As an example, the probability that a point of land may be under water may be defined with reference to a distance to a point at a current boundary of a body of water to which the given point is closest. For example, various points along the boundary may be assigned different probability functions. As an additional example, the movement of a fireball, such as a comet, may be represented as a circle with a long edge characteristic function defining its tail and randomly modified edge characteristics to indicate flames shooting from its core. An image generated by such a representation might be blended into another image using alpha compositing.
*Trade-mark More complex images may be defined by using such structured objects with an ed~~e characteristic function that defines a characteristic of a pixel of an image as a function of its distance to a closest point on the structured object. Such an image representation is also readily manipulated to allow for a variety of image processing applications, including. image COlllpOSitlllg, special effects, animation and modeling. Additionally. the structured objects in this invention have all of the benefits of structured objects and many of the benefits of a bit map.
particularly in the definition of edges.
Having now described a few embodiments of the invention, it should be apparent to those skilled in the art that the foregoing is merely illustrative and not limiting, having been presented by way of example only. Numerous modifications and other embodiments are within the scope of one of ordinary skill in the art and are contemplated as falling within the scope of the invention as defined by the appended claims.

Claims (14)

CLAIMS:
1. A method for generating an image, comprising the steps of:

(a) defining a structured object having a boundary of a plurality of points;

(b) assigning an edge characteristic function to each point along the boundary, wherein the edge characteristic functions of at least two points are different, and wherein each of the edge characteristic functions defines an edge characteristic which defines image values near the boundary;

(c) for each picture element in the image, determining an anchor on the boundary and a signed distance to the anchor wherein the anchor is a point on the boundary to which the picture element is closest;

(d) for each picture element in the image, selecting the edge characteristic function assigned to the anchor determined for the picture element; and

(e) for each picture element in the image, applying the selected edge characteristic function to the signed distance from the picture element to the anchor to determine image values for the picture element.
2. The method of claim 1, wherein the step (b) of assigning comprises the steps of:

selecting points from the plurality of points on the boundary;

assigning an edge characteristic function to each selected point; and interpolating the edge characteristic function of each of the selected points to obtain edge characteristic functions of other points on the boundary.
3. The method of claim 1, wherein the step of applying the function includes calculating, for each point, a factor .alpha. according to the anchor and the distance, and wherein the image values for the point are .alpha.A+(1-.alpha.)B, where A is data from a corresponding point in a first image and B is image data from a corresponding point in a second image.
4. The method of claim 1, wherein the step of assigning further comprises:

interpolating the edge characteristic functions to determine an edge characteristic function for each point on the boundary.
5. The method of claim 1, wherein the step of determining the anchor further includes the steps of:

initializing coordinates for points on the boundary;

selecting a minimum distance value;

selecting the anchor on the boundary;

determining a signed distance between the picture element and the selected anchor; and comparing the signed distance to the minimum distance value.
6. The method of claim 5, wherein the step of comparing further includes the steps of:
setting the minimum distance value to the signed distance;
and selecting an anchor for the picture element corresponding to the minimum distance value.
7. The method of claim 5, wherein the step of selecting the anchor further includes the steps of:

determining a signed distance between a picture element and each point on the boundary;

comparing the signed distance to the minimum distance value of each point on the boundary; and defining the minimum distance value for the picture element and selecting a corresponding anchor.
8. The method of claim 1, wherein the step of determining the anchor further includes the steps of:

for each point in each scanline, generating a first array which is a distance table including current minimum distances to the boundary for the point;

generating a second array which is an anchor table including current closest points on the boundary for the point; and selecting a color for each point along the scanline according to the distance table and anchor table for the point.
9. The method of claim 8, wherein the steps of generating a first array and a second array include:

determining whether the current scanline is within a bounding box defining a region of the image affected by the boundary and by associated edge characteristics;

initializing a parameter value of the boundary;

obtaining pixel coordinates of a point corresponding to a current parameter;

comparing a distance along a y-axis between the point and the current scanline to a maximum distance, the maximum distance representing the distance from the boundary in which pixels are affected by the edge characteristics of the boundary;

updating the distance table and the anchor table in the first and second arrays using radial distances from the point for only those points within the maximum distance along an x-axis of the point; and incrementing the parameter value until all points on the boundary have been evaluated.
10. The method of claim 9, further including the steps of:

reinitializing the parameter value;

obtaining coordinates of the point corresponding to the current parameter and the coordinates of a point in the image corresponding to the previous parameter;

determining an orthogonal projection of the scanline onto a line segment defined by the point representing the current parameter and the point in the image;

determining the orthogonal distance to the points from the scanline;

comparing the orthogonal distance to the maximum distance;
and updating the distance table and anchor table in the first and second arrays between a minimum point on the scanline and a maximum point on the scanline corresponding to the orthogonal projections of the scanline on the line segment defined by the points.
11. The method of claim 9, wherein the step of determining whether the current scanline is within a bounding box includes setting anchor and distance values to default values if the scanline is outside the bounding box.
12. The method of claim 2, wherein the edge characteristic function has two arguments, the anchor and the distance.
13. The method of claim 2, wherein the edge characteristic function is applied by determining, for each point in the region, an anchor point on the boundary to which the given point is closest, and a distance to the anchor point.
14. A computer readable medium having computer readable instructions stored thereon for execution by one or more computers, that when executed implement the method of any one of claims 1 to 13.
CA002248721A 1996-03-11 1997-03-05 A computer system and process for defining and manufacturing images using structured objects with variable edge characteristics Expired - Fee Related CA2248721C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US08/613,283 1996-03-11
US08/613,283 US5754180A (en) 1996-03-11 1996-03-11 Computer system and process for defining and manipulating images using structured objects with variable edge characteristics
PCT/US1997/003436 WO1997034261A1 (en) 1996-03-11 1997-03-05 A computer system and process for defining and manufacturing images using structured objects with variable edge characteristics

Publications (2)

Publication Number Publication Date
CA2248721A1 CA2248721A1 (en) 1997-09-18
CA2248721C true CA2248721C (en) 2005-12-20

Family

ID=35645845

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002248721A Expired - Fee Related CA2248721C (en) 1996-03-11 1997-03-05 A computer system and process for defining and manufacturing images using structured objects with variable edge characteristics

Country Status (1)

Country Link
CA (1) CA2248721C (en)

Also Published As

Publication number Publication date
CA2248721A1 (en) 1997-09-18

Similar Documents

Publication Publication Date Title
US5754180A (en) Computer system and process for defining and manipulating images using structured objects with variable edge characteristics
Agoston et al. Computer graphics and geometric modeling
US6016152A (en) Apparatus and method for non-uniform image scaling
US7034823B2 (en) 3D computer graphics processing apparatus and method
US5369739A (en) Apparatus and method for generating point sample masks in a graphics display system
US6529193B1 (en) System and method for generating pixel values for pixels in an image using strictly deterministic methodologies for generating sample points
US6891535B2 (en) System and method for modeling graphics objects
US5222204A (en) Pixel interpolation in perspective space
EA012579B1 (en) System and method for approximating an editable surface
US6784896B1 (en) Colorization of a gradient mesh
Szirmay-Kalos Theory of three-dimensional computer graphics
Li et al. Representing higher-order singularities in vector fields on piecewise linear surfaces
US11217002B2 (en) Method for efficiently computing and specifying level sets for use in computer simulations, computer graphics and other purposes
US6954203B2 (en) Modeling and combining multiple graphics objects
CA2248721C (en) A computer system and process for defining and manufacturing images using structured objects with variable edge characteristics
US7027045B2 (en) Modeling graphics objects with topological hints
US5821942A (en) Ray tracing through an ordered array
US20050276518A1 (en) Computer graphics system and computer-implemented method for generating coarse-level meshes for multi-resolution surface editing
CA2674993C (en) System and method for model creation and computer animation
Agoston Raster Algorithms
Luo Effective techniques for generating Delaunay mesh models of single-and multi-component images
Sramek High precision non-binary voxelization of geometric objects
KR20060003059A (en) Method and system for filling in a parallelogram
Popescu Camera Model Design

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed