CA2256970A1 - Method for accessing and rendering an image - Google Patents

Method for accessing and rendering an image Download PDF

Info

Publication number
CA2256970A1
CA2256970A1
Authority
CA
Canada
Prior art keywords
image
scanline
objects
segment
scanlines
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA 2256970
Other languages
French (fr)
Inventor
John-Paul J. Gignac
Sam D. Coulombe
Dale M. Wick
Stephen B. Sutherland
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Truespectra Canada Inc
Original Assignee
John-Paul J. Gignac
Truespectra Inc.
Sam D. Coulombe
Dale M. Wick
Stephen B. Sutherland
Truespectra Canada Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by John-Paul J. Gignac, Truespectra Inc., Sam D. Coulombe, Dale M. Wick, Stephen B. Sutherland, Truespectra Canada Inc. filed Critical John-Paul J. Gignac
Priority to CA 2256970 priority Critical patent/CA2256970A1/en
Priority to PCT/CA1999/001216 priority patent/WO2000039754A1/en
Priority to CA002355905A priority patent/CA2355905A1/en
Priority to AU18513/00A priority patent/AU1851300A/en
Priority to EP99962004A priority patent/EP1141898A1/en
Priority to JP2000591580A priority patent/JP2002533851A/en
Publication of CA2256970A1 publication Critical patent/CA2256970A1/en
Abandoned legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method of defining and rendering an image comprising a plurality of components (bitmaps, vector-based elements, text and effects or other effects) and an alpha channel. The components are grouped into a ranked hierarchy based on their position relative to each other. There can be groups of groups. With this grouping, each component can be defined using a common protocol and rendering and processing of the components can be dealt with in the same manner. The image can be processed on a scanline-by-scanline basis. For each scanline analysis, information regarding neighbouring scanlines are acquired and processed, as needed.

Description

- WH-10,340CA
TITLE: METHOD FOR ACCESSING AND RENDERING AN IMAGE
FIELD OF THE INVENTION
The present invention relates to a method for defining various objects which make up an image and a method of rendering the image on a scanline by scanline basis.
BACKGROUND OF THE INVENTION
There are a number of computer graphics programs which store various objects and use these objects to render the final image. Generally these programs are divided into vector-based programs and bitmap-based programs. COREL DRAW™ is primarily vector based, whereas PHOTOSHOP™ is essentially bitmap based. These known graphics packages allocate enough temporary storage for the entire rendered image and then render each object, one by one, into that temporary storage. This approach fully renders lower objects prior to rendering upper objects. The programs require substantial memory in rendering the final image.
Some programs allow an object to be defined as a group of objects and this provides some flexibility. In the case of an object being a group of objects, this group is effectively a duplicate of the base bitmap. Groupings of objects add flexibility in changing the design or returning to an earlier design, but substantial additional memory is required.
The final image of graphics packages is typically sent to the raster device for output, which renders the image on a scanline by scanline basis. The final image is defined by a host of scanlines, each representing one row of the final bitmap image. Raster devices include printers, computer screens, television screens, etc.

Vector-based programs such as COREL DRAW™ produce a bitmap of the final image for the raster device. Similarly, the graphics program PHOTOSHOP™ produces a bitmap of the final image.
Vector-based drawings tend to use little storage before rendering, as simple descriptions can produce visually significant results. Vector drawings are usually resolution independent, and they are made up of a list of objects described by a programming language or other symbolic representation. Bitmap images, in contrast, are a rectangular array of pixels wherein each pixel has an associated color or grey level. Such an image has a clearly defined resolution (the size of the array). Each horizontal row of pixels of the bitmap is called a scanline. Bitmaps tend to use a great deal of storage, but they are easy to work with because they have few properties.
Recently the ability to combine layers of content has been standardized using a so-called "alpha channel", which represents the transparency of an object or pixel. There are levels of transparency between solid and transparent, which can be represented as a percentage.
Although some standard file formats such as CompuServe's GIF are limited to 2 levels (solid and transparent), newer formats such as Aldus' TIFF, PNG (Portable Network Graphics) and the Digital Imaging Group's ".fpx" format allow 256 or more levels of transparency, which allows for smooth blending of layers of content. Normally manipulation of alpha channel information is limited to bitmap based programs.
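The multi-level blending described above is conventionally an "over" composite weighted by the alpha value. The following sketch illustrates the idea for 8-bit alpha; the function name and integer arithmetic are illustrative, not taken from the patent.

```python
def blend_over(fg, bg, alpha):
    """Composite a foreground colour over a background using an
    8-bit alpha value (0 = fully transparent, 255 = fully solid).
    fg and bg are per-channel tuples such as (r, g, b)."""
    return tuple((f * alpha + b * (255 - alpha)) // 255
                 for f, b in zip(fg, bg))

# A half-transparent red over white blends smoothly to pink,
# something a 2-level (GIF-style) alpha cannot express.
blend_over((255, 0, 0), (255, 255, 255), 128)
```

With only two alpha levels, the result is always either `fg` or `bg`; 256 levels give the smooth blending the newer formats allow.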
There remains a need for a method which allows the compact descriptions of vector programs, with a retargetable output resolution, and which additionally allows for full use of all of the powerful image-processing effects of a bitmap-based program, including the alpha channel capabilities. Our earlier U.S. patent application SN 08/629,543, entitled Method for Rendering an Image, allows for scanline-based rendering and divides all objects into a tool and a region, where the region acts as a local alpha channel for the tool. This does not allow for a more general use of alpha channels to create holes in images which, when used on web pages for example, show through to the background. Also, some types of objects, such as formatted text with color highlighting, cannot easily be represented with a separate region (the shape of the text) and tool (the coloring for the text), since particular words need to have different colors, and these need to follow the words when the text is reformatted. Additionally, the interface is inconvenient to use, as it returns a variable number of scanlines, including none, when the output device works best with exactly one scanline for each call.
SUMMARY OF THE INVENTION
The present invention allows the user to define an image using different definitions for the individual objects of the image. The objects can be defined as a region, tool and alpha channel, as a bitmap and alpha channel, or as a group of objects where each object within the group is defined as a region, tool and alpha channel or a bitmap and alpha channel. A group of objects is defined to act as any other object in the string of ranked objects defining the image. Grouped objects have the same defining characteristics as an object defined by a region, tool and alpha channel or an object defined by a bitmap and alpha channel. A group object can contain within its grouping a further grouped object, which can itself contain grouped objects. With this definition, each grouped object can be defined using the common protocol, and the rendering and processing of objects can all be dealt with in the same manner. This arrangement allows for all of the advantages of being able to group objects while having a common and consistent manner for dealing with the storage and rendering of an image defined by the different types of objects.
A method for rendering an image on a scanline by scanline basis, where the image is composed of a plurality of distinct segments, according to the present invention comprises defining each distinct segment of the image as a) a region, tool and alpha channel, b) a bitmap and alpha channel, or c) a group of objects where the objects are defined according to a) and b), where each definition includes information on the scanlines of the image affected by the particular segment.
The method further includes defining an order of the distinct segments from lower to higher in the image, and successively returning scanlines of the image where each scanline is returned by i) examining the segments to determine which segments, and the order of the segments, affect the scanline to be returned, ii) examining the determined ordered segments of step i) and determining the particular scanlines to be outputted by each ordered segment for returning the particular scanline of the image, and iii) using the ordered segments from lower to higher and returning the determined scanlines of each segment as an input to the next higher segment until a scanline of the image is returned.
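Step i) above, selecting the segments that affect a given scanline in lower-to-higher order, can be sketched as follows. The dictionary fields `top`, `bottom` and `lookaround` are hypothetical names for each segment's bound box rows and extra-information requirement; they are not taken from the patent.

```python
def segments_affecting(segments, y):
    """Return, in lower-to-higher order, the segments whose bound box
    (widened by their lookaround distance) covers scanline y.
    'segments' is already ordered from lower to higher."""
    return [s for s in segments
            if s["top"] - s["lookaround"] <= y <= s["bottom"] + s["lookaround"]]
```

Because the input list is already ranked, filtering it preserves the order needed for the later bottom-to-top rendering pass.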
According to an aspect of the invention, the image includes a background segment which is defined as a region, tool and alpha channel, and the background segment is applied as a last segment prior to returning a scanline.
According to yet a further aspect of the invention, the method includes using an initial input for the lower most object equivalent to transparent scanlines.
According to yet a further aspect of the invention, the step of examining the determined ordered segments of step i) and determining the particular scanlines to be outputted by each ordered segment for returning the particular scanline of the image is carried out by examining the segments from the highest segment to the lowest segment.
According to yet a further aspect of the invention, some of the segments include a lookaround object which requires at least several scanlines from a lower object to return a scanline.
According to yet a further aspect of the invention, any defined group of objects can include as part thereof, a further group of objects.
An image made up of a number of render objects will have a "RenderLayer" as the base render object, which holds a list of the render objects contained in it. To create a RenderLayer job (which implements the render job interface), the RenderLayer first surveys the ordered list of objects from top to bottom to determine the dependencies of the contained objects, and determines the amount of lookaround for each object. The RenderLayer job renders a scanline on request by creating a buffer for all active contained objects, that is, all objects that affect the current scanline. Each active object is partially rendered, in order, from bottom to top. The minimum portion of each object is rendered, that portion being the minimum number of scanlines required to create an output scanline for the parent object. Buffers that will be needed on subsequent calls to the render engine are retained for efficiency in calculations. Information buffered for objects can be reused or deleted when it is no longer needed. This provides for both computational and memory efficiency and allows for scanlines to be output sooner. A render object can combine its output using a variety of operators that allow for special effects such as punching a hole through a background.
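The buffer-retention policy described above, keeping only the scanlines a lookaround object may still consult and discarding the rest, can be sketched with a small sliding-window cache. The class and method names are illustrative, not part of the disclosure.

```python
from collections import deque

class ScanlineCache:
    """Retain only the scanlines a look-around object still needs.
    'reach' is the object's vertical look-around in scanlines."""
    def __init__(self, reach):
        self.reach = reach
        self.lines = deque()          # (y, pixels) pairs, oldest first
        self.next_y = 0

    def push(self, pixels):
        """Buffer the next rendered scanline from the object below."""
        self.lines.append((self.next_y, pixels))
        self.next_y += 1

    def window(self, y):
        """Discard buffers more than 'reach' rows above y (no longer
        needed), then return the retained scanlines around y."""
        while self.lines and self.lines[0][0] < y - self.reach:
            self.lines.popleft()
        return [p for (yy, p) in self.lines if abs(yy - y) <= self.reach]
```

Memory use is bounded by the lookaround reach rather than the image height, which is what allows scanlines to be output sooner.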
Each object is separately maintained in storage and has its own resolution or an unlimited resolution where possible. The rendering effect of each object is at the best resolution for imparting the rendering effect of the object to each segment of the scanline which the object affects. For example an object may only affect a middle segment of the scanline and the best resolution of the object for this segment of the scanline is used.
A method for grouping dependent elements of an image is provided, where the method groups each element of the image into either (i) a region, tool and alpha channel;
(ii) a bitmap and alpha channel; or (iii) a group of objects defined according to (i) or (ii), where each group includes information on the scanlines of the image affected by a particular element, and creating a plurality of single dependencies between each subordinate element and its parent.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention are shown in the drawings, wherein:
Figure 1 displays the abstract interfaces for the render object and render job;
Figure 2 defines the support classes which store information used to communicate between various parts of the method;
Figures 3, 3a, 3b, 3c, 3d, 3e, 3f, 3g, 3h, 3i, 3j, 3k and 3l contain pseudo code illustrating the steps involved in rendering an image to an output device;
Figure 4 displays a rendering pipeline with a RenderLayer which contains two render objects;
Figure 5 displays a rendering pipeline with an effect which contains a region and a tool;
Figure 6 displays a depiction of a rendered image containing the objects defined in Figures 4 and 5;
Figure 7 displays the inter-relationship of the various major classes used to define the method;
Figure 8 follows the partial rendering process of objects contained in a RenderLayer;
Figure 9 is a visual representation of the objects referred to in Figure 8;
Figure 10 is an example of how RenderLayers work to create a result, shown as a list within a list;
Figure 11 is a hierarchical representation of the objects in Figure 10; and
Figure 12 shows (a) a rendered final composition of the objects described in Figure 10, with the Layer1 object rendered into the scene;
(b) the Text1 object;
(c) the Text1 object with the Shadow1 object rendered over top; and (d) the Text1 object with the Shadow1 object rendered over top, and then the Wave1 object rendered on top of that.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
An image is defined using a common standard and according to at least three classifications. The common standard is that each classification includes a bound box definition, which defines the area that the particular object affects; a lookaround distance, which specifies any additional information that might be required for the object to render itself; and the alpha channel.
The first classification is a simple object defined by a region, tool and alpha channel, similar to a vector program. The second classification is a bitmap object as commonly defined by a draw program. The third classification is a grouped object, which is defined by a plurality of objects and can include further grouped objects. The common definition or standard allows grouped objects to be rendered in a similar manner to the series of objects defining the image. It also simplifies the calculations necessary for rendering a scanline, as a group of objects has a common definition which allows the requirements of that particular group to be known to the other objects within a series of objects. As far as the adjacent higher and lower objects are concerned, a grouped object is merely a different type of object having common characteristics, and as such, the higher and lower objects continue to interact with the object in a common set manner.
This standard for defining an image makes it convenient during rendering of the object to look from the top down through the primary objects to determine the number of lines required of the lower objects for passing onto an upper object. This process is essentially repeated for any grouped object. In this way, the steps and interfaces for rendering of the image are consistent and straightforward.
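The "group behaves like any other object" principle described above is a composite structure: a group answers the same bound box and lookaround queries as a leaf object, so its neighbours never need to inspect its children. The sketch below illustrates this under assumed names (`Leaf`, `Group`, `bound_box`, `look_around`); only the idea, not the naming, comes from the disclosure.

```python
class Leaf:
    """Stand-in for a simple render object."""
    def __init__(self, box, look):
        self.box, self.look = box, look
    def bound_box(self):        # (left, top, right, bottom)
        return self.box
    def look_around(self):
        return self.look

class Group:
    """A group presents the same interface as a leaf, so groups can
    nest inside groups to any depth."""
    def __init__(self, children):
        self.children = children
    def bound_box(self):
        # A group's bound box is the union of its children's boxes.
        boxes = [c.bound_box() for c in self.children]
        return (min(b[0] for b in boxes), min(b[1] for b in boxes),
                max(b[2] for b in boxes), max(b[3] for b in boxes))
    def look_around(self):
        # Neighbours only ever see the group's combined requirement.
        return max(c.look_around() for c in self.children)
```

A renderer walking the ranked list of objects can therefore call `bound_box()` and `look_around()` uniformly, whether the entry is a simple object, a bitmap, or an arbitrarily nested group.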
Figure 1 shows the following structures used by the invention.
1. Abstract RenderObject Class
This class defines the minimum interfaces that an object needs in order to be used by the method. They are getBoundBox, which returns an upright rectangle defining the limits of the area which the object affects; getLookAround, which defines the amount of extra information required around any given pixel in order for that pixel to be rendered correctly; and initRender, which returns a RenderJob object.
2. Concrete RenderLayer Subclass
The pseudo code required to implement a render object which contains an ordered list of objects is shown in Figure 3. A sample instance is shown in Figure 6, with a data flow diagram shown in Figure 4.
3. Other Concrete Subclasses of RenderObject
Other subclasses include Effect, which contains a Region and a Tool as shown in Figure 7. A corresponding data flow diagram is shown in Figure 5. Further subclasses include other effects such as Rich Text with Color Highlighting or Alpha Channel Bitmap.
4. Abstract RenderJob Class
The class defining the interfaces for a RenderJob is shown in Figure 1. They are prefersToOverwriteInputBuffer, which facilitates negotiation of whether the output buffer is different from the input buffer for computational efficiency; getLookAroundDistances, which is the same as in the RenderObject; and finally renderNext, which takes an input buffer and an output buffer and renders one scanline.
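The one-scanline-per-call contract of renderNext can be illustrated with a toy job. The method names prefersToOverwriteInputBuffer and renderNext come from the description above; the `FillJob` class, its solid-colour blending behaviour, and the simplification of returning the output buffer rather than writing into a caller-supplied one are assumptions of this sketch.

```python
class FillJob:
    """Toy RenderJob: each renderNext call consumes one input
    scanline and produces exactly one output scanline."""
    def __init__(self, color, alpha):
        self.color, self.alpha = color, alpha

    def prefersToOverwriteInputBuffer(self):
        # This job may reuse the input buffer as its output buffer.
        return True

    def renderNext(self, in_buf):
        # Blend this job's solid colour over the incoming scanline
        # (single-channel pixels, 8-bit alpha, for brevity).
        a = self.alpha
        return [(self.color * a + p * (255 - a)) // 255 for p in in_buf]
```

An output device can then loop, calling renderNext once per row, and is guaranteed exactly one scanline back each time, the interface property the earlier application lacked.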
5. A Concrete Subclass of RenderJob for Each Concrete Subclass of RenderObject, Including RenderLayer
The pseudo code required to implement a render job for a RenderLayer is shown in Figure 3. The definitions used in Figure 3 are simplified for readability. A sample walk-through of how this method renders 3 scanlines is included in Figures 8 and 9.
The defining implementations of the invention, shown in Figure 2, include RGBAScanline, Rectangle2D, Affine2D and LookAroundDistances. Although the implementation in Figure 3 uses an RGB color space, this method applies equally well to other color spaces such as CIE-LAB, CMYK, XYZ, etc.

Figure 9 shows an example of the operation of the invention, wherein a composition of 3 render objects contained in a RenderLayer is shown. The objects are "Heart1", which displays a red heart, "Hello1", which displays the black text "Hello World", and "Blur1", which blurs or makes fuzzy the content underneath it. Heart1 and Hello1 are known as "simple render objects" because they require only the background immediately behind the output pixel. This information can be ascertained by using the getLookAroundDistances method on the given object. This call passes in the output resolution and a transformation to the output space (which can involve rotation, scaling, translation and skewing, i.e. all affine transformations).
The result is the number of extra pixels required as input above, below, to the left and to the right of any given pixel in order for the object to be rendered. When the number of extra pixels is 0 in every direction, the object is considered to be a simple object. If the number is greater than zero in any direction, then the object is a "look around" object. An example of a look around object is Blur1, which requires an extra pixel in each direction to render its effect. The extra area required by the blur is shown by the dashed line around the blur's bound box in Figure 9. Note that the blur requires information below the third scanline, which means that an additional scanline, which is not output, needs to be at least partially rendered.
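The simple versus look-around distinction above reduces to a test on the four distances. The ordering (left, top, right, bottom) in this sketch is an assumption; the disclosure does not fix one.

```python
def classify(look_around):
    """look_around: extra pixels needed in each of the four
    directions, e.g. (left, top, right, bottom).  An object is
    'simple' only when every distance is zero."""
    return "simple" if all(d == 0 for d in look_around) else "look-around"
```

Under this test, Heart1 and Hello1 with distances (0, 0, 0, 0) are simple, while Blur1 with (1, 1, 1, 1) is a look-around object.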
Using a technique known as the Painter's Algorithm, a buffer large enough to hold the entire image is allocated. First, the background is filled in (Figure 8i steps 1-4); then the bottom-most object, Heart1, is rendered completely (Figure 8i steps 5-7); next the object in front of Heart1 (Hello1) is rendered completely (Figure 8i steps 8-9); and finally, the front-most object, Blur1, is rendered completely (Figure 8i steps 10-11) using the results of steps 6, 7, 8 and 9. Once this process is complete, the 3 requested scanlines can be output (Figure 8i steps 12-14).
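The Painter's Algorithm as described can be sketched in a few lines; representing each object as a callable that transforms a whole pixel buffer is a simplification of this sketch, not the patent's structure.

```python
def painter_render(width, height, background, objects):
    """Painter's Algorithm: allocate the whole image up front, then
    render each object completely, bottom-most first, over what is
    already there.  Each object is a callable taking and returning a
    full height x width pixel buffer."""
    image = [[background] * width for _ in range(height)]
    for obj in objects:           # bottom to top
        image = obj(image)
    return image
```

The cost that motivates the invention is visible here: the full `height x width` buffer exists before a single scanline can be output.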
To start using the reordered rendering method, the render engine is invoked on the containing RenderLayer, called "RenderLayer1". RenderLayer1 returns a render job object identified here as "RenderLayerJob1". To get a scanline, the renderScanline method is called on RenderLayerJob1, passing in a background. RenderLayerJob1 determines which objects affect scanline 1 and renders them completely (Figure 8ii steps 1 and 2). The result of Figure 8ii step 2 is needed by the blur, so it is buffered for later use. The resulting scanline 1 is then returned in Figure 8ii step 3. The next time renderScanline is called, the blur becomes active. Since the blur needs a pixel above and a pixel below it in order to render correctly, RenderLayerJob1 must buffer up more information. The result of Figure 8ii steps 4-6 is buffered, as well as the result of steps 7-8. These three results (from steps 2, 6 and 8) are then passed into BlurJob1, which produces step 9. The buffer from step 2 can now be discarded or marked for reuse. The resulting scanline 2 is returned in step 10. To render scanline 3, the blur requires more than the already buffered results of steps 6 and 8, and so RenderLayerJob1 renders steps 11 and 12. These three buffers (from steps 6, 8 and 12) are then passed into BlurJob1, which produces step 13. Finally, scanline 3 is returned in step 14, and all of the temporary buffers can be discarded.
In this example, 3 scanline buffers were required, versus 4 scanline buffers with the Painter's Algorithm. With a larger render, the resource savings are often significant. Also, the result at the top of the image became available much earlier.
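The 3-versus-4 buffer count above generalizes. Under the assumption that a single look-around object with vertical reach `reach` sits over simple objects, the peak scanline-buffer counts of the two approaches can be modelled as follows; these formulas are an interpretation of the example, not stated in the patent.

```python
def painter_buffers(height, reach):
    """Painter's Algorithm: the whole output image is buffered, plus
    any extra look-ahead rows the look-around object needs below."""
    return height + reach

def streaming_buffers(height, reach):
    """Reordered rendering: only a sliding window of 2*reach + 1
    scanlines must be live at once, independent of image height."""
    return min(height + reach, 2 * reach + 1)
```

For the example's 3-scanline image and 1-pixel blur, this gives 4 versus 3 buffers; for a 1000-scanline image, it gives 1001 versus 3, which is why the savings grow with the render size.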
Figures 4, 5 and 6 show an example of the processing of images by the modules in the invention.

Figure 6 shows an image of a heart ("Heart1") in its bound box beneath the text Hello World ("Hello1") in its bound box. Both Heart1 and Hello1 have colour and alpha-channel attributes "(c,a)". The composite image is referred to as "RenderLayer1".
Figure 4 illustrates the processing of the entire image. First, the background color and alpha channel information (c, a) is fed to the RenderLayer1 module, which initiates RenderJob. Starting from the bottom element, Heart1, a transparent background is fed to the subordinate call of RenderJob for Heart1. After the subordinate call of RenderJob for Heart1 has completed its processing, it returns colour and alpha-channel attributes to the calling RenderJob for RenderLayer1. These returned attributes are forwarded to the next subordinate call of RenderJob, i.e. the call relating to Hello1. Once its processing is completed, its results are returned to RenderJob for RenderLayer1. At that point, RenderJob takes the final color and attribute information from RenderJob for Hello1 and combines it with the background colour input to produce the final output color and alpha information.
Figure 5 illustrates subroutine calls within RenderJob for Heart1. Here the background color and alpha-channel information is fed to the RenderJob for the Shape of Heart1. The RenderJob for Shape returns alpha information to RenderJob for Heart1. This information, along with the initial color information, is fed to the ToolJob module for the Solid Color of Heart1. This module returns colour and alpha-channel attributes to the calling RenderJob for Heart1. These returned attributes are forwarded to the next subordinate call of RenderJob, i.e. the call relating to Hello1.
In another example, Figure 12 shows a composite image comprising a heart, stylized text "Exploring the Wilderness" and a bitmap image of an outdoor scene underneath the heart and the stylized text. The stylized text is shown with its normal attributes at 12b, with a shadow at 12c and with a wave at 12d.
As shown in Figure 10, the invention processes each element of the image according to a hierarchical stack, having the heart ("Heart1") at the top of the stack, the stylized text ("Layer1") in the next layer down and finally the bitmap ("Bitmap1") at the bottom. Layer1 is exploded to show its constituent effects, comprising a wave effect ("Wave1"), a shadow effect ("Shadow1") and the text ("Text1").
Figure 11 shows the hierarchy structure of the image, where the RootLayer is the fundamental node representing the image. Elements of the image, i.e. Heart1, Layer1 and Bitmap1, are shown as immediate dependents of the RootLayer. Further sub-dependencies of Layer1, i.e. Wave1, Shadow1 and Text1, stem from Layer1.
Other information, such as the bound box region, may also be associated with each element. It can be appreciated that this structure of the invention isolates the dependencies between parent and child elements to one level of abstraction. As such, the invention provides abstraction between and amongst elements in an image. This abstraction provides implementation efficiencies in code re-use and maintenance. It can be appreciated that for more complex images having many more elements, bitmaps and effects, the flexibility and efficiency of using the same code components to process the components of the image become more apparent.
In the preferred embodiment, exactly one scanline is rendered during each call to the render method on any render object. This even holds for render groups, since RenderLayer constitutes a valid implementation of the RenderObject class. In the example implementation, a render group always passes a completely transparent background as input to its bottom-most object. The scanlines produced by applying the bottom-most object to the transparent background scanlines are then passed as input to the next higher object. Similarly, the output of the second object is passed as input to the third object from the bottom. This passing repeats until the cumulative effect of all of the render group's objects is produced. These final results are then composited onto the background scanlines (passed by the caller) using the render group's compositing operator.
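The bottom-to-top chaining of a render group described above can be sketched for a single scanline. Representing jobs and the compositing operator as plain callables, and zero as the transparent pixel value, are assumptions of this sketch.

```python
def render_group_scanline(jobs, background, composite):
    """Chain one scanline through a render group's stack.  'jobs' is
    ordered bottom-most first; each maps an input scanline to an
    output scanline.  'composite' is the group's compositing operator."""
    line = [0] * len(background)   # the group starts from transparency
    for job in jobs:               # apply objects from bottom to top
        line = job(line)
    # Only now is the cumulative result combined with the caller's
    # background, using the group's own operator.
    return composite(line, background)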
Because some render objects have forward look-around, it is often necessary for lower objects to render a few scanlines ahead of objects above them. For example, for an object with one scanline of forward look-around to render a single scanline within its active range, the object immediately below it must already have rendered its result both on that scanline and on the following scanline.
Since rendering must be performed from the bottom-most object to the topmost object, in order to guarantee that a single scanline will be completely rendered by all objects by the end of a call to the rendering method, it is useful to begin the process by determining exactly how many scanlines must be rendered by each object in the render group.
The computation is most easily done in terms of the total number of scanlines rendered by each object so far during the entire rendering process, as opposed to the number of scanlines rendered by each object just during this pass. The total number of scanlines required of an object is referred to, relative to that object, as downTo, whereas the total number of scanlines required by an object is referred to, relative to that object, as downToNeeded. Note that the downToNeeded of a given object is always equal to the downTo of the object immediately below it, if applicable. In the case of the bottom-most object, its downToNeeded is the number of empty input scanlines that must be passed to it in order for it to satisfy the object above it, if any, or the caller otherwise.
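The downTo/downToNeeded bookkeeping just described can be computed by a single top-down pass, since each object's downToNeeded is its own downTo plus its forward look-around, and that value becomes the downTo of the object below. The function below is an interpretation of this rule; the names and list layout are assumptions.

```python
def compute_down_to(forward_lookarounds, target):
    """forward_lookarounds: forward look-around of each object in the
    group, top-most first.  'target' is the total number of scanlines
    the caller requires so far.  Returns (downTo per object, top-most
    first; number of empty input scanlines for the bottom-most object)."""
    down_to = []
    needed = target
    for f in forward_lookarounds:   # walk from the top down
        down_to.append(needed)      # this object's downTo
        needed += f                 # its downToNeeded = downTo below
    return down_to, needed
```

For the Figure 9 walkthrough (a blur with one scanline of forward look-around over simple objects), requesting the first output scanline makes the object below the blur render down to scanline 2, matching the buffered-ahead behaviour described earlier.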
Although various preferred embodiments of the present invention have been described herein in detail, it will be appreciated by those skilled in the art, that variations may be made thereto without departing from the spirit of the invention or the scope of the appended claims.

Claims (9)

1. A method of rendering an image on a scanline by scanline basis where the image is composed of a plurality of distinct segments, said method comprising defining each distinct segment of the image as a) a region, tool and alpha channel, b) a bit map and alpha channel, or c) a group of objects where the objects are defined according to a) and b) and where each definition includes information of the scanlines of the image affected by the particular segment;
defining an order of the distinct segments from lower to higher in the image, successively returning scanlines of the image where each scanline is returned by i) examining the segments to determine which segments and the order of the segments which affect the scanline to be returned, ii) examining the determined ordered segments of step i) and determining the particular scanlines to be outputted by each ordered segment for returning the particular scanline of the image, and iii) using the ordered segments from lower to higher and returning the determined scanlines of the segment used by the next higher segment as an input until a scanline of the image is returned.
2. A method as claimed in claim 1 wherein said image includes a background segment defined as a region, tool and alpha channel and said background segment is applied as a last segment prior to returning a scanline.
3. A method as claimed in claim 2 wherein said method includes using an initial input for the lower most object equivalent to transparent scanlines.
4. A method as claimed in claim 3 wherein a group of objects is a simple object which requires only one scanline of input for returning a scanline of output.
5. A method as claimed in claim 1 wherein said step of examining the determined ordered segments of step i) and determining the particular scanlines to be outputted by each ordered segment for returning the particular scanline of the image is carried out by examining the segments from the highest segment to the lowest segment.
6. A method as claimed in claim 1 wherein at least some of said segments include a look around object which requires at least several scanlines from a lower object to return a scanline.
7. A method as claimed in claim 6 wherein said group of objects contains at least 3 objects.
8. A method as claimed in claim 1 wherein a group of objects includes as part thereof, a further group of objects.
9. A method of grouping dependent elements of an image for processing on a scanline by scanline basis, said method comprising: grouping each element of the image as a) a region, tool and alpha channel;
b) a bit map and alpha channel; or c) a group of objects where the objects are defined according to a) and b) and where each group includes information of the scanlines of the image affected by the particular element; creating single dependency associations between each subordinate element and its parent; and defining an order of the distinct elements from lower to higher in the image.
CA 2256970 1998-12-23 1998-12-23 Method for accessing and rendering an image Abandoned CA2256970A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CA 2256970 CA2256970A1 (en) 1998-12-23 1998-12-23 Method for accessing and rendering an image
PCT/CA1999/001216 WO2000039754A1 (en) 1998-12-23 1999-12-23 Method for accessing and rendering an image
CA002355905A CA2355905A1 (en) 1998-12-23 1999-12-23 Method for accessing and rendering an image
AU18513/00A AU1851300A (en) 1998-12-23 1999-12-23 Method for accessing and rendering an image
EP99962004A EP1141898A1 (en) 1998-12-23 1999-12-23 Method for accessing and rendering an image
JP2000591580A JP2002533851A (en) 1998-12-23 1999-12-23 Methods for accessing and rendering images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA 2256970 CA2256970A1 (en) 1998-12-23 1998-12-23 Method for accessing and rendering an image

Publications (1)

Publication Number Publication Date
CA2256970A1 true CA2256970A1 (en) 2000-06-23

Family

ID=4163120

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2256970 Abandoned CA2256970A1 (en) 1998-12-23 1998-12-23 Method for accessing and rendering an image

Country Status (5)

Country Link
EP (1) EP1141898A1 (en)
JP (1) JP2002533851A (en)
AU (1) AU1851300A (en)
CA (1) CA2256970A1 (en)
WO (1) WO2000039754A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4554280B2 (en) * 2003-06-12 2010-09-29 マイクロソフト コーポレーション System and method for displaying images using multiple blending
CN102457654B (en) * 2010-10-20 2014-06-18 北大方正集团有限公司 Trap printing method and apparatus thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2618101B2 (en) * 1991-01-30 1997-06-11 大日本スクリーン製造株式会社 Image layout processing method
US5428724A (en) * 1992-04-29 1995-06-27 Canon Information Systems Method and apparatus for providing transparency in an object based rasterized image
US5509110A (en) * 1993-04-26 1996-04-16 Loral Aerospace Corporation Method for tree-structured hierarchical occlusion in image generators
AUPM822194A0 (en) * 1994-09-16 1994-10-13 Canon Inc. Utilisation of scanned images in an image compositing system
US6167442A (en) * 1997-02-18 2000-12-26 Truespectra Inc. Method and system for accessing and of rendering an image for transmission over a network

Also Published As

Publication number Publication date
EP1141898A1 (en) 2001-10-10
WO2000039754A1 (en) 2000-07-06
JP2002533851A (en) 2002-10-08
AU1851300A (en) 2000-07-31

Similar Documents

Publication Publication Date Title
US6049339A (en) Blending with planar maps
AU2003203331B2 (en) Mixed raster content files
US5701365A (en) Subpixel character positioning with antialiasing with grey masking techniques
US5907640A (en) Functional interpolating transformation system for image processing
US5613046A (en) Method and apparatus for correcting for plate misregistration in color printing
EP0618546B1 (en) Automatic determination of boundaries between polygonal structure elements of different colour in a planar graphic image
US5666503A (en) Structured image (SI) image editor and method for editing structured images
CN102402794B (en) Computer graphical processing
US7894098B1 (en) Color separation of pattern color spaces and form XObjects
US5659407A (en) Method and system for rendering achromatic image data for image output devices
JP2004318832A (en) Reducing method of number of compositing operations performed in pixel sequential rendering system
JPH08235367A (en) Antialiasing method by gray masking technique
CN101790749B (en) Multi-sample rendering of 2d vector images
US5903277A (en) Method of rendering an image
US20020175925A1 (en) Processing pixels of a digital image
JP4109740B2 (en) Convolutional scanning line rendering
US6496198B1 (en) Color editing system
US6611632B1 (en) Device and method for interpolating image data and medium on which image data interpolating program is recorded
CA2256970A1 (en) Method for accessing and rendering an image
US6647151B1 (en) Coalescence of device independent bitmaps for artifact avoidance
JPH11296670A (en) Image data interpolation device, its method, and medium for recording image data interpolation program
US6903748B1 (en) Mechanism for color-space neutral (video) effects scripting engine
JP3560124B2 (en) Image data interpolation device, image data interpolation method, and medium recording image data interpolation program
AU770175B2 (en) Processing pixels of a digital image
AU721232B2 (en) Scan line rendering of convolutions

Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead