US6873711B1 - Image processing device, image processing method, and storage medium - Google Patents

Image processing device, image processing method, and storage medium

Info

Publication number
US6873711B1
Authority
US
United States
Prior art keywords
image data
embedding
components
digital watermark
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US09/676,949
Other languages
English (en)
Inventor
Tomochika Murakami
Junichi Hayashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Assigned to CANON KABUSHIKI KAISHA reassignment CANON KABUSHIKI KAISHA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HAYASHI, JUNICHI, MURAKAMI, TOMOCHIKA
Application granted
Publication of US6873711B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/005Robust watermarking, e.g. average attack or collusion attack resistant
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/0028Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00002Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for
    • H04N1/00005Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for relating to image data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00002Diagnosis, testing or measuring; Detecting, analysing or monitoring not otherwise provided for
    • H04N1/00026Methods therefor
    • H04N1/00037Detecting, i.e. determining the occurrence of a predetermined state
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32203Spatial or amplitude domain methods
    • H04N1/32229Spatial or amplitude domain methods with selective or adaptive application of the additional information, e.g. in selected regions of the image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • H04N1/32277Compression
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32309Methods relating to embedding, encoding, decoding, detection or retrieval operations in colour image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0065Extraction of an embedded watermark; Reliable detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3225Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document
    • H04N2201/3233Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of data relating to an image, a page or a document of authentication information, e.g. digital signature, watermark
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/3201Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N2201/3269Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of machine readable codes or marks, e.g. bar codes or glyphs
    • H04N2201/327Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title of machine readable codes or marks, e.g. bar codes or glyphs which are undetectable to the naked eye, e.g. embedded codes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N2201/333Mode signalling or mode changing; Handshaking therefor
    • H04N2201/33307Mode signalling or mode changing; Handshaking therefor of a particular mode
    • H04N2201/33378Type or format of data, e.g. colour or B/W, halftone or binary, computer image file or facsimile data

Definitions

  • the present invention relates to an image processing device and an image processing method for embedding digital watermark information in input image data so that the digital watermark information is not perceptible to human eyes, and to a storage medium for storing the image processing method.
  • Digital information is advantageous in that it does not deteriorate with age and can store information indefinitely and reliably. On the other hand, digital information can be easily duplicated, which poses a serious problem for copyright protection.
  • The digital watermark technique embeds the name of a copyright holder or the ID of a purchaser in digital image data, audio data, or text data so that the digital watermark is not perceptible to a person. Hence, it is possible to track unpermitted usage of illegal copies.
  • The digital watermark technique is also applied to detecting tampering with digital data: the digital watermark is embedded in advance, and the extracted information is matched against the digital-data embedding rule.
  • Because the digital watermark technique embeds information by modifying a portion of the digital data such that the change is not perceptible to a person, there is a trade-off among the quality compared with the original, the resistance of the digital watermark to being lost when the image data is attacked or distorted, and the amount of information embeddable in the digital data in which the digital watermark is embedded.
  • Conventionally, the digital watermark has been embedded in a gray-scale image simply by regarding it as gray-scale image data and converting the gray levels. This results in serious image deterioration.
  • an image processing device for embedding digital watermark information in a gray-scale image.
  • the image processing device includes an input unit for inputting gray-scale image data in which each pixel is formed of one component.
  • a converter converts the format of the gray-scale image data into color image data in which each pixel is formed of a plurality of components.
  • An embedding unit embeds the digital watermark information in part of the components of the color image data obtained by the converter.
  • the present invention is appropriate to a case in which original image data is converted by a JPEG compression technique into color image data having brightness and chrominance components, and digital watermark information is embedded in the color image data.
  • a method including the steps of inputting gray-scale image data in which each pixel is formed of one component, converting the format of the gray-scale image data into color image data in which each pixel is formed of a plurality of components, and embedding digital watermark information in part of the components of the color image data obtained by the converting step.
  • a storage medium having recorded thereon a computer-readable program for performing the steps of inputting gray-scale image data in which each pixel is formed of one component, converting the format of the gray-scale image data into color image data in which each pixel is formed of a plurality of components, and embedding digital watermark information in part of the components of the color image data obtained by the converting step.
  • an image processing device for embedding digital watermark information in a gray-scale image
  • the device includes a color converter that converts the gray-scale image data into color image data in which each pixel is formed of a plurality of components, a color component extracting unit that separates a part of the plurality of components from the remaining components of the color image data, and an embedding unit that adds the digital watermark information to the part of the plurality of components separated by the color component extracting unit.
  • FIG. 1 is a block diagram of a digital watermark embedding unit
  • FIG. 2 is a block diagram of a digital watermark extracting unit
  • FIG. 3 is an illustration of an example of image data generated by an extracting side in printer processing
  • FIG. 4 is a block diagram of a registration signal embedding unit
  • FIG. 5 is an illustration of a registration signal
  • FIG. 6 is a flowchart showing a process of computing reliability distance
  • FIG. 7 is a block diagram of a scale adjusting unit
  • FIGS. 8A and 8B are illustrations of extraction of the registration signal
  • FIG. 9 is an illustration of a pattern array used for embedding and extracting additional information
  • FIG. 10 is a flowchart showing a process of embedding additional information
  • FIG. 11 is a block diagram of an embedding position determining unit
  • FIG. 12 is a conceptual diagram of a cone mask and a blue noise mask
  • FIG. 13 is a graph of spatial frequency characteristics of human vision
  • FIGS. 14A and 14B are graphs of spatial frequency characteristics of the blue noise mask and the cone mask
  • FIG. 15 is an illustration of a position reference mask
  • FIG. 16 is a conceptual diagram of embedding positions in the position reference mask
  • FIGS. 17A and 17B are illustrations of developing each pattern array on the mask shown in FIG. 16;
  • FIGS. 18A and 18B are illustrations of a region required for embedding additional information Inf in the entirety of an image
  • FIG. 19 is an illustration of computation for embedding the additional information Inf.
  • FIG. 20 is a block diagram of an additional information extracting unit
  • FIG. 21 is an illustration of extracting the additional information Inf
  • FIG. 22 is an illustration of extracting the additional information Inf when the additional information Inf is not embedded
  • FIG. 23 is an ideal histogram when the reliability distances d are extracted from the original image
  • FIG. 24 is an example of a histogram of the reliability distances d.
  • FIG. 25 illustrates histograms showing reliability distances d1 and d2;
  • FIG. 26 is an illustration for describing the principle of embedding and extracting the registration signal
  • FIGS. 27A to 27C are illustrations of performing offset adjustment
  • FIG. 28 is a flowchart showing a process of performing offset adjustment
  • FIG. 29 is a block diagram of the registration signal embedding unit in a spatial domain
  • FIG. 30 is an illustration of two sets in a patchwork method
  • FIG. 31 is a flowchart showing a process of embedding a digital watermark
  • FIG. 32 is a flowchart showing a process of extracting a digital watermark
  • FIGS. 33A and 33B are illustrations of examples of pattern arrays orthogonal to the pattern shown in FIG. 9 ;
  • FIG. 34 is an illustration of the “orthogonal” pattern array
  • FIGS. 35A and 35B are illustrations of first and second position reference masks
  • FIG. 36 is an illustration of the configuration of the additional information Inf
  • FIG. 37 is an illustration of an example of coefficients in the blue noise mask
  • FIG. 38 is an illustration of an example of coefficients of the pixel levels in the cone mask
  • FIG. 39 is a graph of chromatic spatial frequency characteristics of human vision
  • FIG. 40 is an illustration of the minimum coding unit in the Joint Photographic Experts Group (JPEG) mode
  • FIGS. 41A and 41B are illustrations of sampling of brightness and chrominance components in the JPEG mode.
  • FIG. 42 is an illustration of a pattern array (patch).
  • a digital watermark embedding unit according to one preferred embodiment of the present invention is described with reference to the accompanying drawings.
  • FIG. 1 shows the digital watermark embedding unit of the present embodiment.
  • The digital watermark embedding unit includes a color component extracting unit 0101, a registration signal embedding unit 0102, an embedding position determining unit 0103, an additional information embedding unit 0104, a color component synthesizer 0105, a JPEG compressor/encoder 0106, a memory 0107, and a JPEG decompressor/decoder 0108.
  • Image data I is input to the digital watermark embedding unit.
  • the image data I is multi-level image data in which a predetermined plurality of bits is allocated to one pixel.
  • the input image data I may be gray-scale image data or color image data.
  • the gray-scale image data is formed of one type of component per pixel, whereas the color image data is formed of three types of components per pixel.
  • the three types of components are a red component (R), a green component (G), and a blue component (B).
  • the present invention is applicable to a different combination of color components.
  • the image data I input to the digital watermark embedding unit is first input to the color component extracting unit 0101 .
  • the color component extracting unit 0101 separates only the blue component from the color image data, and outputs the blue component to the registration signal embedding unit 0102 at the subsequent stage.
  • the other color components are output to the color component synthesizer 0105 at the subsequent stage. Specifically, only the color component in which digital watermark information is to be embedded is separated and sent to a digital watermark processing system.
  • the digital watermark information is embedded in the blue component because, among the red component, the blue component, and the green component, human vision is most insensitive to the blue component.
  • Embedding the digital watermark information in the blue component is advantageous in that, compared with the case of embedding the digital watermark information in the other color components, image deterioration due to the digital watermark information is less perceptible to human eyes.
  • When the input image data I is gray-scale image data, the color component extracting unit 0101 first converts the gray-scale image data into pseudo-color image data.
  • the pseudo-color image data is color image data formed of three types of components per pixel.
  • the three types of components R, G, and B have the same values.
  • the gray-scale image data is converted to the pseudo-color image data, and the blue component (B) in the color image data is extracted and output to the registration signal embedding unit 0102 .
  • the other color components are output to the color component synthesizer 0105 at the subsequent stage.
  • the digital watermark information is not embedded in all the color components, but only in the blue component.
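The pseudo-color conversion and the separation and recombination of the blue component described above can be sketched as follows. This is an illustrative Python sketch, not part of the patent disclosure; the function names are assumptions.

```python
def gray_to_pseudo_color(gray):
    """Convert gray-scale pixels (one component per pixel) into
    pseudo-color pixels in which R, G, and B carry the same value."""
    return [(v, v, v) for v in gray]

def extract_blue(rgb_pixels):
    """Separate the blue component (the watermark carrier) from the
    remaining red and green components."""
    blue = [b for (_r, _g, b) in rgb_pixels]
    red_green = [(r, g) for (r, g, _b) in rgb_pixels]
    return blue, red_green

def synthesize(blue, red_green):
    """Recombine the (possibly watermarked) blue plane with the
    untouched red and green planes into normal color image data."""
    return [(r, g, b) for (r, g), b in zip(red_green, blue)]
```

Only the blue plane returned by `extract_blue` passes through the watermark processing system; the red and green planes bypass it unchanged, as with units 0101 and 0105.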
  • the registration signal embedding unit 0102 is described.
  • The registration signal is a signal required to perform geometrical correction as preliminary processing for extracting additional information Inf.
  • the image data of the blue component obtained by the color component extracting unit 0101 is input to the registration signal embedding unit 0102 .
  • the registration signal embedding unit 0102 embeds the registration signal in the image data using a digital watermark technique. Specifically, human vision cannot perceive the registration signal embedded in the image data. The process of embedding the registration signal is described in detail hereinafter.
  • the registration signal embedding unit 0102 outputs the image in which the registration signal is embedded.
  • The embedding position determining unit 0103 determines the embedding position for the additional information Inf in the image data received from the registration signal embedding unit 0102.
  • the embedding position determining unit 0103 outputs control data indicating the embedding position at which the additional information Inf is to be embedded in the image along with the input image data to the additional information embedding unit 0104 .
  • The additional information Inf, consisting of a number of bits of information, is input to the additional information embedding unit 0104.
  • the additional information Inf is embedded at the embedding position determined as above in the image data of the blue component using the digital watermark technique. The process of embedding the additional information Inf using the digital watermark technique is described hereinafter.
  • the additional information embedding unit 0104 outputs the image data in which the additional information Inf is embedded to the color component synthesizer 0105 .
  • The color component synthesizer 0105 synthesizes normal color image data from the blue component processed through the previous stage (the additional information embedding unit 0104) and the red and green components supplied directly from the color component extracting unit 0101.
  • the color image data obtained by the color component synthesizer 0105 is output to the JPEG compressor/encoder 0106 .
  • The JPEG compressor/encoder 0106 converts the input color image data formed of the red, green, and blue components into color image data formed of brightness and chrominance components, and then performs JPEG compression/encoding.
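The RGB-to-brightness/chrominance conversion performed before JPEG compression is, in the standard JFIF convention, a fixed linear transform; a minimal sketch:

```python
def rgb_to_ycbcr(r, g, b):
    """Standard JFIF conversion from RGB to one brightness (Y) and
    two chrominance (Cb, Cr) components, as used by JPEG encoders."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```

For gray pixels (R = G = B) both chrominance components come out neutral (128); perturbing only the blue component of pseudo-color data therefore shifts the chrominance after this conversion.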
  • The JPEG compressed data from the JPEG compressor/encoder 0106 is stored in the memory 0107. At the timing of transmission to an external device or of printing, the JPEG compressed data is read from the memory 0107 and output to the JPEG decompressor/decoder 0108 at the subsequent stage.
  • the JPEG decompressor/decoder 0108 decompresses the JPEG compressed data and outputs the data as color image data wI.
  • the data wI is output to an external device, or converted into printing data (CMYK) to be used for printing.
  • the image data wI in which the registration signal and the additional information Inf are embedded using the digital watermark technique is output.
  • Various attacks may be made on the image data wI, geometrically distorting the image data wI.
  • the attacks may be made by a user intentionally editing the image.
  • the image data wI may be printed, and the printed image may be scanned by a scanner.
  • The attacked image data becomes image data wI′ shown in FIG. 2.
  • In step S3102, the image data I is input to the color component extracting unit 0101.
  • This step may be performed by reading a photograph or a printed image by a scanner and generating image data.
  • The blue component is separated, to be used for embedding the registration signal at the subsequent stage.
  • In step S3103, the registration signal is generated.
  • The registration signal is embedded in step S3104.
  • The registration signal embedding processing in step S3104 corresponds to the processing performed in the registration signal embedding unit 0102 shown in FIG. 1, and a detailed description thereof is given hereinafter.
  • In step S3105, a mask is created.
  • The created mask is input in step S3106, thus specifying the relationship between embedded bit information and embedding positions.
  • In step S3107, the mask is enlarged in size to generate an expanded mask.
  • the registration signal is embedded in the image data.
  • the additional information Inf is embedded in the image data.
  • the additional information embedding processing is performed by repetitively embedding the additional information Inf in units of macro blocks in the entire image. This processing is described in detail with reference to FIG. 10 in the following description.
  • the macro block is the minimum embedding unit. All the information of one complete additional information Inf is embedded in an image region corresponding to the macro block.
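Repeating one complete copy of the additional information Inf in every macro block might be sketched as follows. This is illustrative only; the embodiment's actual pattern-array rule is more elaborate, and the +delta/-delta-per-pixel rule here merely stands in for embedding a 1 or a 0.

```python
def embed_macro_blocks(plane, inf_bits, block, delta=2):
    """Repeat the complete additional information Inf in every macro
    block of the (blue-component) plane, given as a list of rows.
    One bit per pixel, row-major inside each block: +delta encodes a 1,
    -delta encodes a 0 (a patchwork-style stand-in rule)."""
    h, w = len(plane), len(plane[0])
    out = [row[:] for row in plane]          # leave the input untouched
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            for i, bit in enumerate(inf_bits):
                y, x = by + i // block, bx + i % block
                out[y][x] += delta if bit else -delta
    return out
```

Because every macro block carries the whole of Inf, the extracting side can recover the information from any image region at least one macro block in size.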
  • In step S3109, the image data in which the additional information Inf is embedded is JPEG compressed/encoded, stored in the memory 0107, and then decompressed/decoded.
  • the data is output as the image data wI.
  • attacks may be made on the image data wI, thus geometrically distorting the image data wI.
  • a digital watermark extracting unit of the embodiment is described.
  • FIG. 2 shows the digital watermark extracting unit of the present embodiment.
  • the digital watermark extracting unit includes a color component extracting unit 0201 , a registration unit 0202 , and an additional information extracting unit 0203 .
  • the image data wI′ is input to the digital watermark extracting unit.
  • the image data wI′ is generated by attacking the image data wI and geometrically distorting the image data wI (i.e., altering the image data wI).
  • The types of attacks or distortions may include irreversible compression such as JPEG compression, scaling, rotation, printing, and scanning. A combination of these may also be employed to attack the image data wI.
  • Ideally, the image data wI′ has the same content as that of the image data wI. In fact, however, the two often differ from each other significantly.
  • the image data wI′ is input to the color component extracting unit 0201 .
  • the color component extracting unit 0201 extracts the blue component, and outputs the image data of the blue component to the registration unit 0202 at the subsequent stage. Since the other color components, i.e., the red component and the green component, are not required, they are discarded.
  • Image data wI1′ of the blue component obtained by the color component extracting unit 0201 is input to the registration unit 0202.
  • Image data wI2′, for which the geometric distortion is corrected, is generated.
  • The image data wI′ and the image data wI may have different scales, whereas the image data wI2′ and the image data wI have the same scale.
  • The reason for this, and a process of making the image data wI2′ have the same scale as the image data wI, are described in detail hereinafter.
  • The additional information extracting unit 0203 performs predetermined processing in accordance with the embedding mode used by the additional information embedding unit 0104, thereby extracting the additional information Inf embedded in the image data wI2′.
  • the additional information extracting unit 0203 outputs the extracted additional information Inf.
  • In step S3202, the image data wI′ is input.
  • the image data wI′ can be obtained by reading image data, which is assumed to be the image data wI, from a network or memory, or by printing the image data wI and scanning the printed image by the scanner. In the latter case, it is highly probable that the image data wI′ and the image data wI significantly differ from each other.
  • In step S3203, the scale of the image data wI1′ of the input blue component is corrected.
  • In step S3204, the offset of the image data wI1′ of the input blue component is corrected.
  • This scale adjustment is performed in the registration unit 0202 , and a detailed description thereof is omitted here.
  • In step S3206, a first pattern array is used to perform extraction.
  • In step S3205, a second pattern array is used to perform extraction. Accordingly, the embedded additional information Inf is extracted from the image data wI2′ for which the scale and the offset have already been corrected.
  • In step S3207, statistical testing is performed by computing and determining the reliability of the extracted additional information Inf. If it is determined that the additional information Inf is incorrect, the process returns to step S3202 and re-inputs the image assumed to have the additional information Inf embedded. In contrast, if it is determined that the additional information Inf is sufficiently accurate, a comparison is performed in step S3208 to extract the additional information Inf. In step S3210, information indicating the reliability is displayed as a reliability index D.
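The statistical test of step S3207 can be sketched as follows: when the additional information Inf is present, the reliability distances d cluster away from zero (the ideal histogram of FIG. 23), whereas for an unwatermarked image they cluster around zero. A minimal, hypothetical decision rule; the margin and ratio values are assumptions, not taken from the patent.

```python
def reliability_index(distances, margin=1.0):
    """Fraction of reliability distances d whose magnitude clears the
    margin; high when a watermark is present, low when the distances
    cluster around zero."""
    hits = sum(1 for d in distances if abs(d) >= margin)
    return hits / len(distances)

def is_reliable(distances, margin=1.0, ratio=0.8):
    """Accept the extracted additional information Inf only if enough
    reliability distances clear the margin."""
    return reliability_index(distances, margin) >= ratio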
  • The registration processing performed by the registration unit 0202 on the digital watermark extraction side, in step S3203, is described next.
  • The registration is preliminary processing performed when extracting the additional information Inf, so that the additional information Inf can be extracted from the image data wI′ input to the digital watermark extracting unit.
  • Registration processing includes positional adjustment as well as scale adjustment.
  • the positional adjustment utilizes positional information embedded as part of the additional information Inf. Hence, the positional adjustment is described along with the additional information extraction.
  • the image data wI output from the digital watermark embedding unit is not always input as it is to the digital watermark extracting unit.
  • For example, the image data wI is printed by a CMYK ink-jet printer, and the printed image is scanned by the scanner.
  • In this case, both the input resolution and the output resolution are known, so the scale ratio between the two images can be computed.
  • An appropriate scaling algorithm in accordance with the computed scale ratio is used to apply scaling to the image data wI′. Accordingly, the image data wI′ will have the same scale as the image data wI.
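Because both resolutions are known, the scale correction reduces to a fixed ratio. A sketch under the printer/scanner assumption above (function names are illustrative):

```python
def scale_ratio(print_dpi, scan_dpi):
    """Factor that maps a scanned length back onto the original pixel
    grid when the printing and scanning resolutions are both known."""
    return print_dpi / scan_dpi

def restore_length(scanned_px, print_dpi, scan_dpi):
    """Length, in original-image pixels, of a feature that measures
    `scanned_px` pixels in the scanned image."""
    return round(scanned_px * scale_ratio(print_dpi, scan_dpi))
```

For instance, an image 600 pixels wide printed at 600 dpi occupies one inch; scanned at 300 dpi it arrives as 300 pixels, and multiplying by the ratio restores the original width.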
  • the resultant image to be input is as shown in FIG. 3 .
  • the entirety of an image 0301 is represented by the image data wI′.
  • The image 0301 includes an original image 0302 represented by the image data wI and a white margin 0303.
  • In general, the cutting of the original image region from the scanned image is not accurately performed.
  • The above-mentioned factors always occur in the image representing the image data wI′ obtained through the printing system.
  • the positional adjustment for correcting the positional displacement caused by scanning is performed by the offset adjustment performed by the additional information extracting unit 0203 .
  • the registration signal embedding unit 0102 (step S 3104 ) is described first.
  • the registration signal embedding unit 0102 is provided prior to the additional information embedding unit 0104 .
  • the registration signal embedding unit 0102 is provided to embed in advance the registration signal, which is referred to when the image data wI′ is registered by the registration unit 0202 , in the original image data.
  • the registration signal is embedded as digital watermark information in the image data (in this embodiment, in the blue component of the color image data) so that it is imperceptible to the human eye.
  • FIG. 4 shows the internal structure of the registration signal embedding unit 0102 .
  • the registration signal embedding unit 0102 includes a block splitter 0401 , a Fourier transform unit 0402 , an adder 0403 , an inverse Fourier transform unit 0404 , and a block combining unit 0405 . Each unit is described in detail.
  • the block splitter 0401 splits the input image data into a plurality of blocks so that they do not overlap each other.
  • the size of each block is defined as a power of two. However, the present invention is also applicable to other sizes.
  • the Fourier transform unit 0402 , which is connected to the block splitter 0401 , can thus perform processing at high speed.
  • the block splitter 0401 splits the data into two sets of blocks I 1 and I 2 .
  • the set I 1 is input to the Fourier transform unit 0402 at the subsequent stage, and the set I 2 is input to the block combining unit 0405 at the subsequent stage.
  • the block nearest to the center of the image data I among the blocks obtained by the block splitter 0401 is selected as the set I 1 .
  • the rest of the blocks are selected as the set I 2 .
  • the set I 1 which is part of the image data obtained by splitting by the block splitter 0401 , is input to the Fourier transform unit 0402 .
  • the Fourier transform unit 0402 performs a Fourier transform on the input image data I 1 .
  • the original data configuration of the input image data I 1 is referred to as the spatial domain, whereas the data configuration after the Fourier transform is performed is referred to as the frequency domain.
  • the Fourier transform is performed for all the input blocks. Since the size of each input block is a power of two in the embodiment, the fast Fourier transform (FFT) is employed to increase the processing speed.
  • the fast Fourier transform is a transform algorithm implementable with (n/2)·log₂(n) computations, whereas the Fourier transform requires n × n computations, where n is a positive integer.
  • the only difference between the fast Fourier transform and the Fourier transform is the speed of obtaining the computation result, and the same result can be obtained by the two methods.
  • the fast Fourier transform and the Fourier transform are not distinguished.
  • the image data in the frequency domain obtained by the Fourier transform is expressed by the magnitude spectrum and the phase spectrum. Only the magnitude spectrum is input to the adder 0403 . In contrast, the phase spectrum is input to the inverse Fourier transform unit 0404 .
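The split of the transformed block into a magnitude spectrum (sent to the adder) and a phase spectrum (sent to the inverse transform unit) can be sketched as follows. This is a hedged NumPy illustration, not the patent's implementation; the 64×64 block size is an assumption.

```python
import numpy as np

# Illustrative only: a power-of-two block of the blue component is
# Fourier-transformed and separated into magnitude and phase spectra.
block = np.random.rand(64, 64)        # hypothetical 64x64 input block I1

spectrum = np.fft.fft2(block)         # fast because the size is a power of two
magnitude = np.abs(spectrum)          # magnitude spectrum -> adder 0403
phase = np.angle(spectrum)            # phase spectrum -> inverse FFT unit 0404

# the full spectrum is recoverable from the two parts
recovered = magnitude * np.exp(1j * phase)
```

Because magnitude and phase together determine the spectrum, modifying only the magnitude (as the adder does) leaves the phase information intact for the inverse transform.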
  • the adder 0403 is described.
  • the magnitude spectrum and a signal r referred to as the registration signal are input to the adder 0403 .
  • the registration signal includes impulse signals as shown in FIG. 5 .
  • FIG. 5 shows the magnitude spectrum of the two-dimensional spatial frequency components obtained by the Fourier transform.
  • the center indicates a low-frequency component, and the periphery thereof indicates high-frequency components.
  • a magnitude spectrum 0501 is the magnitude spectrum of a signal component of the original image component. In the case of a signal corresponding to a natural image including a photograph, many strong signals are present at the lower frequency. In contrast, almost no signal is present at the higher frequency.
  • the present invention is not limited to this.
  • a text image, a CG image, and the like may be processed in a similar manner.
  • the present embodiment is particularly advantageous in processing a natural image having relatively large portions at intermediate gray levels.
  • FIG. 5 shows the signal 0501 originally included in the natural image, to which the impulse signals 0502 to 0505 are added at the horizontal and vertical Nyquist frequency components in the frequency domain.
  • the registration signal preferably includes impulse signals because it enables the digital watermark extracting unit to easily extract only the registration signal.
  • although the impulse signals are added to the Nyquist frequency components of the input signal in FIG. 5 , the present invention is not limited to this. Specifically, any type of registration signal is permitted as long as the registration signal is retained even when the image in which the additional information Inf is embedded is attacked.
  • an irreversible compression system such as JPEG compression has an effect similar to a low-pass filter. Therefore, when the impulse signals are embedded in the high-frequency components, which such compression suppresses, the impulse signals may be removed by compression/decompression.
  • embedding the impulse signals in the low-frequency components has a drawback, compared with embedding the signals in the high-frequency components, in that the signals embedded in the low-frequency components are often perceived as noise due to human vision characteristics.
  • the impulse signals are embedded in an intermediate frequency, which is in a range from a first frequency substantially imperceptible to human vision to a second frequency which is difficult to remove by irreversible compression/decompression.
  • the registration signal is appended to each block (one block in the embodiment) input to the adder 0403 .
  • the adder 0403 outputs a signal in which the registration signal has been added to the magnitude spectrum of the image data in the frequency domain to the inverse Fourier transform unit 0404 .
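A minimal sketch of the adder stage, assuming NumPy and illustrative impulse positions and amplitude (the patent does not fix these values): impulses are added to the magnitude spectrum at intermediate frequencies, and the block is returned to the spatial domain with its original phase.

```python
import numpy as np

def embed_registration(block, amplitude=50.0):
    """Add impulse registration signals to the magnitude spectrum of one
    block, then inverse-transform back to the spatial domain.
    Impulse positions and amplitude are assumptions for illustration."""
    spectrum = np.fft.fft2(block)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)

    n = block.shape[0]
    mid = n // 4                               # an intermediate frequency
    for u, v in [(mid, 0), (n - mid, 0), (0, mid), (0, n - mid)]:
        magnitude[u, v] += amplitude           # symmetric impulse placement

    # recombine the modified magnitude with the original phase and invert
    return np.real(np.fft.ifft2(magnitude * np.exp(1j * phase)))

watermarked = embed_registration(np.random.rand(64, 64))
```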
  • the inverse Fourier transform unit 0404 performs an inverse Fourier transform on the input image data in the frequency domain.
  • the inverse Fourier transform is performed for all the input blocks.
  • the inverse Fourier transform unit 0404 employs the fast Fourier transform to increase the processing speed since the size of each input block is a power of two.
  • the signal in the frequency domain input to the inverse Fourier transform unit 0404 is transformed to a signal in the spatial domain by the inverse Fourier transform, and the signal in the spatial domain is output.
  • the image data in the spatial domain output from the inverse Fourier transform unit 0404 is input to the block combining unit 0405 .
  • the block combining unit 0405 performs the reverse processing of the splitting by the block splitter 0401 . As a result, the image data (blue component) is recovered and output.
  • the registration signal embedding unit 0102 of the present embodiment has the structure described in detail above.
  • FIG. 4 illustrates embedding of the registration signal in the Fourier transform domain.
  • the registration signal can be embedded in the spatial domain. The latter case is described with reference to FIG. 29 .
  • FIG. 29 shows a block splitter 2901 , an adder 2902 , a block combining unit 2903 , and an inverse Fourier transform unit 2904 .
  • the block splitter 2901 and the block combining unit 2903 operate in the same manner as the block splitter 0401 and the block combining unit 0405 shown in FIG. 4 .
  • the image data is first input to the block splitter 2901 and the data is split into blocks.
  • the blocks are input to the adder 2902 .
  • the registration signal r is input to the inverse Fourier transform unit 2904 and transformed into a signal r′ by the inverse Fourier transform.
  • the registration signal r is a signal in the frequency domain, as shown in FIG. 5 .
  • the block from the block splitter 2901 and the signal r′ from the inverse Fourier transform unit 2904 are input to the adder 2902 , and a summation thereof is performed.
  • a signal output from the adder 2902 is input to the block combining unit 2903 . Hence, the image data (blue component) is recovered and output.
  • the structure of the units shown in FIG. 29 performs the same processing as that shown in FIG. 4 in the spatial domain. Since the structure shown in FIG. 29 does not include a Fourier transform unit as compared with the structure in FIG. 4 , the processing speed is increased.
  • the signal r′ is independent of the input image data I. Therefore, computation of the signal r′, that is, processing of the inverse Fourier transform unit 2904 , need not be performed every time the image data I is input.
  • the signal r′ can be generated in advance. In this case, the inverse Fourier transform unit 2904 can be eliminated from the structure shown in FIG. 29 , thereby further increasing the speed of embedding the registration signal.
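The FIG. 29 variant can be sketched as below, again a NumPy illustration with assumed impulse parameters: the spatial pattern r′ is computed once by an inverse FFT of the registration signal r and then merely added to each block, so no per-block Fourier transform is needed.

```python
import numpy as np

n = 64
r = np.zeros((n, n))                  # registration signal r in the frequency domain
mid = n // 4                          # assumed intermediate-frequency impulses
for u, v in [(mid, 0), (n - mid, 0), (0, mid), (0, n - mid)]:
    r[u, v] = 50.0

# computed once, independent of the input image data I (unit 2904, or offline)
r_prime = np.real(np.fft.ifft2(r))

block = np.random.rand(n, n)          # one block from the block splitter 2901
out = block + r_prime                 # adder 2902: a single addition per block
```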
  • the registration processing for referring to the registration signal is described in the following description.
  • a principle referred to as a patchwork method is used to embed the additional information Inf.
  • the principle of the patchwork method is described.
  • the patchwork method performs embedding of the additional information Inf by generating statistical bias in an image.
  • FIG. 30 shows subset A 3001 , subset B 3002 , and an entire image 3003 .
  • the subsets A 3001 and the subset B 3002 are selected from the entire image 3003 .
  • the additional information Inf can be embedded using the patchwork method of the present embodiment.
  • the size and selection of the two subsets strongly influence the resistance of the additional information Inf embedded by the patchwork method, that is, the strength for retaining the additional information Inf when the image data wI is attacked. This is described in the following description.
  • each element a_i and b_i of the subsets A and B is a pixel level or a set of pixel levels.
  • the elements a_i and b_i correspond to part of the blue component in the color image data.
  • an index d is computed as d = (1/N) Σ (a_i − b_i) (1)
  • the index d is referred to as the reliability distance.
  • the value c is added to all the elements constituting the subset A, and the value c is subtracted from all the elements constituting the subset B.
  • the subsets A and B are selected from the image in which the additional information Inf is embedded, and the index d is computed.
  • in this case, the index d does not become zero; its expected value is approximately 2c.
  • in contrast, when the reliability distance d is computed for an image in which the additional information Inf is not embedded, the value of d approaches zero.
  • when the value d is at a predetermined distance from zero, it is determined that the additional information Inf is embedded.
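The statistical principle can be checked numerically. The sketch below (NumPy, synthetic data, arbitrary seed) shows that adding c to subset A and subtracting c from subset B shifts the reliability distance d by exactly 2c.

```python
import numpy as np

rng = np.random.default_rng(0)
N, c = 10000, 5
image = rng.integers(0, 256, size=2 * N).astype(float)   # stand-in pixel data

a_idx = rng.permutation(2 * N)[:N]                       # key-dependent subset A
b_idx = np.setdiff1d(np.arange(2 * N), a_idx)            # remaining pixels: subset B

d_before = np.mean(image[a_idx] - image[b_idx])          # near zero on average

image[a_idx] += c                                        # embed: subset A gets +c
image[b_idx] -= c                                        # embed: subset B gets -c
d_after = np.mean(image[a_idx] - image[b_idx])           # shifted by exactly 2c
```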
  • the patchwork method is applied to embed a plurality of bits of information.
  • the patchwork method defines the selection method of selecting the subsets A and B using a pattern array.
  • the patchwork method performs embedding of the additional information Inf by adding or subtracting an element of the pattern array to or from a predetermined element of the original image.
  • FIG. 9 shows an example of a simple pattern array.
  • the pattern array shown in FIG. 9 indicates a variation in the pixel level from the original image when 8×8 pixels are referred to in order to embed one bit.
  • the pattern array includes array elements having positive values, array elements having negative values, and array elements having zero values.
  • the corresponding pixel levels at positions indicated by the array elements +c are increased by c. This corresponds to the subset A.
  • the corresponding pixel levels at positions indicated by the array elements −c are decreased by c. This corresponds to the subset B.
  • the positions indicated by zero are included in neither of the subsets A and B.
  • the number of positive array elements and the number of negative array elements are set to be equal so that the overall gray level of the image does not change. In other words, the sum of all the array elements in one pattern array is zero. This is a condition for extracting the additional information Inf, which is described in the following description.
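As a sketch, a zero-sum pattern array in the spirit of FIG. 9 can be constructed as follows; the exact layout of the ±c elements here is an assumption, since only the balance of +c and −c counts matters.

```python
import numpy as np

c = 10
pattern = np.zeros((8, 8), dtype=int)
pattern[2:4, 2:4] = +c        # positions belonging to subset A (+c)
pattern[4:6, 4:6] = -c        # positions belonging to subset B (-c)

# equal counts of +c and -c, so the pattern sums to zero and the
# overall gray level of the image is unchanged
assert (pattern == c).sum() == (pattern == -c).sum()
assert pattern.sum() == 0
```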
  • in this manner, each bit of information constructing the additional information Inf is embedded.
  • the pattern shown in FIG. 9 is placed several times in different domains in the original image data, thereby increasing or decreasing the pixel levels. Accordingly, a plurality of bits of information, i.e., the additional information Inf, is embedded.
  • the additional information Inf including a plurality of bits is embedded.
  • the additional information Inf is repetitively embedded. Since the patchwork method utilizes statistical properties, a sufficient number of times is required to make use of the statistical properties.
  • the domains in which the pixel levels are changed using the pattern array are set not to overlap each other. This is accomplished by determining, for each bit, a relative position for using the pattern array. Specifically, the relationship between a position of the pattern array at which first bit information constructing the additional information Inf is embedded and a position of the pattern array at which second bit information is embedded is appropriately set.
  • when the additional information Inf is constituted of sixteen bits, the positional relationship among the 8×8-pixel pattern arrays of the first to sixteenth bits is relatively provided on a domain larger than 32×32 pixels so that deterioration in the image quality is suppressed.
  • the additional information Inf, namely the bits of information constructing the additional information Inf, is repetitively embedded.
  • this repetition is essential in the present embodiment because statistical measurement utilizing the repetitive embedding of the same additional information Inf is performed.
  • the selection of the embedding positions is performed by the embedding position determining unit 0103 shown in FIG. 1 .
  • the operation of the embedding position determining unit 0103 is described.
  • FIG. 11 shows the internal structure of the embedding position determining unit 0103 .
  • a mask creator 1101 creates a mask for specifying the embedding position of each bit of information constructing the additional information Inf.
  • the mask is a matrix provided with positional information specifying a relative placement of the pattern array (see FIG. 9 ) corresponding to each bit of information.
  • FIG. 17A shows an example of a mask 1701 .
  • Coefficients are allocated to the interior of the mask. Each coefficient has the same frequency of occurrence in the mask. Using the mask, it is possible to embed the additional information Inf having a maximum of sixteen bits.
  • a mask referring unit 1102 reads the mask created by the mask creator 1101 , relates each coefficient in the mask to information indicating that each bit of information is nth bit information, and determines the pattern array placement for embedding each bit of information.
  • a mask/pattern array corresponding unit 1103 develops the 8×8 array elements of each pattern array at the position of each coefficient in the mask. Specifically, each coefficient (one box) in the mask 1701 shown in FIG. 17A is expanded by the 8×8 pattern size, as shown by the coordinates 1702 in FIG. 17B , thereby providing a referable embedding position for each pattern array.
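The coordinate expansion amounts to scaling each mask coefficient position by the pattern size; a minimal sketch, with a hypothetical helper name:

```python
# Hypothetical helper: a coefficient at mask position (x, y) yields the
# head (top-left) coordinates of the 8x8 image region its pattern array
# occupies, so adjacent pattern arrays tile the image without overlap.
def head_coordinates(x, y, pattern_size=8):
    return (x * pattern_size, y * pattern_size)

# the coefficient at mask position (3, 2) governs the region at (24, 16)
assert head_coordinates(3, 2) == (24, 16)
```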
  • the additional information embedding unit 0104 refers to the embedding head coordinates 1702 in FIG. 17 B and embeds each bit of information using the pattern array.
  • the mask is created every time the image data (blue component) is input to the mask creator 1101 .
  • when image data of large size is input, the same additional information Inf is repetitively embedded.
  • the structure (array of coefficients) of the mask serves as a key. In other words, only the holder of the key can extract the information.
  • the present invention also covers a case in which, instead of creating a mask in real time, a pre-created mask is stored in an internal storage unit of the mask creator 1101 and the mask is read as circumstances demand. In this case, the processing can quickly move to the next stage.
  • the mask creator 1101 is described.
  • FIG. 13 shows spatial frequency characteristics perceived by human vision.
  • the horizontal axis represents radial spatial frequency
  • the vertical axis represents the visual response. It is understood from FIG. 13 that, when the pixel levels are manipulated and information is thus embedded, deterioration in the image quality is striking in the low-frequency domain to which the human eye is sensitive.
  • the present embodiment takes into consideration characteristics of a blue noise mask and a cone mask generally used in digitizing a multi-level image, and performs pattern placement corresponding to each bit.
  • the blue noise mask has a characteristic in which binarization of coefficients included in the mask at any threshold always gives a blue noise pattern.
  • the blue noise pattern is a pattern showing frequency characteristics in which the spatial frequency is biased toward the high-frequency domain.
  • FIG. 37 shows part of a blue noise mask.
  • FIG. 14A illustrates a graph 1401 showing the spatial frequency characteristics of the blue noise mask binarized at a threshold of ten.
  • the horizontal axis of the graph 1401 represents the radial spatial frequency, indicating a distance from the origin (DC component) when the Fourier transform on the blue noise mask is performed.
  • the vertical axis represents the power spectrum, indicating an average of the squared-sum of the magnitude components at a distance indicated by the radial spatial frequency of the horizontal axis.
  • FIG. 14A shows the two-dimensional frequency characteristics of the image in a one-dimensional graph which is visually easy to understand.
  • the blue noise mask is biased toward the high-frequency components, and it is thus imperceptible to the human eye. Therefore, ink jet printers and the like employ the blue noise mask when expressing the gray scale of a multi-level image by the areal gray scale using dots. In this manner, the spatial frequency component can be biased toward the high frequency, and the areal gray scale can be expressed so that the spatial frequency component is imperceptible to the human eye.
  • when the blue noise mask is constructed, the black or white bit determined at the previous gray level g cannot be inverted at the next gray level. This imposes harsh restrictive conditions at low and high gray levels. Therefore, the resultant pattern at those levels is a random pattern lacking in uniformity.
  • FIG. 12 shows a histogram 1201 showing the coefficients constituting the blue noise mask.
  • each of the values (coefficients) 0 to 255 occurs the same number of times in the mask.
  • Binarization of a multi-level image using the blue noise mask is well known to those skilled in the art.
  • the technique is described in detail by Theophano Mitsa and Kevin J. Parker in “Digital halftoning technique using a blue noise mask”, J. Opt. Soc. Am. A, Vol. 9, No. 11, November 1992.
  • One of the characteristics of the cone mask is that, when coefficients included in the mask are binarized, a periodic or pseudo-periodic peak arises in the spatial frequency domain representing the obtained binary information, as shown in a graph 1402 in FIG. 14 B.
  • the cone mask is designed not to give rise to a peak in the low-frequency domain.
  • FIG. 38 shows part of a coefficient array of a cone mask.
  • the graph 1402 shows the spatial frequency characteristics of the cone mask binarized at a threshold of ten. As in the case of the spatial frequency characteristics of the blue noise mask shown by the graph 1401 , the graph 1402 illustrates that low-frequency components are sparse.
  • the cone mask is advantageous in that, whether at a high threshold or at a low threshold, a peak arises at a frequency higher than the low-pass frequency of the blue noise mask, reducing dense portions at the embedding positions. Therefore, noise generated by embedding the additional information Inf is less perceptible than with the blue noise mask.
  • the frequency of occurrence of the coefficients constituting the cone mask is as shown in the histogram 1201 shown in FIG. 12 , which is the same as the blue noise mask.
  • the additional information Inf is uniformly embedded.
  • the cone mask is employed as the embedding reference mask since the cone mask is advantageous as described above.
  • the mask (cone mask) created by the mask creator 1101 is input to the mask referring unit 1102 .
  • the mask referring unit 1102 relates the embedding position at which the N-bit information is embedded in the image to the number (pixel level) of the mask and determines the embedding position.
  • the embedding position determining processing performed by the mask referring unit 1102 is described.
  • the cone mask is used.
  • a 4×4 mask 1501 shown in FIG. 15 is used.
  • the mask 1501 shown in FIG. 15 has 4×4 coefficients, and the coefficients 0 to 15 are each placed once. Using the 4×4 mask 1501 , reference to the embedding position of the additional information Inf is made.
  • the mask used in the description is capable of embedding the additional information Inf having a maximum of sixteen bits. In the following description, an example of the additional information Inf having eight bits is described.
  • the additional information Inf includes start bits Inf 1 and utilization information Inf 2 .
  • the start bits Inf 1 are used by an offset adjusting unit 2002 included in the digital watermark extracting unit to recognize that the actual position at which the additional information Inf is embedded is away from an ideal position, and to correct the starting position for extracting the digital watermark, that is, the additional information Inf, in accordance with the recognition. This is described in detail below.
  • the utilization information Inf 2 is information actually utilized as additional information in the image data I.
  • the utilization information Inf 2 includes an ID of the device shown in FIG. 1 or a user ID.
  • the utilization information Inf 2 includes control information indicating that copying is prohibited.
  • the start bits have five bits and use a bit string “11111”.
  • the present invention is not limited to this.
  • as the start bits of the additional information Inf, it is possible to use start bits having a number of bits other than five, or a bit string other than the bit string "11111".
  • the number of bits and the bit string of the start bits need to be shared by the digital watermark embedding unit and the digital watermark extracting unit.
  • the present invention is not limited to the above example.
  • the present invention is applicable to, for example, a case in which a 32×32 cone mask is used to embed additional information Inf having 69 bits including 5-bit start bits and 64-bit utilization information.
  • the additional information Inf in the embodiment has the 5-bit start bits “11111” and the 3-bit utilization information.
  • the first to fifth bits each have bit information 1 , the sixth bit has bit information 0 , the seventh bit has bit information 1 , and the eighth bit has bit information 0 (i.e., the bit string is "11111010").
  • the pattern (see FIG. 9 ) corresponding to each of the bits is allocated to a position corresponding to each of the coefficients included in the cone mask.
  • each pixel level of the original image data is changed by ±c. Accordingly, one piece of additional information Inf is embedded in the original image data of a size corresponding to one cone mask.
  • a threshold is determined based on the minimum number of bits required for embedding the additional information Inf.
  • at positions provided with coefficients not greater than the threshold, the corresponding bit information is embedded. Independent of the number of bits of the additional information Inf, one piece of additional information Inf is embedded in each cone mask.
  • the present invention is not limited to the above method.
  • alternatively, the corresponding bit information can be embedded at positions provided with coefficients not smaller than a certain threshold, and the threshold can be determined on that presupposition.
  • the ratio of the number of coefficients not more than the threshold used for embedding to the number of all coefficients included in the mask is referred to as the embedding filling factor.
  • a threshold for determining which coefficient is used as an embedding reference position in the mask 1501 shown in FIG. 15 is appropriately determined taking into consideration effects on the resistance and the image quality.
  • the embedding filling factor is 50%. Specifically, 50% of the original image data to which the mask is related is to be processed using the pattern array shown in FIG. 9 .
  • Table 1 shows an example of the corresponding relationship between the bit information and the coefficients included in the mask:
  • Table 1 includes bit information (start bits) S 1 to S 5 which are used to adjust the positions by the offset adjusting unit 2002 , and 3-bit utilization information 1 to 3 .
  • each bit of information is embedded using the pattern (see FIG. 9 ) at positions of pixels of input image data corresponding to positions of coefficients 0 to 7 shown by a mask 1601 in FIG. 16 .
  • the corresponding relationship between the order of bit information to be embedded and the coefficients in the mask is part of the key information.
  • Each bit of information cannot be extracted without knowing the corresponding relationship.
  • the present embodiment simplifies the description by using the corresponding relationship as shown in Table 1 that the bit information S 1 to S 5 and the 3-bit utilization information correspond to coefficients from 0 to the threshold.
  • the filling factor is as described below.
  • the processing steps are the same as when using the mask 1501 .
  • a threshold for reliably embedding the additional information Inf a certain integer number of times is determined taking into consideration deterioration in the image quality caused by embedding.
  • the number of coefficients not greater than the threshold is divided by the number of bits N constituting the additional information Inf; hence, the number of times each bit is embedded in one mask is determined.
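With the coefficient histogram of FIG. 12 (each value 0 to 255 occurring four times in a 32×32 mask), the repetition count per bit follows directly. The sketch below assumes the 69-bit example and the threshold of 137 given in the text.

```python
import numpy as np

# 32x32 cone mask: each coefficient 0..255 appears 1024 / 256 = 4 times
mask_coeffs = np.repeat(np.arange(256), 4)

threshold = 137                                  # threshold from the text
N = 69                                           # 5 start bits + 64 utilization bits

usable = int((mask_coeffs <= threshold).sum())   # embedding positions: 138 * 4 = 552
repetitions = usable // N                        # each bit embedded 552 // 69 = 8 times
filling_factor = usable / mask_coeffs.size       # embedding filling factor, about 0.54
```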
  • the threshold is set to, for example, 137.
  • the information is embedded in all points having coefficients not larger than a certain threshold. This is because the present embodiment aims to make best use of the characteristics of the cone mask that no peak arises in the low-frequency component of the spatial frequency.
  • Table 2 includes start bits S 1 to S 5 which are used for adjusting the positions by the offset adjusting unit 2002 , and utilization information 1 to 64 .
  • the present invention is not limited to the above relationship. As long as each bit of information is embedded at all coefficients from zero to the threshold, namely from zero to 255, using the pattern shown in FIG. 9 , the corresponding relationship between the bit information and the coefficients can be different from the above relationship.
  • the same coefficient is allocated to four positions in one mask.
  • each bit of information constructing the additional information Inf is embedded substantially the same number of times in a cone mask of large size, such as a 32×32 cone mask or a 64×64 cone mask.
  • the same bit information can be uniformly dispersed in the original image data.
  • the patchwork method randomly selects the embedding positions.
  • the present embodiment is as advantageous as the patchwork method by referring to the cone mask. In addition, deterioration in the image quality is suppressed.
  • the mask referring unit 1102 obtains the coordinates (x, y) of the embedding position corresponding to each bit of information.
  • the above processing steps are performed by the mask referring unit 1102 .
  • the embedding position of each bit of information in the cone mask obtained by the mask referring unit 1102 is input to the mask/pattern array corresponding unit 1103 .
  • the embedding position determined by the mask referring unit 1102 corresponds to positions of 8×8 pixels in a pattern of each bit of information.
  • the patchwork method allocates addition regions (+c), subtraction regions (−c), and the other regions (0) to the determined embedding positions.
  • the mask/pattern array corresponding unit 1103 performs 8×8 pattern-array development as shown in FIG. 9 .
  • the x coordinate is multiplied by the horizontal size of the pattern array
  • the y coordinate is multiplied by the vertical size of the pattern array.
  • a pattern array shown in FIG. 19 is used, and the pattern array development is performed starting from the head coordinates 1702 . As a result, embedding is successfully performed in a region 1703 of the size of the pattern array without any overlapping portion.
  • the coordinates held in the array S[bit][num] for each bit of the additional information Inf are used as the head positions for embedding the pattern array, and a plurality of bits of information can thus be embedded.
  • a mask obtained by developing (expanding) each coefficient in the cone mask by the mask/pattern array corresponding unit 1103 to the 8 ⁇ 8 pattern array is referred to as an expanded mask.
  • the size of the expanded mask is (32×8) by (32×8), that is, 256 by 256 pixels.
  • This size is a minimum image unit (referred to as a macro block) for embedding at least one piece of additional information Inf.
  • the above processing is performed by the mask/pattern array corresponding unit 1103 .
  • a smaller mask has a smaller degree of freedom in placing dot positions when creating the mask than a larger mask. It is thus difficult to create a mask having desired characteristics, such as a cone mask. For example, when the additional information Inf is embedded by repetitively allocating a small mask to the entire image data, the spatial frequency of the small mask is perceived in the entire image data.
  • the complete additional information Inf can be extracted from one macro block.
  • this provides resistance against cutting, that is, the possibility of extracting the additional information Inf from partial image data wI′.
  • the above processing is performed by the embedding position determining unit 0103 .
  • the additional information embedding unit 0104 refers to the embedding position of each bit of information in the image data and embeds the additional information Inf.
  • FIG. 10 shows the additional information embedding unit 0104 which repetitively embeds the additional information Inf.
  • a plurality of allocable macro blocks are allocated to the entire image.
  • a first bit of information is embedded in all the macro blocks
  • a second bit of information is embedded in all the macro blocks
  • a third bit of information is embedded in all the macro blocks, and so forth.
  • the bits of information are repetitively embedded. Specifically, when there is a bit of information that is not embedded, that bit of information is embedded in all unprocessed macro blocks by embedding steps performed by a switching unit 1001 , an adder 1002 , and a subtracter 1003 .
  • the present invention is not limited to the above processing steps.
  • the relationship between the two loop processing steps may be reversed. In other words, when there are any unprocessed macro blocks, all the bits of information that are not embedded may be embedded in the unprocessed macro blocks.
  • when the bit information to be embedded is one, the additional information Inf is embedded by adding the pattern array shown in FIG. 9 .
  • when the bit information to be embedded is zero, the pattern array shown in FIG. 9 is subtracted, that is, the inverse of the pattern array is added.
  • the above addition and subtraction are performed by controlling the switching unit 1001 in accordance with the bit information to be embedded. Specifically, when the bit information to be embedded is one, the switching unit 1001 is connected to the adder 1002 . When the bit information to be embedded is zero, the switching unit 1001 is connected to the subtracter 1003 . The switching unit 1001 , the adder 1002 , and the subtracter 1003 perform the processing steps by referring to the information concerning the bit information and the pattern array.
  • FIG. 19 illustrates an example of embedding one bit of information which is one. In this case, the pattern array is added.
  • I(x, y) indicates the original image
  • P(x, y) indicates an 8×8 pattern array. Coefficients of the 8×8 pattern array are superimposed on a region of the original image data (blue component) of the same size as the pattern array, and the addition or subtraction is performed position by position. As a result, I′(x, y) is computed. The resultant I′(x, y) is output as the image data of the blue component in which the bit information is embedded to the color component synthesizer 0105 shown in FIG. 1 .
  • the above addition and subtraction processing using the 8 ⁇ 8 pattern array is repetitively performed at all the embedding positions (positions to which the pattern array is allocated for embedding each bit of information) determined by Table 2.
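The addition/subtraction step above can be sketched in Python. The patch layout of the pattern array below is a hypothetical stand-in for FIG. 9 (the actual coefficient placement is not reproduced here); it keeps the properties the text relies on: coefficients sum to zero, and 32 non-zero elements give a convolution magnitude of 32c².

```python
import numpy as np

# Hypothetical 8x8 pattern array in the style of FIG. 9: a positive patch (+c)
# and a negative patch (-c) of 16 elements each, so the coefficients sum to
# zero and extraction by convolution yields about +/-32c^2.
def make_pattern(c=2):
    p = np.zeros((8, 8), dtype=int)
    p[0:4, 0:4] = +c   # positive patch
    p[4:8, 4:8] = -c   # negative patch
    return p

def embed_bit(block, pattern, bit):
    # Switching unit 1001: bit 1 -> adder 1002, bit 0 -> subtracter 1003.
    return block + pattern if bit == 1 else block - pattern
```

In an actual embedding pass this would be applied at every position determined by Table 2, once per bit of the additional information Inf.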
  • FIGS. 18A and 18B illustrate the loop processing in FIG. 10 .
  • macro blocks 1802 are repetitively allocated to the entirety of image data 1801 ( 1803 ), starting from the upper left to the lower right in the raster order.
  • This processing corresponds to the processing steps performed by the switching unit 1001 , the adder 1002 , and the subtracter 1003 .
  • the above processing is performed by the additional information embedding unit 0104 , and the additional information Inf is embedded in the entire image.
  • the additional information Inf is embedded in the image data.
  • the pattern array is sufficiently reduced in size.
  • each pattern array is perceived by human vision to be a tiny dot.
  • the spatial frequency characteristics of the cone mask are maintained, and the cone mask is substantially imperceptible to the human eye.
  • the file is compressed, stored in the memory 0107 , and then decompressed.
  • FIG. 39 shows a graph of chromatic spatial frequency characteristics of human vision. Three curves are obtained using spatial sinusoidal waves formed by black and white (monochrome), by red and green, and by yellow and blue, the latter two being opposite color pairs of uniform brightness. By changing the period and contrast of each spatial sinusoidal wave pattern, the perceptible limit of human vision is measured.
  • the sensitivity to black and white reaches a maximum at about 3 cycle/deg.
  • the sensitivity to chromaticity red and green, and yellow and blue reaches a maximum at about 0.3 cycle/deg.
  • the yellow and blue pattern is not as influential as the red and green pattern in identifying the fine spatial information.
  • embedding digital watermark information in a gray-scale image, which has only a brightness component, by modulating the image as it is, is less advantageous than embedding the digital watermark information in a color component of color image data, because deterioration in the image quality is more perceptible in the gray-scale image.
  • the digital watermark information is perceptible to the human eye as uneven color in spatially large regions in which the spatial frequency is low. In contrast, it is less perceptible to the human eye in spatially narrow regions in which the spatial frequency is high compared with embedding the digital watermark information in the brightness component.
  • the gray-scale image in which each pixel has one type of component is first converted into color image data in which each pixel has a plurality of components, and then the digital watermark information, such as the additional information Inf, is embedded. Therefore, deterioration in the image quality is suppressed compared with embedding the digital watermark information in the normal, unconverted gray-scale image.
  • a comparison between the case of embedding the digital watermark information in the gray-scale image and the case of embedding the digital watermark information in one component among the components forming the color image data demonstrates that the latter case is more advantageous in retaining the image quality when outputting an image at high resolution, that is, when expressing the gray scale of a pixel level by fewer ink dots.
  • a drawback of the above case is that the file size of the output color image data is approximately three times as large as the original image data.
  • the JPEG compressor/encoder 0106 performs JPEG compression and encoding of the digitally watermarked image data.
  • a JPEG compression and encoding technique utilizes human visual characteristics. By removing a component to which human vision is imperceptible, the JPEG compression and encoding technique reduces the amount of data. In contrast, a digital watermarking technique embeds information in a component to which human vision is imperceptible. Therefore, it is difficult for the JPEG compression and encoding technique and the digital watermarking technique to coexist.
  • the JPEG compression and encoding technique is regarded as a type of attack on the digital watermark information.
  • the pattern array as shown in FIG. 9 to be used in the embodiment is designed so that the additional information embedded in the color image data is not lost by sub-sampling chrominance components and quantization.
  • the JPEG compression and encoding system is briefly described.
  • the color image data input to the JPEG compressor/encoder 0106 is converted into brightness (Y) and chrominance (Cr and Cb) components.
  • Y = 0.29900×R + 0.58700×G + 0.11400×B
  • Cr = 0.50000×R − 0.41869×G − 0.08131×B
  • Cb = −0.16874×R − 0.33126×G + 0.50000×B (5)
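Equation (5) can be written directly as a small function (the function name is illustrative, not from the patent):

```python
def rgb_to_ycbcr(r, g, b):
    """Brightness/chrominance conversion of equation (5)."""
    y  =  0.29900 * r + 0.58700 * g + 0.11400 * b
    cr =  0.50000 * r - 0.41869 * g - 0.08131 * b
    cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
    return y, cr, cb
```

Note that for a gray pixel (R = G = B) both chrominance components are zero, while modulating only the blue component moves most of the watermark energy into Cb.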
  • the image data separated into the brightness component and the chrominance components is split into blocks of 8×8 pixels, starting from the upper left of the image in the raster order, as shown in FIG. 40 .
  • the JPEG compression and encoding is repetitively performed for every 8×8 block.
  • FIGS. 41A and 41B illustrate sampling of image data. The 4:2:2 sampling steps performed in the JPEG compression and encoding system are described below.
  • FIG. 41A shows a brightness component having 4×4 pixels 4101 . Since visually important information is included in the brightness component, decimation is not performed on the brightness component.
  • the 4×4 pixels 4101 remain unchanged and are output as the 4×4 pixels 4102 .
  • FIG. 41B shows chrominance components (Cr and Cb) having 4×4 pixels 4103 . Since information included in the chrominance components is not very important visually, decimation is performed on the chrominance components, in which two pixels are decimated to one pixel in the horizontal or the vertical direction. As a result, the chrominance components (Cr and Cb) having 4×4 pixels 4103 are converted into 4×2 pixels 4104 . Accordingly, the 8×8 pixels of the chrominance components are reduced to 8×4 pixels.
  • the brightness component Y and the chrominance components Cr and Cb, each having 8×8 pixels, become the 8×8-pixel brightness component Y and the 8×4-pixel chrominance components Cr and Cb, respectively.
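A minimal sketch of the horizontal decimation described above. Whether the encoder averages each horizontal pair or simply drops pixels is an implementation detail; averaging is assumed here:

```python
import numpy as np

def subsample_422(chroma):
    """4:2:2-style decimation of a chrominance plane: average each
    horizontal pair of pixels into one (two pixels -> one pixel),
    so an 8x8 plane becomes 8x4."""
    h, w = chroma.shape
    return chroma.reshape(h, w // 2, 2).mean(axis=2)
```

This halving is why the text later requires each patch of the pattern array to span 2×N pixels: a patch one pixel wide could be averaged away with its neighbor.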
  • a discrete cosine transform (DCT) is then performed on each block.
  • the JPEG technique efficiently compresses data by reducing the number of quantizing steps for high-frequency components of DCT coefficients. Quantization is performed so that the number of quantizing steps is reduced for the chrominance components compared with the brightness component.
  • the pattern array having resistance against the above compression and encoding is described.
  • a region 4201 having positive elements +c is referred to as a positive patch
  • a region 4202 having negative elements −c is referred to as a negative patch.
  • information is biased toward low-frequency components in a minimum coding unit (MCU) 4001 having 8×8 pixels shown in FIG. 40 , thereby strengthening the resistance against JPEG compression.
  • the present invention is not limited to this, and also covers a case in which the MCU has 16×16 pixels.
  • the resistance against sampling is strengthened by increasing the size of each patch to 2×N pixels (N is an integer) in the vertical and/or horizontal direction in accordance with the sampling.
  • to summarize: (1) each patch is biased toward the low frequency in the MCU (8×8 pixels), and (2) the size of each patch is 2×N (N is an integer) pixels in the vertical and/or horizontal direction in accordance with the sampling method.
  • in order that each patch has low-frequency components in each region having 8×8 pixels to be compressed and encoded using the JPEG technique, it is preferable that the position of the image at which the pattern array is allocated and the size of the pattern array (8×8 pixels in FIG. 9 ) be in synchronism with each region to be encoded.
  • the size of the pattern array and the embedding position are in synchronism with the unit size to be compressed and encoded by the JPEG technique.
  • the additional information Inf is embedded using the pattern array as shown in FIG. 9 . Accordingly, the digital watermark information, that is, the additional information Inf, is retained in the image data even after the image data is compressed and encoded using the JPEG technique. Hence, the image data has resistance against JPEG compression and encoding.
  • the present invention also covers a case in which the color component extracting unit 0101 directly converts the gray-scale (monochrome) image into the brightness component Y and the chrominance components Cr and Cb, and the additional information Inf or the like is embedded as the digital watermark information in the component Cb.
  • the JPEG compressor/encoder 0106 need not perform conversion into the brightness component and the chrominance components. Hence, the number of processing steps is reduced.
  • the present invention covers a case in which the color component extracting unit 0101 directly converts the gray-scale (monochrome) image into yellow (Y), magenta (M), cyan (C), and black (K) components, and the additional information Inf or the like is embedded as the digital watermark information only in the Y component. This case eliminates a step of converting the color components immediately before printing.
  • the present invention is not limited to the above cases in which embedding is performed in the blue component, the Cb component, and the Y component.
  • the present invention is also applicable to a case in which the additional information Inf or the like is embedded in part of all the components constructing one pixel.
  • Coded data obtained by the above JPEG compression and encoding processing is temporarily stored in the memory 0107 .
  • the coded data is read from the memory 0107 to the JPEG decompressor/decoder 0108 with a timing for transmitting to an external device or a timing for printing by a printer connected at the subsequent stage of the device shown in FIG. 1 .
  • coded data obtained by converting gray-scale image data into color image data, modulating a blue component, further converting the data into color image data formed of brightness and chrominance components, and finally compressing the color image data using the JPEG system is advantageous compared with coded data obtained by directly converting the original gray-scale data into the color image data formed of the brightness and chrominance components and compressing the color image data using the JPEG system.
  • the former coded data is advantageous since there is not a significant increase in the memory capacity, although there is a slight increase in the amount of data of the chrominance components.
  • the digital watermark information is embedded in the original image data, and then the image data is compressed using the JPEG compression and encoding system.
  • This method of embedding the digital watermark information in the gray-scale image data according to the present embodiment is advantageous compared with the method of modulating the gray-scale image and embedding the digital watermark information in that the image quality is improved while there is not a significant increase in the total amount of data.
  • the JPEG decompressor/decoder 0108 reads the coded data from the memory 0107 with a timing for transmitting to an external device or a timing for printing by a printer connected at the subsequent stage, and decodes the color image data using the reverse processing steps of the above compression processing steps.
  • the registration unit 0202 is provided before the additional information extracting unit 0203 and performs preliminary processing of extracting the additional information Inf.
  • An image of the blue component extracted by the color component extracting unit 0201 is input to the registration unit 0202 .
  • the registration unit 0202 compensates for the difference in scales of the image data wI output from the digital watermark embedding unit and the image data wI′ input to the digital watermark extracting unit.
  • FIG. 7 illustrates the registration unit 0202 in detail.
  • the registration unit 0202 includes a block splitter 0701 , a Fourier transform unit 0702 , an impulse extracting unit 0703 , a scaling factor computing unit 0704 , and a scaling unit 0705 .
  • the block splitter 0701 splits the data into blocks, in processing similar to that performed by the block splitter 0401 included in the registration signal embedding unit 0102 . However, because the image data wI in which the digital watermark information Inf is embedded has been processed by a printer or the like, its size is changed and its positions are further shifted, so it is generally difficult for the block splitter 0701 to extract blocks identical to those obtained by the block splitter 0401 .
  • the block splitter 0701 outputs the image data which is split into blocks to the Fourier transform unit 0702 .
  • the Fourier transform unit 0702 transforms the image data in the spatial domain into image data in the frequency domain, which is similar to processing performed in the registration signal embedding unit 0102 .
  • the image data in the frequency domain obtained by the Fourier transform is expressed by the magnitude spectrum and the phase spectrum. Only the magnitude spectrum is input to the impulse extracting unit 0703 , while the phase spectrum is discarded.
  • the transformed image data in the frequency domain is input to the impulse extracting unit 0703 .
  • the impulse extracting unit 0703 only extracts impulse signals from the transformed image data in the frequency domain. Specifically, the impulse extracting unit 0703 extracts the impulse signals 0502 to 0505 shown in FIG. 5 which are embedded in the image data.
  • the transformed image data in the frequency domain is processed using a threshold, as shown in FIG. 8 A.
  • a magnitude spectrum 0801 input to the impulse extracting unit 0703 is processed using a threshold 0802 .
  • the transformed image data in FIG. 8A is expressed in one dimension.
  • by using the threshold 0802 , the impulse signals can be extracted.
  • portions of the image data having the same size as the impulse signals at the low frequency are also extracted.
  • FIG. 8B shows a method for solving the above problem.
  • a quadratic differential is performed on the image data 0801 transformed in the frequency domain. This processing is similar to Laplacian filtering.
  • Data 0803 is obtained by performing a quadratic differential on the transformed image data 0801 in the frequency domain.
  • An appropriate threshold 0804 is selected for the data 0803 , and threshold processing is performed, thereby extracting impulse signals.
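One way the quadratic differential and thresholding of FIGS. 8A and 8B might be sketched. The 4-neighbor Laplacian and the wrap-around boundary handling via `np.roll` are simplifying assumptions, not the patent's exact filter:

```python
import numpy as np

def extract_impulses(mag, threshold):
    """Quadratic differential (Laplacian-like filter) on the magnitude
    spectrum, then thresholding: sharp impulse peaks survive while
    smooth low-frequency content (which also exceeds a plain amplitude
    threshold) is suppressed."""
    lap = (-4 * mag
           + np.roll(mag, 1, 0) + np.roll(mag, -1, 0)
           + np.roll(mag, 1, 1) + np.roll(mag, -1, 1))
    # At an isolated peak the Laplacian is strongly negative, so -lap is large.
    return np.argwhere(-lap > threshold)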
  • FIG. 26 also shows processing performed at the registration signal embedding side.
  • image data 2601 in the spatial domain is transformed to image data 2602 in the frequency domain.
  • An impulse signal 2603 is appended to the image data 2602 in the frequency domain.
  • Inverse frequency transformation is performed on the image data in the frequency domain to which the impulse signal (registration signal) 2603 is appended, and image data 2601 ′ in the spatial domain is restored. Even though some effects of the impulse signal 2603 can be found on the restored image data 2601 ′ in the spatial domain, they are substantially imperceptible to the human eye. Practically, the image data 2601 and the image data 2601 ′ seem to be identical. This is because the impulse signal 2603 appended in the frequency domain by the inverse Fourier transform is distributed in the entire image data with a small magnitude.
  • Appending an impulse signal as the impulse signal 2603 shown in FIG. 26 is similar to appending image data with a certain frequency component in the spatial domain.
  • when the frequency of the appended impulse signal is higher than a frequency perceptible to a person, and when the magnitude of the embedded impulse signal is not greater than a limit perceptible to a person, the appended impulse signal is not perceptible to the human eye. Therefore, the above method for appending the impulse signal is one type of digital watermarking.
  • the registration signal 2603 is embedded in the image data 2601 , and then the additional information Inf to be actually embedded is embedded. Finally, the image data 2601 ′ in the spatial domain is restored.
  • the Fourier transform is again performed. Therefore, the registration signal 2603 dispersed in the entire image data in the spatial domain is transformed to the signal in the frequency domain and restored as the impulse signal.
  • the impulse signal can be extracted by performing appropriate impulse extraction as described above, and a variation from the original image can be estimated. Compensation for the variation ensures that the embedded additional information Inf in the embodiment is reliably extracted.
  • the impulse signal is output from the impulse extracting unit 0703 shown in FIG. 7 , and the impulse signal is input to the scaling factor computing unit 0704 .
  • the scaling factor computing unit 0704 computes scaling based on the coordinates of the input impulse signal.
  • the scaling factor is computed based on the ratio of the frequency at which the impulse signal is embedded to the frequency at which the impulse is detected. For example, when the frequency of an embedded impulse signal is expressed by a and the frequency of a detected impulse signal is expressed by b, it can be concluded that scaling by the ratio a/b is performed. This is a well-known property of the Fourier transform. Accordingly, the scaling factor computing unit 0704 outputs the scaling factor.
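The ratio computation is trivial but worth pinning down, since the direction is easy to invert (the function name is illustrative):

```python
def scaling_factor(embedded_freq, detected_freq):
    """If a registration impulse embedded at frequency a is detected at
    frequency b, the image was scaled by a/b: shrinking an image in the
    spatial domain raises its frequencies, and vice versa (a standard
    property of the Fourier transform)."""
    return embedded_freq / detected_freq
```

So a detected frequency higher than the embedded one (b > a) means the image was shrunk (factor below 1), and a lower detected frequency means it was enlarged.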
  • the digital watermark embedding unit side may receive information about the position (frequency) at which the registration signal is embedded.
  • the positional information is received as an encoded signal, and the above computation processing for computing the scaling factor is performed. In this manner, only the person who knows the registration signal can reliably extract the additional information Inf. In this case, the registration signal is employed as the key to extracting the additional information Inf.
  • the scaling factor output from the scaling factor computing unit 0704 is input to the scaling unit 0705 .
  • the image data wI 1 ′ is input to the scaling unit 0705 .
  • Scaling of the input image data wI 1 ′ by the scaling factor is performed. Scaling can be performed by various methods, such as bilinear interpolation and bicubic interpolation.
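A minimal bilinear interpolation, one possible implementation of the scaling unit 0705 (bicubic, also mentioned above, would follow the same shape with a wider support):

```python
import numpy as np

def bilinear_resize(img, factor):
    """Resize a 2-D image by `factor` using bilinear interpolation."""
    h, w = img.shape
    nh, nw = int(round(h * factor)), int(round(w * factor))
    ys = np.linspace(0, h - 1, nh)          # source row coordinates
    xs = np.linspace(0, w - 1, nw)          # source column coordinates
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```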
  • the image data wI 2 ′ is output from the scaling unit 0705 .
  • FIG. 20 shows the additional information extracting unit 0203 .
  • an embedding position determining unit 2001 determines a region in the image data wI 2 ′ (blue component) from which the additional information Inf is extracted.
  • the operation of the embedding position determining unit 2001 is the same as the operation of the embedding position determining unit 0103 . Therefore, the same region is determined by the embedding position determining units 0103 and 2001 .
  • the additional information Inf is extracted using Table 2 and the pattern array shown in FIG. 9 .
  • Extraction of the additional information Inf is performed by convolution of the pattern array on the determined region.
  • the reliability distance d is a calculated value required for extracting the embedded information.
  • FIG. 6 shows a process of obtaining the reliability distance d corresponding to each bit of information.
  • FIG. 21 shows an example of extracting 1-bit information from image data (blue component) I′′(x, y) in which the 1-bit information constructing the additional information Inf is embedded.
  • FIG. 22 shows an example of extracting 1-bit information from image data I′′(x, y) in which the 1-bit information is not embedded.
  • the 1-bit information is embedded in the image data I′′(x, y).
  • An 8×8 pattern array P(x, y), i.e., a pattern array for extracting the additional information Inf, is used for convolution.
  • each element (0, +c, or −c) of the 8×8 pattern array is multiplied by the pixel level of the input image data I′′(x, y) located at the same position as that element, and the products are summed.
  • the pattern array P(x, y) is convoluted with the image data I′′(x, y).
  • the image data I′′(x, y) covers a case in which the image data has been attacked.
  • P(x, y) is a pattern array used for embedding
  • P′(x, y) is a pattern array used for extraction
  • FIG. 22 illustrates the case in which the above processing is performed for the image data I′′(x, y) in which the 1-bit information is not embedded. From an original image (corresponding to the image data I), a zero value is obtained as a result of convolution, as shown in FIG. 22 .
  • the process for extracting the 1-bit information is illustrated hereinabove with reference to FIGS. 21 and 22 .
  • the foregoing description illustrates an ideal case in which the convolution result of the image data I in which the additional information Inf is to be embedded is zero. In practice, it is less likely that exactly zero is obtained as a result of the convolution on a region of the image data I corresponding to the 8×8 pattern array.
  • each bit of information constructing the additional information Inf is embedded in the original image data a plurality of times.
  • the additional information Inf is embedded in the image a plurality of times.
  • the convolution arithmetic unit 0601 performs summation of results of the convolution arithmetic performed on each bit of information forming the additional information Inf. For example, when the additional information Inf has eight bits, eight sums are obtained. The sums corresponding to the bits of information are input to an averaging unit 0602 . The sums are divided by the number of all macro blocks n, thereby obtaining the average. The resultant average is the reliability distance d. In other words, the reliability distance d is a value generated by majority decision according to whether it is closest to 32c² or zero shown in FIG. 21 .
  • the average of the convolution results is a real number and is used as the reliability distance d.
  • the obtained reliability distance d is stored in a storage medium 0603 .
  • the convolution arithmetic unit 0601 repetitively obtains the reliability distance d for each bit forming the additional information Inf, and sequentially stores the reliability distance d in the storage medium 0603 .
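The pipeline of units 0601 and 0602 can be sketched as follows; the pattern used in the test is a hypothetical 16/16-element patch layout (not FIG. 9's actual one), chosen so that the ideal values ±32c² of FIG. 21 come out exactly:

```python
import numpy as np

def reliability_distance(blocks, pattern):
    """Convolution arithmetic unit 0601: point-wise product of the
    extraction pattern with each 8x8 region where one bit was embedded,
    summed; averaging unit 0602: divide by the number of macro blocks n.
    Result: near +32c^2 for an embedded 1, near -32c^2 for a 0, near 0
    when nothing is embedded."""
    return float(np.mean([np.sum(b * pattern) for b in blocks]))
```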
  • the computed value is described in detail. Ideally, the reliability distance d computed for the original image data I using the pattern array shown in FIG. 9 (the cone mask is also referred to for the placement information) is zero. For the actual data I, however, the computed value is often a non-zero value though it is extremely close to zero.
  • a histogram of the reliability distance d for each bit of information is as shown in FIG. 23 .
  • the horizontal axis indicates the reliability distance d generated for each bit of information
  • the vertical axis indicates the number of bits of information, that is, the frequency of occurrence of the reliability distance d, for which the convolution is performed to obtain the reliability distance d.
  • the reliability distance is not necessarily zero, whereas an average thereof is zero or a value close to zero.
  • a histogram of the reliability distance d is as shown in FIG. 24 .
  • the histogram in FIG. 24 is shifted rightward while retaining the shape of the histogram shown in FIG. 23 .
  • the reliability distance d of the image data in which the 1-bit information constructing the additional information Inf is embedded may not always be c, but an average thereof is c or a value close to c.
  • FIG. 24 illustrates the example in which the bit information indicating one is embedded.
  • bit information indicating zero is embedded, the histogram shown in FIG. 23 is shifted to the left.
  • the structure of the offset adjusting unit 2002 is described.
  • the appropriately scaled image data is input to the offset adjusting unit 2002 .
  • the start bits are detected by the reliability distance computation shown in FIG. 6 .
  • the offset adjusting unit 2002 generates five reliability distances corresponding to five bits of the start bits Inf 1 .
  • the start bits Inf 1 are part of the additional information Inf embedded by the additional information embedding unit 0104 , as shown in FIG. 36 . In the embodiment, there are five start bits Inf 1 .
  • the start bits Inf 1 are the first five bits of the additional information Inf.
  • the start bits Inf 1 are not adjacently or densely provided in the image in which the additional information Inf is embedded.
  • the start bits Inf 1 are dispersed since they are sequentially embedded correspondingly to the coefficients forming the cone mask as shown in Table 2.
  • FIG. 28 shows a flowchart illustrating a process performed by the offset adjusting unit 2002 . The following description is provided by referring to the flowchart shown in FIG. 28 .
  • in step S 2801 , the offset adjusting unit 2002 regards, for the input image data wI 2 ′, the upper left coordinates as the embedding starting coordinates. At the same time, the maximum MAX is set to zero.
  • in step S 2802 , the start bits are detected by the reliability distance computation shown in FIG. 6 .
  • in step S 2803 , the process determines whether the first to fifth bits of information obtained are the correct start bits “11111”. If the determination is affirmative, a series of five positive reliability distances d is detected as a result. If the determination is negative, it is less likely that a series of five positive reliability distances d is obtained. The process sequentially performs the above determination, thereby determining the position at which the correct start bits Inf 1 are detected as the embedding starting coordinates.
  • the correct start bits Inf 1 may be detected at a point other than the point expressed by the embedding starting coordinates. The cause for this is described with reference to FIGS. 27A to 27 C.
  • the original positions of macro blocks 2701 , 2703 , and 2704 are searched for by convolution using pattern arrays 2702 and 2704 which are the same as the pattern array used in embedding the additional information Inf (the cone mask is also referred to for the placement information). Searching sequentially advances from FIG. 27A to FIG. 27 C.
  • searching is performed based on one macro block (the minimum unit for extracting the additional information Inf) which is part of the image data wI 2 ′.
  • One small box conceptually represents the size of a pattern array used to embed one bit of information.
  • the original image and the pattern array for extracting the additional information Inf overlap only in the shaded regions.
  • searching further advances, and the position being searched for completely coincides with the actual position of the macro block.
  • the pattern array to be convoluted and the macro block overlap each other to the fullest extent.
  • the position being searched for is below and to the right of the position of the macro block in which the additional information Inf is actually embedded.
  • the pattern array to be convoluted and the macro block overlap each other in the shaded regions.
  • the correct start bits Inf 1 can be extracted.
  • the reliability distances d of the three cases shown in FIGS. 27A to 27 C are different because the overlapping areas are different in each case.
  • Each overlapping area may replace the reliability distance d.
  • the reliability distance d corresponding to each bit of information is very close to ±32c², as described above.
  • when the process determines, in step S 2803 , that the obtained bits of information are not the correct start bits Inf 1 , the process moves, in step S 2807 , to the next searching point in the raster order.
  • the process determines, in step S 2804 , whether the sum of the reliability distances corresponding to the five start bits Inf 1 is larger than the maximum MAX. If the determination is negative, the process moves, in step S 2807 , to the next searching point in the raster order. When the sum of the reliability distances corresponding to the five start bits Inf 1 is larger than the maximum MAX, the maximum MAX is updated to the sum of the reliability distances d.
  • the current searching point is stored as the embedding starting position.
  • in step S 2806 , the process determines whether all the searching points have been searched. If the determination is negative, the process moves, in step S 2807 , to the next searching point in the raster order. If all the searching points have been searched, the currently stored embedding starting position is output, and the process is terminated.
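The search loop of steps S2801 to S2807 can be sketched as below. `detect_bits` is a hypothetical callback standing in for the reliability distance computation of FIG. 6; it returns the five detected bits and their reliability distances at a candidate point:

```python
def find_offset(detect_bits, candidates):
    """Scan candidate embedding starting points in raster order; keep the
    point whose detected start bits are the correct "11111" and whose
    summed reliability distance is the largest seen so far (MAX)."""
    best, max_sum = None, 0.0
    for point in candidates:
        bits, dists = detect_bits(point)
        if bits == [1, 1, 1, 1, 1] and sum(dists) > max_sum:
            max_sum, best = sum(dists), point
    return best
```

Tracking the maximum, rather than stopping at the first match, handles the partial-overlap cases of FIGS. 27A to 27C, where the correct start bits can also appear at slightly offset positions but with smaller reliability distances.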
  • the offset adjusting unit 2002 of the present embodiment detects the start bits Inf 1 .
  • the coordinates at which the correct start bits Inf 1 are obtained is determined as the coordinates of the embedding starting position of the additional information Inf.
  • the information on the determined coordinates is output as the embedding starting coordinates to the subsequent stage.
  • the embedding starting coordinates and the image data in which the additional information Inf is embedded are input from the offset adjusting unit 2002 to a utilization information extracting unit 2003 .
  • the reliability distance d1 for each bit of information constructing the utilization information Inf 2 is computed.
  • the reliability distance d1 for each bit of information is output to a statistical testing unit 2006 .
  • Obtaining the reliability distance d1 corresponding to each bit of information forming the utilization information Inf 2 is substantially equivalent to obtaining each bit forming the embedded utilization information Inf 2 . This is described in detail hereinafter.
  • the reliability distances d1 are obtained based on the embedding starting coordinates determined by the above searching process.
  • the five start bits Inf 1 are not extracted.
  • the statistical testing unit 2006 determines the reliability of the reliability distances d1 obtained by the utilization information extracting unit 2003 in FIG. 20 .
  • the determination is performed by generating reliability distances d2 using a second pattern array differing from the first pattern array used for extracting the additional information Inf or the utilization information Inf 2 .
  • a reference to a histogram of the reliability distances d2 is made, and a reliability index D is generated.
  • the reliability distances d1 are obtained using the first pattern array (the cone mask is also referred to for the placement information) for extracting the utilization information Inf 2 by the utilization information extracting unit 2003 .
  • the reliability distances d2 are obtained using the second pattern array differing from the first pattern array.
  • the first pattern array is the pattern array shown in FIG. 9 employed to embed the additional information Inf including the start bits Inf 1 and the utilization information Inf 2 .
  • the second pattern array and the reliability index D are described in detail in the following description.
  • Each element of the subsets A and B is a pixel level.
  • the reliability distances d are expressed by d = Σ(a_i − b_i)/N. When N has a substantially large value and the pixel levels a_i and b_i are not correlated, the expectation value of the reliability distances d is zero.
  • the distribution of the reliability distances d is a normal distribution.
  • the central limit theorem indicates that, when extracting arbitrary samples of size n_c from a population, not necessarily in a normal distribution, with a mean m_c and a standard deviation σ_c, the distribution of sample means S_c approaches a normal distribution N(m_c, (σ_c/√n_c)²) as n_c increases.
  • the standard deviation ⁇ c of the population is unknown.
  • the number of samples n c is sufficiently large and when the population N c is larger than the number of samples n c , no practical difficulty is caused by replacing ⁇ c with a standard deviation s c of the samples.
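As an illustration of the statistics above, the following sketch computes d = Σ(a_i − b_i)/N for uncorrelated, uniformly distributed pixel levels. The function name and sample sizes are assumptions chosen only for the demonstration; by the central limit theorem the resulting distances cluster around zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def reliability_distance(a, b):
    # d = sum(a_i - b_i) / N over two equally sized subsets of pixel levels
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (a - b).sum() / a.size

# With uncorrelated pixel levels, each d is a sample mean of (a_i - b_i),
# so its distribution approaches N(0, (sigma_c / sqrt(n_c))^2): centred on
# zero, with spread shrinking as the subset size n_c grows.
n_c = 1000
d_samples = np.array([
    reliability_distance(rng.integers(0, 256, n_c),
                         rng.integers(0, 256, n_c))
    for _ in range(500)
])
```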
  • the histogram of the reliability distances d1 obtained by the utilization information extracting unit 2003 greatly varies depending on whether the utilization information Inf 2 is accurately extracted.
  • the bit information is not embedded at a position at which the utilization information Inf 2 should have been embedded.
  • the histogram of the reliability distances d1 becomes a normal histogram 2501 shown in FIG. 25 .
  • Each reliability distance d1 corresponding to a bit of information indicating one, which forms part of the utilization information Inf 2 , is biased in the positive direction, and each reliability distance d1 corresponding to a bit of information indicating zero is biased in the negative direction.
  • As a result, two “peaks” are formed.
  • the ratio of the sizes of the two “peaks” is substantially similar to the ratio of the number of bits of information indicating one to the number of bits of information indicating zero.
  • the reliability distances d1 obtained by convoluting the first pattern array with the original image data in which the additional information Inf is not embedded have the normal distribution 2501 .
  • the so-called second pattern array, which can reliably determine the state of the original image even though the additional information Inf is embedded, is used to generate a normal distribution of the reliability distances d2.
  • This normal distribution is regarded as the normal distribution 2501 , and it is determined whether the utilization information Inf 2 is correctly extracted.
  • When the histogram of the reliability distances d1 is detected outside the shaded portion (the elements from the center to 95%) of the normal distribution 2501 created based on the reliability distances d2, it can be concluded that there is a statistical bias in the target image and that the utilization information Inf 2 is embedded in the image. Hence, the reliability of the utilization information Inf 2 is statistically determined.
  • the method for performing the above statistical determination is described in detail in the following description.
  • the following description illustrates a method for generating a distribution similar to the histogram of the reliability distances d1 before the additional information Inf is embedded, such as the normal distribution 2501 , using the image data in which the additional information Inf or the utilization information Inf 2 is embedded.
  • an extraction unit 2005 uses the second pattern array to obtain the reliability distances d2, which generate a distribution similar to the normal distribution 2501 .
  • the extraction unit 2005 obtains the reliability distances d2 using the second pattern array which is “orthogonal” to the first pattern array used by the utilization information extracting unit 2003 .
  • the extraction unit 2005 operates in a manner substantially similar to the utilization information extracting unit 2003 in performing convolution or the like.
  • the pattern array shown in FIG. 9 used by the utilization information extracting unit 2003 is referred to as a “first pattern array”, and the mask or the cone mask used for referring to the position at which the first pattern array is placed is referred to as a “first position reference mask”.
  • the pattern array “orthogonal” to the first pattern array is referred to as a “second pattern array”, and a mask used for referring to the position at which the second pattern array is placed is referred to as a “second position reference mask”.
  • the offset adjusting unit 2002 inputs the embedding starting coordinates to the extraction unit 2005 using the second pattern array.
  • the reliability distances d2 are computed based on the reliability distance computation illustrated in FIG. 6 .
  • the pattern array used in the reliability distance computation shown in FIG. 6 is not the pattern array shown in FIG. 9 used for embedding information. Instead, a pattern array 3301 shown in FIG. 33A or a pattern array 3302 shown in FIG. 33B , each of which is “orthogonal” to the pattern array 0901 , is used.
  • the histogram of the reliability distances d2 obtained by convolution of the second pattern array on the image in which the additional information Inf is embedded is substantially the same as the normal distribution 2501 shown in FIG. 25 . Therefore, the histogram is regarded as the normal distribution 2501 .
  • the obtained normal distribution 2501 is used as the determination reference required for statistical testing performed in step S 3207 in FIG. 32 .
  • the extraction unit 2005 uses one of the pattern arrays 3301 and 3302 shown in FIGS. 33A and 33B , which are “orthogonal” to the first pattern array, and a second position reference mask 3502 shown in FIG. 35 to generate the normal distribution of the reliability distances d2.
  • Conditions for the pattern array “orthogonal” to the first pattern array include the following: (1) as shown in FIGS. 33A and 33B , the pattern array must have the same size as the pattern array 0901 shown in FIG. 9 ; and (2) when the pattern array 0901 shown in FIG. 9 used to embed the additional information Inf is convoluted with the pattern array, the result is zero, as in the pattern array 3301 or 3302 .
  • the convolution shown in FIG. 34 is the same as that shown in FIG. 21 and FIG. 22 .
  • each of the pattern arrays 3301 and 3302 shown in FIG. 33 is “orthogonal” to the pattern array 0901 shown in FIG. 9 .
  • the pattern array “orthogonal” to the pattern array used to embed the additional information Inf is employed to compute the reliability distances d2 because no statistical bias is then generated in the distribution of the reliability distances d2; in other words, a histogram having zero at its center is generated.
  • Another condition for the pattern array “orthogonal” to the first pattern array is as follows: (3) The pattern array “orthogonal” to the first array has the same number of non-zero elements as that of the pattern array used by the utilization information extracting unit 2003 , and the number of positive elements and the number of negative elements are the same. Therefore, the reliability distances d1 and the reliability distances d2 are extracted under the same arithmetic processing conditions.
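Conditions (1) through (3) can be checked mechanically. The two 4 × 4 arrays below are illustrative stand-ins (the actual arrays of FIG. 9 and FIGS. 33A/33B are not reproduced here); they merely satisfy the stated conditions.

```python
import numpy as np

# Hypothetical stand-in for the first pattern array (cf. FIG. 9)
first = np.array([[ 1, -1,  1, -1],
                  [-1,  1, -1,  1],
                  [ 1, -1,  1, -1],
                  [-1,  1, -1,  1]])
# Hypothetical stand-in for an "orthogonal" second pattern array (cf. FIG. 33)
second = np.array([[ 1,  1, -1, -1],
                   [ 1,  1, -1, -1],
                   [-1, -1,  1,  1],
                   [-1, -1,  1,  1]])

def is_orthogonal_pattern(p1, p2):
    """Check conditions (1)-(3): same size, zero convolution result, and
    matching counts of non-zero / positive / negative elements."""
    same_size = p1.shape == p2.shape                   # condition (1)
    zero_conv = (p1 * p2).sum() == 0                   # condition (2)
    same_nonzero = np.count_nonzero(p1) == np.count_nonzero(p2)
    balanced = (p2 > 0).sum() == (p2 < 0).sum()        # condition (3)
    return same_size and zero_conv and same_nonzero and balanced
```

Here "convolution" reduces to the element-wise product summed over the aligned arrays, matching the reliability-distance computation described for FIG. 21 and FIG. 22.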
  • the reference mask 3502 shown in FIG. 35 is used as the “second position reference mask”.
  • the pattern and the size of the reference mask 3502 differ from those of a first embedding position reference mask 3501 .
  • the histogram of the reliability distances d2 is substantially similar to the normal distribution 2501 .
  • When the positions of the start bits are not accurately detected, it is likely that a statistical bias is generated even when convolution using the second pattern array is performed. Taking this possibility into consideration, the sizes of the first and second position reference masks are made different, thereby canceling periodic elements.
  • the pattern arrays in the masks may be arranged in different configurations. Hence, convolution is not performed in the same region.
  • the “second position reference mask” may be any type of mask as long as coefficients constructing the mask are randomly distributed.
  • the “second position reference mask” need not be the cone mask.
  • the “second embedding position reference mask” is created by the embedding position determining unit 2004 shown in FIG. 20 .
  • Compared with the size of the first position reference mask, i.e., the cone mask, it is preferable that the size of the “second position reference mask” be large.
  • the size of the second mask, used to compute the reliability distances d2 on the side extracting the additional information Inf, is set to be larger than that of the first mask, which is referred to when embedding the additional information Inf.
  • the present invention is not limited to the above.
  • the sizes of the first and second masks may be set to be equal, thereby partially achieving the effect.
  • the “second position reference mask” may be created by the embedding position determining unit 2001 shown in FIG. 20 .
  • the minimum condition for each mask is that the number of repetitions of each bit forming the additional information Inf to be applied to each mask is equal to that in an image region of the same size.
  • Another second pattern array or another second position reference mask satisfying the above condition may be used to again compute the reliability distances d2.
  • an ideal histogram, i.e., the normal distribution 2501 shown in FIG. 25 , may be created.
  • a 32 ⁇ 32 cone mask is used as the first position reference mask, and a 64 ⁇ 64 cone mask is used as the second position reference mask.
  • the relative arrays of coefficients are completely different.
  • the extraction unit 2005 determines the embedding position in accordance with Table 3:
  • the same coefficient appears 16 times.
  • the same coefficient appears four times when the mask is referred to in accordance with Table 2.
  • within an image region of the same size, the same coefficient appears the same number of times for the first position reference mask and for the second position reference mask.
  • the second pattern array is allocated in accordance with the positional relationship illustrated in Table 3, and convolution is sequentially performed. As a result, 69 reliability distances d2 corresponding to 69 bits of information are computed.
  • the reliability distances d2 created by the extraction unit 2005 using the second pattern array are distributed in a manner substantially similar to the normal distribution 2501 .
  • 95% of samples appear in a range defined by the following expression: m − 1.96σ ≤ d2 ≤ m + 1.96σ (7), where σ is the standard deviation of the reliability distances d2 and m is the mean.
  • the above range is referred to as a “95% reliability region”.
  • When the utilization information Inf 2 is embedded and a bit of information indicates one, the histogram of the reliability distances d1 input from the utilization information extracting unit 2003 to the statistical testing unit 2006 becomes the normal distribution 2502 shown in FIG. 25 .
  • When a bit of information indicates zero, the histogram becomes the normal distribution 2503 . Therefore, it is very likely that the reliability distances d1 corresponding to the utilization information Inf 2 are detected outside the 95% reliability region obtained by the extraction unit 2005 using the second pattern array, which is represented by the shaded portion in FIG. 25 .
  • Even though the offset adjusting unit 2002 performs its processing, when the utilization information Inf 2 is not embedded in the target image, the histogram of the reliability distances d1 becomes the normal distribution 2501 .
  • the probability that all 64 reliability distances d1 corresponding to the utilization information Inf 2 fall outside the reliability region expressed by expression (7) is (1 − 0.95)^64, which is very small.
  • the normal distribution 2501 is obtained based on the reliability distances d2, it is possible to reliably determine whether the additional information Inf or the utilization information Inf 2 is embedded by determining whether the histogram obtained based on the reliability distances d1 is included in a major portion of the normal distribution 2501 .
  • the statistical testing unit 2006 utilizes the above characteristics to determine the reliability that the additional information Inf or the utilization information Inf 2 is embedded.
  • the reliability that the additional information Inf 2 is embedded is referred to as the reliability index D.
  • the reliability index D is defined as the ratio of the number of reliability distances d1 outside the region defined by expression (7) to the number of all of the reliability distances d1 created by the utilization information extracting unit 2003 .
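A sketch of the reliability index D under these definitions follows. The synthetic d1 and d2 samples are assumptions chosen only to illustrate the biased (watermarked) and unbiased cases; the function name is hypothetical.

```python
import numpy as np

def reliability_index(d1, d2, z=1.96):
    """Reliability index D: ratio of the d1 distances falling outside the
    95% reliability region m - 1.96*sigma <= d2 <= m + 1.96*sigma
    (expression (7)), with m and sigma estimated from the d2 distances."""
    d1, d2 = np.asarray(d1, dtype=float), np.asarray(d2, dtype=float)
    m, sigma = d2.mean(), d2.std()
    outside = np.sum((d1 < m - z * sigma) | (d1 > m + z * sigma))
    return outside / d1.size

rng = np.random.default_rng(1)
d2 = rng.normal(0.0, 1.0, 69)           # second-pattern distances: unbiased
d1_marked = rng.normal(8.0, 1.0, 64)    # embedded bits push d1 far outside
d1_unmarked = rng.normal(0.0, 1.0, 64)  # no watermark: mostly inside region
```

Under these assumptions, `d1_marked` yields a D near one (statistical bias present), while `d1_unmarked` yields a D near the nominal 5% false-positive rate.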
  • When the reliability index D is sufficiently large, the statistical testing unit 2006 determines that the overall histogram of the reliability distances d1 has been intentionally biased toward the normal distribution 2502 or the normal distribution 2503 . It is thus determined that the utilization information Inf 2 is positively embedded in the image.
  • the reliability distances d1 used for determination are regarded as reliable information. Hence, the reliability distances d1 are permitted to be forwarded to a comparator 2007 at the subsequent stage.
  • the reliability index D based on the utilization information Inf 2 or a message based on the reliability index D may be displayed on a monitor or the like.
  • Values of the reliability distances d1 output through the utilization information extracting unit 2003 and the statistical testing unit 2006 are input to the comparator 2007 shown in FIG. 20 . Since the input reliability distances d1 are highly reliable information, it is only necessary to determine whether each bit of information corresponding to the reliability distances d1 indicates one or zero.
  • When the reliability distance d1 is a positive value, the bit of information is determined to be one; when the reliability distance d1 is a negative value, the bit of information is determined to be zero.
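The comparator's sign rule can be stated compactly; `decode_bits` is a hypothetical helper name, not part of the patent.

```python
def decode_bits(d1_values):
    """Comparator 2007: a positive reliability distance d1 decodes to
    bit 1, a negative one to bit 0."""
    return [1 if d > 0 else 0 for d in d1_values]
```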
  • the utilization information Inf 2 obtained as above is output as reference information for a user or as final data to be converted into a control signal.
  • the additional information Inf or the utilization information Inf 2 used in the above embodiment may be replaced by error-correction-coded information. In this case, the reliability of the extracted utilization information Inf 2 is further enhanced.
  • the present invention is applicable to part of a system including a plurality of devices, such as a host computer, an interface device, a reader, and a printer. The present invention is also applicable to part of a single device such as a copying machine or a facsimile machine.
  • the present invention is not limited to a device or a method for accomplishing the above embodiment.
  • the present invention also covers a case in which software program code for accomplishing the above embodiment is provided, and a computer of the system or the device operates the various devices in accordance with the program code, thereby accomplishing the above embodiment.
  • the software program code itself performs the functions of the above embodiment. Therefore, the present invention covers the program code and a medium for providing the computer with the program code, that is, a storage medium for storing the program code.
  • the storage medium for storing the program code includes a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a magnetic tape, a nonvolatile memory card, or a ROM.
  • the present invention covers not only the above case in which the computer controls the various devices in accordance with the supplied program code and accomplishes the functions of the embodiment, but also a case in which the program code accomplishes the above embodiment in cooperation with an operating system (OS) running in the computer or other application software.
  • the present invention also covers a case in which, after the program code is stored in a memory of an add-on board of the computer or an add-on unit connected to the computer, a CPU of the add-on board or the add-on unit performs part or the entirety of the actual processing based on instructions from the program code, thereby performing the functions of the above embodiment.
  • the present invention is not limited to that embodiment.
  • the present invention also covers a case in which the blue noise mask is used to embed the digital watermark information.
  • the present invention includes any structure as long as that structure includes at least one of the above characteristic points.

US09/676,949 1999-11-18 2000-10-02 Image processing device, image processing method, and storage medium Expired - Fee Related US6873711B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP32842799A JP2001148776A (ja) 1999-11-18 1999-11-18 画像処理装置及び方法及び記憶媒体

Publications (1)

Publication Number Publication Date
US6873711B1 true US6873711B1 (en) 2005-03-29

Family

ID=18210149

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/676,949 Expired - Fee Related US6873711B1 (en) 1999-11-18 2000-10-02 Image processing device, image processing method, and storage medium

Country Status (2)

Country Link
US (1) US6873711B1 (ja)
JP (1) JP2001148776A (ja)




Also Published As

Publication number Publication date
JP2001148776A (ja) 2001-05-29

Similar Documents

Publication Publication Date Title
US6873711B1 (en) Image processing device, image processing method, and storage medium
JP4218920B2 (ja) Image processing apparatus, image processing method, and storage medium
US7142689B2 (en) Image processing apparatus for determining specific images
US6741758B2 (en) Image processor and image processing method
US6879703B2 (en) Method and apparatus for watermarking images
US7978877B2 (en) Digital watermark embedding method, digital watermark embedding apparatus, and storage medium storing a digital watermark embedding program
JP2001218006A (ja) Image processing device, image processing method, and storage medium
US7995790B2 (en) Digital watermark detection using predetermined color projections
US8098883B2 (en) Watermarking of data invariant to distortion
US6826290B1 (en) Image processing apparatus and method and storage medium
JP4035717B2 (ja) Image processing apparatus and image processing method
JP5541672B2 (ja) Apparatus, method, and program
US6993148B1 (en) Image processing apparatus and method, and storage medium
JP3647405B2 (ja) Image processing apparatus and image processing method
US7058232B1 (en) Image processing apparatus, method and memory medium therefor
US6853736B2 (en) Image processing apparatus, image processing method and storage medium
JP4311698B2 (ja) Image processing apparatus, image processing method, and recording medium
JPH05145768A (ja) Adaptive encoding and decoding systems for color document images
JP3809310B2 (ja) Image processing apparatus and method, and storage medium
JP3884891B2 (ja) Image processing apparatus and method, and storage medium
JP3869983B2 (ja) Image processing apparatus and method, and storage medium
JP2001119558A (ja) Image processing apparatus and method, and storage medium
JP3684181B2 (ja) Image processing apparatus and image processing method
JP3740338B2 (ja) Image processing apparatus and method, and storage medium
JP2001292301A (ja) Image processing apparatus, image processing method, and storage medium

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MURAKAMI, TOMOCHIKA;HAYASHI, JUNICHI;REEL/FRAME:011210/0366

Effective date: 20000926

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Expired due to failure to pay maintenance fee

Effective date: 20170329