FR2895188A1 - Method of realigning two multimedia data segments for realigning multimedia documents, involves mapping between video frames of two videos, based on minimal pairing cost


Info

Publication number
FR2895188A1
Authority
FR
France
Prior art keywords
video
segment
segments
frames
signature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
FR0553924A
Other languages
French (fr)
Inventor
Bertrand Chupeau
Lionel Oisel
Pierrick Jouet
Francois Le Clerc
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS
Priority to FR0553924A
Priority to PCT/EP2006/067587
Publication of FR2895188A1
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38Registration of image sequences
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/754Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The method involves extracting signatures from two videos, such as an original video and a pirated video. A pairing cost between video frames of the original video and video frames of the pirated video is computed, based on the extracted signatures. A mapping between the video frames of the two videos is then determined, based on the minimal pairing cost. An independent claim is included for a device for realigning two multimedia data segments.

Description

Method and device for temporal registration of multimedia documents

The invention relates to a device and a method for the temporal registration of multimedia documents.

The invention relates more particularly to the possibility of finding a mark in a video film, whether this mark was inserted physically during printing of a traditional film support (film stock) or digitally on a digitized version.

Indeed, one concern of film producers is piracy. With the proliferation of digital copies over communication networks such as the Internet, it is becoming increasingly difficult to distinguish legally released copies from pirated copies. One way to identify illegal copies is to inscribe a mark on each legal copy. This mark may be different on each copy, so that each copy has its own identity. Thus, when a copy is found and it does not appear to be a legal copy, its mark must be retrieved in order to identify from which original copy it was reproduced. This can then make it possible to trace the source of the piracy. Searching for marks in films requires aligning the films, and various techniques have been proposed to achieve such an alignment. However, such a search for marks in the images is tedious today, because it must be carried out by trial and error by an operator, and matching two film strips takes a long time. The document "Temporal alignment of video sequences for watermarking systems", published in the proceedings of the Electronic Imaging conference, SPIE, Santa Clara, CA, January 2003, proposes a method for aligning two films, but this method has various drawbacks, in particular the fact that it is based solely on frame-to-frame matching, which makes it not very robust to temporal cuts, to luminance variations, or to large variations in motion.

The invention proposes to solve at least one of the drawbacks mentioned above. To this end, the invention proposes a method for registering two multimedia data segments. According to the invention, the method comprises:
- a signature extraction step for each elementary unit of each segment,
- a step of computing, from said signature, a pairing cost between an elementary unit of the first segment and an elementary unit of the second segment,
- a step of determining, from the minimal pairing cost, the minimal pairing path between the elementary units of the two segments, said minimal path then representing the correspondence between the two segments.

According to a preferred embodiment, the multimedia data are video data, said elementary units are video frames, and the two multimedia data segments are at different frame rates; the method then comprises, prior to the pairing cost computation step:
- a step of computing the ratio between the two frame rates of said video data segments,
- an interpolation step so as to obtain two temporal signature signals at the same frame rate.

According to a preferred embodiment, the method comprises, after the pairing cost computation step, a refinement step of the pairing in which:
- the shot breaks are detected (T1-T11) in the two video data segments,
- the shot breaks of the two video data segments are matched,
- the frame matching is adjusted so as to make it coincide with the correspondences between shot breaks.
According to a preferred embodiment, during the signature extraction step for each elementary unit of each segment:
- a colour histogram is computed for each frame of each video segment,
- said signature is computed (E2, E4) by computing a distance between said colour histograms of successive frames.

According to a preferred embodiment, when the two video segments are of great length, in particular a feature-length film, the method comprises, prior to said frame-level steps:
- a step of decomposing each of said video segments into shots,
- a signature extraction step for each shot of each segment,
- a step of computing, from said signature, a pairing cost between each shot of each segment,
- a step of determining, from the minimal pairing cost, the minimal path between the shots of the two segments, said minimal path then representing the correspondence between the shots,
said subsequent frame-level steps then being carried out only for the frames of a single shot of one video segment and of the matched shot of the other video segment for which said path is minimal.

Advantageously, the method comprises a step of computing a ratio between the two video data segments as a function of the shot breaks not detected in at least one of the video data segments relative to the other sequence, said ratio then being used to reduce the matching area of said frames in said matched shots.

Preferably, the method comprises, after the shot break matching step, when the number of shots detected in one segment without a counterpart in the other segment exceeds a threshold:
- a step of decomposing the longer segment into sub-segments,
said step of determining the minimal path between the frames of the two segments then being carried out for each sub-segment, the minimal path finally retained being the lowest-cost path among all the minimal paths obtained for each sub-segment.

According to a preferred embodiment, during the signature extraction step for each elementary unit of each segment:
- a colour histogram is computed for each frame of each video segment,
- the duration of the shot is computed,
- motion vectors relating to the shot are computed,
- a contour vector is computed,
- said signature is computed as a function of the colour histogram, the shot duration, the motion vectors and said contour vector.

The invention also relates to a device for registering two multimedia data segments, comprising:
- signature extraction means for each elementary unit of each segment,
- means for computing, from said signature, a pairing cost between an elementary unit of the first segment and an elementary unit of the second segment,
- means for determining, from the minimal pairing cost, the minimal pairing path between the elementary units of the two segments, said minimal path then representing the correspondence between the two segments.

The invention will be better understood and illustrated by means of examples of advantageous, non-limiting embodiments and implementations, with reference to the appended figures, in which:

- figure 1 shows a flowchart of a preferred embodiment of the invention,
- figure 2 shows a preferred embodiment of the frame registration step,
- figure 3 shows a flowchart of a preferred embodiment applied to the registration of two videos of great length,
- figure 4 shows a preferred embodiment of the frame registration step applied to the registration of two videos of great length,
- figure 5 shows a flowchart of a preferred embodiment of the shot detection step,
- figure 6 shows an example of shot break detection.
The modules shown are functional units, which may or may not correspond to physically distinguishable units. For example, these modules, or some of them, may be grouped into a single component or constitute functions of a single piece of software. Conversely, some modules may possibly be composed of separate physical entities.

The preferred embodiment relates to the search for marks in a pirated video. However, the present invention relates more generally to the temporal registration of two multimedia documents, preferably audio or video. Among other applications, one may in particular consider finding a video segment within another video segment, for example in order to synchronize them temporally. The creation of special effects may also be considered. Figure 1 shows a preferred embodiment of the invention when two videos are considered of which at least one is of short duration. One may, for example, seek to find and align a short excerpt of the original video within a pirated copy. Indeed, registering a short sequence against another sequence of long or short duration requires less computation time and memory than registering a very long video against a video of the same length, and can be carried out directly according to the method shown in figure 1. Figure 3, described later, covers the comparison of two videos of great length (for example of at least one hour each), for which directly applying the method of figure 1 would be costly.

Thus, the videos under consideration are decoded if necessary, in a step E1 for the original video and in a step E3 for the pirated video. The videos may indeed be encoded according to various coding standards, for example MPEG-1, MPEG-2, MPEG-4, and so on. The signatures are then extracted from the two videos, in a step E2 for the original video and in a step E4 for the pirated video. During these two steps, the video is transformed into a one-dimensional signal, referred to hereafter as the "signature" and denoted respectively p1(t) and p2(t) for the two videos to be registered; this signal is easier to manipulate while remaining representative of the content of the video and robust to format conversions, light variations and other alterations of the video signal. It is preferable to take as signature values distances between the characteristics of successive images rather than descriptors relating to the images themselves. In order to be robust to geometric distortions, global image-level statistics are chosen, in the form of a histogram.

More precisely, a colour histogram over 512 bins is computed for each image. Using colour provides more precise information than using luminance alone in certain critical cases. The signature p(t) for the image at time t is the distance between the colour histogram H(t) associated with image t and the histogram H(t-1) associated with the preceding image.

Each image or frame can be considered an elementary unit. The method then consists in pairing each elementary unit of the reference video with an elementary unit of the pirated video.

The distance between histograms preferably used is the Bhattacharyya distance. This distance measures the dissimilarity between statistical distributions. This distance, given in the equation below for two histograms H and K containing 512 bins (N equals 512 in the equation below), captures the temporal variations between successive colour histograms with less noise than a conventional distance:

$d_{Bhat}(H,K) = -\log \sum_{i=1}^{N} \sqrt{h_i\, k_i}$

Each image is characterized by its signature p(t), representing the distance between two consecutive histograms, computed with the Bhattacharyya method:

$p(t) = d_{Bhat}(H(t), H(t-1))$

In step E5, the registration operation between the two videos is carried out. The matching of the frames of the two videos is done by a so-called "dynamic programming" approach.
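Before turning to the dynamic-programming step, the signature extraction just described can be illustrated with a short sketch. This is only an illustrative implementation, assuming frames are available as 8-bit RGB arrays; the 8x8x8 binning (512 bins) follows the description above, while the function names and the numerical guard are assumptions of this sketch.

```python
import numpy as np

def color_histogram(frame, bins_per_channel=8):
    """Normalized 512-bin colour histogram (8x8x8 RGB bins) of one frame."""
    # frame: H x W x 3 array of uint8 values
    hist, _ = np.histogramdd(
        frame.reshape(-1, 3),
        bins=(bins_per_channel,) * 3,
        range=((0, 256),) * 3,
    )
    hist = hist.ravel().astype(np.float64)
    return hist / hist.sum()

def bhattacharyya_distance(h, k):
    """d_Bhat(H, K) = -log sum_i sqrt(h_i * k_i)."""
    bc = np.sum(np.sqrt(h * k))
    return -np.log(max(bc, 1e-12))  # guard against log(0) on disjoint histograms

def video_signature(frames):
    """Signature p(t): Bhattacharyya distance between consecutive frame histograms."""
    hists = [color_histogram(f) for f in frames]
    return np.array([bhattacharyya_distance(hists[t], hists[t - 1])
                     for t in range(1, len(hists))])
```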

This dynamic programming approach is shown in figure 2 and is described below with reference to that figure. The goal is to find in the pirated video the marked sequence that will make it possible to identify the source of this pirated copy. It is therefore necessary to find in the pirated video the first frame that corresponds to this sequence of marked frames. However, the pirated video and the original video are often at different frame rates. For example, American camcorders capture images at a rate of 29.97 Hz whereas the frame rate of a video intended for the cinema is 24 Hz. A preliminary pre-processing step is therefore carried out in order to interpolate the reference signature by an adequate factor, namely 29.97/24 here. The video frame rate is recovered from the bit stream.
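A minimal sketch of this pre-processing step, assuming both signatures are one-dimensional arrays and both frame rates are known from the bit streams; the linear interpolation and the helper name are illustrative assumptions, the description only requiring that both signatures end up at the same frame rate.

```python
import numpy as np

def resample_signature(p_ref, fps_ref=24.0, fps_target=29.97):
    """Interpolate the reference signature p_ref so that both signatures
    are expressed at the same frame rate (factor fps_target / fps_ref)."""
    n_out = int(round(len(p_ref) * fps_target / fps_ref))
    t_ref = np.arange(len(p_ref)) / fps_ref    # original time stamps
    t_out = np.arange(n_out) / fps_target      # target time stamps
    return np.interp(t_out, t_ref, p_ref)
```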

The dynamic programming method is based on a two-dimensional table. Along the horizontal axis (i), each frame of the original video is represented. Along the vertical axis (j), each frame of the pirated video is represented. Each frame is represented by its signature, computed as indicated above. Each element A(i,j) of the table gives the minimal cost for pairing frame i of the original video with frame j of the pirated video. This minimal cost is computed as the sum of:
- the distance between the signatures associated with the two frames,
- the minimal cost path leading to element A(i,j).

This gives the following equation:

$A(i,j) = \min\big(A(i-1,j-1),\; w_h \cdot A(i,j-1),\; w_v \cdot A(i-1,j)\big) + dist(i,j)$

$w_h$ and $w_v$ are penalties associated with horizontal and vertical transitions. Horizontal and vertical transitions correspond respectively to matching one frame of the pirated video with several frames of the original video, and one frame of the original video with several frames of the pirated video. The values of $w_h$ and $w_v$ are preferably greater than 1 in order to penalize these transitions relative to diagonal transitions.
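Filling this table and backtracking the minimal-cost path can be sketched as follows. The distance dist(i, j) is left as a caller-supplied function (the χ² distance defined just below would be one choice), and the penalty values used here are illustrative assumptions.

```python
import numpy as np

def align_frames(p1, p2, dist, wh=1.1, wv=1.1):
    """Fill the dynamic-programming table
    A(i,j) = min(A(i-1,j-1), wh*A(i,j-1), wv*A(i-1,j)) + dist(i,j)
    and backtrack the minimal-cost pairing path."""
    n, m = len(p1), len(p2)
    A = np.full((n, m), np.inf)
    A[0, 0] = dist(p1[0], p2[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            candidates = []
            if i > 0 and j > 0:
                candidates.append(A[i - 1, j - 1])   # diagonal transition
            if j > 0:
                candidates.append(wh * A[i, j - 1])  # horizontal transition
            if i > 0:
                candidates.append(wv * A[i - 1, j])  # vertical transition
            A[i, j] = min(candidates) + dist(p1[i], p2[j])
    # backtrack from the last cell to recover the frame-to-frame mapping
    path, i, j = [], n - 1, m - 1
    while i > 0 or j > 0:
        path.append((i, j))
        moves = []
        if i > 0 and j > 0:
            moves.append((A[i - 1, j - 1], i - 1, j - 1))
        if j > 0:
            moves.append((wh * A[i, j - 1], i, j - 1))
        if i > 0:
            moves.append((wv * A[i - 1, j], i - 1, j))
        _, i, j = min(moves)
    path.append((0, 0))
    return A, path[::-1]
```

The table has one cell per frame pair, so the cost of this step grows with the product of the two sequence lengths; this is what motivates the shot-level pre-alignment and the banded variant described later for long videos.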

The distance dist(i,j) between the signatures of the two frames is preferably computed as follows (the so-called $\chi^2$ distance):

$dist(i,j) = \dfrac{\big(p_1(i) - p_2(j)\big)^2}{p_1(i) + p_2(j)}$

with $p_1(i)$ and $p_2(j)$ representing respectively the signatures of frame i of the first signal and frame j of the second signal, computed as indicated above by the Bhattacharyya method. It may happen that the frame registration thus carried out in step E5 requires an additional step to refine the matching (for example when the video varies little over time between two shot breaks, which gives a quasi-flat signature signal p(t) for which dynamic programming favours by default the diagonal path in the table). This additional step consists in matching the shot breaks between the two sequences and in deducing from them the linear transformation between the reference video (T2) and the pirated video (T1) between two shot breaks. In the following equation, $\alpha$ is close to the ratio of the frame rates computed between the two videos and $\beta$ is an offset of a few frames.

$T_2 = \alpha T_1 + \beta$, where T represents the temporal coordinate of the frame. As illustrated in figure 6, with the original video at the bottom and the pirated video at the top, the temporal signatures are binarized and thresholded, which yields the shot breaks; there are 9 shot breaks in the pirated video that also find a correspondence in the original video, which itself contains other shot breaks that may be due to signal noise. In order to determine $\alpha$ and $\beta$, for each shot break of the pirated video a shot break that could correspond to it is sought in the original video, within a close neighbourhood, and a so-called robust estimation is performed on the list of corresponding shots in order to compute the coefficients $\alpha$ and $\beta$. This matching of shot breaks makes it possible to check that the frame matching is accurate. If, however, this matching shows that many detected shot breaks cannot be matched between the two videos ("many" meaning, for example, more than 50% of shot breaks without a counterpart), then a registration procedure qualified as a fallback mode (more demanding in computation) is used. This fallback mode consists in cutting the pirated video into several smaller sub-segments, to each of which the registration method described above is applied. A pairing is then obtained for the original video in each of these sub-segments of the pirated video. The pairing chosen is the one that minimizes the overall frame-to-frame pairing cost.
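One possible way to carry out the robust estimation of $\alpha$ and $\beta$ from the matched shot breaks is sketched below. The neighbourhood size and the median-based estimator are illustrative assumptions; the description above only requires that, for each shot break of the pirated video, a nearby shot break of the original video be sought, and that a robust fit of $T_2 = \alpha T_1 + \beta$ be performed on the resulting list.

```python
import numpy as np

def match_shot_breaks(breaks_pirate, breaks_original, max_offset=25):
    """For each shot break of the pirated video, look for a shot break of the
    original video in a close neighbourhood (at most max_offset frames away)."""
    pairs = []
    for t1 in breaks_pirate:
        diffs = np.abs(np.asarray(breaks_original) - t1)
        k = int(np.argmin(diffs))
        if diffs[k] <= max_offset:
            pairs.append((t1, breaks_original[k]))
    return pairs

def robust_linear_fit(pairs):
    """Robustly estimate (alpha, beta) in T2 = alpha * T1 + beta using
    medians of pairwise slopes and of the residual offsets."""
    t1 = np.array([p[0] for p in pairs], dtype=float)
    t2 = np.array([p[1] for p in pairs], dtype=float)
    slopes = [(t2[j] - t2[i]) / (t1[j] - t1[i])
              for i in range(len(t1)) for j in range(i + 1, len(t1))
              if t1[j] != t1[i]]
    alpha = float(np.median(slopes))
    beta = float(np.median(t2 - alpha * t1))
    return alpha, beta
```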

The registration method makes it possible to determine the exact starting frames of the original sequence within the pirated video. This advantageously makes it possible to match each frame of the original video with a frame of the pirated video, and therefore to search for the mark in the right frame.

Thus, in figure 2, the cell surrounded by a circle represents the starting point of the path giving the minimal pairing cost between the two videos. It can be seen that frame number 4 of the original video (horizontal axis) is matched with frame 1 of the pirated video. Knowing the location of the marked frames in the original video, they can then easily be looked up in the corresponding frames of the pirated video. For example, if the mark is located in frames 12 to 50 of the original video, then the mark will be located in frames 9 to 47 of the pirated video.

Searching for the mark within the frame is not the subject of this invention and can be carried out simply by image processing methods.

Figure 3 relates to another embodiment in which the two videos to be matched are both of great length. It is of course possible to use the method described in figures 1 and 2 above, but the comparison may prove very long given the number of frames to compare (and storing the table A(i,j) may exceed the memory capacity of the computer). Figure 3 therefore illustrates a method that is advantageous in this context.

Thus, the videos under consideration are decoded if necessary, in a step S1 for the original video and in a step S4 for the pirated video. The videos may indeed be encoded according to various coding standards, for example MPEG-1, MPEG-2, MPEG-4, and so on. The videos are then cut into shots by means of shot detection, in a step S2 for the original video and in a step S5 for the pirated video. This decomposition into shots is detailed with reference to figure 5; methods known to the person skilled in the art are used to carry it out. At the end of steps S2 and S5, a list of shots is thus obtained for each of the two videos.

In a step S3 for the original video and a step S6 for the pirated video, a signature is extracted for each shot. During these two steps, the list of shots is transformed into a one-dimensional signal, which is easier to manipulate while remaining representative of the content of the video and robust to format conversions, light variations and other alterations of the video signal. It is preferable to compute a distance between the characteristics of successive shots rather than to favour descriptors relating to the shots themselves. In order to be robust to geometric distortions, global shot-level statistics are chosen, in the form of a histogram.

More precisely, a colour histogram over 512 bins is computed for each shot. Using colour provides more precise information than using luminance alone in certain critical cases. The value p(t) of the signature associated with shot t is the distance between the colour histograms H(t) and H(t-1) computed on the current shot and the preceding shot. The preferred distance between histograms is the Bhattacharyya distance, which measures the dissimilarity between statistical distributions. This distance, given in the equation below for two histograms H and K containing 512 bins (N equals 512), captures the temporal variations between successive colour histograms with less noise than a conventional distance:

$d_{Bhat}(H,K) = -\log \sum_{i=1}^{N} \sqrt{h_i\, k_i}$

Each shot is characterized by its signature p(t), representing the distance between two consecutive histograms, computed with the Bhattacharyya method:

$p(t) = d_{Bhat}(H(t), H(t-1))$

In a step S7, a shot-level registration is carried out. This shot registration corresponds to the frame registration carried out in the embodiment previously described with reference to figures 1 and 2. The shots are therefore matched using a dynamic programming method based on a two-dimensional table. Along the horizontal axis (i), each shot of the original video is represented by its signature. Along the vertical axis (j), each shot of the pirated video is represented by its signature. Each element A(i,j) of the table gives the minimal cost for pairing shot i of the original video with shot j of the pirated video. This minimal cost is computed as the sum of:
- the distance between the signatures associated with the two shots,
- the minimal cost path leading to element A(i,j),
which gives the following equation:

$A(i,j) = \min\big(A(i-1,j-1),\; w_h \cdot A(i,j-1),\; w_v \cdot A(i-1,j)\big) + dist(i,j)$

$w_h$ and $w_v$ are penalties associated with horizontal and vertical transitions. Horizontal and vertical transitions correspond respectively to matching one shot of the pirated video with several shots of the original video, and one shot of the original video with several shots of the pirated video. The values of $w_h$ and $w_v$ are preferably greater than 1 in order to penalize these transitions relative to diagonal transitions. The distance dist(i,j) between the signatures of the two shots can be computed as follows (the so-called $\chi^2$ distance):

$dist(i,j) = \dfrac{\big(p_1(i) - p_2(j)\big)^2}{p_1(i) + p_2(j)}$

with $p_1(i)$ and $p_2(j)$ representing respectively the signatures of shot i of the first signal and shot j of the second signal.

Once the table has been filled, the path of minimal cost between the shots of the original video and the shots of the pirated video is determined. This advantageously makes it possible to match each shot of the original video with a shot of the pirated video.

Once the shots have been matched, a frame-level registration is carried out in a step S8. The shot-level registration step has made it possible to find, in the pirated video, the shot that corresponds to the shot of the original video containing the mark, each shot of the original video being matched with a shot of the pirated video. The frame-level registration of step S8 is carried out only for the frames of the two shots thus matched. This frame-level registration step corresponds to the frame-level registration step carried out in the previous embodiment with reference to figures 1 and 2; it is illustrated in figure 4. Advantageously, this frame-level registration step performs the matching only within the grey band shown in figure 4, which considerably reduces the number of computations to be carried out. This step includes a preliminary step of computing a ratio between the two image sequences. Indeed, as described in the previous embodiment, the original video and the pirated video are often at different frame rates. The number of frames for the same shot in the pirated video and in the original video is therefore often different. The computation of this ratio R between the two sequences must take into account the number of shot breaks that were not detected. The final ratio is established by averaging over a large number of shots.

This ratio is therefore used in step S8 to limit the number of computations to be carried out for the frame matching. More precisely, this ratio is used to compute the grey band in figure 4. The characteristics of the band are as follows:
- the starting point consists of the first two frames of the first two shots for which a correspondence (minimal distance) is obtained,
- the orientation (slope) of the band is given by the rate corresponding to the previously computed ratio,
- the width of the band corresponds to the standard deviation associated with the set of ratios between corresponding shots.
Each element A(i,j) of the table gives the minimal cost for pairing a frame of shot i of the original video with a frame of shot j of the pirated video. This minimal cost is computed as the sum of:
- the distance between the signatures associated with the two frames,
- the minimal cost path leading to element A(i,j),
which gives the following equation:

$A(i,j) = \min\big(A(i-1,j-1),\; w_h \cdot A(i,j-1),\; w_v \cdot A(i-1,j)\big) + dist(i,j)$

$w_h$ and $w_v$ are penalties associated with horizontal and vertical transitions. Horizontal and vertical transitions correspond respectively to matching one frame of the pirated video with several frames of the original video, and one frame of the original video with several frames of the pirated video. The values of $w_h$ and $w_v$ are preferably greater than 1 in order to penalize these transitions relative to diagonal transitions. The distance dist(i,j) between the signatures of the two frames is computed as follows:

$dist(i,j) = \dfrac{\big(p_1(i) - p_2(j)\big)^2}{p_1(i) + p_2(j)}$

with $p_1(i)$ and $p_2(j)$ representing respectively the signature of frame i of the first signal and frame j of the second signal.
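Restricting the frame-level table to the grey band can be sketched as follows. This is an illustrative variant of the dynamic-programming fill shown earlier, in which slope and band_width are assumed to come from the previously computed ratio R and from the standard deviation of the per-shot ratios.

```python
import numpy as np

def banded_align(p1, p2, dist, slope, band_width, wh=1.1, wv=1.1):
    """Frame alignment restricted to a diagonal band: only cells (i, j) with
    |j - slope * i| <= band_width are evaluated, as in the grey band of figure 4."""
    n, m = len(p1), len(p2)
    A = np.full((n, m), np.inf)
    A[0, 0] = dist(p1[0], p2[0])
    for i in range(n):
        j_lo = max(0, int(np.floor(slope * i - band_width)))
        j_hi = min(m - 1, int(np.ceil(slope * i + band_width)))
        for j in range(j_lo, j_hi + 1):
            if i == 0 and j == 0:
                continue
            candidates = []
            if i > 0 and j > 0 and np.isfinite(A[i - 1, j - 1]):
                candidates.append(A[i - 1, j - 1])   # diagonal transition
            if j > 0 and np.isfinite(A[i, j - 1]):
                candidates.append(wh * A[i, j - 1])  # horizontal transition
            if i > 0 and np.isfinite(A[i - 1, j]):
                candidates.append(wv * A[i - 1, j])  # vertical transition
            if candidates:
                A[i, j] = min(candidates) + dist(p1[i], p2[j])
    return A
```

Only the cells inside the band are ever evaluated, so the number of distance computations grows with the band area rather than with the full product of the two frame counts.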

Thus, the frames are matched, and the frame or frames of the pirate video likely to contain the watermark are detected because they are matched with the frames of the original video that contain the watermark. The search for the watermark in the image is then carried out by image processing techniques known to those skilled in the art, which are not part of the object of the present invention. It may happen that the frame registration performed in step E5 requires an additional step to refine the matching (for example when the video varies little over time between two shot boundaries, which yields a quasi-flat signature signal p(t) for which the dynamic programming favors the diagonal path in the table by default). This additional step consists in matching the shot boundaries between the two sequences and deducing from them the linear transformation between the reference video (T2) and the pirate video (T1) between two shot boundaries. In the following equation, α is close to the ratio of the frame rates of the two videos and β is an offset of a few frames: T2 = αT1 + β, where T denotes the temporal coordinate of a frame. As illustrated in figure 6, with the original video at the bottom and the pirate video at the top, and with the temporal signatures binarized and thresholded so as to obtain the shot boundaries, there are 9 shot boundaries in the pirate video that also find a correspondence in the original video, which itself contains other shot boundaries that may be due to signal noise. In order to determine α and β, for each shot boundary of the pirate video a shot boundary of the original video that could correspond to it is therefore sought in a close neighborhood, and a so-called robust estimation is performed on the list of corresponding shots to compute the coefficients α and β.
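The sketch below shows one possible form of such a robust estimation of α and β, assuming the shot-boundary frame indices of the pirate video (t1) and their candidate counterparts in the original video (t2) have already been paired. The RANSAC-style consensus loop and the tolerance of two frames are assumptions made for the example; the text only requires some robust fit.

import random

def estimate_alpha_beta(t1, t2, tol=2.0, iters=200):
    """Robustly fit T2 = alpha * T1 + beta from paired shot boundaries.

    t1, t2 : lists of matched boundary frame indices (pirate, original),
             assumed to contain at least two pairs.
    Sketch: fit on random pairs, keep the model with the largest consensus.
    """
    best_inliers, best_model = [], (1.0, 0.0)
    pairs = list(zip(t1, t2))
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(pairs, 2)
        if x1 == x2:
            continue
        alpha = (y2 - y1) / (x2 - x1)
        beta = y1 - alpha * x1
        inliers = [(x, y) for x, y in pairs if abs(alpha * x + beta - y) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (alpha, beta)
    # Refine with a least-squares fit on the inliers of the best model.
    n = len(best_inliers)
    if n >= 2:
        mx = sum(x for x, _ in best_inliers) / n
        my = sum(y for _, y in best_inliers) / n
        var = sum((x - mx) ** 2 for x, _ in best_inliers)
        if var > 0:
            alpha = sum((x - mx) * (y - my) for x, y in best_inliers) / var
            best_model = (alpha, my - alpha * mx)
    return best_model

The boundaries that find no consistent counterpart (for instance those caused by signal noise in the original video) are simply left out of the consensus set and therefore do not bias α and β.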

Figure 5 shows a shot-boundary detection method. The steps T5 to T8 performed on the current frame are respectively identical to the steps T1 to T4 performed on the previous frame. During a step T1, a luminance histogram is extracted from the low-resolution images of the video sequences. When the image is encoded with compression algorithms such as MPEG-2, which use a discrete cosine transform, the low-resolution image is simply obtained from the DC coefficients. The histograms are smoothed, in steps T4 and T8, using a low-pass FIR ("Finite Impulse Response") filter.
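A minimal sketch of steps T1 and T4 is given below, assuming the DC-coefficient thumbnail of each frame is already available as a two-dimensional array of luma values. The 64-bin histogram and the 5-tap moving-average kernel are arbitrary illustration choices, not values given in the text.

import numpy as np

def luminance_histogram(dc_image, bins=64):
    """Step T1 (and T5): histogram of the low-resolution (DC) luma image."""
    hist, _ = np.histogram(dc_image, bins=bins, range=(0, 256))
    return hist.astype(float)

def smooth_histogram(hist, taps=5):
    """Steps T4 and T8: low-pass FIR smoothing of the histogram."""
    kernel = np.ones(taps) / taps          # simple moving-average FIR filter
    return np.convolve(hist, kernel, mode="same")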

In a step T9, the difference between consecutive frames is computed. A shot boundary is detected when the distance exceeds a predetermined threshold (step T10). Then, in a step T11, it is decided whether there actually is a shot boundary, based on the thresholding performed.
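Under the same assumptions as above, steps T9 to T11 could look as follows; the bin-wise absolute difference used as the distance and the fixed threshold are illustrative choices only.

import numpy as np

def is_shot_boundary(hist_prev, hist_curr, threshold):
    """Steps T9-T11: distance between consecutive smoothed histograms,
    compared against a predetermined threshold."""
    distance = float(np.abs(np.asarray(hist_curr) - np.asarray(hist_prev)).sum())  # step T9
    return distance > threshold   # steps T10-T11: decide whether a shot boundary occurs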

In other embodiments, the color histogram is replaced by a vector representative of several characteristics, in particular:
- the duration of the shot (for the second embodiment), represented by the total number of frames forming the shot;
- the activity of the shot: this value is computed from the MPEG motion vectors and represents the motion within the shot;
- the contour vector: this vector is a histogram whose abscissa corresponds to the orientation of the edge points and whose ordinate corresponds to the number of edge points along each orientation, over all the images of the shot.
This list can be extended to other characteristics representative of the images. A sketch of such a composite descriptor is given after this list.
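By way of illustration only, such a composite shot descriptor could be gathered as follows; the function name, its parameters and the 8-bin edge-orientation histogram are hypothetical choices made for the example.

import numpy as np

def shot_signature(frame_histograms, motion_vectors, edge_orientations,
                   orientation_bins=8):
    """Composite shot descriptor: duration, motion activity, contour vector.

    frame_histograms : list of per-frame color histograms of the shot.
    motion_vectors   : array of (dx, dy) MPEG motion vectors of the shot.
    edge_orientations: edge-point orientations (radians) over all frames.
    """
    duration = len(frame_histograms)                      # number of frames in the shot
    mv = np.asarray(motion_vectors, dtype=float)
    activity = float(np.linalg.norm(mv, axis=1).mean()) if len(mv) else 0.0
    contour, _ = np.histogram(edge_orientations,
                              bins=orientation_bins, range=(0.0, np.pi))
    mean_color = np.mean(frame_histograms, axis=0)        # average color histogram
    return np.concatenate(([duration, activity], contour, mean_color))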

In other embodiments, the search for the pirate copy can be carried out not on pirate video copies but on copies of pirate audio files. The search for the pirate audio copy is then performed by extracting a vector characterizing the audio content, which cannot be the one previously described for video, but an audio vector defined by characteristics specific to audio, such as frequency characteristics.
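For this audio variant, one plausible frequency-based signature is a coarse short-time spectrum per analysis window, as sketched below; the window length, hop size and number of bands are assumptions made for the example, not values given in the text.

import numpy as np

def audio_signature(samples, sample_rate, window=2048, hop=1024, bands=16):
    """Per-window frequency signature of an audio segment (a sketch).

    Splits the signal into overlapping windows, takes the FFT magnitude
    and sums it into a few coarse frequency bands per window.
    """
    samples = np.asarray(samples, dtype=float)
    signatures = []
    for start in range(0, len(samples) - window + 1, hop):
        frame = samples[start:start + window] * np.hanning(window)
        spectrum = np.abs(np.fft.rfft(frame))
        # Sum the magnitude spectrum into 'bands' coarse frequency bands.
        band_energy = [chunk.sum() for chunk in np.array_split(spectrum, bands)]
        signatures.append(band_energy)
    return np.asarray(signatures)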

Applications other than the search for watermarks in a pirate copy can also be envisaged: it may simply be a matter of finding one video/audio segment within another (and of synchronizing them temporally).

Claims (10)

1. Method for realigning two multimedia data segments, characterized in that it comprises:
- a step (E2, E3) of extracting a signature for each elementary unit of each segment;
- a step (E5) of computing, from said signature, a pairing cost between an elementary unit of the first segment and an elementary unit of the second segment;
- a step (E5) of determining, from the minimum pairing cost, the minimum pairing path between the elementary units of the two segments, said minimum path then representing the correspondence between the two segments.

2. Method according to claim 1, characterized in that the multimedia data are video data and said elementary units are video frames, the two multimedia data segments being at different frame rates, and in that said method comprises, prior to the pairing-cost computation step:
a. a step of computing the ratio between the two frame rates of said video data segments;
b. an interpolation step so as to obtain two temporal signature signals at the same frame rate.

3. Method according to claim 2, characterized in that it comprises, after the pairing-cost computation step, a step of refining the pairing in which:
- the shot boundaries are detected (T1-T11) in the two video data segments;
- the shot boundaries of the two video data segments are matched;
- the frame matching is adjusted so as to make it coincide with the correspondences between shot boundaries.

4. Method according to either of claims 2 and 3, characterized in that, during the signature extraction step for each elementary unit of each segment:
- a color histogram is computed for each frame of each video segment;
- said signature is computed (E2, E4) by computing a distance between said color histograms of successive frames.

5. Method according to one of claims 2 to 4, characterized in that, when the two video segments are of great length, notably a feature film, said method comprises, prior to said frame-related steps:
- a step (S2, S5) of decomposing each of said video segments into shots;
- a step (S3, S6) of extracting a signature for each shot of each segment;
- a step (S7) of computing, from said signature, a pairing cost between each shot of each segment;
- a step (S7) of determining, from the minimum pairing cost, the minimum path between the shots of the two segments, said minimum path then representing the correspondence between the shots;
said subsequent frame-related steps then being performed only for the frames of said single shot of one video segment and of the matched shot of the other video segment for which said path is minimal.

6. Method according to claim 5, characterized in that it comprises a step of computing a ratio between the two video data segments as a function of the shot boundaries not detected in at least one of the video data segments relative to the other sequence, said ratio then being used to reduce the matching area of said frames in said matched shots.

7. Method according to claim 4, characterized in that it comprises, after the shot-boundary matching step, when the number of shots detected in one segment without a counterpart in the other segment exceeds a threshold:
- a step of decomposing the longest segment into sub-segments;
said step of determining the minimum path between the frames of the two segments then being carried out for each sub-segment, the minimum path retained being the smallest of all the minimum paths obtained for the sub-segments.

8. Method according to one of claims 5 to 7, characterized in that, during the signature extraction step for each elementary unit of each segment:
- a color histogram is computed for each frame of each video segment;
- the duration of the shot is computed;
- motion vectors relating to the shot are computed;
- a contour vector is computed;
- said signature is computed as a function of the color histogram, the shot duration, the motion vectors and said contour vector.

9. Device for realigning two multimedia data segments, characterized in that it comprises:
- means for extracting a signature for each elementary unit of each segment;
- means for computing, from said signature, a pairing cost between an elementary unit of the first segment and an elementary unit of the second segment;
- means for determining, from the minimum pairing cost, the minimum pairing path between the elementary units of the two segments, said minimum path then representing the correspondence between the two segments.
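Claim 5 amounts to a coarse-to-fine strategy for long videos: shots are matched first, and frame-level matching is then restricted to the shots placed in correspondence. The sketch below assumes per-shot and per-frame signatures have already been extracted and that an alignment function returning the minimum-cost (i, j) pairs is available (for shot-level vector signatures the pairing cost would have to compare vectors, for instance with a Euclidean distance); the function name coarse_to_fine_match and its parameters are hypothetical.

def coarse_to_fine_match(shots_a, shots_b, frames_a, frames_b, align):
    """Claim 5 sketch: match shots first, then frames within matched shots.

    shots_a / shots_b   : per-shot signatures of the two segments.
    frames_a / frames_b : lists, one entry per shot, of per-frame signatures.
    align               : alignment function returning a list of (i, j)
                          index pairs on the minimum-cost path.
    """
    shot_pairs = align(shots_a, shots_b)          # coarse, shot-level path
    frame_pairs = []
    for i, j in shot_pairs:
        # Fine, frame-level path restricted to one matched pair of shots.
        for u, v in align(frames_a[i], frames_b[j]):
            frame_pairs.append((i, u, j, v))
    return frame_pairs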
FR0553924A 2005-10-21 2005-12-16 Method of realigning two multimedia data segments for realigning multimedia documents, involves mapping between video frames of two video, based on minimal pairing cost Pending FR2895188A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
FR0553924A FR2895188A1 (en) 2005-12-16 2005-12-16 Method of realigning two multimedia data segments for realigning multimedia documents, involves mapping between video frames of two video, based on minimal pairing cost
PCT/EP2006/067587 WO2007045680A1 (en) 2005-10-21 2006-10-19 Method and device for temporally realigning multimedia documents

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
FR0553924A FR2895188A1 (en) 2005-12-16 2005-12-16 Method of realigning two multimedia data segments for realigning multimedia documents, involves mapping between video frames of two video, based on minimal pairing cost

Publications (1)

Publication Number Publication Date
FR2895188A1 (en) 2007-06-22

Family

ID=36926408

Family Applications (1)

Application Number Title Priority Date Filing Date
FR0553924A Pending FR2895188A1 (en) 2005-10-21 2005-12-16 Method of realigning two multimedia data segments for realigning multimedia documents, involves mapping between video frames of two video, based on minimal pairing cost

Country Status (1)

Country Link
FR (1) FR2895188A1 (en)



Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1244291A2 (en) * 2001-03-16 2002-09-25 Kabushiki Kaisha Toshiba Moving image compression and cut detection

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DORIN COMANICIU AND VISVANATHAN RAMESH: "Real-Time Tracking of Non-Rigid Objects using Mean Shift", CVPR, 2000, pages 1 - 8, XP002397440 *
HUI CHENG ET AL: "Spatial temporal and histogram video registration for digital watermark detection", PROCEEDINGS 2003 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. ICIP-2003. BARCELONA, SPAIN, SEPT. 14 - 17, 2003, INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, NEW YORK, NY : IEEE, US, vol. VOL. 2 OF 3, 14 September 2003 (2003-09-14), pages 735 - 738, XP010670576, ISBN: 0-7803-7750-8 *
HUI CHENG: "A review of video registration methods for watermark detection in digital cinema applications", CIRCUITS AND SYSTEMS, 2004. ISCAS '04. PROCEEDINGS OF THE 2004 INTERNATIONAL SYMPOSIUM ON VANCOUVER, BC, CANADA 23-26 MAY 2004, PISCATAWAY, NJ, USA,IEEE, US, vol. 5, 23 May 2004 (2004-05-23), pages 704 - 707, XP010720361, ISBN: 0-7803-8251-X *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2493514A (en) * 2011-08-02 2013-02-13 Qatar Foundation Using a measure of depth to detect if video data derives from a reference video
EP2569722A1 (en) * 2011-08-02 2013-03-20 Qatar Foundation Copy detection
GB2493514B (en) * 2011-08-02 2015-04-08 Qatar Foundation Copy detection

Similar Documents

Publication Publication Date Title
US7653211B2 (en) Digital watermark embedding apparatus and digital watermark detection apparatus
US7380127B2 (en) Digital watermark detection method and apparatus
US20080226125A1 (en) Method of Embedding Data in an Information Signal
US7471807B2 (en) Digital watermark detection method and apparatus
Singh et al. Detection of upscale-crop and splicing for digital video authentication
Bestagini et al. Image phylogeny tree reconstruction based on region selection
US7376241B2 (en) Discrete fourier transform (DFT) watermark
JP2006209741A (en) Data processor and data processing method
Costa et al. Hash-based frame selection for video phylogeny
FR2895188A1 (en) Method of realigning two multimedia data segments for realigning multimedia documents, involves mapping between video frames of two video, based on minimal pairing cost
EP1330110B1 (en) Method and system for watermark decoding
JP4829891B2 (en) Method and apparatus for reading digital watermarks, computer program product and corresponding storage means
US7277488B2 (en) Data processing apparatus and method
US7194108B2 (en) Data processing apparatus and method
Pham et al. Resolution enhancement of low-quality videos using a high-resolution frame
Lefèbvre et al. Image and video fingerprinting: forensic applications
WO2007045680A1 (en) Method and device for temporally realigning multimedia documents
Seong et al. Scene-based watermarking method for copy protection using image complexity and motion vector amplitude
JP2004271958A (en) Data processing method and apparatus therefor
Liu et al. Video watermarking based on scene detection and 3D DFT
JP2008085540A (en) Program, detection method and detector