EP1222634A4 - Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing - Google Patents

Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Info

Publication number
EP1222634A4
Authority
EP
European Patent Office
Prior art keywords
video
describing
interval
hierarchicalsummary
inputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP00966554A
Other languages
German (de)
French (fr)
Other versions
EP1222634A1 (en)
Inventor
Jae Gon Kim
Hyun Sung Chang
Munchur Kim
Jin Woong Kim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronics and Telecommunications Research Institute ETRI
Original Assignee
Electronics and Telecommunications Research Institute ETRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute ETRI filed Critical Electronics and Telecommunications Research Institute ETRI
Publication of EP1222634A1
Publication of EP1222634A4

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 - Image coding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 - Querying
    • G06F16/738 - Presentation of query results
    • G06F16/739 - Presentation of query results in form of a video summary, e.g. the video summary being a video sequence, a composite still image or having synthesized frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 - Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74 - Browsing; Visualisation therefor
    • G06F16/745 - Browsing; Visualisation therefor the internal structure of a single video sequence

Definitions

  • the present invention relates to a video summary description scheme for efficient video overview and browsing, and also relates to a method and system of video summary description generation to describe video summary according to the video summary description scheme.
  • the technical fields involved in the present invention are content-based video indexing and browsing/searching, and the content-based summarization of video and the description of that summary.
  • the format of summarizing video largely falls into dynamic summary and static summary.
  • the video summary description scheme according to the present invention efficiently describes both the dynamic summary and the static summary within a single, unified description scheme.
  • because the existing video summary and description schemes simply provide the information of the video intervals included in the video summary, they are limited to conveying the overall video contents through playback of the summary video.
  • the existing video summary provides only the video intervals considered important according to criteria determined by the video summary provider. Accordingly, if the criteria of the users and the provider differ, or the users have special criteria, the users cannot obtain the video summary they desire. That is, although the existing summary video lets users select a summary of the desired level by providing summaries at several levels, the users' choice is limited in that they cannot select by the contents of the summary videos.
  • the US patent 5,821,945 entitled “Method and apparatus for video browsing based on content and structure" represents video in compact form and provides browsing functionality accessing to the video with desired content through the representation.
  • however, that patent concerns a static summary based on representative frames; since the representative frame of each video shot provides only visual information representing the shot, the patent is limited in the information it can convey through the summary.
  • the video description scheme and browsing method utilize the dynamic summary based on the video segment.
  • An object of the present invention is to provide a hierarchical video summary description scheme which comprises representative frame information and representative sound information for each video interval included in the summary video, which makes feasible a user-customized, event-based summary offering users a choice over the contents of the summary video, and which enables efficient browsing, together with a video summary description data generation method and system using the description scheme.
  • the HierarchicalSummary DS according to an embodiment of the present invention comprises at least one HighlightLevel DS which is describing a highlight level, and the HighlightLevel DS comprises at least one HighlightSegment DS which is describing the highlight segment information constituting the summary video of the highlight level.
  • the HighlightLevel DS is composed of at least one lower-level HighlightLevel DS. More preferably, the HighlightSegment DS comprises a VideoSegmentLocator DS which is describing the time information or the video itself of said corresponding highlight segment.
  • the HighlightSegment DS further comprises an ImageLocator DS which is describing the representative frame of said corresponding highlight segment. It is more preferable that the HighlightSegment DS further comprises a SoundLocator DS which is describing the representative sound information of said corresponding highlight segment.
  • the HighlightSegment DS further comprises ImageLocator DS which is describing the representative frame of said corresponding highlight segment and SoundLocator DS which is describing the representative sound information of said corresponding highlight segment.
  • the ImageLocator DS describes time information or image data of the representative frame of video interval corresponding to said corresponding highlight segment.
  • the HighlightSegment DS further comprises AudioSegmentLocator DS which is describing the audio segment information constituting an audio summary of said corresponding highlight segment.
  • the AudioSegmentLocator DS describes time information or audio data of the audio interval of said corresponding highlight segment.
  • the HierarchicalSummary DS includes a SummaryComponentList describing and enumerating all of the SummaryComponentTypes which are included in the HierarchicalSummary DS.
  • the HierarchicalSummary DS includes a SummaryThemeList DS which enumerates the events or subjects comprised in the summary and describes their IDs, thereby describing an event-based summary and permitting the users to browse the summary video by the event or subject described in said SummaryThemeList DS.
  • the SummaryThemeList DS includes an arbitrary number of SummaryThemes as elements, said SummaryTheme includes an attribute id representing the corresponding event or subject, and the SummaryTheme further includes an attribute parentId which describes the id of the event or subject of the upper level.
  • the HighlightLevel DS includes an attribute themeIds describing the ids of the common events or subjects if all of the HighlightSegments and HighlightLevels constituting the corresponding highlight level have common events or subjects.
  • the HighlightSegment DS includes an attribute themeIds, referencing said id attributes, which describes the event or subject of the corresponding highlight segment.
  • a computer-readable recording medium where a HierarchicalSummary DS is stored therein is provided.
  • the stored HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing a highlight level,
  • the HighlightLevel DS comprises at least one HighlightSegment DS which is describing the highlight segment information constituting the summary video of that highlight level, and
  • the HighlightSegment DS comprises a VideoSegmentLocator DS describing the time information or the video itself of said corresponding highlight segment.
  • a method for generating video summary description data according to video summary description scheme by inputting original video includes the following steps: video analyzing step which is producing video analysis result by inputting the original video and then analyzing the original video; summary rule defining step which is defining the summary rule for selecting summary video interval; summary video interval selecting step which is constituting summary video interval information by selecting the video interval capable of summarizing video contents from the original video by inputting said original video analysis result and said summary rule; and video summary describing step which is producing video summary description data according to the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting step.
  • the video analyzing step comprises a feature extracting step which outputs the types of features and the video time intervals at which those features are detected by inputting the original video and extracting those features, an event detecting step which detects key events included in the original video by inputting said types of features and the video time intervals at which those features are detected, and an episode detecting step which detects episodes by dividing the original video into story-flow units on the basis of said detected events.
  • the summary rule defining step defines the types of summary events, which are the bases for selecting the summary video interval, and provides them to said video summary describing step.
  • the method further comprises representative frame extracting step which is providing the representative frame to said video summary describing step by inputting said summary video interval information and extracting representative frame.
  • the method further comprises representative sound extracting step which is providing the representative sound to said video summary describing step by inputting said summary video interval information and extracting representative sound.
  • the program executes the following steps: a feature extracting step which outputs the types of features and the video time intervals at which those features are detected; an event detecting step which detects key events included in the original video by inputting said types of features and said video time intervals at which those features are detected; an episode detecting step which detects episodes by dividing the original video into story-flow units on the basis of said detected key events; a summary rule defining step which defines the summary rule for selecting the summary video interval; a summary video interval selecting step which constitutes summary video interval information by selecting the video intervals capable of summarizing the video contents of the original video by inputting said detected episodes and said summary rule; and a video summary describing step which generates video summary description data with the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting step.
  • a system for generating video summary description data according to video summary description scheme by inputting original video includes video analyzing means for outputting video analysis result by inputting original video and analyzing the original video, summary rule defining means for defining the summary rule for selecting the summary video interval, summary video interval selecting means for constituting summary video interval information by selecting the video interval capable of summarizing the video contents of the original video by inputting said video analysis result and said summary rule, and video summary describing means for generating video summary description data with HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting means.
  • the HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing highlight level
  • the HighlightLevel DS comprises at least one HighlightSegment DS which is describing highlight segment information constituting the summary video of the highlight level
  • the HighlightSegment DS comprises VideoSegmentLocator DS describing time information or video itself of said corresponding highlight segment.
  • the video analyzing means comprises feature extracting means for outputting the types of features and the video time intervals at which those features are detected by inputting the original video and extracting those features, event detecting means for detecting key events included in the original video by inputting said types of features and the video time intervals at which those features are detected, and episode detecting means for detecting episodes by dividing the original video into story-flow units on the basis of said detected events.
  • the summary rule defining means defines the types of summary events, which are the bases for selecting the summary video interval, and provides them to said video summary describing means.
  • system further comprises representative frame extracting means for providing the representative frame to said video summary describing means by inputting said summary video interval information and extracting representative frame.
  • system further comprises representative sound extracting means for providing the representative sound to said video summary describing means by inputting said summary video interval information and extracting representative sound.
  • the program causes a computer to function as feature extracting means for outputting the types of features and the video time intervals at which those features are detected, event detecting means for detecting key events included in the original video by inputting said types of features and said video time intervals at which those features are detected, episode detecting means for detecting episodes by dividing the original video into story-flow units on the basis of said detected key events, summary rule defining means for defining the summary rule for selecting the summary video interval, summary video interval selecting means for constituting summary video interval information by selecting the video intervals capable of summarizing the video contents of the original video by inputting said detected episodes and said summary rule, and video summary describing means for generating video summary description data with the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting means.
  • a video browsing system in a server/client environment includes a server which is equipped with a video summary description data generation system that generates video summary description data on the basis of the HierarchicalSummary DS by inputting the original video and links said original video and the video summary description data, and a client which performs overview, browsing and navigation of the video by accessing the original video on said server using said video summary description data.
  • FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.
  • FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).
  • FIG. 3 is a compositional drawing of user interface of the tool for playing and browsing of the summary video inputting the video summary description data described by the same description scheme as FIG. 2.
  • FIG. 4 is a compositional drawing for the flow of the data and control for hierarchical browsing using the summary video of the present invention.
  • FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.
  • the apparatus for generating video description data is composed of a feature extracting part 101, an event detecting part 102, an episode detecting part 103, a summary video interval selecting part 104, a summary rule defining part 105, a representative frame extracting part 106, a representative sound extracting part 107 and a video summary describing part 108.
  • the feature extracting part 101 extracts necessary features to generate summary video by inputting the original video.
  • the general features include shot boundary, camera motion, caption region, face region and so on.
  • in the feature extracting step, those features are extracted and the types of features and the video time intervals at which they are detected are output to the event detecting step in the format of (type of feature, feature serial number, time interval).
  • for example, (camera zoom, 1, 100 ~ 150) represents the information that the first camera zoom was detected in frames 100 ~ 150.
  • the event detecting part 102 detects key events which are included in the original video. Because these events must represent the contents of the original video well and are the references for generating summary video, these events are generally differently defined according to genre of the original video. These events either may represent higher meaning level or may be visual features which can directly infer higher meaning. For example, in the case of soccer video, goal, shoot, caption, replay and so on can be defined as events.
  • the event detecting part 102 outputs the types of detected events and the time intervals in the format of (type of event, event serial number, time interval). For example, the event information indicating that the first goal occurred between frames 200 and 300 is output in the format of (goal, 1, 200 ~ 300).
  • the episode detecting part 103 divides the video into episodes, units larger than an event, based on the story flow. After key events are detected, an episode is detected so as to include the accompanied events which follow the key event.
  • for example, in the case of soccer video, goal and shoot can be key events, and the bench scene, audience scene, goal ceremony scene, replay of the goal scene and so on compose the accompanied events of those key events.
  • the episode detection information is output in the format of (episode number, time interval, priority, feature shot, associated event information).
  • the episode number is serial number of the episode and the time interval represents the time interval of the episode by the shot unit.
  • the priority represents the degree of importance of the episode.
  • the feature shot represents the shot number including the most important information out of the shots comprising the episode and the associated event information represents the event number of the event related to the episode.
  • for example, in the case of representing the episode detection information as (episode 1, 4 ~ 6, 1, 5, goal 1, caption 3), the information means that the first episode includes the 4th ~ 6th shots, the priority is the highest (1), the feature shot is the fifth shot, and the associated events are the first goal and the third caption.
  • the summary video interval selecting part 104 selects the video interval at which the contents of the original video can be summarized well on the basis of the detected episode. The reference of selecting the interval is performed by the predefined summary rule of the summary rule defining part 105.
  • the summary rule defining part 105 defines rule for selecting the summary interval and outputs control signal for selecting the summary interval.
  • the summary rule defining part 105 also outputs the types of summary events, which are bases in selecting the summary video interval, to the video summary describing part 108.
  • the summary video interval selecting part 104 outputs the time information of the selected summary video intervals in frame units and outputs the types of events corresponding to those video intervals. That is, the format of (100 ~ 200, goal), (500 ~ 700, shoot) and so on represents that the video segments selected as the summary video intervals are frames 100 ~ 200, frames 500 ~ 700 and so on, and that the event of each segment is goal and shoot respectively. As well, information such as a file name can be output to facilitate access to an additional video which is composed of only the summary video intervals. Once the summary video interval selection is completed, the representative frame and the representative sound are extracted by the representative frame extracting part 106 and the representative sound extracting part 107 respectively, using the summary video interval information. The representative frame extracting part 106 outputs the image frame number representing the summary video interval or outputs the image data.
  • the representative sound extracting part 107 outputs the sound data representing the summary video interval or outputs the sound time interval.
  • the video summary describing part 108 describes the related information in order to make efficient summary and browsing functionalities feasible according to the Hierarchical Summary Description Scheme of the present invention shown in FIG. 2.
  • the main information of the Hierarchical Summary Description Scheme comprises the types of summary events of the summary video, the time information describing each summary video interval, the representative frame, the representative sound, and the event types in each interval.
  • the video summary describing part 108 outputs the video summary description data according to the description scheme illustrated in FIG. 2.
  • FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).
  • the HierarchicalSummary DS 201 describing the video summary is composed of one or more HighlightLevel DSs 202 and zero or one SummaryThemeList DS 203.
  • the SummaryThemeList DS provides the functionality of the event based summary and browsing by enumeratively describing the information of subject or event constituting the summary.
  • the HighlightLevel DS 202 is composed of as many HighlightSegment DSs 204 as there are video intervals constituting the summary video of that level, and of zero or more HighlightLevel DSs.
  • the HighlightSegment DS describes the information corresponding to the interval of each summary video.
  • the HighlightSegment DS is composed of one VideoSegmentLocator DS 205, zero or several ImageLocator DSs 206, zero or several SoundLocator DSs 207 and AudioSegmentLocator 208.
  • the HierarchicalSummary DS has an attribute SummaryComponentList which explicitly represents the summary types comprised in the HierarchicalSummary DS.
  • the SummaryComponentList is derived on the basis of the SummaryComponentType and is described by enumerating all of the comprised SummaryComponentTypes.
  • the keyFrames represents the key frame summary composed of representative frames.
  • the keyVideoClips represents the key video clip summary composed of key video intervals' sets.
  • the keyEvents represents the summary composed of the video interval corresponding to either the event or the subject.
  • the keyAudioClips represents the key audio clip summary composed of representative audio intervals' sets.
  • the unconstraint represents the types of summary defined by users except for said summaries.
  • the HierarchicalSummary DS might comprise the SummaryThemeList DS which is enumerating the event (or subject) comprised in the summary and describing the ID.
  • the SummaryThemeList has arbitrary number of SummaryThemes as elements.
  • the SummaryTheme has an attribute id of ID type and optionally has an attribute parentId.
  • the SummaryThemeList DS permits the users to browse the summary video from the viewpoint of each event or subject described in the SummaryThemeList. That is, an application tool that takes the description data as input lets the users select the desired subject by parsing the SummaryThemeList DS and presenting the information to them. If these subjects are enumerated in a flat format and the number of subjects is large, it may not be easy for the users to find the desired subject.
  • the users can browse efficiently by subject after finding the desired subject.
  • the present invention therefore permits the attribute parentId to be used selectively in the SummaryTheme.
  • the parentId refers to the upper element (upper subject) in the tree structure.
  • the HierarchicalSummary DS of the present invention comprises HighlightLevel DSs and each HighlightLevel DS comprises one or more HighlightSegment DS which corresponds to a video segment (or interval) constituting the summary video.
  • the HighlightLevel DS has an attribute themeIds of IDREFS type.
  • the themeIds describes the subject or event ids that are common to the child HighlightLevel DSs of the corresponding HighlightLevel DS or to all the HighlightSegment DSs comprised in the HighlightLevel; these ids are the ones described in said SummaryThemeList DS.
  • the themeIds can denote several events and, when describing an event-based summary, solves the problem of the same id being unnecessarily repeated in every segment constituting the level, since a themeIds representing the common subject type need not be placed in each HighlightSegment constituting the level.
  • the HighlightSegment DS comprises one VideoSegmentLocator DS and one or more ImageLocator DS, zero or one SoundLocator DS and zero or one AudioSegmentLocator DS.
  • the VideoSegmentLocator DS describes the time information or video itself of the video segment constituting the summary video.
  • the ImageLocator DS describes the image data information of the representative frame of the video segment.
  • the SoundLocator DS describes the sound information representing the corresponding video segment interval.
  • the AudioSegmentLocator DS describes the interval time information of the audio segment constituting the audio summary or the audio information itself.
  • the HighlightSegment DS has an attribute themeIds. The themeIds describes, using the ids defined in the SummaryThemeList, which of the subjects or events described in said SummaryThemeList DS relate to the corresponding highlight segment.
  • the themeIds can denote more than one event; by allowing one highlight segment to have several subjects, the present invention efficiently avoids the duplicated descriptions that are unavoidable when, as in the existing method for event-based summary, the video segment must be described once per event (or subject). A parsing sketch illustrating this theme-based access is given after this list.
  • through the introduction of the HighlightSegment DS for describing the highlight segments constituting the summary video, the present invention makes feasible both an overview through the highlight segment video and efficient navigation and browsing utilizing the representative frame and the representative sound of each segment.
  • by means of the SoundLocator DS, which can describe the representative sound corresponding to a video interval through a characteristic sound capable of representing that interval (for example a gun shot, an outcry, the anchor's comment in soccer (for example, goal and shoot), an actor's name in a drama, a specific word, etc.), it is possible to browse efficiently by roughly understanding, within a short time and without playing the video interval, whether the interval is an important interval containing the desired contents or what contents are contained in the interval.
  • FIG. 3 is a compositional drawing of user interface of the tool for playing and browsing of the summary video inputting the video summary description data described by the same description scheme as FIG. 2.
  • the video playing part 301 plays the original video or the summary video according to the control of the user.
  • the original video representative frame part 305 shows the representative frames of the shots of the original video; that is, it is composed of a series of images with reduced sizes. The representative frames of the original video shots are described not by the HierarchicalSummary DS of the present invention but by an additional description scheme, and they can be utilized when that description data is provided along with the summary description data described by the HierarchicalSummary DS of the present invention.
  • the user accesses to the original video shot corresponding to the representative frame by clicking the representative frame.
  • the summary video level 0 representative frame and representative sound part 307 and the summary video level 1 representative frame and representative sound part 306 show the frame and sound information representing each video interval of the summary video level 0 and the summary video level 1 respectively. That is, each is composed of a series of reduced-size images and iconic representations of the corresponding sounds.
  • when the user clicks a representative frame in a summary video representative frame and representative sound part, the user accesses the original video interval corresponding to that representative frame.
  • when the user clicks the representative sound icon corresponding to a representative frame of the summary video, the representative sound of that video interval is played.
  • the summary video controlling part 302 inputs the control for user selection to play the summary video.
  • the user does overview and browsing by selecting the summary of the desired level through the level selecting part 303.
  • the event selecting part 304 enumerates the event and the subject provided by the SummaryThemeList and the user does overview and browsing by selecting the desired event. After all, this realizes the summary of the user customization type.
  • FIG. 4 is a compositional drawing for the flow of the data and control for hierarchical browsing using the summary video of the present invention.
  • the browsing is performed by accessing the data for browsing with the method of FIG. 4 through the use of the user interface of FIG.3.
  • the data for browsing are the summary video and the representative frame of the summary video and the original video 406 and the original video representative frame 405.
  • the summary video is assumed to have two levels. Needless to say, the summary video may have more levels than two.
  • the summary video level 0 401 is what is summarized with shorter time than the summary video level 1 403. That is, the summary video level 1 contains more contents than the summary video level 0.
  • the summary video level 0 representative frame 402 is the representative frame of the summary video level 0 and the summary video level 1 representative frame 404 is the representative frame of the summary video level 1.
  • the summary video and the original video are played through the video playing part 301 of FIG. 3.
  • the summary video level 0 representative frame is displayed in the summary video level 0 representative frame and the representative sound part 306, the summary video level 1 representative frame is displayed in the summary video level 1 representative frame and the representative sound part 307, and the original video representative frame is displayed in the original video representative frame part 305.
  • the hierarchical browsing method illustrated in FIG. 4 can follow various types of hierarchical paths, as in the following examples. Case 1: (1) - (2)
  • the summary video may play either the summary video level 0 or the summary video level 1.
  • the video interval of interest is identified through the summary video representative frames. If the exact scene that the user desires to find is identified among the summary video representative frames, it is played by directly accessing the video interval of the original video to which the representative frame is linked. If more detailed information is needed, the user may reach the desired original video either by examining the representative frames of the next level or by hierarchically examining the contents of the representative frames of the original video.
  • the existing general video indexing and browsing techniques divide the original video into shot units, construct a representative frame representing each shot, and access a shot by recognizing the desired shot from its representative frame.
  • the case 1 is the case that plays the summary video level 0 and directly accesses to the original video from the summary video level 0 representative frame.
  • the case 2 plays the summary video level 0, selects the representative frame of greatest interest from the summary video level 0 representative frames, identifies the desired scene among the summary video level 1 representative frames neighboring that representative frame in order to obtain more detailed information, and then accesses the original video.
  • the case 3 applies when, in case 2, access from the summary video level 1 representative frame to the original video is difficult: the representative frame of greatest interest is selected to obtain more detailed information, the desired scene is identified among the neighboring original video representative frames, and the original video is then accessed through that representative frame of the original video.
  • the case 4 and case 5 are the cases that start at the playing of the summary video level 1 and the paths are similar to the above cases.
  • the present invention can provide a system in which multiple clients access one server and perform video overview and browsing.
  • the server receives the original video as input, produces the video summary description data on the basis of the hierarchical summary description scheme, and is equipped with a video summary description data generation system that links said original video and the video summary description data.
  • the client accesses the server through a communication network, performs an overview of the video using the video summary description data, and browses and navigates the video by accessing the original video.
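
The theme-based access described in the list above can be pictured with a short parsing sketch. This is only an illustration: the element and attribute spellings (SummaryThemeList, SummaryTheme, id, parentId, HighlightLevel, HighlightSegment, themeIds) are assumed here from the names used in this description, not quoted from the normative MPEG-7 syntax.

    # Sketch: find the highlight segments that belong to a chosen event/subject.
    # Element and attribute names are assumptions based on the DS names above.
    import xml.etree.ElementTree as ET

    def themes_by_id(summary_root):
        """Collect SummaryTheme entries keyed by their id attribute."""
        themes = {}
        for theme in summary_root.iter("SummaryTheme"):
            themes[theme.get("id")] = {
                "label": (theme.text or "").strip(),
                "parentId": theme.get("parentId"),  # optional upper-level subject
            }
        return themes

    def segments_for_theme(summary_root, wanted_id):
        """Return HighlightSegment elements whose themeIds include wanted_id.

        A level-wide themeIds counts for every segment in that level, and one
        segment may carry several ids, so a segment never has to be described twice.
        """
        hits = []
        for level in summary_root.iter("HighlightLevel"):
            level_ids = set((level.get("themeIds") or "").split())
            for seg in level.findall("HighlightSegment"):
                seg_ids = set((seg.get("themeIds") or "").split()) | level_ids
                if wanted_id in seg_ids:
                    hits.append(seg)
        return hits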

Abstract

The present invention relates to a video summary description scheme for describing a video summary by metadata. The video summary provides an overview functionality, which makes it feasible to understand the overall contents of the original video within a short time, and navigation and browsing functionalities, which make it feasible to search the desired video contents efficiently. According to the present invention, the HierarchicalSummary Description Scheme (DS) comprises at least one HighlightLevel DS and optionally comprises a SummaryThemeList DS. The HighlightLevel DS describes a highlight level and may have zero or more lower HighlightLevel DSs. The HighlightLevel DS comprises one or more HighlightSegment DSs which describe the highlight segment information constituting the video summary of the highlight level. The HighlightSegment DS comprises the VideoSegmentLocator DS for describing the time information of the corresponding segment interval. Also, the HighlightSegment DS may comprise the ImageLocator DS for describing the representative image information of the corresponding segment, the SoundLocator DS for describing the representative sound information, and the AudioSegmentLocator DS for describing the audio segment information constituting the audio summary.

Description

VIDEO SUMMARY DESCRIPTION SCHEME AND METHOD AND SYSTEM OF VIDEO SUMMARY DESCRIPTION DATA GENERATION FOR EFFICIENT
OVERVIEW AND BROWSING
TECHNICAL FIELD The present invention relates to a video summary description scheme for efficient video overview and browsing, and also relates to a method and system of video summary description generation to describe video summary according to the video summary description scheme.
The technical fields involved in the present invention are content-based video indexing and browsing/searching, and the content-based summarization of video and the description of that summary.
BACKGROUND OF THE INVENTION
Video summaries largely fall into two formats: dynamic summary and static summary. The video summary description scheme according to the present invention efficiently describes both the dynamic summary and the static summary within a single, unified description scheme.
Generally, because existing video summaries and description schemes simply provide the information of the video intervals included in the video summary, they are limited to conveying the overall video contents through playback of the summary video.
However, in many cases, the browsing for identifying and revisiting concerned parts through overview of overall contents is needed rather than only overview of overall contents through the summary video.
Also, the existing video summary provides only the video intervals considered important according to criteria determined by the video summary provider. Accordingly, if the criteria of the users and the provider differ, or the users have special criteria, the users cannot obtain the video summary they desire. That is, although the existing summary video lets users select a summary of the desired level by providing summaries at several levels, the users' choice is limited in that they cannot select by the contents of the summary videos. The US patent 5,821,945, entitled "Method and apparatus for video browsing based on content and structure", represents video in a compact form and provides a browsing functionality for accessing the video with the desired content through that representation.
However, that patent concerns a static summary based on representative frames; since the representative frame of each video shot provides only visual information representing the shot, the patent is limited in the information it can convey through the summary.
As compared with that patent, the video summary description scheme and browsing method of the present invention utilize a dynamic summary based on video segments.
A video summary description scheme was proposed in the MPEG-7 Description Scheme (V0.5) announced in ISO/IEC JTC1/SC29/WG11 MPEG-7 Output Document No. N2844 in July 1999. Because that scheme describes only the interval information of each video segment of the dynamic summary video, in spite of providing basic functionalities for describing a dynamic summary, it has problems in the following aspects.
First, there is the drawback that it cannot provide access to the original video from the summary segments constituting the summary video. That is, users want to access the original video to obtain more detailed information on the basis of the summary contents and the overview gained through the summary video, but the existing scheme cannot meet this need.
Secondly, the existing scheme cannot provide sufficient audio summary description functionalities. Finally, there is the drawback that, when representing an event-based summary, duplicate descriptions and complexity of searching are unavoidable.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a hierarchical video summary description scheme which comprises representative frame information and representative sound information for each video interval included in the summary video, which makes feasible a user-customized, event-based summary offering users a choice over the contents of the summary video, and which enables efficient browsing, together with a video summary description data generation method and system using the description scheme.
In order to achieve the object, the HierarchicalSummary DS according to an embodiment of the present invention comprises at least one HighlightLevel DS which is describing a highlight level, and the HighlightLevel DS comprises at least one HighlightSegment DS which is describing the highlight segment information constituting the summary video of the highlight level.
Preferably, the HighlightLevel DS is composed of at least one lower-level HighlightLevel DS. More preferably, the HighlightSegment DS comprises a VideoSegmentLocator DS which is describing the time information or the video itself of said corresponding highlight segment.
It is preferable that the HighlightSegment DS further comprises an ImageLocator DS which is describing the representative frame of said corresponding highlight segment. It is more preferable that the HighlightSegment DS further comprises a SoundLocator DS which is describing the representative sound information of said corresponding highlight segment.
Preferably, the HighlightSegment DS further comprises ImageLocator DS which is describing the representative frame of said corresponding highlight segment and SoundLocator DS which is describing the representative sound information of said corresponding highlight segment.
More preferably, the ImageLocator DS describes time information or image data of the representative frame of video interval corresponding to said corresponding highlight segment. Preferably, the HighlightSegment DS further comprises AudioSegmentLocator DS which is describing the audio segment information constituting an audio summary of said corresponding highlight segment.
More preferably, the AudioSegmentLocator DS describes time information or audio data of the audio interval of said corresponding highlight segment.
It is preferable that the HierarchicalSummary DS includes a SummaryComponentList describing and enumerating all of the SummaryComponentTypes which are included in the HierarchicalSummary DS.
Also, it is preferable that the HierarchicalSummary DS includes a SummaryThemeList DS which enumerates the events or subjects comprised in the summary and describes their IDs, thereby describing an event-based summary and permitting the users to browse the summary video by the event or subject described in said SummaryThemeList DS.
It is more preferable that the SummaryThemeList DS includes an arbitrary number of SummaryThemes as elements, that said SummaryTheme includes an attribute id representing the corresponding event or subject, and that the SummaryTheme further includes an attribute parentId which describes the id of the event or subject of the upper level.
Preferably, the HighlightLevel DS includes an attribute themeIds describing the ids of the common events or subjects if all of the HighlightSegments and HighlightLevels constituting the corresponding highlight level have common events or subjects.
More preferably, the HighlightSegment DS includes an attribute themeIds, referencing said id attributes, which describes the event or subject of the corresponding highlight segment.
Also, according to the present invention, a computer-readable recording medium where a HierarchicalSummary DS is stored therein is provided. Preferably, the HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing a highlight level, the HighlightLevel DS comprises at least one HighlightSegment DS which is describing the highlight segment information constituting the summary video of that highlight level, and the HighlightSegment DS comprises a VideoSegmentLocator DS describing the time information or the video itself of said corresponding highlight segment.
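
As a concrete illustration of the structure just outlined, the following sketch builds a small two-segment description. It is only a sketch: the element and attribute names mirror the wording of this text, and the actual MPEG-7 serialization of the HierarchicalSummary DS may differ.

    # Sketch of a two-segment summary description; names follow this text,
    # not the normative MPEG-7 encoding.
    import xml.etree.ElementTree as ET

    summary = ET.Element("HierarchicalSummary",
                         {"summaryComponentList": "keyVideoClips keyFrames keyEvents"})

    themes = ET.SubElement(summary, "SummaryThemeList")
    ET.SubElement(themes, "SummaryTheme", {"id": "E0"}).text = "goal"
    ET.SubElement(themes, "SummaryTheme", {"id": "E1"}).text = "shoot"

    level0 = ET.SubElement(summary, "HighlightLevel", {"name": "level0"})
    for start, end, theme_id in [("100", "200", "E0"), ("500", "700", "E1")]:
        seg = ET.SubElement(level0, "HighlightSegment", {"themeIds": theme_id})
        ET.SubElement(seg, "VideoSegmentLocator", {"start": start, "end": end})  # summary clip interval
        ET.SubElement(seg, "ImageLocator", {"frame": start})                     # representative frame
        ET.SubElement(seg, "SoundLocator", {"start": start, "end": end})         # representative sound

    print(ET.tostring(summary, encoding="unicode"))
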
Also, according to the present invention, a method for generating video summary description data according to video summary description scheme by inputting original video is provided. The method includes the following steps: video analyzing step which is producing video analysis result by inputting the original video and then analyzing the original video; summary rule defining step which is defining the summary rule for selecting summary video interval; summary video interval selecting step which is constituting summary video interval information by selecting the video interval capable of summarizing video contents from the original video by inputting said original video analysis result and said summary rule; and video summary describing step which is producing video summary description data according to the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting step.
Preferably, the video analyzing step comprises a feature extracting step which outputs the types of features and the video time intervals at which those features are detected by inputting the original video and extracting those features, an event detecting step which detects key events included in the original video by inputting said types of features and the video time intervals at which those features are detected, and an episode detecting step which detects episodes by dividing the original video into story-flow units on the basis of said detected events.
Preferably, the summary rule defining step defines the types of summary events, which are the bases for selecting the summary video interval, and provides them to said video summary describing step.
More preferably, the method further comprises representative frame extracting step which is providing the representative frame to said video summary describing step by inputting said summary video interval information and extracting representative frame.
More preferably, the method further comprises a representative sound extracting step which provides the representative sound to said video summary describing step by inputting said summary video interval information and extracting the representative sound.
Also, according to the present invention, a computer-readable recording medium where a program is stored therein is provided. The program executes the following steps: a feature extracting step which outputs the types of features and the video time intervals at which those features are detected; an event detecting step which detects key events included in the original video by inputting said types of features and said video time intervals at which those features are detected; an episode detecting step which detects episodes by dividing the original video into story-flow units on the basis of said detected key events; a summary rule defining step which defines the summary rule for selecting the summary video interval; a summary video interval selecting step which constitutes summary video interval information by selecting the video intervals capable of summarizing the video contents of the original video by inputting said detected episodes and said summary rule; and a video summary describing step which generates video summary description data with the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting step.
Also, according to the present invention, a system for generating video summary description data according to a video summary description scheme by inputting original video is provided. The system includes video analyzing means for outputting a video analysis result by inputting the original video and analyzing the original video, summary rule defining means for defining the summary rule for selecting the summary video interval, summary video interval selecting means for constituting summary video interval information by selecting the video intervals capable of summarizing the video contents of the original video by inputting said video analysis result and said summary rule, and video summary describing means for generating video summary description data with the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting means.
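
The flow of the method and system described above can be pictured as a short pipeline. The stage names and data layouts below are placeholders chosen for illustration, not the claimed implementation; each analysis stage is passed in as a callable, and the stand-in lambdas only show the data flow.

    # Sketch of the generation pipeline: analyze -> define rule -> select intervals -> describe.
    def generate_summary_description(original_video, analyze, define_rule, select, describe):
        analysis = analyze(original_video)   # video analyzing step (features, events, episodes)
        rule = define_rule()                 # summary rule defining step
        intervals = select(analysis, rule)   # summary video interval selecting step
        return describe(intervals)           # video summary describing step -> HierarchicalSummary data

    description = generate_summary_description(
        "soccer_match.mpg",
        analyze=lambda video: [("goal", 1, (200, 300))],
        define_rule=lambda: {"events": ["goal", "shoot"]},
        select=lambda analysis, rule: [((200, 300), "goal")],
        describe=lambda intervals: {"HierarchicalSummary": intervals},
    )
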
Preferably, the HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing highlight level, the HighlightLevel DS comprises at least one HighlightSegment DS which is describing highlight segment information constituting the summary video of the highlight level, and the HighlightSegment DS comprises VideoSegmentLocator DS describing time information or video itself of said corresponding highlight segment.
Preferably, the video analyzing means comprises feature extracting means for outputting the types of features and the video time intervals at which those features are detected by inputting the original video and extracting those features, event detecting means for detecting key events included in the original video by inputting said types of features and the video time intervals at which those features are detected, and episode detecting means for detecting episodes by dividing the original video into story-flow units on the basis of said detected events.
More preferably, the summary rule defining means defines the types of summary events, which are the bases for selecting the summary video interval, and provides them to said video summary describing means.
It is preferable that the system further comprises representative frame extracting means for providing the representative frame to said video summary describing means by inputting said summary video interval information and extracting representative frame.
It is more preferable that the system further comprises representative sound extracting means for providing the representative sound to said video summary describing means by inputting said summary video interval information and extracting the representative sound.
Also, according to the present invention, a computer-readable recording medium where a program is stored therein is provided. The program causes a computer to function as feature extracting means for outputting the types of features and the video time intervals at which those features are detected, event detecting means for detecting key events included in the original video by inputting said types of features and said video time intervals at which those features are detected, episode detecting means for detecting episodes by dividing the original video into story-flow units on the basis of said detected key events, summary rule defining means for defining the summary rule for selecting the summary video interval, summary video interval selecting means for constituting summary video interval information by selecting the video intervals capable of summarizing the video contents of the original video by inputting said detected episodes and said summary rule, and video summary describing means for generating video summary description data with the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting means.
Also, a video browsing system in a server/client environment according to the present invention is provided. The system includes a server which is equipped with a video summary description data generation system that generates video summary description data on the basis of the HierarchicalSummary DS by inputting the original video and links said original video and the video summary description data, and a client which performs overview, browsing and navigation of the video by accessing the original video on said server using said video summary description data.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments of the present invention will be explained with reference to the accompanying drawings, in which:
FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.
FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).
FIG. 3 is a compositional drawing of user interface of the tool for playing and browsing of the summary video inputting the video summary description data described by the same description scheme as FIG. 2.
FIG. 4 is a compositional drawing for the flow of the data and control for hierarchical browsing using the summary video of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention will be described in detail by way of a preferred embodiment with reference to accompanying drawings, in which like reference numerals are used to identify the same or similar parts. FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.
As illustrated in FIG. 1, the apparatus for generating video description data according to the present invention is composed of a feature extracting part 101, an event detecting part 102, an episode detecting part 103, a summary video interval selecting part 104, a summary rule defining part 105, a representative frame extracting part 106, a representative sound extracting part 107 and a video summary describing part 108.
The feature extracting part 101 extracts necessary features to generate summary video by inputting the original video. The general features include shot boundary, camera motion, caption region, face region and so on.
In the feature extracting step, those features are extracted and the types of features and the video time intervals at which they are detected are output to the event detecting step in the format of (type of feature, feature serial number, time interval).
For example, in the case of camera motion, (camera zoom, 1, 100 ~ 150) represents the information that the first camera zoom was detected in frames 100 ~ 150.
The event detecting part 102 detects key events which are included in the original video. Because these events must represent the contents of the original video well and are the references for generating summary video, these events are generally differently defined according to genre of the original video. These events either may represent higher meaning level or may be visual features which can directly infer higher meaning. For example, in the case of soccer video, goal, shoot, caption, replay and so on can be defined as events.
The event detecting part 102 outputs the types of detected events and the time intervals in the format of (type of event, event serial number, time interval). For example, the event information indicating that the first goal occurred between frames 200 and 300 is output in the format of (goal, 1, 200 ~ 300).
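
The records exchanged between the feature extracting part and the event detecting part can be pictured as simple tuples; the following sketch uses the layouts quoted above, with field names chosen here only for illustration.

    # Intermediate records, following the (type, serial number, time interval) layouts above.
    from collections import namedtuple

    Feature = namedtuple("Feature", ["kind", "serial", "frames"])
    Event = namedtuple("Event", ["kind", "serial", "frames"])

    features = [Feature("camera zoom", 1, (100, 150))]  # first camera zoom, frames 100-150
    events = [Event("goal", 1, (200, 300))]             # first goal, frames 200-300
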
The episode detecting part 103, on the basis of the detected event, divides the video into an episode with larger unit than an event based on the story flow. After detecting key events, an episode is detected while including accompanied events which follow the key event. For example, in the case of soccer video, goal and shoot can be key events and bench scene, audiences scene, goal ceremony scene, replay of goal scene and so on compose accompanied events of the key events.
That is, the episode is detected on the basis of the goal and shoot. The episode detection information is output in the format (episode number, time interval, priority, feature shot, associated event information). Herein, the episode number is the serial number of the episode, and the time interval represents the time interval of the episode in shot units. The priority represents the degree of importance of the episode. The feature shot represents the number of the shot carrying the most important information among the shots comprising the episode, and the associated event information represents the event numbers of the events related to the episode. For example, the episode detection information (episode 1, 4 ~ 6, 1, 5, goal 1, caption 3) means that the first episode includes the 4th ~ 6th shots, its priority is the highest (1), its feature shot is the fifth shot, and its associated events are the first goal and the third caption.
The summary video interval selecting part 104 selects, on the basis of the detected episodes, the video intervals that summarize the contents of the original video well. Interval selection is governed by the predefined summary rule of the summary rule defining part 105.
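Likewise, a non-authoritative sketch of the episode detection information described above, with field names chosen only to mirror the (episode number, time interval, priority, feature shot, associated event information) format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EpisodeRecord:
    number: int                       # serial number of the episode
    shot_interval: Tuple[int, int]    # time interval of the episode, in shot units
    priority: int                     # 1 = most important
    feature_shot: int                 # shot carrying the most important information
    associated_events: List[str] = field(default_factory=list)  # related event identifiers

# (episode 1, 4 ~ 6, 1, 5, goal 1, caption 3): first episode covers shots 4-6,
# highest priority, feature shot is shot 5, related to the first goal and third caption.
episode1 = EpisodeRecord(1, (4, 6), 1, 5, ["goal 1", "caption 3"])
```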
The summary rule defining part 105 defines the rule for selecting the summary interval and outputs a control signal for selecting the summary interval. The summary rule defining part 105 also outputs the types of summary events, which are the bases for selecting the summary video interval, to the video summary describing part 108.
The summary video interval selecting part 104 outputs the time information of the selected summary video intervals in frame units and outputs the type of event corresponding to each interval. That is, the formats (100 ~ 200, goal), (500 ~ 700, shoot) and so on represent that the video segments selected as summary video intervals are frames 100 ~ 200, frames 500 ~ 700 and so on, and that the event of each segment is goal and shoot respectively. In addition, information such as a file name can be output to facilitate access to an additional video composed of only the summary video intervals.
When the summary video interval selection is completed, the representative frame and the representative sound are extracted by the representative frame extracting part 106 and the representative sound extracting part 107 respectively, using the summary video interval information. The representative frame extracting part 106 outputs the image frame number representing the summary video interval or outputs the image data.
The representative sound extracting part 107 outputs the sound data representing the summary video interval or outputs the sound time interval.
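The selected summary interval, together with the optional file name, representative frame and representative sound it gives rise to, might be collected in one record; the following sketch and its field names are assumptions for illustration, not the scheme's normative form.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SummaryInterval:
    frame_interval: Tuple[int, int]                 # e.g. (100, 200), in frame units
    event_type: str                                 # e.g. "goal" or "shoot"
    clip_file: Optional[str] = None                 # optional file holding only this interval
    representative_frame: Optional[int] = None      # frame number or image data reference
    representative_sound: Optional[Tuple[int, int]] = None  # sound time interval

# (100 ~ 200, goal) and (500 ~ 700, shoot), as described above
intervals: List[SummaryInterval] = [
    SummaryInterval((100, 200), "goal"),
    SummaryInterval((500, 700), "shoot"),
]
```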
The video summary describing part 108 describes the related information so as to make efficient summary and browsing functionalities feasible, according to the Hierarchical Summary Description Scheme of the present invention shown in FIG. 2.
The main information of the Hierarchical Summary Description Scheme comprises the types of summary events of the summary video, the time information describing each summary video interval, the representative frame, the representative sound, and the event types in each interval.
The video summary describing part 108 outputs the video summary description data according to the description scheme illustrated in FIG. 2.
FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).
The HierarchicalSummary DS 201 describing the video summary is composed of one or more HighlightLevel DSs 202 and zero or one SummaryThemeList DS 203.
The SummaryThemeList DS provides event based summary and browsing functionality by enumeratively describing the subjects or events constituting the summary. The HighlightLevel DS 202 is composed of as many HighlightSegment DSs 204 as there are video intervals constituting the summary video of that level, and of zero or more HighlightLevel DSs.
The HighlightSegment DS describes the information corresponding to each summary video interval. The HighlightSegment DS is composed of one VideoSegmentLocator DS 205, zero or more ImageLocator DSs 206, zero or more SoundLocator DSs 207 and an AudioSegmentLocator DS 208.
The following gives a more detailed description of the HierarchicalSummary DS. The HierarchicalSummary DS has a SummaryComponentList attribute which explicitly represents the summary types comprised by the HierarchicalSummary DS.
The SummaryComponentList is derived on the basis of the SummaryComponentType and is described by enumerating all of the comprised SummaryComponentTypes.
The SummaryComponentList has five types: keyFrames, keyVideoClips, keyAudioClips, keyEvents, and unconstraint.
The keyFrames type represents the key frame summary composed of representative frames. The keyVideoClips type represents the key video clip summary composed of sets of key video intervals. The keyEvents type represents the summary composed of the video intervals corresponding to particular events or subjects. The keyAudioClips type represents the key audio clip summary composed of sets of representative audio intervals. And the unconstraint type represents user-defined summary types other than those above.
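As a purely illustrative sketch (the normative structure is the UML of FIG. 2, and these class and field names are assumptions), the SummaryComponentList and the top level of the HierarchicalSummary DS could be modeled as follows.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class SummaryComponentType(Enum):
    KEY_FRAMES = "keyFrames"            # key frame summary of representative frames
    KEY_VIDEO_CLIPS = "keyVideoClips"   # summary made of sets of key video intervals
    KEY_AUDIO_CLIPS = "keyAudioClips"   # summary made of sets of representative audio intervals
    KEY_EVENTS = "keyEvents"            # intervals corresponding to events or subjects
    UNCONSTRAINT = "unconstraint"       # user-defined summary type

@dataclass
class HierarchicalSummary:
    # SummaryComponentList attribute: enumerates all summary types comprised in this summary.
    summary_component_list: List[SummaryComponentType]
    # One or more HighlightLevel DSs (modeled further below; string forward references used here).
    highlight_levels: List["HighlightLevel"] = field(default_factory=list)
    # Zero or one SummaryThemeList DS.
    summary_theme_list: Optional["SummaryThemeList"] = None

# A key frame plus key video clip summary, as the SummaryComponentList might enumerate it.
components = [SummaryComponentType.KEY_FRAMES, SummaryComponentType.KEY_VIDEO_CLIPS]
summary = HierarchicalSummary(summary_component_list=components)
```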
Also, in order to describe an event based summary, the HierarchicalSummary DS may comprise the SummaryThemeList DS, which enumerates the events (or subjects) comprised in the summary and describes their IDs.
The SummaryThemeList has an arbitrary number of SummaryThemes as elements. The SummaryTheme has an id attribute of ID type and optionally has a parentId attribute. The SummaryThemeList DS permits the users to browse the summary video from the viewpoint of each event or subject described in the SummaryThemeList. That is, the application tool that inputs the description data lets the users select the desired subject by parsing the SummaryThemeList DS and presenting the information to the users. However, if these subjects are enumerated in a simple flat format and the number of subjects is large, it might not be easy for the users to find the desired subject.
Accordingly, by representing the subjects as a tree structure similar to a ToC (Table of Contents), users can efficiently browse by subject after finding the desired subject.
In order to do so, the present invention permits the parentId attribute to be selectively used in the SummaryTheme. The parentId denotes the upper element (upper subject) in the tree structure.
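To illustrate how the parentId attribute supports a ToC-like tree of subjects, here is a hypothetical sketch; the theme ids and texts are invented for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SummaryTheme:
    id: str
    text: str
    parent_id: Optional[str] = None   # id of the upper (parent) subject, if any

# A hypothetical soccer SummaryThemeList: "goal events" groups two child subjects.
themes = [
    SummaryTheme("E0", "goal events"),
    SummaryTheme("E1", "goal by team A", parent_id="E0"),
    SummaryTheme("E2", "goal by team B", parent_id="E0"),
    SummaryTheme("E3", "shoot events"),
]

def children_of(parent_id: Optional[str], theme_list: List[SummaryTheme]) -> List[SummaryTheme]:
    """Return the themes directly below the given parent in the ToC-like tree."""
    return [t for t in theme_list if t.parent_id == parent_id]

top_level = children_of(None, themes)    # E0 and E3: the top-level subjects
under_goal = children_of("E0", themes)   # E1 and E2: subjects under "goal events"
```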
The HierarchicalSummary DS of the present invention comprises HighlightLevel DSs, and each HighlightLevel DS comprises one or more HighlightSegment DSs, each of which corresponds to a video segment (or interval) constituting the summary video.
The HighlightLevel DS has a themeIds attribute of IDREFS type.
The themeIds attribute describes the subject and event ids that are common to the child HighlightLevel DSs of the corresponding HighlightLevel DS or to all HighlightSegment DSs comprised in the HighlightLevel; these ids are described in said SummaryThemeList DS.
The themeIds attribute can denote several events; when describing an event based summary, placing the themeIds that represent the common subject type at the level solves the problem of the same id being unnecessarily repeated in every HighlightSegment constituting the level. The HighlightSegment DS comprises one VideoSegmentLocator DS, one or more ImageLocator DSs, zero or one SoundLocator DS and zero or one AudioSegmentLocator DS.
Herein, the VideoSegmentLocator DS describes the time information of the video segment constituting the summary video, or the video itself. The ImageLocator DS describes the image data information of the representative frame of the video segment. The SoundLocator DS describes the sound information representing the corresponding video segment interval. The AudioSegmentLocator DS describes the interval time information of the audio segment constituting the audio summary, or the audio information itself. The HighlightSegment DS has a themeIds attribute. The themeIds attribute describes, using the ids defined in the SummaryThemeList, which of the subjects or events described in said SummaryThemeList DS relate to the corresponding highlight segment.
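The following sketch, again an assumption rather than the normative syntax, shows how a HighlightSegment could carry its locators together with a themeIds attribute so that one segment references several subjects without duplicated descriptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class HighlightSegment:
    # VideoSegmentLocator: time interval of the video segment of the summary video.
    video_segment: Tuple[int, int]
    # ImageLocator(s): representative frame reference(s) of the segment.
    image_locators: List[str] = field(default_factory=list)
    # SoundLocator: representative sound interval, if any.
    sound_locator: Optional[Tuple[int, int]] = None
    # AudioSegmentLocator: audio segment of the audio summary, if any.
    audio_segment: Optional[Tuple[int, int]] = None
    # themeIds: ids defined in the SummaryThemeList that this segment relates to.
    theme_ids: List[str] = field(default_factory=list)

@dataclass
class HighlightLevel:
    segments: List[HighlightSegment]
    child_levels: List["HighlightLevel"] = field(default_factory=list)
    # themeIds common to all segments/children of this level, so they need not be repeated below.
    theme_ids: List[str] = field(default_factory=list)

# One segment referencing two subjects at once instead of being described twice.
seg = HighlightSegment((100, 200), image_locators=["frame_150.jpg"], theme_ids=["E0", "E1"])
level = HighlightLevel(segments=[seg], theme_ids=["E0"])
```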
The themeIds attribute can denote more than one event. By allowing one highlight segment to have several subjects, this efficient technique of the present invention solves the problem of the unavoidable duplication of descriptions caused by describing the video segment separately for each event (or subject), as the existing method for event based summary requires.
When describing the highlight segments constituting the summary video, unlike the existing hierarchical summary description scheme, which describes only the time information of the highlight video interval, the present invention introduces the HighlightSegment DS, which places the VideoSegmentLocator DS, the ImageLocator DS and the SoundLocator DS so as to describe, for each highlight segment, the video interval information, the representative frame information and the representative sound information. This makes it feasible to efficiently perform overview through the highlight segment video, as well as navigation and browsing utilizing the representative frame and the representative sound of each segment.
By placing the SoundLocator DS, which can describe the representative sound corresponding to a video interval, characteristic sounds capable of representing the interval, for example a gun shot, an outcry, the anchor's comment in soccer (for example, on a goal or a shoot), actors' names in a drama, a specific word, etc., make efficient browsing possible: within a short time and without playing the video interval, the user can roughly understand whether the interval is an important interval containing the desired contents, or what contents the interval contains.
FIG. 3 is a compositional drawing of the user interface of a tool for playing and browsing the summary video by inputting video summary description data described according to the same description scheme as FIG. 2. The video playing part 301 plays the original video or the summary video according to the control of the user. The original video representative frame part 305 shows the representative frames of the original video shots; that is, it is composed of a series of images with reduced sizes. The representative frame of an original video shot is described not by the HierarchicalSummary DS of the present invention but by an additional description scheme, and can be utilized when that description data is provided along with the summary description data described by the HierarchicalSummary DS of the present invention.
The user accesses the original video shot corresponding to a representative frame by clicking the representative frame.
The summary video level 0 representative frame and representative sound part 307 and the summary video level 1 representative frame and representative sound part 306 show the frame and sound information representing each video interval of summary video level 0 and summary video level 1 respectively. That is, each is composed of a series of reduced-size images and of iconic images representing the corresponding sounds.
If the user clicks a representative frame in a summary video representative frame and representative sound part, the user accesses the original video interval corresponding to that representative frame. In the case of clicking the representative sound icon corresponding to a representative frame of the summary video, the representative sound of the video interval is played.
The summary video controlling part 302 receives the user's control input for playing the summary video. When a multi-level summary video is provided, the user performs overview and browsing by selecting the summary of the desired level through the level selecting part 303. The event selecting part 304 enumerates the events and subjects provided by the SummaryThemeList, and the user performs overview and browsing by selecting the desired event. In effect, this realizes a user-customized summary.
FIG. 4 is a compositional drawing of the data and control flow for hierarchical browsing using the summary video of the present invention. Browsing is performed by accessing the data for browsing in the manner of FIG. 4 through the user interface of FIG. 3. The data for browsing are the summary videos, the representative frames of the summary videos, the original video 406 and the original video representative frame 405. The summary video is assumed to have two levels; needless to say, the summary video may have more than two levels. The summary video level 0 401 is summarized into a shorter time than the summary video level 1 403. That is, the summary video level 1 contains more contents than the summary video level 0. The summary video level 0 representative frame 402 is the representative frame of the summary video level 0, and the summary video level 1 representative frame 404 is the representative frame of the summary video level 1.
The summary video and the original video are played through the video playing part 301 of FIG. 3. The summary video level 0 representative frame is displayed in the summary video level 0 representative frame and the representative sound part 306, the summary video level 1 representative frame is displayed in the summary video level 1 representative frame and the representative sound part 307, and the original video representative frame is displayed in the original video representative frame part 305.
The hierarchical browsing method illustrated in FIG. 4 can follow various types of hierarchical paths, as in the following examples.
Case 1 : (1) - (2)
Case 2 : (1) - (3) - (5)
Case 3 : (1) - (3) - (4) - (6)
Case 4 : (7) - (5)
Case 5 : (7) - (4) - (6)
The overall browsing scheme is as follows.
First, the overall contents of the original video are understood by watching the summary video of the original video. Herein, either summary video level 0 or summary video level 1 may be played. When more detailed browsing is desired after watching the summary video, the interesting video interval is identified through the summary video representative frames. If the exact scene to be found is identified among the summary video representative frames, it is played by directly accessing the interval of the original video to which that representative frame is linked. If more detailed information is needed, the user may reach the desired part of the original video either by examining the representative frames of the next level or by hierarchically examining the representative frames of the original video.
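As a rough sketch of this browsing flow (the mapping and function names are hypothetical), clicking a representative frame resolves, through the description data, to the original video interval to which that frame is linked.

```python
from typing import Dict, Tuple

# Hypothetical mapping built while parsing the description data:
# representative frame reference -> linked interval of the original video.
frame_to_interval: Dict[str, Tuple[int, int]] = {
    "frame_150.jpg": (100, 200),
    "frame_600.jpg": (500, 700),
}

def on_frame_clicked(frame_ref: str) -> Tuple[int, int]:
    """Return the original video interval to play when a representative frame is clicked."""
    return frame_to_interval[frame_ref]

print(on_frame_clicked("frame_150.jpg"))   # -> (100, 200): play the original video from frame 100
```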
Whereas accessing the desired contents could take a long time if browsing were done while the original video is being played, the browsing time is drastically reduced by directly accessing the contents of the original video through the hierarchical representative frames.
The existing general video indexing and browsing techniques divide the original video into shot units, constitute a representative frame for each shot, and access a shot by identifying the desired shot from the representative frames.
In this case, because the number of shots in the original video is large, a great deal of time and effort is needed to browse for the desired contents among the many representative frames.
In the present invention, it is feasible to quickly access the desired video by constituting a hierarchical representative frame structure with the representative frames of the summary video.
Case 1 is the case that plays the summary video level 0 and directly accesses the original video from a summary video level 0 representative frame.
Case 2 is the case that plays the summary video level 0, selects the representative frame of greatest interest from the summary video level 0 representative frames, identifies the desired scene in the summary video level 1 representative frames corresponding to the neighborhood of that frame in order to obtain more detailed information, and then accesses the original video.
Case 3 is the case that, when access from a summary video level 1 representative frame to the original video is difficult as in case 2, selects the representative frame of greatest interest to obtain more detailed information, identifies the desired scene from the original video representative frames neighboring that frame, and then accesses the original video using the representative frame of the original video.
Case 4 and case 5 are the cases that start by playing the summary video level 1, and their paths are similar to those of the above cases.
When applied to a server/client circumstance, the present invention can provide a system in which multiple clients access one server and perform video overview and browsing. The original video is input to the server, the video summary description data is produced on the basis of the hierarchical summary description scheme, and the server is equipped with the summary video description data generation system linking said original video and the video summary description data. A client accesses the server through a communication network, performs overview of the video using the video summary description data, and performs browsing and navigation of the video by accessing the original video.
Although the present invention has been described on the basis of preferred embodiments, these embodiments are illustrative and do not limit the present invention. It will be appreciated by those skilled in the art that changes and variations in the embodiments herein can be made without departing from the spirit and scope of the present invention as defined by the following claims.

Claims

What we claim:
1. A HierarchicalSummary Description Scheme (DS) for describing a video summary, the HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing highlight level, wherein said HighlightLevel DS comprises at least one HighlightSegment DS which is describing highlight segment information constituting the summary video of the highlight level.
2. The HierarchicalSummary DS according to claim 1, wherein said HighlightLevel DS is composed of at least one lower-level HighlightLevel DS.
3. The HierarchicalSummary DS according to claim 1, wherein said HighlightSegment DS comprises a VideoSegmentLocator DS which is describing time information or video itself of said corresponding highlight segment.
4. The HierarchicalSummary DS according to claim 3, wherein said HighlightSegment DS further comprises ImageLocator DS which is describing the representative frame of said corresponding highlight segment.
5. The HierarchicalSummary DS according to claim 3, wherein said HighlightSegment DS further comprises SoundLocator DS which is describing the representative sound information of said corresponding highlight segment.
6. The HierarchicalSummary DS according to claim 3, wherein said HighlightSegment DS further comprises ImageLocator DS which is describing the representative frame of said corresponding highlight segment and SoundLocator DS which is describing the representative sound information of said corresponding highlight segment.
7. The HierarchicalSummary DS according to claim 4, wherein said ImageLocator DS describes time information or image data of the representative frame of video interval corresponding to said corresponding highlight segment.
8. The HierarchicalSummary DS according to claim 3, wherein said HighlightSegment DS further comprises AudioSegmentLocator DS which is describing the audio segment information constituting an audio summary of said corresponding highlight segment.
9. The HierarchicalSummary DS according to claim 8, wherein said AudioSegmentLocator DS describes time information or audio data of the audio interval of said corresponding highlight segment.
10. The HierarchicalSummary DS according to claim 1, wherein said HierarchicalSummary DS includes SummaryComponentList describing and enumerating all of the SummaryComponentTypes which is included in the HierarchicalSummary DS.
11. The HierarchicalSummary DS according to claim 10, wherein said SummaryComponentType includes keyFrames representing the key frame summary composed of representative frames, keyVideoClips representing the key video clip summary composed of sets of key video segments, keyEvents representing the summary of the video interval corresponding to either the event or the subject, keyAudioClips representing the key audio clip summary composed of sets of representative audio intervals, and unconstraint representing the type of summary defined by users except for said summaries.
12. The HierarchicalSummary DS according to claim 1, wherein said HierarchicalSummary DS includes SummaryThemeList DS which is enumerating the event or subject comprised in the summary and describing the ID, thereby describing event based summary and permitting the users to browse the summary video by the event or subject described in said SummaryThemeList DS.
13. The HierarchicalSummary DS according to claim 11, wherein said SummaryThemeList DS includes an arbitrary number of SummaryThemes as elements and said SummaryTheme includes an attribute of id representing the corresponding event or subject.
14. The HierarchicalSummary DS according to claim 13, wherein said SummaryTheme further includes an attribute of parentId which describes the id of the event or subject of the upper level.
15. The HierarchicalSummary DS according to claim 13, wherein said HighlightLevel DS includes an attribute of themeIds describing said attribute of ids of the common events or subjects if all of the HighlightSegments and HighlightLevels which are constituting the corresponding highlight level have common events or subjects.
16. The HierarchicalSummary DS according to claim 13, wherein said HighlightSegment DS includes an attribute of themeIds describing said attribute of id and describes the event or subject of the corresponding highlight segment.
17. A computer-readable recording medium where a HierarchicalSummary DS is stored therein, the HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing highlight level, wherein said HighlightLevel DS comprises at least one HighlightSegment DS which is describing highlight segment information constituting the summary video of the highlight level, wherein said HighlightSegment DS comprises VideoSegmentLocator DS describing time information or video itself of said corresponding highlight segment.
18. A method for generating video summary description data according to video summary description scheme by inputting original video, comprising: video analyzing step which is producing video analysis result by inputting the original video and then analyzing the original video; summary rule defining step which is defining the summary rule for selecting summary video interval; summary video interval selecting step which is constituting summary video interval information by selecting the video interval capable of summarizing video contents from the original video by inputting said original video analysis result and said summary rule; and video summary describing step which is producing video summary description data according to the HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting step.
19. The method for generating video summary description data according to claim 18, wherein said HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing highlight level, wherein said HighlightLevel DS comprises at least one HighlightSegment DS which is describing highlight segment information constituting the summary video of the highlight level, wherein said HighlightSegment DS comprises VideoSegmentLocator DS describing time information or video itself of said corresponding highlight segment.
20. The method for generating video summary description data according to claim 18, wherein said video analyzing step comprises: feature extracting step which is outputting the types of features and video time interval at which those features are detected by inputting the original video and extracting those features; event detecting step which is detecting key events included in the original video by inputting said types of features and video time interval at which those features are detected; and episode detecting step which is detecting episodes by dividing the original video according to story flow on the basis of said detected events.
21. The method for generating video summary description data according to claim 18, wherein said summary rule defining step provides the types of summary events, which are the bases for selecting the summary video interval, to said video summary describing step after defining them.
22. The method for generating video summary description data according to claim 18, the method further comprises representative frame extracting step which is providing the representative frame to said video summary describing step by inputting said summary video interval information and extracting representative frame.
23. The method for generating video summary description data according to claim 18, the method further comprises representative sound extracting step which is providing the representative sound to said video summary describing step by inputting said summary video interval information and extracting representative sound.
24. A computer-readable recording medium where a program is stored therein, the program is to execute: feature extracting step which is outputting the types of features and video time interval at which those features are detected; event detecting step which is detecting key events included in the original video by inputting said types of features and said video time interval at which those features are detected; episode detecting step which is detecting episodes by dividing the original video according to story flow on the basis of said detected key events; summary rule defining step which is defining the summary rule for selecting the summary video interval; summary video interval selecting step which is constituting summary video interval information by selecting the video interval capable of summarizing the video contents of the original video by inputting said detected episode and said summary rule; and video summary describing step which is generating video summary description data with HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting step.
25. A system for generating video summary description data according to video summary description scheme by inputting original video, comprising: video analyzing means for outputting video analysis result by inputting original video and analyzing the original video; summary rule defining means for defining the summary rule for selecting the summary video interval; summary video interval selecting means for constituting summary video interval information by selecting the video interval capable of summarizing the video contents of the original video by inputting said video analysis result and said summary rule; and video summary describing means for generating video summary description data with HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting means.
26. The system for generating video summary description data according to claim 25, wherein said HierarchicalSummary DS comprises at least one HighlightLevel DS which is describing highlight level, wherein said HighlightLevel DS comprises at least one HighlightSegment DS which is describing highlight segment information constituting the summary video of the highlight level, wherein said HighlightSegment DS comprises VideoSegmentLocator DS describing time information or video itself of said corresponding highlight segment.
27. The system for generating video summary description data according to claim 25, wherein said video analyzing means comprises: feature extracting means for outputting the types of features and video time interval at which those features are detected by inputting the original video and extracting those features; event detecting means for detecting key events included in the original video by inputting said types of features and video time interval at which those features are detected; and episode detecting means for detecting episodes by dividing the original video according to story flow on the basis of said detected events.
28. The system for generating video summary description data according to claim 25, wherein said summary rule defining means provides the types of summary events, which are the bases for selecting the summary video interval, to said video summary describing means after defining them.
29. The system for generating video summary description data according to claim 25, the system further comprises representative frame extracting means for providing the representative frame to said video summary describing means by inputting said summary video interval information and extracting representative frame.
30. The system for generating video summary description data according to claim 25, the system further comprises representative sound extracting means for providing the representative sound to said video summary describing means by inputting said summary video interval information and extracting representative sound.
31. A computer-readable recording medium where a program is stored therein, the program is for functioning: feature extracting means for outputting the types of features and video time interval at which those features are detected; event detecting means for detecting key events included in the original video by inputting said types of features and said video time interval at which those features are detected; episode detecting means for detecting episodes by dividing the original video according to story flow on the basis of said detected key events; summary rule defining means for defining the summary rule for selecting the summary video interval; summary video interval selecting means for constituting summary video interval information by selecting the video interval capable of summarizing the video contents of the original video by inputting said detected episode and said summary rule; and video summary describing means for generating video summary description data with HierarchicalSummary DS by inputting the summary video interval information output by said summary video interval selecting means.
32. A Video browsing system in a server/client circumstance, comprising: a server which is equipped with video summary description data generation system which generates video summary description data on the basis of HierarchicalSummary DS by inputting original video and links said original video and video summary description data; and a client which is browsing and navigating video by overview of said original video and access to the original video of said server using said video summary description data.
EP00966554A 1999-10-11 2000-09-29 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing Ceased EP1222634A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR9943712 1999-10-11
KR19990043712 1999-10-11
PCT/KR2000/001084 WO2001027876A1 (en) 1999-10-11 2000-09-29 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Publications (2)

Publication Number Publication Date
EP1222634A1 EP1222634A1 (en) 2002-07-17
EP1222634A4 true EP1222634A4 (en) 2006-07-05

Family

ID=19614707

Family Applications (1)

Application Number Title Priority Date Filing Date
EP00966554A Ceased EP1222634A4 (en) 1999-10-11 2000-09-29 Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing

Country Status (7)

Country Link
EP (1) EP1222634A4 (en)
JP (1) JP4733328B2 (en)
KR (1) KR100371813B1 (en)
CN (2) CN100485721C (en)
AU (1) AU7689200A (en)
CA (1) CA2387404A1 (en)
WO (1) WO2001027876A1 (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001333353A (en) * 2000-03-16 2001-11-30 Matsushita Electric Ind Co Ltd Data processing method, recording medium and program for executing data processing method via computer
US7134074B2 (en) 1998-12-25 2006-11-07 Matsushita Electric Industrial Co., Ltd. Data processing method and storage medium, and program for causing computer to execute the data processing method
US20020108112A1 (en) * 2001-02-02 2002-08-08 Ensequence, Inc. System and method for thematically analyzing and annotating an audio-visual sequence
US7432940B2 (en) 2001-10-12 2008-10-07 Canon Kabushiki Kaisha Interactive animation of sprites in a video production
KR100464076B1 (en) * 2001-12-29 2004-12-30 엘지전자 주식회사 Video browsing system based on keyframe
CN101132528B (en) * 2002-04-12 2011-08-03 三菱电机株式会社 Metadata reproduction apparatus, metadata delivery apparatus, metadata search apparatus, metadata re-generation condition setting apparatus
EP1496701A4 (en) 2002-04-12 2009-01-14 Mitsubishi Electric Corp Meta data edition device, meta data reproduction device, meta data distribution device, meta data search device, meta data reproduction condition setting device, and meta data distribution method
JP4228662B2 (en) * 2002-11-19 2009-02-25 日本電気株式会社 Video browsing system and method
JP4218319B2 (en) * 2002-11-19 2009-02-04 日本電気株式会社 Video browsing system and method
US8392834B2 (en) 2003-04-09 2013-03-05 Hewlett-Packard Development Company, L.P. Systems and methods of authoring a multimedia file
EP1538536A1 (en) 2003-12-05 2005-06-08 Sony International (Europe) GmbH Visualization and control techniques for multimedia digital content
EP1708101B1 (en) * 2004-01-14 2014-06-25 Mitsubishi Denki Kabushiki Kaisha Summarizing reproduction device and summarizing reproduction method
JP4525437B2 (en) * 2005-04-19 2010-08-18 株式会社日立製作所 Movie processing device
CN100455011C (en) * 2005-10-11 2009-01-21 华为技术有限公司 Method for providing media resource pre-review information
US8301669B2 (en) 2007-01-31 2012-10-30 Hewlett-Packard Development Company, L.P. Concurrent presentation of video segments enabling rapid video file comprehension
JP5092469B2 (en) * 2007-03-15 2012-12-05 ソニー株式会社 Imaging apparatus, image processing apparatus, image display control method, and computer program
US8238719B2 (en) 2007-05-08 2012-08-07 Cyberlink Corp. Method for processing a sports video and apparatus thereof
CN101753945B (en) * 2009-12-21 2013-02-06 无锡中星微电子有限公司 Program previewing method and device
US10679671B2 (en) * 2014-06-09 2020-06-09 Pelco, Inc. Smart video digest system and method
US9998799B2 (en) * 2014-08-16 2018-06-12 Sony Corporation Scene-by-scene plot context for cognitively impaired
KR101640317B1 (en) * 2014-11-20 2016-07-19 소프트온넷(주) Apparatus and method for storing and searching image including audio and video data
CN104391960B (en) * 2014-11-28 2019-01-25 北京奇艺世纪科技有限公司 A kind of video labeling method and system
KR102350917B1 (en) * 2015-06-15 2022-01-13 한화테크윈 주식회사 Surveillance system
KR102592904B1 (en) * 2016-02-19 2023-10-23 삼성전자주식회사 Apparatus and method for summarizing image
US10409279B2 (en) * 2017-01-31 2019-09-10 GM Global Technology Operations LLC Efficient situational awareness by event generation and episodic memory recall for autonomous driving systems

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999041684A1 (en) * 1998-02-13 1999-08-19 Fast Tv Processing and delivery of audio-video information
EP0938054A2 (en) * 1998-02-23 1999-08-25 Siemens Corporate Research, Inc. A system for interactive organization and browsing of video
US5956026A (en) * 1997-12-19 1999-09-21 Sharp Laboratories Of America, Inc. Method for hierarchical summarization and browsing of digital video

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3407840B2 (en) * 1996-02-13 2003-05-19 日本電信電話株式会社 Video summarization method
JPH1169281A (en) * 1997-08-15 1999-03-09 Media Rinku Syst:Kk Summary generating device and video display device
JPH1188807A (en) * 1997-09-10 1999-03-30 Media Rinku Syst:Kk Video software reproducing method, video software processing method, medium recording video software reproducing program, medium recording video software processing program, video software reproducing device, video software processor and video software recording medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5956026A (en) * 1997-12-19 1999-09-21 Sharp Laboratories Of America, Inc. Method for hierarchical summarization and browsing of digital video
WO1999041684A1 (en) * 1998-02-13 1999-08-19 Fast Tv Processing and delivery of audio-video information
EP0938054A2 (en) * 1998-02-23 1999-08-25 Siemens Corporate Research, Inc. A system for interactive organization and browsing of video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HONGJIANG ZHANG ET AL: "CONTENT-BASED VIDEO BROWSING TOOLS", PROCEEDINGS OF THE SPIE, SPIE, BELLINGHAM, VA, US, vol. 2417, 6 February 1995 (1995-02-06), pages 389 - 398, XP000571808, ISSN: 0277-786X *
JONGJIANG ZHANG ET AL: "Structured and content-based video browsing", SIGNALS, SYSTEMS & COMPUTERS, 1998. CONFERENCE RECORD OF THE THIRTY-SECOND ASILOMAR CONFERENCE ON PACIFIC GROVE, CA, USA 1-4 NOV. 1998, PISCATAWAY, NJ, USA,IEEE, US, vol. 1, 1 November 1998 (1998-11-01), pages 910 - 914, XP010324279, ISBN: 0-7803-5148-7 *
See also references of WO0127876A1 *

Also Published As

Publication number Publication date
CN101398843A (en) 2009-04-01
CN100485721C (en) 2009-05-06
KR20010050596A (en) 2001-06-15
AU7689200A (en) 2001-04-23
CN101398843B (en) 2011-11-30
CA2387404A1 (en) 2001-04-19
EP1222634A1 (en) 2002-07-17
JP4733328B2 (en) 2011-07-27
JP2003511801A (en) 2003-03-25
KR100371813B1 (en) 2003-02-11
CN1382288A (en) 2002-11-27
WO2001027876A1 (en) 2001-04-19

Similar Documents

Publication Publication Date Title
US7181757B1 (en) Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
WO2001027876A1 (en) Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
JP4652462B2 (en) Metadata processing method
KR100512138B1 (en) Video Browsing System With Synthetic Key Frame
JP4408768B2 (en) Description data generation device, audio visual device using description data
JP4363806B2 (en) Audiovisual program management system and audiovisual program management method
JP2001028722A (en) Moving picture management device and moving picture management system
JP4732418B2 (en) Metadata processing method
CN101132528A (en) Metadata reproduction apparatus, metadata delivery apparatus, metadata search apparatus, metadata re-generation condition setting apparatus
JP4652389B2 (en) Metadata processing method
EP1085756A2 (en) Description framework for audiovisual content

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20020503

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

A4 Supplementary search report drawn up and despatched

Effective date: 20060602

17Q First examination report despatched

Effective date: 20071121

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20140214