CN112291574B - Large-scale sports event content management system based on artificial intelligence technology - Google Patents
- Publication number: CN112291574B (application CN202010980580.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- video
- unit
- athlete
- original video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N21/2187—Live feed (source of audio or video content, e.g. local disk arrays)
- G06F16/784—Video retrieval using metadata automatically derived from the content, the detected or recognised objects being people
- G06V40/168—Human faces: feature extraction; face representation
- G06V40/172—Human faces: classification, e.g. identification
- H04N21/231—Content storage operation, e.g. caching movies for short-term storage
- H04N21/232—Content retrieval operation locally within server, e.g. reading video streams from disk arrays
- H04N21/23418—Analysing video elementary streams, e.g. detecting features or characteristics
- H04N21/234345—Reformatting operations performed only on part of the stream, e.g. a region of the image or a time segment
- H04N21/8455—Structuring of content involving pointers to the content, e.g. pointers to the I-frames of the video stream
- H04N21/8456—Structuring of content by decomposing it in the time domain, e.g. in time segments
- H04N21/8549—Creating video summaries, e.g. movie trailer
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a large-scale sports event content management system based on artificial intelligence technology, which comprises an acquisition module, a storage module, an automatic cataloging module, a retrieval module and a highlight generation module. The technical scheme of the invention has the following beneficial effects: video highlight collections with natural transitions can be generated; accurate marks for the video frame images are generated automatically, and the corresponding videos are retrieved by matching these marks against the automatically generated index catalog. This facilitates worldwide replay of sports events, news reporting and direct viewing by audiences, saves later-stage manpower and material resources, and reduces the overall cost.
Description
Technical Field
The invention relates to the field of video processing, in particular to a large-scale sports event content management system based on an artificial intelligence technology.
Background
Currently, in the International Broadcast Centre (IBC) of a large-scale event, content management systems serve the rights-holding broadcasters and news media. Conventional systems must manually sort and retrieve the recorded video material and ultimately deliver it to broadcasters, media outlets or journalists via hard-disk media. This not only requires them to travel to the event site to collect the material, or to wait for a scheduled satellite signal transmission, but is also inefficient and poorly timed.
In the prior art, the recording, storage and cataloging of large-scale event material are all done manually. During a live competition the content that can be cataloged by hand is very limited, and in a large-scale event 20 to 30 competitions may run simultaneously, so cataloging the live competition footage requires a great deal of manpower and time.
Also in the prior art, after a live broadcast ends, the highlight moments of athletes in a large international event are usually clipped manually by a dedicated content production team, and the resulting video is then delivered to broadcasters, media outlets or journalists via satellite or hard-disk media before being played on television or the Internet. This not only requires considerable manpower and editing equipment, but is also inefficient and lags behind in timeliness.
Disclosure of Invention
In view of the problems in the prior art, the invention provides a large-scale sports event content management system based on artificial intelligence technology, aiming to solve the problems of manually cataloging video content, manually editing video highlights and manually exchanging content: through artificial intelligence technology, programs are cataloged automatically, video highlights are generated automatically, and content is exchanged over the Internet.
The technical scheme specifically comprises the following steps:
a large-scale athletic event content management system based on artificial intelligence technology, comprising:
an acquisition module, used for acquiring live signals of multiple events, generating a recording and editing list according to the different competition venues and competition schedules of the large-scale event, acquiring original video material according to that list, and storing the material in a storage module;
an automatic cataloging module, connected to the storage module and used for cataloging the original video material automatically, so as to form automatic cataloging information for each piece of original video material; all the automatic cataloging information forms an index catalog of the original video material and is stored in the storage module;
a retrieval module, connected to the storage module and used for retrieving in the storage module, by means of the index catalog, according to the input information of a user, so as to feed back the corresponding original video material to the user.
Preferably, the automatic cataloging module includes:
an information base, in which attribute information of a plurality of athletes is pre-stored, the attribute information including each athlete's face feature information, represented country and the event type of the sports event to which the athlete belongs;
a segmentation unit, used for segmenting the original video material into multiple frames of continuous video frame images;
an extraction unit, connected to the segmentation unit and used for extracting the face features to be identified from each frame of the video frame images;
a feature recognition unit, connected to the information base and the extraction unit, used for matching, for each frame of the video frame images, the face features to be identified against the information base, so as to recognize the athletes included in the video frame image and extract the represented country and event type corresponding to each athlete as the represented country and event type of the video frame image; the feature recognition unit includes the athletes and the corresponding represented-country and event-type information of all the video frame images in a recognition result and outputs the recognition result;
a marking unit, connected to the feature recognition unit and the information base respectively, used for calculating, according to the recognition result, the appearance-ratio information of each athlete in the original video material, marking the original video material according to the appearance-ratio information and storing the marks in the information base, wherein each piece of original video material corresponds to at least one mark, and a mark includes at least an athlete appearing in the original video material, the country represented by that athlete and the event type information;
the index catalog includes all the marks of every piece of original video material.
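The cataloging flow described above (segmentation into frames, face-feature extraction, matching against the information base) can be sketched as follows. This is an illustrative sketch only: the information base contents, the distance threshold and all function names are assumptions rather than part of the claimed system, and a toy 2-D vector stands in for a real face feature.

```python
# Hypothetical sketch of the automatic cataloging flow: each video frame image
# carries a list of face features; each feature is matched against a pre-stored
# information base to recover (athlete, represented country, event type).

INFO_BASE = {
    # athlete -> attribute record (face feature, represented country, event type)
    "athlete_a": {"face_feature": (0.1, 0.9), "country": "CN", "event": "diving"},
    "athlete_b": {"face_feature": (0.8, 0.2), "country": "US", "event": "diving"},
}

def match_face(feature, threshold=0.25):
    """Return the best-matching athlete, or None when no entry is close enough."""
    best, best_dist = None, float("inf")
    for name, rec in INFO_BASE.items():
        ref = rec["face_feature"]
        dist = sum((a - b) ** 2 for a, b in zip(feature, ref)) ** 0.5
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist <= threshold else None

def catalog(frames):
    """frames: list of lists of face features (one inner list per frame image).
    Returns, per frame, the recognised (athlete, country, event) tuples."""
    result = []
    for faces in frames:
        recognised = []
        for feature in faces:
            name = match_face(feature)
            if name is not None:
                rec = INFO_BASE[name]
                recognised.append((name, rec["country"], rec["event"]))
        result.append(recognised)
    return result
```

A frame whose face feature matches nothing in the base yields an empty recognition list for that frame, which is the situation the matching-failure components below are designed to handle.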
Preferably, the marking unit specifically includes:
a counting component, used for counting, with the video frame image as the unit of calculation, the number of frames in which each athlete appears in the original video material;
an appearance-ratio calculating component, connected to the counting component and used for calculating the appearance ratio of each athlete in the original video material;
a ranking component, connected to the appearance-ratio calculating component and used for ranking the athletes by appearance ratio from high to low and outputting the several top-ranked athletes;
a marking component, connected to the ranking component and used for taking the athletes output by the ranking component, together with the represented country and event type information corresponding to each of them, as the marks of the original video material.
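A minimal sketch of the marking unit's counting, ratio and ranking steps, under the assumption that the recognition result is a per-frame list of (athlete, country, event) tuples; the data layout and the `top_n` default are illustrative, not specified by the patent.

```python
from collections import Counter

def mark_material(recognition_result, top_n=3):
    """Count per-frame appearances of each athlete, compute each athlete's
    appearance ratio over the whole material, rank high-to-low, and emit
    the top-N marks (athlete, represented country, event type, ratio)."""
    counts = Counter()
    attrs = {}
    for frame in recognition_result:
        for athlete, country, event in set(frame):  # count once per frame
            counts[athlete] += 1
            attrs[athlete] = (country, event)
    total_frames = len(recognition_result) or 1
    marks = []
    for athlete, n in counts.most_common()[:top_n]:
        country, event = attrs[athlete]
        marks.append({"athlete": athlete, "country": country,
                      "event": event, "ratio": n / total_frames})
    return marks
```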
Preferably, the feature recognition unit specifically includes:
a feature recognition component, used for matching the face features to be identified against the information base, so as to recognize the athletes included in the video frame image and extract the represented country and event type corresponding to each athlete as the represented country and event type of the video frame image;
a result judging component, connected to the feature recognition component and used for judging the recognition result output by the feature recognition component, and outputting a matching-failure prompt when the matching of a face feature to be identified in the video frame image fails;
a first recording component, connected to the result judging component and used for adding, upon the matching-failure prompt, the unmatched face feature to the information base to represent a new athlete;
a second recording component, connected to the first recording component and used for:
associating, when other successfully matched face features exist in the video frame image that contains the unmatched face feature, the unmatched face feature with the represented country and event type information of those successfully matched face features; and
associating, when no successfully matched face feature exists in the video frame image that contains the unmatched face feature, the unmatched face feature with the represented country and event type information of the successfully matched face features in the adjacent video frame images.
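The fallback association rule of the second recording component might be sketched as below. The frame representation is a hypothetical simplification (each frame records its matched attributes and a count of unmatched faces), and the outward search over neighbouring frames is one plausible reading of "adjacent frame".

```python
def associate_unmatched(frames):
    """frames: list of dicts {'matched': [(country, event), ...], 'unmatched': n}.
    For each frame with an unmatched face, borrow (country, event): prefer a
    matched face in the same frame, otherwise search adjacent frames outward.
    Returns one borrowed attribute pair per frame, or None when not needed."""
    out = []
    for i, frame in enumerate(frames):
        if frame["unmatched"] == 0:
            out.append(None)          # nothing to associate in this frame
            continue
        if frame["matched"]:
            out.append(frame["matched"][0])
            continue
        borrowed, offset = None, 1
        while borrowed is None and (i - offset >= 0 or i + offset < len(frames)):
            for j in (i - offset, i + offset):
                if 0 <= j < len(frames) and frames[j]["matched"]:
                    borrowed = frames[j]["matched"][0]
                    break
            offset += 1
        out.append(borrowed)
    return out
```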
Preferably, the feature recognition unit further includes:
a feature determining component, used for taking, when a video frame image includes several different faces, the face feature with the largest contour as the face feature to be identified.
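The largest-contour rule reduces to picking the face with the biggest detected region; a sketch under the assumption that each face comes with an axis-aligned bounding box:

```python
def pick_primary_face(faces):
    """faces: list of (feature, bounding_box) with bounding_box = (x, y, w, h).
    Return the feature whose box has the largest area, i.e. the
    'largest contour' rule for frames containing several faces."""
    if not faces:
        return None
    feature, _box = max(faces, key=lambda fb: fb[1][2] * fb[1][3])
    return feature
```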
Preferably, the automatic cataloging module further comprises:
the voice recognition unit is used for carrying out voice recognition on the original video material to obtain a corresponding voice recognition result;
and the abstract unit is respectively connected with the voice recognition unit and the information base and is used for taking part or all of the voice recognition results as abstract information of the original video materials and storing the abstract information in the information base.
Preferably, the highlight generation module specifically includes:
a first merging unit, used for merging, for each piece of original video material and according to the recognition result of the feature recognition unit, continuous multi-frame video frame images that belong to the same content in the index catalog into a video clip;
an expansion unit, connected to the first merging unit and used for expanding each video clip by a preset number of video frames before and after it, so as to generate a plurality of expanded video clips;
a sorting unit, connected to the expansion unit and used for sorting, for each piece of original video material, the expanded video clips that belong to the same content in the index catalog in time order, and outputting the sorting result;
an effect presetting unit, used for presetting a transition video segment that displays a preset transition animation effect;
a second merging unit, connected to the sorting unit and the effect presetting unit respectively, used for comparing, on the basis of the sorting result, the total number of frames of every two adjacent expanded video clips belonging to the same content in the index catalog with the number of frames of the transition video segment, and obtaining a comparison result;
when the comparison result shows that the total number of frames of the two adjacent expanded video clips is smaller than the number of frames of the transition video segment, the second merging unit does not insert the transition video segment between them;
and when the comparison result shows that the total number of frames of the two adjacent expanded video clips is not smaller than the number of frames of the transition video segment, the second merging unit inserts the transition video segment between the two adjacent expanded video clips, so as to form a video highlight collection of the same content in the index catalog.
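The highlight-generation steps (merging same-content frame runs, expanding each clip, keeping time order, conditionally inserting a transition) can be sketched as follows; the frame labels, pad size and transition length are illustrative assumptions, and expanded clips are allowed to overlap for simplicity.

```python
def merge_runs(frame_labels):
    """Merge consecutive frames with the same content label into clips
    [(label, start, end)] with end exclusive."""
    clips = []
    for i, label in enumerate(frame_labels):
        if clips and clips[-1][0] == label:
            clips[-1] = (label, clips[-1][1], i + 1)
        else:
            clips.append((label, i, i + 1))
    return clips

def build_highlight(frame_labels, content, pad=2, transition_len=4):
    """Expand each clip of `content` by `pad` frames on both sides, keep them
    in time order, and insert a transition segment between neighbours only
    when their combined length is at least the transition length."""
    total = len(frame_labels)
    clips = [(max(0, s - pad), min(total, e + pad))
             for label, s, e in merge_runs(frame_labels) if label == content]
    highlight = []
    for k, (s, e) in enumerate(clips):
        if k > 0:
            prev_s, prev_e = clips[k - 1]
            if (prev_e - prev_s) + (e - s) >= transition_len:
                highlight.append(("transition", transition_len))
        highlight.append(("clip", s, e))
    return highlight
```

The length check mirrors the comparison rule above: a transition animation is only worth inserting when the surrounding clips are long enough to carry it.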
Preferably, the input information of the user includes at least one of the athlete's information, the athlete's represented country, the athlete's face image and the event type information.
Preferably, the input mode of the user's input information includes at least one of text input, voice input and image input;
the retrieval module specifically includes:
a text processing unit, used for processing input information in text form into standard-format information and outputting it;
a voice processing unit, used for recognizing input information in voice form to obtain a voice recognition result, and then processing the voice recognition result into the standard-format information and outputting it;
an image processing unit, used for extracting and outputting the image features of input information in image form;
a retrieval unit, connected to the text processing unit, the voice processing unit and the image processing unit respectively, used for retrieving and matching in the storage module according to the standard-format information and/or the image features, so as to feed back the corresponding original video material to the user.
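The dispatch between text, voice and image input might be sketched as below. Voice input is assumed to have been transcribed to text by an upstream recogniser, and the index record layout is a hypothetical simplification.

```python
def normalize_query(kind, payload):
    """Map text/voice/image input to a standard query. Text and voice both
    reduce to keywords; image input yields a feature vector instead."""
    if kind in ("text", "voice"):
        return {"type": "keywords", "value": payload.strip().lower()}
    if kind == "image":
        return {"type": "feature", "value": payload}
    raise ValueError(f"unsupported input kind: {kind}")

def retrieve(index, query):
    """index: list of records {'id': ..., 'marks': set of keywords,
    'feature': tuple}. Keyword queries match against the marks; image
    queries return the nearest-neighbour record by feature distance."""
    if query["type"] == "keywords":
        return [r["id"] for r in index if query["value"] in r["marks"]]
    return [min(index,
                key=lambda r: sum((a - b) ** 2
                                  for a, b in zip(r["feature"], query["value"])))["id"]]
```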
The technical scheme of the invention has the following beneficial effects: video highlight collections with natural transitions can be generated; accurate marks for the video frame images are generated automatically, and the corresponding videos are retrieved by matching these marks against the automatically generated index catalog. This facilitates worldwide replay of sports events, news reporting and direct viewing by audiences, saves later-stage manpower and material resources, and reduces the overall cost.
Drawings
Embodiments of the present invention will now be described more fully with reference to the accompanying drawings. The drawings, however, are for illustration and description only and are not intended as a definition of the limits of the invention.
FIG. 1 is a block diagram of a system for managing content of a large-scale athletic event based on artificial intelligence technology in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of an automatic catalog module according to an embodiment of the present invention;
FIG. 3 is a block diagram of a marking unit according to an embodiment of the present invention;
FIG. 4 is a block diagram of a feature recognition unit according to an embodiment of the present invention;
FIG. 5 is a block diagram of a search module according to an embodiment of the present invention;
FIG. 6 is a block diagram of a highlight generation module according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
A large-scale athletic event content management system based on artificial intelligence technology, comprising:
an acquisition module 1, used for acquiring live signals of multiple events, generating recording and editing lists for the multiple venues according to the multiple event types in the large-scale sports event, acquiring original video material according to those lists, and storing the acquired original video material in a storage module 2;
an automatic cataloging module 3, connected to the storage module 2 and used for cataloging the original video material automatically, so as to form automatic cataloging information of the original video material with respect to the multiple event types, athletes and venues; all the automatic cataloging information forms an index catalog of the original video material and is stored in the storage module 2;
a retrieval module 4, connected to the storage module 2 and used for retrieving in the storage module 2, by means of the index catalog, according to the input information of a user, so as to feed back the corresponding original video material to the user.
Specifically, the automatic catalog module 3 includes:
an information base 31, in which attribute information of a plurality of athletes is pre-stored, the attribute information including each athlete's face feature information, represented country and the event type of the sports event to which the athlete belongs;
a segmentation unit 32, used for segmenting the original video material into multiple frames of continuous video frame images;
an extraction unit 33, connected to the segmentation unit 32 and used for extracting the face features to be identified from each frame of the video frame images;
a feature recognition unit 34, connected to the information base 31 and the extraction unit 33; the feature recognition unit 34 is used for matching, for each frame of the video frame images, the face features to be identified against the information base 31, so as to recognize the athletes included in the video frame image and extract the represented country and event type corresponding to each athlete as the represented country and event type of the video frame image; the feature recognition unit 34 includes the athletes and the corresponding represented-country and event-type information of all the video frame images in a recognition result and outputs the recognition result;
a marking unit 35, connected to the feature recognition unit 34 and the information base 31 respectively, used for calculating, according to the recognition result, the appearance-ratio information of each athlete in the original video material, marking the original video material according to the appearance-ratio information and storing the marks in the information base 31, wherein each piece of original video material corresponds to at least one mark, and a mark includes at least an athlete appearing in the original video material, the country represented by that athlete and the event type information;
the index catalog includes all the marks of every piece of original video material.
Specifically, a mark produced by the marking unit 35 includes the athlete's face feature information, the country represented by the athlete and the event type of the sports event to which the athlete belongs.
Specifically, the index catalog includes the athletes' face feature information and the event types of the sports events to which they belong.
In a large-scale sports event, athletes from many countries usually take part. The automatic cataloging module 3 in this technical scheme therefore takes the country represented by each athlete as one of the marked contents, which makes it convenient for later users such as media and rights-holding broadcasters to process the event content of different countries.
Specifically, the attribute information of the athletes may further include their nationality information, prize information and competition information; in basketball events, for example, it may further include the club to which an athlete belongs.
Specifically, the user may retrieve, for example, the keyword "champion", whereupon the retrieval module 4 provides the user with the relevant videos of all champion athletes.
The specific contents of the information base are not limited in this technical scheme, but those skilled in the art will recognize that other contents of the information base extended therefrom fall within the protection scope of this technical scheme.
Specifically, the marking unit 35 specifically includes:
a counting section 351 for calculating the number of occurrences of each athlete included in the original video material in units of calculation of video frame images;
a duty ratio calculating unit 352 connected to the counting unit 351 for calculating duty ratio information of each athlete appearing in the original video material;
a ranking component 353, coupled to the duty cycle calculation component 352, for ranking the athlete from high to low in duty cycle information and extracting a plurality of top ranked athlete outputs;
a marking component 354, connected to the ranking component 353, configured to take the plurality of athletes output by the ranking component 353, together with the country represented by each athlete and the corresponding event type information, as marks of the original video material.
In general, one original video material contains a plurality of athletes; therefore, the marks applied to the original video material by the marking unit 35 cover a plurality of event types, a plurality of countries, and a plurality of athletes.
Further, the counting component 351 counts the number of occurrences of the plurality of athletes, the duty ratio calculating component 352 calculates the duty ratio of each athlete in the original video material, and the ranking component 353 ranks the duty ratio information from high to low.
In one embodiment, the ranking component 353 can take a number of top-ranked athletes, together with the country and event type information represented by each of those athletes, as the marks of the original video material;
or
In another embodiment, the ranking component 353 can take the athletes whose duty ratio information exceeds a ratio threshold, together with the country and event type information represented by each of those athletes, as the marks of the original video material.
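The marking flow described above can be sketched as follows. This is a minimal illustration, not part of the patent text: the function name, data layout, and the `top_n`/`ratio_threshold` parameters are all illustrative stand-ins for the counting component 351, duty ratio calculating component 352, ranking component 353, and marking component 354.

```python
from collections import Counter

def mark_material(frame_athletes, info_base, top_n=3, ratio_threshold=None):
    """Count per-frame appearances, compute duty ratios, rank, and emit
    (athlete, country, event type) marks for the original video material."""
    counts = Counter()
    for athletes in frame_athletes:        # one entry per video frame image
        counts.update(set(athletes))       # count each athlete once per frame
    total_frames = len(frame_athletes)
    ratios = {a: n / total_frames for a, n in counts.items()}
    ranked = sorted(ratios, key=ratios.get, reverse=True)
    if ratio_threshold is not None:        # second embodiment: ratio threshold
        selected = [a for a in ranked if ratios[a] > ratio_threshold]
    else:                                  # first embodiment: top-N athletes
        selected = ranked[:top_n]
    return [(a, info_base[a]["country"], info_base[a]["event_type"])
            for a in selected]
```

Either embodiment is then a one-argument change: pass `top_n` for the top-ranked variant or `ratio_threshold` for the threshold variant.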
Specifically, the feature recognition unit 34 specifically includes:
a feature recognition component 341, configured to perform matching in the information base 31 according to the face feature to be recognized, so as to recognize the athlete included in the video frame image, and further to extract the country and event type information corresponding to that athlete as the country and event type of the video frame image;
a result judging component 342, connected to the feature recognition component 341, configured to judge the recognition result output by the feature recognition component 341 and to output a matching failure prompt when matching of a face feature to be recognized in the video frame image fails;
a first recording component 343, connected to the result judging component 342, configured to add the face feature that failed to match to the information base 31, according to the matching failure prompt, so as to characterize a new athlete;
a second recording component 344, connected to the first recording component 343, configured to:
when other successfully matched face features exist in the video frame image containing the failed face feature, associate the failed face feature with the country and event type information represented by the successfully matched face features; and
when no successfully matched face feature exists in the video frame image containing the failed face feature, associate the failed face feature with the country and event type information represented by the successfully matched face features in the adjacent video frame images.
In one embodiment, only one athlete is present in the video frame image, and the feature recognition component 341 performs matching in the information base 31 according to the face feature to be recognized;
further, the result judging component 342 judges the recognition result output by the feature recognition component 341; if an athlete participating in a large-scale sports event for the first time appears and that athlete's face feature information does not exist in the information base 31, the first recording component 343 adds the athlete's face features to the information base 31.
In one embodiment, where one or more matching-failed athletes and at least one matching-successful athlete appear in the video frame image, the second recording component 344 associates the matching-failed athletes with the country and event type information represented by the matching-successful athlete.
In another embodiment, where one or more matching-failed athletes appear in a video frame image and no matching-successful athlete appears, the second recording component 344 associates the matching-failed athletes with the country and event type information represented by the matching-successful athletes in the adjacent video frame images.
In another embodiment, no athlete appears in the video frame image, i.e. the image shows the stadium environment or the like; the mark of that video frame image is then the country and event type information represented by the athletes successfully matched in the adjacent video frame images.
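The fallback logic of components 341-344 can be sketched as below. This is an illustrative sketch only: `match` stands in for the actual information-base lookup, and the simple nearest-neighbor fill is one plausible reading of "adjacent frame" in the text.

```python
def label_frames(frames_features, match):
    """Label each frame with (country, event_type): use a successful match
    in the frame itself, otherwise borrow the label of an adjacent frame."""
    results = []                           # one label (or None) per frame
    for features in frames_features:
        hit = next((m for m in (match(f) for f in features) if m), None)
        results.append(hit)                # None marks a frame with no match
    # second pass: fill unlabeled frames from the nearest adjacent frame
    for i, r in enumerate(results):
        if r is None:
            neighbors = [results[j] for j in (i - 1, i + 1)
                         if 0 <= j < len(results) and results[j]]
            results[i] = neighbors[0] if neighbors else None
    return results
```

In this sketch a frame whose every face fails to match (or that contains no athlete at all) inherits its label from a neighboring frame, matching the three embodiments above.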
Specifically, the feature recognition unit 34 further includes:
the feature determining unit 345 is configured to determine, when a plurality of different face features are included in the video frame image, face feature information having the largest contour as a face feature to be identified.
In one embodiment, where there are multiple athletes in the video frame image, the feature determination component 345 takes the face feature information with the largest profile as the face feature to be identified.
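The largest-contour rule of the feature determining component 345 reduces to a single selection over the detected faces. A minimal sketch, assuming each detected face carries a bounding box with illustrative `w`/`h` fields (the patent does not specify the representation):

```python
def pick_face_to_identify(faces):
    """Among several detected faces, keep the one with the largest contour
    (approximated here by bounding-box area) as the face to be identified."""
    return max(faces, key=lambda f: f["w"] * f["h"])
```

Using bounding-box area as a proxy for contour size is an assumption; a real implementation could equally compare contour pixel areas from a face detector.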
Specifically, the automatic cataloging module 3 further includes:
a voice recognition unit 36, configured to perform voice recognition on the original video material, so as to obtain a corresponding voice recognition result;
the summarization unit 37 is connected to the voice recognition unit 36 and the information base 31, respectively, and is configured to store part or all of the voice recognition results in the information base 31 as summary information of the original video material.
Specifically, at this time, the index catalog includes face feature information of the athlete, event type information of the sports event to which the athlete belongs, and summary information.
Further, the retrieval module 4 compares the retrieval words input by the user with the index catalog and presents to the user the original video materials whose voice recognition results contain the retrieval words.
Specifically, the automatic cataloging module 3 can use part or all of the voice recognition results from the summarization unit 37 and/or the marks from the marking unit 35 as the index catalog of the original video material.
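An index catalog combining marks with transcript words, as described above, can be sketched as an inverted index. The function names and field layout are illustrative, not from the patent:

```python
def build_index(materials):
    """For each original video material, index its marks (athlete, country,
    event type) and the words of its speech-recognition summary."""
    index = {}
    for material_id, info in materials.items():
        terms = set()
        for athlete, country, event_type in info["marks"]:
            terms.update(t.lower() for t in (athlete, country, event_type))
        terms.update(info.get("summary", "").lower().split())
        for term in terms:
            index.setdefault(term, []).append(material_id)
    return index

def retrieve(index, query):
    """Compare a search word against the index catalog (retrieval module 4)."""
    return index.get(query.lower(), [])
```

A query such as "champion" then returns every material whose summary contains that word, consistent with the retrieval example given earlier.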
Specifically, the large-scale sports event content management system further includes:
the collection generating module 5 is respectively connected with the storage module 2 and the automatic cataloging module 3; it is used for generating various video collections according to the index catalog, and it stores each video collection in the storage module 2 as an original video material.
Specifically, the original video material includes live video and video highlights.
Further, the collection generating module 5 may generate video collections for a single athlete or multiple athletes, and/or a single event type or multiple event types, and/or a single country or multiple countries.
In a preferred embodiment, the collection generating module 5 specifically comprises:
a first merging unit 51, configured to merge, for each original video material, consecutive multi-frame video frame images belonging to the same content in the index directory into one video segment according to the identification result of the feature identification unit 34;
an expansion unit 52 connected to the first merging unit 51 for expanding a preset number of video frames before and after the video clips to generate a plurality of expanded video clips;
a sorting unit 53, connected to the expansion unit 52, configured to sort, for each original video material, the expanded video segments belonging to the same content in the index catalog in time order, and to output a sorting result;
the effect presetting unit 54, wherein a transition video segment is preset in the effect presetting unit 54, and the transition video segment is used for displaying a preset transition animation effect;
the second merging unit 55 is respectively connected with the sorting unit 53 and the effect presetting unit 54, and is configured to compare, on the basis of the sorting result, the sum of the frame numbers of every two adjacent expanded video segments belonging to the same content in the index catalog with the frame number of the transition video segment, so as to obtain a comparison result and form a video highlight belonging to the same content in the index catalog;
when the comparison result indicates that the sum of the frame numbers of the two adjacent expanded video segments is less than the frame number of the transition video segment, the second merging unit 55 does not add a transition video segment between the two adjacent expanded video segments;
when the comparison result indicates that the sum of the frame numbers of the two adjacent expanded video segments is not less than the frame number of the transition video segment, the second merging unit 55 inserts a transition video segment between the two adjacent expanded video segments, thus forming a video highlight belonging to the same content in the index catalog.
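The merge-expand-sort-transition pipeline of units 51-55 can be sketched over frame indices. This follows the comparison rule exactly as stated (a transition is inserted only when the two adjacent expanded segments are together at least as long as the transition segment); the parameter values and the `"TRANSITION"` placeholder are illustrative.

```python
def assemble_highlight(frame_flags, expand=5, transition_len=30):
    """Merge consecutive flagged frames into clips (unit 51), expand each
    clip by `expand` frames on both sides (unit 52), keep time order
    (unit 53), and insert transition markers where they fit (unit 55)."""
    # 1. merge runs of consecutive frames of the same index content into clips
    clips, start = [], None
    for i, flagged in enumerate(frame_flags + [False]):   # sentinel closes runs
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            clips.append((start, i - 1))
            start = None
    # 2. expand each clip by a preset number of frames before and after
    clips = [(max(0, s - expand), e + expand) for s, e in clips]
    # 3./4. clips are already sorted by time; add transitions between pairs
    timeline = []
    for k, (s, e) in enumerate(clips):
        if k > 0:
            prev_len = clips[k - 1][1] - clips[k - 1][0] + 1
            cur_len = e - s + 1
            if prev_len + cur_len >= transition_len:      # rule as stated
                timeline.append("TRANSITION")
        timeline.append((s, e))
    return timeline
```

A real implementation would splice actual video frames and render the preset transition animation; here frame-index tuples stand in for the segments.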
Specifically, the input information of the user includes at least one of information of the athlete, a face image of the athlete, and event type information.
Specifically, the user can search through the athlete's name, winning title, photograph, event type, country represented, etc.
Specifically, the first merging unit 51 merges a plurality of consecutive video frame images to form time-continuous video segments of the same content in the index catalog.
Further, the second merging unit 55 merges the plurality of video segments, inserting transition video segments between them, to form a video highlight of the same content in the index catalog that covers a plurality of scenes discontinuous in time.
In a preferred embodiment, the input mode of the input information of the user comprises at least one of text input, voice input and image input;
the retrieving module 4 specifically includes:
a word processing unit 41 for processing the input information of the word input mode into standard format information and outputting the standard format information;
a voice processing unit 42, configured to recognize input information of a voice input mode to obtain a voice recognition result, and then process the voice recognition result into the standard format information and output the standard format information;
an image processing unit 43 for extracting and outputting image features of input information of an image input;
the retrieval unit 44 is respectively connected with the word processing unit 41, the voice processing unit 42 and the image processing unit 43, and is used for retrieving and matching in the storage module according to the standard format information and/or the image characteristics so as to feed back the corresponding original video material to the user.
Specifically, the user can perform retrieval using text input, voice input, and image input.
Further, the user inputs search contents, and the search module 4 performs search.
Further, the retrieval unit 44 retrieves and matches the index directory based on standard format information and/or image features of the retrieved content.
Further, the retrieval unit 44 presents the matched original video material to the user.
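The three input paths of the retrieval module 4 (units 41-43) all reduce user input to a comparable form before the retrieval unit 44 matches it against the index catalog. A sketch under stated assumptions: `fake_asr` and `fake_face_features` are hypothetical stand-ins for a real speech recognizer and face feature extractor, which the patent does not specify.

```python
def fake_asr(audio):                        # stand-in speech recognizer
    return audio.get("transcript", "")

def fake_face_features(image):              # stand-in face feature extractor
    return image.get("features", [])

def to_standard_format(raw, mode):
    """Word processing unit 41, voice processing unit 42, and image
    processing unit 43: normalize text/voice/image input for retrieval."""
    if mode == "text":
        return {"kind": "terms", "value": raw.strip().lower()}
    if mode == "voice":
        transcript = fake_asr(raw)          # ASR first, then text processing
        return {"kind": "terms", "value": transcript.strip().lower()}
    if mode == "image":
        return {"kind": "features", "value": fake_face_features(raw)}
    raise ValueError(f"unsupported input mode: {mode}")
```

Text and voice inputs converge on the same standard term format, while image input yields face features for matching against the information base, mirroring the "standard format information and/or image characteristics" wording above.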
The technical scheme of the invention has the following beneficial effects: it can generate various video collections with natural transitions; it automatically generates accurate marks for video frame images and retrieves the corresponding videos by matching retrieval input against the automatically generated index catalog, thereby facilitating worldwide replay of sports events, news reporting, and direct viewing by audiences, saving later-stage manpower and material resources, and reducing overall cost.
The foregoing is merely illustrative of the present invention and is not intended to limit its embodiments and scope; those skilled in the art should appreciate that equivalent substitutions and obvious variations made using the description and drawings of the present invention are intended to be included within its scope.
Claims (6)
1. A large-scale athletic event content management system based on artificial intelligence technology, comprising:
an acquisition module, configured to acquire live broadcast signals of multiple event channels and to acquire and store original video materials covering a plurality of athletes and a plurality of venues, according to the event types in the large-scale sports event and in accordance with an acquisition and editing list of the events;
the automatic cataloging module is connected to the storage module and used for automatically cataloging according to the original video materials to form automatic cataloging information aiming at different original video materials, and all the automatic cataloging information forms an index catalog of the original video materials and is stored in the storage module;
the retrieval module is connected to the storage module and is used for retrieving in the storage module by adopting the index catalog according to the input information of the user, so as to feed back the corresponding original video material to the user;
the automatic cataloging module comprises:
the information base is pre-stored with attribute information of a plurality of athletes, wherein the attribute information comprises face characteristic information of the athletes, a represented country and event type information of an affiliated sports event;
the segmentation unit is used for segmenting the original video material to obtain multi-frame continuous video frame images;
the extraction unit is connected with the segmentation unit and is used for extracting corresponding face features to be identified from each frame of video frame image respectively;
the characteristic recognition unit is connected with the information base and the extraction unit, and is used for matching in the information base according to the face characteristics to be recognized for each frame of the video frame images so as to recognize and obtain the athlete included in the video frame images, and further extracting the representing country and the event type information corresponding to the athlete as the representing country and the event type of the video frame images, and the characteristic recognition unit is used for including all the athlete and the representing country and the event type information corresponding to the video frame images in a recognition result and outputting the recognition result;
the marking unit is respectively connected with the characteristic identification unit and the information base and is used for calculating the duty ratio information of the athlete corresponding to the original video material according to the identification result, marking the original video material according to the duty ratio information and storing the mark in the information base, wherein one original video material corresponds to at least one mark respectively, and the mark at least comprises the athlete corresponding to the original video material, the country represented by the athlete and the event type information;
the duty ratio information is the number of times of occurrence of each athlete in the original video material;
said index directory includes all of said markers for each of said original video material;
the marking unit specifically includes:
a counting unit configured to calculate the number of occurrences of each of the athletes included in the original video material with the video frame image as a calculation unit;
the duty ratio calculating part is connected with the counting part and is used for calculating and obtaining the duty ratio information of each athlete in the original video material;
a ranking component, connected with the duty ratio calculating component, for ranking the athletes from high to low by the duty ratio information and extracting and outputting a plurality of top-ranked athletes;
a marking component connected with the ranking component and used for taking the plurality of athletes output by the ranking component and the representing country and the event type information corresponding to each athlete as the marks of the original video material;
when no athlete appears in the video frame images, the marking component takes the country of the representation corresponding to the athlete successfully matched in the adjacent video frame image and the event type information as the mark of the video frame images;
the large-scale sports event content management system further includes:
the video collection generation module is respectively connected with the storage module and the automatic cataloging module, and is used for generating various video collection according to the index catalogue, and the video collection is used as the original video material and stored in the storage module;
the collection generating module specifically comprises:
the first merging unit is used for merging continuous multi-frame video frame images which belong to the same content in the index catalog into a video fragment aiming at each original video material according to the identification result of the characteristic identification unit;
an expansion unit, connected to the first merging unit, for expanding a preset number of video frames before and after the video clips to generate a plurality of expanded video clips;
the sorting unit is connected with the expansion unit and is used for sorting the expansion video fragments belonging to the same content in the index catalog according to a time sequence for each original video material and outputting a sorting result;
the effect presetting unit is used for presetting a transitional video segment, wherein the transitional video segment is used for displaying a preset transitional animation effect;
the second merging unit is respectively connected with the sorting unit and the effect presetting unit and is used for comparing the sum of the frames of every two adjacent extended video segments belonging to the same content in the index catalog with the frames of the transition video segment on the basis of the sorting result and obtaining a comparison result;
when the comparison result shows that the sum of the frames of two adjacent extended video segments is smaller than the frames of the transition video segment, the second merging unit does not add the transition video segment between the two adjacent extended video segments, and forms video highlights belonging to the same content in the index catalog;
and when the comparison result shows that the sum of the frames of the two adjacent extended video segments is not less than the frames of the transition video segment, the second merging unit inserts the transition video segment between the two adjacent extended video segments and forms video highlights belonging to the same content in the index catalog.
2. The large sports event content management system according to claim 1, wherein the feature recognition unit specifically includes:
the feature recognition component is used for matching in the information base according to the face features to be recognized so as to recognize and obtain the athlete included in the video frame image, and further extracting the country represented by the athlete and the event type information corresponding to the athlete as the country and the event type of the video frame image;
the result judging part is connected with the feature recognition part and is used for judging the recognition result output by the feature recognition part and outputting a matching failure prompt when the matching of the face features to be recognized corresponding to the video frame image fails;
the first recording part is connected with the result judging part and is used for adding the face features to be identified which are failed to be matched into the information base according to the prompt of failed matching so as to represent the new athlete;
a second recording unit connected to the first recording unit for:
when other successfully matched face features to be identified exist in the video frame image containing the face feature that failed to match, associating the failed face feature with the country represented and the event type information corresponding to the successfully matched face features; and
when no successfully matched face feature to be identified exists in the video frame image containing the face feature that failed to match, associating the failed face feature with the country represented and the event type information corresponding to the successfully matched face features in the adjacent video frame images.
3. The large athletic event content management system of claim 1, wherein the feature identification unit further comprises:
and the feature determining component is used for determining the face feature information with the largest outline as the face feature to be identified when a plurality of different face features are included in the video frame image.
4. The large athletic event content management system of claim 1, wherein the automated cataloging module further comprises:
the voice recognition unit is used for carrying out voice recognition on the original video material to obtain a corresponding voice recognition result;
and the abstract unit is respectively connected with the voice recognition unit and the information base and is used for taking part or all of the voice recognition results as abstract information of the original video materials and storing the abstract information in the information base.
5. The large sports event content management system according to claim 1, wherein the input information of the user includes at least one of information of the athlete, a country of representation of the athlete, a face image of the athlete, and the event type information.
6. The large-scale sports event content management system according to claim 5, wherein the input means of the input information of the user includes at least one of text input, voice input and image input;
the retrieving module specifically includes:
the word processing unit is used for processing the input information of the word input mode into standard format information and outputting the standard format information;
the voice processing unit is used for recognizing the input information of the voice input mode to obtain a voice recognition result, and then processing the voice recognition result into the standard format information and outputting the standard format information;
the image processing unit is used for extracting and outputting image characteristics of input information input by the image;
and the retrieval unit is respectively connected with the word processing unit, the voice processing unit and the image processing unit and is used for retrieving and matching in the storage module according to the standard format information and/or the image characteristics, so as to feed back the corresponding original video material to the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010980580.0A CN112291574B (en) | 2020-09-17 | 2020-09-17 | Large-scale sports event content management system based on artificial intelligence technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112291574A CN112291574A (en) | 2021-01-29 |
CN112291574B true CN112291574B (en) | 2023-07-04 |
Family
ID=74421053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010980580.0A Active CN112291574B (en) | 2020-09-17 | 2020-09-17 | Large-scale sports event content management system based on artificial intelligence technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112291574B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114630142B (en) * | 2022-05-12 | 2022-07-29 | 北京汇智云科技有限公司 | Large-scale sports meeting rebroadcast signal scheduling method and broadcasting production system |
CN117132925B (en) * | 2023-10-26 | 2024-02-06 | 成都索贝数码科技股份有限公司 | Intelligent stadium method and device for sports event |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101296322A (en) * | 2007-04-27 | 2008-10-29 | 新奥特硅谷视频技术有限责任公司 | Sports event logging system |
TW201304519A (en) * | 2011-07-04 | 2013-01-16 | Gorilla Technology Inc | Automatic media editing apparatus, editing method, broadcasting method and system for broadcasting the same |
JP2013114596A (en) * | 2011-11-30 | 2013-06-10 | Kddi Corp | Image recognition device and method |
CN109710806A (en) * | 2018-12-06 | 2019-05-03 | 苏宁体育文化传媒(北京)有限公司 | The method for visualizing and system of football match data |
CN110188241A (en) * | 2019-06-04 | 2019-08-30 | 成都索贝数码科技股份有限公司 | A kind of race intelligence manufacturing system and production method |
CN110401878A (en) * | 2019-07-08 | 2019-11-01 | 天脉聚源(杭州)传媒科技有限公司 | A kind of video clipping method, system and storage medium |
CN110555136A (en) * | 2018-03-29 | 2019-12-10 | 优酷网络技术(北京)有限公司 | Video tag generation method and device and computer storage medium |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090210395A1 (en) * | 2008-02-12 | 2009-08-20 | Sedam Marc C | Methods, systems, and computer readable media for dynamically searching and presenting factually tagged media clips |
US20110173235A1 (en) * | 2008-09-15 | 2011-07-14 | Aman James A | Session automated recording together with rules based indexing, analysis and expression of content |
CN102572293A (en) * | 2010-12-16 | 2012-07-11 | 新奥特(北京)视频技术有限公司 | Field recording-based retrieval system |
CN102650993A (en) * | 2011-02-25 | 2012-08-29 | 北大方正集团有限公司 | Index establishing and searching methods, devices and systems for audio-video file |
CN103049459A (en) * | 2011-10-17 | 2013-04-17 | 天津市亚安科技股份有限公司 | Feature recognition based quick video retrieval method |
US8789120B2 (en) * | 2012-03-21 | 2014-07-22 | Sony Corporation | Temporal video tagging and distribution |
CN102799684B (en) * | 2012-07-27 | 2015-09-09 | 成都索贝数码科技股份有限公司 | The index of a kind of video and audio file cataloguing, metadata store index and searching method |
US9094692B2 (en) * | 2012-10-05 | 2015-07-28 | Ebay Inc. | Systems and methods for marking content |
CN103530652B (en) * | 2013-10-23 | 2016-09-14 | 北京中视广信科技有限公司 | A kind of video categorization based on face cluster, search method and system thereof |
WO2015081303A1 (en) * | 2013-11-26 | 2015-06-04 | Double Blue Sports Analytics, Inc. | Automated video tagging with aggregated performance metrics |
CN103995826A (en) * | 2014-04-09 | 2014-08-20 | 浙江图讯科技有限公司 | Automatic cataloguing method for safety production supervision and administration governmental information |
US20150312652A1 (en) * | 2014-04-24 | 2015-10-29 | Microsoft Corporation | Automatic generation of videos via a segment list |
US10536758B2 (en) * | 2014-10-09 | 2020-01-14 | Thuuz, Inc. | Customized generation of highlight show with narrative component |
US20170228600A1 (en) * | 2014-11-14 | 2017-08-10 | Clipmine, Inc. | Analysis of video game videos for information extraction, content labeling, smart video editing/creation and highlights generation |
EP3262643A4 (en) * | 2015-02-24 | 2019-02-20 | Plaay, LLC | System and method for creating a sports video |
US20180301169A1 (en) * | 2015-02-24 | 2018-10-18 | Plaay, Llc | System and method for generating a highlight reel of a sporting event |
JP6402653B2 (en) * | 2015-03-05 | 2018-10-10 | オムロン株式会社 | Object recognition device, object recognition method, and program |
US10430664B2 (en) * | 2015-03-16 | 2019-10-01 | Rohan Sanil | System for automatically editing video |
JP7033587B2 (en) * | 2016-06-20 | 2022-03-10 | ピクセルロット エルティーディー. | How and system to automatically create video highlights |
US10681391B2 (en) * | 2016-07-13 | 2020-06-09 | Oath Inc. | Computerized system and method for automatic highlight detection from live streaming media and rendering within a specialized media player |
CN106354861B (en) * | 2016-09-06 | 2019-09-20 | 中国传媒大学 | Film label automatic indexing method and automatic indexing system |
US10417500B2 (en) * | 2017-12-28 | 2019-09-17 | Disney Enterprises, Inc. | System and method for automatic generation of sports media highlights |
CN109257649B (en) * | 2018-11-28 | 2021-12-24 | 维沃移动通信有限公司 | Multimedia file generation method and terminal equipment |
CN109886165A (en) * | 2019-01-23 | 2019-06-14 | 中国科学院重庆绿色智能技术研究院 | A kind of action video extraction and classification method based on moving object detection |
CN109657100B (en) * | 2019-01-25 | 2021-10-29 | 深圳市商汤科技有限公司 | Video collection generation method and device, electronic equipment and storage medium |
CN110502661A (en) * | 2019-07-08 | 2019-11-26 | 天脉聚源(杭州)传媒科技有限公司 | A kind of video searching method, system and storage medium |
CN110418076A (en) * | 2019-08-02 | 2019-11-05 | 新华智云科技有限公司 | Video Roundup generation method, device, electronic equipment and storage medium |
CN110532426A (en) * | 2019-08-27 | 2019-12-03 | 新华智云科技有限公司 | It is a kind of to extract the method and system that Multi-media Material generates video based on template |
CN111046235B (en) * | 2019-11-28 | 2022-06-14 | 福建亿榕信息技术有限公司 | Method, system, equipment and medium for searching acoustic image archive based on face recognition |
CN110856039A (en) * | 2019-12-02 | 2020-02-28 | 新华智云科技有限公司 | Video processing method and device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112291574A (en) | 2021-01-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||