KR101758824B1 - Device for conversational tagging based on media content and method thereof - Google Patents


Info

Publication number
KR101758824B1
Authority
KR
South Korea
Prior art keywords
content
user
media content
tagging
objects
Prior art date
Application number
KR1020150113238A
Other languages
Korean (ko)
Other versions
KR20170019180A (en)
Inventor
권용무
사르키 루비나
장하은
Original Assignee
한국과학기술연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국과학기술연구원 filed Critical 한국과학기술연구원
Priority to KR1020150113238A priority Critical patent/KR101758824B1/en
Publication of KR20170019180A publication Critical patent/KR20170019180A/en
Application granted granted Critical
Publication of KR101758824B1 publication Critical patent/KR101758824B1/en

Classifications

    • G06Q50/30
    • G06F17/30038
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Multimedia (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Operations Research (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates to an interactive tagging apparatus based on media content, comprising a data storage for storing media content and a media tagging unit for tagging a first user content to the media content based on user input. Specifically, the media tagging unit sets a first object in the media content as the user content provider, and a second object in the media content as the user content receiver.

Description

TECHNICAL FIELD [0001] The present invention relates to an interactive tagging device based on media content and a method thereof.

The present invention relates to an apparatus and method for tagging a user voice or image to media content, and more particularly to a tagging apparatus and method in which a provider and a recipient of a voice or image to be tagged are displayed.

Recently, the use of smart devices by elderly people has been increasing. According to one survey, seniors spend the most time on smart devices viewing family photos.

However, elderly people whose memory is gradually diminishing may not remember the names, ages, and family relationships of the family members in a photograph.

Conventional techniques for tagging content (text or voice) to a photograph make it possible to store and reproduce a voice message in a family photograph. However, the tagged content cannot express at all who in the photograph it concerns, and the relationship between the respective pieces of tagged content cannot be expressed at all.

U.S. Patent Publication No. US 2013/0346068 A1
U.S. Patent Publication No. US 2014/0086458 A1

In order to solve the above-mentioned problems, the present invention displays the relationship between the objects in media content such as an image, together with the provider and the receiver of the content to be tagged. In addition, when a plurality of contents are tagged, the tagged contents can be reproduced in an interactive manner by arranging them in chronological order.

A method for interactive tagging based on media content according to an embodiment of the present invention comprises: displaying media content; and tagging, based on user input, a first user content for a first object and a second object in the media content, wherein the first object is a provider of the first user content and the second object is a receiver of the first user content.

In the interactive tagging method based on the media content, tagging the first user content may comprise: authenticating a user; setting, by the authenticated user, the first object as a user content provider; setting the second object as a user content receiver; and receiving the first user content from a user terminal.

In addition, in the interactive tagging method based on the media content, the first object may be an object related to the authenticated user, and after the user authentication, the first object may be displayed to be distinguished from other objects in the media content.

Also, in the interactive tagging method based on the media content, each of the first object and the second object may be composed of one or more objects.

Also, in the interactive tagging method based on the media content, the first object and the second object may be a person.

In the interactive tagging method of the media content, the media content may be a photograph or a moving picture, and the first user content may be a moving picture or a voice.

The interactive tagging method may further include executing the tagged first user content based on a user input, wherein, during the execution of the first user content, the provider and the recipient of the first user content may be displayed together.

In addition, the interactive tagging method may further include displaying the provider, the recipient, and the meta information of the first user content.

In addition, in the interactive tagging method based on the media content, the step of displaying the media content may include displaying the relationship between at least one object among the objects in the media content and the remaining objects, overlapped on the remaining objects.

In the interactive tagging method based on the media content, executing the first user content may include reproducing the voice of the first user content or displaying the content of the voice as text.

A computer-readable recording medium according to an embodiment of the present invention may store a program for executing the above-described interactive tagging method.

The interactive tagging device based on media content according to an embodiment of the present invention includes a data storage unit for storing media content and a media tagging unit for tagging the first user content to the media content based on user input. The media tagging unit may set a first object in the media content as a user content provider and a second object in the media content as a user content receiver.

Also, in the interactive tagging device based on the media content, the first object or the second object may be composed of one or more objects.

In addition, in the interactive tagging device based on the media content, the first object and the second object may be a person.

In addition, in the interactive tagging device based on the media content, the media content may be a photograph or a moving picture, and the first user content may be a moving picture or a voice.

The interactive tagging device may further include a media content display unit displaying the media content and a tagging information display unit displaying information related to tagging of the first user content.

The interactive tagging device may further include a content execution unit that executes the first user content in response to a user input to the tagging information display unit.

In the interactive tagging apparatus based on the media content, the media content display unit may display the provider and the recipient of the first user content together while the first user content is being executed.

In the interactive tagging device based on the media content, the media content display unit may display a relationship between at least one object and at least one remaining object among the objects in the media content by overlapping the remaining objects.

Also, in the interactive tagging apparatus based on the media content, the tagging information display unit may display the provider, the recipient, and the meta information of the first user content.

In the interactive tagging device based on the media content, the data storage unit may store at least one of a contact, a memo, and an album for each object in the media content, and the content execution unit may provide the user, in response to a user input for each object, with an execution option for one or more of: a phone call or text transmission, viewing or editing a memo, or viewing or editing an album.

The interactive tagging apparatus may further include a user authentication unit for performing user authentication for granting a tagging right to the media content, wherein after the user authentication the media content display unit displays the first object so as to be distinguished from other objects in the media content, the first object being an object related to the authenticated user.

Further, in the interactive tagging device based on the media content, the content executing section may reproduce the voice of the first user content or display the content of the voice as text.

According to the present invention, a plurality of user contents can be tagged to one media content, and the provider and the recipient of each tagged user content can be expressed through the media content.

Accordingly, the user can view the user content in an interactive manner between the objects displayed in the media content, and can confirm the relationship information with respect to the object represented in the media content.

FIG. 1 is a configuration diagram of an interactive tagging device 1 based on media content according to an embodiment of the present invention.
FIG. 2 is a configuration diagram of the screen configuration unit 40.
FIG. 3 is a diagram illustrating media content displayed through an interactive tagging device based on media content according to an exemplary embodiment of the present invention.
FIGS. 4A and 4B are diagrams illustrating options for user content tagging.
FIG. 5 is an exemplary diagram showing a state of inputting a voice as user content.
FIGS. 6 to 10 illustrate an interactive tagging interface based on media content according to an embodiment of the present invention.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Singular expressions include plural expressions unless the context clearly dictates otherwise. In this specification, terms such as "comprises" or "having" specify the presence of stated features, numbers, steps, operations, components, parts, or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.

Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries should be construed as having meanings consistent with their meaning in the context of the relevant art, and are not to be construed in an ideal or overly formal sense unless expressly defined herein. Like reference numerals in the drawings denote like elements.

In the following description, well-known functions or constructions are not described in detail to avoid unnecessarily obscuring the subject matter of the present invention. In addition, the size of each component in the drawings may be exaggerated for the sake of explanation and does not mean the size actually applied.

Embodiments described herein may be wholly hardware, partially hardware and partially software, or entirely software. In this specification, a "unit," "module," "device," or "system" refers to a computer-related entity such as hardware, a combination of hardware and software, or software. For example, it may be, but is not limited to, a running process, a processor, an object, an executable, a thread of execution, a program, and/or a computer. Both an application running on a computer and the computer itself may correspond to a unit, module, device, or system of the present specification.

Embodiments have been described with reference to the flowcharts shown in the drawings. While the methods are shown and described as a series of blocks for simplicity, the invention is not limited to the order of the blocks: some blocks may occur in a different order from, or concurrently with, other blocks than shown and described herein, and various other branches, flow paths, and orders of blocks that achieve the same or similar results may be implemented. Moreover, not all illustrated blocks may be required to implement the methods described herein. Furthermore, the method according to an embodiment of the present invention may be implemented in the form of a computer program for performing a series of processes, and the computer program may be recorded on a computer-readable recording medium.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

FIG. 1 is a configuration diagram of an interactive tagging device 1 based on media content according to an embodiment of the present invention.

The interactive tagging device 1 based on media content according to an embodiment of the present invention, when tagging user content with respect to an object included in media content, displays not only the media content but also the provider and the recipient of the user content tagged to the media content. Accordingly, a user viewing the media content can readily identify who the provider and the recipient of the tagged user content are.

The interactive tagging device 1 based on media content is any electronic device capable of displaying and executing images and moving pictures on itself or on another device, and includes a server, a smart phone, a tablet PC, an electronic book reader, a notebook computer, and a digital album. For example, the interactive tagging device 1 may include a communication module (not shown) enabling communication via a wireless network.

Referring to FIG. 1, an interactive tagging device 1 based on a media content may include a data storage unit 10 and a media tagging unit 20. In another embodiment, the interactive tagging device 1 based on media content may further include a content executing unit 30 or a screen configuring unit 40.

Fig. 2 is a configuration diagram of the screen configuration unit 40. Referring to FIG. 2, the screen configuration unit 40 may include a media content display unit 41 and a tagging information display unit 42.

The data storage unit 10 is any storage device capable of storing media content. The media content may be a photograph or a moving image displaying a plurality of objects, and an object may be a person or a thing. In a preferred embodiment, the media content is a family photo, and some of the plurality of objects may be family members. For example, the family members may include the user himself or herself, a son, a daughter, a daughter-in-law, a grandchild, and the like.

The media tagging unit 20 may tag the first user content to the media content based on user input. The first user content may be content received from a user terminal or a link to content stored on another server. For example, the first user content may be audio or video data input from a user terminal, or a URL link to music or video data stored on another server.

The user terminal may be the same device as the device 1 or may be an independent device capable of communicating with the device 1.

The media tagging unit 20 may set the first object in the media content as a user content provider and the second object in the media content as a user content receiver. The media tagging unit 20 can receive a command from a user terminal to set the user content provider and receiver, and can display the provider and recipient visually through the screen configuration unit 40.
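
The patent discloses no source code; as a minimal sketch of the data model such a media tagging unit implies, the following Python fragment records a tag's provider objects, receiver objects, content, and creation time. All names and structures here (UserContent, Tag, tag_user_content, and so on) are the editor's assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Union

@dataclass
class UserContent:
    kind: str                   # "voice", "video", "text", or "image"
    payload: Union[bytes, str]  # raw data from a user terminal, or a URL link

@dataclass
class Tag:
    provider_ids: List[int]     # first object(s): provider(s) of the user content
    receiver_ids: List[int]     # second object(s): receiver(s) of the user content
    content: UserContent
    created_at: datetime = field(default_factory=datetime.now)

@dataclass
class MediaContent:
    media_id: int
    object_ids: List[int]       # objects (e.g., family members) shown in the media
    tags: List[Tag] = field(default_factory=list)

def tag_user_content(media: MediaContent, providers: List[int],
                     receivers: List[int], content: UserContent) -> Tag:
    """Attach user content to the media, recording its provider and receiver objects."""
    if not (set(providers) | set(receivers)) <= set(media.object_ids):
        raise ValueError("providers and receivers must be objects in the media content")
    tag = Tag(provider_ids=providers, receiver_ids=receivers, content=content)
    media.tags.append(tag)
    return tag
```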

The content executing section 30 can execute the user content tagged to the media content. That is, it can play back a voice if the user content is audio, and play back the video if the user content is a video. If the user content is text or an image, it can be displayed. For example, in one embodiment, the content executing section 30 may include a voice-to-text conversion section, so that user content in voice format may be displayed as text along with the playback of the voice.
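
Continuing the sketch above, the dispatch logic of such a content executing section might look like the following; the playback and voice-to-text calls are stubs standing in for whatever audio and transcription back end the device would actually use.

```python
def play_audio(data):    print("[playing voice message]")    # stub audio back end
def play_video(data):    print("[playing video message]")    # stub video back end
def display_media(data): print(f"[displaying] {data!r}")     # stub display back end
def speech_to_text(data) -> str:                             # stub voice-to-text unit
    return "(transcript unavailable in this sketch)"

def execute_user_content(content: UserContent) -> None:
    """Play voice or video; show text or images; optionally transcribe voice."""
    if content.kind == "voice":
        play_audio(content.payload)
        display_media(speech_to_text(content.payload))  # voice also shown as text
    elif content.kind == "video":
        play_video(content.payload)
    elif content.kind in ("text", "image"):
        display_media(content.payload)
    else:
        raise ValueError(f"unsupported content type: {content.kind}")
```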

The screen configuration unit 40 may configure the user terminal screen with the tagging information related to the tagging of the media content and the user content. For example, the screen configuration unit 40 may display the media content and the tagging information through a web browser.

In one embodiment, the device 1 may further comprise a user authentication unit (not shown) for performing user authentication for granting tagging authority for media content. According to the authentication, the device 1 can confirm the relationship between the object included in the media content and the authenticated user.

After the user authentication, the media content display unit may display the first object so as to be distinguished from other objects in the media content. In one example, the media content display unit may display only the first object in the media content. The user content may then be received based on user input to the first object. The first object may be the object corresponding to the authenticated user. The user terminal may include a smart phone, a tablet PC, an electronic book reader, a notebook computer, a digital camera, and a digital album.
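
One plausible (assumed, not disclosed) way to tie authentication to the first object is a stored mapping from login identity to an object in each media content, which the display unit then highlights. This continues the earlier sketch and its types.

```python
# Hypothetical mapping from authenticated login id to the object representing
# that user; in practice this would live in the data storage unit.
USER_OBJECT_MAP = {"jack": 103, "john": 101}

def objects_to_highlight(media: MediaContent, login_id: str) -> list:
    """After authentication, return the object(s) to display distinctly;
    only the object related to the authenticated user may act as provider."""
    obj = USER_OBJECT_MAP.get(login_id)
    return [obj] if obj in media.object_ids else []
```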

In addition, the tagging information may include the recipient and the provider of the user content to be tagged, metadata of the user content, and the like. The metadata may include the type, time, and size of the user content.

The first object and the second object may include one or more objects and may include objects that overlap with each other.

FIG. 3 is a diagram illustrating media content displayed through an interactive tagging device based on media content according to an exemplary embodiment of the present invention. In FIG. 3, an image is shown as an example of the media content. In another embodiment, the media content may be a moving image; if the media content is a video, the user content can be tagged based on the objects displayed in each frame.

Referring to FIG. 3, the type of the media content 100 is an image, and the objects in the media content are family members. That is, objects 101 to 106 representing family members are displayed in the media content.

The user can tag the user content to the media content 100 by selecting each object through the user terminal and inputting the user content. Further, in tagging, the user can set the provider and the recipient of the user content. The order of object selection, provider-recipient settings, and user content entry may be changed arbitrarily.

In order to facilitate selection of each object, the interactive tagging device 1 based on the media content may further include a face recognition unit (not shown). The face recognition unit may analyze the media content to recognize the faces of the objects and may display face recognition regions 101a, 103a, and 105a separately, as shown in FIG. 3. The user can distinguish the objects using the face-recognized media content, which facilitates tagging. In FIG. 3, the face recognition regions are displayed only for some objects (101, 103, and 105) for convenience of explanation.
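
The patent does not specify the face recognition algorithm. As one common off-the-shelf possibility, a Haar cascade detector from OpenCV could supply the selectable face regions (101a, 103a, 105a); this is purely an illustrative assumption, not the patented method.

```python
import cv2  # pip install opencv-python

def detect_face_regions(image_path: str):
    """Return (x, y, w, h) boxes usable as selectable face regions."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(int(v) for v in box) for box in faces]
```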

FIGS. 4A and 4B are diagrams illustrating options for user content tagging. Referring to FIGS. 4A and 4B, when a user selects one or more objects in the media content through a user terminal (e.g., by screen touch or mouse selection), the media tagging unit 20 displays a provider/recipient option 200 that determines whether the corresponding objects are the provider 201 or the recipient 202 of user content. In addition, the media tagging unit 20 may provide a tagging content type option 300 that determines whether to tag a voice 301 or an image 302, as shown in FIG. 4B. The order of the options 200 and 300 may be changed, and the options 200 and 300 may be displayed in pop-up windows overlapping the media content 100.

For example, after the user selects the object 103 in the media content 100, the provider 201 may be selected from the provider/recipient option 200 and the voice 301 may be selected as the tagging content type. The user can then select the object 101 and choose the recipient 202 from the provider/recipient option 200. Finally, the user can input the user content.

User input for the content providers, recipients and user content described above may be performed after user authentication.

FIG. 5 is an exemplary diagram showing a state of inputting a voice as user content.

Referring to FIG. 5, the object 103 selected as the user content provider and the object 101 selected as the user content receiver appear in the user content input window 400, together with the content delivery direction f1. The user content input window 400 may display a timeline tb1 and a time t1 of the input user content. After inputting the user content, the user can save it using the save button 401.

Accordingly, the user content, which is a voice message input by the user, can be tagged in the media content 100 and displayed together with the media content with the object 103 as a provider and the object 101 as a receiver.

When the tagging is completed in this way, a third party or the user can execute the tagged user content in the media content and confirm its provider and recipient together.

The interactive tagging device 1 based on the media content can tag the user content through the additional options 200, 300, and 400 while the media content is displayed, as described above. In another embodiment, however, tagging may be performed through a dedicated interface, as described below.

FIGS. 6 to 10 illustrate an interactive tagging interface based on media content according to an embodiment of the present invention.

Referring to FIG. 6, the interactive tagging interface based on media content includes a media content display unit 41 for displaying media content, a tagging information display unit 42 for displaying information related to tagging, media content mode changing units 43a and 43b, a user information display unit 44, a tagging addition unit 45, and content provider/receiver setting units 46a and 46b.

The media content display unit 41 may display the media content 100. According to the media content mode changing units 43a and 43b, the media content display unit 41 may display the original image in the original mode 43a, or display the image with the provider/recipient indications overlaid in the mode 43b.

The tagging information display unit 42 may include tagging information 421 tagged with respect to the media content 100. The tagging information display unit 42 may include a plurality of pieces of tagging information, such as 421 and 422.

The tagging information 421 may include at least one of the content provider 103, the content receiver 101, the content delivery direction f1, tagging-related meta information 4211, and a user content execution button 4212. In response to an input on the user content execution button 4212, the content execution unit 30 may execute the corresponding user content.

The user information display unit 44 displays information identifying the user who has accessed the media content for tagging the user content. For example, the user information display unit 44 may indicate the login ID or name of the user.

For example, suppose the media content 100 is a family picture of Jack, in which Jack is displayed as the object 103 and Jack's father John is displayed as the object 101. When Jack 103 wants to leave a voice message for John 101, he can tag the voice message by connecting to the interface, setting Jack 103 and John 101 as the user content provider and receiver respectively, and recording the voice.

To this end, Jack undergoes user authentication, and according to the authentication, the device 1 can confirm that the authenticated user is Jack, corresponding to the object 103 in the media content. Accordingly, the device 1 can display the object 103 distinctly from the other objects, or display only the object 103 in the media content.

Jack can then select the distinctly or individually displayed object 103, input his own user content, and specify the recipient of the content (the object 101 in FIG. 3).

The tagging addition unit 45 may perform a function for adding tagging to one media content. That is, referring to FIG. 7, in addition to the tagging information 421, the tagging information 431 is further input.

The tagging information 421 indicates that the object 103 has tagged user content for the object 101, and the tagging information 431 indicates that the object 105 and the object 106 have together tagged user content for the object 101.

The content provider setting unit 46a and the content receiver setting unit 46b are similar to the provider/recipient option 200 of FIG. 4A and can be used to set the provider and the recipient of content to be tagged.

In order to display the tagging of the user content to the media content 100, the media content display unit 41 may display the objects related to the tagging of the user content so as to be distinguished from other objects. For example, as shown in FIG. 6, a square box P1 may be displayed for the object 103, which is the user content provider, and a square box R1 of a different shape or color may be displayed for the object 101, which is the recipient. These identification marks are exemplary, and a variety of display methods may be used.

For example, the content provider and the receiver may be indicated by arrows f11 and f21. Also, the identification marks P1, R1, f11, and f21 may be displayed only while the corresponding user content is being executed.

In FIGS. 6 and 7, arrows 424 and 434 are shown in the tagging information 421 and 431, but the present invention is not limited thereto; the content provider and the recipient may also be displayed through an indicator (e.g., 110b) in the media content 100 itself.

In the present invention, there may be a plurality of user content providers and a plurality of user content receivers. Referring to the tagging information 422 of FIG. 8, the user content providers are two objects (105, 106). Referring to the tagging information 422 of FIG. 9, the user content providers are two objects (105, 106) and the content recipients are two objects (101, 102). Although not shown, a single user content provider may also provide user content for a plurality of objects.

Assume that the media content 100 of FIGS. 6 to 10 is John's family photo. Jack 103, his son, leaves a voice message for John 101, the eldest member of the family displayed in the media content 100, and in addition, John's grandchildren 105 and 106 leave a voice message for John 101.

Referring to FIG. 10, the media content display unit 41 may display a relationship between at least one object in the media content and the remaining objects by overlapping the remaining objects.

For example, when the relationships between the objects of the media content are displayed centered on the object 101, relationship information 120 such as son, daughter-in-law, wife, grandson, and granddaughter can be displayed. The relationship information 120 may include the name or current age of the target object. For example, "grandchild, David, five years old" may be displayed as overlapping relationship information 120 for the object 105.
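
As an illustrative sketch of the relationship information 120 (its storage format is not disclosed in the patent), relationships could be kept relative to a center object and rendered as overlay labels. The single data row below reuses the David example from the description; everything else is hypothetical.

```python
# (center object, target object) -> (relationship, name, age); values beyond
# the description's "grandchild, David, five years old" example are omitted.
RELATIONS = {
    (101, 105): ("grandchild", "David", 5),
}

def relation_label(center_id: int, target_id: int) -> str:
    """Label overlaid on the target object when centered on center_id."""
    rel, name, age = RELATIONS.get((center_id, target_id), ("unknown", "?", 0))
    return f"{rel}, {name}, {age} years old"  # e.g. "grandchild, David, 5 years old"
```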

Also, in one embodiment, the data storage unit 10 may store one or more of contacts, memos, and albums for each object in the media content. The content executing section 30 may execute one or more of a phone call or text transmission, memo viewing or editing, or album viewing or editing in response to a user input for each object.

Referring to FIG. 10, when the object 104 is selected, the content executing unit 30 can execute the additional function window 500 for the object 104. The additional function window 500 may provide a function for at least one of the contact 510, the note 520, and the album view 530. Accordingly, the user can accumulate or communicate information on objects (other users) included in the media content based on the media content. For communication such as telephone or text, the device 1 may comprise a microphone and a speaker.
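
A minimal sketch of how the additional function window 500 could dispatch its three options (contact 510, memo 520, album view 530) follows; the handlers are placeholders for the device's real telephony and storage calls, not a disclosed implementation.

```python
def open_additional_function(object_id: int, choice: str) -> None:
    """Dispatch a per-object option from the additional function window."""
    handlers = {
        "contact": lambda o: print(f"[calling or texting object {o}]"),
        "memo":    lambda o: print(f"[viewing or editing memo of object {o}]"),
        "album":   lambda o: print(f"[viewing or editing album of object {o}]"),
    }
    handlers.get(choice, lambda o: print("[unknown option]"))(object_id)
```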

Accordingly, when the user of the media content is an elderly person, there is an advantage that a message can be stored and confirmed based on a family photograph (media content), and furthermore, a user can be contacted.

A method of interactive tagging based on media content according to an exemplary embodiment of the present invention includes displaying media content, and tagging a first user content with respect to a first object and a second object in the media content, wherein the first object is a provider of the first user content and the second object is a receiver of the first user content.

Tagging the first user content may comprise authenticating a user, setting, by the authenticated user, the first object as a user content provider, setting the second object as a user content receiver, and receiving the first user content from a user terminal.

The first object is an object related to the authenticated user; after user authentication, the first object may be displayed so as to be distinguished from other objects in the media content.

Here, each of the first object and the second object may be composed of one or more objects, and the first object and the second object may be persons or objects. In a preferred embodiment, the objects may be family members.

The media content may be a photograph or a moving picture, and the first user content may be a moving picture or a voice, but the present invention is not limited thereto.

According to another embodiment of the present invention, an interactive tagging method based on media content may further include executing the tagged first user content based on a user input. During the execution of the first user content, the provider and the recipient of the first user content may be displayed together. Accordingly, a user viewing the media content can confirm who the provider and the receiver of the executed user content are.

The method may further include displaying the provider, the recipient, and the meta information of the first user content.

The step of displaying the media content may include displaying the relationship between at least one object among the objects in the media content and the remaining objects, overlapped on the remaining objects. Specifically, each relationship may be a family relationship, a title, or a nickname.

The step of executing the first user content may include reproducing the voice of the first user content or displaying the content of the voice as text.

The method may further include tagging a second user content for a third object and a fourth object in the media content after the first user content is tagged. That is, a plurality of interactive user contents can be tagged to one media content. The third object may be the provider of the second user content, and the fourth object may be the receiver of the second user content.
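
Tying the earlier sketches together: replaying all tags on one media content in chronological order, announcing each tag's provider and receiver, yields the conversational playback the description mentions. This builds on the hypothetical types and functions defined above.

```python
def replay_conversation(media: MediaContent) -> None:
    """Execute every tag in chronological order, showing provider -> receiver."""
    for tag in sorted(media.tags, key=lambda t: t.created_at):
        print(f"objects {tag.provider_ids} -> objects {tag.receiver_ids}:")
        execute_user_content(tag.content)

# Usage example with the earlier sketches:
photo = MediaContent(media_id=100, object_ids=[101, 102, 103, 104, 105, 106])
tag_user_content(photo, providers=[103], receivers=[101],
                 content=UserContent(kind="voice", payload=b"..."))
replay_conversation(photo)
```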

A computer-readable recording medium according to an exemplary embodiment of the present invention may store a program for executing the above-described media-content-based interactive tagging method.

While the invention has been shown and described with reference to certain embodiments thereof, it will be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention, and such modifications are within the technical scope of the present invention. Accordingly, the true scope of the present invention should be determined by the technical idea of the appended claims.

Claims (23)

A method for interactive tagging based on media content, the method comprising:
Displaying media content; And
Tagging a first user content for a first object and a second object in the media content based on user input;
Tagging a second user content for a third object and a fourth object in the media content based on user input;
Executing the tagged first user content or the second user content based on user input,
Wherein the first object is a provider of the first user content, the second object is a receiver of the first user content, the third object is a provider of the second user content, and the fourth object is a receiver of the second user content,
While the user content is being executed, the provider of the currently executed user content, the receiver, and the providing direction of the user content are displayed together in the media content,
Wherein at least three of the first to fourth objects are different objects.
The method according to claim 1,
Wherein tagging the first user content comprises:
Authenticating the user;
Setting, by the authenticated user, the first object as a user content provider;
Setting the second object as a user content receiver; And
And receiving the first user content from the user terminal.
The method according to claim 2,
Wherein the first object is an object associated with the authenticated user,
Wherein after the user authentication, the first object is displayed to be distinguished from other objects in the media content.
The method according to claim 1,
Wherein each of the first to fourth objects comprises one or more objects.
The method according to claim 1,
Wherein the first object to the fourth object includes a person.
The method according to claim 1,
Wherein the media content is a photo or video,
Wherein the first or second user content is video or audio.
delete
The method according to claim 1,
Further comprising the step of displaying a provider, a recipient, and meta information of the first or second user content.
The method according to claim 1,
Wherein the step of displaying the media content comprises:
And displaying the relationship between at least one of the objects in the media content and the remaining objects, overlapped on the remaining objects.
The method according to claim 1,
Wherein executing the first or second user content comprises:
Reproducing the voice of the first or second user content, or displaying the content of the voice as text.
A recording medium on which a program for executing the method according to any one of claims 1 to 6 and 8 to 10 is stored.
A media content display unit for displaying media content;
A media tagging unit for tagging the first user content and the second user content to the media content based on user input; And
And a content execution unit for executing the first user content or the second user content in response to a user input to the tagging information display unit,
Wherein the media tagging unit sets a first object and a third object in the media content as providers of the first user content and the second user content, respectively, and sets a second object and a fourth object in the media content as receivers of the first user content and the second user content, respectively,
Wherein the provider of the currently executed user content, the receiver, and the providing direction of the user content are displayed together in the media content while the user content is being executed, and at least three of the first to fourth objects are different objects, the apparatus being an interactive tagging device based on media content.
The device according to claim 12,
Wherein the first object to the fourth object are composed of one or more objects.
The device according to claim 12,
Wherein the first object to the fourth object include a person.
The device according to claim 12,
Wherein the media content is a photo or video,
Wherein the first or second user content is video or audio.
The device according to claim 12,
Further comprising a tagging information display unit for displaying information related to tagging of the first or second user contents.
delete
delete
The device according to claim 12,
The media content display unit includes:
Wherein the relationship between at least one of the objects in the media content and the remaining objects is displayed overlapped on the remaining objects.
The device according to claim 12,
The tagging information display unit displays,
And displays the provider, recipient, and meta information of the first or second user content.
The device according to claim 12,
Further comprising a data storage unit for storing at least one of a contact, a note, and an album for each object in the media content,
Wherein the content execution unit provides the user with an execution option for one or more of a phone call or text transmission, memo viewing or editing, or album viewing or editing, in response to a user input for each of the objects.
The device according to claim 12,
Further comprising a user authentication unit for performing user authentication for granting a tagging right for the media content,
Wherein the media content display unit displays the first object to be distinguished from other objects in the media content after user authentication,
Wherein the first object is an object related to the authenticated user.
The device according to claim 12,
The content execution unit,
Wherein the content execution unit reproduces the voice of the first or second user content or displays the content of the voice as text.
KR1020150113238A 2015-08-11 2015-08-11 Device for conversational tagging based on media content and method thereof KR101758824B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150113238A KR101758824B1 (en) 2015-08-11 2015-08-11 Device for conversational tagging based on media content and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150113238A KR101758824B1 (en) 2015-08-11 2015-08-11 Device for conversational tagging based on media content and method thereof

Publications (2)

Publication Number Publication Date
KR20170019180A KR20170019180A (en) 2017-02-21
KR101758824B1 (en) 2017-07-18

Family

ID=58313701

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150113238A KR101758824B1 (en) 2015-08-11 2015-08-11 Device for conversational tagging based on media content and method thereof

Country Status (1)

Country Link
KR (1) KR101758824B1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101419010B1 (en) * 2007-07-19 2014-07-15 삼성전자주식회사 Apparatus and method for providing phonebook using image in portable terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9521175B2 (en) 2011-10-07 2016-12-13 Henk B. Rogers Media tagging
US20130346068A1 (en) 2012-06-25 2013-12-26 Apple Inc. Voice-Based Image Tagging and Searching

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101419010B1 (en) * 2007-07-19 2014-07-15 삼성전자주식회사 Apparatus and method for providing phonebook using image in portable terminal

Also Published As

Publication number Publication date
KR20170019180A (en) 2017-02-21

Similar Documents

Publication Publication Date Title
US9619713B2 (en) Techniques for grouping images
KR20140143725A (en) Image correlation method and electronic device therof
US10860862B2 (en) Systems and methods for providing playback of selected video segments
US20180341705A1 (en) Portable electronic device and method for controlling the same
US8209614B2 (en) Graphical user interface, display control device, display method, and program
US20140040712A1 (en) System for creating stories using images, and methods and interfaces associated therewith
JP5870742B2 (en) Information processing apparatus, system, and information processing method
KR101774914B1 (en) Systems and methods for multiple photo feed stories
US20130031101A1 (en) Method for determining communicative value
US20130339440A1 (en) Creating, sharing and discovering digital memories
CN103813126A (en) Method of providing information-of-users' interest when video call is made, and electronic apparatus thereof
WO2012169112A1 (en) Content processing device, content processing method, program, and integrated circuit
US10972254B2 (en) Blockchain content reconstitution facilitation systems and methods
US8943020B2 (en) Techniques for intelligent media show across multiple devices
US20110305437A1 (en) Electronic apparatus and indexing control method
US11348587B2 (en) Review system for online communication, method, and computer program
WO2019201197A1 (en) Image desensitization method, electronic device and storage medium
JP2012507812A (en) Method and apparatus for optimizing an image displayed on a screen
CN102750966A (en) Reproduction apparatus and filmmaking system
JP2011101251A (en) Electronic apparatus and image display method
JP2013171599A (en) Display control device and display control method
CN105204718B (en) Information processing method and electronic equipment
KR101758824B1 (en) Device for conversational tagging based on media content and method thereof
CN110381356A (en) Audio-video generation method, device, electronic equipment and readable medium
US20130185658A1 (en) Portable Electronic Device, Content Publishing Method, And Prompting Method

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E90F Notification of reason for final refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant