WO2012070429A1 - Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program - Google Patents
Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program
- Publication number
- WO2012070429A1 (PCT/JP2011/076292)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- expression word
- sensitivity
- information
- processing device
- sentiment
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7847—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
- G06F16/786—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using motion, e.g. object motion or camera motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
- G06F16/784—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Definitions
- The present invention relates to a feeling-expressing-word processing device, a feeling-expressing-word processing method, and a feeling-expressing-word processing program.
- The present invention has been made to solve the problems described above, and its object is to provide a feeling-expressing-word processing device, a feeling-expressing-word processing method, and a feeling-expressing-word processing program that make it possible to picture the atmosphere of the scene and the impression of the objects at the time of shooting as if one were at the shooting location.
- The feeling-expressing-word processing device of the present invention comprises: a feeling-information calculation unit that analyzes a captured image and calculates feeling information indicating temporal changes of the scene and motions of objects represented in the image; and a feeling-expressing-word extraction unit that extracts, from feeling-expressing words that express feelings and are stored in advance in association with the feeling information, the feeling-expressing word corresponding to the feeling information calculated by the feeling-information calculation unit.
- The feeling-expressing-word processing method of the present invention comprises: a feeling-information calculation step of analyzing a captured image and calculating feeling information indicating temporal changes of the scene and motions of objects represented in the image; and a feeling-expressing-word extraction step of extracting, from feeling-expressing words stored in advance in association with the feeling information, the feeling-expressing word corresponding to the calculated feeling information.
- The feeling-expressing-word processing program of the present invention causes a computer to execute each step included in the above feeling-expressing-word processing method.
- The feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program according to the present invention analyze an input captured image to calculate feeling information and, based on that information, extract and output feeling-expressing words corresponding to the situation of the scene, the state of objects, temporal changes of the scene, and motions of objects represented in the captured image.
- The captured image may be a single frame image (hereinafter "still image") or a group of frame images constituting a video signal (hereinafter "moving image").
- The feeling information is information indicating the situation of the scene, the state of objects, temporal changes of the scene, and motions of objects represented in the captured image.
- Information indicating the situation of the scene and the state of objects includes, for example, the number of human faces, the inclination of faces, the degree of smiling, and the number of extended fingers.
- Information indicating temporal changes of the scene and motions of objects includes, for example, the number of regions with large motion (each such region hereinafter a "moving object"), the amount of movement of a moving object, transition information of a moving object, and changes in the luminance of the image caused by lights being turned on or off (hereinafter "luminance change").
- Feeling-expressing words are words, such as onomatopoeic and mimetic words, that express feelings evoked by visual conditions such as the situation of the scene, the state of objects, temporal changes of the scene, and motions of objects.
- An example of such an onomatopoeic word is "Waiwai", which conveys the lively atmosphere of a scene.
- An example of a mimetic word is "Noronoro", which expresses the slow movement of a moving object.
- The feeling-expressing-word processing device of the first embodiment receives a still image signal as its input, calculates the number of human faces in the still image, their inclinations, and their degrees of smiling as feeling information, and extracts and outputs the feeling-expressing words corresponding to this information.
- Physically, the feeling-expressing-word processing device 1 comprises, for example, a CPU (Central Processing Unit), a storage device, and an input/output interface.
- The storage device includes, for example, ROM (Read Only Memory) and an HDD (Hard Disk Drive) that store the programs and data processed by the CPU, and RAM (Random Access Memory) used mainly as a work area for control processing. These elements are connected to one another via a bus.
- The CPU executes the programs stored in the ROM, processing the signals received via the input/output interface and the data loaded into the RAM, and thereby realizes the functions of each unit of the feeling-expressing-word processing device 1 described below.
- Functionally, the feeling-expressing-word processing device comprises a feeling-information calculation unit 11 and a feeling-expressing-word extraction unit 12.
- In the first embodiment, the feeling-information calculation unit 11 includes a face detection unit 111.
- The face detection unit 111 analyzes the input still image to detect faces, and calculates the feeling information by computing the number of faces, their inclinations, and their degrees of smiling.
- As a method for calculating the number of faces and their inclinations, for example, the technique described in Japanese Patent Application Laid-Open No. 2007-233517 can be used.
- As a technique for calculating the degree of smiling, for example, the technique described in Japanese Patent Application Laid-Open No. 2009-141516 can be used.
- The feeling-expressing-word extraction unit 12 extracts the feeling-expressing word corresponding to the feeling information calculated by the feeling-information calculation unit 11 from the feeling-expressing-word database 21, and outputs the extracted word.
- As the output data format of a feeling-expressing word, for example, text data, still-image metadata such as Exif (Exchangeable Image File Format), tag information for video search, or audio/acoustic data pre-associated with the feeling-expressing word can be used.
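- As an illustration of the metadata output path, the minimal sketch below writes an extracted word into a JPEG's Exif ImageDescription tag. It assumes the third-party piexif library and a hypothetical file name; the patent does not prescribe any particular library or tag.

```python
import piexif

def write_word_to_exif(jpeg_path: str, word: str) -> None:
    """Store a feeling-expressing word in the Exif ImageDescription tag."""
    exif_dict = piexif.load(jpeg_path)
    # Exif text fields hold raw bytes; ASCII is the safe lowest common denominator.
    exif_dict["0th"][piexif.ImageIFD.ImageDescription] = word.encode("ascii", "replace")
    piexif.insert(piexif.dump(exif_dict), jpeg_path)

write_word_to_exif("photo.jpg", "Waiwai")
```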
- The feeling-expressing-word database 21 holds one or more tables indicating the correspondence between feeling information and feeling-expressing words.
- The correspondence between feeling information and feeling-expressing words may be one-to-one, one-to-many, many-to-one, or many-to-many.
- When a plurality of feeling-expressing words are associated with one piece of feeling information, a word may be selected at random, according to a predetermined order, or according to some other criterion.
- In the first embodiment, the feeling-expressing-word database 21 has a face-count table, a face-inclination table, and a smile-degree table.
- The face-count table has, as data items, for example, a face-count item and a feeling-expressing-word item.
- The face-count item stores the number of faces detected by the face detection unit 111.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to that number of faces.
- As the feeling-expressing words of the face-count table, words that express greater liveliness of the scene as the number of faces increases are used.
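- A minimal sketch of such a lookup table follows. The thresholds and words are taken from the operation example described later (two or more faces, one face, no faces); the data structure itself is an assumption, since the patent leaves it unspecified.

```python
# Face-count table: lower bound on the face count -> feeling-expressing word.
# Entries are checked from the largest lower bound down.
FACE_COUNT_TABLE = [
    (2, "Waiwai"),  # two or more faces: lively scene
    (1, "Niko"),    # exactly one face: a single smile
    (0, "Shiin"),   # no faces: silence
]

def lookup_face_word(num_faces: int) -> str:
    for lower_bound, word in FACE_COUNT_TABLE:
        if num_faces >= lower_bound:
            return word
    return ""
```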
- The face-inclination table has, as data items, for example, a face-1 inclination item, a face-2 inclination item, and a feeling-expressing-word item.
- The inclination items store the face inclinations detected by the face detection unit 111.
- A face inclination is represented by a value from -90 to 90 degrees, where 0 degrees is an upright, front-facing face and clockwise rotation from that state is positive. Accordingly, when two adjacent faces lean toward each other, one inclination takes a positive value and the other a negative value.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to the inclinations of face 1 and face 2.
- As the feeling-expressing words of the face-inclination table, words that express friendliness more deeply as the two adjacent faces lean further toward each other are used.
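- As a small illustration of the sign convention just described, the check below flags two adjacent faces as leaning toward each other. It assumes face 1 is the left face of the pair, which the patent does not state explicitly.

```python
def leaning_toward_each_other(tilt_left: float, tilt_right: float) -> bool:
    """True when the left face tilts clockwise (positive) and the right
    face tilts counter-clockwise (negative), i.e. heads leaning together."""
    return tilt_left > 0 and tilt_right < 0

def mutual_lean_strength(tilt_left: float, tilt_right: float) -> float:
    """0.0..1.0 score; a deeper mutual lean selects a stronger friendliness word."""
    if not leaning_toward_each_other(tilt_left, tilt_right):
        return 0.0
    return min(tilt_left, -tilt_right) / 90.0
```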
- The smile-degree table has, as data items, for example, a smile-degree item and a feeling-expressing-word item.
- The smile-degree item stores a range of smile degrees detected by the face detection unit 111.
- The degree of smiling is expressed as a value normalized to the range 0.0 to 1.0.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to the degree of smiling.
- As the feeling-expressing words of the smile-degree table, words that express greater joy and fun as the degree of smiling increases, and greater anger or sadness as it decreases, are used.
- The number of faces, the inclination of faces, and the degree of smiling may also be expressed using values other than those described above.
- As the feeling information, any one of the number of faces, the inclination of faces, and the degree of smiling may be used alone, or several may be used in combination.
- First, the face detection unit 111 of the feeling-information calculation unit 11 detects the faces represented in the still image (step S101).
- Next, the feeling-expressing-word extraction unit 12 determines whether the number of faces detected in step S101 is two or more (step S102). If so (step S102; YES), it extracts and outputs the feeling-expressing word "Waiwai" stored in the face-count table of the feeling-expressing-word database 21, shown in the figure, for a face count of "2 or more" (step S103), and this operation ends.
- If it is determined in step S102 that the number of faces is not two or more (step S102; NO), the extraction unit determines whether the number of faces is one (step S104). If so (step S104; YES), it extracts and outputs the feeling-expressing word "Niko" stored in the face-count table for a face count of "1" (step S105), and this operation ends.
- If it is determined in step S104 that the number of faces is not one (step S104; NO), the extraction unit extracts and outputs the feeling-expressing word "Shiin" stored in the face-count table for a face count of "0" (step S106), and this operation ends. The branch logic is sketched below.
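- A compact sketch of this flow, reusing the illustrative face-count table above; the same pattern applies unchanged to the finger-count table of the second embodiment.

```python
def extract_face_count_word(num_faces: int) -> str:
    # Steps S102-S106: test the branches in the order of the flowchart.
    if num_faces >= 2:
        return "Waiwai"  # S103: two or more faces, lively scene
    if num_faces == 1:
        return "Niko"    # S105: a single smiling face
    return "Shiin"       # S106: nobody there, silence
```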
- As described above, the feeling-expressing-word processing device 1 of the first embodiment can extract and output feeling-expressing words corresponding to the number of faces in a still image, their inclinations, and their degrees of smiling. This clarifies and emphasizes the situation of the scene and the state of objects at the moment the still image was taken, making it possible to picture the atmosphere and the impression of the objects as if one were at the shooting location.
- The feeling-expressing-word processing device of the second embodiment receives a still image signal as its input, calculates the number of extended fingers in the still image as feeling information, and extracts and outputs the feeling-expressing word corresponding to this information.
- Like the device of the first embodiment (see FIG. 1), the feeling-expressing-word processing device 1 of the second embodiment has a feeling-information calculation unit 11 and a feeling-expressing-word extraction unit 12.
- It differs from the device of the first embodiment in that the feeling-information calculation unit 11 includes a face detection unit 112 and a finger detection unit 113.
- The face detection unit 112 detects the faces represented in the still image, in the same way as the face detection unit 111 of the first embodiment.
- In addition, the face detection unit 112 calculates, as face information, the center coordinates of each detected face region and the width and height of that region.
- The finger detection unit 113 detects the hands shown in the still image and calculates the number of extended fingers, thereby calculating the feeling information.
- Specifically, the finger detection unit 113 identifies hand-region candidates using the face information calculated by the face detection unit 112, and detects the number of extended fingers from the identified hand region.
- As a method for identifying a hand-region candidate, for example, the skin-colored region with the largest area near the face region can be taken as the candidate (a code sketch follows below). Alternatively, the method described in JP-A-2003-346162, or any other method, may be used.
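- The following OpenCV sketch illustrates the skin-color heuristic just described. The HSV skin range and the size of the search window around the face are assumptions for illustration; the patent fixes neither.

```python
import cv2
import numpy as np

def find_hand_candidate(image_bgr, face_cx, face_cy, face_w, face_h):
    """Return the bounding box (x, y, w, h) of the largest skin-colored
    blob near the face region, or None if no candidate is found."""
    cx, cy, fw, fh = int(face_cx), int(face_cy), int(face_w), int(face_h)
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # assumed skin range

    # Search only a window around the face (assumed: 3x the face size).
    mask = np.zeros_like(skin)
    h, w = skin.shape
    x0, x1 = max(0, cx - 3 * fw // 2), min(w, cx + 3 * fw // 2)
    y0, y1 = max(0, cy - 3 * fh // 2), min(h, cy + 3 * fh // 2)
    mask[y0:y1, x0:x1] = skin[y0:y1, x0:x1]

    # Exclude the face itself, then keep the largest remaining blob.
    mask[max(0, cy - fh // 2):cy + fh // 2, max(0, cx - fw // 2):cx + fw // 2] = 0
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None
    i = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return tuple(int(v) for v in stats[i, :4])
```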
- The feeling-expressing-word database 21 of the second embodiment has a finger-count table.
- The data structure of the finger-count table is described with reference to the figure.
- The finger-count table has, as data items, for example, a finger-count item and a feeling-expressing-word item.
- The finger-count item stores the number of fingers detected by the finger detection unit 113.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to that number of fingers.
- First, the face detection unit 112 of the feeling-information calculation unit 11 detects the faces represented in the still image and calculates the face information (step S201).
- Next, the finger detection unit 113 of the feeling-information calculation unit 11 detects the extended fingers represented in the still image, using the face information calculated in step S201 (step S202).
- Next, the feeling-expressing-word extraction unit 12 determines whether the number of fingers detected in step S202 is zero (step S203). If so (step S203; YES), it extracts and outputs the feeling-expressing word "Guu" (a clenched fist), stored in the finger-count table of the feeling-expressing-word database 21, shown in the figure, for a finger count of "0" (step S204), and this operation ends.
- If it is determined in step S203 that the number of fingers is not zero (step S203; NO), the extraction unit determines whether the number of fingers is two (step S205). If so (step S205; YES), it extracts and outputs the feeling-expressing word "Peace" (a peace sign) stored in the finger-count table for a finger count of "2" (step S206), and this operation ends.
- If it is determined in step S205 that the number of fingers is not two (step S205; NO), the extraction unit determines whether the number of fingers is five (step S207). If so (step S207; YES), it extracts and outputs the feeling-expressing word "Paa" (an open hand) stored in the finger-count table for a finger count of "5" (step S208), and this operation ends.
- If it is determined in step S207 that the number of fingers is not five (step S207; NO), this operation ends without extracting a feeling-expressing word.
- As described above, the feeling-expressing-word processing device 1 of the second embodiment can extract and output a feeling-expressing word corresponding to the number of extended fingers in a still image. This clarifies and emphasizes a photographed person's finger gestures; in other words, the situation of the scene and the state of objects at the moment the still image was taken are clarified and emphasized, making it possible to picture the atmosphere and the impression of the objects as if one were at the shooting location.
- In the second embodiment, the number of fingers is used as the feeling information, but the present invention is not limited to this.
- The number of human faces, the inclination of faces, the degree of smiling, and the other feeling information of the first embodiment may also be used in combination.
- The feeling-expressing-word processing device of the third embodiment receives a moving image signal as its input, calculates the number of moving objects in the moving image, their amounts of movement, their transition information, and luminance changes as feeling information, and extracts and outputs the feeling-expressing words corresponding to this information.
- Like the device of the first embodiment (see FIG. 1), the feeling-expressing-word processing device 1 of the third embodiment has a feeling-information calculation unit 11 and a feeling-expressing-word extraction unit 12.
- It differs from the device of the first embodiment in that the feeling-information calculation unit 11 includes a moving-object detection unit 114. The following description focuses on the differences from the first embodiment.
- The moving-object detection unit 114 detects moving objects by analyzing the input moving image, and calculates the feeling information by computing the number of moving objects, their amounts of movement, their transition information, and the luminance change.
- Moving objects can be detected, for example, by computing the difference between the pixel values at the same coordinates in the current frame image and a past frame image, and taking each set of pixels whose difference exceeds a threshold as a moving object.
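- A minimal frame-differencing sketch of this detection step, which also returns the centroids needed for the movement amount described next; the grayscale conversion, threshold value, and minimum blob area are assumptions.

```python
import cv2
import numpy as np

def detect_moving_objects(prev_frame, curr_frame, diff_thresh=30, min_area=100):
    """Return a list of (centroid_x, centroid_y, area) for regions whose
    pixel-wise difference between consecutive frames exceeds the threshold."""
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(g1, g0)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    movers = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cx, cy = centroids[i]
            movers.append((float(cx), float(cy), int(stats[i, cv2.CC_STAT_AREA])))
    return movers
```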
- The amount of movement of a moving object is obtained, for example, by computing the difference between the centroid position of the moving object in the current frame image and the centroid position of the corresponding moving object in the past frame image (the one nearest the object's position in the current frame).
- The transition information of a moving object is obtained, for example, by determining the direction of the moving object's motion vector, encoding that direction, and tracking how the encoded value changes over time.
- For the encoding, for example, the direction encoding table shown in FIG. 10 can be used. In that case, when the moving object alternately repeats movement in the negative and positive directions along the horizontal axis, its transition information is calculated as "0101".
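- The sketch below illustrates one such encoding. FIG. 10 is not reproduced here, so the code assumes the mapping negative horizontal to "0", positive horizontal to "1", negative vertical to "2", positive vertical to "3", chosen to be consistent with the "0101" pacing example above and the "2323" hopping pattern in the transition table described later.

```python
def encode_direction(dx: float, dy: float) -> str:
    """Encode a motion vector by its dominant axis and sign (assumed table)."""
    if abs(dx) >= abs(dy):
        return "0" if dx < 0 else "1"  # horizontal: negative / positive
    return "2" if dy < 0 else "3"      # vertical: negative / positive

def transition_info(centroids):
    """Concatenate per-frame direction codes for one tracked moving object."""
    codes = []
    for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
        codes.append(encode_direction(x1 - x0, y1 - y0))
    return "".join(codes)

# A moving object pacing left and right yields "0101".
print(transition_info([(50, 80), (40, 80), (52, 80), (41, 80), (53, 80)]))
```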
- The luminance change is obtained, for example, by calculating the difference between the average luminance of the current frame image and that of a past frame image, or by calculating an encoded value of that average difference.
- The encoded value "a" of the average difference can be calculated by equations (1) to (3) below, where "d" is the average difference and "T" (> 0) is a threshold: a = 1 when d ≥ T, a = -1 when d ≤ -T, and a = 0 otherwise.
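- In code, the three-way encoding of equations (1) to (3) is simply a sign function with a dead zone:

```python
def encode_luminance_change(d: float, T: float) -> int:
    """Equations (1)-(3): +1 for a clearly brighter frame (lights turned on),
    -1 for a clearly darker frame (lights turned off), 0 otherwise."""
    if d >= T:
        return 1   # equation (2)
    if d <= -T:
        return -1  # equation (3)
    return 0       # equation (1)
```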
- The feeling-expressing-word database 21 of the third embodiment has a moving-object-count table, a movement-amount table, a transition-information table, and a luminance-change table.
- The moving-object-count table has, as data items, for example, a moving-object-count item and a feeling-expressing-word item.
- The moving-object-count item stores the number of moving objects detected by the moving-object detection unit 114.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to that number of moving objects.
- As the feeling-expressing words of the moving-object-count table, words that express greater bustle as the number of moving objects increases are used.
- The movement-amount table has, as data items, for example, a movement-amount item and a feeling-expressing-word item.
- The movement-amount item stores a range of movement amounts calculated by the moving-object detection unit 114.
- The movement amount is expressed as a value normalized to the range 0.0 to 1.0.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to the movement amount.
- As the feeling-expressing words of the movement-amount table, words that express faster movement as the movement amount increases are used.
- The transition-information table has, as data items, for example, a transition-information item and a feeling-expressing-word item.
- The transition-information item stores the transition information calculated by the moving-object detection unit 114.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to the transition information.
- As the feeling-expressing words of the transition-information table, words expressing a repetitive motion matching the periodicity recognized in the transition information are used.
- In the transition-information table shown in FIG. 13, when the transition information is "0101" or "1010", "Urouro" (pacing about) is extracted as the feeling-expressing word; when it is "0000" or "1111", "Sutasuta" (walking briskly) is extracted; and when it is "2323" or "3232", "Pyonpyon" (hopping) is extracted.
- The luminance-change table has, as data items, for example, a luminance-change item and a feeling-expressing-word item.
- The luminance-change item stores the luminance change calculated by the moving-object detection unit 114.
- The luminance change shown in FIG. 14 is represented by the encoded value calculated using equations (1) to (3) above.
- The feeling-expressing-word item stores the feeling-expressing word corresponding to the luminance change.
- The number of moving objects, the movement amounts, the transition information, and the luminance change may also be expressed using values other than those described above.
- As the feeling information, any one of the number of moving objects, the movement amount, the transition information, and the luminance change may be used alone, or several may be used in combination.
- The feeling information used in the third embodiment may also be combined with any of the feeling information used in the first and second embodiments.
- First, the moving-object detection unit 114 of the feeling-information calculation unit 11 detects the moving objects represented in the moving image and calculates their transition information (step S301).
- Next, the feeling-expressing-word extraction unit 12 determines whether the transition information calculated in step S301 is "0101" or "1010" (step S302). If so (step S302; YES), it extracts and outputs the feeling-expressing word "Urouro" stored in the transition-information table of the feeling-expressing-word database 21, shown in the figure, for the transition information "0101" and "1010" (step S303), and this operation ends.
- If it is determined in step S302 that the transition information is neither "0101" nor "1010" (step S302; NO), the extraction unit determines whether it is "0000" or "1111" (step S304). If so (step S304; YES), it extracts and outputs the feeling-expressing word "Sutasuta" stored in the transition-information table for the transition information "0000" and "1111" (step S305), and this operation ends.
- If it is determined in step S304 that the transition information is neither "0000" nor "1111" (step S304; NO), the extraction unit determines whether it is "2323" or "3232" (step S306). If so (step S306; YES), it extracts and outputs the feeling-expressing word "Pyonpyon" stored in the transition-information table for the transition information "2323" and "3232" (step S307), and this operation ends.
- If it is determined in step S306 that the transition information is neither "2323" nor "3232" (step S306; NO), this operation ends without extracting a feeling-expressing word.
- As described above, the feeling-expressing-word processing device 1 of the third embodiment can extract and output feeling-expressing words corresponding to the number of moving objects in a moving image, their movement amounts, their transition information, and luminance changes. This clarifies and emphasizes the temporal changes of the scene and the motions of objects at the time of shooting, making it possible to picture the atmosphere and the impression of the objects as if one were at the shooting location. Furthermore, by reading the feeling-expressing words, one can grasp the temporal changes of the scene and the motions of moving objects without viewing the entire moving image.
- Next, the feeling-expressing-word processing device of the fourth embodiment will be described.
- The device of the fourth embodiment superimposes the feeling-expressing words output by the feeling-expressing-word extraction unit 12 of the first embodiment's device 1 onto a still image supplied from outside, and displays the result on the display device 5.
- The feeling-expressing-word processing device 1 of the fourth embodiment differs from that of the first embodiment (see FIG. 1) in that it further includes a superimposing unit 31 in addition to the feeling-information calculation unit 11 and the feeling-expressing-word extraction unit 12. The following description focuses on the differences from the first embodiment.
- The superimposing unit 31 includes a superimposed-image generation unit 311.
- The superimposed-image generation unit 311 takes the input still image and the feeling-expressing word output by the extraction unit 12, and generates a superimposed image in which the word overlays the still image.
- Specifically, the superimposed-image generation unit 311 generates the superimposed image by placing the feeling-expressing word at a predetermined position in the still image, using predetermined font information.
- The font information includes, for example, the font (character shape), the font size, and the character color.
- The superimposing unit 31 then causes the display device 5 to display the superimposed image generated by the superimposed-image generation unit 311.
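- A minimal Pillow sketch of this overlay step; the font file, size, position, and color stand in for the "predetermined" values and are assumptions.

```python
from PIL import Image, ImageDraw, ImageFont

def superimpose_word(image_path: str, word: str, out_path: str) -> None:
    """Draw a feeling-expressing word onto a still image at a fixed position."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype("NotoSansCJK-Regular.ttc", size=48)  # assumed font
    draw.text((20, 20), word, font=font, fill=(255, 255, 0))       # assumed layout
    img.save(out_path)

superimpose_word("photo.jpg", "Nikoniko", "photo_overlay.jpg")
```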
- FIG. 17 shows an example of a word-superimposed image displayed on the display device 5.
- In this example, the feeling-expressing word "Nikoniko", extracted according to the degree of smiling, is superimposed.
- As described above, the fourth embodiment can extract feeling-expressing words corresponding to the number of faces in a still image, their inclinations, and their degrees of smiling, and display them superimposed on the still image. This clarifies and emphasizes the situation of the scene and the state of objects at the moment the still image was taken, making it possible to picture the atmosphere and the impression of the objects as if one were at the shooting location.
- The feeling-expressing-word processing device of the fifth embodiment, in addition to the fourth embodiment, determines the superimposition position, font information, and the like of the feeling-expressing words based on the face information calculated by the face detection unit 112.
- The device 1 of the fifth embodiment differs from that of the fourth embodiment (see FIG. 16) in that the superimposing unit 31 further includes a superimposition-condition determination unit 312, and in that the face detection unit 112 of the second embodiment replaces the face detection unit 111. The following description focuses on the differences from the fourth embodiment.
- The superimposition-condition determination unit 312 determines the superimposition position of the feeling-expressing word according to the face information (the center coordinates, width, and height of the face region) calculated by the face detection unit 112. The position is preferably chosen, for example, so that the word does not overlap the face region, or so that it lies near the face region; a position-selection sketch follows below. This prevents the word from covering the face region, which naturally draws human attention, and so preserves the visibility of the image; at the same time, placing the word near the face region makes it easier to picture the atmosphere and the objects as if one were at the shooting location.
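- The following sketch picks the first candidate position around the face box that stays inside the frame; the candidate order (right, left, above, below) and the margin are assumptions.

```python
def choose_overlay_position(face_box, text_size, frame_size, margin=10):
    """face_box: (x, y, w, h); returns (x, y) near but not on the face."""
    fx, fy, fw, fh = face_box
    tw, th = text_size
    W, H = frame_size
    candidates = [
        (fx + fw + margin, fy),   # right of the face
        (fx - tw - margin, fy),   # left of the face
        (fx, fy - th - margin),   # above the face
        (fx, fy + fh + margin),   # below the face
    ]
    for x, y in candidates:
        if 0 <= x and x + tw <= W and 0 <= y and y + th <= H:
            return (x, y)
    return (margin, margin)  # fallback: top-left corner
```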
- The superimposition-condition determination unit 312 also analyzes the input still image and determines the font information (font, font size, and character color) of the word to be superimposed. Specifically, for example, the font can be chosen according to the shooting location inferred from the image; the font size can be increased when the object region in the still image is large and decreased when it is small; and the character color can be set to the complementary color of the most frequent color in the region where the word is superimposed (a color sketch follows below). These choices preserve the visibility of the image.
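- A small sketch of the complementary-color rule; "complementary" is taken here as the RGB inverse, one common reading that the patent does not pin down.

```python
import numpy as np

def complementary_text_color(region_rgb: np.ndarray) -> tuple:
    """region_rgb: HxWx3 uint8 patch where the word will be drawn.
    Returns the RGB complement of the most frequent color in the patch."""
    pixels = region_rgb.reshape(-1, 3)
    # Quantize to 32-level bins so "most frequent color" is robust to noise.
    quant = (pixels // 32) * 32 + 16
    colors, counts = np.unique(quant, axis=0, return_counts=True)
    dominant = colors[np.argmax(counts)]
    return tuple(int(255 - c) for c in dominant)
```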
- The superimposed-image generation unit 311 then generates the superimposed image by placing the feeling-expressing word at the position determined by the superimposition-condition determination unit 312, using the font information determined by that unit.
- As described above, the fifth embodiment can extract feeling-expressing words corresponding to the number of faces in a still image, their inclinations, and their degrees of smiling, and superimpose them near the face region without overlapping it. This clarifies and emphasizes the situation of the scene and the state of objects at the moment the still image was taken, making it possible to picture the atmosphere and the impression of the objects as if one were at the shooting location.
- The feeling-expressing-word processing device of the sixth embodiment, in addition to the fifth embodiment, determines the superimposition position, font information, and the like of the feeling-expressing words based on the face information calculated by the face detection unit 112 and on the hand region identified by the finger detection unit 113.
- The configuration of the feeling-expressing-word processing device in the sixth embodiment is as follows.
- The device 1 of the sixth embodiment differs from that of the fifth embodiment (see FIG. 18) in that it further includes the finger detection unit 113 of the second embodiment. The following description focuses on the differences from the fifth embodiment.
- The superimposition-condition determination unit 312 determines the superimposition position of the feeling-expressing word according to the face information calculated by the face detection unit 112 and the hand region identified by the finger detection unit 113.
- The superimposition position is preferably determined, for example, as a position that does not overlap the face region, a position near the face region, a position that does not overlap the hand region, or a position near the hand region, as described in the fifth embodiment.
- As described above, the sixth embodiment extracts feeling-expressing words according to the number of faces in the still image, their inclinations, their degrees of smiling, and the number of extended fingers.
- The feeling-expressing words can then be superimposed near the face region or the hand region without overlapping either region.
- The feeling-expressing-word processing device of the seventh embodiment, in addition to the sixth embodiment, converts a still image supplied from outside into a sketch-style image, superimposes the feeling-expressing word on the converted image, and displays the result on the display device 5.
- The configuration of the feeling-expressing-word processing device in the seventh embodiment is as follows.
- The device 1 of the seventh embodiment differs from that of the sixth embodiment (see FIG. 19) in that it further includes an image conversion unit 313.
- The following description focuses on the differences from the sixth embodiment.
- The image conversion unit 313 converts the input still image into a sketch-style image.
- As a technique for the conversion, for example, the technique described in WO 2006/106750 can be used.
- Converting a still image into a sketch-style image removes fine shading and reduces the number of colors, which allows the edges to be emphasized (a code sketch follows below).
- The superimposed-image generation unit 311 generates the superimposed image by placing the feeling-expressing word on the sketch-style image produced by the image conversion unit 313.
- Specifically, the word is placed at the position determined by the superimposition-condition determination unit 312, using the font, font size, and character color determined by that unit.
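- As an illustration of the edge-emphasizing, color-reducing conversion (not the WO 2006/106750 method itself), a common OpenCV cartoon-style approximation:

```python
import cv2

def to_sketch_style(image_bgr, levels=8):
    """Approximate a sketch-style rendering: flatten colors, then
    overlay dark edges detected in the smoothed grayscale image."""
    # Reduce the number of colors by quantizing each channel.
    step = 256 // levels
    flat = (image_bgr // step) * step + step // 2
    # Suppress fine shading while keeping strong boundaries.
    smooth = cv2.bilateralFilter(flat, d=9, sigmaColor=75, sigmaSpace=75)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.adaptiveThreshold(cv2.medianBlur(gray, 5), 255,
                                  cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, blockSize=9, C=2)
    return cv2.bitwise_and(smooth, smooth, mask=edges)
```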
- FIG. 21 shows an example of a word-superimposed image displayed on the display device 5.
- The still image shown in FIG. 21 is a photograph taken in an office converted into a sketch-style image, with the feeling-expressing word "potoon" superimposed.
- As described above, the seventh embodiment can convert an input still image into a sketch-style image and superimpose the feeling-expressing word on the converted image.
- This emphasizes the main shading, colors, and edges of the still image, so the subject can be clarified and emphasized.
- Furthermore, by superimposing feeling-expressing words on the sketch-style image, the situation of the scene and the state of objects at the moment the still image was taken are clarified and emphasized, making it possible to picture the atmosphere and the impression of the objects as if one were at the shooting location.
- The feeling-expressing-word processing device of the eighth embodiment superimposes the feeling-expressing words output by the feeling-expressing-word extraction unit 12 of the third embodiment's device 1 onto a moving image supplied from outside and displays the result on the display device 5; in addition, it determines the superimposition position, font information, and the like based on variation information indicating the motion of moving objects.
- The device 1 of the eighth embodiment differs from that of the third embodiment (see FIG. 9) in that it further includes a superimposing unit 31 in addition to the feeling-information calculation unit 11 and the feeling-expressing-word extraction unit 12. The following description focuses on the differences from the third embodiment.
- The superimposing unit 31 includes a superimposition-condition determination unit 312 and a superimposed-image generation unit 311.
- The superimposition-condition determination unit 312 calculates variation information based on the moving objects detected by the moving-object detection unit 114, and determines the position at which the feeling-expressing word is superimposed according to that information.
- The variation information is, for example, information indicating the motion of a moving object. Specifically, when a pedestrian appears in the moving image, the superimposition position is determined according to variation information indicating the pedestrian's movement, so that, for example, the word "Sutasuta", representing a walking motion, can be superimposed so as to follow the pedestrian (a tracking sketch follows after this passage).
- The variation information is not limited to information indicating the motion of a moving object.
- Information indicating regions with little change in color, luminance, or edges, obtained by analyzing the moving image, may also be calculated as variation information.
- For example, when a street scene appears in the moving image, a building wall or a patch of sky can be detected and the feeling-expressing word superimposed on the detected region.
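- A minimal sketch of making the overlaid word track a moving object's centroid frame by frame, reusing the detector sketched earlier; the fixed pixel offset is an assumption.

```python
def track_overlay_positions(frames, detect_moving_objects, offset=(10, -30)):
    """Yield (frame_index, (x, y)) overlay positions that follow the
    largest moving object between consecutive frames."""
    for i in range(1, len(frames)):
        movers = detect_moving_objects(frames[i - 1], frames[i])
        if not movers:
            continue
        cx, cy, _area = max(movers, key=lambda m: m[2])  # largest mover
        yield i, (int(cx) + offset[0], int(cy) + offset[1])
```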
- The superimposition-condition determination unit 312 also analyzes the input moving image and determines the font information (font, font size, and character color) of the word to be superimposed. As in the fifth embodiment, the font can be chosen according to the shooting location, the font size can be scaled with the size of the object region, and the character color can be set to the complement of the most frequent color in the superimposition region, preserving the visibility of the image.
- The superimposed-image generation unit 311 takes the input moving image and the feeling-expressing word output by the extraction unit 12, and generates a superimposed image in which the word overlays the moving image.
- Specifically, it places the word at the position determined by the superimposition-condition determination unit 312, using the font information determined by that unit.
- The superimposing unit 31 then causes the display device 5 to display the superimposed image generated by the superimposed-image generation unit 311.
- As described above, the feeling-expressing-word processing device 1 of the eighth embodiment extracts feeling-expressing words corresponding to the number of moving objects in a moving image, their movement amounts, their transition information, and luminance changes.
- Each word can then be superimposed so that it follows the motion or change in the moving image. This clarifies and emphasizes the temporal changes of the scene and the motions of objects at the time of shooting, making it possible to picture the atmosphere and the impression of the objects as if one were at the shooting location.
- (Supplementary note 1) A feeling-expressing-word processing device comprising: a feeling-information calculation unit that analyzes a captured image and calculates feeling information indicating temporal changes of the scene and motions of objects represented in the image; and a feeling-expressing-word extraction unit that extracts, from feeling-expressing words that express feelings and are stored in advance in association with the feeling information, the feeling-expressing word corresponding to the feeling information calculated by the feeling-information calculation unit.
- (Supplementary note 2) The feeling-expressing-word processing device according to note 1, wherein the feeling-information calculation unit calculates the feeling information including at least one of the number of moving objects (regions with large motion), the amount of movement of the moving objects, transition information of the moving objects, and changes in luminance of the image.
- (Supplementary note 3) The feeling-expressing-word processing device according to note 2, wherein the feeling-expressing-word extraction unit extracts a feeling-expressing word representing a degree of bustle, such that greater bustle is expressed as the number of moving objects increases.
- (Supplementary note 4) The feeling-expressing-word processing device according to note 2 or 3, wherein the extraction unit extracts a feeling-expressing word representing speed of movement, such that faster movement is expressed as the movement amount of the moving object increases.
- (Supplementary note 5) The feeling-expressing-word processing device according to any one of notes 2 to 4, wherein, when periodicity is recognized in a moving object based on its transition information, the extraction unit extracts a feeling-expressing word representing the corresponding repetitive motion.
- (Supplementary note 6) The feeling-expressing-word processing device according to any one of notes 2 to 5, wherein the extraction unit extracts a feeling-expressing word representing lights being turned on when the luminance changes to a higher value, and a feeling-expressing word representing lights being turned off when the luminance changes to a lower value.
- (Supplementary note 7) The feeling-expressing-word processing device according to any one of notes 1 to 6, wherein the feeling-information calculation unit further calculates feeling information indicating the situation of the scene and the state of objects.
- (Supplementary note 8) The feeling-expressing-word processing device according to note 7, wherein the feeling-information calculation unit calculates feeling information including any of the number of faces, face inclination, degree of smiling, and number of fingers.
- (Supplementary note 9) The feeling-expressing-word processing device according to note 8, wherein the extraction unit extracts a feeling-expressing word representing the liveliness of the scene, such that greater liveliness is expressed as the number of faces increases.
- (Supplementary note 10) The feeling-expressing-word processing device according to note 8 or 9, wherein the extraction unit extracts a feeling-expressing word representing friendliness, such that deeper friendliness is expressed as the two adjacent faces lean further toward each other.
- (Supplementary note 11) The feeling-expressing-word processing device according to any one of notes 8 to 10, wherein the extraction unit extracts a feeling-expressing word representing joy and fun such that greater joy and fun are expressed as the degree of smiling increases, and a feeling-expressing word representing anger and sadness such that greater anger and sadness are expressed as the degree of smiling decreases.
- (Supplementary note 12) The feeling-expressing-word processing device according to any one of notes 8 to 11, wherein, when the number of extended fingers is zero, the extraction unit extracts a feeling-expressing word representing a clenched fist; when the number is two, a feeling-expressing word representing a peace sign; and when the number is five, a feeling-expressing word representing an open hand.
- The feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program according to the present invention are suitable for making it possible to picture the atmosphere of a scene and the objects at the time of shooting as if one were at the shooting location.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Library & Information Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
The feeling-expressing-word processing device of the first embodiment is an example in which a still image signal is received as the input signal; the number of human faces in the still image, their inclinations, and their degrees of smiling are calculated as feeling information; and the feeling-expressing words corresponding to this feeling information are extracted and output.
Next, the feeling-expressing-word processing device of the second embodiment will be described. It is an example in which a still image signal is received as the input signal, the number of extended fingers in the still image is calculated as feeling information, and the feeling-expressing word corresponding to this feeling information is extracted and output.
Next, the feeling-expressing-word processing device of the third embodiment will be described. It is an example in which a moving image signal is received as the input signal; the number of moving objects in the moving image, their movement amounts, their transition information, and luminance changes are calculated as feeling information; and the feeling-expressing words corresponding to this feeling information are extracted and output.
a = 0 … Equation (1) (when |d| < T)
a = 1 … Equation (2) (when d ≥ T)
a = -1 … Equation (3) (when d ≤ -T)
Next, the feeling-expressing-word processing device of the fourth embodiment will be described. It is an example in which the feeling-expressing words output by the feeling-expressing-word extraction unit 12 of the first embodiment's device 1 are superimposed on a still image supplied from outside and displayed on the display device 5.
Next, the device of the fifth embodiment will be described. In addition to the fourth embodiment, it is an example in which the superimposition position, font information, and the like used when superimposing the feeling-expressing word are determined based on the face information calculated by the face detection unit 112.
Next, the device of the sixth embodiment will be described. In addition to the fifth embodiment, it is an example in which the superimposition position, font information, and the like are determined based on the face information calculated by the face detection unit 112 and the hand region identified by the finger detection unit 113.
Next, the device of the seventh embodiment will be described. In addition to the sixth embodiment, it is an example in which a still image supplied from outside is converted into a sketch-style image, and the feeling-expressing word is superimposed on the converted sketch-style image and displayed on the display device 5.
Next, the device of the eighth embodiment will be described. It is an example in which the feeling-expressing words output by the feeling-expressing-word extraction unit 12 of the third embodiment's device 1 are superimposed on a moving image supplied from outside and displayed on the display device 5, and in which the superimposition position, font information, and the like are determined based on variation information indicating the motion of moving objects.
Claims (10)
1. A feeling-expressing-word processing device comprising: a feeling-information calculation unit that analyzes a captured image and calculates feeling information indicating temporal changes of the scene and motions of objects represented in the image; and a feeling-expressing-word extraction unit that extracts, from feeling-expressing words that express feelings and are stored in advance in association with the feeling information, the feeling-expressing word corresponding to the feeling information calculated by the feeling-information calculation unit.
2. The feeling-expressing-word processing device according to claim 1, wherein the feeling-information calculation unit calculates the feeling information including at least one of the number of moving objects (regions with large motion), the amount of movement of the moving objects, transition information of the moving objects, and changes in luminance of the image.
3. The feeling-expressing-word processing device according to claim 2, wherein, when the feeling information includes the number of moving objects, the feeling-expressing-word extraction unit extracts a feeling-expressing word representing a degree of bustle, such that greater bustle is expressed as the number of moving objects increases.
4. The feeling-expressing-word processing device according to claim 2 or 3, wherein, when the feeling information includes the amount of movement of a moving object, the extraction unit extracts a feeling-expressing word representing speed of movement, such that faster movement is expressed as the movement amount increases.
5. The feeling-expressing-word processing device according to any one of claims 2 to 4, wherein, when the feeling information includes transition information of a moving object and periodicity is recognized in the object's transitions based on that information, the extraction unit extracts a feeling-expressing word representing the corresponding repetitive motion.
6. The feeling-expressing-word processing device according to any one of claims 2 to 5, wherein, when the feeling information includes changes in luminance of the image, the extraction unit extracts a feeling-expressing word representing lights being turned on when the luminance changes to a higher value, and a feeling-expressing word representing lights being turned off when the luminance changes to a lower value.
7. The feeling-expressing-word processing device according to any one of claims 1 to 6, wherein the feeling-information calculation unit further calculates feeling information indicating the situation of the scene and the state of objects.
8. The feeling-expressing-word processing device according to claim 7, wherein the feeling-information calculation unit calculates feeling information including any of the number of faces, face inclination, degree of smiling, and number of fingers.
9. A feeling-expressing-word processing method comprising: a feeling-information calculation step of analyzing a captured image and calculating feeling information indicating temporal changes of the scene and motions of objects represented in the image; and a feeling-expressing-word extraction step of extracting, from feeling-expressing words that express feelings and are stored in advance in association with the feeling information, the feeling-expressing word corresponding to the calculated feeling information.
10. A feeling-expressing-word processing program for causing a computer to execute each step according to claim 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012545688A JPWO2012070429A1 (ja) | 2010-11-24 | 2011-11-15 | Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program
US13/824,403 US9183632B2 (en) | 2010-11-24 | 2011-11-15 | Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010261045 | 2010-11-24 | ||
JP2010-261045 | 2010-11-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012070429A1 (ja) | 2012-05-31 |
Family
ID=46145775
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/076292 WO2012070429A1 (ja) | 2010-11-24 | 2011-11-15 | Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program |
Country Status (3)
Country | Link |
---|---|
US (1) | US9183632B2 (ja) |
JP (1) | JPWO2012070429A1 (ja) |
WO (1) | WO2012070429A1 (ja) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011122522A1 (ja) * | 2010-03-30 | 2011-10-06 | 日本電気株式会社 | Feeling-expressing-word selection system, feeling-expressing-word selection method, and program |
US10225608B2 (en) * | 2013-05-30 | 2019-03-05 | Sony Corporation | Generating a representation of a user's reaction to media content |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003289499A (ja) * | 2002-03-28 | 2003-10-10 | Sharp Corp | Data editing method, data editing device, data recording device, and recording medium |
JP2010066844A (ja) * | 2008-09-09 | 2010-03-25 | Fujifilm Corp | Method and device for processing video content, and video content processing program |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6619860B1 (en) * | 1997-11-14 | 2003-09-16 | Eastman Kodak Company | Photobooth for producing digitally processed images |
JP2003018462A (ja) | 2001-06-28 | 2003-01-17 | Canon Inc | Character insertion device and character insertion method |
US6931147B2 (en) * | 2001-12-11 | 2005-08-16 | Koninklijke Philips Electronics N.V. | Mood based virtual photo album |
US7003139B2 (en) * | 2002-02-19 | 2006-02-21 | Eastman Kodak Company | Method for using facial expression to determine affective information in an imaging system |
JP3863809B2 (ja) | 2002-05-28 | 2006-12-27 | 独立行政法人科学技術振興機構 | Input system using hand image recognition |
JP4278027B2 (ja) | 2002-10-28 | 2009-06-10 | 株式会社報商製作所 | Storage box for fire-extinguishing equipment |
US7233684B2 (en) * | 2002-11-25 | 2007-06-19 | Eastman Kodak Company | Imaging method and system using affective information |
JP2005044330A (ja) * | 2003-07-24 | 2005-02-17 | Univ Of California San Diego | Weak-hypothesis generation device and method, learning device and method, detection device and method, facial-expression learning device and method, facial-expression recognition device and method, and robot device |
US7607097B2 (en) | 2003-09-25 | 2009-10-20 | International Business Machines Corporation | Translating emotion to braille, emoticons and other special symbols |
US20060047515A1 (en) | 2004-08-25 | 2006-03-02 | Brenda Connors | Analyzing human movement patterns |
JP4375580B2 (ja) | 2005-03-30 | 2009-12-02 | 日本電気株式会社 | Image processing device, image processing method, and image processing program |
US7532752B2 (en) * | 2005-12-30 | 2009-05-12 | Microsoft Corporation | Non-photorealistic sketching |
JP2007233517A (ja) | 2006-02-28 | 2007-09-13 | Fujifilm Corp | Face detection device, method, and program |
US20070294273A1 (en) * | 2006-06-16 | 2007-12-20 | Motorola, Inc. | Method and system for cataloging media files |
US8126220B2 (en) * | 2007-05-03 | 2012-02-28 | Hewlett-Packard Development Company L.P. | Annotating stimulus based on determined emotional response |
KR20080110489A (ko) * | 2007-06-14 | 2008-12-18 | 소니 가부시끼 가이샤 | Information processing device and method, and computer program |
US8117546B2 (en) * | 2007-08-26 | 2012-02-14 | Cyberlink Corp. | Method and related display device for displaying pictures in digital picture slide show |
US8195598B2 (en) * | 2007-11-16 | 2012-06-05 | Agilence, Inc. | Method of and system for hierarchical human/crowd behavior detection |
JP2009141516A (ja) | 2007-12-04 | 2009-06-25 | Olympus Imaging Corp | Image display device, camera, image display method, program, and image display system |
US8462996B2 (en) * | 2008-05-19 | 2013-06-11 | Videomining Corporation | Method and system for measuring human response to visual stimulus based on changes in facial expression |
TW201021550A (en) | 2008-11-19 | 2010-06-01 | Altek Corp | Emotion-based image processing apparatus and image processing method |
US20110263946A1 (en) * | 2010-04-22 | 2011-10-27 | Mit Media Lab | Method and system for real-time and offline analysis, inference, tagging of and responding to person(s) experiences |
DE102010018460B4 (de) | 2010-04-27 | 2017-02-16 | Siemens Healthcare Gmbh | Method for determining at least one change in a tubular tissue structure of a living being, computing unit, and data carrier |
2011
- 2011-11-15 WO PCT/JP2011/076292 patent/WO2012070429A1/ja active Application Filing
- 2011-11-15 JP JP2012545688A patent/JPWO2012070429A1/ja active Pending
- 2011-11-15 US US13/824,403 patent/US9183632B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US9183632B2 (en) | 2015-11-10 |
US20130182907A1 (en) | 2013-07-18 |
JPWO2012070429A1 (ja) | 2014-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR100845390B1 (ko) | Image processor, image processing method, recording medium, and semiconductor device | |
WO2012070430A1 (ja) | Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program | |
KR101263686B1 (ko) | Karaoke system and apparatus using augmented reality, and karaoke service method thereof | |
de Lima et al. | Draw your own story: Paper and pencil interactive storytelling | |
EP2239652A1 (en) | Providing an interactive visual representation on a display | |
KR101483054B1 (ko) | Mobile-based augmented reality authoring system and method supporting interaction | |
WO2012070428A1 (ja) | Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program | |
KR20120038616A (ko) | Method and system for providing markerless immersive augmented reality | |
Tripathy et al. | Voice for the mute | |
KR20180037519A (ko) | Method and apparatus for authoring immersive media based on machine learning | |
US10955911B2 (en) | Gazed virtual object identification module, a system for implementing gaze translucency, and a related method | |
TW202316373A (zh) | System and method for object recognizability in multi-layer displays | |
WO2012070429A1 (ja) | Feeling-expressing-word processing device, feeling-expressing-word processing method, and feeling-expressing-word processing program | |
KR20190074911A (ko) | Method for providing immersive video content and server using the same | |
Gomez et al. | Spatial awareness and intelligibility for the blind: audio-touch interfaces | |
CN111651054A (zh) | Sound effect control method and apparatus, electronic device, and storage medium | |
Kakarla et al. | A real time facial emotion recognition using depth sensor and interfacing with Second Life based Virtual 3D avatar | |
Rasool et al. | Image-driven haptic rendering | |
Carmigniani | Augmented reality methods and algorithms for hearing augmentation | |
JP2023503170A (ja) | System and method for improving player interaction using augmented reality | |
CN114450730A (zh) | Information processing system and method | |
Lee et al. | Enhancing interface design using attentive interaction design toolkit | |
JP2020037155A (ja) | Gesture control device and gesture control program | |
Amatya et al. | Translation of Sign Language Into Text Using Kinect for Windows v2 | |
TW201105135A (en) | A video detecting and monitoring method with adaptive detection cells and a system thereof | |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11843408; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2012545688; Country of ref document: JP; Kind code of ref document: A
| WWE | Wipo information: entry into national phase | Ref document number: 13824403; Country of ref document: US
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 11843408; Country of ref document: EP; Kind code of ref document: A1