WO2011042989A1 - Dispositif de détermination du sentiment d'un spectateur pour une scène reconnue visuellement - Google Patents

Device for determining the feeling of a viewer toward a visually recognized scene

Info

Publication number
WO2011042989A1
WO2011042989A1 (PCT/JP2009/067659)
Authority
WO
WIPO (PCT)
Prior art keywords
viewer
visual
viewpoint
attention
emotion
Prior art date
Application number
PCT/JP2009/067659
Other languages
English (en)
Japanese (ja)
Inventor
菊池光一
倉島渡
Original Assignee
Kikuchi Kouichi
Kurashima Wataru
Priority date
Filing date
Publication date
Application filed by Kikuchi Kouichi, Kurashima Wataru filed Critical Kikuchi Kouichi
Priority to JP2011535257A priority Critical patent/JP5445981B2/ja
Priority to PCT/JP2009/067659 priority patent/WO2011042989A1/fr
Publication of WO2011042989A1 publication Critical patent/WO2011042989A1/fr


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/163 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813 Specially adapted to be attached to a specific body part
    • A61B5/6814 Head
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/40 Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076 Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4088 Diagnosing of monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • The present invention relates to an apparatus for determining a viewer's feeling toward a visually recognized scene, and particularly to a technique for analyzing changes in the viewer's physiological responses while viewing the scene and for evaluating both the viewed object and the viewer's own abilities.
  • In the prior art, eye movement is imaged and data such as gaze movement, pupil area, and blink frequency are used to judge the viewer's degree of interest in video content and to evaluate that content. This has realized a more reliable, higher-level evaluation with fewer subjects than evaluation methods that use the audience rating as an index.
  • Emotion (E) is a physiological response that reacts unconsciously to a visual target (including sound) or to bodily stimulation; for example, attention arises unconsciously when a visual target is of interest. Feeling (F) is what a person senses toward people's actions and things: joy, sadness, anger, surprise, disgust, and fear.
  • Herein, emotion (E) and feeling (F) are collectively defined as emotion (EF).
  • The technique described in the above patent document evaluates the degree of attention to video content such as television programs and is not intended for application to other scenes. Furthermore, it treats pupil diameter directly as the degree of attention, yet pupil diameter reacts strongly to the brightness (illuminance and luminance) of the viewing target; without a means of removing the pupil response attributable to brightness, a pupil response that accurately reflects attention cannot be obtained.
  • The object of the present invention is to grasp the viewer's physiological responses more precisely and, in a variety of situations, to evaluate the viewed object and the viewer's own abilities more accurately and in finer detail, by providing a viewer feeling determination device for visually recognized scenes (including auditory, taste, tactile, and olfactory stimuli, and recalled images).
  • The invention according to claim 1 is a viewer emotion determination device for a visual scene, which diagnoses the emotion that a viewer looking at a scene containing a specific object feels toward that object.
  • It comprises an analysis unit that calculates the normal position of each part of the viewer's face and the changes in physiological response data, with their accompanying velocity and acceleration, while the viewer looks at an object of low interest and attention, accumulating this information as a "normal emotion value"; it likewise accumulates the change values observed when the viewer's emotion is heightened (anger, fear, disgust, happiness/satisfaction, sadness, surprise, etc.) as a "high emotion value".
  • A diagnostic unit diagnoses the viewer's feeling toward the specific object, and the viewer's own reaction ability, by comparing measured responses against these values (together called "emotion values"). Where necessary, the device also includes a unit that diagnoses the degree of excitement by measuring body temperature and body movement during excitement against their normal levels.
  • The object visually recognized in claim 1 includes cases where it is viewed while being touched or operated. Furthermore, similar responses are seen for auditory, gustatory, tactile, and olfactory stimuli; by analyzing the degree of attention to these stimuli (the five senses), the movement of the viewpoint, and the movement of each facial part, the same feeling determination can be performed for those objects.
  • The "normal emotion value" and "high emotion value" are measured by having the viewer visually recognize basic videos designed to elicit them. The position of each facial part in the "normal" state is recorded as the normal emotion value, while the displacement of each facial part at the maximum expression of "anger", "fear", "disgust", "happiness (joy)", "sadness", and "surprise" is used to calculate the anger, fear, disgust, happiness, sadness, and surprise levels, which are recorded as the high emotion value.
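The facial-displacement idea above can be sketched as follows. This is a minimal illustration, not the patent's method: the landmark names and coordinates are hypothetical, and a score is formed as the fraction of the distance each landmark has moved from its neutral position toward its recorded maximum-expression position.

```python
import math

# Hypothetical landmark tables: (x, y) positions in the face image.
# NEUTRAL corresponds to the "normal emotion value", MAX_JOY to the
# recorded maximum "happiness (joy)" expression ("high emotion value").
NEUTRAL = {"brow_l": (30, 40), "brow_r": (70, 40), "mouth_l": (35, 80), "mouth_r": (65, 80)}
MAX_JOY = {"brow_l": (30, 38), "brow_r": (70, 38), "mouth_l": (30, 76), "mouth_r": (70, 76)}

def displacement(a, b):
    """Euclidean distance between two landmark positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def emotion_level(current, neutral, max_expr):
    """Score in 0..1: how far the face has moved from its neutral state
    toward the recorded maximum expression, averaged over landmarks."""
    ratios = []
    for name in neutral:
        full = displacement(max_expr[name], neutral[name])
        if full == 0:
            continue  # landmark does not move for this expression
        now = displacement(current[name], neutral[name])
        ratios.append(min(now / full, 1.0))
    return sum(ratios) / len(ratios) if ratios else 0.0
```

One such score would be computed per basic feeling (anger, fear, disgust, happiness, sadness, surprise) against that feeling's maximum-expression table.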
  • The invention of claim 2 is the device of claim 1 in which the change in physiological response data, with its velocity and acceleration, is the approach velocity and acceleration of the viewer's viewpoint toward the specific object, or the departure velocity and acceleration when the viewpoint leaves it. The diagnostic unit diagnoses that the greater the approach velocity and acceleration, the greater the interest in the specific object, and the greater the departure velocity and acceleration, the less the interest.
  • The visual recognition target may be a particular part of the object (for a product, e.g., the product name, manufacturer name, expiration date, notes, specifications, or operation buttons).
  • The analysis unit uses a visual scene photographing device worn on the viewer's head. Taking the position of a stationary object in the captured visual image as a base point, it calculates the head's movement distance, velocity, and acceleration from changes in that position, then adds the change in viewpoint position within the visual image to obtain the viewpoint's approach (or departure) velocity and acceleration relative to the specific object, head movement included.
  • An existing method, such as measurement with a gyroscope, may instead be used to obtain the head's movement direction, distance, velocity, and acceleration.
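The approach/departure kinematics of claim 2 reduce to finite differences over gaze samples. A minimal sketch under assumed inputs (timestamped viewpoint positions already corrected for head movement, and a fixed target position; all names are illustrative):

```python
# Each sample is (t_seconds, x, y): the viewpoint position in the scene
# image. The target is a fixed (x, y) point. Distances to the target are
# differentiated once for velocity and again for acceleration.
def approach_kinematics(samples, target):
    tx, ty = target
    dist = [((x - tx) ** 2 + (y - ty) ** 2) ** 0.5 for _, x, y in samples]
    times = [t for t, _, _ in samples]
    # velocity of the viewpoint-to-target distance (negative = approaching)
    vel = [(dist[i] - dist[i - 1]) / (times[i] - times[i - 1])
           for i in range(1, len(dist))]
    # acceleration of that distance
    acc = [(vel[i] - vel[i - 1]) / (times[i + 1] - times[i])
           for i in range(1, len(vel))]
    return vel, acc
```

Per the claim, large negative velocity and acceleration (fast approach) would be diagnosed as high interest in the target, and large positive values (fast departure) as low interest.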
  • When the physiological response data is a change in pupil diameter and the visual target is a video display (the visual image being the content shown on the display), the eyeball photographing device, face photographing device, and brightness measuring device need not be worn on the viewer's head but may instead be fixed on the display side.
  • In advance, the luminance of the video display is varied, the viewer's pupil diameter at each luminance is recorded, and the result is stored as basic luminance-pupil diameter relation data.
  • During measurement, the viewer's pupil diameter is measured while a luminance meter measures the luminance of the viewing range. The pupil diameter attributable only to the degree of attention is then obtained by subtracting, from the measured pupil diameter, the pupil diameter that corresponds to the current luminance in the accumulated relation data.
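The brightness-correction step above can be sketched as a lookup with linear interpolation into the accumulated relation data. The calibration values below are purely illustrative placeholders, not figures from the patent:

```python
import bisect

# Hypothetical accumulated relation data: (luminance in cd/m2, pupil
# diameter in mm). The pupil constricts as luminance rises.
CALIB = [(1.0, 7.0), (10.0, 5.5), (100.0, 4.0), (300.0, 3.0)]

def pupil_for_luminance(lum):
    """Brightness-driven pupil diameter, linearly interpolated from CALIB."""
    xs = [l for l, _ in CALIB]
    ys = [d for _, d in CALIB]
    if lum <= xs[0]:
        return ys[0]
    if lum >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_left(xs, lum)
    frac = (lum - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + frac * (ys[i] - ys[i - 1])

def attention_pupil(measured_diameter, scene_luminance):
    """Pupil diameter attributable to attention alone: the measured value
    minus the component predicted by the scene's brightness."""
    return measured_diameter - pupil_for_luminance(scene_luminance)
```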
  • Alternatively, the eyeball photographing device, face photographing device, visual scene photographing device, and brightness measuring device may be mounted together on the viewer's head for measurement.
  • Where illuminance rather than luminance represents the brightness of the visible range, an illuminometer is used instead of a luminance meter.
  • In another aspect, building on the display-luminance approach of the second aspect, the luminance of the pixel at the viewpoint is calculated from that pixel's red (R), green (G), and blue (B) values, using each channel's influence on luminance as a parameter. From the "basic relationship data of display luminance, viewpoint luminance, and pupil diameter", the pupil diameter corresponding to that pixel luminance (the component reacting to brightness) is subtracted from the viewer's measured pupil diameter, yielding the pupil diameter corresponding only to the degree of attention at the viewpoint, display brightness accounted for.
  • The luminance of each pixel may instead be obtained with a measuring device.
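The per-channel luminance weighting can be sketched as below. The patent only says each channel's influence on luminance is used as a parameter; the Rec. 709 weights here are a standard assumption for sRGB-like displays, and display gamma is ignored for simplicity:

```python
# Relative luminance (0.0 black .. 1.0 white) of the pixel at the
# viewpoint, from its 8-bit R, G, B values. The 0.2126/0.7152/0.0722
# weights are the ITU-R BT.709 luminance coefficients; gamma
# linearization is deliberately omitted in this sketch.
def pixel_luminance(r, g, b, max_value=255):
    rn, gn, bn = r / max_value, g / max_value, b / max_value
    return 0.2126 * rn + 0.7152 * gn + 0.0722 * bn
```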
  • The "basic relationship data of display luminance, viewpoint luminance, and pupil diameter" is built by displaying a circle of about 1.7 cm whose brightness is varied from maximum (white) to minimum (black) against display luminances that change in steps; the viewer's pupil diameter while viewing the circle is measured at each combination, and the relation data is calculated and accumulated.
  • A sixth aspect of the present invention uses, in the luminance-pupil relation of the fourth and fifth aspects, the range between the maximum pupil diameter (at the display's lowest luminance) and the minimum pupil diameter (at its highest luminance). The ratio of the attention-only pupil diameter, obtained when the viewer visually recognizes the object, to this change range is taken as the degree of attention, thereby absorbing individual differences between viewers.
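The normalization in the sixth aspect is a simple ratio against each viewer's own pupil range. A minimal sketch (function and parameter names are illustrative):

```python
def attention_degree(attention_pupil_mm, pupil_min_mm, pupil_max_mm):
    """Degree of attention in 0..1: the attention-only pupil component
    divided by this viewer's own pupil range (maximum diameter at the
    display's lowest luminance, minimum at its highest). Dividing by the
    per-viewer range absorbs individual differences in pupil size."""
    span = pupil_max_mm - pupil_min_mm
    if span <= 0:
        raise ValueError("pupil_max_mm must exceed pupil_min_mm")
    return max(0.0, min(attention_pupil_mm / span, 1.0))
```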
  • If the degree of attention is high, interest in the visual target is judged to be high. This is an emotional response, and the attention level in this case is the emotion value.
  • The velocity and acceleration associated with the change in physiological response data are here the expansion (or contraction) velocity and acceleration of the viewer's attention level: the greater the attention and the greater its expansion velocity and acceleration, the greater the diagnosed interest in the specific object; the lower the attention and the greater its contraction velocity and acceleration, the less the interest.
  • The movement of each part of the viewer's face (nose, eyes, mouth, eyebrows, etc.) is measured in the facial image, and the magnitude of the feeling value based on facial expression is determined from the size of each part's change.
  • This determination is combined with the attention-based emotion value from eye movement during visual recognition (claims 1 to 3 and 7) to determine the viewer's feeling toward the visual target.
  • When each feeling value (F) is low, the evaluation of the visual target can be judged negative.
  • When the emotion value (E) is normal, the viewer has little particular interest and pays little attention; each feeling (F) then generally does not rise, so feeling cannot be determined accurately.
  • When the emotion value (E) is high and each feeling is high, and happiness (HP) dominates, the reaction can be judged positive and effective.
  • (Abbreviations: HP happiness, SP surprise, SD sadness, AG anger, FR fear, DI disgust.)
  • Anger, fear, or disgust indicates a negative reaction. Surprise indicates that the viewer is startled and interested; depending on the subject, this is generally an effective response.
  • When the viewed object is content such as a movie that intentionally invites each feeling, and the happiness (HP), surprise (SP), sadness (SD), anger, fear, or disgust of the feeling value (F) rises as intended, the content can be judged to elicit the intended emotional reaction and to be effective.
  • A specific determination method will now be described for each target.
  • The invention of claim 9 is the device of claim 1 applied when the object is viewed while being touched or operated, when only sound is heard, when something is smelled or tasted, when a part of the body is in contact with the object, or when multiple stimuli are present; the viewer's attention level, viewpoint movement, and facial part movement are analyzed and a feeling determination is performed for those stimuli. In particular, when the emotion value (E) for these stimuli is low, it can be determined that there is no response to the stimulus.
  • When the emotion value (E) is high and the happiness (joy) degree (HP) of the feeling value (F) is generally high, the stimulus can be judged comfortable.
  • When sadness (SD), anger (AG), fear (FR), or disgust (DI) is high, the stimulus can generally be judged uncomfortable for the viewer.
  • When attention is paid and surprise (SP) is high, it depends on the situation, but generally the viewer can be judged to feel something unexpected.
  • A tenth aspect of the present invention is the device of any of the first to third and seventh to ninth aspects in which the specific object is a product and the diagnostic unit diagnoses the viewer's willingness to purchase it.
  • If the emotion value is low (no interest, no attention), it can clearly be determined that no purchase will follow.
  • If the emotion value is high (interest and attention), the purchase intent can be judged positive when happiness (joy) and surprise are generally high, and not positive when sadness, anger, fear, or disgust is high.
  • The eleventh aspect of the present invention is the device of any of the first to third and seventh to ninth aspects in which the specific object is a learning target (a lecture, reading, the teacher's movements, teaching materials, etc.) and the diagnostic unit diagnoses the viewer's willingness to learn. If the emotion value is low (no interest, no attention), the viewer is clearly uninterested in the learning target and is "bored"; if it is high (interested and focused), the willingness to learn can be said to be high.
  • If the learning content is designed to invite particular feelings, and happiness (joy), surprise, sadness, anger, fear, or disgust rises as intended, the learning can be judged good.
  • If the emotion value is high while the learner tackles a problem but sadness, anger, fear, or disgust is also high, something is confusing the learner, and the learning target can be judged to have a problem.
  • The invention of claim 12 is the device of any of claims 1 to 3 and 7 to 9 in which the specific object is video content such as a television program or movie, and the viewer is its audience.
  • The attention level (interest in each scene) and the levels of each basic feeling (happiness/joy, surprise, sadness, anger, fear, disgust) and of excitement that the producer expects from the viewer are set on the time axis of the content program; at projection time, the device diagnoses how closely the measured attention, happiness (joy), surprise, sadness, anger, fear, disgust, and excitement levels reach the producer's expected levels, and thereby evaluates the video content.
  • The target content normally includes sound together with the video, though it may consist of sound alone.
  • The most effective uses are pre-release evaluation of movies and pre-evaluation of commercial audience response.
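The producer-expectation comparison of claim 12 can be sketched as a per-scene attainment report. The data layout (scene keys, channel names, 0-100 levels) is an illustrative assumption, not the patent's format:

```python
# For each scene and each response channel (attention, joy, surprise, ...),
# compute the fraction of the producer's expected level actually reached
# by the measured audience response, capped at 1.0.
def attainment(expected, measured):
    report = {}
    for scene, targets in expected.items():
        got = measured.get(scene, {})
        report[scene] = {
            ch: min(got.get(ch, 0.0) / lvl, 1.0) if lvl > 0 else 1.0
            for ch, lvl in targets.items()
        }
    return report
```

A low attainment on a channel the producer cared about (e.g. surprise in a climax scene) would flag that scene for rework before release.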
  • The invention according to claim 13 is the device of any of claims 1 to 3 and 7 to 9 in which the specific object is video content such as a television program or movie, and the viewer is its audience. The distribution of the attention points of multiple viewers in a specific scene or still image is displayed on the video screen and compared with the appeal points the content creator intended, to judge the content's effect.
  • The invention of claim 14 is the device of any of claims 1 to 3 and 7 to 9 in which a receiving unit receives, for each of two videophone callers, the caller's face image, visual image, and eye movement image; the diagnostic unit diagnoses each caller's feelings toward the other; and a transmission unit sends the feeling diagnosis result to the other party's videophone or to an intermediary such as a diagnostic center.
  • Compatibility can be judged good when the degree of happiness (joy) is generally high.
  • Even when anger, fear, or disgust is high, if the reaction tracks the content of the conversation, compatibility can still be judged good in the sense of engaged disagreement.
  • The invention of claim 15 is the device of any of claims 1 to 3 and 7 to 9 in which the viewer has difficulty speaking a language, such as an animal, or a human such as a baby or a person with dementia; the visual target is anything such a viewer sees, and the device determines the viewer's feelings toward it.
  • If the emotion value is low (no attention), it can clearly be determined that the viewer is not interested in the visual target; if it is high (attention is paid), interest can be judged to be present.
  • The invention of claim 16 is the device of any of claims 1 to 3 and 7 to 9 incorporated into a robot, where the viewer is a human or animal with emotions and the visual target is everything the viewer sees.
  • The robot includes a selection unit and an execution function that choose appropriate speech and responses based on the viewer diagnosis results from the diagnostic unit. For example, when the viewer's emotion value is low (no attention), the robot determines that the current visual target holds no interest and redirects its behavior and movement toward a subject of interest; when the viewer's anger, fear, or disgust is high, the robot takes appropriate measures such as uttering comforting words or changing the visual target.
  • The invention of claim 17 is the device of any of claims 1 to 3 and 7 to 9 in which the viewer is a criminal suspect and the specific object is a crime scene, a criminal tool, an accomplice, or a victim.
  • If the suspect's emotion value is low (no attention), the visual target can be judged irrelevant to the suspect.
  • A high emotion value (attention) alone does not establish relevance to the crime; however, when the emotion value is generally high and happiness (joy), surprise, anger, fear, or disgust is also high, some relevance to the suspect can be determined.
  • The invention of claim 18 is the device of any of claims 1 to 3 and 7 to 9 in which the viewer is the driver of a vehicle or aircraft. The diagnostic unit further determines whether attention is being paid by measuring viewpoint position and pupil diameter, and when a dangerous driving state is diagnosed, the device warns the viewer or provides appropriate operation assistance such as braking.
  • If the driver's emotion value is low, the driver is clearly inattentive or close to dozing, which constitutes a dangerous driving state.
  • If the emotion value is high (attention is paid) and the viewpoint is always at an appropriate position, the driving is good; a high happiness (joy) level further indicates a good state. High anger, fear, or disgust, other than when prompted by in-car conversation, indicates some obstacle to driving, for example high stress.
  • The invention of claim 19 is the device of any of claims 1 to 3 and 7 to 9 in which the viewer is a worker and the target is a machine operation simulator or an actual machine. The analysis unit analyzes the response delay time and the operation method from the response data produced by the viewer's operation of a response device, and the diagnostic unit diagnoses the viewer's degree of expertise or appropriateness.
  • The invention of claim 20 is the device of any of claims 1 to 3 and 7 to 9 in which the viewer is a worker before starting work and the target is operation-video content containing dangerous parts, or a simulator; the diagnostic unit diagnoses the viewer's physical condition to determine fitness for work.
  • If the viewer's emotion value is low, the viewer is clearly inattentive or close to dozing, for example distracted by other matters.
  • If the emotion value is high (attention is paid) and the gaze always stays on the correct target in the video, the state is good; a high happiness (joy) level further confirms this. High anger, fear, or disgust suggests some impediment (such as the viewer's physical condition, or matters of home or friends).
  • The invention of claim 21 is the device of any of claims 1 to 3 and 7 to 9 in which the viewer is an athlete and the object is what the athlete must watch while competing (for a baseball batter, the pitcher's movements and the ball's trajectory; for a tennis player, the opponent's movements and the ball's trajectory).
  • The diagnostic unit diagnoses the athlete's condition, proficiency, or appropriateness. When the gaze does not go to the target that should be watched and the emotion value is low (not focused), the athlete can clearly be judged unskilled.
  • In a further aspect, the viewer feeling determination device of the present invention is incorporated into a game machine, the viewer is a game player, and the viewing target is the game content the player sees.
  • The game machine includes a selection unit and an execution function that choose an appropriate game screen, sound, and game program based on the player's diagnosis results from the diagnostic unit; for example, it responds by changing the viewing target (the game screen, sound, and game program).
  • The invention according to claim 23 is the device of any of claims 1 to 3 and 7 to 9 in which the specific objects are arranged in a space, and the diagnostic unit diagnoses the appropriateness of their arrangement as seen by the viewer. If the viewer's emotion value is low (no interest, low attention), the shelf and the items on it are clearly not being noticed. A high emotion value (attention) alone does not indicate strong purchase interest; generally, however, high happiness (joy) and surprise indicate a favorable arrangement, while high sadness, anger, fear, or disgust indicates an element leading to dissatisfaction.
  • In a further aspect of the device of any of the first to third and seventh to ninth aspects, the viewer is a visitor and the visual targets are store clerks, receptionists, and the like; the device diagnoses the visitor's feeling during the reception and determines whether the reception is good or bad. If the visitor's emotion value is low (no attention), the visitor perceives no particular problem with the other party and is responding normally. If it is high (attention), the reception can generally be judged good when happiness (joy) is high, and unfavorable when anger, fear, or disgust is high, except for synchronous reactions to the conversation content.
  • in another aspect, the viewer is a general TV viewer (receiver of video content), the viewing target is content on a TV display connected to a broadcast, a communication network, or a video storage device, and the viewer feeling determination apparatus for a visual scene recommends content of high interest to the viewer when the viewer watches television.
  • a twenty-sixth aspect of the present invention is the viewer feeling determination apparatus for a visual recognition scene according to any one of the first to third and seventh to ninth aspects, wherein the viewer is a general television viewer and the viewing target is a television receiver (video content receiver) that receives via a network or radio waves. While a program is watched, the object or person at the viewpoint in the receiver is automatically marked at moments of high sensitivity, and information on the marked object is later distributed to the viewer via e-mail; the viewer can thereafter purchase products or contact people at his or her own discretion. However, when automatically marking an object or person at the viewpoint, items with high disgust or anger levels are automatically excluded even if the emotion (interest level) is high.
  • the capabilities of the target object and of the viewer himself/herself are evaluated by combining the change in physiological response data, the accompanying velocity and acceleration, and facial expression analysis based on the change in each part of the face.
  • the physiological reaction of the viewer can be grasped more accurately, and the target object and the viewer's own ability can be more accurately evaluated in various situations.
  • FIG. 5 is a flowchart illustrating a method for calculating viewpoint tracking time according to an embodiment of the present invention.
  • Figure of facial part measurement by the face analysis technique for emotion determination of the present invention
  • Example of the average attention level distribution of viewers for specific video content (minimum attention level 0, maximum attention level 100)
  • Average attention concentration (%) of viewers for specific video content
  • FIG. 1 is a block diagram showing a configuration of a viewer feeling determination system according to an embodiment of the present invention.
  • This viewer feeling determination system includes an eye camera 3, a transmission device 7, a storage device 8, a body motion photographing device 9 that captures the viewer including the face, a body temperature measuring device 10, a brightness measuring device 11 (a luminance measuring device when the visual target is a video display, or an illuminance measuring device for other visual targets), a response device 14, and the viewer feeling determination device 100.
  • the viewer 1 wears the eye camera 3 on the head and directs his/her line of sight to the viewing scene including the object 2.
  • when the visual target is content on a video display, a device corresponding to the eye camera may be fixed near the display.
  • the visual recognition scene may be anything as long as it is visible to the viewer 1, for example, video data displayed on a display of a computer, a mobile phone, a television, etc., in addition to a store, a plaza, a human being, and a townscape. Web page data, computer game data, data output by a computer program, and the like are also included.
  • the visual scene may include response prompting information, such as a character string, figure, symbol, picture, photo, or video, for causing the viewer 1 to react, as well as reaction target information of the same kinds to which the viewer 1 reacts.
  • the eye camera 3 includes an eyeball photographing device 4, a visual scene photographing device 5, and a microphone 6.
  • the eye camera 3 is connected to the storage device 8 and the transmission device 7, and the viewer 1 also wears the storage device 8 and the transmission device 7.
  • when the visual recognition target is a display such as that of a computer, mobile phone, or television, the eye camera may be installed at the side of the display instead of being mounted on the head.
  • the eyeball photographing device 4 photographs the eyeball of the viewer 1.
  • the eye movement image b photographed by the eyeball photographing device 4 is accumulated in the accumulating device 8.
  • the eye movement video b may be accumulated as a moving image or may be accumulated as a still image periodically shot.
  • the visual scene photographing device 5 photographs the visual scene including the object 2 to which the viewer 1 is looking.
  • the visual image a photographed by the visual scene photographing device 5 is stored in the storage device 8. Note that the visual image a may be accumulated as a moving image or may be accumulated as a still image that is periodically taken.
  • the brightness measuring device 11 has a function of measuring the brightness g of the visual recognition target.
  • when the visual recognition target is a video display, the luminance of the video display is measured; for other visual targets, the illuminance indicating the brightness of the viewing range is measured instead of the luminance.
  • the brightness data g is stored in the storage device 8.
  • the microphone 6 captures the viewer 1 and surrounding sounds.
  • the voice data c captured by the microphone 6 is stored in the storage device 8.
  • the response device 14 acquires a response signal by the operation of the viewer 1.
  • operations of the viewer 1 include, for example, a button press, a keyboard operation, a mouse operation, a touch panel operation, a remote control operation, operation of a controller attached to a game device, operation of a machine, raising a hand, voice, and other body actions.
  • the response data d acquired by the response device 14 is stored in the storage device 8.
  • the storage device 8 stores the visual image a from the visual scene photographing device 5, the eye movement video b from the eyeball photographing device 4, the audio data c from the microphone 6, and the response data d from the response device 14.
  • the body motion image e including the viewer's face, the body temperature data f, and the brightness data g are stored in time series as synchronized data.
  • the transmission device 7 transmits the visual image a, eye movement video b, audio data c, and response data d accumulated by the accumulation device 8, together with the viewer motion video e including the video of the viewer's face, the body temperature data f, and the brightness data g, to the viewer feeling determination apparatus 100 wirelessly or by wire. The transmission apparatus 7 may transmit each data to the viewer feeling determination apparatus 100 at predetermined time intervals, or each data may be transmitted to the viewer feeling determination apparatus 100 according to an instruction from it.
  • the body motion photographing device 9 photographs the motion of each part of the body including the face of the viewer 1. Moreover, the body motion imaging device 9 transmits the captured body motion video e to the viewer feeling determination device 100 by wireless or wired.
  • a face photographing device for photographing only the viewer's face may be prepared separately; for example, it can be attached to the tip of the brim of a hat-type eye camera to photograph the face.
  • the body temperature measuring device 10 measures the temperature of the body of the viewer 1. In addition, the body temperature measurement device 10 transmits the body temperature data f to the viewer feeling determination device 100 by wireless or wired. In FIG. 1, the temperature of the body of the viewer 1 is remotely measured by the body temperature measuring device 10, but may be directly measured by a sensor attached to the viewer 1. In this case, the measured body temperature data f is stored in the storage device 8 and transmitted to the viewer feeling determination device 100 by the transmission device 7.
  • the imaging frequency of the video frames in the eyeball photographing device 4, the visual scene photographing device 5, and the body motion photographing device 9 is high, 240 Hz or more. It is also desirable that these imaging frequencies be the same.
  • the eye camera 3 and the response device 14 are connected to the storage device 8 and transmit each data to it by wire, but each data may instead be transmitted to the storage device 8 wirelessly. Furthermore, in the wireless case, the visual scene photographing device 5, the eyeball photographing device 4, the brightness measuring device 11, the microphone 6, and the response device 14 may transmit each data directly to the viewer feeling determination device 100.
  • FIG. 2 is a block diagram illustrating a configuration of the viewer feeling determination apparatus 100 according to the embodiment of the present invention.
  • the viewer feeling determination apparatus 100 includes a receiving unit 101, an analysis unit 102, a storage unit 103, a diagnosis unit 104, and a display unit 105.
  • the receiving unit 101 receives a visual image a, an eye movement video b, audio data c, brightness data g, and response data d from the transmission device 7. In addition, the receiving unit 101 receives the body motion image e from the body motion imaging device 9. Further, the receiving unit 101 receives body temperature data f from the body temperature measuring device 10. Then, the receiving unit 101 synchronizes all received data and outputs each data to the analyzing unit 102 at predetermined time intervals or according to an instruction from the analyzing unit 102.
  • synchronization may be achieved by either of two methods: including a clock time in each data as a synchronization signal, or distributing a synchronization signal from the viewer feeling determination device 100 to the transmission device 7, the body motion imaging device 9, and the body temperature measurement device 10 and including it as synchronization-signal data in the visual image a, eye movement image b, audio data c, and response data d from the transmission device 7, the body motion image e from the body motion photographing device 9, the body temperature data f from the body temperature measuring device 10, and the brightness data g from the measuring device 11. Thereby, all data can be processed synchronously.
  • Analyzing unit 102 receives visual image a, eye movement video b, audio data c, response data d, body motion video e, body temperature from receiving unit 101 at predetermined time intervals or by instructing receiving unit 101. Data f and brightness data g are acquired. Then, the analysis unit 102 attaches synchronization signal data to each of the data a to g and accumulates them in the accumulation unit 103. Further, the analysis unit 102 reads the data ag stored in the storage unit 103 as necessary, and uses the read data ag for analysis. The analysis unit 102 calculates the position of the viewpoint in the viewing image a from the viewing image a and the eye movement image b, and calculates the change in the physiological response data of the viewer and the accompanying acceleration. Details will be described later.
  • FIG. 3 is a diagram for explaining the calculation of the position of the viewpoint using the visual image a and the eye movement image b.
  • the analysis unit 102 obtains a line of sight from the eye movement image b, and calculates the position of the viewpoint by combining the line of sight with the visual image a.
  • there are various gaze calculation methods. For example, in the corneal reflection method, a near-infrared point light source is directed at the eyeball and the reflection image on the cornea surface (hereinafter referred to as the Purkinje image) is used.
  • line-of-sight measurement by the corneal reflection method is roughly divided into two types: a method that obtains the line of sight from the distance between the pupil center and the Purkinje image, and a method that treats the line connecting the corneal curvature center obtained from the Purkinje image and the pupil center as a virtual line of sight and obtains the line of sight by error correction.
  • FIG. 4 is a diagram for explaining the relationship between the movement of the viewer 1's head (including the movement of the face) and the visual image a.
  • the movement of the head can be measured only with the visual image a captured by the visual scene photographing device 5 worn by the viewer 1 on the head. However, since the measurement is based on the visual image a, it is two-dimensional.
  • the viewing angle (horizontal ⁇ h, vertical ⁇ v) of the visual image a is set in advance by the visual scene photographing device 5.
  • the analysis unit 102 can also measure the head movement using the body motion image e.
  • the head motion is measured by identifying the head from the body motion image e by image processing and tracking the head.
  • FIG. 5 is an example of coordinate axes when the visual image a is analyzed.
  • FIG. 6 is a diagram illustrating the movement of the head when the viewer 1 moves in the horizontal direction.
  • the entire field of view around the head of the viewer 1 is compared to a spherical surface, and the movement of the head is represented by the horizontal direction h and the vertical direction by v.
  • the coordinate axes shown in FIG. 5 may be represented by only numbers (h, v), or by radians (rad) or angles (°).
  • each full turn may be represented by 1 (−0.5 to +0.5), 2π rad (−π to +π), or 360° (−180° to +180°).
  • with the first frame screen denoted F0, the next frame screen, 1/S seconds later, is denoted F0+1 (S being the frame rate).
  • FIG. 7 is a diagram illustrating the movement of the base point of the stationary object when the head moves from the frame screen F 0 of the visual image a to the next frame screen F 0 +1.
  • the visual image a is represented by coordinates (x, y) with the lower left corner as the origin, and the base point of the non-moving object (stationary object) “A” in the first frame screen F0 is (x0, y0).
  • the stationary object “A” and its point (coordinates) may be determined by an operator who operates the viewer feeling determination apparatus 100 by looking at the visual image “a”, or may be determined in advance.
  • the feature of the still object “A” may be captured by the visual image a and automatically tracked. Note that image processing for identifying and tracking the stationary object “A” is a known technique, and thus detailed description thereof is omitted here.
  • the analysis unit 102 calculates the following data.
  • the analysis unit 102 can calculate the head movement speed as qhv ≈ d(phv)/dt; the head movement speed qhv0 in the frame screen F0 is calculated as qhv0 ≈ d(phv)/dt ≈ (phv1 − phv0)/(1/S), where t represents time. Likewise, the head movement acceleration is khv ≈ d(qhv)/dt, and the head movement acceleration khv0 in the frame screen F0 is calculated as khv0 ≈ d(qhv)/dt ≈ (qhv1 − qhv0)/(1/S).
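The forward-difference estimates above can be sketched in code as follows. This is a minimal illustration, not code from the specification; the function names are hypothetical, and `S` is the frame rate so that 1/S is the frame period.

```python
# Hypothetical sketch of the forward-difference formulas:
# qhv_i ~ (phv_{i+1} - phv_i) / (1/S)  (speed)
# khv_i ~ (qhv_{i+1} - qhv_i) / (1/S)  (acceleration)
def movement_speeds(positions, S):
    """Per-frame speeds from per-frame positions at S frames/second."""
    return [(positions[i + 1] - positions[i]) * S
            for i in range(len(positions) - 1)]

def movement_accelerations(positions, S):
    """Per-frame accelerations, differencing the speeds once more."""
    q = movement_speeds(positions, S)
    return [(q[i + 1] - q[i]) * S for i in range(len(q) - 1)]
```

The same differencing applies to the viewpoint coordinates (pH, pV) later in the description.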
  • a gyroscope outputs angular velocity in radians or degrees per second; a general method is to calculate the movement angle by integrating the angular velocity over the elapsed time.
  • the angular movement speed of the head at frame L can be obtained by measuring the movement speed (qhv_L) at the time of frame L with a gyroscope.
  • Horizontal movement speed of the head at frame L: qh_L
  • Vertical movement speed of the head at frame L: qv_L
  • the horizontal movement distance of the head up to N frames is the sum of qh_L · (1/S) over frames L = 1 to N, and the vertical movement distance up to N frames is the sum of qv_L · (1/S) over frames L = 1 to N.
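The integration of gyroscope angular velocities over frames can be sketched as below; a minimal sketch assuming per-frame velocities qh_L and qv_L (degrees/second) sampled at S frames per second, with hypothetical function names.

```python
# Hypothetical sketch: head movement angle = sum of per-frame angular
# velocity times the frame period (1/S), horizontally and vertically.
def integrate_head_movement(qh, qv, S):
    dt = 1.0 / S                       # duration of one frame in seconds
    ph = sum(q * dt for q in qh)       # horizontal movement angle up to N frames
    pv = sum(q * dt for q in qv)       # vertical movement angle up to N frames
    return ph, pv
```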
  • FIG. 8 is a diagram illustrating movement of the viewpoint B in the visual image a.
  • the position of the viewpoint in the visual image a on the frame screen F 0 is displayed as (H0, V0). Further, the position of the viewpoint on the next frame screen F 0 +1 is displayed as (H1, V1).
  • the horizontal speed qH0 of the viewpoint in the visual image a on the frame screen F0 is calculated as qH0 ≈ d(pH)/dt ≈ (pH1 − pH0)/(1/S).
  • the vertical speed qV0 is calculated as qV0 ≈ d(pV)/dt ≈ (pV1 − pV0)/(1/S).
  • the horizontal acceleration kH0 of the viewpoint in the visual image a on the frame screen F0 is calculated as kH0 ≈ d(qH)/dt ≈ (qH1 − qH0)/(1/S).
  • the vertical acceleration kV0 is calculated as kV0 ≈ d(qV)/dt ≈ (qV1 − qV0)/(1/S).
  • FIG. 9 is a diagram illustrating movement of the viewpoint when moving from the frame F 0 to the frame F 0 +1.
  • the movement acceleration of the viewpoint including the head in the frame F 0 is calculated as follows.
  • the analysis unit 102 calculates the movement of the object in the visual image a by the same method as the movement of the viewpoint.
  • FIG. 10 is a diagram illustrating a positional relationship between the object and the viewpoint in the visual image a.
  • when the viewpoint B is included in the image of the object C in the visual image a, it is determined that the viewer's viewpoint is tracking the target object.
  • the contour of the object can be specified by image processing and can be handled by a known technique, and thus detailed description thereof is omitted here.
  • the analysis unit 102 determines the distance between the position of the viewpoint B (H, V) and the position of the object C (X, Y), the approach speed (Bv) when the viewpoint approaches the object, and the approach acceleration (Bk). ), A separation speed (Bs) and a separation acceleration (Br) when the viewpoint is separated from the object are calculated.
  • when the target object C leaves the visual image a, or when the viewpoint B separates from the target object C and a predetermined time elapses, it is clearly determined that the viewpoint B has left the target object.
  • FIG. 11 is a flowchart illustrating a method of calculating the approach speed (Bv), the approach acceleration (Bk), the separation speed (Bs), and the separation acceleration (Br).
  • a method of calculating the approach speed (Bv), the approach acceleration (Bk), the separation speed (Bs), and the separation acceleration (Br) will be described with reference to FIG.
  • the analysis unit 102 determines whether or not the viewpoint is within the range of the object. Whether or not the object is within the range is obtained from the distance between the position of the viewpoint B (H, V) and the position of the object C (X, Y).
  • when the position (H, V) of the viewpoint B is not included in the range of the position (X, Y) of the object C (step 101: N), that is, when the viewpoint B exists outside the object C, the following processes A to C are performed.
  • in step 103, when U1 − U0 < 0, it is determined that the viewpoint B is approaching the object C; the process proceeds to step 104, and the approach speed and acceleration are calculated.
  • the approach speed (Bv) of the viewpoint B to the object C is as follows.
  • Bv0 = −(U1 − U0)/(1/S)
  • the approach acceleration (Bk) of the viewpoint B to the object C is as follows: Bk0 = (Bv1 − Bv0)/(1/S)
  • in step 103, when U1 − U0 > 0, it is determined that the viewpoint B is separating from the object C; the process proceeds to step 105, and the separation speed and acceleration are calculated.
  • the departure speed (Bs) of the viewpoint B from the object C is as follows.
  • Bs0 = (U1 − U0)/(1/S)
  • the separation acceleration (Br) of the viewpoint B from the object C is as follows.
  • Br0 = (Bs1 − Bs0)/(1/S)
  • thereafter, the process returns to step 101.
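The approach/departure classification of FIG. 11 can be sketched as below; a minimal sketch with hypothetical names, using the distance U between the viewpoint (H, V) and the object position (X, Y) at two consecutive frames sampled at S frames per second.

```python
import math

# Hypothetical sketch of the FIG. 11 flow: compute the distance U between
# viewpoint and object at frames F0 and F0+1, then classify the step as
# approach (U1 - U0 < 0, speed Bv) or departure (U1 - U0 > 0, speed Bs).
def classify_step(viewpoint0, viewpoint1, obj0, obj1, S):
    U0 = math.dist(viewpoint0, obj0)
    U1 = math.dist(viewpoint1, obj1)
    if U1 - U0 < 0:                        # viewpoint approaching the object
        return "approach", -(U1 - U0) * S  # Bv = -(U1 - U0) / (1/S)
    if U1 - U0 > 0:                        # viewpoint leaving the object
        return "departure", (U1 - U0) * S  # Bs = (U1 - U0) / (1/S)
    return "unchanged", 0.0
```

Differencing consecutive Bv (or Bs) values in the same way yields the approach acceleration Bk (or separation acceleration Br).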
  • FIG. 12 is a flowchart illustrating a method for calculating the viewpoint tracking time.
  • the viewpoint tracking time calculation method shown in FIG. 12 corresponds to step 102 shown in FIG.
  • the analysis unit 102 determines whether or not the viewpoint is within the range of the target object.
  • the method for determining whether or not the viewpoint is within the range of the object is the same as the method described for FIG. 11. If the viewpoint is within the range of the object, the process proceeds to step 202 and the time is counted.
  • in step 203, it is determined whether or not the viewpoint is still within the range of the object.
  • if the viewpoint is within the range, the process returns to step 202 and the time count continues. On the other hand, if the viewpoint is no longer within the range of the object, the process proceeds to step 204, the counted time is set as the viewpoint tracking time, and the process returns to step 101 in FIG. 11.
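The counting loop of FIG. 12 can be sketched as below. This is a simplified, hypothetical sketch: it reports the longest contiguous run of in-range frames, converted to seconds at S frames per second, rather than reproducing the exact step numbering.

```python
# Hypothetical sketch of the FIG. 12 flow: count consecutive frames in
# which the viewpoint lies inside the object's range, and convert the
# longest run to a tracking time in seconds.
def viewpoint_tracking_time(inside_flags, S):
    """inside_flags: per-frame booleans (viewpoint within object range)."""
    best = run = 0
    for inside in inside_flags:
        run = run + 1 if inside else 0   # reset the count on exit
        best = max(best, run)
    return best / S                      # longest tracking time in seconds
```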
  • the pupil diameter is affected by the brightness (luminance, illuminance, etc.) of the visual target. Therefore, it is necessary to erase the pupil diameter change portion corresponding to the brightness (luminance, illuminance, etc.) of the visual recognition target.
  • when the viewing target is a “video display”: the display brightness is slowly changed from lowest (black screen) to highest (white screen), back from highest to lowest, and again from lowest to highest.
  • the viewer's pupil diameter during the last sweep is measured. This is referred to as the “basic luminance and pupil diameter relationship data”.
  • the pupil diameter measured during the last change in brightness is adopted because, at the beginning, the viewer is interested in the change itself and pays attention to it, which would distort the measurement.
  • it is assumed that the display used for measuring the actual content is adjusted to the same brightness and hue, in the same room environment, as the display used to measure the “basic luminance and pupil diameter relationship data”.
  • the luminance (Yt) of the display is simultaneously measured using a luminance meter or the like.
  • from the pupil diameter (Pt) measured when the object 2 (in this case, the video display) is visually recognized, the analysis unit 102 obtains the pupil diameter (Pbt) given by the “basic luminance and pupil diameter relationship data” for the luminance (Yt) at that moment.
  • by subtracting the pupil diameter (Pbt), the pupil-diameter component due to the brightness of the display is erased, and the pupil diameter (Pmt = Pt − Pbt) corresponding only to the viewer's degree of attention to the object at that moment (t) is obtained.
  • when Pmt < 0, the degree of attention is lower than in normal times, meaning that the viewer is in a bored state; when the pupil diameter is very small, Pmt ≪ 0.
  • the same applies to the attention degree It: when It < 0, the degree of attention is lower than usual and the viewer is bored; when It > 1, the viewer is in a state of extreme attention.
  • the attention degree in the normal state is denoted Ih.
  • the luminance (Yt) and the viewer's pupil diameter (Pt) are simultaneously measured over time for a display of content that changes with time, and the attention degree is calculated and analyzed from them.
  • alternatively, the luminance (Yt) of the content display is first measured over time, and the viewer's pupil diameter (Pt) alone is then measured over time for the same content; the attention level (It) may be calculated by matching the luminance (Yt) of the video content to the pupil diameter (Pt) at each time (t).
  • for visual scenes other than a video display, illuminance is applied instead of luminance.
  • an illuminometer is used instead of the luminance meter, and the “basic illuminance and pupil diameter relationship data” corresponding to “basic luminance and pupil diameter relationship data” provides a room where the basic illuminance can be changed. It can be obtained by measuring the pupil diameter.
  • the illuminance meter is attached to the viewer's head along with the visual field scene photographing device and the eyeball photographing device to measure the illuminance of the object.
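The baseline-subtraction step described above can be sketched as below. This is a minimal sketch under assumptions: the “basic luminance and pupil diameter relationship data” is represented as a simple lookup table (real data would be interpolated over luminance), and the table values are illustrative only.

```python
# Hypothetical sketch: remove the brightness-driven component of the
# pupil diameter. `baseline` maps luminance Yt -> expected pupil diameter
# Pbt; Pmt = Pt - Pbt keeps only the attention-related component.
def attention_pupil_component(Pt, Yt, baseline):
    Pbt = baseline[Yt]       # pupil diameter expected from brightness alone
    return Pt - Pbt          # Pmt > 0: more attention than in normal times

baseline_table = {100: 4.0, 200: 3.2}   # illustrative values only (mm)
```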
  • when the object to be viewed is a video display and the brightness at the viewpoint is not negligible compared with the brightness of the entire display (content with sharp contrast within the screen, or a room environment that is not bright), the luminance at the viewpoint also affects the pupil diameter, so the viewer's pupil diameter change corresponding to the viewpoint luminance must also be eliminated.
  • the luminance Yj of the entire screen at the time of the scene of the frame j can be measured by the previous item “0070”.
  • the degree to which red (R), green (G), and blue (B) of each display pixel influence luminance varies by manufacturer: the degree of influence of red (R) on luminance is (Cr), that of green (G) is (Cg), and that of blue (B) is (Cb).
  • the intensities of R, G, and B of the pixel (x) at the time of frame j are denoted as the red (R) intensity (Srjx), the green (G) intensity (Sgjx), and the blue (B) intensity (Sbjx).
  • the luminance of each pixel may be obtained by the measuring device.
  • the total luminance of pixels within a diameter of 17 cm centered on a certain viewpoint pixel x is Ydx.
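The per-pixel luminance and the viewpoint-region total Ydx can be sketched as below. The weights Cr, Cg, Cb are vendor-specific per the text; the ITU-R BT.709 luma coefficients are used here purely as illustrative values, not as values from the specification.

```python
# Hypothetical sketch: pixel luminance as a weighted sum of R, G, B
# intensities, and the total luminance of the pixels inside the
# viewpoint-centred region (the 17 cm diameter circle in the text).
CR, CG, CB = 0.2126, 0.7152, 0.0722   # illustrative BT.709 weights

def pixel_luminance(Sr, Sg, Sb, cr=CR, cg=CG, cb=CB):
    return cr * Sr + cg * Sg + cb * Sb

def region_luminance(pixels, cr=CR, cg=CG, cb=CB):
    """pixels: (Sr, Sg, Sb) triples inside the viewpoint-centred region."""
    return sum(pixel_luminance(r, g, b, cr, cg, cb) for r, g, b in pixels)
```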
  • the brightness Yd of the entire display is changed step by step. For each stage of the overall display brightness, the luminance (Ydx) of the pixels within a diameter of 17 cm centered on pixel x is slowly changed from maximum to minimum, from minimum to maximum, and again from maximum to minimum, three changes in total.
  • This table is referred to as “basic relationship data of display luminance, viewpoint luminance, and pupil diameter”.
  • the viewer's pupil diameter (Pg) at the time of actual content viewing is measured, and the brightness (Yg) of the display at that time and the brightness (Ygx) with a diameter of 17 cm centered on the viewpoint x are measured and analyzed.
  • the pupil diameter (Pcx) corresponding to the degree of attention at the viewing viewpoint including the influence of the luminance of the display is obtained.
  • the luminance of each pixel may instead be obtained by the measuring device. Since Ijx corresponds to “It” of item “0070”, it is treated in the same manner. Hereafter, “It” therefore means: (1) when a display is the visual target, the attention degree based on the overall display luminance of item “0070” (It), or Ijx when the viewpoint luminance of item “0071” is adopted; and (2) for visual targets other than a display, the corresponding value with illuminance applied instead of luminance.
  • the analysis unit 102 analyzes the pupil diameter (P) and the number of blinks from the eye movement image b. Further, the analysis unit 102 calculates the attention level (It), the attention level expansion speed (Its), and the attention level expansion acceleration (Itk) from the pupil diameter (P) data. When Its and Itk are negative (−), they indicate the attention level reduction speed and the attention level reduction acceleration, respectively.
  • the analysis unit 102 analyzes various values and outputs them as the analysis value A to the storage unit 103 and the diagnosis unit 104 (see FIG. 2). The values included in the analysis value A are the attention level (It), the attention level expansion speed (Its), the attention level expansion acceleration (Itk), the viewpoint tracking time (Bt), the viewpoint approach speed (Bv), the viewpoint approach acceleration (Bk), the viewpoint departure speed (Bs), and the viewpoint departure acceleration (Br), as well as other emotion values and excitement levels.
  • the storage unit 103 accumulates the visual image a, eye movement image b, audio data c, response data d, body motion image e, body temperature data f, and brightness data g output from the analysis unit 102.
  • the storage unit 103 stores the analysis value A output from the analysis unit 102.
  • the storage unit 103 also stores in advance, as normal values H, the attention degree (Ih), attention expansion speed (Ihv), attention expansion acceleration (Ihk), viewpoint tracking time (Bht), viewpoint approach speed (Bhv), viewpoint approach acceleration (Bhk), viewpoint departure speed (Bhs), and viewpoint departure acceleration (Bhr) measured when the viewer 1 is watching an object 2 of low interest, that is, one that is mediocre and low in stimulus.
  • the accumulation unit 103 likewise stores in advance, as moving values K, the attention degree (Ie), attention expansion speed (Iev), attention expansion acceleration (Iek), viewpoint tracking time (Bet), viewpoint approach speed (Bev), viewpoint approach acceleration (Bek), viewpoint departure speed (Bes), and viewpoint departure acceleration (Ber) measured when the viewer 1 is looking at an object 2 of high interest.
  • the accumulation unit 103 accumulates the diagnosis result output from the diagnosis unit 104.
  • the diagnosis unit 104 diagnoses that the viewer is interested in the object 2 when the number of blinks within a predetermined time is less than a predetermined number, and likewise when the degree of attention is greater than a predetermined value. It also diagnoses interest when the expansion speed and expansion acceleration of the attention degree are larger than predetermined values; conversely, when the attention degree is smaller than a predetermined value, or when its reduction speed and reduction acceleration are larger than predetermined values, it diagnoses that the viewer is not interested in the object 2.
  • when the viewpoint approach speed and viewpoint approach acceleration are greater than predetermined values, the diagnosis unit 104 diagnoses interest in the object 2; conversely, when they are below the predetermined values, it diagnoses a lack of interest. If the viewpoint departure speed and viewpoint departure acceleration are larger than predetermined values, it diagnoses a lack of interest in, or disgust toward, the object 2; conversely, when they are below the predetermined values, it diagnoses interest in the object 2.
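The threshold rules above can be sketched as a simple classifier. This is a hypothetical sketch: the function name and all threshold parameters are illustrative, not values from the specification.

```python
# Hypothetical sketch of the diagnosis rules: few blinks or high attention
# (or fast viewpoint approach) -> interest; fast viewpoint departure ->
# lack of interest or disgust. All thresholds are illustrative defaults.
def diagnose_interest(blinks, attention, approach_speed, depart_speed,
                      blink_max=10, attention_min=0.5,
                      approach_min=1.0, depart_max=1.0):
    if blinks < blink_max and attention > attention_min:
        return "interested"
    if approach_speed > approach_min:
        return "interested"
    if depart_speed > depart_max:
        return "not interested or disgust"
    return "not interested"
```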
  • the diagnosis unit 104 calculates the viewpoint movement speed and acceleration by combining the approach speed and approach acceleration of the head direction with respect to the object 2 with the viewpoint approach speed and viewpoint approach acceleration, and performs a determination similar to that of item “0075”. If the viewer 1 is interested, the head also keeps facing the object so that the viewpoint can track it. The head may remain directed toward the object even without interest, but in that case the degree of attention decreases. When the head keeps moving away from the object 2 and the attention level decreases, a lack of interest is diagnosed.
  • when the viewpoint tracking time is long, the diagnosis unit 104 diagnoses that the viewer 1 is interested in the object 2. In this case, a moving object 2 is tracked for as long as it is visible, whereas a stationary object 2 is tracked for a time longer than the normal value (Bht).
  • when the viewpoint once tracks within the range of the object 2 but only for a short time (less than Bht), and the degree of attention does not expand to the moving value (Ie), the diagnosis unit 104 diagnoses that the viewer 1 is not interested in the object 2.
  • the diagnostic unit 104 diagnoses disgust when the viewer 1 once looks at the object 2 and the degree of attention expands, but the viewpoint then leaves the object 2 within a short, predetermined time.
  • the diagnosis unit 104 diagnoses that the viewer 1 is excited about the object 2 when the body temperature data f shows a rise compared with the body temperature measured earlier on the same day. Further, the diagnosis unit 104 evaluates the capabilities of the object 2 and the viewer 1 based on the number of responses, the speed of the response operation, and its accuracy, using the response data d produced by operating the response device 14. A specific method of using the response data d will be described in detail later. The diagnosis unit 104 outputs the diagnosis result to the display unit 105 and the storage unit 103.
  • FIG. 13 shows the viewer face image data e from the viewer motion photographing device 9. Characteristic points of each part of the viewer's face (eyes, lips, eyebrows, etc.) are marked by a face analysis method to analyze the facial expression. Only some of these feature points are shown in the figure for explanation; in practice they follow an actual face analysis method. Typical emotion judgments are surprise, fear, disgust, anger, joy (happiness), and sadness. The maximum displacement of each facial part and the viewer's normal movement are measured and stored in advance, and the degree of emotion (F) is determined from the ratio of the displacement at each measurement to that maximum. Together with the emotion value (E) of the degree of attention analyzed from the change in pupil diameter or blink frequency according to claims 1 to 4, the viewer's emotion (EF value) for the visual target can thus be determined.
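The ratio-to-maximum computation of the emotion degree (F) and its pairing with the attention-based emotion value (E) can be sketched as below. This is an illustrative sketch; the patent does not specify how per-feature ratios are aggregated, so the averaging rule and all names here are assumptions.

```python
def emotion_degree(feature_change, feature_max):
    """Degree of a facial emotion (F): ratio of each facial feature
    point's measured displacement to its pre-measured maximum,
    averaged over features (the averaging rule is an assumption)."""
    ratios = [c / m for c, m in zip(feature_change, feature_max) if m > 0]
    return sum(ratios) / len(ratios)

def ef_value(E, F):
    # The patent combines the attention-based emotion value E (from pupil
    # diameter / blink frequency) with the face-based degree F into an
    # "EF value"; a simple pairing is shown here as an assumption.
    return {"E": E, "F": F}
```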
  • the display unit 105 displays the diagnosis result of the diagnosis unit 104 on an output device such as a display. From the displayed diagnosis result, the operator can evaluate the capabilities of the object 2 and of the viewer 1.
  • the object 2 or the viewer 1 himself/herself is evaluated using the changes in physiological response data, the accompanying accelerations, and the changes in the viewer's face.
  • the changes in physiological response data include the degree of attention (It), its expansion acceleration (Itk), the viewpoint approach acceleration (Bk), and the viewpoint departure acceleration (Br).
  • It: degree of attention
  • Its, Itk: expansion speed and acceleration of the degree of attention
  • Bv, Bk: viewpoint approach speed and acceleration
  • Bs, Br: viewpoint departure speed and acceleration
  • it is diagnosed that the viewer is not interested in the object 2 as the negative acceleration of the degree of attention (-Itk) increases or as the viewpoint departure acceleration (Br) increases.
  • the change of the emotion of the viewer is determined by analyzing the partial change of the face.
  • the physiological reaction of the viewer can thus be grasped more accurately, and the capability of the object or of the viewer himself/herself can be evaluated more accurately in various situations.
  • the application scene of the viewer feeling determination apparatus 100 according to the embodiment of the present invention will be described in detail later.
  • the body motion imaging device 9 and the body temperature measurement device 10 transmit data directly to the viewer feeling determination device 100; alternatively, the data may first be stored in the storage device 8 and then transmitted from the transmission device 7 to the viewer feeling determination device 100 together with the visual image a and the like.
  • the eye camera 3 is configured to include the eyeball photographing device 4, the visual scene photographing device 5, and the microphone 6. However, these three devices need not be physically connected; each may be provided as a separate unit.
  • the corneal reflection method is used to calculate the position of the viewpoint.
  • the method of calculating the position of the viewpoint is not limited to this, and various known methods can be used.
  • the viewer feeling determination apparatus 100 is configured as a computer including a CPU, a volatile storage medium such as a RAM, a nonvolatile storage medium such as a ROM, an interface, and the like.
  • the functions of the receiving unit 101, the analyzing unit 102, the accumulating unit 103, the diagnosis unit 104, and the display unit 105 provided in the viewer feeling determination device 100 are realized by causing the CPU to execute a program describing these functions.
  • these programs can also be stored and distributed on a storage medium such as a magnetic disk (floppy (registered trademark) disk, hard disk, etc.), an optical disk (CD-ROM, DVD, etc.), or a semiconductor memory.
  • the viewing method may be a method of viewing an actual product or a product displayed on a display such as an advertisement, a television, a personal computer, or a mobile terminal (including a mobile phone).
  • the viewer feeling determination device 100 shown in FIGS. 1 and 2 is used.
  • the object 2 is an actual product or an image of the product.
  • when (a) the viewer looks at the product but the pupil does not expand to the emotion value (Pe) (indifference), or (b) the viewpoint departs quickly, the diagnosis unit 104 diagnoses that the viewer 1 has no willingness to purchase the product. Further, when the viewpoint departure speed (Bs) from the product and the initial departure acceleration (Br) are larger than the normal values (Bhs, Bhr), aversion can be determined through the aversion level (DI) obtained by analyzing the emotion value (F) with the face analysis method. In that case the behavior is clearly opposite to that seen when there is a willingness to purchase, so the degree of the diagnosis of “no purchase intention” can be measured.
  • when the degree of attention (It) expands and the viewpoint tracking time (Bt) on the product exceeds the normal value (Bht) and reaches the impression value (Bet), the viewer can be diagnosed as interested, but not yet as willing to purchase. That is, without interest there is no willingness to purchase, but there are cases of interest without willingness to purchase. The emotion determination is therefore combined with the face analysis method: in general, when the emotion value (F) shows high happiness (joy) (HP) and surprise (SP), a positive purchase attitude can be determined.
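The purchase-willingness rule combining attention, tracking time, and positive facial emotion can be sketched as follows. This is an illustrative sketch; the emotion threshold and the three output labels are assumptions, not values from the patent.

```python
def purchase_willingness(attention, Ie, tracking_time, Bht,
                         happiness, surprise, emotion_threshold=0.5):
    """Sketch of the purchase-intention rule: interest requires the
    attention degree to reach the impression value (Ie) and tracking
    to exceed the normal time (Bht); willingness additionally requires
    positive emotion (happiness HP or surprise SP) from face analysis."""
    interested = attention >= Ie and tracking_time > Bht
    positive = (happiness >= emotion_threshold
                or surprise >= emotion_threshold)
    if not interested:
        return "no purchase intention"
    return "likely purchase intention" if positive else "interested only"
```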
  • the learning motivation level is diagnosed by setting the learning object (teacher, textbook, teaching material, blackboard, etc.) as the object 2.
  • the viewer's feeling determination apparatus 100 shown in FIGS. 1 and 2 is used for the learning motivation degree diagnosis.
  • when (a) the degree of attention (It) has not expanded to the impression value (Ie) (indifference), (b) the viewpoint tracking time (Bt) is shorter than the normal value (Bht) (no interest), or (c) aversion is shown by the face analysis method, it can be determined that the learning object is disliked. From this, the diagnosis unit 104 diagnoses “no learning motivation”.
  • the diagnosis unit 104 diagnoses that motivation to learn is higher for that learning object than for the others. Conversely, when for a specific learning object (a) the degree of attention is small, (b) the viewpoint tracking time is short, and (c) the viewpoint departure speed and initial acceleration are large while the face analysis method shows no happiness (joy), positive responses are few, so the diagnosis unit 104 diagnoses that motivation for that learning object is low compared with the other learning objects.
  • the diagnosis of the viewer's emotional response against the response expected by the video content creator uses the viewer feeling determination apparatus 100 shown in FIGS. 1 and 2. It diagnoses to what extent the viewer's happiness (joy) level (HP), surprise level (SP), sadness level (SD), anger level (AG), fear level (FR), aversion level (DI), and excitement level react relative to the levels expected by the video content creator, and thereby evaluates and judges the video content.
  • the producer's expected emotional response is set in the content software program.
  • the emotion value (EF) expected by the creator for each scene (time) of the video content is set with the maximum emotion value (E) taken as 100 and the maximum expected emotion value (F) taken as 100, and the expected levels of the basic emotions, namely happiness (joy) (HP), surprise (SP), sadness (SD), anger (AG), fear (FR), and aversion (DI), and of excitement are specified.
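Comparing the creator's expected per-scene levels (scaled to a maximum of 100) with the measured viewer levels can be sketched as below. The patent does not give a scoring formula, so the mean-of-capped-ratios score here is purely an illustrative assumption.

```python
def content_reaction_score(expected, measured):
    """Compare creator-expected emotion levels (0-100 per scene) with
    measured viewer levels. Returns a 0-100 score: the mean ratio of
    measured to expected, capped at 1.0 per scene. Scenes with no
    expected reaction are skipped. The formula is an assumption."""
    ratios = [min(m / e, 1.0) for e, m in zip(expected, measured) if e > 0]
    return 100.0 * sum(ratios) / len(ratios)
```

A per-emotion evaluation would run this once for each of HP, SP, SD, AG, FR, DI, and excitement across the scene timeline.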
  • in CM viewer response evaluation, a CM with a high degree of attention to the appeal points in the appeal scene and with low aversion and low anger levels is determined to be a good CM.
  • the specific object is content such as a television screen, a specific screen of a video, a still image, a poster, or a page of a magazine, and the viewer is the content viewer.
  • the attention level distribution of the attention points of a plurality of viewers is displayed so that the content producer can judge the appeal points and their levels.
  • the average attention level based on the viewpoint positions and attention levels of multiple people is displayed on the screen.
  • the concentration of attention, normalized so that the whole screen equals 100, is displayed, and it is evaluated whether the portion intended by the content creator attracts attention.
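The normalization "when the whole is set as 100" can be sketched as a per-region share of the summed attention of all viewers. This is an illustrative sketch; the region partitioning and names are assumptions.

```python
def attention_concentration(attention_by_region):
    """Normalize per-region attention (already summed over viewers)
    so the whole screen totals 100, as the text describes.
    `attention_by_region` maps a region label to its raw attention."""
    total = sum(attention_by_region.values())
    return {region: 100.0 * value / total
            for region, value in attention_by_region.items()}
```

The producer would then compare the share of the intended appeal region against the rest of the screen.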
  • FIG. 16 is a block diagram showing a configuration of a viewer feeling determination system used for compatibility diagnosis
  • FIG. 17 is a block diagram showing a configuration of a viewer feeling determination device 100 used for compatibility diagnosis.
  • the object 2 is a face image displayed on the displays 22a and 22b.
  • the storage unit 103 of the viewer feeling determination apparatus 100 shown in FIG. 17 measures and stores the personal attribute data of the viewers (here, Mr. A and Mr. B) in advance.
  • the personal attribute data includes, in addition to the age, sex, birthplace, etc. of the viewer (Mr. A, Mr. B, etc.), the normal values measured when viewing a mediocre, low-stimulation object (“degree of attention”, “attention expansion speed”, “attention expansion acceleration”, “viewpoint tracking time”, “viewpoint approach speed”, “viewpoint approach acceleration”, “viewpoint departure speed”, “viewpoint departure acceleration”).
  • it also includes the impression values (maximum values) of “degree of attention”, “attention expansion speed”, “attention expansion acceleration”, “viewpoint position”, “viewpoint tracking time”, “viewpoint approach speed”, “viewpoint approach acceleration”, “viewpoint departure speed”, and “viewpoint departure acceleration”.
  • the storage unit 103 also accumulates average values of the normal values, impression values, and the like for each demographic layer, such as age, sex, and occupation.
  • the video phones 20a and 20b include face cameras 21a and 21b, displays 22a and 22b, eyeball cameras 23a and 23b, and tracking cameras 24a and 24b, respectively.
  • the face cameras 21a and 21b capture the caller's face.
  • the displays 22a and 22b display communication partners.
  • the eyeball cameras 23a and 23b capture the caller's eyeball.
  • the tracking cameras 24a and 24b are provided on both sides of the eyeball cameras 23a and 23b, and measure the position of the caller's eyeball using the principle of triangulation.
  • the eyeball cameras 23a and 23b track the caller's eyeballs according to the measurement results of the tracking cameras 24a and 24b.
  • alternatively, the eyeball cameras 23a and 23b may be omitted and the face cameras 21a and 21b made high-resolution, with the analysis unit 102 in the viewer feeling determination apparatus 100 enlarging the caller's face video and performing image analysis to capture the caller's eyeball image. Other means for measuring the distance from the videophones 20a and 20b to the caller's face may be used instead of the tracking cameras 24a and 24b. In this way, since the eyeball is captured and the distance is known, the pupil diameter can be measured.
  • the face camera 21a is used to photograph Mr. A's face
  • the eyeball camera 23a is used to photograph Mr. A's eyeball.
  • the video phone 20a transmits Mr. A's face data and Mr. A's eyeball data (gaze direction, pupil image, distance from the camera to the eyeball) to the viewer feeling determination apparatus 100 via the communication line 30.
  • Mr. B's face is photographed by the face camera 21b
  • Mr. B's eyeball is photographed by the eyeball camera 23b.
  • the video phone 20b transmits Mr. B's face data and Mr. B's eyeball data (line-of-sight direction, pupil image, distance from the camera to the eyeball) to the viewer feeling determination apparatus 100 via the communication line 30.
  • the receiving unit 101 of the viewer feeling determination apparatus 100 in FIG. 17 receives the face data and eyeball data (line-of-sight direction, pupil image, distance from the camera to the eyeball) of Mr. A and Mr. B.
  • the analysis unit 102 of the viewer feeling determination apparatus 100 calculates, from the line-of-sight direction in Mr. A's eyeball data, the viewpoint position and the viewpoint movement speeds (approach speed Bv, departure speed Bs) and accelerations (approach acceleration Bk, departure acceleration Br) in the image of Mr. B's face on the display 22a owned by Mr. A. Further, the degree of attention is calculated from Mr. A's eyeball data.
  • the analysis unit 102 outputs the analyzed value to the diagnosis unit 104.
  • as long as Mr. B's face is shown on Mr. A's display 22a, the diagnosis unit 104 tracks it, and if Mr. A's attention level (It) expands to the impression value data (Ie) read out from the storage unit 103, it diagnoses that Mr. A is interested in Mr. B. Also, by face analysis, Mr. A's emotions when viewing Mr. B's face, such as surprise, fear, disgust, anger, joy (happiness), and sadness, are judged.
  • the transmission unit 106 transmits this diagnosis result to the videophone 20b (or mediator) of Mr. B.
  • the videophone 20b (or the intermediary's videophone) that has received the diagnosis result displays, at a specific location on the display 22b, a symbol (for example, a heart symbol) indicating that “Mr. A is interested in Mr. B”.
  • conversely, when the time for which Mr. B tracks Mr. A's face shown on Mr. B's display 22b is short, the attention level (It) does not expand to the impression value data (Ie) read from the storage unit 103, and the attention expansion speed (Its) and initial attention expansion acceleration (Itk) are below the normal values (Ihv, Ihk), the diagnosis unit 104 diagnoses that “Mr. B is not interested in Mr. A”. Further, Mr. B's emotions when viewing Mr. A's face, such as surprise, fear, disgust, anger, joy (happiness), and sadness, are judged by face analysis. The transmission unit 106 transmits this diagnosis result to Mr. A's videophone 20a (or the intermediary).
  • the videophone 20a (or the intermediary's videophone) that has received the diagnosis result displays, at a specific location on the display 22a, a symbol (for example, an x mark) indicating that “Mr. B is not interested in Mr. A”, together with the emotion.
  • the above-described compatibility diagnosis can be applied to the following situations.
  • A. Used for marriage arrangements.
  • B. Used for interview tests. The diagnosis result can be used as a reference supporting the content of the statements made.
  • C. Used for interviews with criminal suspects. By applying the device to interview scenes with a criminal suspect in a criminal case, the psychological state of the suspect can be known. In these scenes, the property is used that the degree of attention (It) becomes greater than the normal value (Ih) when the subject is interested in the content of the conversation or the displayed video.
  • the analysis unit 102 measures the viewpoint position and attention level (It) of babies, animals, persons with dementia, etc., and when the viewpoint tracks the visual target and the attention level (It) exceeds the normal value (Ih), the diagnosis unit 104 diagnoses interest in the target.
  • when the tracking time (Bt) of the viewpoint on the visual target is shorter than the normal value (Bht), the viewpoint departure speed (Bs) and acceleration (Br) are faster than the normal values (Bhs, Bhr), and the degree of attention (It) is also below the impression value (Ie), the diagnosis unit 104 diagnoses that there is no interest.
  • face analysis techniques can determine the baby's emotions, such as surprise, fear, disgust, anger, joy (happiness), and sadness, and their levels, which can be used as a reference for the response.
  • the eye camera 3, the body motion photographing device 9, the body temperature measurement device 10, and the viewer feeling determination device 100 shown in FIG. 18 are incorporated in the robot, and a diagnosis is performed on a person who responds to the robot.
  • the robot constantly measures, for the person who responds to it (including babies who cannot yet speak, persons with dementia, and animals), the face, eyeball movement, body movement, facial movement, and body temperature.
  • for the robot response diagnosis, the viewer feeling determination system shown in FIG. 18 is incorporated into the robot.
  • FIG. 18 is a block diagram showing a configuration of the viewer feeling determination device 100 used for the robot diagnosis.
  • the diagnosis unit 104 of the viewer feeling determination apparatus 100 diagnoses feelings of a person who interacts with the robot, or a baby who does not speak, a person with dementia, or an animal. Based on the diagnosis result input from the diagnosis unit 104, the behavior selection unit 107 selects a behavior corresponding to the diagnosis result from a plurality of pre-stored behaviors. As a result, the robot can grasp the feelings of the person and animal who responds according to the diagnosis result, and can issue appropriate words. Appropriate support actions can also be taken.
  • the robot response diagnosis has been described with reference to the block diagram shown in FIG. 18, but the parts that are not particularly described are the same as those in FIG. 2 described above.
  • [Criminal evidence diagnosis] a criminal suspect is diagnosed by showing, on a video display, a video in which footage of the crime scene is inserted in the middle of ordinary footage.
  • the viewer feeling determination apparatus 100 shown in FIGS. 1 and 2 is used.
  • the diagnosis unit 104 notes the moments when the degree of attention (It) increases; the assumption that the location shown at such a moment was the crime scene can be used as a reference for identifying the crime scene.
  • the diagnosis unit 104 diagnoses aversion to the video of that scene; that is, the criminal suspect wants to move away from the scene as soon as possible. Furthermore, face analysis can determine the suspect's emotions, such as fear, disgust, anger, joy (happiness), and sadness, and their degree, and the diagnosis result of the diagnosis unit 104 serves as a reference for determining that the place is the crime scene.
  • when the subject looks at the video or photograph of the crime victim and the degree of attention (It) increases compared with that for other persons, the diagnosis unit 104 diagnoses that the viewer may be the criminal suspect.
  • the diagnosis unit 104 compares the viewpoint tracking time (Bt) on the subject with the normal value (Bht).
  • face analysis can determine the suspect's emotions, such as fear, disgust, anger, joy (happiness), and sadness, and their degree, and can confirm feelings the suspect wants to avoid.
  • by attaching the eye camera 3 to a criminal suspect and taking the suspect to the crime scene, attention to objects related to the crime and facial expressions close to aversion can be diagnosed, which can be helpful for the investigation.
  • FIG. 19 is a block diagram illustrating a configuration of the viewer feeling determination device 100 used for driver drowsiness detection diagnosis.
  • the diagnosis unit 104 of the viewer feeling determination device 100 diagnoses that the driving state is dangerous.
  • the diagnosis unit 104 diagnoses that the driving direction is further inattentive and dangerous when the direction of the line of sight deviates from the front.
  • the warning unit 108 warns the driver when, based on the diagnosis result input from the diagnosis unit 104, a dangerous driving state is determined.
  • the warning may be an announcement or a warning sound, as long as it gives the driver a stimulus for restoring attention.
  • the viewer 1 wears the eye camera 3 and, as the driver, operates a car or airplane simulator or an actual car, so that driving skill and appropriateness can be diagnosed.
  • the viewer feeling determination device 100 shown in FIGS. 1 and 2 is used.
  • the analysis unit 102 of the viewer feeling determination apparatus 100 analyzes the line-of-sight direction and attention level from the eye movement data of the viewer 1, and the diagnosis unit 104 checks whether the viewpoint tracks the point to be noted at the time attention should be paid.
  • the diagnosis unit 104 diagnoses whether the head direction and the line-of-sight direction are always kept to the front. Further, to determine whether the steering wheel or brake of the response device 14 is operated in a timely manner at the moments when a response is required, the analysis unit 102 analyzes the response delay time and the operation method using the response data d, and the diagnosis unit 104 determines whether the operation delay time is within the predetermined time set for avoiding danger and whether the operation obtained from the response data d matches the appropriate operation method set in advance, diagnosing the skill level and appropriateness of the viewer 1 from the operation delay time and the operation method. In addition, video of a car driven recklessly can be put into the simulator, the viewer can be made to perform the driving operation, and the viewer's degree of anger and fear can be calculated by face analysis to determine whether his or her driving attitude is appropriate.
  • Occupational safety skill diagnosis: the occupational safety skill level of a worker wearing the eye camera 3 is diagnosed by having the worker operate equipment on a simulator or an actual machine. The viewer feeling determination device 100 shown in FIGS. 1 and 2 is used. The analysis unit 102 measures the viewpoint position, the degree of attention, and the degree of carelessness based on changes in the worker's gaze direction and the degree of attention (It).
  • attention is judged appropriate when the viewpoint goes to the part to be noted at the time it should be noted (this part becomes the object), the attention level (It) expands to the impression value level (Ie), and the attention expansion speed (Its) and acceleration (Itk) are as large as the impression value levels (Iev, Iek).
  • the diagnosis unit 104 diagnoses whether or not an appropriate response reaction is performed in a timely manner (no time delay) within a predetermined time to be careful.
  • otherwise, the diagnosis unit 104 diagnoses an inappropriate or careless state. Based on the above diagnosis, the worker's skill level and appropriateness are diagnosed from data such as the delay time and the number of correct operations out of all operations.
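The skill-level computation from response delays and the share of correct operations can be sketched as below. This is an illustrative sketch; the patent does not give a formula, so the equal weighting of timeliness and accuracy is an assumption.

```python
def skill_level(delay_times, max_delay, correct_ops, total_ops):
    """Sketch of a skill/appropriateness score from (1) the fraction of
    responses within the allowed delay and (2) the fraction of correct
    operations out of all operations. The 50/50 weighting is an
    illustrative assumption, not from the patent."""
    timely = sum(1 for d in delay_times if d <= max_delay) / len(delay_times)
    accuracy = correct_ops / total_ops
    return 0.5 * timely + 0.5 * accuracy
```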
  • in the pre-work appropriateness diagnosis, the worker's fitness for that day's work is diagnosed before starting work each day.
  • the viewer feeling determination device 100 shown in FIGS. 1 and 2 is used.
  • attention determination content, including the points that should be noted in each part (parts that are dangerous if not operated carefully), is displayed on the video display and shown to the worker.
  • the analysis unit 102 of the viewer feeling determination apparatus 100 analyzes the line-of-sight direction and the attention level (It) in the data from the eye camera 3, and the diagnosis unit 104 determines the viewpoint position, the attention level, and the inattention level.
  • the worker is judged normal when the attention level (It) expands to the impression value level (Ie) and the attention expansion speed (Its) and acceleration (Itk) are as large as the impression value levels (Iev, Iek). Conversely, if the viewpoint approach speed and acceleration (Bk) are slower than the specified values, or the response timing is later than the specified value and there is no appropriate response operation, and any one of these applies, it is determined that the worker's physical condition on the day is not normal.
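The "any one applies" rule above can be sketched as a conjunction of pass/fail checks. This is an illustrative sketch; the four boolean flags are an assumed encoding of the checks named in the text.

```python
def prework_condition_ok(attention_reached, expansion_fast,
                         approach_fast, response_timely):
    """The text says the worker's condition is judged abnormal if ANY
    check fails. Each argument is True when that check passes:
    attention reached Ie, expansion speed/acceleration reached Iev/Iek,
    viewpoint approach speed/acceleration met the specified values,
    and the response was timely and appropriate."""
    checks = [attention_reached, expansion_fast,
              approach_fast, response_timely]
    return all(checks)
```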
  • in the athlete suitability diagnosis for sports, which part the athlete wearing the eye camera 3 is viewing during competition is diagnosed from the viewpoint direction.
  • the viewer feeling determination device 100 shown in FIGS. 1 and 2 is used.
  • the analysis unit 102 of the viewer feeling determination device 100 analyzes the degree of attention based on the magnitude of the degree of attention (It), its expansion speed (Its), and acceleration (Itk). The analysis unit 102 also analyzes the reaction (action).
  • the analysis unit 102 uses the response data d to analyze whether the appropriate response device 14 is operated at the appropriate timing, that is, whether the response is timely (without delay) and appropriate at the moments that require attention. At the same time, face analysis is performed, the emotion at that moment is judged, and any emotional problem is inferred. The diagnosis unit 104 then diagnoses the athlete's skill and suitability by comparison with the average of the appropriate values of all athletes in the same competition.
  • Game excitement diagnosis: in the game excitement level diagnosis, the viewer 1 wears the eye camera 3 and plays the game, using the viewer feeling determination device 100 shown in FIGS. 1 and 2. When the attention level (It) expands to the impression value level (Ie) during the game, the diagnosis unit 104 diagnoses that the viewer is clearly interested in the game. At the same time, the face is analyzed and emotions such as fear, disgust, anger, joy (happiness), and sadness are determined to judge compatibility with the game. Moreover, when the measured body temperature is higher than the normal value before the game, the diagnosis unit 104 diagnoses the degree of excitement according to the rise.
  • otherwise, the diagnosis unit 104 diagnoses that the viewer is not excited.
  • the viewer feeling determination device 100 of the present invention may be incorporated in a game machine, with the viewer as the game player and the visual target as the game content viewed. If the game machine has a selection unit that selects an appropriate game content screen, sound, and game program based on the viewer's diagnosis result input from the diagnosis unit, together with a function to execute the selection, then, for example, when the viewer's emotion value is low (not interested), it is clearly determined that the viewer is not interested in the visual target and the game machine guides the game toward content of interest. The game machine can also respond to the viewer's emotions (surprise, anger, fear, sadness, disgust, happiness (joy)) by changing the visual target appropriately (changing the game content screen, sound, game program, etc.).
  • the visual recognition method may be a method of viewing an actual space or an object displayed on a display such as a television or a PC.
  • the space refers to a space such as a cityscape, a department store, a design or display in a store, a design or display in an exhibition hall, a station passage, a station platform, a station building, or a car interior.
  • the viewer feeling determination device 100 shown in FIGS. 1 and 2 is used for the space design diagnosis. Note that the analysis unit 102 of the viewer feeling determination device 100 determines whether the viewer 1 is moving forward, backward, or moving left and right in the actual space by analyzing the visual image a.
  • the object 2 is a specific object in the video (the object of appeal may be plural).
  • when (a) the viewer looks at the object but the degree of attention (It) has not expanded to the impression value (Ie) (indifference), or (b) the viewpoint tracking time (Bt) is shorter than the normal value (Bht) (no interest), the diagnosis unit 104 diagnoses the object 2 as an “inappropriate design” for the viewer 1. Furthermore, when face analysis shows feelings of aversion, anger, sadness, etc. at that time, the behavior is clearly the opposite of the case where the design is appropriate; from this, the degree of the diagnosis “design is not appropriate” can be determined.
  • when (a) the degree of attention (It) is large, (b) the viewpoint tracking time (Bt) is long, and (c) the viewpoint departure speed (Bs) and initial acceleration (Br) are small, the object is the most interesting one in the design. Further, if face analysis determines that the feeling of joy (happiness) at that time is large, the diagnosis unit 104 diagnoses that “the design is more appropriate than other designs”.
  • conversely, when for an object (a) the degree of attention (It) is small and (b) the viewpoint retention time (Bt) is short, the diagnosis unit 104 determines that “the object of this design is not appropriate compared with the objects of the other designs”.
  • the visual target is a store clerk, receptionist, or similar person serving customers, and the viewer acts from the customer's perspective. The viewer's feelings are diagnosed to determine whether the service of the clerk or receptionist is good or bad. If the viewer's emotion value (E) is low (not interested), it can be determined that there is clearly no particular problem with the other party and that the service is normal.
  • [Video content selection priority diagnosis] In the viewer feeling determination apparatus for a visual scene according to any one of claims 1 to 3 and 7 to 9, the viewer is a general TV viewer (receiver of video content) and the visual target is content on a TV display connected to a broadcast, a network, or a video storage device.
  • the viewer feeling determination apparatus 100 shown in FIGS. 1 and 2 is used. Initially, when the viewer 1 views basic content in advance, the apparatus selects the scenes in which the viewer 1 shows high interest. However, scenes for which face analysis shows high anger (AG), fear (FR), or aversion (DI) are removed from the selection.
  • From these scenes, the content areas of the viewer's high interest are determined, and content of high interest viewed by other viewers who show similar interests is stored in the storage unit 103 as recommended content.
  • The recommended content is displayed when the viewer 1 actually views content, and one content item can be selected and viewed according to the viewer's own intention.
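The preselection step above, which removes scenes showing high anger (AG), fear (FR), or disgust (DI), can be sketched as follows (the scene dictionaries, the `filter_scenes` helper, and the threshold value are hypothetical illustrations, not specified by the text):

```python
def filter_scenes(scenes, threshold=0.5):
    """Keep only candidate scenes whose face-analysis anger (AG),
    fear (FR), and disgust (DI) values all stay at or below the threshold."""
    return [s for s in scenes
            if max(s["AG"], s["FR"], s["DI"]) <= threshold]
```

A scene whose AG value is 0.9, for example, would be dropped, while low-valued scenes remain as recommendation candidates.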
  • From the next time onward, while the viewer 1 is viewing the video content, the diagnosis unit 104 evaluates the expansion of the degree of attention (It) toward the impression value (Ie) and the attention expansion speed (Its).
  • The diagnosis result is recorded in the storage unit 103 and used as recommendation data for selecting content at the next viewing of video content.
  • When the interest or excitement is low, the diagnosis unit 104 switches to content in another field, namely content not yet viewed in a field in which the viewer's interest and excitement have previously been high, and at the same time, when the next content to be viewed is selected, lowers the priority for selecting content in the current field.
  • The diagnosis result is recorded in the storage unit 103 and used as recommendation data for selecting content at the next viewing of video content.
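The switching and priority-lowering step above can be sketched as follows (the priority dictionary, the scoring values, and the helper names are assumptions; the text does not specify an actual scoring scheme):

```python
def update_field_priority(priorities, current_field, interested,
                          boost=1.0, penalty=1.0):
    """Raise or lower a content field's selection priority according to
    the diagnosed interest, as in the recommendation step above."""
    p = dict(priorities)
    p[current_field] = p.get(current_field, 0.0) + (boost if interested else -penalty)
    return p

def select_next(priorities, viewed_fields):
    """Pick the not-yet-viewed field with the highest stored priority."""
    candidates = {f: s for f, s in priorities.items() if f not in viewed_fields}
    return max(candidates, key=candidates.get) if candidates else None
```

After a low-interest diagnosis for the "news" field, for instance, its priority drops and an unviewed, previously high-interest field is selected next.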
  • In the viewer feeling determination apparatus for a visual scene according to any one of claims 1 to 3 and 7 to 9, the viewer is a general TV viewer (a receiver of video content) and the visual recognition target is content on a TV display connected to a broadcast, a network, or a video storage device. A viewer feeling determination apparatus 100 shown in FIGS. 1 and 2 is used. For automatic marking of objects or persons of high interest in the content, the receiver automatically marks the object or person at the viewpoint at the moment of high sensitivity while the viewer is watching the program. In a system in which information on the marked item or person is later distributed over the network by e-mail, the item can be purchased later and the person can be contacted, according to the viewer's own intention.
  • The apparatus is characterized by automatically removing, depending on the conditions, marked items for which the disgust, anger, or fear values among the emotion values obtained by face analysis are high.
  • In various situations such as purchase willingness diagnosis, learning willingness diagnosis, diagnosis of the viewer's emotional response against the emotional response expected by the video content creator, content evaluation by the distribution of attention points, compatibility diagnosis, diagnosis of communication with non-speaking babies, people with dementia, animals, and the like, robot response method determination, criminal evidence diagnosis, driver drowsiness detection diagnosis, car driving aptitude diagnosis, labor safety proficiency diagnosis, pre-work appropriateness diagnosis, athlete aptitude diagnosis, game excitement diagnosis, spatial design diagnosis, audience response diagnosis, video content selection priority diagnosis, and automatic marking of objects or persons of high interest in content, the object or the ability of the viewer itself can be evaluated more accurately.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Eye Examination Apparatus (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a viewer feeling determination apparatus capable of more accurately capturing the physiological reaction of a viewer and more accurately evaluating the aptitude of an object or of the viewer under various conditions. The viewer feeling determination apparatus (100) for a visually recognized scene comprises an analysis unit (102), a storage unit (103), and a diagnosis unit (104). The analysis unit (102) calculates the position of the viewpoint on a visually recognized image (a) from the visually recognized image (a) and an eye-movement image (b), and calculates the change in the data relating to the viewer's physiological reaction and the associated acceleration. The storage unit (103) stores, as an emotion value (K), the information comprising the change in the physiological-reaction data and the associated acceleration obtained when the viewer looks at an interesting object, and stores, as an analysis value (A), the information comprising the change in the physiological-reaction data and the associated acceleration calculated by the analysis unit (102). The diagnosis unit (104) compares the analysis value (A) with the emotion value (K), analyzes the viewer's feeling toward a specific object by a face-analysis method in which the change of each part of the viewer's face is calculated, and diagnoses the emotion and feeling.
PCT/JP2009/067659 2009-10-09 2009-10-09 Dispositif de détermination du sentiment d'un spectateur pour une scène reconnue visuellement WO2011042989A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2011535257A JP5445981B2 (ja) 2009-10-09 2009-10-09 視認情景に対する視認者情感判定装置
PCT/JP2009/067659 WO2011042989A1 (fr) 2009-10-09 2009-10-09 Dispositif de détermination du sentiment d'un spectateur pour une scène reconnue visuellement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2009/067659 WO2011042989A1 (fr) 2009-10-09 2009-10-09 Dispositif de détermination du sentiment d'un spectateur pour une scène reconnue visuellement

Publications (1)

Publication Number Publication Date
WO2011042989A1 true WO2011042989A1 (fr) 2011-04-14

Family

ID=43856481

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/067659 WO2011042989A1 (fr) 2009-10-09 2009-10-09 Dispositif de détermination du sentiment d'un spectateur pour une scène reconnue visuellement

Country Status (2)

Country Link
JP (1) JP5445981B2 (fr)
WO (1) WO2011042989A1 (fr)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015014834A (ja) * 2013-07-03 2015-01-22 株式会社Lassic 機械対話による感情推定システム及びそのプログラム
JP2015503414A (ja) * 2012-01-05 2015-02-02 ユニバーシティー コート オブ ザユニバーシティー オブ アバディーン 精神医学的評価用装置および方法
KR101490505B1 (ko) * 2014-07-08 2015-02-10 주식회사 테라클 관심도 생성 방법 및 장치
WO2015056742A1 (fr) 2013-10-17 2015-04-23 光一 菊池 Dispositif de mesure de l'efficacité visuelle
JP2015516714A (ja) * 2012-03-08 2015-06-11 エンパイア テクノロジー ディベロップメント エルエルシー モバイルデバイスに関連付けられた体感品質の測定
WO2016143759A1 (fr) * 2015-03-06 2016-09-15 株式会社 脳機能研究所 Dispositif d'estimation d'émotion et procédé d'estimation d'émotion
JP6042015B1 (ja) * 2016-06-07 2016-12-14 株式会社採用と育成研究社 オンライン面接評価装置、方法およびプログラム
WO2017057631A1 (fr) * 2015-10-01 2017-04-06 株式会社夏目綜合研究所 Appareil de détermination de l'émotion d'un spectateur, qui élimine l'influence de la luminosité, de la respiration et du pouls, système de détermination de l'émotion d'un spectateur, et programme
EP2637078A4 (fr) * 2010-11-02 2017-05-17 NEC Corporation Système de traitement d'informations et procédé de traitement d'informations
JP2017516140A (ja) * 2014-04-29 2017-06-15 マイクロソフト テクノロジー ライセンシング,エルエルシー 顔の表情のトラッキング
JP2017184996A (ja) * 2016-04-05 2017-10-12 渡 倉島 瞳孔径拡大による脳活動量判定装置およびプログラム
JP2018027267A (ja) * 2016-08-19 2018-02-22 Kddi株式会社 映像視聴者の状態推定装置、方法及びプログラム
CN108337539A (zh) * 2017-12-22 2018-07-27 新华网股份有限公司 一种比较观众反应的方法和装置
EP3496099A1 (fr) * 2017-12-08 2019-06-12 Nokia Technologies Oy Procédé et appareil permettant de définir un synopsis sur la base des probabilités de trajet
EP3496100A1 (fr) * 2017-12-08 2019-06-12 Nokia Technologies Oy Procédé et appareil permettant d'appliquer un comportement de visualisation vidéo
WO2020016969A1 (fr) * 2018-07-18 2020-01-23 株式会社ソニー・インタラクティブエンタテインメント Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JPWO2020194529A1 (fr) * 2019-03-26 2020-10-01
WO2021044540A1 (fr) * 2019-09-04 2021-03-11 日本電気株式会社 Dispositif de commande, procédé de commande et support de stockage
WO2022107288A1 (fr) * 2020-11-19 2022-05-27 日本電信電話株式会社 Dispositif d'estimation, procédé d'estimation et programme d'estimation
JP7138998B1 (ja) * 2021-08-31 2022-09-20 株式会社I’mbesideyou ビデオセッション評価端末、ビデオセッション評価システム及びビデオセッション評価プログラム
WO2022230156A1 (fr) * 2021-04-29 2022-11-03 株式会社I’mbesideyou Système d'analyse vidéo
JP7481398B2 (ja) 2022-07-04 2024-05-10 ソフトバンク株式会社 判定装置、プログラム、及び判定方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113397544B (zh) * 2021-06-08 2022-06-07 山东第一医科大学附属肿瘤医院(山东省肿瘤防治研究院、山东省肿瘤医院) 一种病患情绪监测方法及***

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009508553A (ja) * 2005-09-16 2009-03-05 アイモーションズ−エモーション テクノロジー エー/エス 眼球性質を解析することで、人間の感情を決定するシステムおよび方法
JP2009116697A (ja) * 2007-11-07 2009-05-28 Sony Corp 情報提示装置、情報提示方法及びデータベース作成方法

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009508553A (ja) * 2005-09-16 2009-03-05 アイモーションズ−エモーション テクノロジー エー/エス 眼球性質を解析することで、人間の感情を決定するシステムおよび方法
JP2009116697A (ja) * 2007-11-07 2009-05-28 Sony Corp 情報提示装置、情報提示方法及びデータベース作成方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HIROSHI YAMADA ET AL.: "Ganmen Hyojo no Chikakuteki Handan Katei ni Kansuru Setsumei Model", JAPANESE PSYCHOLOGICAL REVIEW, vol. 4, no. 2, 2000, pages 245 - 255 *
WATARU KURASHIMA ET AL.: "Dokokei Hanno to Kao Hyojo Hanno no Yugo ni yoru Jokan Hyoka no Atarashii Teian", JOURNAL OF JAPANESE ACADEMY OF FACIAL STUDIES, vol. 9, no. 1, 5 October 2009 (2009-10-05), pages 206 *

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2637078A4 (fr) * 2010-11-02 2017-05-17 NEC Corporation Système de traitement d'informations et procédé de traitement d'informations
JP2015503414A (ja) * 2012-01-05 2015-02-02 ユニバーシティー コート オブ ザユニバーシティー オブ アバディーン 精神医学的評価用装置および方法
US9507997B2 (en) 2012-03-08 2016-11-29 Empire Technology Development Llc Measuring quality of experience associated with a mobile device
KR101819791B1 (ko) 2012-03-08 2018-01-17 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 모바일 디바이스와 연관된 경험의 품질 측정
JP2015516714A (ja) * 2012-03-08 2015-06-11 エンパイア テクノロジー ディベロップメント エルエルシー モバイルデバイスに関連付けられた体感品質の測定
JP2015014834A (ja) * 2013-07-03 2015-01-22 株式会社Lassic 機械対話による感情推定システム及びそのプログラム
JPWO2015056742A1 (ja) * 2013-10-17 2017-03-09 株式会社夏目綜合研究所 視認対象効果度測定装置
WO2015056742A1 (fr) 2013-10-17 2015-04-23 光一 菊池 Dispositif de mesure de l'efficacité visuelle
JP2017516140A (ja) * 2014-04-29 2017-06-15 マイクロソフト テクノロジー ライセンシング,エルエルシー 顔の表情のトラッキング
KR101490505B1 (ko) * 2014-07-08 2015-02-10 주식회사 테라클 관심도 생성 방법 및 장치
WO2016143759A1 (fr) * 2015-03-06 2016-09-15 株式会社 脳機能研究所 Dispositif d'estimation d'émotion et procédé d'estimation d'émotion
JPWO2016143759A1 (ja) * 2015-03-06 2017-12-14 株式会社脳機能研究所 感情推定装置及び感情推定方法
WO2017057631A1 (fr) * 2015-10-01 2017-04-06 株式会社夏目綜合研究所 Appareil de détermination de l'émotion d'un spectateur, qui élimine l'influence de la luminosité, de la respiration et du pouls, système de détermination de l'émotion d'un spectateur, et programme
EP3357424A4 (fr) * 2015-10-01 2019-06-19 Natsume Research Institute, Co., Ltd. Appareil de détermination de l'émotion d'un spectateur, qui élimine l'influence de la luminosité, de la respiration et du pouls, système de détermination de l'émotion d'un spectateur, et programme
CN108366764A (zh) * 2015-10-01 2018-08-03 株式会社夏目综合研究所 排除明暗、呼吸和脉搏的影响的观看者情绪判定装置、观看者情绪判定***和程序
JPWO2017057631A1 (ja) * 2015-10-01 2018-09-06 株式会社夏目綜合研究所 明暗、呼吸及び脈拍の影響を排除する視認者情感判定装置、視認者情感判定システム及びプログラム
JP2017184996A (ja) * 2016-04-05 2017-10-12 渡 倉島 瞳孔径拡大による脳活動量判定装置およびプログラム
JP6042015B1 (ja) * 2016-06-07 2016-12-14 株式会社採用と育成研究社 オンライン面接評価装置、方法およびプログラム
JP2018027267A (ja) * 2016-08-19 2018-02-22 Kddi株式会社 映像視聴者の状態推定装置、方法及びプログラム
EP3496100A1 (fr) * 2017-12-08 2019-06-12 Nokia Technologies Oy Procédé et appareil permettant d'appliquer un comportement de visualisation vidéo
US11195555B2 (en) 2017-12-08 2021-12-07 Nokia Technologies Oy Method and apparatus for defining a storyline based on path probabilities
WO2019110873A1 (fr) * 2017-12-08 2019-06-13 Nokia Technologies Oy Procédé et appareil permettant de définir un synopsis sur la base de probabilités de trajet
WO2019110874A1 (fr) * 2017-12-08 2019-06-13 Nokia Technologies Oy Procédé et appareil permettant d'appliquer un comportement de visualisation de vidéo
CN111527495B (zh) * 2017-12-08 2023-08-11 诺基亚技术有限公司 用于应用视频观看行为的方法和装置
EP3496099A1 (fr) * 2017-12-08 2019-06-12 Nokia Technologies Oy Procédé et appareil permettant de définir un synopsis sur la base des probabilités de trajet
CN111527495A (zh) * 2017-12-08 2020-08-11 诺基亚技术有限公司 用于应用视频观看行为的方法和装置
US11188757B2 (en) 2017-12-08 2021-11-30 Nokia Technologies Oy Method and apparatus for applying video viewing behavior
CN108337539A (zh) * 2017-12-22 2018-07-27 新华网股份有限公司 一种比较观众反应的方法和装置
WO2020016969A1 (fr) * 2018-07-18 2020-01-23 株式会社ソニー・インタラクティブエンタテインメント Dispositif de traitement d'informations, procédé de traitement d'informations et programme
JPWO2020194529A1 (fr) * 2019-03-26 2020-10-01
WO2020194529A1 (fr) * 2019-03-26 2020-10-01 日本電気株式会社 Dispositif de détermination d'intérêt, système de détermination d'intérêt, procédé de détermination d'intérêt, et support non transitoire lisible par ordinateur dans lequel un programme est stocké
JP7207520B2 (ja) 2019-03-26 2023-01-18 日本電気株式会社 興味判定装置、興味判定システム、興味判定方法及びプログラム
US11887349B2 (en) 2019-03-26 2024-01-30 Nec Corporation Interest determination apparatus, interest determination system, interest determination method, and non-transitory computer readable medium storing program
WO2021044540A1 (fr) * 2019-09-04 2021-03-11 日本電気株式会社 Dispositif de commande, procédé de commande et support de stockage
WO2022107288A1 (fr) * 2020-11-19 2022-05-27 日本電信電話株式会社 Dispositif d'estimation, procédé d'estimation et programme d'estimation
JP7444286B2 (ja) 2020-11-19 2024-03-06 日本電信電話株式会社 推定装置、推定方法、および、推定プログラム
WO2022230156A1 (fr) * 2021-04-29 2022-11-03 株式会社I’mbesideyou Système d'analyse vidéo
JP7138998B1 (ja) * 2021-08-31 2022-09-20 株式会社I’mbesideyou ビデオセッション評価端末、ビデオセッション評価システム及びビデオセッション評価プログラム
WO2023032057A1 (fr) * 2021-08-31 2023-03-09 株式会社I’mbesideyou Terminal, système et programme d'évaluation de session vidéo
JP7481398B2 (ja) 2022-07-04 2024-05-10 ソフトバンク株式会社 判定装置、プログラム、及び判定方法

Also Published As

Publication number Publication date
JP5445981B2 (ja) 2014-03-19
JPWO2011042989A1 (ja) 2013-02-28

Similar Documents

Publication Publication Date Title
JP5445981B2 (ja) 視認情景に対する視認者情感判定装置
JP2010094493A (ja) 視認情景に対する視認者情感判定装置
Foulsham et al. The where, what and when of gaze allocation in the lab and the natural environment
TWI741512B (zh) 駕駛員注意力監測方法和裝置及電子設備
CN107929007B (zh) 一种利用眼动追踪和智能评估技术的注意力和视觉能力训练***及方法
CN112034977B (zh) Mr智能眼镜内容交互、信息输入、应用推荐技术的方法
CA2545202C (fr) Procede et appareil de poursuite oculaire sans etalonnage
CN103181180B (zh) 提示控制装置以及提示控制方法
EP2600331A1 (fr) Formation et éducation au moyen de casques de visualisation de réalité
JP2017507400A (ja) 注視によるメディア選択及び編集のためのシステム並びに方法
US20170263017A1 (en) System and method for tracking gaze position
JP5921674B2 (ja) 動きガイド提示方法、そのシステム及び動きガイド提示装置
US20120194648A1 (en) Video/ audio controller
Hessels et al. Looking behavior and potential human interactions during locomotion
US11243609B2 (en) Information processing apparatus, information processing method, and program
JP7066115B2 (ja) パブリックスピーキング支援装置、及びプログラム
KR20190048144A (ko) 발표 및 면접 훈련을 위한 증강현실 시스템
Chukoskie et al. Quantifying gaze behavior during real-world interactions using automated object, face, and fixation detection
TWM562459U (zh) 用於互動式線上教學的即時監控系統
WO2023037348A1 (fr) Système et procédé de surveillance d'interactions de dispositif humain
JP2010063621A (ja) 視認情景に対する視認者感性反応装置
KR102038413B1 (ko) 그레이디언트 벡터 필드와 칼만 필터를 이용한 온라인 강의 모니터링 방법
CN116133594A (zh) 基于声音的注意力状态评价
Jianwattanapaisarn et al. Investigation of real-time emotional data collection of human gaits using smart glasses
US20240050831A1 (en) Instructor avatars for augmented reality experiences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09850260

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2011535257

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09850260

Country of ref document: EP

Kind code of ref document: A1