WO2018235595A1 - Information providing device, information providing method, and program - Google Patents
Information providing device, information providing method, and program
- Publication number: WO2018235595A1 (PCT application PCT/JP2018/021581)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- display
- child
- information providing
- image
- time
- Prior art date
Classifications
- G06F3/013—Eye tracking input arrangements
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- H04N23/80—Camera processing pipelines; Components thereof
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment
- G06F3/14—Digital output to display device; Cooperation and interconnection of the display device with other functional units
- G09G5/14—Display of multiple viewports
- H04N21/4223—Cameras (input-only peripherals of client devices)
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/44204—Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
- H04N7/181—Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
- H04N7/183—Closed-circuit television [CCTV] systems for receiving images from a single remote source
- G06T2207/30196—Human being; Person (subject of image analysis)
- G09G2320/0613—Adjustment of display parameters depending on the type of the information to be displayed
- G09G2340/0464—Positioning (changes in size, position or resolution of an image)
- G09G2354/00—Aspects of interface with display user
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Definitions
- the present invention relates to an information providing apparatus, an information providing method, and a program.
- the display area on the display is divided into a main display area and a sub display area, and the video is displayed in each of the main display area and the sub display area.
- the video on which a viewer's interest is focused is detected based on the viewer's gaze information and the positional information of the face image.
- in this display method, when the video of interest is displayed in the sub display area, the display is switched so that the video is shown in the main display area.
- an availability calculation device is disclosed that includes: a line-of-sight information acquisition unit that acquires line-of-sight information of a plurality of subjects; a line-of-sight mode determination unit that determines the mode in which the lines of sight of the subjects are directed to an object; a relationship determination unit that determines the relationship between the plurality of subjects; and an availability calculation unit that calculates, based on the relationship determined by the relationship determination unit and the mode determined by the line-of-sight mode determination unit, an index value indicating the possibility that the object is used by a person.
- the content display method described in Patent Document 3 determines a degree of affirmation based on the time the line of sight stays on the display panel and the number of times the line of sight deviates from the display panel, and evaluates the advertisement being displayed on the display panel based on that degree. Then, based on the evaluation result, the advertisement to be displayed on the display panel is determined.
- Acquisition means for acquiring a photographed image obtained by photographing the periphery of the display;
- An image analysis means for detecting a position on the display that is viewed by a person extracted from the photographed image;
- a control unit configured to divide a display area of the display into a plurality of child areas, display contents in each of the plurality of child areas, and change a display of the child areas viewed by a person extracted from the photographed image;
- An information providing apparatus is provided.
- Acquisition means for acquiring a photographed image obtained by photographing the periphery of the display;
- An image analysis means for extracting the face of a person from the captured image by analyzing the captured image, and detecting the number of people looking at the display and the display viewing time, which is the time for which the display is viewed;
- Control means for determining the content to be displayed on the display according to the detection result;
- An information providing apparatus is provided.
- An information providing method is provided in which a computer performs: an acquisition step of acquiring a photographed image obtained by photographing the periphery of the display; an image analysis step of detecting a position on the display viewed by a person extracted from the photographed image; and a control step of dividing a display area of the display into a plurality of child areas, displaying contents in each of the plurality of child areas, and changing the display of the child area viewed by a person extracted from the photographed image.
- A program is provided that causes a computer to function as: acquisition means for acquiring a photographed image obtained by photographing the periphery of the display; image analysis means for detecting a position on the display viewed by a person extracted from the photographed image; and control means for dividing a display area of the display into a plurality of child areas, displaying contents in each of the plurality of child areas, and changing the display of the child area viewed by a person extracted from the photographed image.
- An information providing method is provided in which a computer performs: an acquisition step of acquiring a photographed image obtained by photographing the periphery of the display; an image analysis step of extracting the face of a person from the captured image by analyzing the captured image, and detecting the number of people looking at the display and the display viewing time, which is the time for which the display is viewed; and a control step of determining the content to be displayed on the display according to the detection result.
- A program is provided that causes a computer to function as: acquisition means for acquiring a photographed image obtained by photographing the periphery of the display; image analysis means for extracting the face of a person from the captured image by analyzing the captured image, and detecting the number of people looking at the display and the display viewing time, which is the time for which the display is viewed; and control means for determining the content to be displayed on the display according to the detection result.
- FIG. 1 is a diagram showing an example of the hardware configuration of the information providing apparatus of this embodiment. FIG. 2 is an example of a functional block diagram of the information providing apparatus of this embodiment. FIG. 3 is a diagram schematically showing an example of the analysis result data generated by the information providing apparatus of this embodiment.
- the information providing system of this embodiment is so-called digital signage (the same applies to all the following embodiments).
- the information providing system has a display installed at an arbitrary place, a camera for photographing the periphery of the display, and an information providing device for controlling the display.
- the display is installed anywhere, such as outdoors, indoors, and public areas.
- the display outputs predetermined content (eg, a moving image, a still image, a sound, etc.) according to the control of the information providing apparatus.
- the file of the content to be output may be stored in advance in the storage device of the display, or may be input to the display from an external device (eg, an information providing device).
- the camera is configured to be capable of capturing a moving image.
- the camera may be configured to be capable of continuously shooting still images at predetermined time intervals (eg, every one second).
- the camera is installed (fixed) at a position from which the periphery of the display (for example, the front area where viewers of the display are located) can be photographed. The camera then transmits the generated image file (moving image file or still image file) to the information providing device in real time.
- the information providing apparatus includes an acquisition unit, an image analysis unit, and a control unit.
- the acquisition unit acquires a photographed image (image file) obtained by photographing the periphery of the display.
- the image analysis unit extracts the face of a person from the captured image by analyzing the captured image, and detects the position on the display viewed by the extracted person.
- the control unit divides the display area of the display into a plurality of child areas, displays the content in each of the plurality of child areas, and changes the display of the child area viewed by the person extracted from the photographed image. For example, the content to be displayed in the child area is changed.
- the content displayed in the child area viewed by the viewer is not kept fixed but is changed (that is, a change is caused), so that the viewer's attention to that child area can be increased.
- the viewer's attention to the child area can be further increased.
- the processing load on the display can be reduced by changing only the content to be displayed in a part (the child area viewed by the viewer) instead of changing the contents of all the plurality of child areas.
- the information providing device may be installed near the display.
- the display and the information providing apparatus may be physically and / or logically integrated, or may be separately configured. In the latter case, the display and the information providing device are configured to be communicable by any communication means.
- the information providing device may be installed at a position distant from the display to remotely control the display.
- the display and the information providing apparatus may be connected to each other via a communication network such as the Internet, or may be connected to each other via a dedicated communication network.
- the display and the camera provided in the information providing system of the present embodiment can be realized according to the prior art.
- the configuration of the information providing apparatus will be described in detail.
- Each functional unit included in the information providing apparatus is realized by an arbitrary combination of hardware and software, centered on the CPU (central processing unit) of an arbitrary computer, a memory, a program loaded into the memory, a storage unit such as a hard disk storing the program (which can store not only programs stored in the apparatus in advance from the shipping stage, but also programs downloaded from storage media such as CDs (Compact Discs) or from servers on the Internet), and a network connection interface. It will be understood by those skilled in the art that there are various modifications to the implementation method and apparatus.
- FIG. 1 is a block diagram illustrating the hardware configuration of the information providing apparatus of the present embodiment.
- the information providing apparatus includes a processor 1A, a memory 2A, an input / output interface 3A, a peripheral circuit 4A, and a bus 5A.
- Peripheral circuit 4A includes various modules.
- the information providing apparatus may be configured by a plurality of physically and / or logically divided devices. In this case, each of the plurality of devices may have a processor 1A, a memory 2A, an input / output interface 3A, a peripheral circuit 4A, and a bus 5A.
- the bus 5A is a data transmission path for the processor 1A, the memory 2A, the peripheral circuit 4A, and the input / output interface 3A to mutually transmit and receive data.
- the processor 1A is an arithmetic processing unit such as a CPU or a graphics processing unit (GPU), for example.
- the memory 2A is, for example, a memory such as a random access memory (RAM) or a read only memory (ROM).
- the input / output interface 3A includes an interface for acquiring information from an input device (e.g., a keyboard, a mouse, a microphone, a physical key, a touch panel display, a code reader, etc.), an external device, an external server, an external sensor, etc., and an interface for outputting information to an output device (e.g., a display, a speaker, a printer, a mailer, etc.), an external device, an external server, etc.
- the processor 1A can issue an instruction to each module and perform an operation based on the result of the operation.
- the information providing apparatus 10 includes an acquisition unit 11, an image analysis unit 12, and a control unit 13.
- the acquisition unit 11 acquires a photographed image (image file) obtained by photographing the periphery of the display from the above-described camera.
- the image file includes a plurality of captured images (frames).
- the acquisition unit 11 may acquire information capable of specifying the photographing date and time of each of a plurality of photographed images from the above-described camera.
- the image analysis unit 12 analyzes the captured image to extract the face of a person from the captured image, and detects the position on the display viewed by the extracted person (first detection processing). Further, the image analysis unit 12 detects the child area browsing time, which is the time for which a person extracted from the photographed image has been looking at a predetermined child area (the child area viewed at that timing) (second detection processing). The image analysis unit 12 may analyze all captured images included in the image file, or may analyze captured images at predetermined time intervals (e.g., every 1 second, every 3 seconds).
- the means for extracting the human face from the photographed image is not particularly limited, and any conventional technique can be adopted.
- the image analysis unit 12 can detect the position on the display viewed by the extracted person using any conventional technique. For example, the position on the display that a person is looking at can be detected based on the gaze direction of the person, the distance between the person and the camera, the position of the person's face in the captured image, and the relative relationship between the camera and the display held in advance (e.g., the angle between the optical axis of the camera and the display area (surface), the distance between the camera and the display, etc.).
- the gaze direction of a person may be identified by image analysis or by other means.
- the gaze direction can be estimated based on the direction of the face, the position of the eyeball, and the like.
- the orientation of the face can be estimated based on the position in the face area of the characteristic part such as the eyes, nose, and mouth, the relative positional relationship between these, and the like.
- the distance between the person and the camera may be specified by image analysis, or may be specified by other means.
- in the case of image analysis, for example, the distance can be estimated based on the size of the face (the area it occupies in the photographed image) or the like.
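The estimation steps above can be sketched in code. This is a minimal illustration only: the pinhole-camera distance model, the calibration constants, and the assumption that the camera sits at the display origin with its optical axis perpendicular to the display surface are assumptions made here, not part of the disclosure.

```python
import math

# Hypothetical calibration constants, illustrative values only.
FOCAL_LENGTH_PX = 800.0   # camera focal length expressed in pixels
AVG_FACE_WIDTH_M = 0.16   # assumed average human face width in metres

def estimate_distance(face_width_px: float) -> float:
    """Pinhole-camera estimate of viewer distance from the face size
    (occupied width in the photographed image)."""
    return FOCAL_LENGTH_PX * AVG_FACE_WIDTH_M / face_width_px

def viewing_position(face_x_m: float, face_y_m: float, distance_m: float,
                     yaw_rad: float, pitch_rad: float) -> tuple:
    """Intersect the gaze ray with the display plane.

    Assumes the camera is at the display origin with its optical axis
    perpendicular to the display surface, so the gaze ray travels
    distance_m back toward the plane of the display.
    """
    dx = distance_m * math.tan(yaw_rad)    # horizontal gaze offset
    dy = distance_m * math.tan(pitch_rad)  # vertical gaze offset
    return (face_x_m + dx, face_y_m + dy)
```

Under these hypothetical constants, a face 800 pixels wide is estimated at 0.16 m from the camera, and a viewer looking straight ahead maps to the point on the display directly in front of the face.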
- FIG. 3 schematically shows an example of analysis result data generated by the first detection process.
- the image analysis unit 12 analyzes the captured images transmitted from the camera in real time, and adds new analysis results to the analysis result data.
- the extraction ID (identifier), the viewer ID, the date and time, the time, the viewing position, and the in-image position are associated with one another. In addition, other information may be further included.
- “Date and time” indicates the shooting date and time of each captured image (each frame).
- the extraction IDs 0000001 to 0000003, whose date-and-time information matches, are information obtained by analyzing the same captured image.
- the extraction IDs 0000004 to 0000006 are information obtained by analyzing the same photographed image.
- the extraction ID is an ID attached each time a person is extracted from a photographed image. When the same person is extracted from a plurality of photographed images, a different extraction ID is attached to each of the extractions.
- the “viewer ID” is an ID assigned to each person extracted from the photographed image.
- the image analysis unit 12 groups faces considered to be the same person existing across a plurality of photographed images, and attaches one viewer ID.
- the image analysis unit 12 may associate the positions in the photographed image of n (n is one or more) faces extracted from a first photographed image with the positions in the photographed image of m (m is one or more, and may be the same as n) faces extracted from the second photographed image immediately after it, thereby associating the faces of the same person existing across the photographed images with each other. For example, faces satisfying "the distance between the position in the first captured image (the in-image position shown in FIG. 3) and the position in the second captured image is less than or equal to a predetermined value" may be associated with each other.
- the image analysis unit 12 can attach a viewer ID to each group.
- the image analysis unit 12 may extract feature amounts of the face of each person extracted from the captured image. Then, using the feature amount, the same person existing across a plurality of photographed images may be grouped, and a viewer ID may be attached to each group.
- Either method can be adopted in the present embodiment.
- in the former case, it is not necessary to acquire information for identifying a viewer (face feature amounts).
- the former can be said to be a preferable method in consideration of privacy.
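The former, position-based grouping might look like the following sketch; the viewer-ID format, the distance threshold, and the greedy nearest-match strategy are illustrative assumptions.

```python
def associate_faces(prev_faces, curr_positions, max_dist, next_id):
    """Link faces across consecutive frames by in-image proximity.

    prev_faces: dict of viewer_id -> (x, y) face position in the
    previous captured image. curr_positions: list of (x, y) positions
    extracted from the current image. A face within max_dist of an
    unclaimed previous face inherits its viewer ID; otherwise it is
    treated as a new person and given a fresh ID.
    """
    assigned = {}
    used = set()
    for pos in curr_positions:
        best_id, best_d = None, max_dist
        for vid, prev_pos in prev_faces.items():
            if vid in used:
                continue
            d = ((pos[0] - prev_pos[0]) ** 2 + (pos[1] - prev_pos[1]) ** 2) ** 0.5
            if d <= best_d:
                best_id, best_d = vid, d
        if best_id is None:
            best_id = f"V{next_id:05d}"  # e.g. V00002
            next_id += 1
        else:
            used.add(best_id)
        assigned[best_id] = pos
    return assigned, next_id
```

Because only positions are compared, no face feature amounts identifying the viewer ever need to be stored, which matches the privacy consideration noted above.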
- the "viewing position” indicates the position on the display at which each of the extracted persons is looking.
- the viewing position may be indicated by a two-dimensional coordinate system in which an arbitrary position of the display is an origin and an arbitrary direction of the display is an x-axis and a y-axis.
- the browsing position of the extraction ID 0000003 is blank. From this, it can be understood that the viewer with ID V00003 was not looking at the display at 12:00:01 on June 1, 2017.
- the “in-image position” indicates the position in the captured image of each face of the extracted person.
- the position of the face in the photographed image may be indicated by a two-dimensional coordinate system in which an arbitrary position of the photographed image is the origin and arbitrary directions of the photographed image are the x-axis and y-axis.
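One row of the analysis result data described above (FIG. 3) could be modelled as a simple record; the field names below are hypothetical and merely mirror the columns just described.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class AnalysisRecord:
    """One row of analysis result data (cf. FIG. 3)."""
    extraction_id: str                               # new ID per face extraction
    viewer_id: str                                   # stable ID per grouped person
    captured_at: datetime                            # shooting date and time of the frame
    viewing_position: Optional[Tuple[float, float]]  # (x, y) on the display; None when not looking
    in_image_position: Tuple[float, float]           # (x, y) of the face in the captured image
```

A blank browsing position, as with extraction ID 0000003, is naturally represented by `viewing_position = None`.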
- the image analysis unit 12 detects the child area browsing time, which is the time for which the person extracted from the photographed image has been looking at a predetermined child area (the child area viewed at that timing).
- the child area browsing time may be the time for which the child area has been viewed continuously up to that timing without the line of sight leaving the child area, or may be the integrated (cumulative) time for which the child area has been viewed by that timing.
- based on analysis result data as shown in FIG. 3, for example, the image analysis unit 12 detects, for each person extracted from the latest captured image, how long that person has viewed, by that timing, the child area being viewed at that timing.
- the display area 100 of the display D is divided into a plurality of child areas 101 to 104, and the content is displayed in each of the plurality of child areas 101 to 104.
- the number and layout of the plurality of child areas are not limited to those illustrated.
- the image analysis unit 12 holds layout information of the plurality of child areas in advance. Based on the layout information and the viewing position information included in the analysis result data as shown in FIG. 3, the child area viewed by each person can be identified. Then, the image analysis unit 12 can calculate the child area browsing time based on the identification result.
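A minimal sketch of this identification and time calculation follows, assuming a hypothetical four-area layout in normalized display coordinates and viewing-position samples taken at a fixed interval (the integrated-time variant of the browsing time):

```python
# Hypothetical layout: child area ID -> (x_min, y_min, x_max, y_max)
# in normalized display coordinates, mirroring areas 101-104.
LAYOUT = {
    101: (0.0, 0.0, 0.5, 0.5),
    102: (0.5, 0.0, 1.0, 0.5),
    103: (0.0, 0.5, 0.5, 1.0),
    104: (0.5, 0.5, 1.0, 1.0),
}

def child_area_at(viewing_position):
    """Identify the child area containing a viewing position on the
    display, or None if the person is not looking at the display."""
    if viewing_position is None:
        return None
    x, y = viewing_position
    for area_id, (x0, y0, x1, y1) in LAYOUT.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return area_id
    return None

def browsing_times(samples, interval_s):
    """Accumulate per-area browsing time from viewing-position samples
    taken every interval_s seconds."""
    totals = {}
    for pos in samples:
        area = child_area_at(pos)
        if area is not None:
            totals[area] = totals.get(area, 0.0) + interval_s
    return totals
```

Samples whose viewing position is blank (the person is not looking at the display) simply contribute no time to any area.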
- the control unit 13 divides the display area 100 of the display D into a plurality of child areas 101 to 104, and causes contents to be displayed in each of the plurality of child areas 101 to 104.
- the number and layout of the plurality of child areas are not limited to those illustrated.
- the control unit 13 changes the display of the child area viewed by the person extracted from the photographed image. For example, the control unit 13 determines the content to be displayed in the child area viewed by each person extracted from the latest captured image, according to the child area browsing time accumulated by that timing. Then, the control unit 13 displays the determined content in the child area.
- the content to be displayed may be, for example, an advertisement, but is not limited thereto.
- the control unit 13 may display a still image when the child area browsing time is less than the first reference time, and a moving image when the child area browsing time is equal to or more than the first reference time.
- the first reference time is arbitrarily set in advance.
- the still image is not necessarily limited to one in which the entire child area is stationary, and a part of the image (e.g., a character, a symbol that is not the advertisement itself, etc.) may move.
- the control unit 13 may display coupon information when the child area browsing time is equal to or more than the second reference time.
- the second reference time is arbitrarily set in advance.
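The two-threshold rule above can be summarized in a few lines; the concrete reference times used here are illustrative placeholders, since both are set arbitrarily in advance:

```python
def select_content(browse_time_s: float,
                   first_reference_s: float = 5.0,
                   second_reference_s: float = 30.0) -> str:
    """Pick the content type for a child area from its accumulated
    browsing time: still image, then moving image, then coupon."""
    if browse_time_s >= second_reference_s:
        return "coupon"
    if browse_time_s >= first_reference_s:
        return "moving_image"
    return "still_image"
```

Setting `second_reference_s` to the moving image's duration plus `first_reference_s` would make the coupon appear roughly when playback ends.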
- the display is installed at a shopping center where a plurality of stores gather.
- the control unit 13 causes each of the plurality of child areas to display attribute information (name, place, appearance photograph, goods, etc.) of each of the plurality of stores as a still image.
- the control unit 13 then causes the child area to display a moving image including more detailed information about the store, and starts playback.
- when the viewing position of a person leaves the child area during reproduction, the control unit 13 may temporarily pause reproduction of the moving image. Then, when the viewing position of the person returns to the child area, the pause may be cancelled and reproduction resumed from the paused position. In addition, if a person who was looking at the child area is no longer included in the captured image during reproduction (i.e., has left the front of the display), the control unit 13 may stop the reproduction of the moving image and return the child area to the still image.
- the control unit 13 causes the child area to display the coupon information of the store.
- the display of coupon information in accordance with the end of playback of a moving image can be realized by appropriately setting the second reference time.
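The pause / resume / stop behaviour described for this shopping-center example amounts to a small state machine. The state names, and the simplification that playback starts as soon as the viewer looks (rather than after the first reference time elapses), are assumptions made for illustration:

```python
class ChildAreaPlayer:
    """Playback state of the moving image in one child area, driven by
    whether its viewer is present in the captured image and looking."""

    def __init__(self):
        self.state = "still"    # "still" | "playing" | "paused"
        self.position_s = 0.0   # current playback position in seconds

    def update(self, viewer_present: bool, viewer_looking: bool, dt_s: float) -> str:
        if not viewer_present:
            # Viewer left the front of the display: stop playback and
            # return the child area to the still image.
            self.state, self.position_s = "still", 0.0
        elif viewer_looking:
            if self.state in ("still", "paused"):
                self.state = "playing"  # start, or resume from the paused position
            self.position_s += dt_s
        elif self.state == "playing":
            self.state = "paused"       # gaze left the child area: pause
        return self.state
```

Calling `update` once per analyzed frame keeps the playback position aligned with how long the viewer has actually been watching.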
- the control unit 13 can individually change the content to be displayed in each of the plurality of child areas without changing the layout (size, arrangement method, and the like) of the plurality of child areas in the display.
- the control unit 13 may operate the display in a power saving mode when no human face is extracted from the captured image.
- the power consumption is smaller than that in the normal mode.
- the brightness of the display is lower than in the normal mode.
- the control unit 13 may introduce stores and the like with detailed videos in a child area that a viewer is watching, and with digest videos or simple information in a child area that no viewer is watching.
- control unit 13 may display a still image that is a part of the moving image in a child area where there is no viewer, and may display a moving image in a child area where the viewer is present.
- the control unit 13 may display a moving image of a digest or simplified information (e.g., information introducing an outline of a store or the like), and when the child area browsing time becomes equal to or more than the first reference time, may display a moving image of detailed information (e.g., information introducing details of the store or the like).
- the acquisition unit 11 acquires a photographed image obtained by photographing the periphery of the display.
- the image analysis unit 12 analyzes the captured image to extract the face of a person from the captured image, and detects the position on the display viewed by the extracted person. In addition, based on, for example, analysis result data as shown in FIG. 3, the image analysis unit 12 detects, for each person extracted from the latest captured image, the time for which that person has been viewing the child area viewed at that timing (the child area browsing time).
- the control unit 13 changes the display of the child area viewed by the person extracted from the photographed image, according to the child area browsing time of that child area. For example, the control unit 13 determines the content to be displayed in the child area according to the child area browsing time. Then, the control unit 13 displays the determined content in the child area.
- because the content displayed in the child area viewed by the viewer is not fixed but changes, the viewer's attention to the child area can be increased.
- the viewer's attention to the child area can be further increased.
- the processing load on the display can be reduced by changing only the content displayed in a part of the display (the child area viewed by the viewer), instead of changing the contents of all of the plurality of child areas.
- the time for which a person is looking at a predetermined child area (the child area browsing time) is detected, and the content to be displayed in the predetermined child area can be determined accordingly. By changing the displayed content as the child area viewing time changes, it is possible to attract the viewer's interest without the viewer tiring of the content displayed in the child area.
- if the content to be displayed were changed only according to what the viewer sees, without considering the child area browsing time, the content switching process might be performed unnecessarily. For example, the content might be changed merely because a passerby glanced at it.
- because the content is switched only when the child area browsing time exceeds the threshold value, unnecessary content switching processing can be avoided.
- the information providing system of the present embodiment differs from the first embodiment in the configuration of the information providing apparatus 10. That is, the configurations of the display and the camera are the same as in the first embodiment.
- the information providing apparatus 10 differs from the first embodiment in that it determines the control content of the display based on the child area viewing time described in the first embodiment and on the number of persons viewing each of the plurality of child areas.
- the other configuration is the same as that of the first embodiment. The details will be described below.
- An example of the hardware configuration of the information providing device 10 is the same as that of the first embodiment.
- An example of a functional block diagram of the information providing apparatus 10 of the present embodiment is shown in FIG. 2, as in the first embodiment.
- the information providing apparatus 10 includes an acquisition unit 11, an image analysis unit 12, and a control unit 13.
- the configuration of the acquisition unit 11 is the same as that of the first embodiment.
- the image analysis unit 12 further has a function of detecting the number of people looking at each of the plurality of child areas.
- the image analysis unit 12 detects the number of people viewing each of the plurality of child areas in the latest captured image. Also, as described in the first embodiment, the image analysis unit 12 detects, for each person extracted from the latest captured image, the time for which that person has been viewing the child area viewed at that timing (the child area browsing time).
- control unit 13 divides the display area 100 of the display D into a plurality of child areas 101 to 104, and causes the contents to be displayed on each of the plurality of child areas 101 to 104.
- the number and layout of the plurality of child areas are not limited to those illustrated.
- control unit 13 changes the display of the child area viewed by the person extracted from the photographed image. For example, the control unit 13 changes the content to be displayed in the child area.
- the control unit 13 determines the content to be displayed in the child area that each person is viewing at that timing, according to the child area browsing time of that child area and the number of people, extracted from the latest captured image, who are watching that child area at that timing. Then, the determined content is displayed in the child area.
- the control unit 13 can individually change the content to be displayed in each of the plurality of child areas without changing the layout (size, arrangement method, and the like) of the plurality of child areas in the display.
- the control unit 13 may display a still image when the number of persons whose child area browsing time is equal to or more than the third reference time is less than the first reference number, and may display a moving image when the number of persons whose child area browsing time is equal to or more than the third reference time is equal to or more than the first reference number.
- the third reference time and the first reference number are arbitrarily set in advance.
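The rule above can be sketched by counting how many viewers have watched a child area for at least the third reference time and comparing that count against the first reference number. The threshold values are illustrative assumptions, since the patent states only that they are set arbitrarily in advance.

```python
THIRD_REFERENCE_TIME = 5.0   # seconds; illustrative assumption
FIRST_REFERENCE_NUMBER = 2   # persons; illustrative assumption

def select_content_by_count(viewing_times: list[float]) -> str:
    """viewing_times: child area browsing time of each person watching the area."""
    count = sum(1 for t in viewing_times if t >= THIRD_REFERENCE_TIME)
    # Switch from a still image to a moving image once enough sustained viewers gather.
    return "moving_image" if count >= FIRST_REFERENCE_NUMBER else "still_image"
```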
- the still image is not necessarily limited to one in which the entire child area is stationary, and a part of the image (e.g., a character, a symbol that is not the advertisement itself, etc.) may move.
- when the number of persons whose child area browsing time is equal to or more than the third reference time is less than the first reference number, the control unit 13 may display a moving image of a digest or simplified information (e.g., information introducing an outline of a store or the like), and when that number is equal to or more than the first reference number, may display a moving image of detailed information (e.g., information introducing details of the store or the like).
- control unit 13 may display coupon information when the number of persons whose child area browsing time is equal to or more than the fourth reference time becomes equal to or more than the second reference number.
- the fourth reference time and the second reference number are arbitrarily set in advance.
- control unit 13 may change the layout of the plurality of child areas in the display according to the number of people viewing each of the plurality of child areas. For example, the control unit 13 may set the ratio of the occupied area of at least one of the plurality of child areas in the display according to the ratio of the number of persons viewing the child area. In addition, the control unit 13 may increase the ratio of the occupied area as the number of viewers increases.
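One possible reading of the layout rule above is to give each child area a share of the display area proportional to the number of people currently viewing it, with equal shares when nobody is watching. This is an illustrative sketch, not the only layout scheme the description allows.

```python
def area_ratios(viewer_counts: list[int]) -> list[float]:
    """Occupied-area ratio of each child area, proportional to its viewer count."""
    total = sum(viewer_counts)
    if total == 0:
        # Nobody is watching: fall back to equal shares.
        return [1.0 / len(viewer_counts)] * len(viewer_counts)
    return [c / total for c in viewer_counts]
```

With this rule, the occupied area grows as the number of viewers of a child area increases, as the description suggests.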
- the control unit 13 may switch the brightness of each of the plurality of child areas in the display according to the number of people viewing each of the plurality of child areas. In this case, the control unit 13 may lower the luminance when the number of people looking is less than a predetermined number (e.g., less than one), and may raise the luminance when the number of people looking is equal to or more than the predetermined number (e.g., one or more). In addition, the control unit 13 may increase the luminance as the number of viewers increases.
- control unit 13 may operate the display in the power saving mode when the face of a person is not extracted from the captured image.
- when the face of a person is extracted from the captured image but the number of people looking at the display is less than a predetermined number (for example, less than one), the control unit 13 may cause the display to perform an action that draws people's attention. For example, a sound such as "Provide advantageous information" may be output from the display, or people may be alerted by blinking the screen or the like.
- the acquisition unit 11 acquires a photographed image obtained by photographing the periphery of the display.
- the image analysis unit 12 analyzes the captured image to extract the face of a person from the captured image, and detects the position on the display viewed by the extracted person.
- based on, for example, analysis result data as shown in FIG. 3, the image analysis unit 12 detects, for each person extracted from the latest captured image, the time for which that person has been viewing the child area viewed at that timing (the child area viewing time), and the number of persons viewing each of the plurality of child areas at that timing.
- the control unit 13 changes the display of the child area according to the child area viewing time of the child area viewed by the person extracted from the photographed image and the number of people looking at it. For example, the control unit 13 determines the content to be displayed in the child area according to the child area browsing time and the number of people watching. Then, the control unit 13 displays the determined content in the child area. For example, the control unit 13 may display a still image when the number of persons whose child area browsing time is equal to or more than the third reference time is less than the first reference number, and may display a moving image when that number is equal to or more than the first reference number.
- according to the information providing system of the present embodiment, it is possible to switch the content displayed in the child area in response to a predetermined number or more of sight lines being gathered on it. With such a method, for example, when a group of people is viewing the display, the content that the group is interested in can be easily recognized by switching the displayed content.
- the advertising effectiveness is enhanced because an increase in the number of viewers of the advertisement can be expected.
- the information providing system of the present embodiment is different from the first and second embodiments in the configuration of the information providing apparatus 10. That is, the configurations of the display and the camera are the same as in the first and second embodiments. The details will be described below.
- An example of the hardware configuration of the information providing device 10 is the same as in the first and second embodiments.
- An example of a functional block diagram of the information providing apparatus 10 of the present embodiment is shown in FIG. 2, as in the first and second embodiments.
- the information providing apparatus 10 includes an acquisition unit 11, an image analysis unit 12, and a control unit 13.
- the configuration of the acquisition unit 11 is the same as in the first and second embodiments.
- the image analysis unit 12 performs the first detection process described in the first embodiment to obtain analysis result data as shown in FIG. 3, for example.
- the image analysis unit 12 may perform the second detection process described in the first embodiment.
- the image analysis unit 12 detects the number of people looking at the display in the latest captured image, and the time for which the display has been viewed up to that timing (the display browsing time). Further, the image analysis unit 12 may detect the position on the display that each of the extracted persons is watching.
- the display viewing time may be the time for which the display has been viewed continuously up to that timing without the line of sight leaving the display, or may be the cumulative time for which the display has been viewed up to that timing.
- the control unit 13 determines the control content of the display, for example, the content to be displayed, the operation mode, etc., according to the detection result of the image analysis unit 12.
- the control unit 13 may hold in advance correspondence information in which the elapsed time from sight line detection (the display browsing time), the number of people watching the display, and the control content of the display are associated with one another, as shown in FIG. 6, for example. Then, the control unit 13 may determine the control content of the display based on the correspondence information.
- the numerical values and the contents indicated by the correspondence information in FIG. 6 are merely examples, and may be other contents.
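The flow driven by such correspondence information can be sketched as a simple decision function: power saving when no face is detected, an attention-drawing action when faces are present but nobody is looking, a guidance message for short viewing, and child-area content afterwards. The 10-second boundary and the action names are illustrative assumptions, not the actual contents of FIG. 6.

```python
GUIDANCE_TIME_LIMIT = 10.0  # seconds; illustrative assumption

def control_action(face_detected: bool, viewers: int, browsing_time: float) -> str:
    """Decide the display control content from the image analysis result."""
    if not face_detected:
        return "power_saving"        # nobody in front of the display
    if viewers == 0:
        return "attract_attention"   # e.g. sound output or screen blinking
    if browsing_time < GUIDANCE_TIME_LIMIT:
        return "guidance_message"    # shown on the entire display
    return "child_area_content"      # display divided into child areas
```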
- the correspondence information shown in FIG. 6 will be described.
- when the face of a person is not extracted from the photographed image (gaze detection: 0 seconds or more and less than 2 seconds; face not detected), that is, when no one is facing the front of the display, the control unit 13 operates the display in the power saving mode.
- when the control unit 13 extracts a human face from the captured image but the number of people looking at the display is 0 (gaze detection: 0 seconds or more and less than 2 seconds; face detected; gaze detection 0), the control unit 13 causes the display to perform an action that draws people's attention. For example, a sound such as "Provide advantageous information" may be output from the display, or people may be alerted by blinking the screen or the like.
- control unit 13 causes the display to execute processing according to the number of people looking at the display and the display viewing time.
- the control unit 13 switches from the still image to a guidance message (which may be a digest or a moving image) indicating that the provision of predetermined information will start. This guidance message is displayed on the entire display. That is, at this stage, the display area of the display is not divided into a plurality of child areas.
- the still image is not necessarily limited to one in which the entire display is stationary, and a part of the image (eg, characters, symbols, etc. that are not the advertisement itself) may move.
- the information to be provided corresponds to the number of people watching the display. For example, if one person is watching, information on a store that is easy to enter alone (e.g., a restaurant) may be provided; if two people are watching, information on stores for couples (e.g., date spots) may be provided; and if three or more people are watching, information on stores for families (e.g., restaurants) may be provided.
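The example above maps the number of people watching the display to a store category. A minimal sketch follows; the category names are illustrative assumptions introduced here.

```python
def store_category(viewer_count: int) -> str:
    """Choose a store category from the size of the group watching the display."""
    if viewer_count <= 1:
        return "solo_friendly"  # e.g. a restaurant easy to enter alone
    if viewer_count == 2:
        return "couples"        # e.g. date spots
    return "family"             # e.g. family restaurants
```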
- the control unit 13 ends the guidance message.
- the display area of the display is divided into a plurality of child areas (for example, four), and information of each of the plurality of stores is displayed in each of the plurality of child areas.
- attribute information (name, place, appearance photograph, goods, etc.)
- the still image is not necessarily limited to one in which the entire child area is stationary, and a part of the image (e.g., a character, a symbol that is not the advertisement itself, etc.) may move.
- the stores to which viewers are guided correspond to the number of people looking at the display. For example, if one person is watching, the display may guide them to a store that is easy to enter alone (e.g., a restaurant); if two people are watching, to a store for couples (e.g., a date spot); and if three or more people are watching, to a family-oriented store (e.g., a restaurant).
- the control unit 13 determines, from among the plurality of child areas, the child area that has attracted attention (e.g., the child area with the largest number of viewers, or the child area with the longest viewing time). Then, the layout of the display area of the display is changed, and the information of the store displayed in the child area attracting attention is displayed on a large screen. In addition, a moving image showing detailed information of the store is reproduced on the large screen.
- instead of switching to the large screen display, the control unit 13 may keep the layout of the plurality of child areas in the display unchanged and change only the content displayed in the child area attracting attention to a moving image.
- control unit 13 can display a still image when the display browsing time is less than the fifth reference time, and can display a moving image when the display browsing time is the fifth reference time or more.
- the fifth reference time is arbitrarily set in advance.
- control unit 13 causes the display to display coupon information of the store regarding the moving image being reproduced.
- control unit 13 can display the coupon information on the display when the display browsing time is equal to or more than the sixth reference time.
- the sixth reference time is arbitrarily set in advance.
- the control unit 13 may divide the display area of the display into a plurality of child areas and, while displaying content in each of the plurality of child areas, set the ratio of the occupied area of at least one of the plurality of child areas in the display according to the ratio of the number of people looking at that child area. In addition, the control unit 13 may switch the brightness of each of the plurality of child areas in the display according to the number of people viewing each of the plurality of child areas.
- An example of the process flow of the information providing apparatus 10 of the present embodiment is the same as that of the second embodiment.
- the information providing system of the present embodiment it is possible to switch the display of the display according to the number of people viewing the display and the display viewing time.
- the number of people watching the display represents the attributes of the person or group watching the display.
- Providing content according to the number of people watching the display can provide information more suitable for the viewer.
- the control unit 13 may cause the display to display information indicating the position viewed by the person on the display, which is extracted from the captured image.
- An example is shown in FIG. 7. In the example shown in FIG. 7, the viewing positions of each of the three viewers are clearly indicated by the marks M1 to M3.
- the plurality of marks may be distinguishable in appearance from each other by color, shape, or the like.
- the viewer can easily grasp the position on the display viewed by oneself or another person.
- in addition to being able to enjoy this like a game, the viewer can easily grasp the trends of other people.
- the control unit 13 may control the display such that different contents can be viewed according to the viewing direction. For example, by separating the light from the display in multiple directions, the content viewed from each direction can be made different.
- the display can be realized by using a viewing angle control technology. Further, by using the result of the image analysis by the image analysis unit 12, it is possible to specify which one of the plurality of directions the viewer is looking from.
- as a result, the playback timing of the moving image and the display timing of the coupon information described above can be made different for each viewer.
- the control unit 13 may determine the content to be displayed on the display such that a moving image can be seen from a position or angle where the number of people watching is relatively large, and a still image can be seen from a position or angle where the number of people watching is relatively small.
- the still image is not necessarily limited to one in which the entire child area is stationary, and a part of the image (e.g., a character, a symbol that is not the advertisement itself, etc.) may move.
- for example, the content seen from the right side of the display, the content seen from the front, and the content seen from the left side can be made different. When the number of people viewing from the right is equal to or more than the reference number, or the child area viewing time is equal to or more than the third reference time (either one of the conditions may be used), a moving image is displayed for viewers on the right. When the number of people viewing from the front or the left is less than the reference number, or the child area viewing time is less than the third reference time (either one of the conditions may be used), a still image is displayed for viewers on the front or the left.
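The direction-dependent rule above can be sketched per viewing direction: a moving image is shown to a direction once either its viewer count reaches the reference number or its viewing time reaches the third reference time. The threshold values are illustrative assumptions.

```python
REFERENCE_NUMBER = 2        # persons; illustrative assumption
THIRD_REFERENCE_TIME = 5.0  # seconds; illustrative assumption

def content_for_direction(viewers: int, viewing_time: float) -> str:
    """Content shown toward one viewing direction (right, front, or left)."""
    if viewers >= REFERENCE_NUMBER or viewing_time >= THIRD_REFERENCE_TIME:
        return "moving_image"
    return "still_image"
```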
- the information providing apparatus 10 may aggregate analysis results by the image analysis unit 12 and generate aggregated data. 8 to 11 show an example of the aggregated data.
- the aggregate data in FIG. 8 is a graph of the display viewing time calculated for each viewer who is currently viewing the display.
- the aggregated data in FIG. 9 is a graph of the number of viewers calculated for each time zone.
- the aggregated data in FIG. 10 is a graph of the total child area browsing time calculated for each child area (panel) (for example, per day).
- the aggregated data in FIG. 11 is a graph of the average display browsing time per viewer.
- the usage condition of the display can be quantified. Then, based on the aggregated data, the arrangement position of the display, the content of the content, and the like can be changed.
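The four kinds of aggregated data in FIGS. 8 to 11 can be sketched as a single aggregation pass over per-observation records. The record fields (`viewer`, `panel`, `hour`, `seconds`) are assumptions introduced for illustration; the patent does not specify a data format.

```python
from collections import defaultdict

def aggregate(records: list[dict]) -> dict:
    """records: [{'viewer': str, 'panel': str, 'hour': int, 'seconds': float}, ...]"""
    per_viewer = defaultdict(float)      # FIG. 8: browsing time per viewer
    per_hour_viewers = defaultdict(set)  # FIG. 9: viewers per time zone
    per_panel = defaultdict(float)       # FIG. 10: total time per child area (panel)
    for r in records:
        per_viewer[r["viewer"]] += r["seconds"]
        per_hour_viewers[r["hour"]].add(r["viewer"])
        per_panel[r["panel"]] += r["seconds"]
    # FIG. 11: average display browsing time per viewer
    average = sum(per_viewer.values()) / len(per_viewer) if per_viewer else 0.0
    return {
        "per_viewer": dict(per_viewer),
        "viewers_per_hour": {h: len(v) for h, v in per_hour_viewers.items()},
        "per_panel": dict(per_panel),
        "average_per_viewer": average,
    }
```

Such quantified usage data supports the decisions mentioned above, e.g. changing the display's location or its content.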
- 1. An information providing apparatus comprising: acquisition means for acquiring a photographed image obtained by photographing the periphery of a display; image analysis means for detecting a position on the display viewed by a person extracted from the photographed image; and control means for dividing a display area of the display into a plurality of child areas, causing contents to be displayed in each of the plurality of child areas, and changing a display of the child area viewed by the person extracted from the photographed image.
- 2. The information providing apparatus according to 1, wherein the image analysis means detects a child area browsing time, which is a time for which a person extracted from the photographed image has been looking at a predetermined child area, and the control means determines, according to the child area browsing time, the content to be displayed in the child area viewed by the person extracted from the photographed image.
- 3. The information providing apparatus according to 2, wherein the control means displays a still image when the child area browsing time is less than a first reference time, and displays a moving image when the child area browsing time is equal to or more than the first reference time.
- 4. The information providing apparatus according to 2 or 3, wherein the control means displays coupon information when the child area browsing time becomes equal to or more than a second reference time.
- 5. The information providing apparatus according to 2, wherein the image analysis means further detects the number of people viewing each of the plurality of child areas, and the control means determines, according to the child area browsing time and the number of people viewing each of the plurality of child areas, the content to be displayed in the child area viewed by the person extracted from the photographed image.
- 6. The information providing apparatus according to 5, wherein the control means displays a still image when the number of persons whose child area browsing time is equal to or more than a third reference time is less than a first reference number, and displays a moving image when the number of persons whose child area browsing time is equal to or more than the third reference time is equal to or more than the first reference number.
- 7. The information providing apparatus according to 5 or 6, wherein the control means displays coupon information when the number of persons whose child area browsing time is equal to or more than a fourth reference time becomes equal to or more than a second reference number.
- 8. An information providing apparatus comprising: acquisition means for acquiring a photographed image obtained by photographing the periphery of a display; image analysis means for extracting the face of a person from the photographed image by analyzing the photographed image, and detecting the number of people looking at the display and a display browsing time, which is a time for which the display has been viewed; and control means for determining the content to be displayed on the display according to the detection result.
- 9. The information providing apparatus according to 8, wherein the control means displays a still image when the display browsing time is less than a fifth reference time, and displays a moving image when the display browsing time is equal to or more than the fifth reference time.
- 10. The information providing apparatus according to 8 or 9, wherein the control means causes the display to display coupon information when the display browsing time becomes equal to or more than a sixth reference time.
- 11. The information providing apparatus according to any one of 5 to 10, wherein the control means causes the display to perform an action that draws people's attention when the face of a person is extracted from the photographed image but the number of people looking at the display is less than a predetermined number.
- 12. The information providing apparatus according to any one of 5 to 11, wherein the control means sets a ratio of an occupied area of at least one of the plurality of child areas in the display according to a ratio of the number of people looking at that child area.
- 13. The information providing apparatus according to any one of 5 to 12, wherein the control means switches the luminance of each of the plurality of child areas in the display according to the number of people viewing each of the plurality of child areas.
- 14. The information providing apparatus according to any one of 1 to 13, wherein the control means individually changes the content to be displayed in each of the plurality of child areas without changing the layout of the plurality of child areas in the display.
- 15. The information providing apparatus according to any one of 1 to 14, wherein the control means causes the display to operate in a power saving mode when the face of a person is not extracted from the photographed image.
- 16. The information providing apparatus wherein the image analysis means detects a position on the display at which the extracted person is looking, and the control means causes the display to display information indicating the position on the display viewed by the person extracted from the photographed image.
- 17. The information providing apparatus wherein the control means controls the display such that different contents can be seen according to the viewing direction.
- 18. The information providing apparatus wherein the control means determines the content to be displayed on the display such that a moving image can be seen from a direction in which the number of viewers is relatively large, and a still image can be seen from a direction in which the number of viewers is relatively small.
- 19. An information providing method in which a computer performs: an acquisition step of acquiring a photographed image obtained by photographing the periphery of a display; an image analysis step of detecting a position on the display viewed by a person extracted from the photographed image; and a control step of dividing a display area of the display into a plurality of child areas, causing contents to be displayed in each of the plurality of child areas, and changing a display of the child area viewed by the person extracted from the photographed image.
- 20. A program for causing a computer to function as: acquisition means for acquiring a photographed image obtained by photographing the periphery of a display; image analysis means for detecting a position on the display viewed by a person extracted from the photographed image; and control means for dividing a display area of the display into a plurality of child areas, causing contents to be displayed in each of the plurality of child areas, and changing a display of the child area viewed by the person extracted from the photographed image.
- 21. An information providing method in which a computer performs: an acquisition step of acquiring a photographed image obtained by photographing the periphery of a display; an image analysis step of extracting the face of a person from the photographed image by analyzing the photographed image, and detecting the number of people looking at the display and a display browsing time, which is a time for which the display has been viewed; and a control step of determining the content to be displayed on the display according to the detection result.
- 22. A program for causing a computer to function as: acquisition means for acquiring a photographed image obtained by photographing the periphery of a display; image analysis means for extracting the face of a person from the photographed image by analyzing the photographed image, and detecting the number of people looking at the display and a display browsing time, which is a time for which the display has been viewed; and control means for determining the content to be displayed on the display according to the detection result.
Description
There is provided an information providing apparatus having:
acquisition means for acquiring a photographed image obtained by photographing the periphery of a display;
image analysis means for detecting a position on the display viewed by a person extracted from the photographed image; and
control means for dividing a display area of the display into a plurality of child areas, causing contents to be displayed in each of the plurality of child areas, and changing a display of the child area viewed by the person extracted from the photographed image.
There is provided an information providing apparatus having:
acquisition means for acquiring a photographed image obtained by photographing the periphery of a display;
image analysis means for extracting the face of a person from the photographed image by analyzing the photographed image, and detecting the number of people looking at the display and a display browsing time, which is a time for which the display has been viewed; and
control means for determining the content to be displayed on the display according to the detection result.
There is provided an information providing method in which a computer performs:
an acquisition step of acquiring a photographed image obtained by photographing the periphery of a display;
an image analysis step of detecting a position on the display viewed by a person extracted from the photographed image; and
a control step of dividing a display area of the display into a plurality of child areas, causing contents to be displayed in each of the plurality of child areas, and changing a display of the child area viewed by the person extracted from the photographed image.
There is provided a program for causing a computer to function as:
acquisition means for acquiring a photographed image obtained by photographing the periphery of a display;
image analysis means for detecting a position on the display viewed by a person extracted from the photographed image; and
control means for dividing a display area of the display into a plurality of child areas, causing contents to be displayed in each of the plurality of child areas, and changing a display of the child area viewed by the person extracted from the photographed image.
There is provided an information providing method in which a computer executes:
an acquisition step of acquiring a photographed image obtained by photographing the periphery of a display;
an image analysis step of extracting the face of a person from the photographed image by analyzing the photographed image, and detecting the number of people looking at the display and a display browsing time, which is a time for which the display has been viewed; and
a control step of determining the content to be displayed on the display according to the detection result.
There is provided a program for causing a computer to function as:
acquisition means for acquiring a photographed image obtained by photographing the periphery of a display;
image analysis means for extracting the face of a person from the photographed image by analyzing the photographed image, and detecting the number of people looking at the display and a display browsing time, which is a time for which the display has been viewed; and
control means for determining the content to be displayed on the display according to the detection result.
まず、本実施形態の情報提供システムの全体像を説明する。本実施形態の情報提供システムは、いわゆるデジタルサイネージである(以下のすべての実施形態も同様)。情報提供システムは、任意の場所に設置されたディスプレイと、当該ディスプレイの周辺を撮影するカメラと、当該ディスプレイを制御する情報提供装置とを有する。
本実施形態の情報提供システムは、情報提供装置10の構成において、第1の実施形態と異なる。すなわち、ディスプレイ及びカメラの構成は第1の実施形態と同様である。
本実施形態の情報提供システムは、情報提供装置10の構成において、第1及び第2の実施形態と異なる。すなわち、ディスプレイ及びカメラの構成は第1及び第2の実施形態と同様である。以下、詳細に説明する。
第1乃至第3の実施形態に適用可能な変形例を説明する。
The control unit 13 may cause the display to show information indicating the position on the display at which a person extracted from the captured image is looking. FIG. 7 shows an example: the viewing positions of three viewers are indicated by the marks M1 to M3. The marks may be made visually distinguishable from one another by color, shape, or the like.
The control unit 13 may control the display so that different content is visible depending on the viewing direction. For example, by separating the light emitted from the display into a plurality of directions, the content visible from each direction can be made to differ; such display can be realized with viewing-angle control technology. In addition, the results of image analysis by the image analysis unit 12 can be used to identify from which of the plurality of directions each viewer is looking.
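The step of assigning each viewer to one of the display's output directions could, for instance, quantize the face's horizontal position in the camera frame. This helper is illustrative only; the document states that image analysis identifies the direction but does not specify how:

```python
def viewing_direction(face_x, frame_width, n_directions=3):
    """Map a face's horizontal pixel position in the camera frame to one of
    n_directions viewing-direction buckets (0 = leftmost). Assumes the camera
    faces the viewers head-on, which is a simplification of the document."""
    bucket = int(face_x * n_directions / frame_width)
    # Clamp so positions exactly at the frame edge stay in a valid bucket.
    return min(max(bucket, 0), n_directions - 1)
```

With three directions and a 640-pixel-wide frame, a face at x = 320 falls in the middle bucket; the controller could then route a moving image to the bucket with the most viewers, per the modification described above.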
The information providing apparatus 10 may aggregate the analysis results produced by the image analysis unit 12 to generate aggregated data. FIGS. 8 to 11 show examples of such aggregated data.
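A simple form of such aggregation, assuming the image analysis emits (hour, sub-region, viewing-seconds) events — the event format is hypothetical, and FIGS. 8 to 11 are not reproduced here — could be:

```python
from collections import defaultdict

def aggregate(events):
    """Tally viewer counts and total viewing time per (hour, sub-region).

    events: iterable of (hour, region_id, viewing_seconds) tuples, one per
    detected viewing episode. Returns two dicts keyed by (hour, region_id).
    """
    counts = defaultdict(int)        # how many viewing episodes occurred
    total_time = defaultdict(float)  # cumulative seconds of viewing
    for hour, region, secs in events:
        counts[(hour, region)] += 1
        total_time[(hour, region)] += secs
    return counts, total_time
```

Summaries of this kind are what would back per-region, per-hour reports such as those suggested by the figures.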
1. An information providing apparatus comprising:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that detects the position on the display at which a person extracted from the captured image is looking; and
a control unit that divides the display area of the display into a plurality of sub-regions, causes content to be displayed in each of the sub-regions, and changes the display of the sub-region at which the person extracted from the captured image is looking.
2. The information providing apparatus according to 1, wherein
the image analysis unit detects a sub-region viewing time, which is the time for which a person extracted from the captured image has been looking at a given sub-region, and
the control unit determines, according to the sub-region viewing time, the content to be displayed in the sub-region at which the person extracted from the captured image is looking.
3. The information providing apparatus according to 2, wherein
the control unit causes a still image to be displayed when the sub-region viewing time is less than a first reference time, and causes a moving image to be displayed when the sub-region viewing time is equal to or longer than the first reference time.
4. The information providing apparatus according to 2 or 3, wherein
the control unit causes coupon information to be displayed when the sub-region viewing time reaches a second reference time.
5. The information providing apparatus according to 2, wherein
the image analysis unit further detects the number of people looking at each of the plurality of sub-regions, and
the control unit determines, according to the sub-region viewing time and the number of people looking at each of the sub-regions, the content to be displayed in the sub-region at which the person extracted from the captured image is looking.
6. The information providing apparatus according to 5, wherein
the control unit causes a still image to be displayed when the number of people whose sub-region viewing time is equal to or longer than a third reference time is less than a first reference number, and causes a moving image to be displayed when that number is equal to or greater than the first reference number.
7. The information providing apparatus according to 5 or 6, wherein
the control unit causes coupon information to be displayed when the number of people whose sub-region viewing time is equal to or longer than a fourth reference time reaches a second reference number.
8. An information providing apparatus comprising:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that analyzes the captured image to extract human faces from it and to detect the number of people looking at the display and a display viewing time, which is the time for which the display is looked at; and
a control unit that determines, according to the detection results, the content to be displayed on the display.
9. The information providing apparatus according to 8, wherein
the control unit causes a still image to be displayed when the display viewing time is less than a fifth reference time, and causes a moving image to be displayed when the display viewing time is equal to or longer than the fifth reference time.
10. The information providing apparatus according to 8 or 9, wherein
the control unit causes coupon information to be displayed on the display when the display viewing time reaches a sixth reference time.
11. The information providing apparatus according to any one of 5 to 10, wherein
the control unit causes the display to perform an action that attracts people's attention when human faces are extracted from the captured image but the number of people looking at the display is less than a predetermined number.
12. The information providing apparatus according to any one of 5 to 11, wherein
the control unit makes the proportion of the display area occupied by at least one of the plurality of sub-regions correspond to the proportion of the number of people looking at that sub-region.
13. The information providing apparatus according to any one of 5 to 12, wherein
the control unit switches the luminance of each of the plurality of sub-regions on the display according to the number of people looking at each of the sub-regions.
14. The information providing apparatus according to any one of 1 to 13, wherein
the control unit changes the content displayed in each of the plurality of sub-regions individually, without changing the layout of the sub-regions on the display.
15. The information providing apparatus according to any one of 1 to 14, wherein
the control unit operates the display in a power-saving mode when no human face is extracted from the captured image.
16. The information providing apparatus according to any one of 1 to 15, wherein
the image analysis unit detects the position on the display at which an extracted person is looking, and
the control unit causes the display to show information indicating the position on the display at which the person extracted from the captured image is looking.
17. The information providing apparatus according to any one of 1 to 16, wherein
the control unit controls the display so that different content is visible depending on the viewing direction.
18. The information providing apparatus according to any one of 1 to 17, wherein
the control unit determines the content to be displayed on the display so that a moving image is visible from directions with a relatively large number of viewers and a still image is visible from directions with a relatively small number of viewers.
19. An information providing method in which a computer performs:
an acquisition step of acquiring a captured image of the area around a display;
an image analysis step of detecting the position on the display at which a person extracted from the captured image is looking; and
a control step of dividing the display area of the display into a plurality of sub-regions, causing content to be displayed in each of the sub-regions, and changing the display of the sub-region at which the person extracted from the captured image is looking.
20. A program that causes a computer to function as:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that detects the position on the display at which a person extracted from the captured image is looking; and
a control unit that divides the display area of the display into a plurality of sub-regions, causes content to be displayed in each of the sub-regions, and changes the display of the sub-region at which the person extracted from the captured image is looking.
21. An information providing method in which a computer executes:
an acquisition step of acquiring a captured image of the area around a display;
an image analysis step of analyzing the captured image to extract human faces from it and to detect the number of people looking at the display and a display viewing time, which is the time for which the display is looked at; and
a control step of determining, according to the detection results, the content to be displayed on the display.
22. A program that causes a computer to function as:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that analyzes the captured image to extract human faces from it and to detect the number of people looking at the display and a display viewing time, which is the time for which the display is looked at; and
a control unit that determines, according to the detection results, the content to be displayed on the display.
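The still/moving/coupon switching of items 3 to 7 reduces to threshold comparisons. A sketch, with illustrative values for the reference times and reference numbers that the text deliberately leaves unspecified:

```python
def choose_content(viewing_times, t_video=5.0, t_coupon=15.0,
                   n_video=2, n_coupon=5):
    """Pick content for one sub-region from per-person viewing times (seconds).

    The thresholds stand in for the 'reference times' and 'reference numbers'
    of items 3-7; the actual values are not given in the document.
    """
    if sum(t >= t_coupon for t in viewing_times) >= n_coupon:
        return "coupon"   # enough long-duration viewers (item 7)
    if sum(t >= t_video for t in viewing_times) >= n_video:
        return "video"    # enough engaged viewers to warrant motion (item 6)
    return "still"        # few engaged viewers: keep a still image (item 6)
```

Items 3 and 4, which condition only on a single person's viewing time, are the degenerate case of this function with a one-element list and reference numbers of 1.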
Claims (22)
- An information providing apparatus comprising:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that detects the position on the display at which a person extracted from the captured image is looking; and
a control unit that divides the display area of the display into a plurality of sub-regions, causes content to be displayed in each of the sub-regions, and changes the display of the sub-region at which the person extracted from the captured image is looking.
- The information providing apparatus according to claim 1, wherein
the image analysis unit detects a sub-region viewing time, which is the time for which a person extracted from the captured image has been looking at a given sub-region, and
the control unit determines, according to the sub-region viewing time, the content to be displayed in the sub-region at which the person extracted from the captured image is looking.
- The information providing apparatus according to claim 2, wherein
the control unit causes a still image to be displayed when the sub-region viewing time is less than a first reference time, and causes a moving image to be displayed when the sub-region viewing time is equal to or longer than the first reference time.
- The information providing apparatus according to claim 2 or 3, wherein
the control unit causes coupon information to be displayed when the sub-region viewing time reaches a second reference time.
- The information providing apparatus according to claim 2, wherein
the image analysis unit further detects the number of people looking at each of the plurality of sub-regions, and
the control unit determines, according to the sub-region viewing time and the number of people looking at each of the sub-regions, the content to be displayed in the sub-region at which the person extracted from the captured image is looking.
- The information providing apparatus according to claim 5, wherein
the control unit causes a still image to be displayed when the number of people whose sub-region viewing time is equal to or longer than a third reference time is less than a first reference number, and causes a moving image to be displayed when that number is equal to or greater than the first reference number.
- The information providing apparatus according to claim 5 or 6, wherein
the control unit causes coupon information to be displayed when the number of people whose sub-region viewing time is equal to or longer than a fourth reference time reaches a second reference number.
- An information providing apparatus comprising:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that analyzes the captured image to extract human faces from it and to detect the number of people looking at the display and a display viewing time, which is the time for which the display is looked at; and
a control unit that determines, according to the detection results, the content to be displayed on the display.
- The information providing apparatus according to claim 8, wherein
the control unit causes a still image to be displayed when the display viewing time is less than a fifth reference time, and causes a moving image to be displayed when the display viewing time is equal to or longer than the fifth reference time.
- The information providing apparatus according to claim 8 or 9, wherein
the control unit causes coupon information to be displayed on the display when the display viewing time reaches a sixth reference time.
- The information providing apparatus according to any one of claims 5 to 10, wherein
the control unit causes the display to perform an action that attracts people's attention when human faces are extracted from the captured image but the number of people looking at the display is less than a predetermined number.
- The information providing apparatus according to any one of claims 5 to 11, wherein
the control unit makes the proportion of the display area occupied by at least one of the plurality of sub-regions correspond to the proportion of the number of people looking at that sub-region.
- The information providing apparatus according to any one of claims 5 to 12, wherein
the control unit switches the luminance of each of the plurality of sub-regions on the display according to the number of people looking at each of the sub-regions.
- The information providing apparatus according to any one of claims 1 to 13, wherein
the control unit changes the content displayed in each of the plurality of sub-regions individually, without changing the layout of the sub-regions on the display.
- The information providing apparatus according to any one of claims 1 to 14, wherein
the control unit operates the display in a power-saving mode when no human face is extracted from the captured image.
- The information providing apparatus according to any one of claims 1 to 15, wherein
the image analysis unit detects the position on the display at which an extracted person is looking, and
the control unit causes the display to show information indicating the position on the display at which the person extracted from the captured image is looking.
- The information providing apparatus according to any one of claims 1 to 16, wherein
the control unit controls the display so that different content is visible depending on the viewing direction.
- The information providing apparatus according to any one of claims 1 to 17, wherein
the control unit determines the content to be displayed on the display so that a moving image is visible from directions with a relatively large number of viewers and a still image is visible from directions with a relatively small number of viewers.
- An information providing method in which a computer performs:
an acquisition step of acquiring a captured image of the area around a display;
an image analysis step of detecting the position on the display at which a person extracted from the captured image is looking; and
a control step of dividing the display area of the display into a plurality of sub-regions, causing content to be displayed in each of the sub-regions, and changing the display of the sub-region at which the person extracted from the captured image is looking.
- A program that causes a computer to function as:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that detects the position on the display at which a person extracted from the captured image is looking; and
a control unit that divides the display area of the display into a plurality of sub-regions, causes content to be displayed in each of the sub-regions, and changes the display of the sub-region at which the person extracted from the captured image is looking.
- An information providing method in which a computer executes:
an acquisition step of acquiring a captured image of the area around a display;
an image analysis step of analyzing the captured image to extract human faces from it and to detect the number of people looking at the display and a display viewing time, which is the time for which the display is looked at; and
a control step of determining, according to the detection results, the content to be displayed on the display.
- A program that causes a computer to function as:
an acquisition unit that acquires a captured image of the area around a display;
an image analysis unit that analyzes the captured image to extract human faces from it and to detect the number of people looking at the display and a display viewing time, which is the time for which the display is looked at; and
a control unit that determines, according to the detection results, the content to be displayed on the display.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/620,570 US11463618B2 (en) | 2017-06-20 | 2018-06-05 | Apparatus for providing information and method of providing information, and non-transitory storage medium |
GB1918897.8A GB2578043B (en) | 2017-06-20 | 2018-06-05 | Apparatus for providing information and method of providing information, and program |
JP2019525341A JPWO2018235595A1 (ja) | 2017-06-20 | 2018-06-05 | Information providing device, information providing method, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-120735 | 2017-06-20 | ||
JP2017120735 | 2017-06-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018235595A1 true WO2018235595A1 (ja) | 2018-12-27 |
Family
ID=64737536
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/021581 WO2018235595A1 (ja) | Information providing device, information providing method, and program |
Country Status (4)
Country | Link |
---|---|
US (1) | US11463618B2 (ja) |
JP (1) | JPWO2018235595A1 (ja) |
GB (1) | GB2578043B (ja) |
WO (1) | WO2018235595A1 (ja) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021107886A (ja) * | 2019-12-27 | 2021-07-29 | FUJIFILM Business Innovation Corp. | Control device and program |
JP7441673B2 (ja) * | 2020-02-21 | 2024-03-01 | Sharp Corporation | Learning data generation device, playback schedule learning system, and learning data generation method |
CN111881763A (zh) * | 2020-06-30 | 2020-11-03 | Beijing Xiaomi Mobile Software Co., Ltd. | Method and apparatus for determining a user's gaze position, storage medium, and electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005514827A (ja) * | 2001-12-21 | 2005-05-19 | Thinking Pictures, Inc. | Method, system and apparatus for media distribution and/or viewing verification |
JP2005267611A (ja) * | 2004-01-23 | 2005-09-29 | Sony United Kingdom Ltd | Display device |
JP2006197373A (ja) * | 2005-01-14 | 2006-07-27 | Mitsubishi Electric Corp | Viewer information measuring device |
JP2009500737A (ja) * | 2005-07-08 | 2009-01-08 | Sharp Corporation | Multi-view display system |
JP2010004355A (ja) * | 2008-06-20 | 2010-01-07 | Olympus Imaging Corp | Audio/video information notification system and control method thereof |
WO2015190093A1 (ja) * | 2014-06-10 | 2015-12-17 | Socionext Inc. | Semiconductor integrated circuit, display device provided with same, and control method |
WO2016141248A1 (en) * | 2015-03-03 | 2016-09-09 | Misapplied Sciences, Inc. | System and method for displaying location dependent content |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5370380B2 (ja) | 2011-01-25 | 2013-12-18 | Sony Corporation | Video display method and video display device |
JP2015064513A (ja) | 2013-09-26 | 2015-04-09 | Casio Computer Co., Ltd. | Display device, content display method, and program |
DE102014018393A1 (de) * | 2014-10-14 | 2016-04-14 | Infineon Technologies Ag | Chip card module arrangement, chip card arrangement, and method for producing a chip card arrangement |
JP6447108B2 (ja) | 2014-12-24 | 2019-01-09 | Fujitsu Limited | Availability calculation device, availability calculation method, and availability calculation program |
KR102179958B1 (ko) * | 2015-09-02 | 2020-11-17 | Samsung Electronics Co., Ltd. | LFD (large format display) device and control method thereof |
- 2018
- 2018-06-05 JP JP2019525341A patent/JPWO2018235595A1/ja active Pending
- 2018-06-05 GB GB1918897.8A patent/GB2578043B/en active Active
- 2018-06-05 WO PCT/JP2018/021581 patent/WO2018235595A1/ja active Application Filing
- 2018-06-05 US US16/620,570 patent/US11463618B2/en active Active
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020113075A (ja) * | 2019-01-11 | 2020-07-27 | Oki Electric Industry Co., Ltd. | Information processing device, information processing method, program, and information processing system |
WO2021214981A1 (ja) * | 2020-04-24 | 2021-10-28 | Sharp NEC Display Solutions, Ltd. | Content display device, content display method, and program |
US11960647B2 (en) | 2020-04-24 | 2024-04-16 | Sharp Nec Display Solutions, Ltd. | Content display device, content display method, and storage medium using gazing point identification based on line-of-sight direction detection |
JP7118383B1 (ja) | 2022-04-25 | 2022-08-16 | Nurve Inc. | Display system, display method, and display program |
JP2023161493A (ja) * | 2022-04-25 | 2023-11-07 | Nurve Inc. | Display system, display method, and display program |
Also Published As
Publication number | Publication date |
---|---|
GB2578043A (en) | 2020-04-15 |
US20200128177A1 (en) | 2020-04-23 |
GB2578043B (en) | 2022-06-22 |
JPWO2018235595A1 (ja) | 2020-04-09 |
US11463618B2 (en) | 2022-10-04 |
GB201918897D0 (en) | 2020-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018235595A1 (ja) | Information providing device, information providing method, and program | |
KR102463304B1 (ko) | Video processing method and device, electronic apparatus, computer-readable storage medium, and computer program | |
US9424767B2 (en) | Local rendering of text in image | |
TWI605712B (zh) | Interactive media system | |
KR20190075177A (ko) | Context-based augmented advertisement | |
JP5391224B2 (ja) | Video additional information display control device and operating method thereof | |
KR20160121287A (ko) | Method and apparatus for displaying a screen based on an event | |
CN110663044A (zh) | Method and device for providing product placement | |
US9813666B2 (en) | Video transmission and reconstruction | |
JP2000105583A (ja) | Interactive display device | |
JP5489197B2 (ja) | Electronic advertising device, method, and program | |
WO2019078867A1 (en) | CONTENT ARRANGEMENTS ON MIRROR DISPLAY DEVICES | |
US11144763B2 (en) | Information processing apparatus, image display method, and non-transitory computer-readable storage medium for display control | |
CN113255431B (zh) | Reminder method and device for remote teaching, and head-mounted display device | |
US20150121307A1 (en) | Information processing device, information processing method, and program | |
JP5698574B2 (ja) | Audio/video information notification system and control method thereof | |
JP2012181328A (ja) | Advertisement distribution system, advertisement distribution device, advertisement distribution method, and program | |
JP2012094103A (ja) | Image display system | |
US11962743B2 (en) | 3D display system and 3D display method | |
KR102414925B1 (ko) | Device and method for displaying product placement advertisements | |
US20240119643A1 (en) | Image processing device, image processing method, and computer-readable storage medium | |
WO2024079778A1 (ja) | Information processing device, display system, and information processing method | |
WO2022239117A1 (ja) | Information processing device, content display system, and content display method | |
TWI659366B (zh) | Method and electronic device for playing advertisements based on facial features | |
JP2009216819A (ja) | Image signal processing device, image presentation method, program, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18821407 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019525341 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 201918897 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20180605 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18821407 Country of ref document: EP Kind code of ref document: A1 |