US20210117040A1 - System, method, and apparatus for an interactive container - Google Patents


Info

Publication number
US20210117040A1
US20210117040A1
Authority
US
United States
Prior art keywords
interactive
touch area
interactive container
creation method
touch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/138,208
Inventor
Stephen Howard
Larry McNutt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omni Consumer Products LLC
Original Assignee
Omni Consumer Products LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/890,709 (patent US9360888B2)
Priority claimed from US14/535,823 (patent US9465488B2)
Priority claimed from PCT/US2015/068192 (publication WO2016109749A1)
Application filed by Omni Consumer Products LLC
Priority to US17/138,208
Assigned to OMNI CONSUMER PRODUCTS, LLC (assignment of assignors interest; assignors: HOWARD, STEPHEN; MCNUTT, LARRY)
Publication of US20210117040A1



Classifications

    • G06F3/0418 — Control or interface arrangements specially adapted for digitisers, for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G06F3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0425 — Digitisers characterised by opto-electronic transducing means, using a single imaging device (e.g. a video camera) for tracking the absolute position of one or more objects with respect to an imaged reference surface, e.g. a display, projection screen, table or wall on which a computer-generated image is displayed or projected
    • G06F3/0482 — Interaction with lists of selectable items, e.g. menus
    • G06F3/04886 — GUI interaction using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06K9/00288; G06K9/00355; G06K9/209
    • G06V10/147 — Image acquisition; optical characteristics of the acquisition or illumination arrangements; details of sensors, e.g. sensor lenses
    • G06V40/172 — Recognition of human faces; classification, e.g. identification
    • G06V40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H04N13/204 — Image signal generators using stereoscopic image cameras
    • H04N23/60 — Control of cameras or camera modules comprising electronic image sensors
    • H04N5/2251; H04N5/232
    • G06F2203/04101 — 2.5D digitiser: detects the X/Y position of the input means (finger or stylus) also when it does not touch, but is proximate to, the interaction surface, and measures its distance within a short range in the Z direction
    • G06F2203/04108 — Touchless 2D digitiser: detects the X/Y position of the input means when proximate to the interaction surface, without distance measurement in the Z direction

Definitions

  • At step 106, the method 100 deploys at least one list to a device that operates the interaction between the assets, menus, and/or containers.
  • The deployment may occur on several devices that may or may not be at the same location.
  • The device(s) may be at the same location as the container being operated.
  • The axis location, i.e., the x, y location of the assets, may be incorporated into the list at creation time, or it may be determined on the device controlling the interaction, i.e., a device located at the same location as the container.
  • The device controlling the interaction may learn the location of the assets, display the assets, or scan for characteristics to learn their location.
  • A list may already exist, in which case only changes, omissions and/or additions are deployed rather than the entire list.
  • The deployment may be initiated/conducted manually, or it may be automatic.
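The delta-deployment idea above — sending only changes, omissions, and additions to a device that already holds a copy of the list — can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function names and the dict-based list representation are assumptions.

```python
def compute_delta(old_list, new_list):
    """Compare two asset lists (dicts keyed by asset id) and return the
    additions, omissions, and changes needed to update a deployed copy."""
    old_ids, new_ids = set(old_list), set(new_list)
    return {
        "additions": {i: new_list[i] for i in new_ids - old_ids},
        "omissions": sorted(old_ids - new_ids),
        "changes": {i: new_list[i] for i in old_ids & new_ids
                    if old_list[i] != new_list[i]},
    }

def apply_delta(deployed, delta):
    """Apply a delta (as produced above) to a device's deployed list."""
    for i in delta["omissions"]:
        deployed.pop(i, None)
    deployed.update(delta["additions"])
    deployed.update(delta["changes"])
    return deployed

old = {"engine": {"x": 10, "y": 20}, "logo": {"x": 5, "y": 5}}
new = {"engine": {"x": 12, "y": 20}, "menu": {"x": 0, "y": 0}}
apply_delta(old, compute_delta(old, new))
assert old == new
```

Deploying only the delta keeps the transfer small when a large list changes slightly, which matters when several remote devices must be kept in sync.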
  • At step 108, the method 100 calibrates assets/subjects in the container and/or identifies the touch areas.
  • The method 100 may perform projection mapping for every container to ensure that the display matches the physical space.
  • The method 100 uses image training during calibration to detect a known image, item, logo, etc.
  • In one embodiment, a person manually calibrates the system by shifting from point to point to identify a touch area, triggering a new touch area once the current one is done and another touch area remains to be identified by the system.
  • In another embodiment, the system automatically identifies a predetermined number of points per touch area relating to assets and/or shapes.
  • A calibration stream may be cropped so that only areas of interest are calibrated, resulting in a more accurate and more efficient calibration. The calibration process is described in more detail in FIG. 2.
  • The method 100 ends at step 110.
  • FIG. 2 is an embodiment illustrating a flow diagram of a method 200 for calibrating at least one interactive container.
  • Method 200 starts at step 202 and proceeds to step 204, wherein the method 200 detects an asset or shape displayed that needs to be defined as a touch area.
  • At step 206, the method 200 identifies a predetermined number of points relating to the asset or shape, where each point is defined by its x, y coordinates.
  • At step 208, the method 200 determines if there are more assets or shapes to be identified as touch areas. If so, the method 200 returns to step 204; otherwise, the method 200 ends at step 210.
  • In one embodiment, a projector displays a pre-determined shape over a touch area that has not yet been identified.
  • The method identifies the x, y coordinates of each point in a pre-determined number of points relating to the asset or displayed shape. Once the coordinates are identified, the method 200 proceeds to the next asset or shape in the container.
  • The method 200 may perform this function on a single container or on multiple containers.
  • The method 200 may utilize asset identification, display recognition, shape recognition, light, exposure, contrast, RGB difference, infrared, etc. to determine the areas that need to be identified as touch areas. When all touch areas are identified, the camera and/or method can recognize the touch areas and identify the corresponding rule, menu, activity, etc. relating to each touch area.
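Once each touch area's points are known, a later touch can be mapped back to the area that contains it. The patent does not prescribe a particular test; a standard ray-casting point-in-polygon check, sketched below with hypothetical area names, is one way to do it.

```python
def point_in_polygon(x, y, pts):
    """Return True if (x, y) falls inside the polygon given by pts,
    using the standard ray-casting (even-odd) rule."""
    inside = False
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x where the edge crosses the horizontal line through y
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

# Touch areas from calibration, keyed to their rule/menu (names illustrative).
touch_areas = {
    "engine_info": [(0, 0), (100, 0), (100, 50), (0, 50)],
    "menu":        [(120, 0), (200, 0), (200, 80), (120, 80)],
}

def area_for_touch(x, y):
    """Return the name of the touch area containing (x, y), if any."""
    for name, pts in touch_areas.items():
        if point_in_polygon(x, y, pts):
            return name
    return None
```

For example, `area_for_touch(150, 25)` resolves to the second area, while a point outside every calibrated polygon yields `None`, i.e. no reaction.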
  • FIG. 3 is a block diagram illustrating an embodiment of an apparatus 300 of interactive containers.
  • The apparatus 300 has two containers 302A and 302B, where container 302A has two menus/attributes 304A and 304B.
  • Container 302B has a single menu/attribute 304C.
  • Each of the menus/attributes 304A, 304B and 304C has WISP/rules 306A, 306B and 306C, respectively.
  • Each of the WISP/rules 306A, 306B and 306C has assets 308A, 308B and 308C, respectively.
  • A single interactive apparatus 300 may include any number of containers that may or may not communicate and/or interact. As such, in one embodiment, interacting with one container may cause a change in another container.
  • Containers create an interactive experience using the menus/attributes and WISP/rules relating to assets.
  • The menus/attributes are the options at an instance, which may be a default instance, or options that come about due to an interaction or touch on or around a presented menu item or attribute.
  • A container may contain any number of menus/attributes 304, which may interact or stand alone. Attributes may be audio, video, an image, a change in display, etc.
  • WISP/rules are the interactive active mask over a touch area that triggers a menu or attribute due to a pre-determined activity. Assets may be a pre-determined object or person, printouts of objects, displayed items, images, video, an identified object or person, and the like.
  • In some embodiments, a weighted average may be used when the content of a container changes.
  • For example, a new object/asset may be added to a container.
  • The weighted average method adds the object/asset incrementally over time, where the new item's weight relative to the whole picture increases over time. Such a method ensures that the item is truly added, allows for real-time reaction to change in a container, and allows for a realistic change over time.
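The weighted-average idea can be sketched as an exponentially weighted running update of a baseline frame: each new frame is blended in with a small weight, so a genuinely added object grows toward full weight over time while transient noise never accumulates. The update rule and the `alpha` value below are illustrative assumptions, not the patent's specified parameters.

```python
import numpy as np

def update_baseline(baseline, frame, alpha=0.05):
    """Blend the current frame into the baseline with weight alpha.

    After k frames, a static new object contributes 1 - (1 - alpha)**k
    of its value to the baseline, so its share grows gradually toward
    100% rather than appearing all at once.
    """
    return (1.0 - alpha) * baseline + alpha * frame

baseline = np.zeros((4, 4))
new_scene = np.full((4, 4), 100.0)   # an object/asset appears in the scene
for _ in range(60):                   # a few seconds of frames
    baseline = update_baseline(baseline, new_scene)
# By now the object is mostly absorbed into the baseline (~95% weight).
```

A small `alpha` makes the container slow to absorb changes (robust to noise); a larger `alpha` reacts faster but risks absorbing passing hands into the background.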
  • FIG. 4 is a block diagram illustrating an embodiment of an interactive system 400 relating to at least one interactive container.
  • The system 400 includes a processor 402, a memory/storage medium 404, a calibrator 406, a touch detector 408, a touch listener 410, an analytics module 412 and an I/O 414.
  • The memory 404 includes deployed data 404A, touch area data 404B, analytics data 404C and the like.
  • A cloud may communicate with the system 400 to deploy items, such as the deployed data 404A, from remote locations.
  • The touch detector 408 detects a touch and its related information, which includes identifying the coordinates relative to a touch area.
  • The touch detector 408 may distinguish between a hover and a touch, where the distinction relates to the z coordinate of the touch: if the hand or object is closer to the target object, and thus farther from the camera or system, it is a touch; if the hand or object is farther from the target object, and thus closer to the camera or system, it is a hover.
  • The touch detector may identify different types of touch based on thresholds, such as time, proximity, the color of the object performing the touch, a sequence of touches, etc.
  • The touch detector 408 may refine the recognition of a touch by performing the method of FIG. 5, described below.
  • The touch detector may crop the stream so that only areas of interest are detected, resulting in touch detection that is more accurate and more efficient.
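The hover/touch distinction by z coordinate can be sketched as a pair of depth bands above the calibrated surface. Because the camera looks down at the surface, a fingertip *farther* from the camera is *closer* to the surface. All constants below are illustrative assumptions.

```python
SURFACE_Z_MM = 1000.0   # calibrated depth of the surface from the camera
TOUCH_BAND_MM = 15.0    # within this band of the surface counts as a touch
HOVER_BAND_MM = 120.0   # within this band (but not touching) is a hover

def classify(z_mm):
    """Classify a fingertip depth reading as 'touch', 'hover', or None."""
    gap = SURFACE_Z_MM - z_mm       # distance of fingertip above the surface
    if gap < 0:
        return None                 # reading behind the surface: noise
    if gap <= TOUCH_BAND_MM:
        return "touch"
    if gap <= HOVER_BAND_MM:
        return "hover"
    return None
```

The band widths would in practice be tuned per installation, since depth noise grows with the camera-to-surface distance.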
  • The touch listener 410 reads the coordinates determined by the touch detector and determines if the touch occurred in a touch area identified during calibration.
  • The touch listener 410 determines the type of reaction, or no reaction, to take place based on the deployed data, the location of the touch and, sometimes, the type of touch. In some cases, the touch listener 410 may facilitate a zoom in/out or a drag based on the determination of the type of touch.
  • The touch listener may determine that there has been no person and/or no touch for a predetermined time, or sense a person walking away, and initiate a default display or a predetermined activity.
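The touch listener's dispatch-and-idle behavior can be sketched as below. The class structure, rectangular areas, action names, and timeout value are hypothetical; the patent does not specify this interface.

```python
import time

IDLE_TIMEOUT_S = 30.0   # assumed inactivity window before the default display

class TouchListener:
    """Maps detector coordinates to calibrated touch areas and reactions."""

    def __init__(self, areas, actions, default_action):
        self.areas = areas                  # name -> (x0, y0, x1, y1) rects
        self.actions = actions              # name -> reaction callable
        self.default_action = default_action
        self.last_touch = time.monotonic()

    def on_touch(self, x, y):
        """Dispatch the reaction for the area containing (x, y), if any."""
        self.last_touch = time.monotonic()
        for name, (x0, y0, x1, y1) in self.areas.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return self.actions[name]()
        return None                         # touch outside calibrated areas

    def tick(self):
        """Called periodically; falls back to the default display when idle."""
        if time.monotonic() - self.last_touch > IDLE_TIMEOUT_S:
            self.default_action()
```

Keeping area lookup separate from the reactions mirrors the deployed-data design: the same listener can serve any list deployed to the device.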
  • The analytics module 412 is designed to collect data and/or measure characteristics related to a predetermined object, person, movement, lack of movement, etc. For example, the analytics module 412 may identify a person, follow a person's path, follow a person's selections, measure the duration of a touch or the lack of touch, list a person's activity, or determine gender, personal characteristics, traffic, dwell time, etc.
  • FIG. 5 is an embodiment illustrating a flow diagram of a method 500 for refining touch recognition.
  • The method 500 starts at step 502 and proceeds to step 504.
  • At step 504, the method 500 creates a baseline depth area using multiple frames from a depth camera.
  • At step 506, the method 500 creates a moving average of a real-time area from the depth camera.
  • At step 508, the method 500 determines the difference between the baseline and the moving average.
  • At step 510, the method 500 determines whether the difference is less than a pre-determined threshold. If the difference is greater than the threshold, the method 500 proceeds to step 514 and determines that the event is a touch. If the difference is less than the threshold, the method 500 proceeds to step 512 and examines the surrounding pixels to determine whether the event is a touch or noise.
  • The radius of the surrounding pixels changes based on the depth of the camera. If the surrounding pixels share the same z value, the method 500 proceeds to step 514 and determines that the event is a touch; if the surrounding pixels have different z values, the method 500 proceeds to step 516 and determines that the event is not a touch. From steps 514 and 516, the method 500 proceeds to step 518, where it ends.
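The refinement flow above — baseline versus moving average, a depth threshold, then a neighborhood check to reject noise — can be sketched in a few lines of NumPy. The threshold, neighborhood radius, and agreement criteria are assumed values for illustration, not the patent's parameters.

```python
import numpy as np

THRESHOLD_MM = 20.0   # assumed depth-difference threshold

def is_touch(baseline, moving_avg, x, y, radius=2):
    """Decide whether the event at pixel (x, y) is a touch.

    A difference at or above the threshold is accepted as a touch outright
    (step 514). Below the threshold, the surrounding pixels are examined
    (step 512): if they deviate together (same z change), it is a touch;
    if they disagree, the event is treated as noise (step 516).
    """
    diff = np.abs(baseline - moving_avg)
    if diff[y, x] >= THRESHOLD_MM:
        return True
    window = diff[max(0, y - radius):y + radius + 1,
                  max(0, x - radius):x + radius + 1]
    # Neighbors agreeing on a moderate z change -> touch; scattered -> noise.
    return window.std() < 2.0 and window.mean() > THRESHOLD_MM / 2
```

A uniform depth shift across the neighborhood passes the check, while a single-pixel spike (typical depth-camera noise) is rejected, which is the point of step 512.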
  • FIGS. 6A-C are diagrams depicting an embodiment of an interactive container.
  • In FIGS. 6A-C, a container is shown that displays a car engine with its mechanics and electronics.
  • A touch is detected, activating a touch area.
  • The touch results in the display of information related to the touch area. In other embodiments, such a touch may result in an engine sound, a menu display, video activation, etc.

Abstract

An interactive container creation method, apparatus and system. The method includes creating a list, deploying the list to at least one device, calibrating and identifying touch areas, identifying at least one of an asset and a shape to be defined as a touch area, identifying the x, y coordinates of each point for a predetermined number of points for each asset or shape, and creating a touch area based on the identified x, y coordinates.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 15/394,799, filed Dec. 27, 2016, which is a continuation-in-part of U.S. application Ser. No. 15/258,973, filed on Sep. 7, 2016, which is a continuation-in-part of U.S. application Ser. No. 14/535,823 filed Nov. 7, 2014, which is a continuation-in-part of U.S. application Ser. No. 13/890,709 filed May 9, 2013. This application is a continuation of U.S. patent application Ser. No. 15/394,799, filed Dec. 27, 2016, which is a continuation-in-part of U.S. application Ser. No. 14/985,044 and a continuation-in-part of PCT Application No. PCT/US2015/068192 both filed on Dec. 30, 2015. This application claims priority to U.S. Provisional Applications 62/311,354 filed on Mar. 21, 2016 and 62/373,272 filed on Aug. 10, 2016. The above identified patent applications are incorporated herein by reference in their entirety to provide continuity of disclosure.
  • FIELD OF THE INVENTION
  • The disclosure relates to systems, apparatus and methods for creating and operating interactive containers. More specifically, this disclosure relates to creating and operating interactive containers that relate to any assets that are projected, printed, displayed, etc.
  • BACKGROUND OF THE INVENTION
  • It has become more common for assets of different origins or types to communicate and cause an activity based on such interaction. For example, it has become common for users to utilize their portable devices to control various products in their home and/or office made by different manufacturers. The selection of the assets and their interactions can be customizable and variable. Therefore, it is desirable to be able to simulate such interactions and to customize them. In addition, some assets may be susceptible to tampering. Thus, it is beneficial to display an interactive image, printout, etc. of such assets. Therefore, there is a need for an improved system, apparatus and method for creating and operating interactive containers.
  • SUMMARY OF THE INVENTION
  • Embodiments described herein relate to an interactive container creation method, apparatus and system. The method includes creating a list, deploying the list to at least one device, calibrating and identifying touch areas, identifying at least one of an asset and a shape to be defined as a touch area, identifying the x, y coordinates of each point for a predetermined number of points for each asset or shape, and creating a touch area based on the identified x, y coordinates.
  • BRIEF DESCRIPTION OF DRAWINGS
  • Reference will now be made to the following drawings:
  • FIG. 1 is an embodiment illustrating a flow diagram of a method for creating at least one interactive container;
  • FIG. 2 is an embodiment illustrating a flow diagram of a method for calibrating at least one interactive container;
  • FIG. 3 is a block diagram illustrating an embodiment of an apparatus of interactive containers;
  • FIG. 4 is a block diagram illustrating an embodiment of an interactive system relating to at least one interactive container;
  • FIG. 5 is an embodiment illustrating a flow diagram of a method for refining touch recognition; and
  • FIGS. 6A-C are diagrams depicting an embodiment of an interactive container.
  • DETAILED DESCRIPTION
  • In the descriptions that follow, like parts are marked throughout the specification and drawings with the same numerals, respectively. The drawing figures are not necessarily drawn to scale and certain figures may be shown in exaggerated or generalized form in the interest of clarity and conciseness.
  • It will be appreciated by those skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Therefore, aspects of the present disclosure may be implemented entirely in hardware or in an implementation combining software and hardware, which may all generally be referred to herein as a “circuit,” “module,” “component,” or “system” (including firmware, resident software, micro-code, etc.). Further, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
  • Any combination of one or more computer readable media may be utilized. The computer readable media may be a computer readable signal medium, any type of memory or a computer readable storage medium. For example, a computer readable storage medium may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the computer readable storage medium would include, but are not limited to: a portable computer diskette, a hard disk, a random access memory (“RAM”), a read-only memory (“ROM”), an erasable programmable read-only memory (“EPROM” or Flash memory), an appropriate optical fiber with a repeater, a portable compact disc read-only memory (“CD-ROM”), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. Thus, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations utilizing a processor for aspects of the present disclosure may be written in any combination of one or more programming languages, markup languages, style sheets and JavaScript libraries, including but not limited to Windows Presentation Foundation (WPF), HTML/CSS, XAML, jQuery, C, Basic, Ada, Python, C++, C#, Pascal, and Arduino. Additionally, operations can be carried out using any of a variety of available compilers.
  • Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, systems and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer program instructions may also be stored in a computer readable medium that, when executed, can direct a computer, processor, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which, when executed, cause a computer to implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, processor, other programmable instruction execution apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatuses, or other devices to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • FIG. 1 is an embodiment illustrating a flow diagram of a method 100 for creating at least one interactive container. The method 100 starts at step 102 and proceeds to step 104. At step 104, the method 100 creates a list. The list may contain images, assets, attributes, WISPs, rules, menus, etc. A WISP in this application refers to a shell that defines the rules and the interaction between the assets and/or containers. In an embodiment, the creation of the list is performed at a remote location or in the cloud. In other embodiments, the creation of the list is performed on the same device operating the interaction between the assets, menus, and/or containers. In such embodiments, the deployment step would not be necessary.
  • At step 106, the method 100 deploys at least one list to a device that is operating the interaction between the assets, menus, and/or containers. In one embodiment, the deployment may occur on several devices that may or may not be at the same location. The device(s) may be at the same location as the container being operated. In one embodiment, the axis location, i.e., the x, y location of the assets, may be incorporated into the list at list creation time, or it may be determined on the device controlling the interaction, i.e., a device located at the same location as the container. The device controlling the interaction may learn the location of the assets, may display the assets, or may scan for characteristics to learn their location. In one embodiment, a list may already exist, and only changes, omissions, and/or additions are deployed rather than the entire list. Furthermore, the deployment may be initiated/conducted manually or it may be automatic.
  • At step 108, the method 100 calibrates assets and/or subjects in the container and/or identifies the touch areas. During the calibration process, the method 100 may perform projection mapping for every container to ensure that the display matches the physical space. In one embodiment, the method 100 uses image training during calibration to detect a known image, item, logo, etc.
  • In other embodiments, a person manually calibrates the system by shifting from point to point, identifying a touch area and triggering a new touch area when the current touch area is done and another touch area exists and needs to be identified by the system. During an automatic calibration, by contrast, the system automatically identifies a predetermined number of points per touch area relating to assets and/or shapes. In another embodiment, the calibration stream is cropped so that only areas of interest are calibrated. Calibrating only the areas of interest results in a more accurate and more efficient calibration. The calibration process is described in greater detail with respect to FIG. 2. The method 100 ends at step 110.
  • FIG. 2 is an embodiment illustrating a flow diagram of a method 200 for calibrating at least one interactive container. The method 200 starts at step 202 and proceeds to step 204, wherein the method 200 detects a displayed asset or shape that needs to be defined as a touch area. At step 206, the method 200 identifies a predetermined number of points relating to the asset or shape, where each point is defined by its x, y axis. At step 208, the method 200 determines if there are more assets or shapes to be identified as touch areas. If there are more assets or shapes to be identified as touch areas, the method 200 returns to step 204. Otherwise, the method 200 ends at step 210.
  • For example, a projector displays a pre-determined shape over a touch area that has not yet been identified. Using a camera, the method identifies the x, y axis for each point in a pre-determined number of points relating to the asset or displayed shape. Once the axis is identified, the method 200 proceeds to the next asset or shape in the container. The method 200 may perform this function on a single container or on multiple containers. The method 200 may utilize asset identification, display recognition, shape recognition, light, exposure, contrast, RGB difference, infrared, etc. to determine the areas that need to be identified as touch areas. When all touch areas are identified, the camera and/or method are capable of recognizing the touch areas and identifying the corresponding rule, menu, activity, etc. relating to each touch area.
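The calibration loop described above can be sketched in Python. This is a minimal illustration only: the camera-side detection (shape recognition, RGB difference, etc.) is out of scope, so `detected_shapes` stands in for its output, and the dictionary field names are assumptions rather than a format defined by the disclosure.

```python
def calibrate(detected_shapes, points_per_area=4):
    """Turn each detected asset/shape into a touch area defined by x, y points.

    `detected_shapes` is assumed to be a list of dicts, each holding a name
    and the (x, y) points the camera identified for that asset or shape.
    """
    touch_areas = []
    for shape in detected_shapes:
        # Take the predetermined number of points per touch area.
        pts = shape["points"][:points_per_area]
        xs = [x for x, _ in pts]
        ys = [y for _, y in pts]
        touch_areas.append({
            "name": shape["name"],
            "points": pts,
            # Axis-aligned bounds derived from the identified points.
            "bbox": (min(xs), min(ys), max(xs), max(ys)),
        })
    return touch_areas
```

A later touch can then be tested against each stored `bbox` to find which rule, menu, or activity it triggers.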
  • FIG. 3 is a block diagram illustrating an embodiment of an apparatus 300 of interactive containers. In this embodiment, the apparatus 300 has two containers 302A and 302B, where container 302A has two menus/attributes 304A and 304B. Container 302B has a single menu/attribute 304C. Each of the menus/attributes 304A, 304B, and 304C has WISP/rules 306A, 306B, and 306C, respectively. Each of the WISP/rules 306A, 306B, and 306C has assets 308A, 308B, and 308C, respectively.
  • A single interactive apparatus 300 may include any number of containers that may or may not communicate and/or interact. As such, in one embodiment, interacting with one container may cause a change in another container. Containers create an interactive experience using the menus/attributes and WISP/rules relating to assets. The menus/attributes are the options available at an instance, which may be a default instance or options that arise due to an interaction or touch on or around a presented menu item or attribute. A container may contain any number of menus/attributes 304, which may interact or stand alone. Attributes may be audio, video, an image, a change in display, etc. WISP/rules are the interactive mask over a touch area that triggers a menu or attribute due to a pre-determined activity. Assets may be a pre-determined object or person, printouts of objects, displayed items, images, video, an identified object or person, and the like.
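The hierarchy just described (containers holding menus/attributes, each governed by WISP/rules over assets) can be sketched as a simple data model. The class and field names below are illustrative assumptions; only the reference numerals follow FIG. 3.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str  # e.g. a displayed item, printout, or identified object

@dataclass
class WispRules:
    """Interactive mask over a touch area: a trigger mapped to an attribute."""
    trigger: str    # pre-determined activity, e.g. "touch"
    attribute: str  # e.g. audio, video, image, or a change in display
    assets: list = field(default_factory=list)

@dataclass
class MenuAttributes:
    wisp: WispRules = None

@dataclass
class Container:
    menus: list = field(default_factory=list)

# The FIG. 3 apparatus: container 302A with two menus/attributes, 302B with one.
c302a = Container(menus=[
    MenuAttributes(WispRules("touch", "play_audio", [Asset("308A")])),
    MenuAttributes(WispRules("touch", "show_menu", [Asset("308B")])),
])
c302b = Container(menus=[
    MenuAttributes(WispRules("touch", "play_video", [Asset("308C")])),
])
apparatus = [c302a, c302b]
```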
  • In one embodiment, a weighted average may be used. In such an embodiment, a new object/asset is added to a container. The weighted average method adds the object/asset incrementally over time, where the accounting of the new item increases as a percentage of the whole picture over time. Such a method ensures that the item is truly added, allows for real-time reaction to change in a container, and allows for a realistic change over time.
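One plausible realization of this weighted average is an exponential moving average, where each new frame contributes a fixed fraction, so a newly added object's share of the running average grows toward 100% over time. The blend factor below is an assumed value for illustration, not one specified by the disclosure.

```python
def blend_in(background, frame, alpha=0.05):
    """Fold the current frame into the running background.

    Each call gives the new frame a weight of `alpha`; over repeated
    frames, a newly added object's contribution to the average grows
    toward 100%, while brief noise barely shifts it.
    """
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

# A new object with depth/intensity value 100 appears against a background of 0:
bg = [0.0]
for _ in range(100):
    bg = blend_in(bg, [100.0])
# After many frames, the object accounts for nearly the whole average.
```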
  • FIG. 4 is a block diagram illustrating an embodiment of an interactive system 400 relating to at least one interactive container. In this embodiment, the system 400 includes a processor 402, a memory/storage medium 404, a calibrator 406, a touch detector 408, a touch listener 410, an analytics module 412, and an I/O 414. The memory 404 includes deployed data 404A, touch area data 404B, analytics data 404C, and the like.
  • Even though all of these items are shown in the same system 400, they may be distributed across multiple systems that may or may not be in the same location. In one embodiment, a cloud may communicate with the system 400 to deploy items, such as the deployed data 404A, from remote locations.
  • The touch detector 408 detects a touch and its related information, which includes identifying coordinates related to a touch area. In one embodiment, the touch detector 408 may distinguish between a hover and a touch, where the distinction relates to the z axis of the touch. If the hand or object is closer to the object, i.e., further from the camera or system, then the event is a touch. If the hand or object is further from the object, i.e., closer to the camera or system, then the event is a hover. In one embodiment, the touch detector may identify different types of touch based on thresholds, such as time, proximity, the color of the object doing the touching, a sequence of touches, etc. The touch detector 408 may refine the recognition of a touch by performing the method of FIG. 5, which is described herein below. In another embodiment, the touch detector may crop areas so that only areas of interest are detected, resulting in a touch detection that is more accurate and more efficient.
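The hover/touch distinction along the z axis might be sketched as follows, with depth measured as distance from the camera; the tolerance value is an assumption for illustration.

```python
def classify_event(z_reading, z_surface, touch_tolerance=15):
    """Classify a depth reading over a touch area.

    Smaller z means closer to the camera. A reading at or near the
    surface depth (i.e., close to the object) counts as a touch; a
    reading noticeably closer to the camera than the surface is a hover.
    """
    if z_reading >= z_surface - touch_tolerance:
        return "touch"
    return "hover"
```

With a surface 1005 units from the camera, a hand reading of 1000 would classify as a touch, while a reading of 900 (well above the surface) would classify as a hover.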
  • The touch listener 410 reads the coordinates determined by the touch detector and determines if the touch occurred in a touch area identified during calibration. The touch listener 410 determines the type of reaction, or no reaction, to take place based on the deployed data, the location of the touch, and sometimes the type of touch. In some cases, the touch listener 410 may facilitate a zoom in/out or a drag based on the determination of the type of touch. The touch listener 410 may determine that there have been no persons and/or no touches for a predetermined time, or sense a person walking away, and initiate a default display or a predetermined activity.
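In essence, the touch listener performs a hit test of the reported coordinates against the calibrated touch areas and then dispatches the reaction found in the deployed data. A minimal sketch, assuming rectangular touch areas and a simple name-to-activity rule map (both assumptions for illustration):

```python
def hit_test(x, y, touch_areas):
    """Return the calibrated touch area containing (x, y), or None."""
    for area in touch_areas:
        x0, y0, x1, y1 = area["bbox"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return area
    return None

def on_touch(x, y, touch_areas, rules):
    """Map a detected touch to the reaction deployed for that area."""
    area = hit_test(x, y, touch_areas)
    if area is None:
        return None                    # touch outside any calibrated area
    return rules.get(area["name"])     # e.g. "engine" -> "show_info"
```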
  • The analytics module 412 is designed to collect data and/or measure characteristics related to a predetermined object, person, movement, lack of movement, etc. For example, the analytics module 412 may identify a person, follow a person's path, follow a person's selections, measure the duration of a touch or the lack of touch, list a person's activity, and determine gender, personal characteristics, traffic, dwell time, etc.
  • FIG. 5 is an embodiment illustrating a flow diagram of a method 500 for refining touch recognition. The method 500 starts at step 502 and proceeds to step 504. At step 504, the method 500 creates a baseline depth area using multiple frames from a depth camera. At step 506, the method 500 creates a moving average of a real-time area from the depth camera. At step 508, the method 500 determines the difference between the baseline and the moving average. At step 510, the method 500 determines if the difference is less than a pre-determined threshold. If the difference is greater than the threshold, the method 500 determines that the event is a touch at step 514. If the difference is less than the threshold, the method 500 proceeds to step 512 and examines the surrounding pixels to determine whether the event is a touch or noise. In one embodiment, the radius of the surrounding pixels changes based on the depth of the camera. If the surrounding pixels have the same z-axis depth, the event is a touch, and the method 500 proceeds to step 514. If the surrounding pixels have different z-axis depths, the method 500 proceeds to step 516. At step 516, the method 500 determines that the event is not a touch. From steps 514 and 516, the method 500 proceeds to step 518, where it ends.
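The FIG. 5 flow can be sketched over 2-D depth grids as follows. The threshold, neighborhood radius, and the tolerance used to judge "same z-axis depth" are illustrative assumptions, as is treating a zero or negative difference as no touch (a case the flowchart leaves implicit).

```python
def is_touch(baseline, moving_avg, x, y, threshold=20, radius=1, tolerance=2):
    """Decide whether pixel (x, y) registers a touch, per the FIG. 5 flow.

    `baseline` and `moving_avg` are 2-D depth grids (lists of rows).
    A large baseline-vs-average difference is accepted as a touch outright;
    a small difference counts only if the surrounding pixels show the same
    depth change (a fingertip) rather than an isolated noisy pixel.
    """
    diff = baseline[y][x] - moving_avg[y][x]
    if diff >= threshold:
        return True        # step 514: difference exceeds threshold, a touch
    if diff <= 0:
        return False       # assumption: no depth change at all, not a touch
    # Step 512: check the surrounding pixels for a consistent z-axis depth.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(baseline) and 0 <= nx < len(baseline[0]):
                neighbour_diff = baseline[ny][nx] - moving_avg[ny][nx]
                if abs(neighbour_diff - diff) > tolerance:
                    return False   # step 516: inconsistent neighbours, noise
    return True                    # step 514: consistent neighbours, a touch
```

Per the disclosure, `radius` would vary with the camera's depth rather than stay fixed as it does in this sketch.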
  • FIGS. 6A-C are diagrams depicting an embodiment of an interactive container. In FIG. 6A, a container is shown that displays a car engine with its mechanics and electronics. In FIG. 6B, a touch is detected, activating a touch area. In FIG. 6C, the touch results in the display of information related to the touch area. In other embodiments, such a touch may result in an engine sound, a menu display, a video activation, etc.
  • It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concept. It is understood, therefore, that this disclosure is not limited to the particular embodiments herein, but it is intended to cover modifications within the spirit and scope of the present disclosure as defined by the appended claims.

Claims (20)

1. An interactive system comprising a processor capable of executing instructions relating to an interactive container creation method for creating an interactive experience, the interactive container creation method comprising:
identifying at least one of an asset and a shape, in an image captured by a camera, to be defined as a touch area in a first interactive container,
wherein the identifying comprises at least:
retrieving a baseline depth;
retrieving a value of a real-time area;
determining a difference between the baseline depth and the value;
comparing the difference to a threshold and determining if the real-time area is to be defined as the touch area based on the comparison;
identifying the x, y axis of at least one point of the at least one of asset or shape; and
creating the touch area based on the identified x,y axis and in response to the real-time area being defined as the touch area in the first interactive container based on the comparison.
2. The interactive container creation method of claim 1, further comprising:
creating a list; and
deploying the list to at least one device.
3. The interactive container creation method of claim 2, further comprising:
creating a correlation between (i) at least a portion of the list to (ii) the touch area, image captured from the camera or a projected image from a projector,
wherein the touch area of the first interactive container produces an activity identified in the list resulting from interaction related to the touch area, and
wherein the produced activity is in a second interactive container.
4. The interactive container creation method of claim 3, wherein the second interactive container relates to the display from the projector.
5. The interactive container creation method of claim 4, wherein the display is viewable by a human eye without need for wearable devices.
6. The interactive container creation method of claim 1, wherein the retrieving the baseline depth comprises retrieving a baseline depth area utilizing multiple depth frames.
7. The interactive container creation method of claim 1, wherein the value is a moving average.
8. The interactive container creation method of claim 1, wherein the method utilizes at least one of a depth camera, a weighted average to add items into the container over time, an interactive container that at least one of communicates and causes change in another container.
9. The interactive container creation method of claim 3, wherein a radius of surrounding pixels changes based on the depth of the camera.
10. The interactive container creation method of claim 2, wherein the list comprises at least one of an image, an asset, an attribute, a wisp, a rule, a menu, axis location and any combination thereof, wherein the attribute is at least one of audio, video, image, display, or combination thereof, and wherein the asset is at least one of an object, a person, printout of an object or person, a displayed item, an image, a video, an identified item or person, or a combination thereof.
11. The interactive container creation method of claim 2, wherein the list is created on a machine by identifying at least one of the asset and the shape.
12. The interactive container creation method of claim 2, wherein the list is deployed simultaneously on several devices in the same or in different locations.
13. The interactive container creation method of claim 1, further comprising a calibration method, wherein the calibration method comprises:
identifying an item to be defined as the touch area,
wherein the item is one of the asset, a display, a shape, light, exposure, contrast, RGB difference, infrared, or a combination thereof;
identifying coordinates of a predetermined number of points related to the item; and
identifying an area within the predetermined points as the touch area.
14. The interactive container creation method of claim 13, wherein the calibration method utilizes the camera to identify the coordinates.
15. The interactive container creation method of claim 13, wherein the calibration method is performed on a single container or multiple containers at the same time.
16. The interactive container creation method of claim 13, wherein the calibration method is one of automatic or manual.
17. The interactive container creation method of claim 13, wherein the calibration method further comprises identifying one of a rule, a menu, a display, and an activity related to the identified touch area.
18. The interactive container creation method of claim 13, further comprising at least one of:
training an image to detect at least one of a known image, asset, logo, item or combination thereof; and
cropping a calibration stream to calibrate only areas of interest.
19. An interactive container creation method for creating an interactive experience, the method comprising:
calibrating and identifying at least one interactive container;
identifying at least one of an asset and a shape to be defined as a touch area in the interactive container;
identifying the x,y axis of at least one point of the at least one of the asset and the shape; and
creating a correlation between the touch area and at least a portion of a list,
wherein the list identifies a plurality of activities resulting from interaction related to the touch area, and
wherein each touch area of the interactive container produces an activity from among the plurality of activities identified in the list.
20. An interactive container creation system for creating an interactive experience, comprising:
a processor;
a storage medium comprising at least one of deployed data and touch area data,
wherein the processor is coupled to the storage medium;
a touch detector for generating the touch area data,
wherein the touch area data relates to at least one of an asset or a shape;
a touch listener, coupled to the processor and the touch detector, for determining an activity related to a touch area,
wherein each touch area of the interactive container produces an activity determined by the touch area data resulting from interaction related to the touch area; and
at least one input/output device for at least one of receiving input to the interactive container system or causing an action related to the interactive container system.
US17/138,208 2013-05-09 2020-12-30 System, method, and apparatus for an interactive container Pending US20210117040A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/138,208 US20210117040A1 (en) 2013-05-09 2020-12-30 System, method, and apparatus for an interactive container

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US13/890,709 US9360888B2 (en) 2013-05-09 2013-05-09 System and method for motion detection and interpretation
US14/535,823 US9465488B2 (en) 2013-05-09 2014-11-07 System and method for motion detection and interpretation
PCT/US2015/068192 WO2016109749A1 (en) 2014-12-30 2015-12-30 System and method for interactive projection
US14/985,044 US11233981B2 (en) 2014-12-30 2015-12-30 System and method for interactive projection
US201662311354P 2016-03-21 2016-03-21
US201662373272P 2016-08-10 2016-08-10
US15/258,973 US20160378267A1 (en) 2013-05-09 2016-09-07 System and Method for Motion Detection and Interpretation
US15/394,799 US10891003B2 (en) 2013-05-09 2016-12-29 System, method, and apparatus for an interactive container
US17/138,208 US20210117040A1 (en) 2013-05-09 2020-12-30 System, method, and apparatus for an interactive container

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/394,799 Continuation US10891003B2 (en) 2013-05-09 2016-12-29 System, method, and apparatus for an interactive container

Publications (1)

Publication Number Publication Date
US20210117040A1 true US20210117040A1 (en) 2021-04-22

Family

ID=59088325

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/394,799 Active US10891003B2 (en) 2013-05-09 2016-12-29 System, method, and apparatus for an interactive container
US17/138,208 Pending US20210117040A1 (en) 2013-05-09 2020-12-30 System, method, and apparatus for an interactive container

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/394,799 Active US10891003B2 (en) 2013-05-09 2016-12-29 System, method, and apparatus for an interactive container

Country Status (1)

Country Link
US (2) US10891003B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087351B (en) * 2018-07-26 2021-04-16 北京邮电大学 Method and device for carrying out closed-loop detection on scene picture based on depth information
CN109656416B (en) * 2018-12-28 2022-04-15 腾讯音乐娱乐科技(深圳)有限公司 Control method and device based on multimedia data and related equipment
US11874929B2 (en) * 2019-12-09 2024-01-16 Accenture Global Solutions Limited Method and system for automatically identifying and correcting security vulnerabilities in containers

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110102438A1 (en) * 2009-11-05 2011-05-05 Microsoft Corporation Systems And Methods For Processing An Image For Target Tracking

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4843410A (en) 1988-02-04 1989-06-27 Lifetouch National School Studios Inc. Camera and projector alignment for front screen projection system
US5933132A (en) 1989-11-07 1999-08-03 Proxima Corporation Method and apparatus for calibrating geometrically an optical computer input system
US5528263A (en) 1994-06-15 1996-06-18 Daniel M. Platzker Interactive projected video image display system
JP3794180B2 (en) 1997-11-11 2006-07-05 セイコーエプソン株式会社 Coordinate input system and coordinate input device
US6031519A (en) 1997-12-30 2000-02-29 O'brien; Wayne P. Holographic direct manipulation interface
US6512536B1 (en) 1999-02-09 2003-01-28 The United States Of America As Represented By The United States National Aeronautics And Space Administration Cable and line inspection mechanism
US7046838B1 (en) 1999-03-30 2006-05-16 Minolta Co., Ltd. Three-dimensional data input method and apparatus
DE10007891C2 (en) 2000-02-21 2002-11-21 Siemens Ag Method and arrangement for interacting with a representation visible in a shop window
US6889064B2 (en) 2000-03-22 2005-05-03 Ronald Baratono Combined rear view mirror and telephone
AU6262501A (en) 2000-05-29 2001-12-11 Vkb Inc. Virtual data entry device and method for input of alphanumeric and other data
USRE40368E1 (en) 2000-05-29 2008-06-10 Vkb Inc. Data input device
JP2002140630A (en) 2000-11-01 2002-05-17 Sony Corp System and method for clearing contents charge based on ticket
US6907418B2 (en) * 2000-12-21 2005-06-14 Metabiz Co., Ltd. Advertisement servicing system using e-mail arrival notifying program and method therefor
US6759979B2 (en) 2002-01-22 2004-07-06 E-Businesscontrols Corp. GPS-enhanced system and method for automatically capturing and co-registering virtual models of a site
JP4532982B2 (en) 2004-05-14 2010-08-25 キヤノン株式会社 Arrangement information estimation method and information processing apparatus
TWI271549B (en) 2004-10-14 2007-01-21 Nanophotonics Ltd Rectilinear mirror and imaging system having the same
WO2006060746A2 (en) 2004-12-03 2006-06-08 Infrared Solutions, Inc. Visible light and ir combined image camera with a laser pointer
WO2006086508A2 (en) 2005-02-08 2006-08-17 Oblong Industries, Inc. System and method for genture based control system
WO2007063477A2 (en) * 2005-12-02 2007-06-07 Koninklijke Philips Electronics N.V. Depth dependent filtering of image signal
US8441467B2 (en) 2006-08-03 2013-05-14 Perceptive Pixel Inc. Multi-touch sensing display through frustrated total internal reflection
US8094129B2 (en) 2006-11-27 2012-01-10 Microsoft Corporation Touch sensing using shadow and reflective modes
JP5455639B2 (en) 2006-12-08 2014-03-26 ジョンソン コントロールズ テクノロジー カンパニー Display device and user interface
KR101108684B1 (en) * 2007-06-19 2012-01-30 주식회사 케이티 Apparatus, Method for Providing Three Dimensional Display and Terminal for Three Dimensional Display
US8100373B2 (en) 2007-11-01 2012-01-24 Meyer Christopher E Digital projector mount
WO2009141855A1 (en) 2008-05-23 2009-11-26 新世代株式会社 Input system, input method, computer program, and recording medium
WO2010029415A2 (en) * 2008-09-10 2010-03-18 Opera Software Asa Method and apparatus for providing finger touch layers in a user agent
NL2002211C2 (en) 2008-11-14 2010-05-17 Nedap Nv INTELLIGENT MIRROR.
US20100302138A1 (en) 2009-05-29 2010-12-02 Microsoft Corporation Methods and systems for defining or modifying a visual representation
KR101070864B1 (en) 2009-12-11 2011-10-10 김성한 optical touch screen
US8941620B2 (en) 2010-01-06 2015-01-27 Celluon, Inc. System and method for a virtual multi-touch mouse and stylus apparatus
US8452109B2 (en) * 2010-01-11 2013-05-28 Tandent Vision Science, Inc. Image segregation system with method for handling textures
US8730309B2 (en) 2010-02-23 2014-05-20 Microsoft Corporation Projectors and depth cameras for deviceless augmented reality and interaction
JP5648299B2 (en) 2010-03-16 2015-01-07 株式会社ニコン Eyeglass sales system, lens company terminal, frame company terminal, eyeglass sales method, and eyeglass sales program
US8957919B2 (en) 2010-04-05 2015-02-17 Lg Electronics Inc. Mobile terminal and method for displaying image of mobile terminal
US8485668B2 (en) 2010-05-28 2013-07-16 Microsoft Corporation 3D interaction for mobile device
TW201201077A (en) 2010-06-18 2012-01-01 Nlighten Trading Shanghai Co Ltd A light uniforming system of projected touch technology
US9760123B2 (en) * 2010-08-06 2017-09-12 Dynavox Systems Llc Speech generation device with a projected display and optical inputs
US20120113223A1 (en) 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
US9063573B2 (en) 2011-02-17 2015-06-23 The Board Of Trustees Of The Leland Stanford Junior University Method and system for touch-free control of devices
US20120262366A1 (en) 2011-04-15 2012-10-18 Ingeonix Corporation Electronic systems with touch free input devices and associated methods
US9372540B2 (en) 2011-04-19 2016-06-21 Lg Electronics Inc. Method and electronic device for gesture recognition
KR20130055119A (en) 2011-11-18 2013-05-28 전자부품연구원 Apparatus for touching a projection of 3d images on an infrared screen using single-infrared camera
US9573052B2 (en) * 2012-01-31 2017-02-21 Konami Digital Entertainment Co., Ltd. Game device, control method for a game device, and non-transitory information storage medium
US8933912B2 (en) * 2012-04-02 2015-01-13 Microsoft Corporation Touch sensitive user interface with three dimensional input sensor
US9292923B2 (en) 2013-03-06 2016-03-22 The Nielsen Company (Us), Llc Methods, apparatus and articles of manufacture to monitor environments


Also Published As

Publication number Publication date
US20170185228A1 (en) 2017-06-29
US10891003B2 (en) 2021-01-12

Legal Events

Date Code Title Description
AS Assignment

Owner name: OMNI CONSUMER PRODUCTS, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HOWARD, STEPHEN;MCNUTT, LARRY;REEL/FRAME:054874/0172

Effective date: 20181129

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED