WO2020169084A1 - A method and system for selecting and displaying augmented reality content - Google Patents


Info

Publication number
WO2020169084A1
Authority
WO
WIPO (PCT)
Prior art keywords
markers
marker
marker groups
user device
user
Prior art date
Application number
PCT/CN2020/076186
Other languages
French (fr)
Inventor
Antoine VANDENHESTE
Original Assignee
100 Fire Limited
Priority date
Filing date
Publication date
Application filed by 100 Fire Limited filed Critical 100 Fire Limited
Publication of WO2020169084A1 publication Critical patent/WO2020169084A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Definitions

  • the present disclosure relates to a method and system for selecting and displaying augmented reality content to a user.
  • Augmented reality may be considered to be the blending of digital elements into real world elements.
  • the digital elements (which may be a digital representation of a real world object or any other digital data or reference) may be interactive and can provide sensory feedback to a user, e.g. visual feedback, haptic feedback, audible feedback, etc., when certain conditions are met.
  • augmented reality systems operate by overlaying interactive digital elements on to a real world environment as captured by a digital device so as to provide some form of interaction with a user. This interaction may be in the form of games, teaching guides, referencing or simply an artistic manipulation of the real world for entertainment purposes.
  • Augmented reality has become increasingly popular due to the improvement of mobile user devices. Technological advances in mobile devices have allowed augmented reality to become more accessible due to superior processing and storage power as well as advancements in networking and data exchange. Nonetheless, there are still limitations on the use of augmented reality due to the large amount of real world data and the variability of such data in the real world. In turn, creating an augmented reality experience is often limited to a small set of real world objects. This can be very limiting and thus reduces the quality of such an experience.
  • the present disclosure relates to a method and system for selecting and displaying augmented reality content to a user on a user device.
  • the disclosed method and system for selecting and displaying augmented reality content selects markers or marker groups or image processing tools in a manner that conserves the user device’s memory and processing power, and improves the efficiency of processing and memory use.
  • the method and system for selecting and displaying augmented reality content as disclosed herein is advantageous because it increases the range of operation of the user device and allows the user device to recognise several categories of objects without needing to store every conceivable type of marker, marker group or image processing tool.
  • a method for selecting and displaying augmented reality content to a user on a user device comprising the steps of:
  • each of the downloaded marker groups is associated with a time to live parameter defining a time limit for which the one or more marker groups are stored on the user device; and
  • each marker group comprises one or more markers and augmented reality content associated with the one or more markers.
  • each marker group and contents of each marker group corresponds to a category of real world object.
  • each marker group comprises a reference to the one or more markers and a reference to the one or more augmented reality content associated with each marker, wherein the user device can use the references to access the one or more markers and/or the augmented reality content from a server.
  • each marker group comprises one or more image processing tools, each image processing tool configured to identify a category of object within a digital image or identify one or more markers associated with the category of object within a digital image.
  • each image processing tool is a convolution neural network, wherein each convolution neural network is configured to process a digital image and identify a category of object.
  • the method comprises the additional steps of:
  • the trigger comprises processing user data to infer one or more markers or marker groups based on the processing of the user data, wherein the user data is profile data associated with a specific user and the user data being stored as part of a user profile.
  • the user data comprises one or more of: location of the user, data from the user device sensors, purchase history of the user, data from external APIs on the user device, proximity to other users based on the location of another user’s user device, user social media data, user preferences and proximity to wireless or wired communication nodes.
  • the method comprises the additional steps of:
  • the method further comprises:
  • markers or marker groups within the tracked marker database are the markers or marker groups that are actively used to process the received one or more digital images.
  • the method comprises the additional steps of:
  • markers or marker groups from a server that correspond to the inferred markers or marker groups, wherein the server stores a plurality of markers or marker groups,
  • the method comprises the additional steps of:
  • the trigger comprises detecting, by the user device, a marker node, wherein the marker node comprises markers or marker groups or a reference to markers or marker groups associated with the marker node.
  • the method comprises the additional steps of:
  • the method comprises the additional steps of:
  • markers or marker groups within the tracked marker database are the markers or marker groups that are actively used to process the received one or more digital images.
  • the method comprises the additional steps of:
  • the downloaded markers or marker groups are deleted from a memory unit of the user device once the time defined in the time to live parameters has expired.
  • the method comprises the additional steps of:
  • the marker node comprises one or more of a machine readable code, NFC signal, RFID signal, Bluetooth signal and Wifi signal.
  • the method as described above is executed by a processor of a user device.
  • a method of selecting and displaying augmented reality content on a user device comprising a memory unit and a processor, the method being executed by the user device, the method comprising the steps of:
  • markers or marker groups within the tracked markers database are prioritised for use over other markers or marker groups within the memory unit of the user device,
  • the method comprises the additional step of presenting the identified augmented reality content on the user device, wherein the augmented reality content is overlaid onto the real world object on a user interface of the user device.
  • the predicted markers are pre-emptively identified and loaded in the tracked marker database based on the user data, prior to receiving the one or more digital images.
  • each marker or marker group includes time to live parameters that define the amount of time a marker or marker group remains cached in the memory unit of the user device.
  • the method comprises the additional step of deleting markers or marker groups from tracked markers database if the tracked markers database is full, wherein markers or marker groups with the lowest time to live parameter are deleted from the tracked markers database.
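The eviction policy described above — delete the lowest-TTL entries when the tracked markers database is full — can be sketched as follows. This is a minimal sketch under stated assumptions: the class names, the fixed capacity, and the `time_remaining` helper are all illustrative, not taken from the disclosure.

```python
import time
from dataclasses import dataclass, field


@dataclass
class MarkerGroup:
    # Illustrative stand-in for a marker group: an identifier plus a
    # time-to-live parameter (seconds the group may remain cached).
    group_id: str
    ttl: float
    loaded_at: float = field(default_factory=time.time)

    def time_remaining(self) -> float:
        return max(0.0, (self.loaded_at + self.ttl) - time.time())


class TrackedMarkerDatabase:
    """Fixed-capacity cache of actively tracked marker groups."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.groups = {}

    def add(self, group: MarkerGroup) -> None:
        # If the database is full, delete the group with the lowest
        # remaining time to live, as described in the disclosure.
        if len(self.groups) >= self.capacity:
            lowest = min(self.groups.values(),
                         key=lambda g: g.time_remaining())
            del self.groups[lowest.group_id]
        self.groups[group.group_id] = group


db = TrackedMarkerDatabase(capacity=2)
db.add(MarkerGroup("animals", ttl=60))
db.add(MarkerGroup("cars", ttl=600))
db.add(MarkerGroup("leaders", ttl=300))  # evicts "animals" (lowest TTL)
```

Evicting on lowest remaining TTL favours groups the system expects to need for longer, which matches the stated goal of keeping only the most likely-needed markers resident.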
  • the method comprises refreshing one or more markers or marker groups within the tracked markers database if the step of predicting markers based on user data identifies markers or marker groups already present in the tracked markers database.
  • the step of predicting markers or markers groups comprises identifying user preferences or user behaviour based on processing stored user data, and wherein the step of predicting markers or marker groups is executed by a prediction module.
  • the prediction module is a neural network or a machine learning program that is executed by the user device.
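The disclosure specifies a neural network or machine learning program as the prediction module; as a stand-in, the idea can be illustrated with a simple frequency count over categories in the user's stored history. Everything here — the function name, the `user_data` fields, and the frequency heuristic itself — is an illustrative assumption, not the disclosed model.

```python
from collections import Counter


def predict_marker_groups(user_data: dict, top_n: int = 3) -> list:
    """Toy stand-in for the prediction module: rank object categories
    by how often they appear in the user's purchase and browsing
    history, and return the top candidates as predicted marker groups."""
    history = (user_data.get("purchase_history", [])
               + user_data.get("browsing_history", []))
    counts = Counter(item["category"] for item in history)
    return [category for category, _ in counts.most_common(top_n)]


user_data = {
    "purchase_history": [{"category": "sneakers"}, {"category": "sneakers"}],
    "browsing_history": [{"category": "watches"}],
}
predicted = predict_marker_groups(user_data)  # ["sneakers", "watches"]
```

A trained model would replace the frequency count, but the interface is the same: user data in, a ranked list of marker groups to pre-load out.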
  • the method comprises the additional steps of:
  • markers or marker groups identical to the predicted markers or marker groups from the server, if matching markers or marker groups are located on the server,
  • the method comprises the step of deleting markers or marker groups having the lowest time to live parameter from the memory unit such that new markers or marker groups can be downloaded and stored in the memory unit.
  • the augmented reality content associated with a marker or marker group is stored in the memory unit of the user device, accessed from the user device and presented on the user device if a corresponding marker is detected in a digital image.
  • a method for selecting and displaying augmented reality content on a user device comprising the steps of:
  • the one or more digital images are transmitted to the server,
  • the method comprises the additional steps of:
  • the method comprises downloading augmented reality content corresponding to the identified one or more markers or marker groups onto a memory unit of the user device.
  • the digital image transmitted to the server is an image of everything visible in the real world environment as captured by an image capture apparatus of the user device.
  • the method comprises the additional steps of:
  • a system for selecting a processing tool comprising:
  • a computing unit including a processor, a memory unit, a user profile database and a prediction module,
  • the user profile database configured to store user data
  • the computing unit configured to:
  • the user data comprises one or more of: location of the user, data from the user device sensors, purchase history of the user, data from external APIs on the user device, proximity to other users based on other user location, social media data, user preferences, proximity to wireless or wired communication nodes.
  • the prediction module is a machine learning module that is executed by the processor of the computing unit, the processor configured to determine user preferences based on the user data.
  • the machine learning module is configured to process user data and determine user preferences, the user preferences being stored in the user profile database in association with a user profile, and the user preferences constantly updated as new user data is received.
  • the processing tools are markers or marker groups, the markers or marker groups being used to process received images to identify a specific object, and the system being configured to select specific markers or marker groups based on the user preferences.
  • the processing tools are convolution neural networks, each convolution neural network configured to process one or more images and identify a specific object within the images, and the system being configured to select a specific processing tool based on the user preferences.
  • the selected processing tool is cached such that the use of the selected processing tool is prioritised by the processor and the selected processing tool is utilised by the processor to process one or more received images to identify a specific object within the one or more images.
  • the system comprises a processing tool database configured to store the plurality of processing tools, the computing unit configured to select a processing tool from the processing tool database based on the user preferences.
  • the system comprises a content database, the content database storing augmented reality content, the computing unit configured to select augmented content corresponding to the identified object in the digital images and/or select augmented reality content corresponding to the selected processing tool.
  • the content database storing augmented reality content corresponding to markers or marker groups
  • the computing device configured to select augmented reality content corresponding to the selected markers or marker groups based on the user preferences.
  • the computing device further configured to present the augmented reality content on a user device in spatial relation to the identified object.
  • a user device comprises the computing unit.
  • a server comprises the computing unit.
  • This invention may also be said broadly to consist in the parts, elements and features referred to or indicated in the specification of the application, individually or collectively, and any or all combinations of any two or more said parts, elements or features, and where specific integers are mentioned herein which have known equivalents in the art to which this invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
  • the term “augmented reality” as used herein means both augmented reality and mixed reality.
  • augmented reality refers to the integration of digital information on a user’s view of the real world, thus providing a composite view.
  • real world or “real life” or variants thereof relates to things that are in the real world i.e. relates to things that are in the real environment, and not related to digital objects or digital environments.
  • marker or “image marker” or variants thereof means a two dimensional or three dimensional image for which an augmented reality software application will actively search for and use to determine either one or more of orientation or size in a three dimensional space or category of object present in an image.
  • a marker can be any element that is present on a real world object or present in an image, and can be visible to the human eye or may be invisible to the human eye but discernible to an image capture apparatus of a user device e.g. a camera. Some examples of markers are specific features of an image or specific orientation of colours or light frequencies or a pattern etc.
  • marker group refers to markers and content associated with each specific marker.
  • the term “marker group” can also refer to references to markers and references to content associated with the specific markers.
  • tracked markers or variants thereof refers to markers or marker groups that are actively searched for while processing a digital image.
  • the term “marker node” refers to a node or device containing information.
  • a marker node embodies or contains a marker group or information about a marker group such as a location of the marker group or contents of the marker group. Examples of a marker node are a barcode, QR code, NFC signal, RFID signal, Bluetooth signal, Wifi signal or any other device that can be detected by the user device.
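The two cases above — a node that embodies a marker group directly versus one that merely references it — can be sketched as a small dispatch. The field names, the `ar://` reference scheme, and the catalogue lookup standing in for a network request are all hypothetical.

```python
def fetch_from_server(reference: str) -> list:
    # Stand-in for a network request resolving a marker-group
    # reference; a real implementation would contact the server.
    catalog = {"ar://zoo": ["lion", "penguin"]}
    return catalog.get(reference, [])


def resolve_marker_node(node: dict) -> list:
    """A marker node either contains marker groups directly (e.g.
    data encoded in a QR code) or carries a reference from which
    they can be fetched (e.g. a URL in an NFC or Bluetooth payload)."""
    if "marker_groups" in node:
        return node["marker_groups"]  # node embodies the groups
    if "reference" in node:
        return fetch_from_server(node["reference"])  # node points at them
    return []


groups = resolve_marker_node({"reference": "ar://zoo"})
```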
  • the term “classification indicator” refers to a scannable element that is present on or adjacent to a real world object.
  • the classification indicator can be scanned by the user device to cause the user device to communicate with the server or the classification indicator can be scanned by the user device to facilitate a predefined procedure to be executed by the user device.
  • the word “comprising” and its variations, such as “comprises” has its usual meaning in accordance with International patent practice. That is, the word does not preclude additional or unrecited elements, substances or method steps, in addition to those specifically recited. Thus, the described apparatus, system, substance or method may have other elements, substances or steps in various embodiments.
  • the term “comprising” (and its grammatical variations) as used herein are used in the inclusive sense of “having” or “including” and not in the sense of “consisting only of” .
  • Figure 1 illustrates an example system for selecting and displaying augmented reality content including a user device and a server.
  • Figure 2 illustrates a schematic diagram of a user device and its components.
  • Figure 3 illustrates an example embodiment of the server and its components.
  • Figure 4 illustrates an example of a method of selecting and displaying augmented reality content on a user device by predicting markers or marker groups and obtaining the predicted markers or marker groups.
  • Figure 5 illustrates a further example method of selecting and displaying augmented reality content using a marker node and contents associated with a marker node.
  • Figure 6 illustrates a method for selecting and displaying augmented reality content based on a classification indicator that may be encountered.
  • Figure 7 shows example operation of the user device to select and display augmented reality content.
  • Figure 8 shows example operation of the user device and server to select and display augmented reality content.
  • Figures 9 and 10 illustrate real world objects that can be used to display augmented reality content, wherein the objects are shirts.
  • the present invention relates to a method and system for selecting and displaying augmented reality content to a user on a user device.
  • Augmented reality is the blending of digital elements into real world elements.
  • the digital elements i.e. digital information
  • the digital elements are interactive digital elements and can provide sensory feedback to a user, e.g. visual feedback, haptic feedback, audible feedback, etc.
  • Augmented reality includes overlaying interactive digital elements on to the real world.
  • An augmented reality system generally comprises a user device that includes one or more sensors that collect information about a user’s real world environment, and a processor configured to process the information about the real world and generate digital elements based on the processed information.
  • the digital elements may be displayed as overlaid elements on a user device.
  • the digital elements are presented and perceived as being part of the real world.
  • Augmented reality systems generally identify one or more real world objects from a digital image and then overlay digital elements onto a user device.
  • the use of augmented reality has been facilitated by the improvement in the processing capability of various mobile user devices such as, for example, smartphones, tablets, smart watches (e.g. Apple Watch), smart glasses (e.g. Google Glass) or other wearable devices.
  • augmented reality frameworks such as Apple’s ARKit and Google’s ARCore have become highly efficient and able to recognize markers (e.g. ARAnchors) and track their positions and orientations in three dimensional space. With the ability to track such markers, it is possible to develop software applications that actively search for these image markers and display information or media content on top or next to the markers.
  • markers e.g. ARAnchors
  • Existing AR applications on existing hardware require the user device to physically store a copy or copies of the marker or markers it is searching for in its memory in order to identify it as a marker in an image.
  • the user device will be required to store a large number of markers to correspond to various objects.
  • Current user devices do not have the memory to store so many markers because there is a finite limit to user device memory.
  • the processing power of current user devices limit the number of markers that can be searched for at once.
  • because augmented reality takes place in real time over an image capture apparatus (e.g. a camera), images from the camera must be processed every fraction of a second.
  • the user device processes the images by finding matching markers from the detected markers from the local list in order to generate and present augmented reality content corresponding to the markers.
  • with the current existing technology (i.e. current products), the application will need to search for large numbers of markers.
  • Further, storing markers and corresponding content would occupy a large amount of space on the user’s device. This is especially the case for a large number of markers.
  • the present disclosure relates to a method and system for selecting and displaying augmented reality content on a user device.
  • the presently disclosed method and system address the shortcomings of current AR technology.
  • the presently disclosed method and system provides a more efficient manner of downloading and loading markers to manage the user device’s memory and processing power.
  • the presently described method of selecting and displaying augmented reality content comprises downloading or selecting markers before they are encountered.
  • the method and system anticipate the markers or marker groups that will be required and download or select these markers in advance, so that the user device stores only the markers that are most likely to be needed. This reduces the space and processing requirements of the user device.
  • a method for selecting and displaying augmented reality content to a user on a user device comprising the steps of: selecting and/or downloading one or more marker groups onto a memory unit of the user device, in response to a trigger; receiving one or more digital images at the user device via an image capturing apparatus of the user device; processing the digital image using the downloaded one or more marker groups, wherein each of the downloaded marker groups is associated with a time to live parameter defining a time limit for which the one or more marker groups are stored on the user device; and deleting the stored marker groups at the expiration of the time limit defined in the time to live parameter, or maintaining the downloaded marker groups on the user device in response to a further trigger.
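The trigger–download–expire flow of the claim above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the class and method names (`UserDeviceCache`, `on_trigger`, `purge_expired`) are invented for illustration, and the actual download of marker data is elided.

```python
import time


class UserDeviceCache:
    """Sketch of the claimed flow: marker groups are stored in
    response to a trigger, kept for their time-to-live, refreshed
    by a further trigger, and deleted once the time limit expires."""

    def __init__(self):
        self.marker_groups = {}  # group_id -> expiry timestamp

    def on_trigger(self, group_id: str, ttl: float) -> None:
        # Downloading the group is elided; record its expiry time.
        self.marker_groups[group_id] = time.time() + ttl

    def refresh(self, group_id: str, ttl: float) -> None:
        # A further trigger maintains the group on the device.
        if group_id in self.marker_groups:
            self.marker_groups[group_id] = time.time() + ttl

    def purge_expired(self) -> None:
        now = time.time()
        self.marker_groups = {
            gid: expiry
            for gid, expiry in self.marker_groups.items()
            if expiry > now
        }


cache = UserDeviceCache()
cache.on_trigger("animals", ttl=0.01)
cache.on_trigger("cars", ttl=60)
time.sleep(0.05)
cache.purge_expired()  # "animals" has expired; "cars" remains
```

A real device would run the purge on a timer alongside image processing, so that expired marker groups stop consuming memory between frames.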
  • the trigger comprises processing user data to infer one or more markers or marker groups based on the processing of the user data, wherein the user data is profile data associated with a specific user and the user data being stored as part of a user profile.
  • the trigger comprises detecting, by the user device, a marker node, wherein the marker node comprises markers or marker groups or a reference to markers or marker groups associated with the marker node.
  • a method of selecting and displaying augmented reality content on a user device comprising a memory unit and a processor, the method being executed by the user device, and comprises the steps of: accessing user data of the user associated with the user device; predicting one or more markers or marker groups the user is likely to require based on the user data; checking if locally stored markers or marker groups in a memory unit of the user device match the predicted markers or marker groups; loading the locally stored markers and marker groups that match the predicted markers or marker groups into a tracked marker database, if the locally stored markers or marker groups match the predicted markers or marker groups; wherein the markers or marker groups within the tracked markers database are prioritised for use over other markers or marker groups within the memory unit of the user device; receiving one or more digital images of a real world object; processing the received digital image using the markers or marker groups from the tracked marker database to identify corresponding augmented reality content.
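The matching-and-loading step above can be sketched as a simple lookup: predicted marker groups that already exist in local storage are copied into the tracked marker database, which the image-processing step then consults first. The function and variable names here are illustrative, not from the disclosure.

```python
def load_tracked_markers(predicted: list, local_store: dict) -> dict:
    """Load locally stored marker groups that match the prediction
    into the tracked marker database (sketched as a plain dict).
    Predicted groups absent from local storage are skipped; per the
    disclosure they would instead be fetched from the server."""
    return {gid: local_store[gid] for gid in predicted if gid in local_store}


local_store = {"animals": ["m1", "m2"], "cars": ["m3"]}
tracked = load_tracked_markers(["animals", "leaders"], local_store)
# "animals" is loaded; "leaders" is not locally stored
```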
  • the predicted markers are pre-emptively identified and loaded in the tracked marker database based on the user data, prior to receiving the one or more digital images and each marker or marker group includes time to live parameters that define the amount of time a marker or marker group remains cached in the memory unit of the user device.
  • a method for selecting and displaying augmented reality content on a user device comprising the steps of: detecting a classification indicator in one or more received digital images of a real world object; if a classification indicator is detected in the one or more digital images, the one or more digital images are transmitted to the server; identifying the classification indicator in the one or more digital images at the server; identifying one or more markers or marker groups that correspond to the classification indicator that are stored on the server; downloading the identified one or more markers or marker groups corresponding to the classification indicator onto the memory unit of the user device from the server, and downloading time to live parameters associated with the identified one or more markers or marker groups onto the memory unit of the user device, such that the markers or marker groups and associated time to live parameters are locally stored on the user device.
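The classification-indicator exchange above can be sketched with the server mocked as a local lookup table; in the disclosure the server is a remote machine reached over the network, and the indicator value, field names, and `SERVER_INDEX` table here are all invented for illustration.

```python
from typing import Optional

# Mock of the server-side index: classification indicator ->
# corresponding marker groups and their time-to-live parameter.
SERVER_INDEX = {
    "indicator:shirts": {
        "marker_groups": ["shirt_front", "shirt_back"],
        "ttl": 3600,
    }
}


def handle_image_on_device(detected_indicator: Optional[str]) -> Optional[dict]:
    """Sketch of the device-side flow: if a classification indicator
    is detected in the image, query the (mocked) server and store the
    returned marker groups and TTL locally."""
    if detected_indicator is None:
        return None  # no classification indicator; nothing to download
    response = SERVER_INDEX.get(detected_indicator)
    if response is None:
        return None
    # Marker groups and TTL are now locally stored on the device.
    return {"groups": response["marker_groups"], "ttl": response["ttl"]}


result = handle_image_on_device("indicator:shirts")
```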
  • Figure 1 shows one example embodiment of a system 100 for selecting and displaying augmented reality content on a user device.
  • the system 100 comprises at least one user device 102 and a server 200.
  • the server is a computing device comprising at least a processor, a memory unit, additional hardware and/or software modules and wireless communication capabilities.
  • the server 200 is arranged in two way communication with the user device 102.
  • the user device is a mobile user device that can be carried by the user.
  • the system 100 can comprise a plurality of user devices that are arranged in two way communication with the server 200.
  • the user devices are configured to communicate with each other too.
  • the user device 102 comprises at least a processor, a memory unit and an image capture apparatus to capture digital images (e.g. a digital camera) .
  • the user device 102 may be any one of a smartphone, tablet, smartwatch, wearable device, smart glasses (e.g. Google glasses) or any other mobile device.
  • the user device 102 is a smartphone and the server 200 is a CPU unit.
  • the server 200 communicates with the user device 102 through any suitable wireless communication network 103 such as a 4G, 5G network or any other suitable wireless communication network.
  • the network 103 may be a TCP/IP network.
  • the user device 102 further comprises an augmented reality application (AR app) 104 that is executed on the user device 102.
  • AR app is a software application that is downloaded onto the user device.
  • the AR app may be a web based application that can be accessed via a browser on the user device.
  • the AR app includes computer readable and executable instructions.
  • the AR app is stored within a memory unit of the user device 102.
  • the user device 102 is configured to select and display augmented reality content on the user device, when the instructions of the AR app are executed by hardware components of the user device 102, e.g. a processor of the user device 102.
  • the AR app also provides the user device 102 with the functionality to access and communicate with the server 200.
  • the user device 102 includes a camera or other suitable image capture device that is configured to capture digital images of a real world object 105.
  • the camera may capture single digital images, a plurality of digital images, or a stream of digital images (i.e. a video).
  • the user device 102 locally stores a plurality of markers or marker groups stored on the user device.
  • the user device 102 locally stores content associated with the markers or marker groups.
  • the user device 102 is configured to utilise the locally stored markers or marker groups, and associated content to identify markers in a digital image and present augmented reality content.
  • the locally stored markers or marker groups are stored for a predefined period of time defined as a time to live parameter.
  • the markers or marker groups are deleted at the end of the time defined in the time to live parameter.
  • the AR app 104 includes a timer function that is configured to determine and calculate the time left as per the defined time to live parameter.
  • Each marker group may comprise one or more markers and augmented reality content associated with the one or more markers.
  • Each marker group may also include an image processing tool that can be used to process the received digital images.
  • the image processing tool may be a convolution neural network or another type of neural network or another software module or hardware module that can be used to identify objects within digital images.
  • the user device 102 is configured to utilise the locally stored markers to identify real world objects within the received digital images.
  • the user device 102 is further configured to present augmented reality content corresponding to the identified markers within the digital image.
  • the user device 102 includes sets of markers that correspond to a particular category of object or correspond to a specific object.
  • the user device 102 comprises marker groups that correspond to the specific category of object or the user device 102 uses image processing tools e.g. convolution neural networks that are specifically configured to identify a category of object.
  • the category of objects can be animals or world leaders or cars or any other categories.
  • the server 200 is configured to store a plurality of markers or marker groups.
  • the server 200 also stores content associated with the various markers.
  • the markers or marker groups stored in the server 200 are arranged by category of object. Alternatively, the markers may be arranged in other categories that relate to various elements in the real world that may be compatible with the AR app 104, in order to provide AR content. Some examples include objects at a location, people, a news event, and so on.
  • the server 200 is further configured to store multiple user profiles, each user profile associated with a unique user. Each user accesses the server 200 through the AR app and creates a user profile as part of the sign in process. User data is stored in relation to the user profile. User data for each user is tracked by the user device. The user data is continuously updated as new user data tracked by the user device 102 is transmitted to the server 200 for storing. The user device 102 also stores a local copy of the user profile of the particular user that is either logged into the AR app on the device 102 or the person that owns the user device 102.
  • User data comprises at least the name, the IP address of the user device, location of the user, data from the user device sensors, purchase history, device data, data captured on the device, browsing history of the user, user preferences, and social media data of the user.
  • Other user data can also include additional location data such as for example proximity to other users and proximity to wireless or wired communication nodes.
  • the user data may also include information from the user device sensors, e.g. accelerometers, gyroscopes etc., as well as information from external APIs on the user device.
  • the user data is gathered continuously at specified time intervals.
  • the user data is updated within the user profile on the user device 102 and the server 200.
  • the updated user data is used to predictively download markers or marker groups onto the user device 102 for use in processing images and identifying objects in the images. Older user data is overwritten in the user device 102.
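The predictive download described in the bullets above can be sketched as follows. This is an illustrative sketch only: the user-data field names, the category names and the simple substring-count scoring are assumptions, as the document does not specify a concrete prediction algorithm.

```python
def score_marker_groups(user_data, catalogue):
    """Score each available marker-group category by how often it
    appears in the tracked user data (hypothetical fields)."""
    signals = (
        user_data.get("browsing_history", [])
        + user_data.get("purchase_history", [])
        + user_data.get("locations", [])
    )
    return {group: sum(1 for s in signals if group in s) for group in catalogue}

def groups_to_prefetch(user_data, catalogue, top_k=2):
    """Return up to top_k marker-group categories worth downloading
    onto the user device before they are encountered."""
    scores = score_marker_groups(user_data, catalogue)
    ranked = sorted(catalogue, key=lambda g: scores[g], reverse=True)
    return [g for g in ranked[:top_k] if scores[g] > 0]
```

A device-side scheduler could run such a ranking at the same intervals at which the user data is gathered, then request the top-ranked groups from the server.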
  • FIG. 2 shows an example embodiment of the user device 102 and its components.
  • the user device 102 comprises suitable components necessary to receive, store and execute appropriate computer instructions.
  • the user device 102 in this example is a smartphone and the illustrated figure is a generalised schematic that does not illustrate all the components of the user device 102.
  • Other user devices, e.g. Google Glass or tablets or smart wearables, will include similar components to those described.
  • the user device comprises a processor 110.
  • the processor may be any suitable processor e.g. an ASIC or other chip.
  • the user device 102 may include a “system on chip” (SoC) that comprises the processor 110 (i.e. CPU) and other hardware components e.g. video processor, display processor, LTE modem and GPU all formed as a single integrated chip.
  • the device 102 comprises a memory unit.
  • the user device comprises random access memory (RAM) 112 and flash memory 114.
  • RAM 112 may be a RAM chip e.g. LPDDR4 or LPDDR4X or LPDDR3.
  • the flash memory 114 includes pre-installed applications and the operating system of the user device 102.
  • the user device 102 further comprises one or more communication units 116.
  • the user device 102 comprises a modem 116, which may be, for example, an LTE modem implemented on an LTE chip.
  • the device 102 comprises a battery 118 that is configured to power the device when the device is used remotely.
  • the user device 102 further comprises additional sensors (shown as a single block) 120.
  • the sensor block 120 can include multiple sensors e.g. accelerometers, gyroscopes, ambient light sensors, digital compass, proximity sensors or other suitable sensors.
  • the user device 102 also comprises an image capture apparatus 122 that is in electronic communication with the flash memory 114 and the processor 110.
  • the image capture apparatus 122 is a digital camera that also includes an integrated sensor, lens and image processor for initial processing of the image to generate a clear digital image.
  • the user device 102 comprises I/O devices 124, which in this example is a touch screen.
  • the touch screen 124 can be used to output information and receive information inputs.
  • the user device 102 comprises a markers database 130.
  • the markers database 130 stores markers locally on the user device 102, e.g. in the flash memory 114.
  • the markers database 130 may include markers, or may store marker groups that include markers and/or associated content and/or image processing tools.
  • the marker database 130 is a local repository of markers that can be updated and used to access markers.
  • the user device 102 further comprises a tracked marker database 132 that is stored in the flash memory 114 of the device.
  • the tracked marker database 132 contains a plurality of markers that are actively used to process received digital images.
  • the user device 102 may include a content database 134.
  • the content database 134 is configured to store augmented reality content, and the content is stored in relation to particular markers in the markers database 130 or the tracked markers database 132.
  • the user device 102 further includes a time to live parameter database that stores the various time to live parameters associated with the markers stored on the user device 102.
  • the processor 110 of the user device is configured to track the time limit as per the time to live parameter for each marker, and then delete the specific marker and associated content once the time defined in the time to live parameter has expired.
  • the time to live parameter (TTL) defines the amount of time a marker or marker group remains cached within the memory, e.g. in a cache memory, such that the marker is prioritised and fetched ahead of other data. Once the time period defined in the TTL expires, the marker group (or marker) may be deleted or moved out of the cache.
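The time-to-live behaviour described above can be sketched as a small cache in which a marker is kept until its TTL expires and is evicted on the next access. A minimal sketch, assuming an injectable clock for testability; class and method names are illustrative.

```python
import time

class TTLMarkerCache:
    """Keeps marker content cached until its time-to-live expires."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._entries = {}  # marker id -> (content, absolute expiry time)

    def put(self, marker_id, content, ttl_seconds):
        # Cache the marker with an absolute expiry derived from its TTL.
        self._entries[marker_id] = (content, self._clock() + ttl_seconds)

    def get(self, marker_id):
        # Return cached content, evicting the marker if its TTL expired.
        entry = self._entries.get(marker_id)
        if entry is None:
            return None
        content, expiry = entry
        if self._clock() >= expiry:
            del self._entries[marker_id]
            return None
        return content
```

In practice the processor 110 could also sweep the cache periodically rather than evicting only on access; both strategies honour the same TTL semantics.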
  • the user device 102 also comprises a user profile database 136.
  • the user profile database 136 stores user data in a memory (e.g. flash memory) of the user device 102.
  • the user data of the user is tracked, updated and stored in the user profile.
  • the user data is used to anticipate a user’s needs and used to predict the markers or marker groups the user will likely need in order to identify digital images and generate augmented reality content.
  • the user device also comprises a prediction module 138.
  • the prediction module 138 may be a software module, e.g. a software program that can be executed by the processor. Alternatively, the prediction module 138 may be a hardware module. In one example the prediction module 138 is a machine learning algorithm that is configured to process the user data to anticipate the user’s needs and predict required markers or marker groups. The prediction module 138 may be a machine learning algorithm that operates based on probability or behavioural patterns of the user or user preferences, to predict the markers or marker groups that the user requires to identify images.
  • the user device 102 may comprise a database 140 to store image processing tools, e.g. convolutional neural networks.
  • the convolutional neural networks may be stored in the marker database, or as part of a marker group, or may be stored in a separate image processing tool database 140.
  • the user device 102 may store multiple neural networks, where each neural network is trained to identify a specific category of real world object, for example world leaders or cars or airplanes or any other objects.
  • An appropriate neural network can be selected based on processing the user data, i.e. a neural network required to process digital images may be predicted based on processing the user data.
  • FIG. 3 shows an example embodiment of the server 200 and its components.
  • the server 200 comprises at least a processor and an associated memory unit or a plurality of memory banks.
  • the server 200 comprises suitable components necessary to receive, store and execute appropriate computer instructions.
  • the components may include a processor 202 (i.e. a processing unit), read-only memory (ROM) 204, random access memory (RAM) 206, and input/output devices such as disk drives 208 and input devices 210 such as an Ethernet port, a USB port, etc.
  • the server 200 further comprises communications links 214 i.e. a communication module that is configured to facilitate wireless communications via a communication network.
  • the communication link 214 allows the server 200 to wirelessly link to a plurality of communication networks.
  • the communication link 214 may also allow the server to link with localised networks such as for example Wifi or other LAN (local area networks) .
  • At least one of a plurality of communications links may be connected to an external computing network through a telephone line or other type of communications link.
  • the processor 202 may comprise one or more electronic processors.
  • the processors may be microprocessors or FPGAs or any IC based processor.
  • the processor 202 comprises a plurality of linked processors that allow for increased processing speed and also provide redundancy.
  • the processor 202 may include adequate processors to provide some redundant processing power that can be accessed during periods of high need e.g. when multiple functions are being executed.
  • the server 200 may include storage devices such as a disk drive 208 which may encompass solid state drives, hard disk drives, optical drives or magnetic tape drives.
  • the server 200 may use a single disk drive or multiple disk drives.
  • the server 200 may also have a suitable operating system which resides on the disk drive or in the ROM of the server 200.
  • the operating system can be any suitable operating system, such as for example Windows or Mac OS or Linux.
  • the server 200 comprises a server marker database 220.
  • the server marker database 220 is configured to store a plurality of markers or marker groups.
  • the server 200 comprises a content database 222.
  • the content database is configured to store content (i.e. augmented reality content) .
  • the augmented reality content stored in the database 222 is related to the markers stored in the server marker database 220.
  • the server 200 can store several markers and associated content.
  • the server 200 can store more markers and associated content than the user device 102 can, since the server 200 comprises much greater memory space.
  • the server 200 also comprises a time to live parameter database 224.
  • the time to live parameter database 224 stores time to live parameters associated with each of the markers or marker groups stored in the marker database 220.
  • the databases 220, 222 and 224 may be stored in the memory unit of the server.
  • the server 200 also comprises an image classification module 226.
  • the image classification module 226 may be a software module or a software program.
  • the image classification module is configured to receive an image and classify one or more objects in the image; in particular, the image classification module 226 may identify a classification indicator present in the image.
  • the image classification module 226 is further configured to identify markers or marker groups that correspond to classified objects in the image.
  • the server 200 is configured to receive an image, classify objects in the image by identifying a classification indicator, identify markers or marker groups (or neural networks) for the classified object, and then transmit, to the user device 102, information identifying the specific markers to be used.
  • the user device 102 can download the markers or marker groups (or neural networks) that can be used to classify additional images by the user device 102.
  • the user device 102 is configured to download and load markers before they are encountered.
  • the user device 102 processes user data and anticipates the user’s needs and optimally specifies markers or marker groups or neural networks that the user is most likely to encounter (i.e. use) during the user’s day.
  • the prediction module 138 is used to process the user data using machine learning to learn the user’s behaviour based on the user data.
  • the method of selecting and displaying augmented reality content actively manages the memory and processing resources of the user device 102.
  • the markers on the user device 102 are deleted based on the prediction as well as based on the time to live parameter (TTL) of the specific markers. Once the time defined in the time to live parameter (TTL) has expired, the markers can be deleted.
  • the markers that are predicted by the prediction module can be downloaded from the server 200, or may be present in the user device 102. If the markers (or marker groups) are present in the device 102, these markers (or marker groups) can be cached and prioritized over other stored markers.
  • the approach of predictive selection and/or downloading of markers reduces the overall number of markers required on the user device 102.
  • the user device 102 is configured to receive and process digital images using the prioritized (i.e. cached) markers to display augmented reality content corresponding to the prioritized markers.
  • the user device 102 may download or cache neural networks (e.g. convolutional neural networks) that correspond to markers identified based on the prediction; these neural networks can be used to process images and classify objects within the images.
  • Augmented reality content corresponding to the classified objects can be displayed on the user device 102 e.g. on the touch screen 124.
  • Figure 4 shows an example of a method 400 of selecting and displaying augmented reality content on a user device 102.
  • the method 400 comprises pre-emptively identifying markers that are used to process received digital images and generate augmented reality content.
  • the user device accesses user data of the user that is logged in i.e. the user data of the user associated with the user profile.
  • Step 404 comprises predicting one or more markers or marker groups the user is likely to require based on the user data.
  • the prediction module 138 is used to predict (i.e. perform an inference of) the required markers or marker groups using machine learning or a neural network or hardcoded logic.
  • the prediction module 138 also provides an indication of which markers need to be cached and when.
  • Step 406 comprises checking if the locally stored markers or marker groups in a memory unit of the user device match the predicted markers or marker groups i.e. check if markers matching the inferred markers or marker groups are present in the local memory of the user device 102.
  • Step 408 comprises loading the locally stored markers or marker groups that match the predicted markers or marker groups into a tracked marker database 132, if the locally stored markers or marker groups match the predicted markers or marker groups.
  • the markers or marker groups in the tracked marker database 132 are prioritised over the other markers or marker groups in the user device 102 memory.
  • the markers or marker groups in the tracked marker database 132 are cached and assigned precedence such that these markers or marker groups are used first.
  • the marker groups may include convolutional neural networks, and convolutional neural networks stored in the tracked markers database 132 are prioritised over other locally stored neural networks.
  • Step 410 comprises receiving one or more digital images of a real world object.
  • the images are captured by the camera 122.
  • the images may be a continuous stream of images.
  • Step 412 comprises processing the received digital images using the markers or marker groups (or convolutional neural networks) stored in the tracked marker database 132 to identify markers present in the digital images. Alternatively, the images are processed to identify real world objects in the images.
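Step 412 can be sketched as a lookup that consults the tracked (cached) markers before the rest of the local marker database. Matching a marker against an image is reduced to set membership purely for illustration; real matching would use feature detection or a neural network.

```python
def identify_markers(image_features, tracked_db, markers_db):
    """Match image features against tracked markers first (step 412),
    falling back to the wider local marker database."""
    hits = [m for m in tracked_db if m in image_features]
    if hits:
        return hits
    return [m for m in markers_db if m in image_features]
```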
  • Step 414 comprises presenting identified augmented reality content associated with the markers or marker groups in the tracked marker database 132.
  • the augmented reality content is presented on the user device touch screen 124.
  • the AR content is overlaid onto the real world object i.e. overlaid on the real world content visible on the screen.
  • the augmented reality content is overlaid in a predefined orientation and can be positioned spatially in relation to markers detected in the digital images.
  • the method 400 also includes defining time to live parameters (TTL) of the markers or marker groups in the tracked marker database 132.
  • the time to live parameters (TTL) define the amount of time a marker or marker group remains cached i.e. remains prioritized over other markers present on the user device.
  • the method 400 comprises the step 416.
  • Step 416 comprises deleting markers or marker groups from the tracked marker database 132 if the tracked marker database 132 is full (i.e. fully occupied).
  • the markers or marker groups with the lowest time to live parameter i.e. lowest time value are deleted first.
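The lowest-TTL-first deletion of step 416 can be sketched as follows, representing the tracked marker database as a mapping from marker id to its TTL value (an assumed representation).

```python
def evict_lowest_ttl(tracked_db, capacity):
    """Delete markers with the lowest TTL value until the tracked
    marker database fits within its capacity (step 416)."""
    while len(tracked_db) > capacity:
        lowest = min(tracked_db, key=lambda m: tracked_db[m])
        del tracked_db[lowest]
    return tracked_db
```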
  • the time to live parameter of any marker may be reset in response to a trigger. For example, if the prediction module identifies a marker or marker group that has not been used for some time, the time of the TTL is reset (i.e. refreshed), re-prioritising that identified marker.
  • TTL defines the time that a particular marker or marker group is prioritized and fetched over other data.
  • the TTL value i.e. the time defined in the TTL can be reset or expanded if specific markers are repeatedly identified by the prediction module 138. This indicates that there is a higher probability of specific markers or marker groups being used.
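The TTL reset and extension on repeated prediction hits could look like the sketch below. The linear extension rule (base TTL multiplied by the hit count) is an assumption; the document only states that the TTL can be reset or expanded when a marker is repeatedly identified.

```python
def on_prediction_hit(tracked, marker_id, base_ttl, now):
    """Reset a marker's expiry when the prediction module identifies
    it again; repeated hits extend the TTL (assumed linear rule)."""
    entry = tracked.setdefault(marker_id, {"hits": 0})
    entry["hits"] += 1
    entry["expiry"] = now + base_ttl * entry["hits"]
    return entry["expiry"]
```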
  • Step 418 comprises searching for markers or marker groups identical to the predicted markers or marker groups on the server 200.
  • the user device 102 interrogates the server 200 for markers or marker groups identical to the predicted markers or marker groups.
  • Step 420 comprises downloading markers or marker groups identical to the predicted markers or marker groups from the server 200, if the matching markers or marker groups are located on the server 200.
  • Step 422 comprises downloading associated time to live parameters (TTL) of the downloaded markers or marker groups.
  • Step 424 comprises locally storing the markers or marker groups downloaded from the server 200.
  • the downloaded markers or marker groups may be automatically stored in the tracked marker database 132 for use in processing digital images received. If the memory unit e.g. flash memory 114 of the user device is full, the method comprises deleting markers or marker groups having the lowest TTL from the memory unit at step 426. Method 400 can be repeated by the user device 102.
  • Augmented reality content associated with the markers or marker groups in the tracked marker database 132 can be displayed on the user device if processing the digital images results in detecting markers in the digital images that match the markers or marker groups in the tracked marker database, i.e. the AR content associated with the tracked markers is presented to the user on the user device screen.
  • neural networks associated with the tracked markers can be used to process the digital images and identify AR content for display to the user.
  • One example use case may be that user X likes cars, as is evident from user X’s social media data, pictures stored on user X’s camera, GPS locations around car dealerships, purchasing history, browsing history, etc.
  • User X’s device (device X) is configured to infer user X’s fondness for cars from the various user data.
  • Device X predicts i.e. identifies one or more markers or marker groups or neural networks related to cars.
  • Device X searches its local memory for markers identical to the identified car related markers or marker groups. If car related markers or marker groups are found in the local memory, these markers or marker groups or neural networks are loaded into a tracked marker database on device X. The markers or marker groups or neural networks on the tracked marker database are prioritised over other markers stored on the phone.
  • device X interrogates the server for markers or marker groups identical to the identified car markers.
  • Device X downloads car markers or marker groups or neural networks from the server onto device X and stores these markers in the tracked marker database.
  • Device X processes images and identifies cars using the markers or marker groups or neural networks in the tracked marker database, to identify cars in the various digital images captured by the camera of device X.
  • Device X then overlays AR content corresponding to the car markers or marker groups onto the images displayed on device X.
  • This example shows how markers or marker groups or neural networks are pre-emptively stored and prioritised onto device X (i.e. user device) without the need for storing every single type of marker.
  • the predictive approach described in method 400, of predictively and pre-emptively selecting markers or marker groups or neural networks and prioritising them for use (i.e. caching them), is advantageous since the user device can selectively store and prioritise only the required markers or marker groups or neural networks used for image processing.
  • the method 400 is triggered based on processing user data and inferring user preferences or tendencies. Specific markers are then identified based on those tendencies or preferences.
  • the method 400 allows the user device 102 to behave predictively.
  • Figure 5 shows a further method 500 for selecting and displaying augmented reality content to the user.
  • the method 500 is executed by the user device.
  • the user device 102 can execute the method 500 described below.
  • Method 500 commences at step 502.
  • Step 502 comprises detecting, by the user device, a marker node.
  • the marker node may be detected on or adjacent a real world object.
  • the marker node may be detected in an area or may be detected within a digital image captured by the user device.
  • a marker node embodies information about markers or marker groups or neural networks.
  • Step 504 comprises checking if markers or marker groups stored locally on the user device correspond to (i.e. match) markers or marker groups associated with the detected marker node. If yes, the method proceeds to step 506.
  • Step 506 comprises identifying the locally stored markers or marker groups (or neural networks) that correspond to the markers or marker groups associated with the detected marker node.
  • Step 508 comprises loading the identified markers or markers groups into the tracked marker database 132. More specifically the locally stored markers or marker groups that correspond to the markers or marker groups associated with the marker nodes are loaded into the tracked marker database 132.
  • the markers or marker groups within the tracked marker database 132 are actively used to process the one or more received digital images to identify markers in the image or identify real world objects, and generate associated augmented reality content for presenting on the user device as an overlay. Markers or marker groups originally within the tracked marker database 132 may be deleted when new markers or marker groups are added to the tracked marker database.
  • step 510 comprises removing markers or marker groups with the lowest time value as per the time to live parameter (TTL). Deleting markers or marker groups with the lowest TTL value ensures that the markers or marker groups that are required are cached and prioritised over other data. This step of deleting or overwriting the tracked marker database with the markers or marker groups (or neural networks) most relevant to the current operations of the user reduces the memory space occupied by markers or marker groups (or neural networks) and reduces processing load.
  • Step 512 comprises searching the server 200 for markers or marker groups corresponding to the markers or marker groups associated with the marker node.
  • Step 514 comprises downloading markers or marker groups (or neural networks) from the server 200 that correspond to the markers or marker groups associated with the marker node.
  • Step 516 also comprises downloading time to live parameters (TTL) associated with the markers or marker groups that are downloaded at step 514.
  • Step 518 comprises storing the downloaded markers or marker groups locally on the user device 102 (i.e. on a memory 114 of the user device) , for use by the user device to process the received digital images and generate augmented reality content.
  • the markers or marker groups downloaded from the server may be stored in the tracked marker database 132.
  • the downloaded markers or marker groups are deleted from a memory unit of the user device once the time defined in the time to live parameters (TTL) have expired.
  • Step 520 comprises checking whether the memory unit of the user device (e.g. memory 114) is full prior to downloading the markers or marker groups from the server at step 516. If there is space on the local memory unit, the method proceeds to step 516.
  • Step 522 comprises deleting locally stored markers or marker groups that have the shortest time defined in their time to live parameter (TTL) or deleting markers or marker groups that have not been used for the time defined in the TTL.
  • the method 500 may be executed by the processor of the user device. The method 500 is executed each time a marker node is encountered.
  • One example use case includes a user X scanning a t-shirt with a marker node on the t-shirt.
  • the t-shirt includes a marker node, e.g. a QR code in this example.
  • the user device of user X scans the QR code (i.e. the marker node).
  • the QR code embodies information, wherein the information is a marker group (or markers or neural networks) associated with the t-shirt, such that the t-shirt can be recognised and AR content associated with the t-shirt can be presented to the user.
  • Device X is configured to check locally for marker groups that match the marker group associated with the marker node. Alternatively, device X interrogates the server 200 to obtain marker groups associated with the marker node. Device X can store the markers or marker groups that correspond to the information embodied in the marker node in an active database, e.g. the tracked marker database. The markers or marker groups corresponding to the marker node information are used by device X to recognise the t-shirt or images on the t-shirt and display associated content. This use case is an example application of method 500.
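Method 500 can be condensed into the sketch below. `fetch_from_server` is a hypothetical callable standing in for the server interrogation at steps 512-514, and the data shapes are assumptions for illustration.

```python
def handle_marker_node(node_info, local_store, tracked_db, fetch_from_server):
    """Load marker groups named by a scanned marker node (e.g. a QR
    code) into the tracked database, fetching any that are not held
    locally from the server (steps 504-518)."""
    for group_id in node_info["marker_groups"]:
        if group_id in local_store:              # steps 504-508
            tracked_db[group_id] = local_store[group_id]
        else:                                    # steps 512-518
            group = fetch_from_server(group_id)
            local_store[group_id] = group
            tracked_db[group_id] = group
    return tracked_db
```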
  • Figure 6 shows another embodiment of a method 600 for selecting and displaying augmented reality content.
  • Method 600 is executed when the user device 102 encounters a classification indicator on a real world object.
  • Method 600 is executed if the user device cannot decipher the objects in the received digital images or if the user device cannot process the digital images.
  • One reason for this can be the lack of the required markers or marker groups or neural networks required to process the digital images.
  • a classification indicator is a scannable element.
  • the classification indicator may be a standard or generic element or marker that is always searched for by the user device.
  • the classification indicator may be a specific logo or symbol.
  • Step 602 comprises detecting a classification indicator in one or more received digital images of the real world object.
  • the user device 102 may be configured to continuously scan each received digital image for a classification indicator. Alternatively, the user device 102 may scan for a classification indicator if it cannot process the received digital image or images and determine objects present in the digital images.
  • Step 604 comprises transmitting the digital images to the server 200 if a classification indicator is detected.
  • the images transmitted to the server 200 include everything visible through the lens of the camera.
  • Step 606 comprises the server 200 processing the received digital images to identify the classification indicator present in the images.
  • the server 200 uses the classification module 226 to identify the classification indicator.
  • the classification indicator classifies the type of markers or marker groups that will be required by the user device 102.
  • Step 608 comprises identifying one or more markers or marker groups associated with the indicator, i.e. markers or marker groups stored on the server that correspond to the classification indicator.
  • the server 200 may identify markers or marker groups that correspond to the category of object that the classification indicator relates to.
  • the server 200 may identify one or more neural networks that correspond to the classification indicator to allow processing of digital images and identification of a specific category of objects in the images.
  • Step 610 comprises downloading the identified markers or marker groups corresponding to the classification indicator onto a memory unit (e.g. flash memory 114) of the user device 102.
  • Step 610 also comprises downloading the time to live parameters (TTL) associated with the markers or marker groups downloaded from the server 200.
  • the markers or marker groups downloaded from the server 200 are locally stored in the user device 102 such that the user device 102 can use these downloaded markers or marker groups for processing received digital images and generate augmented reality content for the user.
  • Step 612 comprises checking if the local memory 114 of the user device 102 is full. If the local memory of the user device is not full the markers or marker groups are downloaded. Optionally the downloaded markers or marker groups corresponding to the indicator may be stored in a tracked marker database 132 such that these markers are cached and prioritised for use. Step 610 is performed if the local memory is not full i.e. there is space on the user device 102.
  • Step 614 comprises identifying locally stored markers or marker groups within the memory unit 114 that have the lowest time to live parameter (TTL) i.e. identify the markers or marker groups with the lowest time value defined in their TTL.
  • Step 616 comprises deleting these markers or marker groups with the lowest TTL value to free space within the memory 114 of the user device. Following step 616 the method can proceed to step 610 where the markers or marker groups from the server 200 are downloaded and locally stored.
  • the method can comprise checking the tracked marker database 132 to check if it is full. If so, markers or marker groups with the lowest TTL value can be deleted from the tracked marker database in order to create space therein.
  • the markers or marker groups downloaded from the server may be loaded into the tracked marker database 132.
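Method 600 can be sketched as below. `detect_indicator` and `server_lookup` are hypothetical callables standing in for the classification-indicator scan (step 602) and the server-side identification of markers (steps 604-608); local memory is modelled as a mapping from group id to a (content, TTL) pair.

```python
def handle_unrecognised_image(image, detect_indicator, server_lookup,
                              local_memory, capacity):
    """If an image cannot be processed locally, use its classification
    indicator to download matching marker groups, evicting the
    lowest-TTL local groups when memory is full (steps 602-616)."""
    indicator = detect_indicator(image)              # step 602
    if indicator is None:
        return local_memory
    downloads = server_lookup(indicator)             # steps 604-608
    for group_id, (content, ttl) in downloads.items():
        while len(local_memory) >= capacity:         # steps 612-616
            lowest = min(local_memory, key=lambda g: local_memory[g][1])
            del local_memory[lowest]
        local_memory[group_id] = (content, ttl)      # step 610
    return local_memory
```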
  • User X captures a digital image of an object using device X (i.e. user X’s device) .
  • Device X does not have the markers or marker groups or neural networks to process the images and hence cannot identify the object in the digital images.
  • Device X scans the digital images for a classification indicator.
  • the object may have a classification indicator e.g. a logo on it.
  • Device X detects a classification indicator after scanning the images. If a classification indicator is detected the digital images are transmitted to the server 200 for processing.
  • the server 200 processes the received images and identifies the classification indicator present in the one or more digital images.
  • the server 200 further identifies one or more markers or marker groups or neural networks corresponding to the classification indicator.
  • the identified markers or marker groups or neural networks are downloaded onto device X.
  • the TTL instructions associated with the markers or marker groups are also downloaded onto device X.
  • Device X can use the markers or marker groups or neural networks corresponding to the classification indicator to process the digital images, identify objects within the images and generate augmented reality content. Augmented reality content related to the markers or marker groups is presented to the user.
  • Figure 7 shows example operation of the user device to select and display augmented reality content.
  • Figure 7 illustrates implementation of some steps of method 400 and 500 as described earlier.
  • the user device begins by opening the AR app 104 and running the AR app at step 702.
  • the AR app 104 includes instructions that cause the user device 102 to execute various functions.
  • the software (i.e. the AR app) makes an inference from user data, e.g. from user preferences, at step 704.
  • Step 704 may also include using machine learning to make an inference. Marker groups or neural networks are inferred from the user preferences or machine learning.
  • the software (i.e. the AR app) may continuously search for the presence of a marker node. If a marker node is detected, the marker groups associated with the node are identified and/or extracted. Steps 704 and 706 are triggers that are detected by the AR app.
  • Step 708 comprises checking if a corresponding marker group is in the memory but not in the Tracked Markers database. As seen in figure 7, the marker groups stored in the phone memory are checked. Figure 7 also shows that the phone memory stores marker groups and associated TTL instructions. Step 710 comprises adding the marker groups identified at step 708 to the Tracked Markers database. Figure 7 shows the Tracked Markers database, which includes marker groups and their associated TTL instructions. The marker groups in the Tracked Markers database 132 are cached and prioritised over other data in the phone memory. Step 712 comprises checking if the Tracked Markers database 132 is full. If so, step 714 comprises removing the marker groups with the lowest TTL.
  • Figure 8 shows an example of operation of the user device 102 and interaction with the server 200 to download marker groups (or markers or neural networks).
  • Figure 8 is a diagram of the various interactions between the server and user device in order to download marker groups.
  • the augmented reality app is run i.e. executed by the user device 102.
  • Functions 804, 806 and 808 may be performed simultaneously, or any one of them may be performed, depending on what occurs.
  • the AR app awaits a trigger i.e. detects a trigger.
  • Step 808 comprises the AR app making an inference from user preferences or machine learning executed using user data.
  • Step 810 comprises searching for an inferred marker group on the server. This is generally performed if the inferred marker groups are not located on the user device.
  • Step 812 comprises locating the marker based on the request from the user device 102.
  • Step 814 comprises downloading the identified marker groups and associated TTL instructions onto the phone memory. As shown in figure 8, the phone memory receives and stores the marker groups and associated TTL instructions.
  • Step 804 comprises the AR software detecting a marker node.
  • the marker node is detected and scanned by the AR software to identify a marker group associated with the marker node.
  • Step 816 comprises searching for the corresponding marker group (or marker groups) on the server 200. Following step 816, steps 812 and 814 are repeated to download and store the marker groups in the phone memory.
  • Step 806 comprises the AR software (i.e. AR app) detecting a classification indicator (i.e. a facilitator marker). If the AR software detects a classification indicator, at step 818 a picture of the camera view is sent to the server 200 for additional processing and identification of marker groups. The method proceeds to steps 812 and 814 to identify marker groups and download marker groups onto the phone memory.
  • the AR software checks if the device memory is full. If the device memory is full, then at step 822 the marker groups with the lowest TTL value are removed. Steps 804, 806 and 808 are triggers. If the AR app detects one of the triggers, the AR app causes the user device to perform the described steps.
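The three figure-8 triggers and the subsequent download/eviction steps can be sketched in one dispatch routine. The data shapes, capacity of 3 and event encoding are assumptions made for illustration only.

```python
def run_ar_app(event, local_memory, server, capacity=3):
    """Dispatch on the three figure-8 triggers (sketch; names assumed).

    event is one of:
      ("marker_node", node_id)                  # step 804
      ("classification_indicator", indicator)   # step 806
      ("inference", preferred_groups)           # step 808
    local_memory maps group name -> TTL value.
    """
    kind, payload = event
    if kind == "marker_node":
        wanted = server["nodes"].get(payload, [])        # steps 816, 812
    elif kind == "classification_indicator":
        wanted = server["indicators"].get(payload, [])   # steps 818, 812
    else:
        wanted = [g for g in server["groups"] if g in payload]  # step 810
    for group in wanted:                                 # step 814: download
        if len(local_memory) >= capacity:                # step 820: memory full?
            lowest = min(local_memory, key=local_memory.get)
            del local_memory[lowest]                     # step 822: lowest TTL out
        local_memory[group] = server["groups"][group]    # store group + TTL
    return wanted
```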
  • Figures 9 and 10 illustrate examples of real world objects with images.
  • the real world objects as shown in figures 9 and 10 are shirts of various type with graphics printed thereon.
  • Figure 9 shows a long sleeve shirt 900.
  • the shirt 900 includes a graphic 902 in the chest region and cloud graphics on each sleeve.
  • the central graphic 902 is a graphic of Adam (i.e. from the painting “The Creation of Adam”).
  • the central graphic 902 is centred in the middle of the shirt and positioned at a DCG distance (shown as 904).
  • DCG stands for distance from collar to top of chest graphic.
  • the DCG distance can be a predetermined distance.
  • the DCG may be a marker that is searched for.
  • the DCG distance is 8cm.
  • the marker may be the clouds on the sleeve or the face of Adam or any other features in the painting.
  • Figure 10 illustrates a short sleeve shirt 910 that includes a central graphic 912.
  • the shirt also includes additional text 914 below the central graphic.
  • the central graphic 912 is a stylized Mona Lisa graphic together with two other characters.
  • the text 914 may be a marker such as an image marker.
  • the sunglasses on the Mona Lisa may be a marker.
  • Additional shirt designs can be developed.
  • the additional shirt designs can include graphics arranged in any predetermined orientation or configuration.
  • the shirts can include markers disposed on the shirt and may include some markers that are embedded in a graphic on the shirt.
  • the image processing tool is selected to identify content in the graphics based either on the identified markers or on user data.
  • the augmented reality content is also selected based on the marker or the identified content of the shirt.
  • the shirts shown in figures 9 and 10 illustrate example real world objects (e.g. shirts) upon which augmented reality content can be displayed.
  • the shirts include graphics, pictures or images that can include markers.
  • the user may be determined to like famous paintings based on user data.
  • the user device 102 can infer markers or marker groups related to famous paintings based on the user data.
  • the user device 102 searches a local memory of markers or marker groups to locate markers or marker groups that correspond to the inferred markers. These markers or marker groups are selected from the local store and cached in a priority list e.g. a tracked markers database.
  • These famous painting related markers or marker groups are prioritised for use over other data.
  • the user device uses the prioritised markers or marker groups to identify the painting and present augmented reality content related to the markers or marker groups.
  • the augmented reality content is displayed as overlaid content.
  • the overlaid content may be oriented relative to markers detected in the image, i.e. on the painting on the shirt.
  • the markers or marker groups from the tracked markers database are deleted once the time limit defined in the TTL parameter has expired.
  • the user device may infer neural networks that are related to famous paintings.
  • the user device 102 searches a local memory for neural networks corresponding to the inferred neural networks.
  • These famous painting neural networks may be prioritised for use.
  • the neural networks can be used to identify the painting on the shirt and select corresponding AR content for displaying on the user device.
  • the user device may search for a marker node.
  • the marker node may be a code e.g. a barcode or QR code on the shirt.
  • the marker node may be the text 914.
  • the marker node can be scanned to identify marker groups corresponding to the marker node.
  • the user device 102 is configured to check the local device memory for the identified marker group. If the marker group is detected in the local memory, the marker group is cached and used to process the images and identify objects, e.g. paintings on the shirt. Corresponding AR content can be displayed.
  • the device 102 interrogates a server for markers or marker groups.
  • the markers or marker groups are downloaded to the user device from the server and then utilised to process digital images and identify paintings. Corresponding AR content is displayed on the user device.
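The marker-node flow above (local check first, server download as a fallback) can be sketched as follows. The decoding step and store layouts are illustrative assumptions; a real implementation would decode a QR code or similar from camera frames.

```python
def resolve_marker_group(node_code, local_memory, server_store):
    """Resolve a scanned marker node (e.g. a QR code) to its marker group,
    preferring the local copy and falling back to a server download.
    Sketch only: names and data shapes are assumptions."""
    group_id = node_code.strip().lower()      # stand-in for decoding the node
    if group_id in local_memory:              # local device memory checked first
        return local_memory[group_id], "local"
    group = server_store.get(group_id)        # otherwise interrogate the server
    if group is not None:
        local_memory[group_id] = group        # download and store locally
        return group, "server"
    return None, "missing"
```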
  • the described method and system for selecting and displaying augmented reality content is advantageous because the required markers or marker groups are pre-emptively identified and stored locally on the user device.
  • the user device uses these specific markers or marker groups to identify objects in the digital images and present corresponding AR content to the user.
  • TTL parameters for each of the locally stored markers and marker groups ensure that unused i.e. “old” markers or marker groups are deleted from the memory of the user device to free up additional space.
  • the method provides for memory and processing power to be conserved, since only a finite number of markers or marker groups is stored locally at any time.
  • the marker groups or markers that are used by the user device 102 are targeted i.e. based on user preferences or user behaviour or related to a marker node. This allows the user device to detect a wide variety of objects in images without the need for locally storing markers for each type of object.
  • the described approach expands the abilities of the user device to process digital images and identify objects.
  • markers or marker groups may be used for processing digital images.
  • one or more image processing tools e.g. convolution neural networks may be used to process the digital images and identify objects, and then identify AR content corresponding to the identified objects.
  • the method and system may use one or more neural networks e.g. convolution neural networks.
  • some elements or parts of the example embodiments described with reference to the Figures can be implemented as an application programming interface (API) or as a series of libraries for use by a developer, or can be included within another software application, such as a terminal or personal computer operating system or a portable computing device operating system.
  • where program modules include routines, programs, objects, components and data files assisting in the performance of particular functions, the skilled person will understand that the functionality of the software application may be distributed across a number of routines, objects or components to achieve the same functionality.
  • where the methods described herein are wholly implemented by a computing system or partly implemented by computing systems, any appropriate computing system architecture may be utilised.
  • this includes stand-alone computers, network computers and dedicated computing devices.
  • where the terms “computing system” and “computing device” are used, these terms are intended to cover any appropriate arrangement of computer hardware for implementing the function described.

Abstract

A method for selecting and displaying augmented reality content to a user on a user device, the method comprising the steps of selecting and/or downloading one or more marker groups onto a memory unit of the user device, in response to a trigger, receiving one or more digital images at the user device via an image capturing apparatus of the user device, processing the digital image using the downloaded one or more marker groups, wherein each of the downloaded marker groups is associated with a time to live parameter defining a time limit the one or more marker groups are stored on the user device, and; deleting the stored marker groups at the expiration of the time limit defined in the time to live parameter or maintaining the downloaded marker groups on the user device in response to a further trigger.

Description

A METHOD AND SYSTEM FOR SELECTING AND DISPLAYING AUGMENTED REALITY CONTENT
TECHNICAL FIELD
The present disclosure relates to a method and system for selecting and displaying augmented reality content to a user.
BACKGROUND
Augmented reality may be considered to be the blending of digital elements into real world elements. In some examples, the digital elements (which may be a digital representation of a real world object or any other digital data or reference) may be interactive and can provide sensory feedback to a user e.g. visual feedback, haptic feedback audible feedback etc. when certain conditions are met. In turn, such augmented reality systems operate by overlaying interactive digital elements on to a real world environment as captured by a digital device so as to provide some form of interaction with a user. This interaction may be in the form of games, teaching guides, referencing or simply an artistic manipulation of the real world for entertainment purposes.
Augmented reality has become increasingly popular due to the improvement of mobile user devices. Technological advances in mobile devices have allowed augmented reality to become more accessible due to superior processing and storage capabilities as well as advancements in networking and data exchange. Nonetheless, there are still limitations on the use of augmented reality due to the large amount of real world data and the variability of such data when it is applied in the real world. In turn, creating an augmented reality experience is often limited to a small set of real world objects. This can be very limiting and thus reduces the quality of such an experience.
SUMMARY OF THE INVENTION
The present disclosure relates to a method and system for selecting and displaying augmented reality content to a user on a user device. The disclosed method and system for selecting and displaying augmented reality content selects markers or marker groups or image processing tools in a manner that conserves the user device memory and processing power, and improves efficiency of processing and memory use. The method and system for selecting and displaying augmented reality content as disclosed herein is advantageous because it increases the range of operation of the user device and allows the user device to recognise several categories of objects without needing to store every conceivable type of marker, marker group or image processing tool.
In accordance with one aspect there is provided a method for selecting and displaying augmented reality content to a user on a user device, the method comprising the steps of:
selecting and/or downloading one or more marker groups onto a memory unit of the user device, in response to a trigger,
receiving one or more digital images at the user device via an image capturing apparatus of the user device,
processing the digital image using the downloaded one or more marker groups,
wherein each of the downloaded marker groups is associated with a time to live parameter defining a time limit the one or more marker groups are stored on the user device, and;
deleting the stored marker groups at the expiration of the time limit defined in the time to live parameter or maintaining the downloaded marker groups on the user device in response to a further trigger.
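The claimed TTL lifecycle (deletion on expiry of the time limit, or maintenance in response to a further trigger) can be sketched as follows. This is a minimal illustration under assumed names and data shapes, not a definitive implementation of the claim.

```python
import time

def on_trigger(store, name, ttl, now=None):
    """Download (or refresh) a marker group: a further trigger resets the
    group's clock so it is maintained rather than deleted."""
    now = time.time() if now is None else now
    store[name] = {"ttl": ttl, "last_trigger": now}

def maintain_marker_groups(store, now=None):
    """Delete stored marker groups whose TTL time limit has expired."""
    now = time.time() if now is None else now
    expired = [n for n, g in store.items()
               if now - g["last_trigger"] >= g["ttl"]]
    for n in expired:
        del store[n]
```

Passing an explicit `now` makes the lifecycle easy to reason about: a further trigger at any point before expiry restarts the TTL window.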
In one configuration each marker group comprises one or more markers and augmented reality content associated with the one or more markers.
In one configuration each marker group and contents of each marker group corresponds to a category of real world object.
In one configuration each marker group comprises a reference to the one or more markers and a reference to the one or more augmented reality content associated with each marker, wherein the user device can use the references to access the one or more markers and/or the augmented reality content from a server.
In one configuration each marker group comprises one or more image processing tools, each image processing tool configured to identify a category of object within a digital image or identify one or more markers associated with the category of object within a digital image.
In one configuration each image processing tool is a convolution neural network, wherein each convolution neural network is configured to process a digital image and identify a category of object.
In one configuration the method comprises the additional steps of:
processing the one or more digital images to identify markers present within the one or more digital images and;
presenting augmented reality content on the user device based on the processing of the one or more digital images.
In one configuration the trigger comprises processing user data to infer one or more markers or marker groups based on the processing of the user data, wherein the user data is profile data associated with a specific user and the user data being stored as part of a user profile.
In one configuration the user data comprises one or more of: location of the user, data from the user device sensors, purchase history of the user, data from external APIs on the user device, proximity to other users based on the location of another user’s user device, user social media data, user preferences and proximity to wireless or wired communication nodes.
In one configuration the method comprises the additional steps of:
checking if markers or marker groups stored locally on the user device correspond to the inferred markers or marker groups,
if the locally stored markers or marker groups correspond to the inferred markers or marker groups, the method further comprises:
identifying the locally stored markers or marker groups that correspond to the inferred markers or marker groups
loading the locally stored markers or marker groups that correspond to the inferred markers or marker groups into a tracked marker database, if the locally stored markers or marker groups correspond to the inferred markers or marker groups, and;
wherein the markers or marker groups within tracked marker database are the markers or marker groups that are actively used to process the received one or more digital images.
In one configuration if the locally stored markers or marker groups do not correspond to the inferred markers or marker groups, the method comprises the additional steps of:
downloading markers or marker groups from a server that correspond to the inferred markers or marker groups, wherein the server stores a plurality of markers or marker groups,
downloading time to live parameters associated with marker or marker group that is downloaded,
storing the downloaded markers or marker groups locally on the user device for use by the user device to process the received digital images and generate augmented reality content, and;
wherein the downloaded markers or marker groups are deleted from a memory unit of the user device once the time defined in the time to live parameter has expired.
In one configuration the method comprises the additional steps of:
checking if the memory unit of the user device is full prior to downloading the markers or marker groups that correspond to the inferred markers or marker groups from the server,
deleting locally stored markers or marker groups that have the shortest time defined in their associated time to live parameters or deleting locally stored markers or marker groups that have not been used for the time defined in their associated time to live parameters.
In one configuration the trigger comprises detecting, by the user device, a marker node, wherein the marker node comprises markers or marker groups or a reference to markers or marker groups associated with the marker node.
In one configuration the method comprises the additional steps of:
checking if markers or marker groups stored locally on the user device correspond to the markers or marker groups associated with the detected marker node,
if the locally stored markers or marker groups correspond to the markers or marker groups associated with the detected marker node, the method comprises the additional steps of:
identifying the locally stored markers or marker groups that correspond to the markers or marker groups associated with the detected marker node,
loading the locally stored markers or marker groups that correspond to the markers or marker groups associated with the marker node into a tracked marker database, and;
wherein the markers or marker groups within the tracked marker database are the markers or marker groups that are actively used to process the received one or more digital images.
In one configuration if the user device does not have markers or marker groups that correspond to the markers or marker groups associated with the detected marker node stored within the memory unit of the user device, the method comprises the additional steps of:
downloading markers or marker groups from a server that correspond to the markers or marker groups associated with the marker node,
downloading time to live parameters associated with marker or marker group that is downloaded,
storing the downloaded markers or marker groups locally on the user device for use by the user device to process the received digital images and generate augmented reality content, and;
wherein the downloaded markers or marker groups are deleted from a memory unit of the user device once the time defined in the time to live parameters has expired.
In one configuration the method comprises the additional steps of:
checking if the memory unit of the user device is full prior to downloading the markers or marker groups from the server, that correspond to the markers or marker groups associated with the detected marker node,
deleting locally stored markers or marker groups that have the shortest time defined in their associated time to live parameters or deleting locally stored markers or marker groups that have not been used for the time defined in their associated time to live parameters.
In one configuration the marker node comprises one or more of a machine readable code, NFC signal, RFID signal, Bluetooth signal and Wifi signal.
In an embodiment the method as described above is executed by a processor of a user device.
In accordance with one aspect there is provided a method of selecting and displaying augmented reality content on a user device, the user device comprising a memory unit and a processor, the method being executed by the user device, the method comprising the steps of:
accessing user data of the user associated with the user device,
predicting one or more markers or marker groups the user is likely to require based on the user data,
checking if locally stored markers or marker groups in a memory unit of the user device match the predicted markers or marker groups,
loading the locally stored markers and marker groups that match the predicted markers or marker groups into a tracked marker database, if the locally stored markers or marker groups match the predicted markers or marker groups,
wherein the markers or marker groups within the tracked markers database are prioritised for use over other markers or marker groups within the memory unit of the user device,
receiving one or more digital images of a real world object,
processing the received digital image using the markers or marker groups from the tracked marker database to identify corresponding augmented reality content.
In one configuration the method comprises the additional step of presenting the identified augmented reality content on the user device, wherein the augmented reality content is overlaid onto the real world object on a user interface of the user device.
In one configuration the predicted markers are pre-emptively identified and loaded in the tracked marker database based on the user data, prior to receiving the one or more digital images.
In one configuration each marker or marker group includes time to live parameters that define the amount of time a marker or marker group remains cached in the memory unit of the user device.
In one configuration, if the tracked marker database is full, the method comprises the additional step of deleting markers or marker groups from the tracked markers database, wherein markers or marker groups with the lowest time to live parameter are deleted from the tracked markers database.
In one configuration the method comprises refreshing one or more markers or marker groups within the tracked markers database if the step of predicting markers based on user data identifies markers or marker groups already present in the tracked markers database.
In one configuration the step of predicting markers or markers groups comprises identifying user preferences or user behaviour based on processing stored user data, and wherein the step of predicting markers or marker groups is executed by a prediction module.
In one configuration the prediction module is a neural network or a machine learning program that is executed by the user device.
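A toy version of such a prediction module can be sketched as follows. The scoring rule, field names and catalogue shape are illustrative assumptions; as stated above, a real prediction module could be a neural network or another machine learning program.

```python
def predict_marker_groups(user_data, catalogue, top_n=2):
    """Toy prediction module: score each marker-group category against the
    stored user data and return the most likely categories (sketch)."""
    scores = {}
    for category, keywords in catalogue.items():
        # One point per keyword matching the user's stored interests.
        hits = sum(1 for k in keywords if k in user_data.get("interests", []))
        # One extra point if the category appears in the purchase history.
        if category in user_data.get("purchase_history", []):
            hits += 1
        scores[category] = hits
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [c for c in ranked[:top_n] if scores[c] > 0]
```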
In one configuration if the predicted markers or marker groups are not locally available in the memory unit of the user device, the method comprises the additional steps of:
searching for markers or marker groups identical to the predicted markers or marker groups on a server,
download the markers or marker groups identical to the predicted markers or marker groups from the server, if matching markers or marker groups are located on the server,
download associated time to live parameters of the downloaded markers or marker groups, and;
locally store the downloaded markers or marker groups and the associated time to live parameters.
In one configuration if the memory unit of the user device is full, then the method comprises the step of deleting markers or marker groups having the lowest time to live parameter from the memory unit such that new markers or marker groups can be downloaded and stored in the memory unit.
In one configuration the augmented reality content associated with a marker or marker group is stored in the memory unit of the user device, accessed from the user device and presented on the user device if a corresponding marker is detected in a digital image.
In accordance with a further aspect there is provided a method for selecting and displaying augmented reality content on a user device, the method comprising the steps of:
detecting a classification indicator in one or more received digital images of a real world object,
if a classification indicator is detected in the one or more digital images, the one or more digital images are transmitted to the server,
identifying the classification indicator in the one or more digital images at the server,
identifying one or more markers or marker groups that correspond to the classification indicator that are stored on the server,
downloading the identified one or more markers or marker groups corresponding to the classification indicator on to the memory unit of the user device from the server, and downloading time to live parameters associated with the identified one or more markers or marker groups onto the memory unit of the user device, such that the markers or marker groups and associated time to live parameters are locally stored on the user device.
In one configuration the method comprises the additional steps of:
processing the one or more digital images, by the user device, using the downloaded markers or marker groups to generate augmented reality content, and;
presenting the augmented reality content on the user device.
In one configuration the method comprises downloading augmented reality content corresponding to the identified one or more markers or marker groups onto a memory unit of the user device.
In one configuration the digital image transmitted to the server is an image of everything visible in the real world environment as captured by an image capture apparatus of the user device.
In one configuration if the memory unit of the user device is full, the method comprises the additional steps of:
identifying locally stored markers or marker groups within the memory unit that have the lowest time to live parameter, wherein the time to live parameter defines a time limit the one or more marker groups are stored on the user device,
deleting the markers or marker groups with the lowest time defined in their associated time to live parameter.
In accordance with a further aspect there is provided a system for selecting a processing tool comprising:
a computing unit including a processor, a memory unit, a user profile database and a prediction module,
the user profile database configured to store user data,
a plurality of processing tools in communication with the computing unit,
the computing unit configured to:
receive and store user data,
process the user data, by the prediction module, and predict at least one processing tool for use by the computing unit
select the at least one processing tool from the plurality of processing tools based on the user data.
The user data comprises one or more of: location of the user, data from the user device sensors, purchase history of the user, data from external APIs on the user device, proximity to other users based on other user location, social media data, user preferences, proximity to wireless or wired communication nodes.
In one configuration the prediction module is a machine learning module that is executed by the processor of the computing unit, the processor configured to determine user preferences based on the user data.
In one configuration the machine learning module is configured to process user data and determine user preferences, the user preferences being stored in the user profile database in association with a user profile, and the user preferences constantly updated as new user data is received.
In one configuration the processing tools are markers or marker groups, the markers or marker groups being used to process received images to identify a specific object, and the system being configured to select specific markers or marker groups based on the user preferences.
In one configuration the processing tools are convolution neural networks, each convolution neural network configured to process one or more images and identify a specific object within the images, and the system being configured to select a specific processing tool based on the user preferences.
In one configuration the selected processing tool is cached such that the use of the selected processing tool is prioritised by the processor and the selected processing tool is utilised by the  processor to process one or more received images to identify a specific object within the one or more images.
In one configuration the system comprises a processing tool database configured to store the plurality of processing tools, the computing unit configured to select a processing tool from the processing tool database based on the user preferences.
In one configuration the system comprises a content database, the content database storing augmented reality content, the computing unit configured to select augmented content corresponding to the identified object in the digital images and/or select augmented reality content corresponding to the selected processing tool.
In one configuration the content database storing augmented reality content corresponding to markers or marker groups, the computing device configured to select augmented reality content corresponding to the selected markers or marker groups based on the user preferences.
In one configuration the computing device further configured to present the augmented reality content on a user device in spatial relation to the identified object.
In one configuration a user device comprises the computing unit.
In one configuration a server comprises the computing unit.
It is intended that reference to a range of numbers disclosed herein (for example, 1 to 10) also incorporates reference to all rational numbers within that range (for example, 1, 1.1, 2, 3, 3.9, 4, 5, 6, 6.5, 7, 8, 9 and 10) and also any range of rational numbers within that range (for example, 2 to 8, 1.5 to 5.5 and 3.1 to 4.7) and, therefore, all sub-ranges of all ranges expressly disclosed herein are hereby expressly disclosed. These are only examples of what is specifically intended and all possible combinations of numerical values between the lowest value and the highest value enumerated are to be considered to be expressly stated in this application in a similar manner.
This invention may also be said broadly to consist in the parts, elements and features referred to or indicated in the specification of the application, individually or collectively, and any or all combinations of any two or more said parts, elements or features, and where specific integers are mentioned herein which have known equivalents in the art to which this invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
As used herein the term ‘and/or’ means ‘and’ or ‘or’, or, where the context allows, both.
As used herein the term “augmented reality” means augmented reality and mixed reality. The term augmented reality refers to the integration of digital information on a user’s view of the real world, thus providing a composite view.
As used herein the term “real world” or “real life” or variants thereof relates to things that are in the real world i.e. relates to things that are in the real environment, and not related to digital objects or digital environments.
As used herein the term “marker” or “image marker” or variants thereof means a two dimensional or three dimensional image that an augmented reality software application will actively search for and use to determine one or more of the orientation or size in a three dimensional space, or the category of object present in an image. A marker can be any element that is present on a real world object or present in an image, and can be visible to the human eye or may be invisible to the human eye but discernible to an image capture apparatus of a user device, e.g. a camera. Some examples of markers are specific features of an image, a specific orientation of colours or light frequencies, a pattern etc.
As used herein the term “marker group” refers to markers and content associated with each specific marker. The term “marker group” can also refer to references to markers and references to content associated with the specific markers.
As used herein the term “tracked markers” or variants thereof refers to markers or marker groups that are actively searched for while processing a digital image.
As used herein the term “marker node” refers to a node or device containing information. A marker node embodies or contains a marker group or information about a marker group such as a location of the marker group or contents of the marker group. Examples of a marker node are a barcode, QR code, NFC signal, RFID signal, Bluetooth signal, Wifi signal or any other device that can be detected by the user device.
The term “classification indicator” as used herein refers to a scannable element that is present on or adjacent a real world object. The classification indicator can be scanned by the user device to cause the user device to communicate with the server or the classification indicator can be scanned by the user device to facilitate a predefined procedure to be executed by the user device.
As used herein “(s)” following a noun means the plural and/or singular forms of the noun.
In the following description like numbers denote like features.
In the following description, specific details are given to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example circuits, etc., may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail. In other instances, well-known modules, structures and techniques may not be shown in detail in order not to obscure the embodiments.
In this specification, the word “comprising” and its variations, such as “comprises”, has its usual meaning in accordance with International patent practice. That is, the word does not preclude additional or unrecited elements, substances or method steps, in addition to those specifically recited. Thus, the described apparatus, system, substance or method may have other elements, substances or steps in various embodiments. The term “comprising” (and its grammatical variations) as used herein is used in the inclusive sense of “having” or “including” and not in the sense of “consisting only of”.
BRIEF DESCRIPTION OF THE DRAWINGS
Notwithstanding any other forms which may fall within the scope of the present invention, example embodiments of the present invention or inventions will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 illustrates an example system for selecting and displaying augmented reality content including a user device and a server.
Figure 2 illustrates a schematic diagram of a user device and its components.
Figure 3 illustrates an example embodiment of the server and its components.
Figure 4 illustrates an example of a method of selecting and displaying augmented reality content on a user device by predicting markers or marker groups and obtaining the predicted parameters.
Figure 5 illustrates a further example method of selecting and displaying augmented reality content using a marker node and contents associated with a marker node.
Figure 6 illustrates a method for selecting and displaying augmented reality content based on a classification indicator that may be encountered.
Figure 7 shows example operation of the user device to select and display augmented reality content.
Figure 8 shows example operation of the user device and server to select and display augmented reality content.
Figures 9 and 10 illustrate real world objects that can be used to display augmented reality content, wherein the objects are shirts.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention relates to a method and system for selecting and displaying augmented reality content to a user on a user device. Augmented reality is the blending of digital elements into real world elements. The digital elements (i.e. digital information) are interactive digital elements and can provide sensory feedback to a user, e.g. visual feedback, haptic feedback, audible feedback etc. Augmented reality includes overlaying interactive digital elements onto the real world. An augmented reality system generally comprises a user device that includes one or more sensors that collect information about a user’s real world, and a processor configured to process the information about the real world and generate digital elements based on the processed information.
The digital elements may be displayed as overlaid elements on a user device. The digital elements are presented and perceived as being part of the real world.
Augmented reality systems generally identify one or more real world objects from a digital image and then overlay digital elements onto a user device. The use of augmented reality has been facilitated by the improvement in the processing capability of various mobile user devices such as for example smartphones, tablets, smart watches (e.g. Apple watch) , smart glasses (e.g. Google glass) or other wearable devices.
With improvements to smartphone processing over the past decade, augmented reality frameworks such as Apple’s ARKit and Google’s ARCore have become highly efficient and able to recognize markers (e.g. ARAnchors) and track their positions and orientations in three dimensional space. With the ability to track such markers, it is possible to develop software applications that actively search for these image markers and display information or media content on top or next to the markers.
However, due to limitations in both user device memory and user device processing power, there is a limit to the number of markers (i.e. image markers) a user device can search for. Although Apple’s ARKit does not impose a hard limit, Apple recommends using no more than 25 markers. Google’s ARCore limits the number of markers to 1000, although an ARCore application will begin to lag and run slowly before reaching this number of markers, depending on the size of the images, device specifications, complexity etc.
Existing AR applications on existing hardware require the user device to physically store a copy or copies of the marker or markers it is searching for in its memory in order to identify a marker in an image. In order for AR applications to recognise several types of objects, the user device is required to store a large number of markers corresponding to the various objects. Current user devices do not have the memory to store so many markers because there is a finite limit to user device memory. Furthermore, the processing power of current user devices limits the number of markers that can be searched for at once. As augmented reality takes place in real time over an image capture apparatus (e.g. a camera), images from the camera must be processed every fraction of a second. The user device processes the images by matching detected markers against the markers in its local list in order to generate and present augmented reality content corresponding to the markers.
The current existing technology (i.e. current products) is not suited to use cases where the application needs to search for large numbers of markers. Further, storing markers and corresponding content would occupy a large amount of space on the user’s device, especially in the case of a large number of markers.
The present disclosure relates to a method and system for selecting and displaying augmented reality content on a user device. The presently disclosed method and system address the shortcomings of current AR technology. The presently disclosed method and system provide a more efficient manner of downloading and loading markers to manage the user device’s memory and processing power. The presently described method of selecting and displaying augmented reality content comprises downloading or selecting markers before they are encountered. The method and system anticipate the markers or marker groups that will be required and download or select these markers, so that the user device stores only the markers that are most likely to be needed. This reduces the storage and processing requirements of the user device.
In one embodiment there is provided a method for selecting and displaying augmented reality content to a user on a user device, the method comprising the steps of: selecting and/or downloading one or more marker groups onto a memory unit of the user device, in response to a trigger; receiving one or more digital images at the user device via an image capturing apparatus of the user device; processing the digital image using the downloaded one or more marker groups, wherein each of the downloaded marker groups is associated with a time to live parameter defining a time limit the one or more marker groups are stored on the user device; and deleting the stored marker groups at the expiration of the time limit defined in the time to live parameter, or maintaining the downloaded marker groups on the user device in response to a further trigger.
In one example the trigger comprises processing user data to infer one or more markers or marker groups based on the processing of the user data, wherein the user data is profile data associated with a specific user and the user data being stored as part of a user profile. Alternatively, in another example the trigger comprises detecting, by the user device, a marker node, wherein the marker node comprises markers or marker groups or a reference to markers or marker groups associated with the marker node.
In a further embodiment there is provided a method of selecting and displaying augmented reality content on a user device, the user device comprising a memory unit and a processor, the method being executed by the user device, and comprising the steps of: accessing user data of the user associated with the user device; predicting one or more markers or marker groups the user is likely to require based on the user data; checking if locally stored markers or marker groups in a memory unit of the user device match the predicted markers or marker groups; loading the locally stored markers and marker groups that match the predicted markers or marker groups into a tracked marker database, if the locally stored markers or marker groups match the predicted markers or marker groups, wherein the markers or marker groups within the tracked marker database are prioritised for use over other markers or marker groups within the memory unit of the user device; receiving one or more digital images of a real world object; and processing the received digital image using the markers or marker groups from the tracked marker database to identify corresponding augmented reality content. In one configuration the predicted markers are pre-emptively identified and loaded in the tracked marker database based on the user data, prior to receiving the one or more digital images, and each marker or marker group includes time to live parameters that define the amount of time a marker or marker group remains cached in the memory unit of the user device.
In another embodiment there is provided a method for selecting and displaying augmented reality content on a user device, the method comprising the steps of: detecting a classification indicator in one or more received digital images of a real world object; if a classification indicator is detected in the one or more digital images, transmitting the one or more digital images to the server; identifying the classification indicator in the one or more digital images at the server; identifying one or more markers or marker groups stored on the server that correspond to the classification indicator; and downloading the identified one or more markers or marker groups corresponding to the classification indicator onto the memory unit of the user device from the server, and downloading time to live parameters associated with the identified one or more markers or marker groups onto the memory unit of the user device, such that the markers or marker groups and associated time to live parameters are locally stored on the user device.
Figure 1 shows one example embodiment of a system 100 for selecting and displaying augmented reality content on a user device. The system 100 comprises at least one user device 102 and a server 200. The server is a computing device comprising at least a processor, a memory unit, additional hardware and/or software modules and wireless communication capabilities.
The server 200 is arranged in two way communication with the user device 102. The user device is a mobile user device that can be carried by the user. The system 100 can comprise a plurality of user devices that are arranged in two way communication with the server 200. The user devices are configured to communicate with each other too.
The user device 102 comprises at least a processor, a memory unit and an image capture apparatus to capture digital images (e.g. a digital camera) . For example, the user device 102 may be any one of a smartphone, tablet, smartwatch, wearable device, smart glasses (e.g. Google glasses) or any other mobile device. In the example shown in figure 1, the user device 102 is a smartphone and the server 200 is a CPU unit.
The server 200 communicates with the user device 102 through any suitable wireless communication network 103 such as a 4G, 5G network or any other suitable wireless communication network. The network 103 may be a TCP/IP network.
The user device 102 further comprises an augmented reality application (AR app) 104 that is executed on the user device 102. The AR app is a software application that is downloaded onto the user device. Alternatively, the AR app may be a web based application that can be accessed via a browser on the user device.
The AR app includes computer readable and executable instructions. The AR app is stored within a memory unit of the user device 102. The user device 102 is configured to select and display augmented reality content on the user device, when the instructions of the AR app are executed by hardware components of the user device 102, e.g. a processor of the user device 102. The AR app also provides the user device 102 with the functionality to access and communicate with the server 200.
The user device 102 includes a camera or other suitable image capture device that is configured to capture digital images of a real world object 105. The camera may capture single digital images or may capture a plurality of digital images or capture a stream of digital images (i.e. a video) .
The user device 102 locally stores a plurality of markers or marker groups. The user device 102 also locally stores content associated with the markers or marker groups. The user device 102 is configured to utilise the locally stored markers or marker groups, and associated content, to identify markers in a digital image and present augmented reality content. The locally stored markers or marker groups are stored for a predefined period of time defined by a time to live parameter. The markers or marker groups are deleted at the end of the time defined in the time to live parameter. The AR app 104 includes a timer function that is configured to determine and calculate the time left as per the defined time to live parameter.
Each marker group may comprise one or more markers and augmented reality content associated with the one or more markers. Each marker group may also include an image processing tool that can be used to process the received digital images. The image processing tool may be a convolution neural network or another type of neural network or another software module or hardware module that can be used to identify objects within digital images.
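Purely by way of illustration, a marker group as described above might be modelled as the following data structure. This is a minimal Python sketch; the names `Marker`, `MarkerGroup` and `ttl_seconds` are hypothetical and form no part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Marker:
    """A two or three dimensional image marker that is actively searched for."""
    marker_id: str
    image_bytes: bytes

@dataclass
class MarkerGroup:
    """A marker group: markers, content associated with each marker, and an
    optional image processing tool (e.g. a convolution neural network)."""
    group_id: str
    markers: List[Marker] = field(default_factory=list)
    content: Dict[str, str] = field(default_factory=dict)  # marker_id -> AR content reference
    processing_tool: Optional[Callable] = None             # e.g. a CNN inference function
    ttl_seconds: float = 3600.0                            # time to live parameter

# Example: a hypothetical marker group for one category of real world objects.
group = MarkerGroup(
    group_id="cars",
    markers=[Marker(marker_id="car-logo-01", image_bytes=b"...")],
    content={"car-logo-01": "overlay_car_specs.json"},
)
```

A marker group thus bundles together the markers to search for, the content to overlay when a marker is found, and optionally a dedicated processing tool.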
The user device 102 is configured to utilise the locally stored markers to identify real world objects within the received digital images. The user device 102 is further configured to present augmented reality content corresponding to the identified markers within the digital image. The user device 102 includes sets of markers that correspond to a particular category of object or correspond to a specific object. Alternatively, the user device 102 comprises marker groups that correspond to the specific category of object or the user device 102 uses image processing tools e.g. convolution neural networks that are specifically configured to identify a category of object. For example, the category of objects can be animals or world leaders or cars or any other categories.
The server 200 is configured to store a plurality of markers or marker groups. The server 200 also stores content associated with the various markers. The markers or marker groups stored in the server 200 are arranged as per category of object. Alternatively, the markers may be arranged in other categories that relate to various elements in the real world that may be compatible with the AR app 104, in order to provide AR content. Some examples include objects at a location, people, a news event and so on.
The server 200 is further configured to store multiple user profiles, each user profile associated with a unique user. Each user accesses the server 200 through the AR app and creates a user profile as part of the sign in process. User data is stored in relation to the user profile. User data for each user is tracked by the user device. The user data is continuously updated as new user data tracked by the user device 102 is transmitted to the server 200 for storing. The user device 102 also stores a local copy of the user profile of the particular user that is either logged into the AR app on the device 102 or the person that owns the user device 102.
User data comprises at least the name, the IP address of the user device, the location of the user, data from the user device sensors, purchase history, device data, data captured on the device, browsing history of the user, user preferences, and social media data of the user. Other user data can also include additional location data such as, for example, proximity to other users and proximity to wireless or wired communication nodes. The user data may also include information from the user device sensors, e.g. accelerometers, gyroscopes etc., as well as information from external APIs on the user device.
The user data is gathered continuously at specified time intervals. The user data is updated within the user profile on the user device 102 and the server 200. The updated user data is used to predictively download markers or marker groups onto the user device 102 for use in processing images and identifying objects in the image. Older user data is overwritten in the user device 102.
Figure 2 shows an example embodiment of the user device 102 and its components. The user device 102 comprises suitable components necessary to receive, store and execute appropriate computer instructions. The user device 102 in this example is a smartphone and the illustrated figure is a generalised schematic that does not illustrate all the components of the user device 102. Other user devices, e.g. Google glasses or tablets or smart wearables, will include similar components as described. Referring to figure 2, the user device comprises a processor 110. The processor may be any suitable processor, e.g. an ASIC or other chip. In one example the user device 102 may include a “system on chip” (SoC) that comprises the processor 110 (i.e. CPU) and other hardware components, e.g. a video processor, display processor, LTE modem and GPU, all formed as a single integrated chip. The device 102 comprises a memory unit. In the illustrated example the user device comprises random access memory (RAM) 112 and flash memory 114. RAM 112 may be a RAM chip, e.g. LPDDR4 or LPDDR4X or LPDDR3. The flash memory 114 includes pre-installed applications and the operating system of the user device 102.
The user device 102 further comprises one or more communication units 116. In one example the user device 102 comprises a modem 116, which may be, for example, an LTE modem implemented on an LTE chip. The device 102 comprises a battery 118 that is configured to power the device when the device is used remotely. The user device 102 further comprises additional sensors (shown as a single block) 120. The sensor block 120 can include multiple sensors, e.g. accelerometers, gyroscopes, ambient light sensors, a digital compass, proximity sensors or other suitable sensors. The user device 102 also comprises an image capture apparatus 122 that is in electronic communication with the flash memory 114 and the processor 110. The image capture apparatus 122 is a digital camera that also includes an integrated sensor, lens and image processor for initial processing of the image to generate a clear digital image. The user device 102 comprises I/O devices 124, which in this example is a touch screen. The touch screen 124 can be used to output information and receive information inputs.
The user device 102 comprises a markers database 130. The markers database 130 stores markers locally on the user device 102, e.g. in the flash memory 114. The markers database 130 may include markers or may store marker groups that include markers and/or associated content and/or image processing tools. The marker database 130 is a local repository of markers that can be updated and used to access markers. The user device 102 further comprises a tracked marker database 132 that is stored in the flash memory 114 of the device. The tracked marker database 132 comprises a plurality of markers within it that are actively used to process received digital images. The user device 102 may include a content database 134. The content database 134 is configured to store augmented reality content, and the content is stored in relation to particular markers in the markers database 130 or the tracked marker database 132.
The user device 102 further includes a time to live parameter database that stores the various time to live parameters associated with the markers stored on the user device 102. The processor 110 of the user device is configured to track the time limit as per the time to live parameter for each marker, and then delete the specific marker and associated content once the time defined in the time to live parameter has expired. The time to live parameter defines the amount of time a marker or marker group remains cached within the memory, e.g. in a cache memory, such that the marker is prioritised and does not need to be fetched in the way other data does. However, once the time period defined in the time to live parameter (TTL) expires, the marker group (or marker) may be deleted or moved out of the cache.
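The time to live behaviour described above might be sketched as follows. This is an illustrative Python cache only; the class and method names are hypothetical, and a real device would evict entries from an actual cache memory rather than a dictionary.

```python
import time

class TTLMarkerCache:
    """Illustrative time-to-live cache: cached marker groups stay prioritised
    until their TTL expires, after which they are deleted on access."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock               # injectable clock, useful for testing
        self._store = {}                  # group_id -> (expiry_time, marker_group)

    def put(self, group_id, marker_group, ttl_seconds):
        """Cache a marker group together with its time to live parameter."""
        self._store[group_id] = (self._clock() + ttl_seconds, marker_group)

    def get(self, group_id):
        """Return the cached group, or None (and evict it) if the TTL expired."""
        entry = self._store.get(group_id)
        if entry is None:
            return None
        expiry, group = entry
        if self._clock() >= expiry:       # time limit reached: delete the group
            del self._store[group_id]
            return None
        return group

    def refresh(self, group_id, ttl_seconds):
        """A further trigger may extend the life of a cached marker group."""
        entry = self._store.get(group_id)
        if entry is not None:
            self._store[group_id] = (self._clock() + ttl_seconds, entry[1])
```

The injectable clock is a design convenience: it lets the expiry logic be exercised deterministically without waiting in real time.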
The user device 102 also comprises a user profile database 136. The user profile database 136 stores user data in a memory (e.g. flash memory) of the user device 102. The user data of the user is tracked, updated and stored in the user profile. The user data is used to anticipate a user’s needs and used to predict the markers or marker groups the user will likely need in order to identify digital images and generate augmented reality content.
The user device also comprises a prediction module 138. The prediction module 138 may be a software module, e.g. a software program that can be executed by the processor. Alternatively, the prediction module 138 may be a hardware module. In one example the prediction module 138 is a machine learning algorithm that is configured to process the user data to anticipate the user’s needs and predict required markers or marker groups. The prediction module 138 may be a machine learning algorithm that operates based on probability or behavioural patterns of the user or user preferences, to predict the markers or marker groups that the user requires to identify images.
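As an illustration of the prediction step only, a trivial stand-in for the prediction module 138 might rank marker group categories by how often they appear in recent user data; a real implementation could instead use a trained machine learning model. All names here are hypothetical.

```python
from collections import Counter

def predict_marker_groups(user_events, top_n=3):
    """Rank marker group categories by how often they appear in recent user
    data (location visits, browsing history, purchases) and return the top-N
    categories whose marker groups should be cached next."""
    counts = Counter(event["category"] for event in user_events)
    return [category for category, _ in counts.most_common(top_n)]

# Hypothetical user data events tracked by the user device.
events = [
    {"category": "cars", "source": "browsing_history"},
    {"category": "cars", "source": "location"},
    {"category": "animals", "source": "purchase_history"},
]
predicted = predict_marker_groups(events, top_n=2)  # ["cars", "animals"]
```

The output of such a predictor is simply a ranked list of categories, which the device can then use to decide which marker groups to cache or download.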
Optionally the user device 102 may comprise a database 140 to store image processing tools e.g. convolution neural networks. The convolution neural networks may be stored in the marker database or as part of a marker group or may be stored in a separate image processing tool database 140. The user device 102 may store multiple neural networks, where each neural network is trained to identify a specific category of real world object, for example world leaders or cars or airplanes or any other objects. An appropriate neural network can be selected based on processing the user data, i.e. a neural network required to process digital images may be predicted based on processing the user data.
Figure 3 shows an example embodiment of the server 200 and its components. Referring to figure 3, there is illustrated the server 200 that comprises at least a processor and an associated memory unit or a plurality of memory banks. The server 200 comprises suitable components necessary to receive, store and execute appropriate computer instructions. The components may include a processor 202 (i.e. a processing unit), read-only memory (ROM) 204, random access memory (RAM) 206, and input/output devices such as disk drives 208, input devices 210 such as an Ethernet port, a USB port, etc. The server 200 further comprises communications links 214, i.e. a communication module that is configured to facilitate wireless communications via a communication network. The communication link 214 allows the server 200 to wirelessly link to a plurality of communication networks. The communication link 214 may also allow the server to link with localised networks such as, for example, Wifi or other LAN (local area networks). At least one of a plurality of communications links may be connected to an external computing network through a telephone line or other type of communications link.
The processor 202 may comprise one or more electronic processors. The processors may be microprocessors or FPGAs or any IC based processor. In one exemplary construction the processor 202 comprises a plurality of linked processors that allow for increased processing speed and also provide redundancy. In some instances, the processor 202 may include adequate processors to provide some redundant processing power that can be accessed during periods of high need e.g. when multiple functions are being executed.
The server 200 may include storage devices such as a disk drive 208 which may encompass solid state drives, hard disk drives, optical drives or magnetic tape drives. The server 200 may use a single disk drive or multiple disk drives. The server 200 may also have a suitable operating system which resides on the disk drive or in the ROM of the server 200. The operating system can use any suitable operating system such as for example Windows or Mac OS or Linux.
The server 200 comprises a server marker database 220. The server marker database 220 is configured to store a plurality of markers or marker groups. The server 200 comprises a content database 222. The content database is configured to store content (i.e. augmented reality content) . The augmented reality content stored in the database 222 is related to the markers stored in the server marker database 220. The server 200 can store several markers and associated content. The server 200 can store more markers and associated content than the user device 102 can, since the server 200 comprises much greater memory space. The server 200 also comprises a time to live parameter database 224. The time to live parameter database 224 stores time to live parameters associated with each of the markers or marker groups stored in the marker database 220. The  databases  220, 222 and 224 may be stored in the memory unit of the server.
The server 200 also comprises an image classification module 226. The image classification module 226 may be a software module or a software program. The image classification module is configured to receive an image and classify one or more objects in the image; in particular the image classification module 226 may identify a classification indicator present in the image. The image classification module 226 is further configured to identify markers or marker groups that correspond to classified objects in the image. The server 200 is configured to receive an image, classify objects in the image by identifying a classification indicator, identify markers or marker groups (or neural networks) for the classified object and then transmit this information on the specific markers to be used to the user device 102. The user device 102 can download the markers or marker groups (or neural networks), which can then be used by the user device 102 to classify additional images.
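The server-side flow just described might be sketched as follows. This is illustrative only; `classify` and `marker_index` are hypothetical stand-ins for the image classification module 226 and the server marker database 220 respectively.

```python
def handle_image(image, classify, marker_index, ttl_seconds=3600):
    """Classify an uploaded image by its classification indicator and return
    the corresponding marker groups, plus a time to live parameter for the
    device to store alongside them."""
    indicator = classify(image)          # e.g. decode a QR code or barcode
    if indicator is None:                # no indicator found: nothing to send
        return {"marker_groups": [], "ttl_seconds": 0}
    groups = marker_index.get(indicator, [])
    return {"marker_groups": groups, "ttl_seconds": ttl_seconds}

# Example: a frame whose indicator resolves to two marker groups.
response = handle_image(
    "frame.jpg",
    classify=lambda img: "shirt-promo",  # pretend decoding succeeds
    marker_index={"shirt-promo": ["shirt-markers", "shirt-content"]},
)
```

The device would then download the returned marker groups and store them locally together with the returned time to live parameter.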
The user device 102 is configured to download and load markers before they are encountered. The user device 102 processes user data, anticipates the user’s needs and optimally specifies the markers or marker groups or neural networks that the user is most likely to encounter (i.e. use) during the user’s day. The prediction module 138 is used to process the user data using machine learning to learn the user’s behaviour based on the user data. The content (e.g. augmented reality content) associated with the markers from the prediction module 138 can be stored on the user device 102 or may be downloaded onto the user device 102. This predictive selection and storing of markers and/or contents (or marker groups or neural networks) manages the device memory and processing power. The method of selecting and displaying augmented reality content actively manages the memory and processing resources of the user device 102. The markers on the user device 102 are deleted based on the prediction as well as based on the time to live parameter (TTL) of the specific markers. Once the time defined in the time to live parameter (TTL) has expired the markers can be deleted. The markers that are predicted by the prediction module can be downloaded from the server 200, or may already be present in the user device 102. If the markers (or marker groups) are present in the device 102, these markers (or marker groups) can be cached and prioritised over other stored markers.
The approach of predictive selection and/or downloading of markers reduces the overall number of markers required on the user device 102. The user device 102 is configured to receive and process digital images using the prioritized (i.e. cached) markers to display augmented reality content corresponding to the prioritized markers. In another example implementation the user device 102 may download or cache neural networks (e.g. convolutional neural networks) that correspond to the markers identified by the prediction; these networks can be used to process images and classify objects within the images. Augmented reality content corresponding to the classified objects can be displayed on the user device 102, e.g. on the touch screen 124.
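The predictive selection described above can be sketched in pseudocode-style Python. This is a minimal illustrative sketch only: the function names (`prefetch_markers`, `predict_marker_groups`) and data shapes are assumptions for illustration, not taken from the actual implementation, and the "prediction" is reduced to a simple frequency count standing in for the machine learning inference of the prediction module 138.

```python
def predict_marker_groups(user_data):
    # Placeholder inference: rank the categories the user interacts
    # with most often, standing in for the prediction module's model.
    counts = {}
    for event in user_data:
        counts[event] = counts.get(event, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)[:2]

def prefetch_markers(user_data, local_store, tracked_db, fetch_from_server):
    """Predict marker groups from user data, then cache them locally."""
    for group in predict_marker_groups(user_data):
        if group in local_store:
            # Already on the device: promote it to the tracked
            # (prioritised) marker database.
            tracked_db[group] = local_store[group]
        else:
            # Not present locally: download the group (with its TTL)
            # from the server and store it locally.
            marker_data = fetch_from_server(group)
            local_store[group] = marker_data
            tracked_db[group] = marker_data
    return tracked_db
```

Here `fetch_from_server` is a stand-in for the device-to-server interrogation; in practice it would be a network call returning marker data and the associated TTL.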
Figure 4 shows an example of a method 400 of selecting and displaying augmented reality content on a user device 102. The method 400 comprises pre-emptively identifying markers that are used to process received digital images and generate augmented reality content. At step 402 the user device accesses user data of the user that is logged in i.e. the user data of the user associated with the user profile.
Step 404 comprises predicting one or more markers or marker groups the user is likely to require based on the user data. The prediction module 138 is used to predict (i.e. perform an inference on) the required markers or marker groups using machine learning, a neural network, or hardcoded logic. The prediction module 138 also provides an indication of which markers need to be cached and when.
Step 406 comprises checking if the locally stored markers or marker groups in a memory unit of the user device match the predicted markers or marker groups i.e. check if markers matching the inferred markers or marker groups are present in the local memory of the user device 102.
Step 408 comprises loading the locally stored markers or marker groups that match the predicted markers or marker groups into a tracked marker database 132, if the locally stored markers or marker groups match the predicted markers or marker groups. The markers or marker groups in the tracked marker database 132 are prioritised over the other markers or marker groups in the memory of the user device 102. The markers or marker groups in the tracked marker database 132 are cached and assigned precedence such that these markers or marker groups are used. Optionally the marker groups may include convolutional neural networks, and convolutional neural networks stored in the tracked marker database 132 are prioritised over other locally stored neural networks.
Step 410 comprises receiving one or more digital images of a real world object. The images are captured by the camera 122. The images may be a continuous stream of images.
Step 412 comprises processing the received digital images using the markers or marker groups (or convolution neural networks) stored in the tracked marker database 132 to identify markers present in the digital images. Alternatively the images are processed to identify real world objects in the images.
Step 414 comprises presenting identified augmented reality content associated with the markers or marker groups in the tracked marker database 132. The augmented reality content is presented on the user device touch screen 124. The AR content is overlaid onto the real world object i.e. overlaid on the real world content visible on the screen. The augmented reality content is overlaid in a predefined orientation and can be positioned spatially in relation to markers detected in the digital images.
The method 400 also includes defining time to live parameters (TTL) of the markers or marker groups in the tracked marker database 132. The time to live parameters (TTL) define the amount of time a marker or marker group remains cached i.e. remains prioritized over other markers present on the user device.
The method 400 comprises step 416. Step 416 comprises deleting markers or marker groups from the tracked marker database 132 if the tracked marker database 132 is full (i.e. fully occupied). The markers or marker groups with the lowest time to live parameter, i.e. the lowest time value, are deleted first. The time to live parameter of any marker may be reset in response to a trigger; for example, if the prediction module identifies a marker or marker group that has not been used for some time but is then identified again, the time of the TTL is reset (i.e. refreshed), hence re-prioritising that identified marker. The TTL defines the time that a particular marker or marker group is prioritized and fetched over other data. The TTL value, i.e. the time defined in the TTL, can be reset or extended if specific markers are repeatedly identified by the prediction module 138. This indicates that there is a higher probability of those specific markers or marker groups being used.
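The eviction and TTL-reset behaviour of step 416 can be illustrated with a small sketch. The class name `TrackedMarkerDB` and its capacity model are illustrative assumptions; times are plain numbers rather than wall-clock timestamps to keep the example deterministic.

```python
class TrackedMarkerDB:
    """Sketch of a TTL-governed tracked marker database (step 416)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.expiry = {}  # marker group -> absolute expiry time

    def add(self, group, ttl_seconds, now):
        if group in self.expiry:
            # Re-identified group: reset (refresh) its TTL,
            # re-prioritising it over other cached groups.
            self.expiry[group] = now + ttl_seconds
            return
        if len(self.expiry) >= self.capacity:
            # Database full: delete the group with the lowest
            # remaining time value first.
            victim = min(self.expiry, key=self.expiry.get)
            del self.expiry[victim]
        self.expiry[group] = now + ttl_seconds

    def purge_expired(self, now):
        # Remove any group whose TTL time has expired.
        for group in [g for g, t in self.expiry.items() if t <= now]:
            del self.expiry[group]
```

For example, with a capacity of two, adding a third group evicts whichever cached group has the lowest expiry time, while re-adding an existing group merely refreshes its TTL.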
If the check at step 406 results in a negative result, i.e. the user device 102 does not have locally stored markers or marker groups that match the predicted markers or marker groups, the method proceeds to step 418. Step 418 comprises searching for markers or marker groups identical to the predicted markers or marker groups on the server 200. The user device 102 interrogates the server 200 for markers or marker groups identical to the predicted markers or marker groups.
Step 420 comprises downloading markers or marker groups identical to the predicted markers or marker groups from the server 200, if the matching markers or marker groups are located on the server 200. Step 422 comprises downloading the associated time to live parameters (TTL) of the downloaded markers or marker groups.
Step 424 comprises locally storing the markers or marker groups downloaded from the server 200. The downloaded markers or marker groups may be automatically stored in the tracked marker database 132 for use in processing digital images received. If the memory unit e.g. flash memory 114 of the user device is full, the method comprises deleting markers or marker groups having the lowest TTL from the memory unit at step 426. Method 400 can be repeated by the user device 102.
Augmented reality content associated with the markers or marker groups in the tracked marker database 132 can be displayed on the user device if processing the digital images results in detecting markers in the digital images that match the markers or marker groups in the tracked marker database, i.e. the AR content associated with the tracked markers is presented to the user on the user device screen. Alternatively, neural networks associated with the tracked markers can be used to process the digital images and identify AR content for display to the user.
One example use case may be that user X likes cars, as is evident from user X’s social media data, pictures stored on user X’s camera, GPS locations around car dealerships, purchasing history, browsing history, etc. User X’s device (device X) is configured to infer user X’s fondness for cars from the various user data. Device X predicts, i.e. identifies, one or more markers or marker groups or neural networks related to cars. Device X searches its local memory for markers identical to the identified car related markers or marker groups. If car related markers or marker groups are found in the local memory, these markers or marker groups or neural networks are loaded into a tracked marker database on device X. The markers or marker groups or neural networks in the tracked marker database are prioritised over other markers stored on the phone. Alternatively, if no markers or marker groups or neural networks are found in the device memory, device X interrogates the server for markers or marker groups identical to the identified car markers. Device X downloads car markers or marker groups or neural networks from the server onto device X and stores these markers in the tracked marker database. Device X processes images using the markers or marker groups or neural networks in the tracked marker database to identify cars in the various digital images captured by the camera of device X. Device X then overlays AR content corresponding to the car markers or marker groups on the display of device X. This example shows how markers or marker groups or neural networks are pre-emptively stored and prioritised on device X (i.e. the user device) without the need for storing every single type of marker. Once the time defined in the TTL for the car markers has expired, the car related markers can be deleted.
Otherwise, if cars are continuously detected, the time parameter in their TTL is reset and these markers are maintained in the tracked marker database. The described use case is an example of method 400.
The predictive approach described in method 400, of predictively and pre-emptively selecting markers or marker groups or neural networks and prioritising these for use (i.e. caching and prioritising them), is advantageous since the user device can selectively store only the required markers or marker groups or neural networks used for image processing. The method 400 is triggered based on processing user data and inferring user preferences or tendencies, and specific markers are identified from those tendencies or preferences. The method 400 allows the user device 102 to behave predictively.
Figure 5 shows a further method 500 for selecting and displaying augmented reality content to the user. Preferably the method 400 is executed by the user device; however, in some instances the user device 102 can execute the method 500 described below. Method 500 commences at step 502. Step 502 comprises detecting, by the user device, a marker node. The marker node may be detected on or adjacent to a real world object. Alternatively the marker node may be detected in an area or within a digital image captured by the user device. As stated earlier, a marker node embodies information about markers or marker groups or neural networks.
Step 504 comprises checking if markers or marker groups stored locally on the user device correspond to (i.e. match) the markers or marker groups associated with the detected marker node. If yes, the method proceeds to step 506.
Step 506 comprises identifying the locally stored markers or marker groups (or neural networks) that correspond to the markers or marker groups associated with the detected marker node.
Step 508 comprises loading the identified markers or markers groups into the tracked marker database 132. More specifically the locally stored markers or marker groups that correspond to the markers or marker groups associated with the marker nodes are loaded into the tracked marker database 132. The markers or marker groups within the tracked marker database 132 are actively used to process the one or more received digital images to identify markers in the image or identify real world objects, and generate associated augmented reality content for presenting on the user device as an overlay. Markers or marker groups originally within the tracked marker database 132 may be deleted when new markers or marker groups are added to the tracked marker database.
If the tracked marker database 132 is full, then step 510 comprises removing markers or marker groups with the lowest time value as per the time to live parameter (TTL) . Deleting markers or marker groups with the lowest TTL value ensures that markers or marker groups that are required are cached and prioritised over other data. This step of deleting or overriding the tracked marker database with the markers or marker groups (or neural networks) most relevant to the current operations of the user reduces the memory space occupied by markers or marker groups (or neural networks) and reduces processing load.
If the output of step 504 is negative i.e. a No (N) , the method proceeds to step 512. Step 512 comprises searching the server 200 for markers or marker groups corresponding to the markers or marker groups associated with the marker node.
Step 514 comprises downloading markers or marker groups (or neural networks) from the server 200 that correspond to the markers or marker groups associated with the marker node. Step 516 also comprises downloading time to live parameters (TTL) associated with the markers or marker groups that are downloaded at step 514.
Step 518 comprises storing the downloaded markers or marker groups locally on the user device 102 (i.e. in a memory 114 of the user device), for use by the user device to process the received digital images and generate augmented reality content. Optionally the markers or marker groups downloaded from the server may be stored in the tracked marker database 132. The downloaded markers or marker groups are deleted from a memory unit of the user device once the time defined in the time to live parameters (TTL) has expired.
Step 520 comprises checking whether the memory unit of the user device (e.g. memory 114) is full prior to downloading the markers or marker groups from the server at step 516. If there is space on the local memory unit, the method proceeds to step 516.
If at step 520 there is no space on the user device 102, the method proceeds to step 522. Step 522 comprises deleting locally stored markers or marker groups that have the shortest time defined in their time to live parameter (TTL) or deleting markers or marker groups that have not been used for the time defined in the TTL. The method 500 may be executed by the processor of the user device. The method 500 is executed each time a marker node is encountered.
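The marker node flow of method 500 (steps 502 to 522) can be sketched as follows. The function and parameter names (`resolve_marker_node`, `server_lookup`, `local_limit`) are assumed for illustration; a real implementation would perform asynchronous network calls rather than the synchronous callback used here.

```python
def resolve_marker_node(node_id, local_store, tracked_db, server_lookup,
                        local_limit=4):
    """Return the marker group entry to use for a detected marker node."""
    if node_id in local_store:
        # Steps 506/508: locally stored groups matching the marker node
        # are loaded into the tracked marker database.
        tracked_db[node_id] = local_store[node_id]
        return tracked_db[node_id]
    # Steps 512/514: otherwise interrogate the server for matching
    # marker groups and their TTL.
    groups, ttl = server_lookup(node_id)
    if len(local_store) >= local_limit:
        # Steps 520/522: local memory full -> delete the locally stored
        # entry with the shortest time defined in its TTL.
        victim = min(local_store, key=lambda g: local_store[g]["ttl"])
        del local_store[victim]
    # Step 518: store the downloaded groups locally and track them.
    local_store[node_id] = {"groups": groups, "ttl": ttl}
    tracked_db[node_id] = local_store[node_id]
    return tracked_db[node_id]
```

The sketch keys entries by marker node identifier purely for brevity; the description above allows one node to map to several marker groups or neural networks.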
One example use case includes a user X scanning a t-shirt with a marker node on the t-shirt. The user device of user X (i.e. device X) attempts to recognise the t-shirt but may not be able to recognise images on the t-shirt or identify the t-shirt itself. However the t-shirt includes a marker node, e.g. a QR code in this example. The user device of user X scans the QR code (i.e. the marker node). The QR code embodies information, wherein the information is a marker group (or markers or neural networks) associated with the t-shirt, such that the t-shirt can be recognised and AR content associated with the t-shirt can be presented to the user. Device X is configured to check locally for marker groups that match the marker group associated with the marker node. Alternatively, device X interrogates the server 200 to obtain marker groups associated with the marker node. Device X can store the markers or marker groups that correspond to the information embodied in the marker node in an active database, e.g. the tracked marker database. The markers or marker groups corresponding to the marker node information are used by device X to recognise the t-shirt or images on the t-shirt and display associated content. This use case is an example application of method 500.
Figure 6 shows another embodiment of a method 600 for selecting and displaying augmented reality content. Method 600 is executed when the user device 102 encounters a classification indicator on a real world object. Method 600 is executed if the user device cannot decipher the objects in the received digital images or if the user device cannot process the digital images. One reason for this can be the lack of the markers or marker groups or neural networks required to process the digital images.
A classification indicator is a scannable element. For example the classification indicator may be a standard or generic element or marker that is always searched for by the user device. For example the classification indicator may be a specific logo or symbol.
The method 600 commences at step 602. Step 602 comprises detecting a classification indicator in one or more received digital images of the real world object. The user device 102 may be configured to continuously scan each received digital image for a classification indicator. Alternatively, the user device 102 may scan for a classification indicator if it cannot process the received digital image or images and determine the objects present in them.
Step 604 comprises transmitting the digital images to the server 200 if a classification indicator is detected. The images transmitted to the server 200 include everything visible through the lens of the camera.
Step 606 comprises the server 200 processing the received digital images to identify the classification indicator present in the images. The server 200 uses the classification module 226 to identify the classification indicator. The classification indicator classifies the type of markers or marker groups that will be required by the user device 102.
Step 608 comprises identifying one or more markers or marker groups associated with the indicator, i.e. markers or marker groups stored on the server that correspond to the classification indicator. In one example the server 200 may identify markers or marker groups that correspond to the category of object that the classification indicator relates to. Optionally the server 200 may identify one or more neural networks that correspond to the classification indicator to allow processing of digital images and identification of a specific category of objects in the images.
Step 610 comprises downloading the identified markers or marker groups corresponding to the classification indicator onto a memory unit (e.g. flash memory 114) of the user device 102. Step 610 also comprises downloading the time to live parameters (TTL) associated with the markers or marker groups downloaded from the server 200. The markers or marker groups downloaded from the server 200 are locally stored in the user device 102 such that the user device 102 can use these downloaded markers or marker groups for processing received digital images and generate augmented reality content for the user.
Step 612 comprises checking if the local memory 114 of the user device 102 is full. If the local memory of the user device is not full, the markers or marker groups are downloaded. Optionally the downloaded markers or marker groups corresponding to the indicator may be stored in the tracked marker database 132 such that these markers are cached and prioritised for use. Step 610 is performed if the local memory is not full, i.e. there is space on the user device 102.
If the local memory 114 of the user device 102 is full, the method proceeds to step 614. Step 614 comprises identifying locally stored markers or marker groups within the memory unit 114 that have the lowest time to live parameter (TTL) i.e. identify the markers or marker groups with the lowest time value defined in their TTL.
If the local memory of the user device is full, the method proceeds to step 616. Step 616 comprises deleting the markers or marker groups with the lowest TTL value to free space within the memory 114 of the user device. Following step 616 the method can proceed to step 610, where the markers or marker groups from the server 200 are downloaded and locally stored.
Optionally the method can comprise checking whether the tracked marker database 132 is full. If so, markers or marker groups with the lowest TTL value can be deleted from the tracked marker database in order to create space therein. The markers or marker groups downloaded from the server may be loaded into the tracked marker database 132.
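The server-assisted classification flow of method 600 can be summarised in a short sketch. The catalogue contents, the names `SERVER_CATALOGUE`, `server_identify` and `handle_unrecognised_image`, and the flat `group -> TTL` memory model are all illustrative assumptions, not details from the actual system.

```python
SERVER_CATALOGUE = {
    # classification indicator -> (marker groups for that category, TTL)
    "brand_logo_a": (["cars_v1", "cars_v2"], 3600),
    "brand_logo_b": (["paintings_v1"], 1800),
}

def server_identify(indicator):
    """Server side (steps 606/608): map a detected classification
    indicator to the marker groups and TTL for that object category."""
    return SERVER_CATALOGUE.get(indicator)

def handle_unrecognised_image(indicator, device_memory, memory_limit=3):
    """Device side (steps 602-616) for an image the device cannot process."""
    result = server_identify(indicator)
    if result is None:
        return device_memory  # no matching category on the server
    groups, ttl = result
    for group in groups:
        if len(device_memory) >= memory_limit:
            # Steps 614/616: delete the locally stored group with the
            # lowest TTL value to free space.
            victim = min(device_memory, key=device_memory.get)
            del device_memory[victim]
        device_memory[group] = ttl  # step 610: store group with its TTL
    return device_memory
```

In the real system the device transmits the full camera image to the server (step 604) and the server's image classification module 226 detects the indicator; here the indicator is passed directly to keep the sketch self-contained.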
One example use case of method 600 will be described. User X captures a digital image of an object using device X (i.e. user X’s device) . Device X does not have the markers or marker groups or neural networks to process the images and hence cannot identify the object in the digital images. Device X scans the digital images for a classification indicator. The object may have a classification indicator e.g. a logo on it. Device X detects a classification indicator after scanning the images. If a classification indicator is detected the digital images are transmitted to the server 200 for processing. The server 200 processes the received images and identifies the classification indicator present in the one or more digital images. The server 200 further identifies one or more markers or marker groups or neural networks corresponding to the classification indicator. The identified markers or marker groups or neural networks are downloaded onto device X. The TTL instructions associated with the markers or marker groups are also downloaded onto device X. Device X can use the markers or marker groups or neural networks corresponding to the classification indicator to process the digital images, identify objects within the images and generate augmented reality content. Augmented reality content related to the markers or marker groups is presented to the user.
Figure 7 shows an example operation of the user device to select and display augmented reality content. Figure 7 illustrates implementation of some steps of methods 400 and 500 as described earlier. Referring to figure 7, the user device begins by opening the AR app 104 and running the AR app at step 702. The AR app 104 includes instructions that cause the user device 102 to execute various functions. The software (i.e. the AR app) makes an inference from user data, e.g. from user preferences, at step 704. Step 704 may also include using machine learning to make an inference. Marker groups or neural networks are inferred from the user preferences or machine learning. Alternatively, or simultaneously, the software (i.e. the AR app) detects a marker node at step 706. The AR app may continuously search for the presence of a marker node. If a marker node is detected, the marker groups associated with the node are identified and/or extracted. Steps 704 and 706 are triggers that are detected by the AR app.
Step 708 comprises checking if corresponding marker groups are in the memory but not in the Tracked Markers database. As seen in figure 7, the marker groups stored in the phone memory are checked. Figure 7 also shows that the phone memory stores marker groups and associated TTL instructions. Step 710 comprises adding marker groups to the Tracked Markers database, wherein the marker groups are the ones identified at step 708. Figure 7 shows the Tracked Markers database that includes marker groups and their associated TTL instructions. The marker groups in the Tracked Markers database 132 are cached and prioritised over other data in the phone memory. Step 712 comprises checking if the Tracked Markers database 132 is full. If so, step 714 comprises removing the marker groups with the lowest TTL.
Figure 8 shows an example of operation of the user device 102 and its interaction with the server 200 to download marker groups (or markers or neural networks). Figure 8 is a diagram of the various interactions between the server and user device in order to download marker groups. At 802 the augmented reality app is run, i.e. executed, by the user device 102. Functions 804, 806 or 808 may be performed simultaneously, or any one of them may be performed depending on what occurs. The AR app awaits a trigger, i.e. detects a trigger. Step 808 comprises the AR app making an inference from user preferences or from machine learning executed using user data. Step 810 comprises searching for an inferred marker group on the server. This is generally performed if the inferred marker groups are not located on the user device. Step 812 comprises locating the marker groups based on the request from the user device 102. Step 814 comprises downloading the identified marker groups and associated TTL instructions onto the phone memory. As shown in figure 8, the phone memory receives and stores the marker groups and associated TTL instructions.
Step 804 comprises the AR software detecting a marker node. The marker node is detected and scanned by the AR software to identify a marker group associated with the marker node. Step 816 comprises searching for the corresponding marker group (or marker groups) on the server 200. Following step 816, steps 812 and 814 are repeated to download and store marker groups onto the phone memory.
Step 806 comprises the AR software (i.e. the AR app) detecting a classification indicator (i.e. a facilitator marker). If the AR software detects a classification indicator, at step 818 a picture of the camera view is sent to the server 200 for additional processing and identification of marker groups. The method proceeds to steps 812 and 814 to identify marker groups and download marker groups onto the phone memory.
At step 820 the AR software checks if the device memory is full. If the device memory is full, then at step 822 the marker groups with the lowest TTL value are removed. Steps 804, 806 and 808 are triggers. If the AR app detects one of the triggers, the AR app causes the user device to perform the described steps.
Figures 9 and 10 illustrate examples of real world objects with images. The real world objects shown in figures 9 and 10 are shirts of various types with graphics printed thereon, to which the methods of selecting and displaying augmented reality content may be applied. Figure 9 shows a long sleeve shirt 900. The shirt 900 includes a graphic 902 in the chest region, and each sleeve includes cloud graphics. The central graphic 902 is a graphic of Adam (i.e. from the painting of the “Creation of Adam”). The central graphic 902 is centred in the middle of the shirt and positioned at a DCG distance (shown as 904). DCG stands for distance from collar top of chest graphic. The DCG distance can be a predetermined distance. In some instances the DCG may be a marker that is searched for. For the shirt in figure 9, the DCG distance is 8 cm. In the shirt of figure 9, the marker may be the clouds on the sleeves, the face of Adam, or any other feature in the painting.
Figure 10 illustrates a short sleeve shirt 910 that includes a central graphic 912. The shirt also includes additional text 914 below the central graphic. The central graphic 912 is a stylized Mona Lisa graphic together with two other characters. The text 914 may be a marker such as an image marker. Alternatively, as shown in figure 10, the sunglasses on the Mona Lisa may be a marker.
Additional shirt designs can be developed. The additional shirt designs can include graphics arranged in any predetermined orientation or configuration. The shirts can include markers disposed on the shirt and may include some markers that are embedded in a graphic on the shirt. The image processing tool is selected to identify content in the graphics based either on the identified markers or on user data. The augmented reality content is also selected based on the marker or the identified content of the shirt.
The shirts shown in figures 9 and 10 illustrate example real world objects (e.g. shirts) upon which augmented reality content can be displayed. The shirts include graphics, pictures or images that can include markers.
In one example the user may be determined to like famous paintings based on user data. The user device 102 can infer markers or marker groups related to famous paintings based on the user data. The user device 102 searches a local memory of markers or marker groups to locate markers or marker groups that correspond to the inferred markers. These markers or marker groups are selected from the local store and cached in a priority list, e.g. a tracked markers database. These famous painting related markers or marker groups are prioritised for use over other data. The user device uses the prioritised markers or marker groups to identify the painting and present augmented reality content related to the markers or marker groups. The augmented reality content is displayed as overlaid content. The overlaid content may be oriented relative to markers detected in the image, i.e. on the painting on the shirt. The markers or marker groups from the tracked markers database are deleted once the time limit defined in the TTL parameter has expired.
Alternatively, the user device may infer neural networks that are related to famous paintings. The user device 102 searches a local memory for neural networks corresponding to the inferred neural networks. These famous painting neural networks may be prioritised for use. The neural networks  can be used to identify the painting on the shirt and select corresponding AR content for displaying on the user device.
In a further example, when the shirt of figure 10 is visible the user device may search for a marker node. The marker node may be a code, e.g. a barcode or QR code, on the shirt. As per the illustrated example the marker node may be the text 914. The marker node can be scanned to identify marker groups corresponding to the marker node. The user device 102 is configured to check the local device memory for the identified marker group. If the marker group is detected in the local memory, the marker group is cached and used to process the images and identify objects, e.g. paintings on the shirt. Corresponding AR content can be displayed.
In these instances, if the inferred markers or marker groups or the marker group corresponding to the marker node is not found in the local memory of the device 102, then the device 102 interrogates a server for the markers or marker groups. The markers or marker groups are downloaded to the user device from the server and then utilised to process digital images and identify paintings. Corresponding AR content is displayed on the user device.
The described method and system for selecting and displaying augmented reality content is advantageous because the required markers or marker groups are pre-emptively identified and stored locally on the user device. The user device uses these specific markers or marker groups to identify objects in the digital images and present corresponding AR content to the user. TTL parameters for each of the locally stored markers and marker groups ensure that unused, i.e. “old”, markers or marker groups are deleted from the memory of the user device to free up additional space. The method provides for memory and processing power to be conserved, since only a finite number of markers or marker groups is stored at any time. The marker groups or markers that are used by the user device 102 are targeted, i.e. based on user preferences or user behaviour, or related to a marker node. This allows the user device to detect a wide variety of objects in images without the need for locally storing markers for each type of object. The described approach expands the ability of the user device to process digital images and identify objects.
As described herein, markers or marker groups may be used for processing digital images. Alternatively, one or more image processing tools, e.g. convolutional neural networks, may be used to process the digital images, identify objects, and then identify AR content corresponding to the identified objects. Wherever markers or marker groups are described, the method and system may instead use one or more neural networks, e.g. convolutional neural networks.
The description of any alternative configurations or alternatives herein is considered exemplary. Any of the alternative configurations and features can be used in combination with each other or with the embodiments or configurations described with respect to the figures.
Although not required, some elements or parts of the example embodiments described with reference to the figures can be implemented as an application programming interface (API) or as a series of libraries for use by a developer, or can be included within another software application, such as a terminal or personal computer operating system or a portable computing device operating system. Generally, as program modules include routines, programs, objects, components and data files assisting in the performance of particular functions, the skilled person will understand that the functionality of the software application may be distributed across a number of routines, objects or components to achieve the same functionality.
It will also be appreciated that where the methods and systems of the present invention are implemented by computing systems, or partly implemented by computing systems, any appropriate computing system architecture may be utilised. This includes stand-alone computers, network computers and dedicated computing devices. Where the terms “computing system” and “computing device” are used, these terms are intended to cover any appropriate arrangement of computer hardware for implementing the function described.
The foregoing describes some example embodiments of the present invention or inventions, and modifications obvious to those skilled in the art can be made thereto without departing from the scope of the present invention or inventions. While the invention or inventions have been described with reference to a number of example embodiments, it should be appreciated that the invention or inventions can be embodied in many other forms.

Claims (47)

  1. A method for selecting and displaying augmented reality content to a user on a user device, the method comprising the steps of:
    selecting and/or downloading one or more marker groups onto a memory unit of the user device, in response to a trigger,
    receiving one or more digital images at the user device via an image capturing apparatus of the user device,
    processing the digital image using the downloaded one or more marker groups,
    wherein each of the downloaded marker groups is associated with a time to live parameter defining a time limit the one or more marker groups are stored on the user device, and;
    deleting the stored marker groups at the expiration of the time limit defined in the time to live parameter or maintaining the downloaded marker groups on the user device in response to a further trigger.
  2. A method for selecting and displaying augmented reality content in accordance with claim 1, wherein each marker group comprises one or more markers and augmented reality content associated with the one or more markers.
  3. A method for selecting and displaying augmented reality content in accordance with claim 2, wherein each marker group and contents of each marker group corresponds to a category of real world object.
  4. A method for selecting and displaying augmented reality content in accordance with claim 2, wherein each marker group comprises a reference to the one or more markers and a reference to the one or more augmented reality content associated with each marker, wherein the user device can use the references to access the one or more markers and/or the augmented reality content from a server.
  4. A method for selecting and displaying augmented reality content in accordance with claim 2, wherein each marker group comprises a reference to the one or more markers and a reference to the one or more augmented reality content associated with each marker, wherein the user device can use the references to access the one or more markers and/or the augmented reality content from a server.
  5. A method for selecting and displaying augmented reality content in accordance with claim 2, wherein each marker group comprises one or more image processing tools, each image processing tool configured to identify a category of object within a digital image or identify one or more markers associated with the category of object within a digital image.
  7. A method for selecting and displaying augmented reality content in accordance with claim 1, wherein the method comprises the additional steps of:
    processing the one or more digital images to identify markers present within the one or more digital images and;
    presenting augmented reality content on the user device based on the processing of the one or more digital images.
  8. A method for selecting and displaying augmented reality content in accordance with claim 1, wherein the trigger comprises processing user data to infer one or more markers or marker groups based on the processing of the user data, wherein the user data is profile data associated with a specific user and the user data being stored as part of a user profile.
  9. A method for selecting and displaying augmented reality content in accordance with claim 8, wherein the user data comprises one or more of: location of the user, data from the user device sensors, purchase history of the user, data from external APIs on the user device, proximity to other users based on the location of another user’s user device, user social media data, user preferences and proximity to wireless or wired communication nodes.
  10. A method for selecting and displaying augmented reality content in accordance with claim 8, wherein the method comprises the additional steps of:
    checking if markers or marker groups stored locally on the user device correspond to the inferred markers or marker groups,
    if the locally stored markers or marker groups correspond to the inferred markers or marker groups, the method further comprises:
    identifying the locally stored markers or marker groups that correspond to the inferred markers or marker groups,
    loading the locally stored markers or marker groups that correspond to the inferred markers or marker groups into a tracked marker database, if the locally stored markers or marker groups correspond to the inferred markers or marker groups, and;
    wherein the markers or marker groups within the tracked marker database are the markers or marker groups that are actively used to process the received one or more digital images.
  11. A method for selecting and displaying augmented reality content in accordance with claim 10, wherein if the locally stored markers or marker groups do not correspond to the inferred markers or marker groups, the method comprises the additional steps of:
    downloading markers or marker groups from a server that correspond to the inferred markers or marker groups, wherein the server stores a plurality of markers or marker groups,
    downloading time to live parameters associated with marker or marker group that is downloaded,
    storing the downloaded markers or marker groups locally on the user device for use by the user device to process the received digital images and generate augmented reality content, and;
    wherein the downloaded markers or marker groups are deleted from a memory unit of the user device once the time defined in the time to live parameter has expired.
  12. A method for selecting and displaying augmented reality content in accordance with claim 11, wherein the method comprises the additional steps of:
    checking if the memory unit of the user device is full prior to downloading the markers or marker groups that correspond to the inferred markers or marker groups from the server,
    deleting locally stored markers or marker groups that have the shortest time defined in their associated time to live parameters or deleting locally stored markers or marker groups that have not been used for the time defined in their associated time to live parameters.
  13. A method for selecting and displaying augmented reality content in accordance with claim 1, wherein the trigger comprises detecting, by the user device, a marker node, wherein the marker node comprises markers or marker groups or a reference to markers or marker groups associated with the marker node.
  14. A method for selecting and displaying augmented reality content in accordance with claim 13, wherein the method comprises the additional steps of:
    checking if markers or marker groups stored locally on the user device correspond to the markers or marker groups associated with the detected marker node,
    if the locally stored markers or marker groups correspond to the markers or marker groups associated with the detected marker node, the method comprises the additional steps of:
    identifying the locally stored markers or marker groups that correspond to the markers or marker groups associated with the detected marker node,
    loading the locally stored markers or marker groups that correspond to the markers or marker groups associated with the marker node into a tracked marker database, and;
    wherein the markers or marker groups within the tracked marker database are the markers or marker groups that are actively used to process the received one or more digital images.
  15. A method for selecting and displaying augmented reality content in accordance with claim 14, wherein, if the user device does not have markers or marker groups that correspond to the markers or marker groups associated with the detected marker node stored within the memory unit of the user device, the method comprises the additional steps of:
    downloading markers or marker groups from a server that correspond to the markers or marker groups associated with the marker node,
    downloading time to live parameters associated with marker or marker group that is downloaded,
    storing the downloaded markers or marker groups locally on the user device for use by the user device to process the received digital images and generate augmented reality content, and;
    wherein the downloaded markers or marker groups are deleted from a memory unit of the user device once the time defined in the time to live parameters has expired.
  16. A method for selecting and displaying augmented reality content in accordance with claim 15, wherein the method comprises the additional steps of:
    checking if the memory unit of the user device is full prior to downloading the markers or marker groups from the server, that correspond to the markers or marker groups associated with the detected marker node,
    deleting locally stored markers or marker groups that have the shortest time defined in their associated time to live parameters or deleting locally stored markers or marker groups that have not been used for the time defined in their associated time to live parameters.
  17. A method for selecting and displaying augmented reality content in accordance with any one of claims 13 to 15, wherein the marker node comprises one or more of a machine readable code, NFC signal, RFID signal, Bluetooth signal and Wifi signal.
  18. A method for selecting and displaying augmented reality content in accordance with any one of claims 1 to 16, wherein the method as described above is executed by a processor of a user device.
  19. A method of selecting and displaying augmented reality content on a user device, the user device comprising a memory unit and a processor, the method being executed by the user device, the method comprising the steps of:
    accessing user data of the user associated with the user device,
    predicting one or more markers or marker groups the user is likely to require based on the user data,
    checking if locally stored markers or marker groups in a memory unit of the user device match the predicted markers or marker groups,
    loading the locally stored markers and marker groups that match the predicted markers or marker groups into a tracked marker database, if the locally stored markers or marker groups match the predicted markers or marker groups,
    wherein the markers or marker groups within the tracked markers database are prioritised for use over other markers or marker groups within the memory unit of the user device,
    receiving one or more digital images of a real world object,
    processing the received digital image using the markers or marker groups from the tracked marker database to identify corresponding augmented reality content.
  20. A method of selecting and displaying augmented reality content in accordance with claim 19, wherein the method comprises the additional step of presenting the identified augmented reality content on the user device, wherein the augmented reality content is overlaid onto the real world object on a user interface of the user device.
  21. A method of selecting and displaying augmented reality content in accordance with claim 19, wherein the predicted markers are pre-emptively identified and loaded in the tracked marker database based on the user data, prior to receiving the one or more digital images.
  22. A method for selecting and displaying augmented reality content in accordance with any one of claims 19 to 21, wherein each marker or marker group includes time to live parameters that define the amount of time a marker or marker group remains cached in the memory unit of the user device.
  23. A method of selecting and displaying augmented reality content in accordance with claim 22, wherein, if the tracked marker database is full, the method comprises the additional step of deleting markers or marker groups from the tracked markers database, wherein markers or marker groups with the lowest time to live parameter are deleted from the tracked markers database.
  24. A method for selecting and displaying augmented reality content in accordance with any one of claims 19 to 23, wherein the method comprises refreshing one or more markers or marker groups within the tracked markers database if the step of predicting markers based on user data identifies markers or marker groups already present in the tracked markers database.
  25. A method of selecting and displaying augmented reality content in accordance with claim 24, wherein the step of predicting markers or markers groups comprises identifying user preferences or user behaviour based on processing stored user data, and wherein the step of predicting markers or marker groups is executed by a prediction module.
  26. A method of selecting and displaying augmented reality content in accordance with claim 25, wherein the prediction module is a neural network or a machine learning program that is executed by the user device.
  27. A method for selecting and displaying augmented reality content in accordance with any one of claims 19 to 24, wherein if the predicted markers or marker groups are not locally available in the memory unit of the user device, the method comprises the additional steps of:
    searching for markers or marker groups identical to the predicted markers or marker groups on a server,
    downloading the markers or marker groups identical to the predicted markers or marker groups from the server, if matching markers or marker groups are located on the server,
    downloading associated time to live parameters of the downloaded markers or marker groups,
    locally storing the downloaded markers or marker groups and the associated time to live parameters.
  28. A method of selecting and displaying augmented reality content in accordance with claim 27, wherein if the memory unit of the user device is full, then the method comprises the step of deleting markers or marker groups having the lowest time to live parameter from the memory unit such that new markers or marker groups can be downloaded and stored in the memory unit.
  29. A method for selecting and displaying augmented reality content in accordance with any one of claims 20 to 28, wherein the augmented reality content associated with a marker or marker group is stored in the memory unit of the user device, accessed from the user device and presented on the user device if a corresponding marker is detected in a digital image.
  30. A method for selecting and displaying augmented reality content on a user device, the method comprising the steps of:
    detecting a classification indicator in one or more received digital images of a real world object,
    if a classification indicator is detected in the one or more digital images, the one or more digital images are transmitted to a server,
    identifying the classification indicator in the one or more digital images at the server,
    identifying one or more markers or marker groups that correspond to the classification indicator that are stored on the server,
    downloading the identified one or more markers or marker groups corresponding to the classification indicator onto the memory unit of the user device from the server, and downloading time to live parameters associated with the identified one or more markers or marker groups onto the memory unit of the user device, such that the markers or marker groups and associated time to live parameters are locally stored on the user device.
  31. A method of selecting and displaying augmented reality content in accordance with claim 30, wherein the method comprises the additional steps of:
    processing the one or more digital images, by the user device, using the downloaded markers or marker groups to generate augmented reality content, and;
    presenting the augmented reality content on the user device.
  32. A method of selecting and displaying augmented reality content in accordance with claim 30, wherein the method comprises downloading augmented reality content corresponding to the identified one or more markers or marker groups onto a memory unit of the user device.
  33. A method of selecting and displaying augmented reality content in accordance with claim 30, wherein the digital image transmitted to the server is an image of everything visible in the real world environment as captured by an image capture apparatus of the user device.
  34. A method for selecting and displaying augmented reality content in accordance with any one of claims 30 to 33, wherein if the memory unit of the user device is full, the method comprises the additional steps of:
    identifying locally stored markers or marker groups within the memory unit that have the lowest time to live parameter, wherein the time to live parameter defines a time limit the one or more marker groups are stored on the user device,
    deleting the markers or marker groups with the lowest time defined in their associated time to live parameter.
  35. A system for selecting a processing tool comprising:
    a computing unit including a processor, a memory unit, a user profile database and a prediction module,
    the user profile database configured to store user data,
    a plurality of processing tools in communication with the computing unit,
    the computing unit configured to:
    receive and store user data,
    process the user data, by the prediction module, and predict at least one processing tool for use by the computing unit, and
    select the at least one processing tool from the plurality of processing tools based on the user data.
  36. A system for selecting a processing tool in accordance with claim 35, wherein the user data comprises one or more of: location of the user, data from the user device sensors, purchase history of the user, data from external APIs on the user device, proximity to other users based on other user location, social media data, user preferences, proximity to wireless or wired communication nodes.
  37. A system for selecting a processing tool in accordance with claim 35, wherein the prediction module is a machine learning module that is executed by the processor of the computing unit, the processor configured to determine user preferences based on the user data.
  38. A system for selecting a processing tool in accordance with claim 35, wherein the machine learning module is configured to process user data and determine user preferences, the user preferences being stored in the user profile database in association with a user profile, and the user preferences constantly updated as new user data is received.
  39. A system for selecting a processing tool in accordance with claim 35, wherein the processing tools are markers or marker groups, the markers or marker groups being used to process received images to identify a specific object, and the system being configured to select specific markers or marker groups based on the user preferences.
  40. A system for selecting a processing tool in accordance with claim 35, wherein the processing tools are convolution neural networks, each convolution neural network configured to process one or more images and identify a specific object within the images, and the system being configured to select a specific processing tool based on the user preferences.
  41. A system for selecting a processing tool in accordance with claim 35, wherein the selected processing tool is cached such that the use of the selected processing tool is prioritised by the processor and the selected processing tool is utilised by the processor to process one or more received images to identify a specific object within the one or more images.
  42. A system for selecting a processing tool in accordance with claim 35, wherein the system comprises a processing tool database configured to store the plurality of processing tools, the computing unit configured to select a processing tool from the processing tool database based on the user preferences.
  43. A system for selecting a processing tool in accordance with claim 35, wherein the system comprises a content database, the content database storing augmented reality content, the computing unit configured to select augmented content corresponding to the identified object in the digital images and/or select augmented reality content corresponding to the selected processing tool.
  44. A system for selecting a processing tool in accordance with claim 39, wherein the content database stores augmented reality content corresponding to markers or marker groups, the computing device configured to select augmented reality content corresponding to the selected markers or marker groups based on the user preferences.
  45. A system for selecting a processing tool in accordance with claim 43, wherein the computing device further configured to present the augmented reality content on a user device in spatial relation to the identified object.
  46. A system for selecting a processing tool in accordance with claim 35, wherein a user device comprises the computing unit.
  47. A system for selecting a processing tool in accordance with claim 35, wherein a server comprises the computing unit.
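The memory-full eviction rule recited in claims 12, 23, 28 and 34 (markers or marker groups with the lowest time to live are deleted first) can be sketched as follows, assuming a simple mapping of marker identifiers to remaining TTL in seconds; the function and variable names are illustrative:

```python
def make_room(entries, needed):
    """Delete the 'needed' entries with the lowest remaining TTL from
    'entries' (marker_id -> remaining TTL in seconds) and return the
    deleted marker ids, lowest TTL first."""
    evicted = []
    # Visit entries in order of lowest remaining TTL first.
    for marker_id in sorted(entries, key=entries.get):
        if len(evicted) >= needed:
            break
        evicted.append(marker_id)
    for marker_id in evicted:
        del entries[marker_id]
    return evicted
```

With entries `{"a": 5, "b": 1, "c": 10}` and one slot needed, `"b"` (the lowest TTL) is deleted and `"a"` and `"c"` remain.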
PCT/CN2020/076186 2019-02-22 2020-02-21 A method and system for selecting and displaying augmented reality content WO2020169084A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
HK19119912 2019-02-22
HK19119912.4 2019-02-22

Publications (1)

Publication Number Publication Date
WO2020169084A1 true WO2020169084A1 (en) 2020-08-27

Family

ID=72143649

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/076186 WO2020169084A1 (en) 2019-02-22 2020-02-21 A method and system for selecting and displaying augmented reality content

Country Status (1)

Country Link
WO (1) WO2020169084A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014145532A1 (en) * 2013-03-15 2014-09-18 Rubicon Group Holding Limited Augmented reality systems and methods used to identify toys and trigger video and graphics
US20150022551A1 (en) * 2013-07-19 2015-01-22 Lg Electronics Inc. Display device and control method thereof
CN107885331A (en) * 2017-11-09 2018-04-06 北京易讯理想科技有限公司 A kind of exchange method that Audio conversion is realized based on augmented reality
WO2018128964A1 (en) * 2017-01-05 2018-07-12 Honeywell International Inc. Head mounted combination for industrial safety and guidance
CN108983971A (en) * 2018-06-29 2018-12-11 北京小米智能科技有限公司 Labeling method and device based on augmented reality
US20180365855A1 (en) * 2017-06-16 2018-12-20 Thomson Licensing Method and devices to optimize marker management for pose estimation



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20758446

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 20758446

Country of ref document: EP

Kind code of ref document: A1