CA2373459A1 - Method and system for sharing the visual likeness of a user among a multitude of applications or services - Google Patents


Info

Publication number
CA2373459A1
CA2373459A1
Authority
CA
Canada
Prior art keywords
user
data
likeness
application
applications
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA 2373459
Other languages
French (fr)
Inventor
Carlos Saldanha
Gregory Saumier-Finch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MON MANNEQUIN VIRTUEL Inc
Original Assignee
MON MANNEQUIN VIRTUEL Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MON MANNEQUIN VIRTUEL Inc filed Critical MON MANNEQUIN VIRTUEL Inc
Publication of CA2373459A1 publication Critical patent/CA2373459A1/en
Abandoned legal-status Critical Current

Links

Classifications

    • A HUMAN NECESSITIES
        • A63 SPORTS; GAMES; AMUSEMENTS
            • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
                • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
                    • A63F13/12
                    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
                    • A63F13/50 Controlling the output signals based on the game progress
                        • A63F13/52 involving aspects of the displayed game scene
                    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
                        • A63F13/63 by the player, e.g. authoring using a level editor
                    • A63F13/70 Game security or game management aspects
                        • A63F13/79 involving player-related data, e.g. identities, accounts, preferences or play histories
                • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
                    • A63F2300/40 characterised by details of platform network
                        • A63F2300/401 Secure communication, e.g. using encryption or authentication
                        • A63F2300/407 Data transfer via internet
                    • A63F2300/50 characterized by details of game servers
                        • A63F2300/53 details of basic data processing
                            • A63F2300/532 using secure communication, e.g. by encryption, authentication
                        • A63F2300/55 Details of game data or player data management
                            • A63F2300/5546 using player registration data, e.g. identification, account, preferences, game history
                    • A63F2300/60 Methods for processing data by generating or executing the game program
                        • A63F2300/65 for computing the condition of a game character
                        • A63F2300/66 for rendering three dimensional images
                    • A63F2300/80 specially adapted for executing a specific type of game
                        • A63F2300/8082 Virtual reality
    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T15/00 3D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention has as one object to provide a system and method that allow users to make use of a virtual identity across a number of sites or environments for the purposes of generating an image to be associated with the user at the site or environment without the drawbacks of the state of the art. The present invention has as another object to provide a system and method that allow users to make use of a virtual identity across a number of sites or environments having special virtual identity data requirements without the drawbacks of the state of the art.

Description

METHOD AND SYSTEM FOR SHARING THE VISUAL LIKENESS OF A USER AMONG
A MULTITUDE OF APPLICATIONS OR SERVICES
Technical Field The present invention relates generally to the field of creating and maintaining a virtual identity. By virtual identity is meant the identity of a user which is adopted for the purposes of interacting in a virtual environment, such as a website, computer game, etc.
More particularly, the present invention relates to creating and maintaining a virtual identity across a multitude of network centric or online applications and services, and to the field of graphically representing the user as part of their virtual identity.
Background of the Invention Many online applications and services require the user to create an account with the site in order to fully exploit the personalized features of the site. As a result, all user-specific data and customizations are accessed and updated every time the user returns to the site and logs in with their account name and password. The benefit to the user is persistence of the customizations effected during his or her visit to the site. Simply put, users do not have to start personalization from scratch every time they visit the site. By signing in, the Web application gives the user access to personal data, which represents the user's identity with respect to the application.
The Microsoft™ (MS) Passport is probably the best-known example of a virtual identity system. All online applications that recognize a user's MS Passport are members of the MS Passport Network. The idea of virtual identity is to provide users with a unique account across a multitude of Web applications and services, and across multiple devices. With a virtual identity account, a user does not have to remember an account name and password for each Web application and service he or she uses on a regular basis. This becomes the function of the virtual identity technology employed as the single sign-in system.
All sites employing or compliant with a single virtual identity system constitute the network of applications and services associated with the specific virtual identity system. Within that network, users register a single account name and password and then have access to personalized services with network applications. The virtual identity system provider is responsible for maintaining and operating the infrastructure that supports the single sign-in across all applications in the network.
The personal data contained in the MS Passport is generic personal data. This is useful for many environments and sites that need this basic level of identity data. However, the MS Passport does not address the need for specific identity information required by environments that make use of more detailed identity data. Some environments, such as those which provide the user with highly personalized environment features, such as special interface adaptations and account information, cannot use data from a centralized sign-in containing only general identity data.
Many applications and services on the Web also employ a persona or avatar, which could be that of the user or that of a generic character. Applications such as virtual dressing rooms, weight loss programs, instant messaging systems and computer games may in one form or another include a visual representation of the user. In some cases the application can allow the user to personalize the avatar with the objective of having it resemble the user's own body and/or face. In the context of the virtual dressing room application, the user is able to provide values for parameters such as weight, height, shoulder size, waist size, and hip size, which together establish the user's morphology and from which a virtual model representing the user is rendered. An example is the MyVirtualModel™ environment developed by Applicants' assignee. Afterwards, the user can view how online catalog garments fit the model as an aid in deciding on which garment(s) to purchase.
This type of highly personal information is not presently shared among different environments. Furthermore, in the case of a visual representation of a user, different environments may either require different likeness data or generate a different representation or likeness image based on the environment-specific image generating engine. Thus such virtual identity data cannot be relied upon across different platforms without risking offending users, who would see their likeness change in a displeasing manner from environment to environment.
Summary of the Invention The present invention has as one object to provide a system and method that allow users to make use of a virtual identity across a number of sites or environments for the purposes of generating an image to be associated with the user at the site or environment without the drawbacks of the state of the art.
The present invention has as another object to provide a system and method that allow users to make use of a virtual identity across a number of sites or environments having special virtual identity data requirements without the drawbacks of the state of the art.
According to a first broad aspect of the invention, there is provided a method of providing an image-based likeness of a user in a plurality of environments having access via a telecommunications network to a repository of personal likeness data. The environments each comprise an image generating engine having rendering characteristics which result in a different rendered image based on the likeness data for at least some instances of the likeness data. For each of the plurality of environments, the method involves the following steps: recognizing the user and retrieving from the repository the likeness data for the user;
rendering an image corresponding to the likeness data retrieved using the engine according to the rendering characteristics; providing a display of the rendered image to the user at an interface of the environment; and providing the user with an option at the interface to edit the likeness data, wherein the user edits the likeness data when the rendering characteristics result in the rendered image at the environment being unsatisfactory to the user. The rendering characteristics may comprise image characteristics attributed to likeness data parameters unique to only some of the environments. The step of rendering the likeness data may also comprise sending a request to a central remote server to render the likeness data and obtain the rendered image.
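The per-environment flow of this first aspect can be sketched as follows. This is an illustrative sketch only: the class and method names (LikenessRepository, Environment, session, etc.) are assumptions for exposition, not part of the claimed implementation, and a string stands in for a rendered image.

```python
# Sketch of the first broad aspect: each environment retrieves the shared
# likeness data, renders it with its own engine's characteristics, and lets
# the user edit the data when the result is unsatisfactory. Edits persist
# in the central repository and are seen by all other environments.

class LikenessRepository:
    """Central store of personal likeness data, keyed by user id."""
    def __init__(self):
        self._data = {}

    def get(self, user_id):
        return dict(self._data.get(user_id, {}))

    def put(self, user_id, likeness):
        self._data[user_id] = dict(likeness)


class Environment:
    """One site or application with its own rendering characteristics."""
    def __init__(self, name, repo, style):
        self.name, self.repo, self.style = name, repo, style

    def render(self, likeness):
        # Different rendering characteristics yield different images for
        # the same likeness data (a string stands in for an image here).
        return f"{self.style} image of {likeness.get('height')}cm body"

    def session(self, user_id, edits=None):
        likeness = self.repo.get(user_id)        # recognize user, retrieve data
        if edits:                                # user found the image unsatisfactory
            likeness.update(edits)
            self.repo.put(user_id, likeness)     # edits are stored in the repository
        return self.render(likeness)             # display the rendered image


repo = LikenessRepository()
repo.put("alice", {"height": 170, "weight": 60})
silhouette_site = Environment("advice", repo, "silhouette")
try_on_site = Environment("dressing-room", repo, "photo-realistic")
img1 = silhouette_site.session("alice")
img2 = try_on_site.session("alice", edits={"height": 172})
img3 = silhouette_site.session("alice")  # sees the edit made in the other environment
```

The same likeness data yields a different image in each environment, while an edit made at one environment propagates to all others through the repository.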
Preferably, the personal likeness data is categorized according to what is referred to herein as metaphor, and within each metaphor, profile data for the user defines the individual parameters defining the user's personal likeness within the metaphor. Examples of metaphors are facial profile, body silhouette, a style of cartoon character drawing, fantasy game avatar, and real-to-life form model. The virtual identity likeness data can contain metaphor specific data that can only be shared among environments employing the same metaphor, and data that can be adapted for a plurality of metaphors.
Preferably, the edits to the likeness data are stored in the repository. The step of rendering may comprise rendering the image corresponding to the likeness data as modified by the stored edits.
The step of providing a display preferably comprises including the rendered image as part of an introductory portion of a session in the environment. Alternatively, the step of providing the user with an option may comprise providing the user with a selection between the option and an introductory portion of a session in the environment.
The repository may be stored at a central virtual identity server, and alternatively the repository may be stored in a cookie for each user.
According to a second broad aspect of the invention, there is provided a method of providing a virtual identity dataset to a plurality of environments. The method comprises providing a central virtual identity server for the environments, the central virtual identity server containing a virtual identity data repository; configuring a list of elements defining a desired virtual identity dataset for each of the environments; recognizing a user and requesting the central virtual identity server to retrieve the virtual identity dataset for use with a selected one of the environments and the user; compiling the virtual identity dataset using data stored in the repository, in response to the configured list of elements and to the selected one of the environments and the user, and, when required, providing machine-interpretable instructions for obtaining additional information defining one or more of the list of elements for which the repository has no data for the user; using the machine-interpretable instructions in a user interface to obtain the additional information from the user and to complete the virtual identity dataset; and providing the virtual identity dataset to the selected environment.
Preferably, the step of compiling comprises sending the virtual identity dataset and the instructions to an environment server of the selected one of the environments.
The virtual identity dataset and the instructions may be provided in an object executable by the environment server.
Preferably, the complete virtual identity dataset for the user is stored in the repository. The additional information may be stored for use locally within the selected environment.
Advantageously, the user may be prompted to define whether the additional information provided for the selected environment may be shared by other ones of the plurality of environments. More specifically, the user may be prompted to define whether a field of the additional information may be shared by other ones of the plurality of environments.
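The compiling step of this second aspect can be illustrated with a minimal sketch. The data structures and prompt format below are hypothetical; the patent only requires that the instructions be machine-interpretable, not that they take any particular form.

```python
# Sketch of the second broad aspect: the central server compiles a dataset
# from the element list configured for the selected environment, and emits
# instructions (here, simple prompts) for any element the repository lacks.
# The environment's interface runs the instructions, collects answers, and
# the completed dataset is stored back in the central repository.

def compile_dataset(repository, user_id, element_list):
    stored = repository.get(user_id, {})
    dataset = {k: stored[k] for k in element_list if k in stored}
    # machine-interpretable instructions for the missing elements
    instructions = [{"element": k, "prompt": f"Please enter your {k}"}
                    for k in element_list if k not in stored]
    return dataset, instructions

def complete_dataset(repository, user_id, dataset, instructions, answers):
    for instr in instructions:
        dataset[instr["element"]] = answers[instr["element"]]
    repository.setdefault(user_id, {}).update(dataset)  # complete set stored centrally
    return dataset

repo = {"alice": {"height": 170}}
wanted = ["height", "shoe_size"]          # element list configured by one environment
ds, todo = compile_dataset(repo, "alice", wanted)
ds = complete_dataset(repo, "alice", ds, todo, {"shoe_size": 38})
```

After completion, the repository holds the full dataset, so a second environment configured with the same elements would receive it without re-prompting the user.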
It will be appreciated that the virtual identity system allows the personal likeness data of the user to be shared among applications or environments. In each case, the application generates a visual rendering of the data suitable for the context and service being offered by the site. Due to this, users provide information about their visual likeness only once for each instance of visual definition. Each application linked to the virtual identity system is then able to retrieve this likeness information and render the appropriate image of the user. From the user's perspective, his or her visual identity becomes mobile from application to application.
The ability to visualize one's self may be either central to the user experience or may enhance it tremendously in applications that rely on presenting an image of the user as part of the user experience.
In addition, the user can register specific information with the virtual identity system. This information may be securely stored online in the virtual identity data repository and shared according to sharing agreements when necessary to access a user's personalized features of a participating environment, e.g. Web site. The virtual identity account may be effectively an aggregate of the users' identity with respect to each application.
Brief Description of the Drawings The present invention will be better understood by way of the following detailed description of a preferred embodiment with reference to the appended drawings, in which:
Fig. 1 is a block diagram of the system according to the preferred embodiment.
Fig. 2 is a schematic diagram illustrating a plurality of applications including a virtual identity or a likeness representation of a user.
Fig. 3 is a screen from one application including a visual identity, a session interface and an edit option selection object, according to the preferred embodiment.
Fig. 4 is a flow chart of the steps according to the method of one aspect of the invention.
Fig. 5 is a flow chart of the steps according to the method of another aspect of the invention.

Fig. 6 is a model diagram of the software implemented in the central virtual identity server, according to the preferred embodiment.
Fig. 7 is a diagram illustrating schematically the sequence and structure of the system according to the preferred embodiment.
Detailed Description of the Preferred Embodiment In one embodiment, the present invention provides a computer system called the Universal Visual Identity System (UVIS), designed to personify virtual identity (VI) systems and consequently to enable the mobility of the users' visual identities across Web applications and devices. In the UVIS system, a visual identity is called a Virtual Model.
In a preferred embodiment, applications in a UVIS-enabled network register the visual descriptor, also termed likeness data, required by the application's rendering engine to generate the virtual model (VM) of the user. Subsequently, when a user signs into the application with their virtual identity account for the first time, UVIS queries the user to establish an answer set for the likeness data descriptors. This information is stored either temporarily within the application or permanently in a UVIS-managed online repository. At this point, the application typically displays the resulting image of the user as generated by the rendering engine of the application. The user may accept the image as is, or can modify the image by editing the initial likeness data parameters. Once the user is satisfied with his or her virtual model, the likeness data is stored permanently in a UVIS-managed online repository.
Afterwards, other applications within the UVIS-enabled network having the same rendering engine will automatically display the same user image whenever he or she uses the persona-based services of those applications. At any time during a session, or upon repeat visits to any UVIS-enabled network application, the user has the opportunity to edit their likeness data and thus affect the appearance of their virtual model across the network.
Visual Identity Metaphor The preferred embodiment includes a specification for defining visual identity descriptors that can be common to a visual identity metaphor, an application domain and/or an application instance. As a result, Web applications and services can render the users' visual identities in a manner suitable and specific to the application's requirements.
The term "visual identity metaphor" refers to a target visual concept such as the user's body or the user's face. The body metaphor is very important for application domains such as gaming and apparel shopping, while the face metaphor is predominantly important for makeover and instant messaging domains.
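A minimal data model for metaphor-scoped likeness data might look like the following. The class and field names are assumptions chosen for illustration; the patent does not specify a concrete schema.

```python
# Sketch of metaphor-scoped likeness data: profile parameters are grouped
# under a metaphor (body, face, cartoon character, ...) so that metaphor-
# specific data is shared only among environments employing that metaphor.

from dataclasses import dataclass, field

@dataclass
class VisualIdentity:
    user_id: str
    # metaphor name -> profile parameters for that metaphor
    metaphors: dict = field(default_factory=dict)

    def profile(self, metaphor):
        """Return (creating if necessary) the profile for one metaphor."""
        return self.metaphors.setdefault(metaphor, {})

    def shareable_with(self, environment_metaphor):
        """Only data under the environment's own metaphor is shared."""
        return self.metaphors.get(environment_metaphor, {})


vi = VisualIdentity("alice")
vi.profile("body")["waist"] = 70            # usable by VDR and gaming domains
vi.profile("face")["eye_colour"] = "brown"  # usable by makeover and messaging
```

An environment declaring the body metaphor would receive only the body profile; a cartoon-metaphor environment with no stored data would receive nothing and would query the user.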
Visual Identity Profile A Visual Identity Profile refers to a set of descriptors or questions which, when interpreted by a target MVM-compliant application, generates a graphical image corresponding to the descriptor values or answers supplied by a user. It enables the user to interact with the application content. Afterwards, other applications using the same profile data will automatically allow users to see their visual selves.
Users with existing Visual Identity Profiles will be queried for additional descriptor values if necessary, while new users will be asked to create a profile and provide all question set values necessary for the specific applications' needs. Given the user's set of answers, the application will generate the visual image matching the profile.
Mobility In the preferred embodiment, several application domains can share the same visual identity metaphor. This implies that Visual Identity Profile descriptors associated with the metaphor are employed to ensure a common thread of resemblance across all applications that use the metaphor, regardless of the domain.
For example, in the Virtual Dressing Room (VDR) application domain (see the example of Fig. 3), the visual identity metaphor is the user's body morphology. The same body metaphor can be used across VDR applications, but also in other domains such as gaming.
A user's body parameters registered in the user's Visual Identity Profile are interpreted by a VDR application to render an image 18 of the user's body. The same Visual Identity Profile parameters can be accepted by a gaming application, allowing the user to see herself or himself in action. As a result, mobility can occur across domains.
Resemblance Actual resemblance to the user is an abstraction controlled both by the information given by the user as well as by the domain applications themselves. The resulting image for a given profile can vary from application to application depending on the rendering engines available to the application as well as on the intended use.
In Figure 2, application 14e provides fashion advice based on the user's body morphology, while application 14a allows the user to try on garments. The two images are different: a silhouette is sufficient for the purpose of application 14e, while a fully rendered body is necessary for application 14a. The same network-centric Visual Identity Profile is interpreted by both applications.
An application can also generate multiple images based on the same profile and on the intended use. For example, for the weight loss program application, different images can correspond to the same Visual Identity Profile. In this case, the images stay local to the application and will be re-generated the next time the user logs in using his or her Visual Identity Profile.
An instant messaging application can use this multiple image feature to provide its users with a complete set of personalized facial expressions, body postures and gestures to define a more believable visual identity.
Resemblance is thus an abstraction, which is not limited to a single visual representation of the user. The important characteristic of MVM VI is that the user controls his or her visual identity through the same Visual Identity Profile account.
Visual Identity Information Server UVIS provides a UVIS client component 24 including an application-programming interface (API). This enables applications to communicate with other UVIS network-centric components in order to achieve the functionality of visual identity mobility in a seamless manner.
At the core of the MVM technology implementation resides the Visual Identity Information Server (VIIS).
As illustrated in Figure 7, an application provider registers a UVIS-compliant application with VIIS prior to offering the service online. In this process, re-use of existing visual identity descriptors is identified, while additional ones are defined in order to account for the application's specific needs.
VIIS assigns each application a unique identifier, which consequently enables it to know exactly which visual identity metaphor to apply in what context.
The conceptual model of VIIS is described as follows. The site owner configures the information on the left, while the user fills in the information on the right.
The site-specific interpretation of the VIP occurs when a user signs in to the site, at which point the relationships between right and left are enforced and a subset of the information is selected from the central repository and sent to the site for rendering. For example, only the Answers and AnswerSets in the user's Sign-in that match the site's Questions and QuestionSets are selected. All of the other Answers attached to the user's Sign-in are filtered out.
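This sign-in filtering amounts to intersecting the site's registered Questions with the user's stored Answers. The flat dictionaries below are a deliberately simplified stand-in for the Question/QuestionSet and Answer/AnswerSet structures of the conceptual model.

```python
# Sketch of the VIIS sign-in filtering: of all Answers attached to the
# user's Sign-in, only those matching the Questions registered by the
# site owner are selected from the central repository and sent to the
# site for rendering; everything else is filtered out.

def select_for_site(site_questions, user_answers):
    """site_questions: set of question ids registered by the site owner.
    user_answers: mapping of question id -> answer held centrally."""
    return {q: a for q, a in user_answers.items() if q in site_questions}

user_answers = {"height": 170, "waist": 70, "eye_colour": "brown"}
vdr_questions = {"height", "waist", "hip"}   # configured by the site owner
sent = select_for_site(vdr_questions, user_answers)
```

Note that "eye_colour" is filtered out because the site never registered that Question, and "hip" simply produces no entry because the user has not yet answered it.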
Visual Identity Applications The image rendering capabilities of an application can vary from domain to domain as well as from application to application. In the first case, a body image can be static and needs high-quality imaging in the VDR context, since the intent is to be photo-realistic when selling apparel online. On the other hand, gaming requires real-time 3D, where image quality is not as important. In the second case, a specific VDR application can render bodies for virtual try-on as 2D images, while another VDR application can render bodies as 3D images in order to create an immersive user experience. In all cases, the resulting graphics are modulated by the same set of visual identity parameters stored in the Visual Identity Profile account.
The Visual Identity Profile descriptor set is designed to expand as new domains and applications are made available to the MVM Network. The MVM VI specification provides a way to define these visual identity domains and parameters so that specific Web applications and services can render the users' visual identities in a manner appropriate to each and every application's business logic.
This flexibility to be both image and application agnostic is easily understood from the typical structure of a visual identity application. Each application includes an MVM-compliant client that allows it to communicate directly with VIIS in retrieving and updating Visual Identity Profiles. The image-rendering engine contained within the application employs its own library of visual assets to construct and display the visual identity image matching the Visual Identity Profiles of its users.
The MVM VI specification also provides a means of registering visual assets with VIIS. The impact of this feature is to allow VI images generated by one application to be shared by other MVM-compliant applications. In Figure 2, the MVM Face Mapping application 14d generates 2D or 3D images of the user's face. Once stored in VIIS, other applications such as games can make use of this user asset.
The MVM Virtual Dressing Room

The MVM Virtual Dressing Room (VDR) is an example of a technology based on the MVM VI engine. Online apparel vendors create custom versions of VDR to provide their customers with the service of trying on virtual garments online. In this domain, a user's visual identity, or Virtual Model, is designed to represent the user's body. The Visual Identity Profile for the virtual model contains parameters such as weight, height, shoulder size, waist size, and hip size, which establish the user's morphology. Once a Virtual Model is created by the VDR, users benefit by visualizing how specific garments and outfits fit their morphology.
The MVM Network

Network-centric VI engines allow a multitude of independent applications to share the same VI user account. As a result, users need to remember only a single sign-in name and password, and need to manage and disclose their information only once across many applications. The MVM Network is composed of all the Web applications that use the MVM VI engine.

Rather than having to re-create their Virtual Model from application to application, users create a Model the first time they use an MVM-enabled application. The Virtual Identity Profile associated with the model is stored in a network-centric repository, which is then accessed by MVM Network application providers on behalf of the user, and only with the user's permission. At any time, users may choose to share the information needed for a particular product or service with their preferred MVM-enabled Web site. MVM securely protects the users' privacy online. In Figure 3, this mobility of the Virtual Model is demonstrated for the Virtual Dressing Room application. Within each apparel site, a user will have the same visual identity derived from their VIP account with the MVM Network.
Site Development Kit (SDK)

Finally, as part of the MVM VI implementation, a Site Development Kit (SDK) is provided which eases the integration of MVM visual identity business logic (BL), as well as domain-dependent business logic, into a web site application. It is based on the recently introduced "tag library" concept. It allows greater flexibility in the user interface (UI), minimizes coupling between UI and business logic, and permits web application developers to use the MVM core functionalities without instantiating a single programming class. This is accomplished using the MVM Java Server Page Tag Library Application Programming Interface (TL-API).
The basic concept behind this TL-API is the use of a series of calls to the application business logic to assemble the required information for a given page. Each application BL method call returns, when required, a fragment of information formatted as an XML document. Each of these fragments is then appended to the page-global XML document (often referred to as the master document). XSL is then used to create and format the end result in the client language (HTML, for example). Alternative client languages can also be output for more specialized clients such as cell phones, web TV, etc.
The client makes a request to a JSP page that contains an SDK master tag. Between the "master" tags, other SDK tags are called in order to access application BL services. These services return XML fragments that are maintained in the master XML document (managed by the SDK). Once all business logic requests are done, the resulting XML is processed with XSL to generate the HTML that is sent back to the client.
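The assemble-then-transform cycle described above can be sketched with the standard javax.xml APIs. This is a minimal, self-contained illustration; the element names and the trivial stylesheet are invented for the example and are not part of the MVM specification.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class MasterDocumentSketch {

    // Assemble a page-global "master" XML document from BL fragments,
    // then render it to the client language (HTML) with XSL.
    public static String render() throws Exception {
        Document master = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element page = master.createElement("page");
        master.appendChild(page);

        // Each business-logic call would return an XML fragment;
        // here a single hypothetical <user> fragment is appended.
        Element userFragment = master.createElement("user");
        userFragment.setTextContent("Alice");
        page.appendChild(userFragment);

        // A toy stylesheet standing in for the site's real XSL.
        String xsl =
              "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='/page'>"
            + "<html><body><p><xsl:value-of select='user'/></p></body></html>"
            + "</xsl:template></xsl:stylesheet>";

        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(master), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(render());
    }
}
```

The same pattern applies regardless of the client language: only the stylesheet changes when targeting cell phones or web TV instead of HTML.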
The current implementation combines the ID and Likeness Dataset Storage device, the Likeness Data Retrieval + Filter device, and the Likeness Data Export device into a central server called VMIS. VMIS uses a central Oracle database accessed through a Java interface local to the Likeness Editor device. The local VMIS interface, called VMIS Client, communicates across the Internet using the HTTP protocol. WebObjects by Apple is used as the application server, and Apache is used as the web server.

VMIS Client uses a class called MobilityServer to establish communication with the remote Dataset Storage system. The constructor of the class, as documented in the VMIS Client JavaDoc, is:
MobilityServer Class Constructor

public MobilityServer(java.lang.String newServerURL,
                      java.lang.String newSiteName,
                      java.lang.String newSitePassword)
       throws MobilityError, InvalidLogin, VersionMismatch,
              InvalidDatabaseVersion, CommunicationError

Constructor that takes the URL of the mobility server and the application site address.
Parameters:
newServerURL - server URL
newSiteName - site name
newSitePassword - site password

Once the class is successfully instantiated and a communication channel to VMIS is open, the method getSignIn() is called to obtain an object that represents a user's set of parameters. VMIS uses its knowledge of the local site to filter the user's dataset and return only the information pertinent to the site. The SignIn object returned by the method getSignIn() contains a filtered version of the user's profile.
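The call pattern can be sketched as follows. The MobilityServer and SignIn signatures follow the JavaDoc excerpt above, but the class bodies here are placeholder stubs so the sketch compiles on its own; the real implementations live in the VMIS Client library, and the filtering shown is a stand-in for VMIS's server-side logic.

```java
import java.util.Hashtable;

// Placeholder for the SignIn descriptor returned by VMIS.
class SignIn {
    private final String profile;
    SignIn(String profile) { this.profile = profile; }
    String getProfile() { return profile; }
}

// Stub whose constructor and getSignIn signature mirror the JavaDoc.
class MobilityServer {
    private final String siteName;

    public MobilityServer(String newServerURL, String newSiteName,
                          String newSitePassword) {
        this.siteName = newSiteName;
    }

    // Returns a SignIn filtered for the connected site (stubbed here).
    public SignIn getSignIn(String loginName,
                            Hashtable<String, String> passwords) {
        return new SignIn(loginName + "@" + siteName + " (filtered)");
    }
}

public class VmisClientSketch {
    public static void main(String[] args) {
        // Open a channel to the mobility server for this site...
        MobilityServer server = new MobilityServer(
                "http://vmis.example.com", "apparelSite", "secret");

        // ...then log a user in and receive the site-filtered profile.
        Hashtable<String, String> passwords = new Hashtable<>();
        passwords.put("user", "alicePassword");
        SignIn signIn = server.getSignIn("alice", passwords);
        System.out.println(signIn.getProfile());
    }
}
```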
GetSignIn Method of the MobilityServer Class

public SignIn getSignIn(java.lang.String loginName,
                        java.util.Hashtable passwords)
       throws MobilityError, InvalidLogin, InvalidSession,
              InvalidDatabaseVersion, CommunicationError

This method implements a login of a user account and returns its descriptor.
The passwords parameter is a dictionary whose keys identify the way to access the password and whose contents are the password values. The group of passwords provided should have a combined index greater than or equal to 100% for the login to be successful.
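The 100% threshold can be illustrated as follows. The idea that each password source carries a percentage weight that is summed against the threshold is an assumption made for this sketch; the specific weights and key names are invented.

```java
import java.util.List;
import java.util.Map;

public class PasswordIndexSketch {

    // Assumed scheme: each password key carries a percentage weight;
    // login succeeds when the provided keys sum to at least 100%.
    static boolean loginAllowed(Map<String, Integer> weights,
                                Iterable<String> providedKeys) {
        int index = 0;
        for (String key : providedKeys) {
            index += weights.getOrDefault(key, 0);
        }
        return index >= 100;
    }

    public static void main(String[] args) {
        // Hypothetical weights: a site password alone is not enough,
        // but combined with a user password the index reaches 100%.
        Map<String, Integer> weights =
                Map.of("sitePassword", 40, "userPassword", 60);

        System.out.println(loginAllowed(weights, List.of("sitePassword")));
        System.out.println(loginAllowed(weights, List.of("sitePassword", "userPassword")));
    }
}
```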
Parameters:
loginName - username
passwords - password dictionary

Returns:
The object SignIn related to the user

Throws:
MobilityError, InvalidLogin, InvalidSession

For VMIS to be able to filter the user's dataset, the owner of the site (owner of the environment of the Likeness Editor) must input the information into VMIS.
The conceptual model of VMIS is as follows:
The Site owner configures the information on the left, while the User fills in the information on the right. The filtering occurs when a user signs in to a specific site, at which point the relationships between right and left are enforced and a subset of the information is selected from the central repository and sent to the site. For example, only the Answers and AnswerSets in the user's Sign-in that match the site's Questions and QuestionSets are selected. All of the other Answers attached to the User's Sign-in are filtered out.
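The left/right matching described above amounts to intersecting the user's answers with the site's configured questions. The following sketch shows that selection; the question names and data shapes are invented for illustration and do not reflect the actual VMIS schema.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class SignInFilterSketch {

    // Keep only the user's answers whose question is configured for the
    // site; everything else is filtered out of the sign-in sent to it.
    static Map<String, String> filterAnswers(Map<String, String> userAnswers,
                                             Set<String> siteQuestions) {
        Map<String, String> filtered = new HashMap<>();
        for (Map.Entry<String, String> answer : userAnswers.entrySet()) {
            if (siteQuestions.contains(answer.getKey())) {
                filtered.put(answer.getKey(), answer.getValue());
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        // The user's full central-repository answers...
        Map<String, String> userAnswers = Map.of(
                "BodyShape", "hourglass",
                "HairColor", "brown",
                "ShoeSize", "8");
        // ...against a site that only asks BodyShape and HairColor:
        Set<String> siteQuestions = Set.of("BodyShape", "HairColor");

        // ShoeSize never leaves the central repository for this site.
        System.out.println(filterAnswers(userAnswers, siteQuestions));
    }
}
```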
An example of the information available through the methods of the SignIn object, filtered for a specific site, will be described. In this case the site has 2 domains: UP (User) and VM (Model). The VM domain has 3 question sets, each with one question. The A (Appearance) Question Set also shares the BodyShape question with the F (Fashion Advice) Question Set.
VMIS also handles the Likeness Data Export device.
Likeness Data Retrieval & Filter

Data is filtered and sent to the Likeness Editor. The filter is a function of the likeness data and the environment of the Imaging system. In the current implementation, this device is combined with the Storage system.
Likeness Data Export

Data is exported for use with a remote system. In the current implementation, the data is serialized into a binary stream using Java classes.
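Serializing the dataset into a binary stream with Java classes looks like the following round trip. The LikenessDataset class and its fields are invented for this sketch; the real export serializes the actual likeness classes the same way, via java.io.Serializable.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class LikenessExportSketch {

    // Invented stand-in for the real likeness dataset classes.
    static class LikenessDataset implements Serializable {
        private static final long serialVersionUID = 1L;
        int heightCm;
        int weightKg;
        String bodyShape;

        LikenessDataset(int heightCm, int weightKg, String bodyShape) {
            this.heightCm = heightCm;
            this.weightKg = weightKg;
            this.bodyShape = bodyShape;
        }
    }

    // Export to a binary stream, then import as the remote system would.
    static LikenessDataset roundTrip(LikenessDataset original) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return (LikenessDataset) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        LikenessDataset imported =
                roundTrip(new LikenessDataset(170, 60, "hourglass"));
        System.out.println(imported.bodyShape + " " + imported.heightCm);
    }
}
```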
Likeness Editor (3D Model Editor)

Mappers, questionnaire

Current Implementation

The current implementation of the Likeness Editor is called a Fashion Server. It combines the Likeness Editor and the Rendering Engine. A user signs in to the application using the Identity Plug-in. The Identity Plug-in uses VMIS Client to retrieve and filter the user's likeness dataset. The application then forwards the likeness data to the 2D Visualisation Plug-in. The 2D Visualisation Plug-in uses a Mapper to map the likeness data to a series of parameters that it can then use to reconstruct the likeness of the user with the available local resources and the Rendering Engine.
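A Mapper's job, as described above, is to translate portable likeness data into parameters the local Rendering Engine understands. The following sketch shows one such translation; the target parameter names and the bucketing rules are entirely invented for illustration, since each rendering engine defines its own.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class MapperSketch {

    // Map portable likeness data (height, weight) onto hypothetical
    // renderer parameters: a mesh scale factor and a build category.
    static Map<String, String> mapLikeness(int heightCm, int weightKg) {
        Map<String, String> renderParams = new HashMap<>();
        // Scale the base mesh relative to an assumed 170 cm reference body.
        renderParams.put("meshScale",
                String.format(Locale.ROOT, "%.2f", heightCm / 170.0));
        // Bucket weight into the asset categories this engine ships with.
        renderParams.put("build",
                weightKg < 55 ? "slim" : weightKg < 80 ? "medium" : "broad");
        return renderParams;
    }

    public static void main(String[] args) {
        System.out.println(mapLikeness(170, 60));
    }
}
```

Because the mapping is local to each application, the same central likeness dataset can drive very different rendering engines, which is the point of the Mapper layer.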
If the user is not satisfied with their Likeness, they can modify it. The user can modify their likeness dataset using the Identity Plug-in. Each time data is changed by the user, it is re-forwarded to the 2D Visualisation Plug-in, where it is re-mapped and re-displayed. Once the user is satisfied, the dataset is saved back on VMIS using the Identity Plug-in.

The API of the Identity Plug-in is as follows:

Image Generator Plug-In API
The following methods available in the Image Generation Plug-in (25) are used by the Site Application (14) to render a Likeness Image (18) as illustrated in Figure 1.
1. showVM(IMAGE_SIZE): render the image of the current VM to be displayed and return the image name
2. turnVM(DIRECTION): change the view ID of the current VM
3. changeBackground(IMAGEFILE_NAME): change the background image

Identity Plug-In API
The following methods available in the Identity Plug-In (22) are used by the Site Application (14) to manage the user's likeness dataset and virtual identity.
Application related:
1. getSiteInfo(): get information about the retailer site
2. getDomains(): return the list of the domains available for the connected retailer site
3. getQuestionnaire(domain, questionSet): return the XML fragment of the chosen question set name.
User related:
4. signIn(signInName, password): get the user object from VMIS
5. modifySignIn(QUESTION_SET): update the modifications of the answers of the user
6. createVM(): create a "guest" signIn with all VM profiles (Appearance, sizeSuggestion, Fashion) provided by VMIS for the chosen domain
7. getVM(): get the current VM in the session after a signIn or a create
8. createAnswerSet(VMName, QUESTION_SET): create an answer set from a QUESTION_SET (Appearance, sizeSuggestion, Fashion)
9. getAnswerSetsStatus(): get information about which AnswerSets (filled or not filled) the signed-in or guest user has filled
10. getAnswers(VMName, QUESTION_SET): populate an Answer Set
11. modifyAnswerSet(VMName, QUESTION_SET): update the modifications of the answers of a VM Answer Set
12. updateAnswerSet(VMName, QUESTION_SET): update the modifications of the answers of a VM Answer Set after a missingQuestion event
13. saveAnswerSet(QUESTION_SET): save one answerSet of the current VM
14. getVMInfo(): get information about the current VM
15. saveProfile(): save all the answerSets of the current VM
16. signOut(): reset all the user info except the dataCollection info from the current user session
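A typical session implied by the API above follows a sign-in, fetch, edit, save, sign-out sequence. The sketch below uses a stub plug-in that merely records the order of calls; the method names follow the list above, while the bodies and the session shown are placeholders, not the real plug-in implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class IdentityFlowSketch {

    // Stub Identity Plug-in: records which API methods were invoked.
    static class IdentityPlugIn {
        final List<String> calls = new ArrayList<>();

        void signIn(String name, String password) { calls.add("signIn"); }
        void getVM()                              { calls.add("getVM"); }
        void modifyAnswerSet(String vm, String qs){ calls.add("modifyAnswerSet"); }
        void saveProfile()                        { calls.add("saveProfile"); }
        void signOut()                            { calls.add("signOut"); }
    }

    public static void main(String[] args) {
        IdentityPlugIn identity = new IdentityPlugIn();

        // Sign in, fetch the current VM, edit an answer set until
        // satisfied, save everything back to VMIS, then sign out.
        identity.signIn("alice", "secret");
        identity.getVM();
        identity.modifyAnswerSet("aliceVM", "Appearance");
        identity.saveProfile();
        identity.signOut();

        System.out.println(identity.calls);
    }
}
```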
The Fashion Server application uses XML/XSL technology to build a compelling user experience. To enable developers of the user experience to have a simple and easy-to-use entry point to the plug-ins, we developed the Fashion Tag Libraries. With this approach, developers can build many different user experiences and share a common set of services.
In the preferred embodiment, the system would function as follows:
The user would create their likeness using the Likeness Editor
The user would go to a site
The user would identify themselves
The user would retrieve their likeness data
The system would show the rendered likeness to the user
The user would accept or refuse the likeness
The likeness dataset would be updated

Mobility Scenarios

Case 1: A model going to a site with a more recent render engine
Case 2: A model going to a site with an older render engine
Case 3: A model with parameters that are missing
Case 4: A model with parameters out of range
Case 5: A model with unsupported parameters

3D Model Imaging

The current implementation is Compositing - Layering (+alpha). Any off-the-shelf compositing software could do the job. Our compositor is in C++ and has minimum functionality to maximize speed.
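Layering with alpha reduces to the standard "over" operator applied per pixel and per channel. The actual compositor is described as minimal C++; the sketch below is a Java illustration of the blend itself, with the channel values chosen arbitrarily for the example.

```java
public class AlphaCompositeSketch {

    // Standard "over" operator for one channel: blend a foreground value
    // onto a background value using the foreground's alpha (0.0 - 1.0).
    static double over(double fg, double fgAlpha, double bg) {
        return fg * fgAlpha + bg * (1.0 - fgAlpha);
    }

    public static void main(String[] args) {
        // Layer a half-transparent white pixel (255, alpha 0.5)
        // over an opaque black background (0): the result is mid-grey.
        double result = over(255.0, 0.5, 0.0);
        System.out.println(result); // 127.5
    }
}
```

A layering compositor applies this operator repeatedly, back to front, for each layer of the model image (body, garments, background).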
Other possible implementations are:
3D rendering
Compositing - Z-buffer (+alpha)
Image selection (2D sketches)
Low resolution bitmaps (cell phones)
Vectors (NAPLPS)
ASCII art
Hologram

Model Display

Web site
Cell phone
Kiosk

Claims (20)

What is claimed is:
1. A method of providing an image-based likeness of a user in a plurality of applications having access to a repository of personal likeness data, said applications each comprising an image generating engine having rendering characteristics which result in a different rendered image based on the same said likeness data for at least some instances of said likeness data, the method comprising the steps of, at each of said plurality of applications:
recognizing said user and retrieving from said repository said likeness data for said user;
rendering an image corresponding to said likeness data retrieved using said engine according to said rendering characteristics;
providing a display of said rendered image to said user at an interface of said application;
and providing said user with an option at said interface to edit said likeness data, wherein said user edits said likeness data when said rendering characteristics result in said rendered image at said application being unsatisfactory to said user.
2. The method as claimed in claim 1, wherein said rendering characteristics comprise image characteristics attributed to likeness data parameters unique to only some of said applications.
3. The method as claimed in claim 2, wherein said step of rendering said likeness data comprises sending a request to a central remote server to render said likeness data and obtain said rendered image.
4. The method as claimed in claim 1, further comprising a step of storing edits to said likeness data in said repository.
5. The method as claimed in claim 1, further comprising a step of storing edits to said likeness data for use with said application only.
6. The method as claimed in claim 5, wherein said step of rendering comprises modifying said likeness data by previously stored edit data, and rendering said image corresponding to said likeness data as modified by said stored edits.
7. The method as claimed in claim 1, wherein said step of providing said user with an option comprises providing said user with a selection between said option and an introductory portion of a session in said application.
8. The method as claimed in claim 1, wherein said step of providing a display comprises including said rendered image as part of an introductory portion of a session in said application.
9. The method as claimed in claim 1, wherein said repository is stored at a central virtual identity server.
10. The method as claimed in claim 1, wherein said repository is stored in a cookie for each said user.
11. The method as claimed in claim 1, wherein said plurality of applications access said repository via a telecommunications network.
12. The method as claimed in claim 1, wherein said likeness data comprises image data.
13. The method as claimed in claim 12, wherein said step of rendering comprises face mapping, wherein said image data is mapped onto a face of said rendered image.
14. A method of providing a virtual identity dataset to a plurality of applications, comprising the steps of:
providing a central virtual identity server for said applications, said server containing a virtual identity data repository for a plurality of users;
configuring a list of elements defining a desired virtual identity dataset for each of said applications;
recognizing a user and requesting said server to retrieve said virtual identity dataset for use with a selected one of said applications and said user;
generating said virtual identity dataset using data stored in said repository and in response to said configured list of elements and to said selected one of said applications and said user, and, when required, machine interpretable instructions for obtaining additional information defining one or more of said elements for which said repository has no data for said user;
using said instructions in a user interface to obtain said additional information from said user, and to complete said virtual identity dataset; and providing said virtual identity dataset to said selected application.
15. The method as claimed in claim 14, wherein said compiling comprises sending said virtual identity dataset and said instructions to an application server of said selected one of said applications.
16. The method as claimed in claim 15, wherein said virtual identity dataset and said instructions are provided in an object executable by said application server.
17. The method as claimed in claim 14, wherein said complete virtual identity dataset for said user is stored in said repository.
18. The method as claimed in claim 14, wherein said additional information is stored for use locally within said selected application.
19. The method as claimed in claim 14, wherein said user is prompted to define whether said additional information provided for said selected application may be shared by other ones of said plurality of applications.
20. The method as claimed in claim 19, wherein said user is prompted to define whether a field of said additional information may be shared by other ones of said plurality of applications.
CA 2373459 2001-10-18 2001-10-18 Method and system for sharing the visual likeness of a user among a multitude of applications or services Abandoned CA2373459A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA0101546 2001-10-18

Publications (1)

Publication Number Publication Date
CA2373459A1 true CA2373459A1 (en) 2003-04-18

Family

ID=4143174

Family Applications (1)

Application Number Title Priority Date Filing Date
CA 2373459 Abandoned CA2373459A1 (en) 2001-10-18 2001-10-18 Method and system for sharing the visual likeness of a user among a multitude of applications or services

Country Status (1)

Country Link
CA (1) CA2373459A1 (en)


Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued