WO2015130309A1 - Customizable profile to modify an identified feature in video feed - Google Patents


Info

Publication number
WO2015130309A1
Authority
WO
WIPO (PCT)
Prior art keywords
video feed
identified feature
identified
feature
computing device
Application number
PCT/US2014/019524
Other languages
French (fr)
Inventor
Chi So
Jeff Johnson
Juan Martinez
Kent E. Biggs
Nam Nguyen
Original Assignee
Hewlett-Packard Development Company, L.P.
Application filed by Hewlett-Packard Development Company, L.P. filed Critical Hewlett-Packard Development Company, L.P.
Priority to PCT/US2014/019524 priority Critical patent/WO2015130309A1/en
Publication of WO2015130309A1 publication Critical patent/WO2015130309A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/14: Systems for two-way working
    • H04N 7/141: Systems for two-way working between two video terminals, e.g. videophone
    • H04N 7/147: Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
    • H04N 7/15: Conference systems
    • H04N 7/157: Conference systems defining a virtual conference space and using avatars or agents

Definitions

  • Video conferencing is a type of visual collaboration where two or more users may communicate simultaneously over two-way video and audio transmissions. Video conferencing may be used in many environments and situations by bringing people together and reducing travel and environmental impact.
  • FIG. 1 is a block diagram of an example computing device including a camera to capture a video feed and processor to modify an identified feature within the video feed in accordance with a customizable profile;
  • FIGS. 2A-2C are illustrations of example user interfaces for a creation of a customizable profile;
  • FIG. 3 is a flowchart of an example method to create a customizable profile and modify an identified feature within a video feed in accordance with the customizable profile for output;
  • FIG. 4 is a flowchart of an example method to create a customizable profile and modify an identified feature by removing a headset from video feed;
  • FIG. 5 is a flowchart of an example method to create a customizable profile for modifying an identified feature and output the modification of the identified feature and the video feed;
  • FIG. 6 is a block diagram of an example computing device with a processor to execute instructions in a machine-readable storage medium for creating a customizable profile and processing video feed to modify a feature within the video feed including one of a headset removal, body modification, and/or other identified feature.
  • Video conferencing is used in various environments and situations. In many of these situations, a user may want to present themselves in an aesthetically pleasing manner. There may be pre-defined profiles which may be used to represent an appearance of the user; however, these may not present the user in an aesthetically pleasing manner, as the pre-defined profiles are more comedic and cartoonish in nature. Additionally, these pre-defined profiles may not provide a fine granularity to enable the user to modify their appearance. Further, the user may manually manipulate their appearance in images, but this may become infeasible in live video.
  • examples disclosed herein create a customizable profile in which users may change their appearance to appear in a more aesthetically pleasing manner.
  • the examples disclose a computing device including a camera to capture a video feed for processing and a processor to create the customizable profile.
  • the customizable profile indicates which particular features to modify within the video feed.
  • the processor may utilize facial recognition to locate a position of these particular features. Locating the positions of the particular features within the video feed enables the processor to modify the identified features in accordance with the customizable profile.
  • the customizable profile is a type of template which identifies particular features within the video feed which should be modified. In this manner, the user may customize the profile to determine how they may want to appear. Additionally, creating the customizable profile enables the user much control and granularity for adjusting particular features of their appearance.
  • the video feed is captured and modified for output in real-time.
  • This enables the modified video feed to be output in real-time relative to the camera capturing the video feed.
  • this implementation provides a live video manipulation, thus allowing customization of the appearance of the user. This enables the user to present themselves in a live video conference in a manner they deem aesthetically pleasing.
  • the modification of the identified feature includes removing a headset from the user.
  • the computing device utilizes facial recognition to determine boundaries of the user. Additionally, the computing device may use depth perception in a three-dimensional video to separate the background of the video from the user. This enables the computing device to fill in the headset removal with the user's facial features and the identified background. This enables a seamless presentation of the user in the video conferencing environment without the visual distraction of the headset.
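The depth-based separation of user from background described above can be sketched as follows. This is a minimal illustrative example, not the patent's implementation; the function name, depth values, and threshold are assumptions.

```python
# Hypothetical sketch: separating a user from the background using a
# per-pixel depth map from a three-dimensional camera. Pixels nearer than
# a threshold are treated as the user (foreground); the rest as background.

def segment_foreground(depth_map, threshold):
    """Return a boolean mask: True where the pixel belongs to the user."""
    return [[depth < threshold for depth in row] for row in depth_map]

# Tiny 3x3 depth map in meters: the user sits about 1 m away, wall at ~3 m.
depth = [
    [3.0, 1.0, 3.1],
    [2.9, 0.9, 1.1],
    [3.0, 3.2, 3.1],
]
mask = segment_foreground(depth, threshold=2.0)
# mask marks the near (user) pixels True and the far (background) pixels False
```

A real system would derive the threshold from the observed depth distribution rather than a fixed constant, but the mask produced this way is what lets the device fill removed regions with either background or facial content.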
  • FIG. 1 is a block diagram of an example computing device 102 including a processor 104 and a camera 110.
  • the processor 104 creates a customizable profile which describes what particular features should be modified within a video feed 112.
  • the camera 110 captures the video feed for the processor 104 to modify an identified feature within the video feed 112 at module 108.
  • the computing device 102 may output the modified video feed.
  • the video feed 112 is captured, modified, and output in real-time.
  • the modified video feed may be output in real-time to the camera 110 capturing the video feed 112. This enables a live video manipulation for a user to determine how they may want to appear.
  • the computing device 102 may include a display (not illustrated) in which to output the modified video feed 112.
  • the computing device 102 may present a user interface on the display for the user to select which particular features to modify within the video feed 112.
  • the computing device 102 is an electronic device including the processor 104 and the camera 110 and as such may include a target device, mobile device, client device, personal computer, desktop computer, laptop, tablet, video game console, or other type of electronic device capable of creating the customizable profile at module 106 and modifying the identified feature within the video feed 112 at module 108.
  • the processor 104 creates the customizable profile and modifies the identified features within the video feed 112 in accordance with the customizable profile.
  • implementations of the processor 104 include a controller, circuit logic, a microchip, chipset, electronic circuit, microprocessor, semiconductor, microcontroller, central processing unit (CPU), application-specific integrated circuit (ASIC), embedded controller, or other type of electronic device capable of producing the customizable profile and modifying video feed in accordance with the customizable profile at modules 106 and 108.
  • the computing device 102 creates the customizable profile.
  • the customizable profile is a type of template which identifies the particular features within the video feed 112 for modification or changing. In this manner, the user may customize the profile to determine how they may want to appear.
  • Creating the customizable profile provides a fine granularity for adjusting the particular features in the live video feed.
  • the particular features within the video feed 112 may correspond to attributes of the user which the user may desire to modify.
  • module 106 may include presenting a user interface for the user to input which attributes to modify and an extent of the modification. This enables the user to present themselves in a live video conference in a manner they deem aesthetically pleasing.
  • the modification of the identified feature may include blemish removal, skin tone adjustment, hair color change, clothing change, headset removal, oil shine removal, facial hair removal, teeth whitening, facial symmetry, tattoo removal, tattoo placement, eye whitening, eye brightening, shadow removal, etc.
  • Modifications of the identified feature including skin and hair may include a sampling of pixel values. Upon sampling the pixel values, the processor 104 may determine which pixel values may not be included within a particular range and thus adjust that pixel value to the range of values. This presents a seamless presentation without blurring the identified features upon the modification.
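The range adjustment described above can be sketched as a simple clamp: sample the pixel values for a feature, and pull any outlier (such as a blemish) into the observed range. The function name, grayscale values, and range bounds are illustrative assumptions.

```python
def smooth_to_range(pixels, lo, hi):
    """Clamp each sampled pixel value into [lo, hi], the range observed
    for the feature (e.g. skin), so outliers such as blemishes blend in."""
    return [min(max(p, lo), hi) for p in pixels]

# Grayscale skin samples: 178-200 is the sampled skin range; 90 is a blemish.
samples = [185, 192, 90, 200, 178]
print(smooth_to_range(samples, lo=178, hi=200))  # [185, 192, 178, 200, 178]
```

Clamping rather than averaging is what avoids the blurring the passage mentions: in-range pixels pass through untouched, so detail is preserved everywhere except at the outliers.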
  • the features may be identified through facial recognition technology.
  • the processor 104 may identify a particular user which may correspond to the customizable profile. Additionally using facial recognition, serves as a base for the processor 104 to locate the particular features on the user. For example, using facial recognition, the processor 104 may locate a mouth and as such may determine eyes are located within a particular distance from the mouth. In another implementation, the processor 104 may sample various pixel values of corresponding features on the user. Using the pixel values, the processor 104 may identify hair, eyes, skin, clothes, etc.
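The mouth-to-eyes inference above can be sketched geometrically: once one landmark is found, others are estimated at fixed fractions of the face height. The ratios, function name, and coordinates here are hypothetical, not values from the patent.

```python
def estimate_eye_positions(mouth_x, mouth_y, face_height):
    """Estimate left/right eye positions from a detected mouth, assuming
    the eyes sit a fixed fraction of the face height above the mouth
    (the 0.40 and 0.18 ratios are illustrative assumptions)."""
    eye_y = mouth_y - round(0.40 * face_height)   # eyes above the mouth
    eye_dx = round(0.18 * face_height)            # symmetric about the face center
    return (mouth_x - eye_dx, eye_y), (mouth_x + eye_dx, eye_y)

# Mouth detected at (100, 160) in a face roughly 100 px tall:
left_eye, right_eye = estimate_eye_positions(mouth_x=100, mouth_y=160, face_height=100)
```

A production system would refine these estimates with a landmark detector; the point is only that one located feature bounds the search region for the others, which keeps per-frame processing cheap enough for live video.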
  • the module 106 may include an instruction, set of instructions, process, operation, logic, technique, function, firmware, and/or software executable by the computing device 102 to create the customizable profile for modification of identified features within the video feed 112.
  • the camera 110 is an optical instrument which records video.
  • the camera 110 includes a three-dimensional camera.
  • the camera 110 may include a range camera and/or a stereo camera.
  • the range camera operates to produce a two-dimensional video feed 112 which may show distance points in a scene within the video feed 112 from a specific point.
  • the stereo camera utilizes multiple lenses with separate film frame sensors for each lens, thus allowing the camera 110 to simulate human binocular vision and capture the three-dimensional video feed 112.
  • Although FIG. 1 illustrates the camera 110 as internal to the computing device 102, implementations should not be limited as this is done for illustration purposes.
  • the camera 110 may be in communication with the computing device 102 and as such may be external to the computing device 102.
  • the video feed 112 is a video stream which consists of multiple image frames and as such is captured by the camera 110. Upon capturing the video feed 112, the camera 110 communicates with the computing device 102 to transmit the video stream to the processor 104 for modification.
  • the processor 104 modifies the identified features in accordance with the customizable profile.
  • the customizable profile is the template which the processor 104 utilizes to direct which features to modify within the video feed 112 and how to modify those particular features.
  • the features are identified utilizing facial recognition technology which enables the processor 104 to determine the positions of the features for modification.
  • the module 108 may include an instruction, set of instructions, process, operation, logic, technique, function, firmware, and/or software executable by the computing device 102 to modify the identified feature within the video feed 112 in accordance with the customizable profile at module 106.
  • FIGS. 2A-2C illustrate example user interfaces for creating a customizable profile.
  • the customizable profile may be created from various selections of features on a user interface 208.
  • FIG. 2A illustrates a display 202 associated with a computing device which may present a video feed 204.
  • the computing device may present a frame from the video feed 204 which may be used to create the customizable profile.
  • the video feed 204 depicts facial recognition technology 206 in which a processor identifies features for modification.
  • the facial recognition technology 206 may locate certain features on the user's face, such as eyes, nose, and mouth, as a base for locating the particular features for modification according to the user's selections. For example, upon locating the user's eyes, the processor may locate hair and/or clothes.
  • the user interface 208 illustrates possible features in which a user may select to modify.
  • Possible modifications to identified features may include facial beautification such as skin toning, facial slimming/widening, skin smoothing, blemish removal, facial symmetry, teeth straightening, eye brightening, eye gaze correction, and/or cosmetics application.
  • Each of the identified features may include a sliding bar 210 to adjust modification of the particular feature.
  • skin toning includes the sliding bar from light to dark.
  • the sliding bar 210 enables a granular modification of each of the identified features. This further enables the user to further customize their respective profile. In another implementation, the identified features may include a check or other type of indicator for the customizable profile to apply that particular modification.
  • the eye gaze correction is checked to apply that particular modification in the creation of the customizable profile.
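The sliding bar 210 described above amounts to a blend factor: a 0-100 slider position linearly interpolates each affected pixel between its original value and a target value. This is an illustrative sketch; the function name, slider range, and grayscale values are assumptions.

```python
def apply_slider(original, target, slider):
    """Blend a pixel value toward a target by a 0-100 slider position,
    giving the granular control a sliding bar provides."""
    t = slider / 100.0
    return round(original * (1 - t) + target * t)

# Skin-toning slider toward a light target tone of 230, pixel starts at 170:
assert apply_slider(170, 230, 0) == 170     # slider at minimum: unchanged
assert apply_slider(170, 230, 50) == 200    # halfway blend
assert apply_slider(170, 230, 100) == 230   # slider at maximum: full target
```

Because the blend is linear, moving the slider gives a continuous, predictable change, which is what lets a user fine-tune a feature rather than toggling it on or off.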
  • the user may accept the modifications, cancel the modifications, reset, and/or default to the original captured video feed as indicated with selections at the bottom of FIG. 2A.
  • FIG. 2B illustrates the video feed 204 for tattoo removal as modifying the identified feature in accordance with the customizable profile.
  • the user interface 208 includes the modification of the tattoo removal.
  • the user interface 208 represents the possible features the user may select for modification.
  • the tattoo may include a coordinate position to locate the identified feature.
  • the top display 202 of FIG. 2B depicts the user with a tattoo prior to modification of the feature.
  • the bottom display depicts the user upon the modification of the tattoo removal.
  • FIG. 2C illustrates the video feed 204 to change clothing attire and remove a headset in accordance with the customizable profile.
  • the identified features for modification on the user interface 208 include clothing options and an indicator to remove the headset from the user.
  • the top display 202 of FIG. 2C represents the possible choices for the user to select to create the customizable profile.
  • the bottom display 202 of FIG. 2C represents how the video feed 204 may appear upon modification of the identified features. Selecting the various features from the user interface 208 to modify on the display 202 enables user input to determine how they may want to appear in the video feed. For example, the top display 202 illustrates the user in casual clothing with a headset.
  • the bottom display 202 illustrates the user upon modification of the identified features in accordance with the customizable profile, as the user is wearing a business suit with tie and the headset is removed. Upon removing the headset, the processor may fill in the space with the user's hair or other facial features and the background. This implementation is described in detail in a later figure.
  • FIG. 3 is a flowchart of an example method executable by a computing device to create a customizable profile for modifying an identified feature within video feed in accordance with the customizable profile.
  • the computing device creates the customizable profile which specifies which particular features to modify within the video feed.
  • the particular features are identified using facial recognition. Using facial recognition serves as a base for the computing device to locate positions of the particular features on a user.
  • the computing device may capture a video feed and/or video stream which may be processed to identify the particular features to modify. Capturing the video feed, the computing device modifies the particular features identified from the customizable profile. Upon modifying the video feed, the computing device outputs the modified video feed and the video feed.
  • the computing device processes the video feed for modification of the identified feature and output in real-time. This enables a live video manipulation for a user to determine how they may want to appear.
  • processor 104 as in FIG. 1 may execute operations 302-308 to modify the identified feature according to the customizable profile.
  • the computing device 102 executes operations 302-308.
  • Although FIG. 3 is described as implemented by the computing device 102 as in FIG. 1, it may be executed on other suitable components.
  • FIG. 3 may be implemented in the form of executable instructions on a machine-readable storage medium 604 as in FIG. 6.
  • the computing device creates the customizable profile.
  • the customizable profile directs which particular features should be modified within the video feed according to the user input.
  • operation 302 may include presenting an interface for the user to customize their appearance. Such customization may include skin smoothing, blemish removal, oil shine removal, facial hair removal, teeth whitening, facial symmetry, headset removal, eye whitening, eye brightening, shadow removal, etc.
  • the customizable profile includes particular features which should be modified according to the user preferences.
  • the input may include the user input to direct the profile to the identified feature to modify and how to modify that feature.
  • the identified feature may include clothing, headset removal, skin tone, etc.
  • the computing device processes the video feed and/or video stream.
  • a three-dimensional camera captures the video feed and/or video stream.
  • the computing device may utilize facial recognition to locate a position of the user.
  • the computing device modifies the identified feature within the video feed in accordance with the customizable profile.
  • the customizable profile is a template which indicates which particular features should be modified within the video feed.
  • the modification may include removing a headset on the user within the video feed.
  • the computing device utilizes facial recognition to define the boundaries of the user. The computing device may then identify the background from the user.
  • the computing device may remove the headset from the video feed and fill in gaps created by the removal with the background and facial features.
  • the modification may include changing clothes of the user.
  • the modification may include changing a user's hair, eye color, skin tone, hair color, etc.
  • the computing device outputs the modification of the identified feature and the video feed.
  • the modified video feed is overlaid onto the video feed for output.
  • the computing device utilizes the facial recognition technology to track movement of the user over time. This enables the computing device to align the modified video feed with the video feed to present a seamless presentation.
  • FIG. 4 is a flowchart of an example method executable by a computing device to create a customizable profile and modify an identified feature by removing a headset from video feed.
  • the computing device removes the headset by identifying a background of the video feed and utilizing facial recognition and the identified background to fill in blank space left by the headset removal.
  • the computing device may output the video feed with the headset removal.
  • the headset may be electronically removed in a live video stream so the user appears as if he or she is not wearing the headset.
  • the video feed and/or video stream is captured by a three-dimensional camera that may perceive depth to identify the background from a user.
  • the headset may be isolated, removed and filled in with the background and/or facial features of the user.
  • the computing device utilizes facial recognition to determine boundaries of the user's head and face for determining whether to fill in the gaps with the user's head and/or face and the background.
  • Including the headset removal enables the user to appear in a more aesthetically pleasing manner without the bulky appearance of the headset while still including the benefits of the headset.
  • the headset removal may be performed in real-time so there is no perceivable lag in the video feed and/or video stream.
  • In FIG. 4, references may be made to the components in FIG. 1 and FIGS. 2A-2C to provide contextual examples.
  • processor 104 as in FIG. 1 may execute operations 402-414 to modify the identified feature according to the customizable profile.
  • the computing device 102 executes operations 402-414.
  • Although FIG. 4 is described as implemented by the computing device 102 as in FIG. 1, it may be executed on other suitable components.
  • FIG. 4 may be implemented in the form of executable instructions on a machine-readable storage medium 604 as in FIG. 6.
  • the computing device creates the customizable profile.
  • the customizable profile indicates which particular feature to modify within the video feed.
  • a three-dimensional camera captures the video feed and/or video stream for depth perception to identify the background at operation 410.
  • the computing device scans the user and/or the headset so the physical dimensions and shapes are known; once the background is identified at operation 410, the computing device may remove the headset.
  • Operation 402 may be similar in functionality to operation 302 as in FIG. 3.
  • the computing device processes the video feed and/or video stream.
  • the video feed may be captured at operation 402 with the three-dimensional camera. Processing the video feed and/or video stream, the computing device may modify particular identified features of the user within the video stream and/or video feed. Operation 404 may be similar in functionality to operation 304 as in FIG. 3.
  • the computing device modifies the identified feature in the video feed in accordance with the customizable profile created at operation 402.
  • the user may prefer to remove the headset in the video stream, thus an interface may include input indicating to remove the headset.
  • the modification of the identified feature includes the headset removal from the video feed as at operation 408-412.
  • the modification of the identified feature includes adjusting a pixel value of the particular identified feature. This implementation is described in detail in the next figure. Operation 406 may be similar in functionality to operation 306 as in FIG. 3.
  • the computing device removes the headset in accordance with the customizable profile.
  • the computing device may utilize facial recognition to determine the boundaries of a person's face, hair, body, etc. Determining the boundaries enables the computing device to isolate the headset from the person.
  • the computing device identifies the background in the video feed. Using the three-dimensional camera, the computing device may perceive the depth of the user and identify the background from the depth. This enables the computing device to further isolate the headset from the person and the background for removal at operation 412.
  • the computing device utilizes the identified background at operation 410 and facial recognition to remove the headset from the video feed.
  • the computing device may fill in the space with the identified background and the recognized facial features. Filling in the space may include the computing device sampling pixel values of the background, face, and/or hair. Thus, the space may be adjusted to a particular range of values which correspond to the background, face, and/or hair.
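The fill-in step above can be sketched as a simple neighbor-sampling pass: each pixel left blank by the headset removal takes the average of its adjacent known pixels (background, face, or hair). This is a deliberately minimal stand-in for real inpainting; the function names, frame values, and gap marker are assumptions.

```python
def fill_pixel(neighbors):
    """Fill one removed-headset pixel with the average of its known
    neighbor samples (background, face, or hair values around the gap)."""
    return sum(neighbors) // len(neighbors)

def fill_gap(frame, gap_coords):
    """Fill each gap pixel from its 4-connected non-gap neighbors."""
    gaps = set(gap_coords)
    out = [row[:] for row in frame]
    for (r, c) in gap_coords:
        nbrs = [frame[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                if 0 <= r + dr < len(frame) and 0 <= c + dc < len(frame[0])
                and (r + dr, c + dc) not in gaps]
        if nbrs:
            out[r][c] = fill_pixel(nbrs)
    return out

frame = [
    [50, 50, 50],
    [50,  0, 50],   # 0 marks the pixel left blank by the headset removal
    [50, 50, 50],
]
filled = fill_gap(frame, [(1, 1)])
# the gap pixel takes on the surrounding value, 50
```

Averaging neighbors naturally lands the filled pixel inside the sampled range the passage describes; a larger gap would be filled iteratively from its edges inward.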
  • the computing device outputs the headset removal and the video feed.
  • processing the video feed at operation 404, modifying the identified feature at operation 406, and outputting the modification and the video feed at operation 414 occur in real-time.
  • the modified video feed is overlaid with the captured video feed for output. This implementation is described in detail in the next figure. Operation 414 may be similar in functionality to operation 308 as in FIG. 3.
  • FIG. 5 is a flowchart of an example method executable by a computing device to create a customizable profile for modifying an identified feature and outputting the modification of the identified feature and a video feed.
  • the computing device creates the customizable profile and captures video feed for processing.
  • the computing device modifies an identified feature in accordance with the customizable profile.
  • the modification of the identified feature may include adjusting a pixel value of the identified feature and utilizing facial recognition of a user to track movement. Tracking movement of the user enables positioning of the modification to align accurately with the video feed to present a seamless video feed to another party.
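The movement-tracking alignment described above can be sketched as computing the face-center displacement between frames and shifting the modification's anchor by that displacement. Function names, coordinates, and the anchored-overlay example are illustrative assumptions.

```python
def track_offset(prev_center, cur_center):
    """Displacement of the tracked face center between two frames."""
    return (cur_center[0] - prev_center[0], cur_center[1] - prev_center[1])

def reposition(anchor, offset):
    """Shift a modification's anchor point by the tracked displacement so
    the overlay stays aligned with the moving user."""
    (x, y), (dx, dy) = anchor, offset
    return (x + dx, y + dy)

# The face center moved 5 px right and 2 px down between frames, so a
# tattoo-removal overlay anchored at (40, 80) must move with it.
off = track_offset((100, 120), (105, 122))
assert reposition((40, 80), off) == (45, 82)
```

Re-anchoring per frame is what keeps the modified region locked to the user, so the other party sees one seamless feed rather than a patch drifting off the face.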
  • the computing device may output the modification of the identified feature by overlaying the modification onto the captured video feed for output.
  • Overlaying the modification onto the captured video feed enables the modification of the identified feature to occur during the live feed during a video conference.
  • processor 104 as in FIG. 1 may execute operations 502-514 to modify the identified feature according to the customizable profile.
  • the computing device 102 executes operations 502-514.
  • Although FIG. 5 is described as implemented by the computing device 102 as in FIG. 1, it may be executed on other suitable components.
  • FIG. 5 may be implemented in the form of executable instructions on a machine-readable storage medium 604 as in FIG. 6.
  • the computing device creates the customizable profile.
  • Operation 502 may include presenting an interface to a user for input.
  • the customizable profile includes particular features which should be modified according to the user preferences.
  • the customizable profile directs the computing device to which particular features to modify.
  • the computing device identifies the particular features in which to modify in the video stream by using facial recognition.
  • Creating the customizable profile may include the user preferences according to input.
  • the input may include the user input to direct the profile to the identified feature to modify and how to modify that feature.
  • the identified feature may include clothing, headset removal, skin tone, etc.
  • Creating the customizable profile provides a fine granularity of modifying features of the user in a live video feed.
  • operation 504 processes the live video while the user may input their identified feature preferences to modify.
  • Operation 502 may be similar in functionality to operations 302 and 402 as in FIGS. 3-4.
  • Operation 504 includes capturing the live video feed using a camera. Operation 504 may be similar in functionality to operations 304 and 404 as in FIGS. 3-4.
  • the device modifies the identified feature in the video feed according to the customizable profile.
  • Operation 506 may include identifying the feature based on facial recognition. Using facial recognition, the device may determine where a mouth, eyes, nose, etc. are located on the user. Based on the location of facial recognition, the device may identify the particular feature which should be modified according to the customizable profile.
  • the customizable profile may identify the particular feature in which to modify in relation to the user. Identifying the particular feature allows the user much control in determining what to modify in a video stream. Additionally, the customizable profile may also identify how to modify the particular feature. This may include smoothing out the particular feature by including a range of pixel values in which to modify the feature.
  • the device may adjust the pixel value of the identified feature and utilize facial recognition to track movement for positioning the modification to align with the video feed as at operations 508-510.
  • Operation 506 may be similar in functionality to operations 306 and 406 as in FIGS. 3-4.
  • the device adjusts the pixel value of the identified feature.
  • the customizable profile may include the pixel value range for the particular features.
  • the device adjusts the pixel value to modify the particular feature.
  • the device utilizes facial recognition to track movement for positioning the modification of the identified feature to align with the video feed. In one implementation, the device utilizes facial recognition technology to track the positions of the user over time. This enables the modified stream to properly align with the captured video feed, thus presenting the seamless video.
  • the device outputs the modification of the identified feature and the video feed.
  • the device outputs the modification and the video feed in real-time relative to the capture of the video feed. This provides a dynamic aspect by allowing the user to choose how they present themselves in live video conferencing environments.
  • the modification of the identified feature is overlaid onto the video feed as at operation 514. Operation 512 may be similar in functionality to operations 308 and 414 as in FIGS. 3-4.
  • the device overlays the modified stream onto the captured video feed. This allows the seamless presentation of the two video streams.
  • a component within the device merges both the modified stream and the captured video feed, thus providing modifications of features in real-time to the capture of the video feed.
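The merge of the modified stream with the captured feed described above can be sketched as a masked composite: wherever the modification mask marks a pixel, take the modified value; elsewhere keep the captured frame. The grayscale frames, mask, and function name here are illustrative assumptions.

```python
def overlay(captured, modified, mask):
    """Merge the modified stream onto the captured frame: where the mask
    marks a modified pixel, take the modified value; elsewhere keep the
    captured frame unchanged."""
    return [[m if keep else c
             for c, m, keep in zip(crow, mrow, krow)]
            for crow, mrow, krow in zip(captured, modified, mask)]

captured = [[10, 10], [10, 10]]
modified = [[99, 99], [99, 99]]
mask     = [[True, False], [False, False]]   # only one pixel was modified
print(overlay(captured, modified, mask))     # [[99, 10], [10, 10]]
```

Compositing only the masked region means the bulk of each frame passes through untouched, which is what makes merging the two streams feasible in real-time.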
  • FIG. 6 is a block diagram of computing device 600 with a processor 602 to execute instructions 606-630 within a machine-readable storage medium 604.
  • the computing device 600 with the processor 602 is to create a customizable profile to modify an identified feature in captured video feed and in turn output the modification and the video feed.
  • Although the computing device 600 includes processor 602 and machine-readable storage medium 604, it may also include other components that would be suitable to one skilled in the art.
  • the computing device 600 may include the camera 110 as in FIG. 1.
  • the computing device 600 is an electronic device with the processor 602 capable of executing instructions 606-630, and as such embodiments of the computing device 600 include a computing device, mobile device, client device, personal computer, desktop computer, laptop, tablet, video game console, or other type of electronic device capable of executing instructions 606-630.
  • the instructions 606-630 may be implemented as methods, functions, operations, and other processes implemented as machine-readable instructions stored on the storage medium 604, which may be non-transitory, such as hardware storage devices (e.g., random access memory (RAM), read only memory (ROM), erasable programmable ROM, electrically erasable ROM, hard drives, and flash memory).
  • the processor 602 may fetch, decode, and execute instructions 606-630 to output the modified identified feature based on the customizable profile, accordingly. In one implementation, upon executing instructions 606-608, the processor 602 executes instruction 610 by executing instructions 612-616, 618-622, and/or instructions 624-628. For example, in one implementation upon executing 606-608, the processor may execute instructions 610-616 prior to executing instruction 630. In another implementation, upon executing instructions 606-610, the processor may execute instructions 618-622 prior to executing instruction 630. In a further implementation, upon executing 606-610, the processor may execute instructions 624-628 prior to executing instruction 630.
  • the processor 602 executes instructions 606-610 to: create the customizable profile that identifies the modification of a feature within video feed, wherein the feature is identified by facial recognition; capture the video feed for processing according to the customizable profile; and process the video feed.
  • the processor 602 may execute instructions 612-616 to process the video feed for feature modification, the processor 602 is to: adjust a pixel value of the identified feature; and overlay the modification of the identified feature on the video feed for output.
  • the processor 602 may execute instructions 618-622 to remove a headset from the video feed in processing the video feed, the processor 602 is to: identify a background of the video feed using depth location; and utilize facial recognition of the identified feature and the identified background to remove the headset from the video feed.
  • the processor 602 may execute instructions 624-628 to process the video feed for body modification, the processor 602 is to: utilize body recognition for the identified feature associated with the body; and modify the body feature, accordingly. Finally, the processor may execute instruction 630 to output the modification of the identified feature and the video feed in real-time to the capture of the video feed upon execution of instructions 606-610, 612-616, 618-622, and/or 624-628.
  • the machine-readable storage medium 604 includes instructions 606-630 for the processor 602 to fetch, decode, and execute. In another embodiment, the machine-readable storage medium 604 may be an electronic, magnetic, optical, memory, storage, flash-drive, or other physical device that contains or stores executable instructions.
  • the machine-readable storage medium 604 may include, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage drive, a memory cache, network storage, a Compact Disc Read Only Memory (CDROM) and the like.
  • the machine-readable storage medium 604 may include an application and/or firmware which can be utilized independently and/or in conjunction with the processor 602 to fetch, decode, and/or execute instructions of the machine-readable storage medium 604.
  • the application and/or firmware may be stored on the machine-readable storage medium 604 and/or stored on another location of the computing device 600.
  • examples disclosed herein create a customizable profile in which users may change their appearance to appear in a more aesthetically pleasing manner.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

Examples herein disclose creating a customizable profile to modify an identified feature in a video feed. The feature is identified based on facial recognition. The examples process the video feed and modify the identified feature of the video feed in accordance with the customizable profile.

Description

CUSTOMIZABLE PROFILE TO MODIFY AN IDENTIFIED FEATURE IN VIDEO FEED
BACKGROUND
[0001] Video conferencing is a type of visual collaboration where two or more users may communicate simultaneously over two-way video and audio transmissions. Video conferencing may be used in many environments and situations by bringing people together and reducing travel and environmental impact.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] In the accompanying drawings, like numerals refer to like components or blocks. The following detailed description references the drawings, wherein:
[0003] FIG. 1 is a block diagram of an example computing device including a camera to capture a video feed and processor to modify an identified feature within the video feed in accordance with a customizable profile;
[0004] FIGS. 2A-2C are illustrations of example user interfaces for a creation of a customizable profile;
[0005] FIG. 3 is a flowchart of an example method to create a customizable profile and modify an identified feature within a video feed in accordance with the customizable profile for output;
[0006] FIG. 4 is a flowchart of an example method to create a customizable profile and modify an identified feature by removing a headset from video feed;
[0007] FIG. 5 is a flowchart of an example method to create a customizable profile for modifying an identified feature and output the modification of the identified feature and the video feed; and
[0008] FIG. 6 is a block diagram of an example computing device with a processor to execute instructions in a machine-readable storage medium for creating a customizable profile and processing video feed to modify a feature within the video feed including one of a headset removal, body modification, and/or other identified feature.
DETAILED DESCRIPTION
[0009] Video conferencing is used in various environments and situations. In many of these situations, a user may want to present themselves in an aesthetically pleasing manner. There may be pre-defined profiles which may be used to represent an appearance of the user; however, these may not present the user in an aesthetically pleasing manner as the pre-defined profiles are more comedic and cartoonish in nature. Additionally, these pre-defined profiles may not provide a fine granularity to enable the user to modify their appearance. Further, the user may manually manipulate their appearance in images, but this may become infeasible in live video.
[0010] To address these issues, examples disclosed herein create a customizable profile in which users may change their appearance to appear in a more aesthetically pleasing manner. The examples disclose a computing device including a camera to capture a video feed for processing and a processor to create the customizable profile. The customizable profile indicates which particular features to modify within the video feed. The processor may utilize facial recognition to locate a position of these particular features. Locating the positions of the particular features within the video feed enables the processor to modify the identified features in accordance with the customizable profile. The customizable profile is a type of template which identifies particular features within the video feed which should be modified. In this manner, the user may customize the profile to determine how they may want to appear. Additionally, creating the customizable profile enables the user much control and granularity for adjusting particular features of their appearance.
[0011] In another example discussed herein, the video feed is captured and modified for output in real-time. This enables the modified video feed to output in real-time to the camera capturing the video feed. Additionally, this implementation provides a live video manipulation, thus allowing customization of the appearance of the user. This enables the user to present themselves in a live video conference in a manner they deem aesthetically pleasing.
[0012] In a further example discussed herein, the modification of the identified feature includes removing a headset from the user. In this implementation, the computing device utilizes facial recognition to determine boundaries of the user. Additionally, the computing device may use depth perception in a three-dimensional video to separate the background of the video from the user. This enables the computing device to fill in the headset removal with the user's facial features and the identified background. This enables a seamless presentation of the user in the video conferencing environment without the visual distraction of the headset.
[0013] In summary, examples disclosed herein create a customizable profile in which users may change their appearance to appear in a more aesthetically pleasing manner.
[0014] Referring now to the figures, FIG. 1 is a block diagram of an example computing device 102 including a processor 104 and a camera 110. The processor 104 creates a customizable profile which describes what particular features should be modified within a video feed 112. The camera 110 captures the video feed for the processor 104 to modify an identified feature within the video feed 112 at module 108. Upon modifying the video feed 112 at module 108, the computing device 102 may output the modified video feed. In one implementation, the video feed 112 is captured, modified, and output in real-time. The modified video feed may be output in real-time to the camera 110 capturing the video feed 112. This enables a live video manipulation for a user to determine how they may want to appear. This enables the user to manipulate how they may appear in a video conference, thus allowing customization of the appearance. In one implementation, the computing device 102 may include a display (not illustrated) in which to output the modified video feed 112. In another implementation, the computing device 102 may present a user interface on the display for the user to select which particular features to modify within the video feed 112. The computing device 102 is an electronic device including the processor 104 and the camera 110 and as such may include a target device, mobile device, client device, personal computer, desktop computer, laptop, tablet, video game console, or other type of electronic device capable of creating the customizable profile at module 106 and modifying the identified feature within the video feed 112 at module 108.
[0015] The processor 104 creates the customizable profile and modifies the identified features within the video feed 112 in accordance with the customizable profile. Implementations of the processor 104 include a controller, circuit logic, a microchip, chipset, electronic circuit, microprocessor, semiconductor, microcontroller, central processing unit (CPU), application-specific integrated circuit (ASIC), embedded controller, or other type of electronic device capable of producing the customizable profile and modifying video feed in accordance with the customizable profile at modules 106 and 108.
[0016] At module 106, the computing device 102 creates the customizable profile. The customizable profile is a type of template which identifies the particular features within the video feed 112 for modification or changing. In this manner, the user may customize the profile to determine how they may want to appear. Creating the customizable profile provides a fine granularity for adjusting the particular features in the live video feed. The particular features within the video feed 112 may correspond to attributes of the user which the user may desire to modify. As such, module 106 may include presenting a user interface for the user to input which attributes to modify and an extent of the modification. This enables the user to present themselves in a live video conference in a manner they deem aesthetically pleasing. For example, the modification of the identified feature may include blemish removal, skin tone adjustment, hair color change, clothing change, headset removal, oil shine removal, facial hair removal, teeth whitening, facial symmetry, tattoo removal, tattoo placement, eye whitening, eye brightening, shadow removal, etc. Modifications of the identified feature including skin and hair may include a sampling of pixel values. Upon sampling the pixel values, the processor 104 may determine which pixel values may not be included within a particular range and thus adjust that pixel value to the range of values. This presents a seamless presentation without blurring the identified features upon the modification.
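The sampling-and-adjustment idea above can be sketched in a few lines. This is a minimal illustration added by the editor, not part of the disclosure: it assumes a single-channel feature region, and the helper name `smooth_feature_region` and the mean ± tolerance·std range are hypothetical choices for what "a particular range" might be.

```python
import numpy as np

def smooth_feature_region(region, tolerance=2.0):
    """Clamp outlier pixels of an identified feature toward the sampled range.

    `region` is an (H, W) array holding one channel of a feature (e.g. a
    skin patch).  Pixels falling outside mean +/- tolerance * std of the
    sampled values are pulled back to the range boundary, which removes a
    blemish-like outlier without blurring the rest of the feature.
    """
    region = region.astype(float)
    mean, std = region.mean(), region.std()
    lo, hi = mean - tolerance * std, mean + tolerance * std
    return np.clip(region, lo, hi)

# A mostly uniform skin patch with one dark "blemish" pixel.
patch = np.full((5, 5), 180.0)
patch[2, 2] = 40.0
smoothed = smooth_feature_region(patch, tolerance=2.0)
```

Only the out-of-range pixel is adjusted; in-range pixels pass through unchanged, which matches the seamless, non-blurring presentation described above.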
[0017] At module 106, the features may be identified through facial recognition technology. Using facial recognition, the processor 104 may identify a particular user which may correspond to the customizable profile. Additionally, using facial recognition serves as a base for the processor 104 to locate the particular features on the user. For example, using facial recognition, the processor 104 may locate a mouth and as such may determine eyes are located within a particular distance from the mouth. In another implementation, the processor 104 may sample various pixel values of corresponding features on the user. Using the pixel values, the processor 104 may identify hair, eyes, skin, clothes, etc. The module 106 may include an instruction, set of instructions, process, operation, logic, technique, function, firmware, and/or software executable by the computing device 102 to create the customizable profile for modification of identified features within the video feed 112.
[0018] The camera 110 is an optical instrument which records video. In one implementation, the camera 110 includes a three-dimensional camera. In this implementation, the camera 110 may include a range camera and/or a stereo camera. The range camera operates to produce a two-dimensional video feed 112 which may show distance points in a scene within the video feed 112 from a specific point. Using the range camera enables the computing device 102 to identify a user from a background as the user may be closer in distance to the specific point. The stereo camera utilizes multiple lenses with separate film frame sensors for each lens, thus allowing the camera 110 to simulate human binocular vision and capture the three-dimensional video feed 112. Although FIG. 1 illustrates the camera 110 as internal to the computing device 102, implementations should not be limited as this is done for illustration purposes. For example, the camera 110 may be in communication with the computing device 102 and as such may be external to the computing device 102.
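The range-camera separation of user from background might be sketched as below. This is an editor-added illustration under stated assumptions: the `split_user_from_background` helper and its midpoint-style threshold are hypothetical; the disclosure only says the user tends to be closer to the camera than the background.

```python
import numpy as np

def split_user_from_background(depth_map, near_bias=0.5):
    """Separate a user from the background using per-pixel distances.

    Pixels closer than a threshold between the nearest and farthest
    distances (position controlled by `near_bias`) are treated as the
    user; everything else is treated as background.
    """
    near, far = depth_map.min(), depth_map.max()
    threshold = near + near_bias * (far - near)
    return depth_map <= threshold  # boolean mask: True = user

depth = np.array([
    [3.0, 3.0, 3.0, 3.0],
    [3.0, 1.0, 1.0, 3.0],   # user seated closer to the camera
    [3.0, 1.0, 1.0, 3.0],
])
mask = split_user_from_background(depth)
```

The resulting mask is what later steps (such as headset removal) could use to isolate the user's boundaries from the identified background.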
[0019] The video feed 112 is a video stream which consists of multiple image frames and as such is captured by the camera 110. Upon capturing the video feed 112, the camera 110 communicates with the computing device 102 to transmit the video stream to the processor 104 for modification.
[0020] At module 108, the processor 104 modifies the identified features in accordance with the customizable profile. As explained earlier, the customizable profile is the template which the processor 104 utilizes to direct which features to modify within the video feed 112 and how to modify those particular features. The features are identified utilizing facial recognition technology which enables the processor 104 to determine the positions of the features for modification. The module 108 may include an instruction, set of instructions, process, operation, logic, technique, function, firmware, and/or software executable by the computing device 102 to modify the identified feature within the video feed 112 in accordance with the customizable profile at module 106.
[0021] FIGS. 2A-2C illustrate example user interfaces for creating a customizable profile. In FIG. 2A, the customizable profile may be created from various selections of features on a user interface 208. FIG. 2A illustrates a display 202 associated with a computing device which may present a video feed 204. In another implementation, the computing device may present a frame from the video feed 204 which may be used to create the customizable profile. The video feed 204 depicts facial technology 206 in which a processor identifies features for modification. The facial technology 206 may locate certain features on the user's face, such as eyes, nose, and mouth, as a base for locating the particular features for modification according to the user's selections. For example, upon locating the user's eyes, the processor may locate hair and/or clothes.
[0022] The user interface 208 illustrates possible features which a user may select to modify. Possible modifications to identified features may include facial beautification such as skin toning, facial slimming/widening, skin smoothing, blemish removal, facial symmetry, teeth straightening, eye brightening, eye gaze correction, and/or cosmetics application. Each of the identified features may include a sliding bar 210 to adjust modification of the particular feature. For example, skin toning includes the sliding bar from light to dark. The sliding bar 210 enables a granular modification of each of the identified features. This enables the user to further customize their respective profile. In another implementation, the identified features may include a check or other type of indicator for the customizable profile to apply that particular modification. For example, the eye gaze correction is checked to apply that particular modification in the creation of the customizable profile. Upon creating the customizable profile, the user may accept the modifications, cancel the modifications, reset, and/or default to the original captured video feed as indicated with selections at the bottom of FIG. 2A.
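One plausible way to represent such a profile in code is a plain mapping from feature names to either a slider value (the sliding bar 210) or an on/off flag (the check indicator). This is an editor's sketch: the feature names and the 0.0-1.0 slider scale are assumptions for illustration, not taken from the disclosure.

```python
# Hypothetical customizable profile: sliders are 0.0-1.0, checks are booleans.
profile = {
    "skin_toning": 0.35,          # slider: light (0.0) -> dark (1.0)
    "skin_smoothing": 0.6,
    "teeth_whitening": 0.8,
    "eye_gaze_correction": True,  # checked indicator
    "headset_removal": False,     # unchecked: leave the headset alone
}

def enabled_modifications(profile):
    """Return the identified features the video pipeline should apply."""
    return sorted(
        name for name, setting in profile.items()
        if setting is True or (isinstance(setting, float) and setting > 0.0)
    )
```

A pipeline would consult `enabled_modifications(profile)` per frame, giving the fine per-feature granularity the sliding bars are meant to provide.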
[0023] FIG. 2B illustrates the video feed 204 for tattoo removal as modifying the identified feature in accordance with the customizable profile. The user interface 208 includes the modification of the tattoo removal. The user interface 208 represents the possible features the user may select for modification. In this implementation, the tattoo may include a coordinate position to locate the identified feature. The top display 202 of FIG. 2B depicts the user with a tattoo prior to modification of the feature. The bottom display depicts the user upon the modification of the tattoo removal.
[0024] FIG. 2C illustrates the video feed 204 to change clothing attire and remove a headset in accordance with the customizable profile. In this implementation, the identified features for modification on the user interface 208 include clothing options and an indicator to remove the headset from the user. As such, the top display 202 of FIG. 2C represents the possible choices for the user to select to create the customizable profile. The bottom display 202 of FIG. 2C represents how the video feed 204 may appear upon modification of the identified features. Selecting the various features from the user interface 208 to modify on the display 202 enables user input to determine how they may want to appear in the video feed. For example, the top display 202 illustrates the user in casual clothing with a headset. The bottom display 202 illustrates the user upon modification of the identified features in accordance with the customizable profile as the user is wearing a business suit with tie and the headset is removed. Removing the headset, the processor may fill in the space with the user's hair or other facial features and the background. This implementation is described in detail in a later figure.
[0025] FIG. 3 is a flowchart of an example method executable by a computing device to create a customizable profile for modifying an identified feature within video feed in accordance with the customizable profile. The computing device creates the customizable profile which specifies which particular features to modify within the video feed. The particular features are identified using facial recognition. Using facial recognition serves as a base for the computing device to locate positions of the particular features on a user. The computing device may capture a video feed and/or video stream which may be processed to identify the particular features to modify. Capturing the video feed, the computing device modifies the particular features identified from the customizable profile. Upon modifying the video feed, the computing device outputs the modified video feed and the video feed.
[0026] The computing device processes the video feed for modification of the identified feature and output in real-time. This enables a live video manipulation for a user to determine how they may want to appear. In discussing FIG. 3, references may be made to the components in FIG. 1 and FIGS. 2A-2C to provide contextual examples. For example, processor 104 as in FIG. 1 may execute operations 302-308 to modify the identified feature according to the customizable profile. In another example, the computing device 102 executes operations 302-308. Further, although FIG. 3 is described as implemented by the computing device 102 as in FIG. 1, it may be executed on other suitable components. For example, FIG. 3 may be implemented in the form of executable instructions on a machine-readable storage medium 604 as in FIG. 6.
[0027] At operation 302, the computing device creates the customizable profile. The customizable profile directs which particular features should be modified within the video feed according to the user input. As such, operation 302 may include presenting an interface for the user to customize their appearance. Such customization may include skin smoothing, blemish removal, oil shine removal, facial hair removal, teeth whitening, facial symmetry, headset removal, eye whitening, eye brightening, shadow removal, etc. In this manner, the customizable profile includes particular features which should be modified according to the user preferences. The input may include the user input to direct the profile to the identified feature to modify and how to modify that feature. For example, the identified feature may include clothing, headset removal, skin tone, etc.
[0028] At operation 304, the computing device processes the video feed and/or video stream. In one implementation, a three-dimensional camera captures the video feed and/or video stream. Using this video feed, the computing device may utilize facial recognition to locate a position of the user.
[0029] At operation 306, the computing device modifies the identified feature within the video feed in accordance with the customizable profile. The customizable profile is a template which indicates which particular features should be modified within the video feed. In one implementation, the modification may include removing a headset on the user within the video feed. In this implementation, the computing device utilizes facial recognition to define the boundaries of the user. The computing device may then identify the background from the user. Upon identifying the background and the boundaries of the user, the computing device may remove the headset from the video feed and fill in gaps created by the removal with the background and facial features. This implementation is described in detail in the next figure. In another implementation, the modification may include changing clothes of the user. In a further implementation, the modification may include changing a user's hair, eye color, skin tone, hair color, etc. In this implementation, a pixel value of the identified feature may be adjusted to a value which may remove a blemish, remove shadows, correct skin tone, etc. Adjusting the pixel value of the identified feature may include sampling pixel values of the corresponding feature to determine a range of the pixel value. Then the computing device may ensure the surrounding pixel values are within the range, thus adjusting pixel values that may be out of the range.
[0030] At operation 308, the computing device outputs the modification of the identified feature and the video feed. In one implementation, the modified video feed is overlaid onto the video feed for output. In this implementation, the computing device utilizes the facial recognition technology to track movement of the user over time. This enables the computing device to align the modified video feed with the video feed to present a seamless presentation.
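The overlay step might be sketched as below. This is an editor-added illustration: `overlay_modification` and the plain patch replacement at a tracked position are hypothetical stand-ins for the compositing the disclosure describes, using single-channel frames for brevity.

```python
import numpy as np

def overlay_modification(frame, patch, top_left):
    """Overlay a modified feature patch onto the captured frame.

    `top_left` is the tracked (row, col) position of the identified
    feature, so the modification stays aligned with the user as they
    move between frames.  The captured frame itself is not mutated.
    """
    out = frame.copy()
    r, c = top_left
    h, w = patch.shape[:2]
    out[r:r + h, c:c + w] = patch
    return out

frame = np.zeros((6, 6))         # captured frame (single channel)
patch = np.ones((2, 2))          # modified feature region
composited = overlay_modification(frame, patch, top_left=(2, 3))
```

Running this per frame, with `top_left` updated by face tracking, gives the aligned, seamless output described in the paragraph above.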
[0031] FIG. 4 is a flowchart of an example method executable by a computing device to create a customizable profile and modify an identified feature by removing a headset from video feed. The computing device removes the headset by identifying a background of the video feed and utilizing facial recognition and the identified background to fill in blank space left by the headset removal. The computing device may output the video feed with the headset removal. In this manner, the headset may be electronically removed in a live video stream so the user appears as if he or she is not wearing the headset. The video feed and/or video stream is captured by a three-dimensional camera that may perceive depth to identify the background from a user. The headset may be isolated, removed, and filled in with the background and/or facial features of the user. In this implementation, the computing device utilizes facial recognition to determine boundaries of the user's head and face for determining whether to fill in the gaps with the user's head and/or face and the background. Including the headset removal enables the user to appear in a more aesthetically pleasing manner without the bulky appearance of the headset while still including the benefits of the headset. Additionally, the headset removal may be performed in real-time so there is no perceivable lag in the video feed and/or video stream. In discussing FIG. 4, references may be made to the components in FIG. 1 and FIGS. 2A-2C to provide contextual examples. For example, processor 104 as in FIG. 1 may execute operations 402-414 to modify the identified feature according to the customizable profile. In another example, the computing device 102 executes operations 402-414. Further, although FIG. 4 is described as implemented by the computing device 102 as in FIG. 1, it may be executed on other suitable components. For example, FIG. 4 may be implemented in the form of executable instructions on a machine-readable storage medium 604 as in FIG. 6.
[0032] At operation 402, the computing device creates the customizable profile. The customizable profile indicates which particular feature to modify within the video feed. In another implementation of operation 402, a three-dimensional camera captures the video feed and/or video stream for depth perception to identify the background at operation 410. In a further implementation of operation 402, the computing device scans the user and/or the headset so the physical dimensions and shapes are known, so once identifying the background at operation 410, the computing device may remove the headset. Operation 402 may be similar in functionality to operation 302 as in FIG. 3.
[0033] At operation 404, the computing device processes the video feed and/or video stream. The video feed may be captured at operation 402 with the three-dimensional camera. Processing the video feed and/or video stream, the computing device may modify particular identified features of the user within the video stream and/or video feed. Operation 404 may be similar in functionality to operation 304 as in FIG. 3.
[0034] At operation 406, the computing device modifies the identified feature in the video feed in accordance with the customizable profile created at operation 402. In this operation, the user may prefer to remove the headset in the video stream, thus an interface may include input indicating to remove the headset. In one implementation, the modification of the identified feature includes the headset removal from the video feed as at operations 408-412. In another implementation, the modification of the identified feature includes adjusting a pixel value of the particular identified feature. This implementation is described in detail in the next figure. Operation 406 may be similar in functionality to operation 306 as in FIG. 3.
[0035] At operation 408, the computing device removes the headset in accordance with the customizable profile. In one implementation of operation 408, the computing device may utilize facial recognition to determine the boundaries of a person's face, hair, body, etc. Determining the boundaries enables the computing device to isolate the headset from the person.
[0036] At operation 410, the computing device identifies the background in the video feed. Using the three-dimensional camera, the computing device may perceive the depth of the user and identify the background from the depth. This enables the computing device to further isolate the headset from the person and the background for removal at operation 412.
[0037] At operation 412, the computing device utilizes the identified background at operation 410 and facial recognition to remove the headset from the video feed. At this operation, the computing device may fill in the space with the identified background and the facial recognition. Filling in the space may include the computing device sampling pixel values of the background, face, and/or hair. Thus, the space may be adjusted to a particular range of values which correspond to the background, face, and/or hair.
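Filling the removed headset's space with sampled values could be sketched as follows. This is an editor-added simplification: real inpainting is far more involved, and the `fill_removed_region` helper, which fills the masked area with the mean of the surrounding unmasked pixels, is a hypothetical stand-in for sampling the background, face, and hair values.

```python
import numpy as np

def fill_removed_region(frame, mask):
    """Fill pixels flagged in `mask` (where the headset was removed)
    with the mean of the surrounding, unmasked pixel values -- a crude
    stand-in for sampling background, face, and hair values.
    """
    out = frame.astype(float).copy()
    out[mask] = out[~mask].mean()   # sampled fill value
    return out

frame = np.full((4, 4), 200.0)      # surrounding hair/background values
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True               # pixels the headset occupied
filled = fill_removed_region(frame * ~mask, mask)
```

After filling, the gap takes on values within the sampled range of its surroundings, so the removal blends into the face and background rather than leaving a visible hole.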
[0038] At operation 414, the computing device outputs the headset removal and the video feed. In one implementation, processing the video feed at operation 404, modifying the identified feature at operation 406, and outputting the modification and the video feed at operation 414 occur in real-time. In another implementation, the modified video feed is overlaid with the captured video feed for output. This implementation is described in detail in the next figure. Operation 414 may be similar in functionality to operation 308 as in FIG. 3.
[0039] FIG. 5 is a flowchart of an example method executable by a computing device to create a customizable profile for modifying an identified feature and outputting the modification of the identified feature and a video feed. The computing device creates the customizable profile and captures video feed for processing. The computing device modifies an identified feature in accordance with the customizable profile. The modification of the identified feature may include adjusting a pixel value of the identified feature and utilizing facial recognition of a user to track movement. Tracking movement of the user enables positioning of the modification to align accurately with the video feed to present a seamless video feed to another party. The computing device may output the modification of the identified feature by overlaying the modification onto the captured video feed for output. Overlaying the modification onto the captured video feed enables the modification of the identified feature to occur during the live feed during a video conference. In discussing FIG. 5, references may be made to the components in FIG. 1 and FIGS. 2A-2C to provide contextual examples. For example, processor 104 as in FIG. 1 may execute operations 502-514 to modify the identified feature according to the customizable profile. In another example, the computing device 102 executes operations 502-514. Further, although FIG. 5 is described as implemented by the computing device 102 as in FIG. 1, it may be executed on other suitable components. For example, FIG. 5 may be implemented in the form of executable instructions on a machine-readable storage medium 604 as in FIG. 6.
[0040] At operation 502, the computing device creates the customizable profile. Operation 502 may include presenting an interface to a user for input. The customizable profile includes particular features which should be modified according to the user preferences. In this manner, the customizable profile directs the computing device to which particular features to modify. The computing device identifies the particular features to modify in the video stream by using facial recognition. Creating the customizable profile may include the user preferences according to input. The input may include the user input to direct the profile to the identified feature to modify and how to modify that feature. For example, the identified feature may include clothing, headset removal, skin tone, etc. Creating the customizable profile provides a fine granularity of modifying features of the user in a live video feed. In one implementation, operation 504 processes the live video while the user may input their identified feature preferences to modify. Operation 502 may be similar in functionality to operations 302 and 402 as in FIGS. 3-4.
[0041] At operation 504, the device processes the video feed. Operation 504 includes capturing the live video feed using a camera. Operation 504 may be similar in functionality to operations 304 and 404 as in FIGS. 3-4.
[0042] At operation 506, the device modifies the identified feature in the video feed according to the customizable profile. Operation 506 may include identifying the feature based on facial recognition. Using facial recognition, the device may determine where a mouth, eyes, nose, etc. are located on the user. Based on the locations from facial recognition, the device may identify the particular feature which should be modified according to the customizable profile. In this operation, the customizable profile may identify the particular feature to modify in relation to the user. Identifying the particular feature gives the user fine control in determining what to modify in a video stream. Additionally, the customizable profile may also identify how to modify the particular feature. This may include smoothing out the particular feature by including a range of pixel values in which to modify the feature. In one implementation of operation 506, the device may adjust the pixel value of the identified feature and utilize facial recognition to track movement for positioning the modification to align with the video feed, as at operations 508-510. Operation 506 may be similar in functionality to operations 306 and 406 as in FIGS. 3-4.
[0043] At operation 508, the device adjusts the pixel value of the identified feature. In this operation, the customizable profile may include the pixel value range for the particular features. In turn, the device adjusts the pixel value to modify the particular feature.
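A minimal sketch of the pixel adjustment in operation 508, assuming the profile stores a target pixel-value range and a mask marking the identified feature (both assumptions, since the patent does not fix a representation): clamping the masked pixels into the stored range is one simple way to "adjust the pixel value" of only the identified feature.

```python
import numpy as np

def adjust_pixels(frame, mask, pixel_range):
    """Clamp pixels of the identified feature into the profile's range."""
    lo, hi = pixel_range
    out = frame.copy()
    out[mask] = np.clip(out[mask], lo, hi)  # adjust only the masked feature
    return out

# Tiny 2x2 grayscale frame; the top row is the identified feature.
frame = np.array([[10, 250], [120, 130]], dtype=np.uint8)
mask = np.array([[True, True], [False, False]])
adjusted = adjust_pixels(frame, mask, (90, 200))
```

Pixels outside the mask pass through untouched, so the rest of the frame is preserved exactly as captured.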
[0044] At operation 510, the device utilizes facial recognition to track movement for positioning the modification of the identified feature to align with the video feed. In one implementation, the device utilizes facial recognition technology to track the position of the user over time. This enables the modified stream to properly align with the captured video feed, thus presenting the seamless video.
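The alignment in operation 510 can be sketched as follows, assuming a facial-recognition tracker supplies the face position in the previous and current frames (how those positions are obtained is not specified here): translating the modification overlay by the same offset the face moved keeps the modification registered with the user.

```python
import numpy as np

def align_overlay(overlay, prev_pos, cur_pos):
    """Shift the modification overlay by the tracked face movement."""
    dy = cur_pos[0] - prev_pos[0]
    dx = cur_pos[1] - prev_pos[1]
    return np.roll(overlay, shift=(dy, dx), axis=(0, 1))

# A 4x4 overlay with the modification anchored at the tracked face point.
overlay = np.zeros((4, 4), dtype=np.uint8)
overlay[1, 1] = 255
# The tracker reports the face moved from (1, 1) to (2, 3).
moved = align_overlay(overlay, prev_pos=(1, 1), cur_pos=(2, 3))
```

A production system would use a sub-pixel warp rather than an integer roll, but the per-frame offset-and-translate step is the same idea.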
[0045] At operation 512, the device outputs the modification of the identified feature and the video feed. In one implementation, the device outputs the modification and the video feed in real time with the capture of the video feed. This provides a dynamic aspect by allowing the user to choose how they present themselves in live video conferencing environments. In another implementation, the modification of the identified feature is overlaid onto the video feed as at operation 514. Operation 512 may be similar in functionality to operations 308 and 414 as in FIGS. 3-4.
[0046] At operation 514, the device overlays the modified stream onto the captured video feed. This allows the seamless presentation of the two video streams. In one implementation, a component within the device merges both the modified stream and the captured video feed, thus providing modifications of features in real time with the capture of the video feed.
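The merge in operation 514 amounts to per-pixel compositing: wherever a modification mask is set, take the modified pixel; elsewhere keep the captured pixel. This is a sketch under that assumption (the patent does not name a blending method), shown here as a hard mask rather than alpha blending.

```python
import numpy as np

def merge_streams(captured, modified, mask):
    """Composite the modified stream onto the captured feed per frame."""
    return np.where(mask, modified, captured)

# Captured frame is uniform gray; the modified stream brightens two pixels.
captured = np.full((2, 2), 50, dtype=np.uint8)
modified = np.full((2, 2), 180, dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
merged = merge_streams(captured, modified, mask)
```

Running this once per frame keeps the output in step with capture, which is what lets the modification appear seamless during a live conference.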
[0047] FIG. 6 is a block diagram of computing device 600 with a processor 602 to execute instructions 606-630 within a machine-readable storage medium 604. Specifically, the computing device 600 with the processor 602 is to create a customizable profile to modify an identified feature in a captured video feed and in turn output the modification and the video feed. Although the computing device 600 includes processor 602 and machine-readable storage medium 604, it may also include other components that would be suitable to one skilled in the art. For example, the computing device 600 may include the camera 110 as in FIG. 1. The computing device 600 is an electronic device with the processor 602 capable of executing instructions 606-630, and as such embodiments of the computing device 600 include a computing device, mobile device, client device, personal computer, desktop computer, laptop, tablet, video game console, or other type of electronic device capable of executing instructions 606-630. The instructions 606-630 may be implemented as methods, functions, operations, and other processes implemented as machine-readable instructions stored on the storage medium 604, which may be non-transitory, such as hardware storage devices (e.g., random access memory (RAM), read-only memory (ROM), erasable programmable ROM, electrically erasable ROM, hard drives, and flash memory).
[0048] The processor 602 may fetch, decode, and execute instructions 606-630 to output the modified identified feature based on the customizable profile, accordingly. In one implementation, upon executing instructions 606-608, the processor 602 executes instruction 610 by executing instructions 612-616, 618-622, and/or instructions 624-628. For example, in one implementation, upon executing 606-608, the processor may execute instructions 610-616 prior to executing instruction 630. In another implementation, upon executing instructions 606-610, the processor may execute instructions 618-622 prior to executing instruction 630. In a further implementation, upon executing 606-610, the processor may execute instructions 624-628 prior to executing instruction 630. Specifically, the processor 602 executes instructions 606-610 to: create the customizable profile that identifies the modification of a feature within the video feed, wherein the feature is identified by facial recognition; capture the video feed for processing according to the customizable profile; and process the video feed. To process the video feed for feature modification, the processor 602 may execute instructions 612-616 to: adjust a pixel value of the identified feature; and overlay the modification of the identified feature on the video feed for output. To remove a headset from the video feed while processing the video feed, the processor 602 may execute instructions 618-622 to: identify a background of the video feed using depth location; and utilize facial recognition of the identified feature and the identified background to remove the headset from the video feed. To process the video feed for body modification, the processor 602 may execute instructions 624-628 to: utilize body recognition for the identified feature associated with the body; and modify the body feature accordingly.
Finally, the processor may execute instruction 630 to output the modification of the identified feature and the video feed in real time with the capture of the video feed upon execution of instructions 606-610, 612-616, 618-622, and/or 624-628.

[0049] The machine-readable storage medium 604 includes instructions 606-630 for the processor 602 to fetch, decode, and execute. In another embodiment, the machine-readable storage medium 604 may be an electronic, magnetic, optical, memory, storage, flash-drive, or other physical device that contains or stores executable instructions. Thus, the machine-readable storage medium 604 may include, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage drive, a memory cache, network storage, a Compact Disc Read-Only Memory (CD-ROM), and the like. As such, the machine-readable storage medium 604 may include an application and/or firmware which can be utilized independently and/or in conjunction with the processor 602 to fetch, decode, and/or execute instructions of the machine-readable storage medium 604. The application and/or firmware may be stored on the machine-readable storage medium 604 and/or stored on another location of the computing device 600.
[0050] In summary, examples disclosed herein create a customizable profile in which users may change their appearance to appear in a more aesthetically pleasing manner.

CLAIMS

We claim:
1. A computing device comprising:
a camera to capture a video feed; and
a processor to:
create a customizable profile, wherein the customizable profile modifies an identified feature in the video feed, and the feature is identified based on facial recognition; and
modify the identified feature of the video feed in accordance with the customizable profile.
2. The computing device of claim 1, further comprising:
an output to provide the modification of the identified feature and video feed in real time to the camera capturing the video feed.
3. The computing device of claim 1 wherein the identified feature of the video feed comprises a headset and the processor is further to:
identify a background of the video feed through depth perception; and
remove the headset from the video feed based on the facial recognition and the identified background.
4. The computing device of claim 1 wherein to modify the identified feature of the video in accordance with the customizable profile, the processor is further to:
adjust a pixel value associated with the identified feature; and
overlay the modified identified feature with the video feed.
5. A method comprising:
creating a customizable profile, wherein the customizable profile is to modify an identified feature in a video feed and the feature is identified based on facial recognition;
processing the video feed;
modifying the identified feature of the video feed in accordance with the customizable profile; and
outputting the modified identified feature and the video feed.
6. The method of claim 5 wherein the identified feature includes a removal of a headset, and wherein modifying the identified feature of the video feed in accordance with the customizable profile comprises:
identifying a background of the video feed; and
removing the headset from the video feed based on the facial recognition and the identified background.
7. The method of claim 5 wherein modifying the identified feature of the video feed in accordance with the customizable profile comprises:
adjusting a pixel value of the identified feature.
8. The method of claim 5 wherein outputting the modification of the identified feature and the video feed comprises:
overlaying the modification of the identified feature onto the video feed for output.
9. The method of claim 5 wherein modifying the identified feature of the video feed in accordance with the customizable profile comprises:
utilizing facial recognition to track movement for positioning the modification of the identified feature to align with the video feed.
10. The method of claim 5 wherein processing the video feed, modifying the identified feature, and outputting the modification of the identified feature and the video feed occur in real time.
11. A non-transitory machine-readable storage medium comprising instructions that when executed by a processor cause a computing device to:
create a customizable profile, wherein the customizable profile is to modify an identified feature in a video feed and the feature is identified based on facial recognition or body recognition;
receive the video feed for processing; and
process the video feed for a modification of the identified feature in accordance with the customizable profile.
12. The non-transitory machine-readable storage medium of claim 11, further comprising instructions that when executed by the processor cause the computing device to:
output the video feed with the modification of the identified feature in real-time to the capture of the video feed for processing.
13. The non-transitory machine-readable storage medium of claim 11 wherein to process the video for modification of the identified feature in accordance with the customizable profile is further comprising instructions that when executed by the processor cause the computing device to:
adjust a pixel value of the identified feature; and
overlay the modification of the identified feature on the video feed for output.
14. The non-transitory machine-readable storage medium of claim 11 wherein the modification of the identified feature includes removal of a headset and further wherein to process the video feed for the modification of the identified feature in accordance with the customizable profile is further comprising instructions that when executed by the processor cause the computing device to:
identify a background of the video feed through a depth location; and
utilize the facial recognition of the identified feature and the identified background to remove the headset from the video feed.
15. The non-transitory machine-readable storage medium of claim 11 wherein the identified feature includes a body and further wherein to process the video feed for the modification of the identified feature in accordance with the customizable profile is further comprising instructions that when executed by the processor cause the computing device to:
utilize the body recognition for the identified feature associated with a body; and
modify the identified feature associated with the body.
PCT/US2014/019524 2014-02-28 2014-02-28 Customizable profile to modify an identified feature in video feed WO2015130309A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2014/019524 WO2015130309A1 (en) 2014-02-28 2014-02-28 Customizable profile to modify an identified feature in video feed


Publications (1)

Publication Number Publication Date
WO2015130309A1 true WO2015130309A1 (en) 2015-09-03

Family

ID=54009480

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/019524 WO2015130309A1 (en) 2014-02-28 2014-02-28 Customizable profile to modify an identified feature in video feed

Country Status (1)

Country Link
WO (1) WO2015130309A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9516255B2 2015-01-21 2016-12-06 Microsoft Technology Licensing, Llc Communication system
US9531994B2 2014-10-31 2016-12-27 Microsoft Technology Licensing, Llc Modifying video call data
US9973730B2 2014-10-31 2018-05-15 Microsoft Technology Licensing, Llc Modifying video frames
WO2023067249A1 2021-10-22 2023-04-27 Wear2Meet Oy Video manipulation computer program and video communication system
US20230137171A1 2021-11-01 2023-05-04 LINE Plus Corporation Method, device, and non-transitory computer-readable recording medium to provide body effect for video call
WO2023191793A1 2022-03-31 2023-10-05 Hewlett-Packard Development Company, L.P. Color palettes of background images

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120163679A1 (en) * 2010-12-24 2012-06-28 Telefonaktiebolaget L M Ericsson (Publ) Dynamic profile creation in response to facial recognition
US20120306991A1 (en) * 2011-06-06 2012-12-06 Cisco Technology, Inc. Diminishing an Appearance of a Double Chin in Video Communications
US20130242031A1 (en) * 2012-03-14 2013-09-19 Frank Petterson Modifying an appearance of a participant during a video conference
US20130343600A1 (en) * 2012-06-22 2013-12-26 Microsoft Corporation Self learning face recognition using depth based tracking for database generation and update
US20140037264A1 (en) * 2012-07-31 2014-02-06 Google Inc. Customized video




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14883870

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14883870

Country of ref document: EP

Kind code of ref document: A1