WO2021238995A1 - Interaction method for electronic device used for skin detection, and electronic device - Google Patents

Interaction method for electronic device used for skin detection, and electronic device

Info

Publication number
WO2021238995A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
face
area
detection target
finger
Prior art date
Application number
PCT/CN2021/096113
Other languages
English (en)
French (fr)
Inventor
徐涵
赵学知
丁弦
郜文美
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to US17/927,580 (published as US20230215208A1)
Priority to EP21812300.8A (published as EP4145251A4)
Publication of WO2021238995A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/52Details of telephonic subscriber devices including functional features of a camera

Definitions

  • This application relates to the field of software application technology, and in particular to an interaction method for an electronic device used for skin detection, and to such an electronic device.
  • This application provides an interaction method for an electronic device used for skin detection, and the electronic device itself. During skin care or makeup, the method can accurately recognize the user's gesture, and the intention the gesture indicates, from a real-time image of the user, and then give the skin condition information corresponding to the gesture along with skin care or makeup suggestions. This makes the interaction more natural and smooth and improves the user experience.
  • Some embodiments of the present application provide an interaction method for an electronic device used for skin detection.
  • The following describes the application from multiple aspects; the implementations and beneficial effects of these aspects may be cross-referenced.
  • In a first aspect, the present application provides an interaction method for an electronic device used for skin detection, applied to an electronic device. The method includes: acquiring multiple video frames that simultaneously include the user's face and hands, such as a real-time image of the user; recognizing the user's hand motion relative to the face in the multiple video frames and determining a target hand motion; and, in response to the target hand motion, determining a detection target in at least a partial area of the user's face in the video frames.
  • The detection target may include one or more of acne, fine lines, pores, blackheads, plaques, red blood streaks, the nose, mouth, eyes, eyebrows, facial contour, and skin color.
  • For example, the shape of the eyebrows may be a meniscus (crescent), a figure eight, and so on.
  • The embodiments of this application can accurately recognize the user's gesture, and the intention it indicates, from the real-time image of the user during skin care or makeup, and provide the skin state information corresponding to the gesture together with skin care or makeup suggestions. This makes the interaction between the user and the device more natural and smooth and improves the user experience.
  • In some implementations, determining the target hand motion includes: the electronic device determines that the position of a fingertip in the video frame is less than a preset distance from the position of the face, and then determines that the hand motion is the target hand motion.
  • In other implementations, determining the target hand motion includes: determining that the fingertip position in the video frame is less than a preset distance from the face position, and determining that the finger remains relatively still with respect to the face for longer than a preset time. This further improves the accuracy of judging the user's intention from the gesture.
  • In further implementations, determining the target hand motion includes: determining that the video includes two hands; determining that the fingertip positions of both hands in the video frame are less than the preset distance from the face position; and determining that the fingers of both hands remain relatively still with respect to the face for longer than the preset time.
  • The electronic device determining that the finger position in the video frame is less than the preset distance from the face position includes: the area where the user's finger is located overlaps the area where the face is located; or the finger does not overlap the face, but the distance between the fingertip and the face edge point closest to that fingertip is less than the preset distance. A minimal sketch of this test is given below.
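  • As an illustrative sketch only (not the patent's own implementation): the proximity-plus-stillness test above can be written as follows. The helpers that supply the fingertip position, face bounding box, and face edge points are hypothetical stand-ins for whatever landmark detector the device uses, and all thresholds are assumed values.

```python
# Sketch of the target-hand-motion test: fingertip near the face AND
# relatively still for longer than a preset time. Thresholds and the
# per-frame input format are assumptions for illustration.
import math

PRESET_DISTANCE = 40   # pixels from fingertip to face edge (assumed)
PRESET_DURATION = 1.0  # seconds of relative stillness (assumed)
STILLNESS_EPS = 5      # max fingertip drift per frame to count as "still"

def fingertip_near_face(fingertip, face_box, face_edge_points):
    """True if the fingertip overlaps the face area, or lies within
    PRESET_DISTANCE of the face edge point nearest to it."""
    x, y = fingertip
    left, top, right, bottom = face_box
    if left <= x <= right and top <= y <= bottom:   # overlap case
        return True
    nearest = min(math.dist(fingertip, p) for p in face_edge_points)
    return nearest < PRESET_DISTANCE

def is_target_hand_motion(frames, fps):
    """frames: per-frame dicts with 'fingertip', 'face_box', 'face_edge'."""
    still_frames, prev = 0, None
    for f in frames:
        near = fingertip_near_face(f["fingertip"], f["face_box"], f["face_edge"])
        still = prev is None or math.dist(f["fingertip"], prev) < STILLNESS_EPS
        still_frames = still_frames + 1 if (near and still) else 0
        if still_frames / fps > PRESET_DURATION:
            return True
        prev = f["fingertip"]
    return False
```

  • For the two-hand variant described above, the same test would simply be required to hold for the fingertips of both hands over the same frames.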
  • In some implementations, the electronic device determining the detection target in at least a partial area of the user's face in the video frame includes: from at least one of the multiple video frames, determining the intersection area between the pointing area of the finger in the target hand motion and the area where the face is located, and determining the detection target within that intersection area.
  • In this way, the user can locate the detection target directly by pointing a finger, which makes the experience intuitive and natural and enhances the interaction.
  • The pointing area of the finger is a geometric figure determined with the fingertip as the reference point and the pointing direction of the finger as the reference direction, and the figure has a size and contour preset by the user.
  • Alternatively, the pointing area of the finger is a geometric figure determined with the fingertip as the reference point and the pointing direction of the finger as the reference direction, where the geometric figure itself is preset by the user.
  • When two hands are used, the intersection or union of the pointing areas of the fingers of the two hands is used as the pointing area.
  • The geometric figure may be any one of a trapezoid, a sector, a triangle, a circle, or a square; a sketch of one such construction follows.
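  • As a hedged sketch of the geometry described above (sizes are illustrative defaults, not values from the patent): a trapezoidal pointing area can be built from the fingertip (reference point) and the finger direction (reference direction), then intersected with the face region, here using the shapely geometry library.

```python
# Sketch: a trapezoidal pointing area anchored at the fingertip and opening
# along the finger direction, intersected with the face polygon. The widths
# and depth are illustrative defaults a user could override.
import math
from shapely.geometry import Polygon

def pointing_trapezoid(fingertip, direction_deg, near_w=20, far_w=80, depth=120):
    """Trapezoid opening away from the fingertip along the finger direction."""
    dx = math.cos(math.radians(direction_deg))
    dy = math.sin(math.radians(direction_deg))
    px, py = -dy, dx                     # unit vector perpendicular to the finger
    fx, fy = fingertip
    far_x, far_y = fx + dx * depth, fy + dy * depth
    return Polygon([
        (fx + px * near_w / 2, fy + py * near_w / 2),
        (fx - px * near_w / 2, fy - py * near_w / 2),
        (far_x - px * far_w / 2, far_y - py * far_w / 2),
        (far_x + px * far_w / 2, far_y + py * far_w / 2),
    ])

def intersection_with_face(fingertip, direction_deg, face_polygon):
    """The face region the finger points at; detection runs inside it."""
    return pointing_trapezoid(fingertip, direction_deg).intersection(face_polygon)
```

  • A sector, triangle, circle, or square could be substituted by returning a different Polygon; for two hands, the intersection or union of the two pointing areas would be used, as described above.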
  • In some implementations, the face includes at least one preset ROI (region of interest) area. The electronic device determining the detection target in response to the target hand motion further includes: from at least one of the multiple video frames, determining the intersection area between the finger's pointing area and the ROI areas of the face, and determining the detection target within that intersection area.
  • For example, the ROI areas may be divided into the forehead, nose bridge, philtrum, chin, cheeks, under-eye areas, apple cheeks, and so on.
  • Determining the detection target from the intersection of the finger pointing area and the ROI areas of the user's face confines the analysis to the area the user specifies; this improves the accuracy of the electronic device's analysis of the detection target and makes the interaction more engaging.
  • In some implementations, the detection target is determined from the ROI area occupying the largest portion of the intersection area.
  • When the intersection area is determined to cover more than two ROI areas, one ROI area is selected based on the preset priority of the ROI areas and/or the degree of matching between each ROI area's detection target and the feature standard model corresponding to that target, and the detection target is determined from the selected ROI area.
  • When the intersection area is determined to cover more than two ROI areas, the method may further include: the electronic device determines the detection target based on a first operation performed by the user on a detection target in an ROI area.
  • In this way, the user can confirm the detection target directly, for example by tapping, according to his or her own observation, which makes it convenient to choose based on subjective judgment and improves the interactive experience. A sketch of the ROI selection step follows.
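  • A minimal sketch of the ROI selection step, continuing the shapely-based example above: when the pointing area overlaps several ROI areas, prefer the one with the largest overlap and break ties with a preset priority table. The ROI names and priority values are assumptions for illustration.

```python
# Sketch: pick one ROI when the intersection area covers several.
# Lower number = higher preset priority (illustrative values only).
ROI_PRIORITY = {"under_eyes": 0, "cheeks": 1, "nose_bridge": 2,
                "forehead": 3, "chin": 4}

def select_roi(pointing_area, roi_polygons):
    """roi_polygons: dict mapping ROI name -> shapely Polygon."""
    overlaps = {name: pointing_area.intersection(poly).area
                for name, poly in roi_polygons.items()}
    candidates = [n for n, a in overlaps.items() if a > 0]
    if not candidates:
        return None
    # largest overlap wins; preset priority breaks exact ties
    return max(candidates, key=lambda n: (overlaps[n], -ROI_PRIORITY.get(n, 99)))
```

  • The patent's alternative criterion, matching each ROI's detection target against a feature standard model, could replace or weight the overlap term; the user's first operation (for example a tap) can also override this automatic choice, as described above.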
  • The detection target includes one or more skin conditions among acne, fine lines, pores, blackheads, plaques, and red blood streaks; or the detection target includes one or more of the nose, mouth, eyes, eyebrows, facial contour, and skin color.
  • The multiple video frames are consecutive video frames within a preset duration.
  • The method further includes: the electronic device obtains the real-time image of the user through its camera, displays the real-time image on a first interface, and obtains from the real-time image the video frames, within the preset duration, that simultaneously contain the user's face and hands. Users can watch their own facial condition while making hand motions, which makes the operation intuitive and easy.
  • Before acquiring the multiple video frames that simultaneously include the user's face and hands, the method further includes: the electronic device determines, in response to the user's input operation, whether to execute the makeup mode or the skin care mode.
  • In this way, the user can first confirm whether he or she wants to apply makeup or perform skin care.
  • The electronic device can then give targeted makeup or skin care suggestions based on the user's gestures.
  • The electronic device outputting the extended content includes: the electronic device displays a second interface that includes the detection target determined from the target hand motion and the extended content corresponding to the shape of the detection target; or the electronic device announces, by voice broadcast, the detection target determined from the target hand motion and the extended content corresponding to its shape.
  • This display method is more intuitive and makes it convenient to give the user skin care or makeup advice.
  • The extended content includes: one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup suggestions for the detection target in the makeup state. A sketch of the content lookup follows.
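  • The extended content lookup can be sketched as a simple keyed library; the key scheme (mode plus detection target) and the sample entries below are assumptions for illustration, not text from the patent's library.

```python
# Sketch: an extended content library keyed by (mode, detection target).
# Entries are illustrative placeholders.
EXTENDED_CONTENT = {
    ("skin_care", "acne"): {
        "analysis": "Acne found in the pointed area.",
        "advice":   "Cleanse gently and avoid squeezing.",
    },
    ("makeup", "eyebrows"): {
        "analysis": "Meniscus (crescent) eyebrow shape detected.",
        "advice":   "Adjust the arch to match the facial contour.",
    },
}

def lookup_extended_content(mode, target):
    """mode: 'skin_care' or 'makeup'; target: detection target name."""
    return EXTENDED_CONTENT.get((mode, target))
```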
  • In a second aspect, the present application also provides a device for skin detection.
  • The device includes: an acquisition module for acquiring multiple video frames that simultaneously include the user's face and hands, such as a real-time image of the user; a recognition module for recognizing the user's hand motion relative to the face in the multiple video frames and determining the target hand motion; and a processing module which, in response to the target hand motion, determines the detection target in at least a partial area of the user's face in the video frames, where the detection target may include one or more of acne, fine lines, pores, blackheads, plaques, red blood streaks, the nose, mouth, eyes, eyebrows, facial contour, and skin color.
  • Based on the detection target and the shape of the detection target, the processing module determines the extended content associated with them from the extended content library, and outputs that content.
  • The extended content may include: one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup suggestions for the detection target in the makeup state.
  • The embodiments of this application can accurately recognize the user's gesture, and the intention it indicates, from the real-time image of the user during skin care or makeup, and provide the skin state information corresponding to the gesture together with skin care or makeup suggestions. This makes the interaction between the user and the device more natural and smooth and improves the user experience.
  • In some implementations, determining the target hand motion includes: the processing module determines that the position of a fingertip in the video frame is less than a preset distance from the position of the face, and then determines that the hand motion is the target hand motion.
  • In other implementations, determining the target hand motion includes: the processing module determines that the fingertip position in the video frame is less than a preset distance from the face position, and that the finger remains relatively still with respect to the face for longer than a preset time. This further improves the accuracy of judging the user's intention from the gesture.
  • In further implementations, determining the target hand motion includes: the processing module determines that the video includes two hands; that the fingertip positions of both hands in the video frame are less than the preset distance from the face position; and that the fingers of both hands remain relatively still with respect to the face for longer than the preset time.
  • The processing module determines that the fingers of the two hands in the video frame are relatively still with respect to the face, which can improve the accuracy of gesture judgment.
  • The processing module determining that the finger position in the video frame is less than the preset distance from the face position includes: the area where the user's finger is located overlaps the area where the face is located; or the finger does not overlap the face, but the distance between the fingertip and the face edge point closest to that fingertip is less than the preset distance.
  • The processing module determining the detection target in at least a partial area of the user's face in the video frame in response to the target hand motion includes: from at least one of the multiple video frames, determining the intersection area between the pointing area of the finger in the target hand motion and the area where the face is located, and determining the detection target within that intersection area.
  • In this way, the user can locate the detection target directly by pointing a finger, which makes the experience intuitive and natural and enhances the interaction.
  • The pointing area of the finger is a geometric figure determined with the fingertip as the reference point and the pointing direction of the finger as the reference direction, and the figure has a size and contour preset by the user.
  • Alternatively, the pointing area of the finger is a geometric figure determined with the fingertip as the reference point and the pointing direction of the finger as the reference direction, where the geometric figure itself is preset by the user.
  • When two hands are used, the intersection or union of the pointing areas of the fingers of the two hands is used as the pointing area.
  • The geometric figure may be any one of a trapezoid, a sector, a triangle, a circle, or a square.
  • The face includes at least one preset ROI area. The processing module determining the detection target in at least a partial area of the user's face in response to the target hand motion further includes: from at least one of the multiple video frames, determining the intersection area between the finger's pointing area and the ROI areas of the face, and determining the detection target within that intersection area.
  • For example, the ROI areas may be divided into the forehead, nose bridge, philtrum, chin, cheeks, under-eye areas, apple cheeks, and so on.
  • Determining the detection target from the intersection of the finger pointing area and the ROI areas of the user's face confines the analysis to the area the user specifies; this improves the accuracy of the detection target analysis and makes the interaction more engaging.
  • In some implementations, the detection target is determined from the ROI area occupying the largest portion of the intersection area.
  • When the intersection area is determined to cover more than two ROI areas, one ROI area is selected based on the preset priority of the ROI areas and/or the degree of matching between each ROI area's detection target and the feature standard model corresponding to that target, and the detection target is determined from the selected ROI area.
  • When the intersection area is determined to cover more than two ROI areas, the method may further include: the electronic device determines the detection target based on the user's first operation on a detection target in an ROI area.
  • In this way, the user can confirm the detection target directly, for example by tapping, according to his or her own observation, which makes it convenient to choose based on subjective judgment and improves the interactive experience.
  • The detection target includes one or more skin conditions among acne, fine lines, pores, blackheads, plaques, and red blood streaks; or the detection target includes one or more of the nose, mouth, eyes, eyebrows, facial contour, and skin color.
  • The multiple video frames are consecutive video frames within a preset duration.
  • The device further operates as follows: the processing module obtains the real-time image of the user through the acquisition module, displays the real-time image on a first interface of the display module, and obtains from the real-time image the video frames, within the preset duration, that simultaneously contain the user's face and hands. Users can watch their own facial condition while making hand motions, which makes the operation intuitive and easy.
  • Before acquiring the multiple video frames that simultaneously include the user's face and hands, the processing module determines, in response to the user's input operation, whether to execute the makeup mode or the skin care mode.
  • In this way, the user can first confirm whether he or she wants to apply makeup or perform skin care.
  • The electronic device can then give targeted makeup or skin care suggestions based on the user's gestures.
  • The processing module outputting the extended content includes: displaying, through the display module, a second interface that includes the detection target determined from the target hand motion and the extended content corresponding to the shape of the detection target; or the electronic device announcing them by voice broadcast.
  • This display method is more intuitive and makes it convenient to give the user skin care or makeup advice.
  • The extended content includes: one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup suggestions for the detection target in the makeup state.
  • In a third aspect, the embodiments of the present application also provide an electronic device, including one or more memories, one or more processors coupled to the memories, and one or more programs stored in the memories; the electronic device is used to execute the method of the first aspect.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program.
  • When the computer program is executed, the processor performs the method of the first aspect.
  • In a fifth aspect, the embodiments of the present application provide a computer program product containing instructions which, when the product runs on an electronic device, cause a processor to execute the method of the first aspect.
  • FIG. 1 is a scene diagram of the interaction of a user using a mobile phone according to an embodiment of the application
  • Fig. 2a is a scene diagram of interaction of a user using a mobile phone according to an embodiment of the application
  • FIG. 2b is a diagram of multiple scenes in which a user points a single finger to a facial area according to an embodiment of the application;
  • FIG. 2c is a diagram of multiple scenes in which the user's hands are pointing to the face area according to an embodiment of the application;
  • Fig. 3 is a schematic structural diagram of a mobile phone according to an embodiment of the application.
  • FIG. 4 is a block diagram of the software structure of a mobile phone according to an embodiment of the application.
  • FIG. 5 is a flowchart of an interactive method of an electronic device for skin detection according to an embodiment of the application
  • Fig. 6a is a schematic diagram of a user interface of a user selection mode according to an embodiment of the application.
  • FIG. 6b is a schematic diagram of a user interface of a real-time image of a user in makeup mode according to an embodiment of the application;
  • FIG. 6c is a schematic diagram of the positional relationship between the fingertips of one-handed fingers and the facial area according to an embodiment of the application;
  • FIG. 6d is a schematic diagram of the positional relationship between the fingertips of one-handed fingers and the facial area according to an embodiment of the application;
  • FIG. 7a is a schematic diagram of the intersection area between the pointing area of the finger of one hand and the face according to an embodiment of the application;
  • FIG. 7b is a schematic diagram of the intersection area between the pointing area of the fingers of both hands and the face according to an embodiment of the application;
  • FIG. 8a is a schematic diagram of ROI region division of a face according to an embodiment of the application.
  • FIG. 8b is a schematic diagram of covering multiple ROI areas in an intersection area according to an embodiment of the application.
  • Fig. 8c is a schematic diagram of a user interface of a mobile phone according to an embodiment of the application.
  • Fig. 9a is a schematic diagram of a user interface of a mobile phone according to an embodiment of the application.
  • FIG. 9b is a schematic diagram of a user interface of an acne detection result according to an embodiment of the application;
  • FIG. 9c is a schematic diagram of a user interface of an eyebrow detection result according to an embodiment of the application.
  • FIG. 10a is a schematic diagram of an acne grading image and corresponding description corpus according to an embodiment of this application.
  • FIG. 10b is a schematic diagram of an image of a stain type and a corresponding description corpus according to an embodiment of the application;
  • FIG. 10c is a schematic diagram of a red zone image and a description corpus of a red zone problem according to an embodiment of the application;
  • FIG. 10d is a schematic diagram of description corpus of eyebrow makeup corresponding to different face shapes according to an embodiment of the application.
  • FIG. 10e is a schematic diagram of an interface of a user's face image after virtual makeup in some embodiments of the application.
  • FIG. 11 is a schematic structural diagram of an electronic device according to an embodiment of the application.
  • FIG. 12 is a block diagram of a device according to an embodiment of the application.
  • FIG. 13 is a block diagram of a SoC according to an embodiment of the application.
  • In the embodiments, the electronic device may be a mobile phone, a notebook computer, a tablet computer, a desktop computer, a laptop computer, an ultra-mobile personal computer (UMPC), a handheld computer, a netbook, a personal digital assistant (PDA), a wearable electronic device, a smart mirror, or another device with an image recognition function.
  • FIGs. 1 and 2a show interaction scenes of a user using a mobile phone.
  • the mobile phone 10 is equipped with a front camera 11 and a screen 12.
  • A video stream or photos can be captured in real time through the camera, and the captured video stream is displayed on the screen 12 in real time, so the user can observe his or her hand motions and facial image in real time through the screen 12.
  • The user points a finger at the facial area containing the detection target (image feature) of the face, for example an area containing acne.
  • Based on the distance between the finger and the face, and the time this distance is maintained, the mobile phone determines whether the hand motion indicates that the user wants to know the skin care state.
  • If so, the hand motion is a target hand motion; that is, it is determined to be an action command input by the user for skin care.
  • The mobile phone 10 then obtains at least one video frame corresponding to the target hand motion, identifies the image feature in the intersection area between the pointing area of the finger and the face in that video frame, and outputs the corresponding extended content according to the image feature.
  • The detection target may include one or more image features recognized from the user's video frames, such as skin color, acne, fine lines, pores, blackheads, plaques, red blood streaks, the nose, eyes, eyebrows, mouth, chin, and forehead.
  • The shape of the detection target can also be determined, for example one or more of the eyebrow shape, nose shape, mouth shape, chin shape, forehead shape, and facial contour.
  • The pointing area of the finger refers to a geometric figure determined by the electronic device with the fingertip as the reference point and the pointing direction of the finger as the reference direction.
  • The geometric figure has a set size and outline, which can be preset by the user or pre-stored on the electronic device.
  • The geometric figure may be any of a trapezoid, a sector, a triangle, a circle, or a square; in a specific implementation, the user can freely define its size and contour according to the actual situation.
  • The extended content may include one or more of state analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup suggestions for the detection target in the makeup state. The extended content can be stored in an extended content library, and the library can be stored on a cloud server.
  • The electronic device communicates with the cloud server; when the electronic device needs extended content, it obtains the corresponding content from the cloud server.
  • The extended content can be updated regularly on the cloud server to provide users with the latest makeup and skin care knowledge.
  • The extended content may also be stored directly on the electronic device, so the device can access it at any time; a cloud-first lookup with a local fallback is sketched below.
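  • A hedged sketch of that cloud-first, device-fallback retrieval; the endpoint URL, query parameters, and response format are all assumptions for illustration.

```python
# Sketch: fetch extended content from a cloud server, falling back to a
# copy pre-stored on the device when the network is unavailable.
import json
import urllib.request

LOCAL_CACHE = {}  # extended content pre-stored on the device

def get_extended_content(target, shape, base_url="https://example.com/content"):
    try:
        url = f"{base_url}?target={target}&shape={shape}"
        with urllib.request.urlopen(url, timeout=2) as resp:
            content = json.load(resp)
        LOCAL_CACHE[(target, shape)] = content  # refresh the local copy
        return content
    except OSError:
        # offline or server unreachable: use the device-stored content
        return LOCAL_CACHE.get((target, shape))
```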
  • The user's hand motion may include pointing the finger of one hand at the facial area where the detection target is located.
  • FIG. 2b shows multiple scenes in which a single finger of the user points at the facial area.
  • These scenes include hand motions such as the user pointing a single finger at acne on the face, at spots, at wrinkles, or at the nose. The hand motion may also include the fingers of both hands pointing at the facial area where the detection target is located.
  • FIG. 2c shows multiple scenes in which the fingers of both of the user's hands point at the facial area.
  • These scenes include hand motions such as the user pointing at acne on the face, at spots, at wrinkles, or at the eyebrows with the fingers of both hands.
  • The finger pointing at the facial area described in this application is only exemplary; the finger can also point at other parts, such as the mouth or chin, or at red areas, blackheads, and so on, which is not limited here.
  • In this way, the mobile phone can understand the user's intention directly from the hand motions; it does not need to analyze the user's entire image, but only detects the area the user specifies.
  • While looking at the mirror image, the user points a finger at a certain facial area and obtains the knowledge points related to the image features in that area. For example, in skin care, the user obtains the skin status, such as acne, its grade, and care advice; in makeup, the user obtains makeup suggestions. This makes the interaction easier, smoother, and more natural.
  • FIG. 3 shows a schematic diagram of the structure of a mobile phone.
  • The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) connector 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light Sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • The processor 110 may include one or more processing units.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the processor can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can call it directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and so on.
  • the interfaces between the modules illustrated in the embodiments of the present application are merely illustrative descriptions, and do not constitute a structural limitation of the mobile phone 100.
  • the mobile phone 100 may also adopt different interface connection modes or a combination of multiple interface connection modes in the foregoing embodiments.
  • the USB connector 130 is a connector that complies with the USB standard specification, and can be used to connect the mobile phone 100 and peripheral devices. Specifically, it can be a standard USB connector (for example, a Type C connector), a Mini USB connector, a Micro USB connector, and the like.
  • the USB connector 130 can be used to connect a charger to charge the mobile phone 100, and can also be used to transfer data between the mobile phone 100 and peripheral devices. It can also be used to connect earphones and play audio through earphones.
  • the connector can also be used to connect to other mobile phones, such as AR devices.
  • the processor 110 may support a universal serial bus (Universal Serial Bus), and the standard specifications of the universal serial bus may be USB1.x, USB2.0, USB3.x, and USB4.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB connector 130.
  • the charging management module 140 may receive the wireless charging input through the wireless charging coil of the mobile phone 100. While the charging management module 140 charges the battery 142, it can also supply power to the mobile phone through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the mobile phone 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied on the mobile phone 100.
  • the mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • In the embodiments of the present application, the mobile communication module 150 may be communicatively connected to a cloud server, so that the processor 110 can obtain from the cloud server the extended content corresponding to the image feature, for example one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup suggestions for the detection target in the makeup state.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied on the mobile phone 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • The wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.
  • the antenna 1 of the mobile phone 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the mobile phone 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the mobile phone 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the mobile phone 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • In the embodiments of the present application, the display screen 194 may be used to display images or videos of the user, or to display text information reminding the user of the action currently required, so that the user faces the camera and performs the corresponding action according to the instructions. The processor 110 may judge, from the image obtained by the camera, that the user is in the awake state, and save the user's pupil information in this state as a pupil model of the user for comparison during the unlocking process of the mobile phone 100; the pupil information may be pupil depth information (for example, 3D image data), and the pupil model may be a pupil depth model (a 3D model of the human face). When the processor 110 responds to a received unlocking instruction from the user, an interface for the locked state may be displayed.
  • The interface may include a face input box, or text information prompting the user to unlock. After the processor 110 performs an unlocking operation, an interface that the user can operate directly may be displayed; when the processor 110 prohibits unlocking, an unlock-failure interface or the like may be displayed.
  • the mobile phone 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and is projected to the photosensitive element.
  • The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transfers the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the mobile phone 100 may include one or N cameras 193, and N is a positive integer greater than one.
  • In the embodiments of the present application, the camera 193 captures videos or still images (video frames) that simultaneously contain the user's face and hand motions, so that the mobile phone 100 can determine the user's target hand motion, and the image features it specifies, from multiple video frames in the video. Based on these image features, the processor 110 calls up one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup suggestions for the detection target in the makeup state.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the mobile phone 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • The mobile phone 100 may support one or more video codecs. In this way, the mobile phone 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • The NPU is a neural-network (NN) computing processor.
  • Through the NPU, applications such as intelligent cognition of the mobile phone 100 can be realized, for example image recognition, face recognition, speech recognition, and text understanding.
  • In the embodiments of the present application, the NPU can realize recognition based on pupil information, fingerprints, gait, or voice, so that the mobile phone 100 can unlock itself, or prohibit unlocking, through various biometric-based recognition technologies.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the mobile phone 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the mobile phone 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the processor 110 may call instructions in the internal memory 121 to cause the mobile phone 100 to sequentially execute the interaction method of the electronic device for skin detection according to the embodiments of the present application.
  • The method includes: acquiring multiple video frames that simultaneously include the user's face and hands by turning on the camera 193; the processor 110 recognizing the user's hand motion relative to the face in the multiple video frames and determining the target hand motion; in response to the target hand motion, the processor 110 determining the detection target in at least a partial area of the user's face in the video frames; and the processor 110, based on the detection target and the shape of the detection target, determining the extended content associated with them from the extended content library and outputting it. Users can directly obtain the knowledge points they want to know through hand motions, and the interaction is simple and smooth. An end-to-end sketch follows.
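  • Tying the earlier sketches together, the loop the processor runs might look as follows; every helper name here (capture_frames, detect_target_in, and the per-frame fields) is hypothetical, standing in for the modules the patent describes, and the other functions reuse the sketches given above.

```python
# End-to-end sketch of the interaction: frames -> target hand motion ->
# intersection area -> ROI -> detection target -> extended content.
def skin_detection_interaction(camera, mode, fps=30):
    frames = camera.capture_frames()            # face + hands in view
    if not is_target_hand_motion(frames, fps):  # proximity + stillness test
        return None
    f = frames[-1]                              # a frame of the target motion
    area = intersection_with_face(f["fingertip"], f["direction_deg"],
                                  f["face_polygon"])
    roi = select_roi(area, f["roi_polygons"])
    target = detect_target_in(area, roi)        # e.g. acne, pores, eyebrows
    content = lookup_extended_content(mode, target)
    return target, content                      # display or voice-broadcast
```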
  • The above-mentioned internal memory 121 and/or external storage area can store the user's extended content.
  • After the processor 110 confirms the detection target, it can directly retrieve the extended content corresponding to that target, for example one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup suggestions for the detection target in the makeup state.
  • the mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the mobile phone 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the speaker 170A can play voice information to inform the user of the detection target corresponding to the current hand motion and the extended content corresponding to the shape of the detection target, so that the user can learn the knowledge points of skin care or makeup through voice.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone 100 emits infrared light to the outside through the light emitting diode.
  • the mobile phone 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the mobile phone 100. When insufficient reflected light is detected, the mobile phone 100 can determine that there is no object near the mobile phone 100.
  • the mobile phone 100 can use the proximity light sensor 180G to detect that the user holds the mobile phone 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the proximity light sensor 180G senses that an object is approaching the mobile phone 100 and sends a signal to the processor 110 of the mobile phone 100 indicating that an object is approaching.
  • after receiving the signal that an object is approaching, the processor 110 controls the display screen 194 to light up, or directly collects video of the object through the camera 193, so that the processor 110 can determine the target hand motion based on this video and determine the detection target based on the target hand motion.
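  • A small illustrative sketch of this proximity-triggered wake-up; the normalized threshold value and the phone interface methods are assumptions for illustration only:

```python
PROXIMITY_THRESHOLD = 0.6  # assumed normalized reflected-light level

def on_proximity_sample(reflected_light: float, phone) -> None:
    # Sufficient reflected infrared light means an object is near the
    # phone; the processor then lights the display and/or starts the
    # camera so it can look for the target hand motion.
    if reflected_light >= PROXIMITY_THRESHOLD:
        phone.light_up_display()
        phone.start_camera_capture()
```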
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the mobile phone 100 can adaptively adjust the brightness of the display 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the mobile phone 100 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, so as to realize user identity recognition and obtain corresponding permissions, such as accessing application locks, taking photos with fingerprints, answering calls with fingerprints, and so on.
  • the software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application takes an Android system with a layered architecture as an example to illustrate the software structure of the mobile phone 100 by way of example.
  • FIG. 4 is a block diagram of the software structure of the mobile phone 100 according to an embodiment of the present application.
  • the layered architecture divides the software into several layers, each with a clear role and division of labor; the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include video, image, audio, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the mobile phone 100, for example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of a graph or scroll-bar text, for example, notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window.
  • For example, a text message is prompted in the status bar, a prompt sound is emitted, the mobile phone vibrates, or the indicator light flashes.
  • Android Runtime includes a core library and a virtual machine, and is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a graphics engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • The following exemplarily describes the workflow of the software and hardware of the mobile phone 100 in a scenario where the camera 193 is used for capture.
  • when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps of touch operations, etc.).
  • the original input events are stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer, and recognizes the control corresponding to the original input event.
  • For example, the touch operation is a touch click operation, and the control corresponding to the click operation is the icon of the skin care/makeup application.
  • the skin care/makeup application calls the interface of the application framework layer to start the application, then starts the camera driver by calling the kernel layer, and captures static images or video of the user's face and hand movements through the camera 193.
  • the system library calls the content provider of the application framework layer to obtain the extended content corresponding to the detection target, and calls the display driver of the kernel layer so that the display screen 194 displays an interface related to the extended content.
  • FIG. 5 shows a flowchart of an interaction method of an electronic device for skin detection according to the present application.
  • the interaction method may include the following steps:
  • step S500 the mobile phone processor determines the makeup or skin care mode.
  • the user can manually select the makeup or skin care mode according to his or her own needs; after the processor receives the mode selected by the user, step S510 is executed.
  • the processor determines the makeup or skin care mode and executes the various processes shown in FIG. 5 by running an application program.
  • step S510 the camera acquires a real-time image of the user.
  • the real-time image can be obtained through the front camera or the rear camera.
  • the front camera can be used so that the user can see the real-time image of himself.
  • step S520 the processor obtains multiple video frames simultaneously including the user's face and hands from the real-time image.
  • the multiple video frames may be consecutive video frames within 5 seconds; obtaining consecutive video frames within 5 seconds makes it easier for the processor to identify the user's hand movements.
  • the recognition of the face and the hand can be performed by using the existing recognition technology, which will not be described in detail in this application.
  • step S530 the processor determines whether the position of the fingertip of the finger in the video frame is less than a preset distance from the position of the face, for example, whether the position of the fingertip of the finger is less than 2 cm from the position of the face.
  • the fingers may be the fingers of a single hand or the fingers of both hands.
  • step S540 the processor determines whether the relative static duration of the finger relative to the face in the video frame is greater than a preset time. For example, it is determined whether the relative static duration of the finger relative to the face in the video frame is greater than 3 seconds; if it is greater than 3 seconds, the mobile phone executes step S550, and if it is less than or equal to 3 seconds, it returns to step S520.
  • step S550 the processor determines that the hand motion is the target hand motion. After the processor determines the target hand motion, it uses the target hand motion as an instruction input by the user to execute step S560.
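  • The check in steps S530 to S550 can be sketched as follows, assuming an upstream face/hand recognizer supplies per-frame fingertip and nearest-face-edge coordinates in centimetres; the 2 cm, 3 s, and 1 cm stillness values mirror the examples above, and the helper names are hypothetical:

```python
import numpy as np

PRESET_DISTANCE_CM = 2.0     # S530 threshold (example value)
PRESET_STILL_TIME_S = 3.0    # S540 threshold (example value)

def is_target_hand_motion(frames, fps=30):
    """frames: sequence of (fingertip_xy, nearest_face_edge_xy) pairs."""
    still_frames, prev_tip = 0, None
    for tip, face_edge in frames:
        # S530: is the fingertip closer to the face than the preset distance?
        if np.linalg.norm(np.subtract(tip, face_edge)) >= PRESET_DISTANCE_CM:
            still_frames, prev_tip = 0, None
            continue
        # S540: has the fingertip stayed relatively still (movement < ~1 cm)?
        if prev_tip is not None and np.linalg.norm(np.subtract(tip, prev_tip)) < 1.0:
            still_frames += 1
        else:
            still_frames = 0
        prev_tip = tip
        if still_frames / fps > PRESET_STILL_TIME_S:
            return True   # S550: this is the target hand motion
    return False
```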
  • step S560 the processor determines the detection target in response to the target hand motion.
  • the specific process of determining the detection target is described in detail in the following embodiments; for details, refer to the detailed description of step S560 below.
  • step S570 the processor outputs the extended content associated with the detection target based on the detection target and the shape of the detection target.
  • the extended content may include one or more of problem analysis and care advice based on the detection target in the skin care state, or one or more of makeup state analysis and makeup advice based on the detection target in the makeup state.
  • the parameters described above, namely the 5-second length of the consecutive video frames, the 2 cm threshold for whether the position of the fingertip is less than the preset distance from the position of the face, and the 3-second threshold for whether the finger stays relatively still relative to the face, are preset when the application in the mobile phone is produced.
  • the time lengths and the distance here are not limited to these values and can also take other values.
  • For example, the length of the consecutive video frames can be 10 seconds, 20 seconds, etc.; the threshold for whether the finger stays relatively still relative to the face can be 2 seconds, 4 seconds, etc.; and the threshold for whether the position of the fingertip is less than the preset distance from the position of the face can be 1 cm, 3 cm, etc. The application can also allow the user to change the preset time length or the preset distance.
  • the processor may also skip step S540; that is, in step S530, when the processor determines that the position of the fingertip is less than the preset distance (for example, 2 cm) from the position of the face, it proceeds directly to step S550.
  • the user's gesture and the intention indicated by the gesture can be accurately recognized from real-time images of the user during the skin care or makeup process, and the skin state information corresponding to the gesture, together with treatment suggestions for skin care or makeup, can be given. This makes the interaction process between the user and the device more natural and smooth, and enhances the user experience.
  • the steps shown in Figure 5 can be implemented in a mobile phone.
  • steps such as mode determination and judgment are executed by the processor of the mobile phone by running an application program, and steps such as obtaining user images can be executed by the camera of the mobile phone under the instruction of the processor.
  • step S500 a makeup or skin care mode is determined.
  • Figure 6a shows a schematic diagram of the user interface of the user selection mode.
  • the user interface 610 includes an option area 611 for the user's skin care or makeup and a navigation bar 612.
  • the option area 611 is provided with makeup and skin care options.
  • the navigation bar 612 can be provided with icons such as personal center, return to main page, and search. The user clicks these icons to enter the corresponding pages; for example, by clicking the personal center, the user enters the personal home page to view his or her personal information, historical before-and-after makeup photos, follower status, and so on.
  • step S510 a real-time image of the user is acquired.
  • the user's real-time image can be obtained by turning on the camera. To facilitate the user's interaction with the mobile phone, the front camera can be turned on to collect the user's real-time image, and the collected real-time image can be displayed through the interface at the same time.
  • the rear camera may also be used to collect real-time images of others or of the user.
  • the front camera is taken as an example for description.
  • FIG. 6b shows a schematic diagram of the user interface of the user's real-time image in makeup mode.
  • the user interface 620 includes a user image display area 621, a user action prompt area 622, and a front/rear camera option icon 623. The image display area 621 can display the user's real-time image so that the user can see his or her own image in time, and the prompt area 622 can prompt the user to perform the corresponding action. For example, when putting on makeup, it prompts: "please use your finger to point to your facial features or skin", and the user follows the prompt to complete the determination of the target action.
  • the front/rear camera option icon 623 enables the user to obtain real-time images through the front camera or the rear camera, so as to obtain real-time images of themselves or friends.
  • step S520 multiple video frames simultaneously including the user's face and hands are acquired from the real-time image.
  • the mobile phone obtains multiple video frames of the user's face and hands, and recognizes the user's hand movement relative to the face from the multiple video frames, thereby determining the target hand motion.
  • the hand movements may be single-handed finger movements or two-handed finger movements.
  • The following describes the determination process of the target hand motion with reference to the accompanying drawings, taking the fingers of a single hand as an example.
  • the target hand motion may be determined by step S530 alone, or by a combination of step S530 and step S540.
  • The process of determining the target hand motion by combining step S530 and step S540 is described in detail below.
  • step S530 it is determined whether the position of the fingertip of the finger in the video frame is less than a preset distance from the position of the face. The distance at the contact point between the fingertip and the facial area is defined as 0; when the distance is greater than 0, there is a gap between the fingertip and the face, and when the distance is less than 0, the fingertip overlaps the face.
  • Figure 6c shows a schematic diagram of the positional relationship between the fingertips of one-handed fingers and the facial area.
  • the fingertip 601 of the user's one-handed finger coincides with the facial area 602, and the mobile phone processor can determine that the position of the fingertip 601 is less than the preset distance from the position of the facial area.
  • As shown in FIG. 6d, the distance between the fingertip 603 of the user's one-handed finger and the facial area 602 is d. When d is less than the preset distance, the mobile phone processor can determine that the position of the fingertip is less than the preset distance from the facial area.
  • the distance d refers to the shortest geometrically measured distance between the fingertip of the finger and the facial area.
  • the preset distance can be flexibly and dynamically adjusted according to the user's height, face size, finger length, and other personal conditions. For example, when the target image acquired by the electronic device shows that the user's face is small, the preset distance can be set to a value that matches the small face, such as 2 cm, slightly smaller than the default setting of 3 cm.
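  • One hedged way to realize this dynamic adjustment is to scale the default distance by the detected face size; the reference face area and the clamping range below are assumptions, not values from this application:

```python
DEFAULT_DISTANCE_CM = 3.0        # default preset distance from the text
REFERENCE_FACE_AREA_CM2 = 350.0  # assumed "average" face area

def adaptive_preset_distance(face_area_cm2: float) -> float:
    # Smaller detected faces get a proportionally smaller preset
    # distance (e.g. roughly 2 cm instead of the 3 cm default).
    scale = (face_area_cm2 / REFERENCE_FACE_AREA_CM2) ** 0.5
    return DEFAULT_DISTANCE_CM * min(max(scale, 0.5), 1.5)
```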
  • when the fingers are the fingers of both hands of the user, the fingers of each hand should satisfy the positional relationship between the fingers of a single hand and the face described above.
  • the judgment method is the same as that for the fingers of a single hand; for details, refer to the judgment of the positional relationship between the fingers of a single hand and the face in FIG. 6c and FIG. 6d, which is not repeated here.
  • when the processor recognizes that the fingers belong to both hands of the user but only the fingers of one hand satisfy the condition that the fingertip position is less than the preset distance from the facial area, the judgment method for a single hand is used.
  • Step S540 is then further executed.
  • step S540 it is determined whether the relative static duration of the finger relative to the face in the video frame is greater than a preset time.
  • condition a1: the processor determines that the user's facial area remains relatively still relative to the mobile phone screen, that is, within a preset time, the movement range of the face relative to the mobile phone screen is less than a preset value; for example, within 3 s, the movement range of the facial area relative to the mobile phone screen is less than 1 cm.
  • the preset time and the preset value can be flexibly set according to the scene in actual applications to achieve the best judgment result.
  • condition a2: the processor determines that the user's fingers remain relatively still relative to the mobile phone screen, that is, within a preset time, the movement range of the hand relative to the mobile phone screen is less than a preset value; for example, within 3 s, the movement range of the hand is less than 1 cm. The preset time and the preset distance can be flexibly set according to the actual application to achieve the best judgment result.
  • condition a3: the processor determines that the finger and the face are both kept relatively still relative to the mobile phone screen at the same time, that is, within a preset time, the movement ranges of the face and the hand relative to the mobile phone screen are both less than a preset value; for example, within 3 s, the movement ranges of the face and the hand relative to the mobile phone screen are both less than 1 cm.
  • the preset time and the preset value can be flexibly set according to the scene in actual applications to achieve the best judgment result.
  • when the processor determines that the finger and the face meet any one of conditions a1, a2, and a3, it can determine that the length of time the user's finger and face remain relatively stationary satisfies the preset time of 3 seconds, and the finger motion is determined to be the target hand motion.
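  • The three stillness conditions can be sketched as below, assuming per-frame face and fingertip coordinates in centimetres; the movement range is approximated by the diagonal of the bounding box of the tracked positions:

```python
import numpy as np

def movement_range_cm(positions) -> float:
    # Diagonal of the bounding box of the tracked positions, used as a
    # proxy for the movement range relative to the phone screen.
    pts = np.asarray(positions, dtype=float)
    return float(np.linalg.norm(pts.max(axis=0) - pts.min(axis=0)))

def relatively_still(face_positions, finger_positions, preset_cm=1.0) -> bool:
    a1 = movement_range_cm(face_positions) < preset_cm     # face still
    a2 = movement_range_cm(finger_positions) < preset_cm   # fingers still
    a3 = a1 and a2                                         # both still
    return a1 or a2 or a3   # meeting any one condition suffices
```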
  • the determination of the target hand motion may further include, when the hand and the face meet any one of conditions a1, a2, and a3:
  • condition a4: determining whether there is extended content associated with the intersection area between the pointing area of the user's finger and the facial area, that is, whether there are knowledge points of skin care or makeup.
  • If the mobile phone processor determines that extended content exists for the intersection area, it determines that the finger action is the target hand motion.
  • the pointing area of the finger may be a geometric figure determined with the fingertip of the finger as the reference point and the pointing direction of the finger as the reference direction, where the geometric figure is a geometric figure preset by the user.
  • the geometric figure may include any of a trapezoid, a sector, a triangle, a circle, and a square. In a specific implementation, the user can freely define the size and contour according to the actual situation.
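  • Taking the trapezoid as an example, the pointing area can be constructed as a polygon from the fingertip and the pointing direction; the depth and the near/far edge widths below stand in for the user-defined size and contour:

```python
import numpy as np

def pointing_trapezoid(fingertip, direction, depth=6.0,
                       near_width=1.0, far_width=4.0):
    """Return the four corners (counter-clockwise) of the pointing area."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)          # unit vector along the finger's pointing
    n = np.array([-d[1], d[0]])     # unit normal, perpendicular to d
    tip = np.asarray(fingertip, dtype=float)
    far = tip + depth * d           # centre of the trapezoid's far edge
    return np.array([tip - near_width / 2 * n,
                     tip + near_width / 2 * n,
                     far + far_width / 2 * n,
                     far - far_width / 2 * n])
```

  • The intersection area with the face (or with a face ROI area) can then be obtained by clipping this polygon against the face region with any standard polygon-clipping routine.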
  • FIG. 7a shows a schematic diagram of the intersection area between the pointing area of the single-handed finger and the face.
  • the pointing area 701 of the user's one-handed finger is a trapezoid, that is, the intersection area of the pointing area and the face is the area covered by the trapezoid. If the extended content is associated with the intersection area, the processor determines that the hand movement is the target hand movement.
  • FIG. 7b shows a schematic diagram of the intersection area between the pointing area of the two-handed fingers and the face.
  • the pointing area 702 of the fingers of each of the user's hands is trapezoidal, and the pointing areas of the two fingers intersect.
  • the intersection or union of the pointing areas 702 of the fingers of the two hands can be set as the pointing area of the fingers.
  • If extended content is associated with the resulting intersection area with the face, the processor determines that the hand movement is the target hand motion.
  • the mobile phone processor can determine the target hand motion and execute step S560.
  • the detection target is determined in response to the target hand motion.
  • the shape of the detection target can be further determined.
  • the detection target is a detection target in at least a part of the area on the user's face in the video frame, which can be specifically confirmed by any one or a combination of the following methods.
  • In the first confirmation method, the detection target is determined from the intersection area between the pointing area of the finger and the area where the face is located.
  • the intersection area between the pointing area of the finger of one hand and the area where the face is located can be referred to as shown in FIG. 7a.
  • the intersection area between the pointing area of the fingers of both hands and the area where the face is located can be referred to as shown in FIG. 7b, and the intersection or union of the pointing areas of the fingers of the two hands is used as the pointing area of the fingers.
  • the recognition of the detection target can be recognized and judged by the existing face recognition technology, which will not be described in detail here.
  • In the second confirmation method, the facial area is divided into ROI (region of interest) regions.
  • Figure 8a shows a schematic diagram of the ROI region division of a human face.
  • the ROI regions of the human face include the forehead 801, the cheeks 802, the chin 803, the bridge of the nose 804, the philtrum 805, the apple muscles 806, the eye bags 807, etc.
  • some of the ROI regions can have overlapping parts, for example, the apple muscle 806 area overlaps the cheek 802 area.
  • each ROI area of the face is respectively associated with extended content, and the extended content can merge the knowledge points of the ROI area.
  • For example, when the user's finger pointing area is at the eye bags, the area contains not only wrinkles but also dark circles related to the eye bags; combining the ROI areas therefore makes the extended content richer and the analysis more accurate, while also improving user interest.
  • the detection target is determined from the ROI region with the largest coverage area in the intersection region.
  • the coverage area may be the absolute area of the region covered by both the intersection area and the face ROI area, that is, the real area of the plane geometric figure, for example, 35 square centimeters. It may also be the relative area, i.e., the ratio of the coverage area to the face size; for example, if the coverage area is 35 square centimeters and the face size is 350 square centimeters, the relative area is 0.1.
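  • A hedged sketch of selecting the ROI area by largest coverage under either measure; the example numbers reproduce the 35 / 350 = 0.1 relative-area calculation above:

```python
def pick_roi_by_coverage(roi_coverage_cm2, face_area_cm2=None):
    """roi_coverage_cm2: dict mapping ROI name -> covered area in cm^2."""
    if face_area_cm2:   # relative area: ratio of coverage to face size
        scored = {roi: a / face_area_cm2 for roi, a in roi_coverage_cm2.items()}
    else:               # absolute area of the covered region
        scored = roi_coverage_cm2
    return max(scored, key=scored.get)

# e.g. pick_roi_by_coverage({"eye bags": 35.0, "cheek": 20.0}, 350.0)
# scores the eye bags at 0.1 and selects them as the detection target's ROI.
```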
  • Figure 8b shows a schematic diagram of covering multiple ROI areas in the intersection area.
  • the finger pointing area 808 covers ROI area 1, ROI area 2 and ROI area 3.
  • Priority is set for the ROI area in advance, and the detection target is determined from the ROI area with high priority in the intersection area.
  • For example, if the ROI area where the eye bags are located has the higher priority, the detection target is determined from the ROI area where the eye bags are located.
  • a weight value can also be set for the ROI areas in the intersection area. For example, weights can be set for parameters such as the area of each ROI region within the intersection area and the number of times it has been detected; the image features in all ROI areas within the intersection area are then detected, and the image feature with the highest score among them is taken as the detection target.
  • the following takes the weight of the area of the ROI region in the intersection region as an example to describe the calculation of the image feature score.
  • the image features can be determined based on the knowledge points in the extended content. Since the detected image features differ from the feature standard models, the closest feature can be found by computing the degree of matching between each image feature and its feature standard model.
  • the matching degree of each image feature with its standard model is multiplied by the weight of the ROI area in the intersection area, and the image feature with the highest comprehensive score is selected.
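  • The comprehensive score can be sketched as (matching degree with the feature standard model) x (area weight of the ROI the feature lies in), with the highest-scoring image feature taken as the detection target; the candidate tuples and the example numbers are assumptions for illustration:

```python
def pick_detection_target(candidates):
    """candidates: list of (feature_name, match_degree, roi_area_weight)."""
    best = max(candidates, key=lambda c: c[1] * c[2])
    return best[0]

# e.g. wrinkles matching 0.9 in an ROI weighted 0.6 beat dark circles
# matching 0.8 in an ROI weighted 0.3:
# pick_detection_target([("wrinkles", 0.9, 0.6), ("dark circles", 0.8, 0.3)])
# -> "wrinkles"
```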
  • Figure 8c shows a schematic diagram of the user interface of the mobile phone.
  • the user interface 810 displays multiple image features 811 and a user prompt area 812.
  • the image features 811 intuitively display the image features in the intersection area, and the user can directly click an image feature to determine it as the detection target.
  • the user prompt area 812 is used to display text reminding the user to select an image feature, for example, prompting the user: "Please click on an image feature to confirm the detection target".
  • step S570 is executed.
  • step S570 the extended content associated with the detection target is output based on the detection target and the shape of the detection target.
  • the output mode may be displayed through an interface, or voice broadcast, or a combination of interface and voice, etc., so that the user can learn the extended content corresponding to the detection target through the user interface and/or voice broadcast.
  • FIG. 9a shows a schematic diagram of a user interface of a mobile phone.
  • the user interface 910 includes an image display area 911 and a knowledge point display area 912, wherein the image display area 911 is used to display the detection target, and the knowledge point display area 912 is used to display the description corpus (text description) corresponding to the detection target.
  • the description corpus can use anthropomorphic description sentences to express skin care or makeup knowledge points, and can be stored in various file formats, such as Word format or XML format, in the internal memory, the external memory, or the cloud of the mobile phone.
  • the description corpus can describe the status of the detection target, such as severity level, professional classification, shape, size, and color, as well as care or makeup suggestions, making it easier for users to understand the detection target and how to care for it or apply makeup.
  • when the mobile phone is in skin care mode, taking pimples as the detection target as an example, FIG. 9b shows a schematic diagram of the user interface of the pimple detection result. As shown in FIG. 9b, the mobile phone interface displays the pimples in the image display area 911.
  • Figure 9c shows a schematic diagram of the user interface of the eyebrow detection results.
  • the image display area 911 of the interface 930 displays the user's face image.
  • the user can see his or her eyebrows, face shape, facial features, etc. through the face image. The application can first determine which of the common face shapes the user's face belongs to, give the corresponding description corpus according to the user's face shape and eyebrow growth, and, combined with the user's facial skin data, makeup habits, and product use, give personalized recommendations.
  • the description corpus of the makeup corresponding to the eyebrows is then displayed.
  • the image display area 911 and the knowledge point display area 912 can also display the diagnosis and description corpus of the detection target as shown in FIGS. 10a to 10d.
  • Among them, the knowledge points about pimples in the extended content can include the professional classification of pimples: according to the shape, size, and color of the pimples, they can be divided into four levels and three degrees. The medical term for pimples is acne, and acne is classified as shown in Table 1:
  • For example, grade I acne is dominated by comedones, with a small number of papules and nodules, and the total number of skin lesions is less than 30.
  • stains are divided into freckles, chloasma, pregnancy spots, radiation spots, Ota nevus, lead mercury spots, coffee spots, age spots and sun spots.
  • the problem analysis can include that the red zone belongs to sunburn, red blood streak, inflammation, rosacea, etc.
  • the description corpus of eyebrow makeup corresponding to different face shapes as shown in FIG. 10d includes: description of the face shape and description corpus of which eyebrow shape the face is suitable for.
  • when the user selects the makeup mode, after the makeup suggestion is given, the user's made-up appearance can also be shown as a virtual image, so that the user can visualize the makeup effect.
  • the mobile phone interface 1010 can display a virtual face image of the user after makeup. The user can save the virtual face image through the save button 1011, so that the user can intuitively feel his or her appearance after makeup, which improves the user experience.
  • the aforementioned state analysis and suggestions for the user's facial area can be fed back to the user through voice playback, or presented to the user through interface display combined with voice broadcast, etc.
  • When it is not convenient for the user to watch the screen, for example while drawing eyebrows, the voice broadcast is more helpful for the interaction between the skin-detection-based device of this application and the user.
  • an embodiment of the present application also provides an electronic device.
  • the electronic device 1100 includes one or more memories 1101, one or more processors 1102 coupled to the memories 1101, at least one camera 1103 connected to the processors 1102, and one or more programs, where the one or more programs are stored in the memory 1101, and the electronic device 1100 is configured to perform the following steps:
  • the camera 1103 obtains multiple video frames simultaneously including the user's face and hands;
  • the processor 1102 recognizes the user's hand movement relative to the face in multiple video frames, and determines the target hand movement;
  • the processor 1102 determines the detection target in at least a part of the area on the user's face in the video frame;
  • the processor 1102 determines the extended content associated with the detection target and the shape of the detection target from the extended content library, and outputs the extended content through the display screen.
  • determining the target hand motion includes: the processor 1102 determines that the position of the fingertip of the finger in the video frame is less than a preset distance from the position of the face, and then determines that the hand motion is the target hand motion.
  • determining the target hand motion includes: the processor 1102 determines that the position of the fingertip of the finger in the video frame is less than a preset distance from the position of the face, and determines that the duration for which the finger in the video frame remains relatively still relative to the face is greater than the preset time, and then determines that the hand motion is the target hand motion.
  • determining the target hand motion includes: the processor 1102 determines that the video includes two hands, determines that the positions of the fingertips of the fingers of the two hands in the video frame are less than the preset distance from the position of the face, and determines that the duration for which the fingers of the two hands remain relatively still relative to the face in the video frame is greater than the preset time, and then determines that the hand motion is the target hand motion.
  • the processor 1102 determining that the position of the finger in the video frame is less than the preset distance from the position of the face includes: the area where the user's finger is located in the video frame overlaps the area where the face is located, or the finger in the video frame does not overlap the face, but the distance between the fingertip of the finger and the edge point of the face closest to the fingertip is less than the preset distance.
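  • The two cases of this distance test can be sketched as follows, assuming the recognizer supplies the face edge as a set of 2D points in centimetres and a flag for region overlap:

```python
import numpy as np

def fingertip_near_face(fingertip, face_edge_points, overlaps_face,
                        preset_cm=2.0):
    if overlaps_face:          # case 1: finger region overlaps the face region
        return True
    edges = np.asarray(face_edge_points, dtype=float)
    dists = np.linalg.norm(edges - np.asarray(fingertip, dtype=float), axis=1)
    return float(dists.min()) < preset_cm   # case 2: nearest face edge point
```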
  • the processor 1102 determines the detection target in at least a part of the area on the user's face in the video frame, including: from at least one of the multiple video frames, Determine the intersection area between the pointing area of the finger in the target hand motion and the area where the face is located, and determine the detection target in the intersection area.
  • the pointing area of the finger is a geometric figure determined with the fingertip of the finger as the reference point and the pointing direction of the finger as the reference direction, and the geometric figure has a size and contour preset by the user.
  • Geometric figures include any of trapezoids, sectors, triangles, circles, and squares.
  • when it is determined that the video includes two hands, the pointing area of each finger is a geometric figure determined with the fingertip of the finger as the reference point and the pointing direction of the finger as the reference direction, where the geometric figure is preset by the user, and the intersection or union of the pointing areas of the fingers of the two hands is used as the pointing area of the fingers.
  • Geometric figures include any of trapezoids, sectors, triangles, circles, and squares.
  • the face includes at least one preset ROI area; the processor 1102, in response to the target hand motion, determining the detection target in at least a partial area of the user's face in the video frame further includes: from at least one of the multiple video frames, determining the intersection area between the pointing area of the finger in the target hand motion and the ROI areas included in the face, and determining the detection target in the intersection area.
  • when it is determined that more than two ROI regions are covered in the intersection region, the detection target is determined from the ROI region with the largest coverage area in the intersection region.
  • when it is determined that more than two ROI regions are covered in the intersection region, based on the preset weight of each ROI region and the matching degree between the image feature of the ROI region and the feature standard model corresponding to that image feature, the image feature with the highest score in the ROI regions is determined as the detection target.
  • when it is determined that more than two ROI regions are covered in the intersection region, the method further includes: the processor determines the detection target based on the user's first operation on a detection target in an ROI region.
  • the detection target includes one or more skin conditions among skin color, pimples, fine lines, pores, blackheads, acne, plaques, and red blood streaks, or the detection target includes one or more of the nose, mouth, eyes, eyebrows, facial contour, and skin color.
  • the multiple video frames are consecutive multiple video frames within a preset duration.
  • the method further includes: the electronic device 1100 obtains a real-time image of the user through the camera 1103 of the electronic device, displays the real-time image of the user on the first interface, and obtains from the real-time image the video frames within the set duration that contain both the user's face and hands.
  • before acquiring the multiple video frames simultaneously including the user's face and hands, the method further includes: the electronic device determines, in response to the user's input operation, to execute the makeup mode or the skin care mode.
  • the electronic device 1100 outputting the extended content includes: the display screen displays a second interface that includes the detection target determined based on the target hand motion and the extended content corresponding to the shape of the detection target; or the speaker voice-broadcasts the detection target determined based on the target hand motion and the extended content corresponding to the shape of the detection target.
  • the extended content includes: one or more of problem analysis and care advice based on the detection target in the skin care state, or makeup state analysis and makeup advice based on the detection target in the makeup state One or more of.
  • the embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program; when the computer program is run by a processor, the processor executes the interaction method of the electronic device for skin detection shown in FIGS. 1 to 10e described above.
  • the embodiment of the present application also provides a computer program product containing instructions.
  • when the computer program product runs on an electronic device, the processor executes the interaction method of the electronic device for skin detection shown in FIGS. 1 to 10e.
  • the device 1200 may include one or more processors 1201 coupled to the controller hub 1203.
  • the controller hub 1203 communicates with the processor 1201 via a multi-branch bus such as a Front Side Bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or a similar connection 1206.
  • the processor 1201 executes instructions that control general types of data processing operations.
  • the controller hub 1203 includes, but is not limited to, a graphics memory controller hub (Graphics Memory Controller Hub, GMCH) (not shown) and an input/output hub (Input Output Hub, IOH) (which may be on a separate chip) (not shown), where the GMCH includes the memory and graphics controllers and is coupled to the IOH.
  • the device 1200 may also include a coprocessor 1202 and a memory 1204 coupled to the controller hub 1203.
  • in some embodiments, one or both of the memory and the GMCH may be integrated in the processor (as described in this application); in this case, the memory 1204 and the coprocessor 1202 are directly coupled to the processor 1201, and the controller hub 1203 and the IOH are in a single chip.
  • the memory 1204 may be, for example, a dynamic random access memory (Dynamic Random Access Memory, DRAM), a phase change memory (Phase Change Memory, PCM), or a combination of the two.
  • the coprocessor 1202 is a dedicated processor, such as, for example, a high-throughput Many Integrated Core (MIC) processor, a network or communication processor, a compression engine, a graphics processor, a general-purpose graphics processor (General Purpose Computing on GPU, GPGPU), or an embedded processor.
  • the optional properties of the coprocessor 1202 are shown in dashed lines in FIG. 12.
  • the memory 1204 may include one or more tangible, non-transitory computer-readable media for storing data and/or instructions.
  • the memory 1204 may include any suitable non-volatile memory, such as flash memory, and/or any suitable non-volatile storage device, such as one or more hard disk drives (Hard-Disk Drive, HDD(s)), one or more compact disc (CD) drives, and/or one or more digital versatile disc (Digital Versatile Disc, DVD) drives.
  • the device 1200 may further include a network interface (Network Interface Controller, NIC) 1206.
  • the network interface 1206 may include a transceiver, which is used to provide a radio interface for the device 1200 to communicate with any other suitable devices (such as a front-end module, an antenna, etc.).
  • the network interface 1206 may be integrated with other components of the device 1200.
  • the network interface 1206 can implement the function of the communication unit in the above-mentioned embodiment.
  • the device 1200 may further include an input/output (Input/Output, I/O) device 1205.
  • the I/O device 1205 may include: a user interface designed to enable users to interact with the device 1200; a peripheral component interface designed to enable peripheral components to also interact with the device 1200; and/or sensors designed to determine environmental conditions and/or location information related to the device 1200.
  • FIG. 12 is only exemplary. That is, although the device 1200 shown in FIG. 12 includes multiple components such as the processor 1201, the controller hub 1203, and the memory 1204, in actual applications a device using the methods of this application may include only some of the components of the device 1200, for example, only the processor 1201 and the NIC 1206. The properties of optional components in FIG. 12 are shown by dashed lines.
  • the memory 1204, as a computer-readable storage medium, stores instructions that, when executed on a computer, cause the system 1200 to execute the interaction method of the electronic device for skin detection shown in FIGS. 1 to 10e of the foregoing embodiments, which is not repeated here.
  • the SoC 1300 includes: an interconnection unit 1350, which is coupled to an application processor 1310; a system agent unit 1380; a bus controller unit 1390; an integrated memory controller unit 1340; a set of one or more coprocessors 1320, which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1330; and a direct memory access (DMA) unit 1360.
  • the coprocessor 1320 includes a dedicated processor, such as, for example, a network or communication processor, a compression engine, a GPGPU, a high-throughput MIC processor, or an embedded processor.
  • a static random access memory (SRAM) unit 1330 may include one or more computer-readable media for storing data and/or instructions.
  • the computer-readable storage medium may store instructions, specifically, temporary and permanent copies of the instructions.
  • the instructions may include: when executed by at least one unit in the processor, causing the SoC 1300 to execute the interaction method of the electronic device for skin detection shown in FIGS. 1 to 10e in the above embodiments; for details, refer to the method of the above embodiments, which is not repeated here.
  • the various embodiments of the mechanism disclosed in this application can be implemented in hardware, software, firmware, or a combination of these implementation methods.
  • the embodiments of the present application can be implemented as a computer program or program code executed on a programmable system, where the programmable system includes at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • Program codes can be applied to input instructions to perform the functions described in this application and generate output information.
  • the output information can be applied to one or more output devices in a known manner.
  • For the purposes of this application, the processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an Application-Specific Integrated Circuit (ASIC), or a microprocessor.
  • the program code can be implemented in a high-level programming language or an object-oriented programming language to communicate with the processing system.
  • assembly language or machine language can also be used to implement the program code.
  • the mechanisms described in this application are not limited in scope to any particular programming language; in any case, the language can be a compiled language or an interpreted language.
  • the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments can also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which can be read and executed by one or more processors.
  • the instructions can be distributed via a network or via other computer-readable media. Therefore, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (for example, a computer), including, but not limited to, floppy disks, optical disks, compact disc read-only memories (CD-ROMs), and the like.
  • a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (for example, a computer).
  • each unit/module mentioned in each device embodiment of this application is a logical unit/module.
  • physically, a logical unit/module can be a physical unit/module, a part of a physical unit/module, or a combination of multiple physical units/modules.
  • the physical implementation of these logical units/modules is not the most important; the combination of the functions implemented by these logical units/modules is the key to the solution of this application.
  • the above-mentioned device embodiments of this application do not introduce units/modules that are not closely related to solving the technical problem raised by this application; this does not mean that no other units/modules exist in the above-mentioned device embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Neurology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application provides an interaction method of an electronic device for skin detection and an electronic device. The method determines a target hand motion by recognizing the user's hand motion and face, and determines, based on the target hand motion, the detection target at which the user's hand motion is directed. Based on the detection target and the shape of the detection target, the electronic device determines, from an extended content library, the extended content associated with the detection target and its shape, and outputs the extended content. According to the interaction method of the electronic device for skin detection and the electronic device of the embodiments of this application, the user's gesture and the intention indicated by the gesture can be accurately recognized from real-time images of the user during the skin care or makeup process, and the skin state information corresponding to the gesture and treatment suggestions for skin care or makeup can be given. This makes the interaction process more natural and smooth, and improves the user experience.

Description

Interaction method of an electronic device for skin detection and electronic device
This application claims priority to the Chinese patent application filed with the China Patent Office on May 27, 2020, with application number 202010459215.5 and application name "Interaction method of electronic device for skin detection and electronic device", the entire content of which is incorporated by reference in this application.
Technical Field
This application relates to the technical field of software applications, and in particular, to an interaction method of an electronic device for skin detection and an electronic device.
Background Art
At present, there are some products on the market that perform skin detection based on image technology. These products analyze the user's skin state by detecting the user's images and can provide customers with a good experience. However, existing skin detection products need to comprehensively detect the entire skin in the acquired image; after obtaining the detection result, the user needs to enter a designated function page to view the analysis result. For example, if the user wants to view skin color, the user clicks the skin color function option and enters the skin color analysis page to view the skin color result. Such products require the customer to perform multiple click operations, which is cumbersome. Some existing products also require additional hardware accessories, such as beauty detectors; when operating the hardware accessories, they may touch problematic skin areas, causing bacterial transmission and cross-infection. In another application of existing products, the user needs to manually select skin care or makeup products or makeup techniques; during makeup, it is not convenient for the user's hands to touch the screen for selection operations.
Summary of the Invention
In view of this, this application provides an interaction method of an electronic device for skin detection and an electronic device, which can accurately recognize the user's gesture and the intention indicated by the gesture from real-time images of the user during the skin care or makeup process, and give the skin state information corresponding to the gesture and treatment suggestions for skin care or makeup, making the interaction process more natural and smooth and improving the user experience.
Some embodiments of this application provide an interaction method of an electronic device for skin detection. This application is described below from multiple aspects, and the implementations and beneficial effects of the following aspects can be referred to each other.
In a first aspect, this application provides an interaction method of an electronic device for skin detection, applied to an electronic device. The method includes: acquiring multiple video frames simultaneously including the user's face and hands, such as a real-time image of the user; recognizing the motion of the user's hand relative to the face in the multiple video frames, and determining a target hand motion; in response to the target hand motion, determining a detection target in at least a partial area of the user's face in the video frames, where the detection target may include one or more of pimples, fine lines, pores, blackheads, acne, plaques, red blood streaks, nose, mouth, eyes, eyebrows, facial contour, skin color, and the like; and, based on the detection target and the shape of the detection target (for example, the eyebrow shape is a crescent shape, a splayed shape, etc.), determining from an extended content library the extended content associated with the detection target and the shape of the detection target, and outputting the extended content. The extended content may include: one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup advice for the detection target in the makeup state. The embodiments of this application can accurately recognize the user's gesture and the intention indicated by the gesture from real-time images of the user during the skin care or makeup process, and give the skin state information corresponding to the gesture and treatment suggestions for skin care or makeup, making the interaction process between the user and the device more natural and smooth and improving the user experience.
In a possible implementation of the above first aspect, determining the target hand motion includes: the electronic device determines that the position of the fingertip of the finger in the video frame is less than a preset distance from the position of the face, and then determines that the hand motion is the target hand motion. Using the distance between the finger and the face as the criterion for the target motion can avoid erroneous recognition of unnecessary gestures and misjudgment of the user's intention.
In a possible implementation of the above first aspect, determining the target hand motion includes: determining that the position of the fingertip of the finger in the video frame is less than a preset distance from the position of the face, and determining that the duration for which the finger remains relatively still relative to the face in the video frame is greater than a preset time, and then determining that the hand motion is the target hand motion. This further improves the accuracy of judging the user's intention based on the gesture.
In a possible implementation of the above first aspect, determining the target hand motion includes: determining that the video includes two hands, determining that the positions of the fingertips of the fingers of the two hands in the video frame are less than the preset distance from the position of the face, and determining that the duration for which the fingers of the two hands remain relatively still relative to the face in the video frame is greater than the preset time, and then determining that the hand motion is the target hand motion.
In a possible implementation of the above first aspect, when the amplitude of the relative movement of the fingers relative to the face is less than a preset value, it is determined that the fingers of the two hands in the video frame are relatively still relative to the face. This can improve the accuracy of gesture judgment.
In a possible implementation of the above first aspect, the electronic device determining that the position of the finger in the video frame is less than the preset distance from the position of the face includes: the area where the user's finger is located in the video frame overlaps the area where the face is located, or the finger in the video frame does not overlap the face, but the distance between the fingertip of the finger and the edge point of the face closest to the fingertip is less than the preset distance.
In a possible implementation of the above first aspect, the electronic device, in response to the target hand motion, determining the detection target in at least a partial area of the user's face in the video frame includes: from at least one of the multiple video frames, determining the intersection area between the pointing area of the finger in the target hand motion and the area where the face is located, and determining the detection target within the intersection area. By determining the detection target in the intersection area, the user can find the detection target directly through the pointing of the finger, which gives the user an intuitive and natural experience and improves the interaction experience.
In a possible implementation of the above first aspect, the pointing area of the finger is a geometric figure determined with the fingertip of the finger as the reference point and the pointing direction of the finger as the reference direction, and the geometric figure has a size and contour preset by the user.
In a possible implementation of the above first aspect, when it is determined that the video includes two hands, the pointing area of each finger is a geometric figure determined with the fingertip of the finger as the reference point and the pointing direction of the finger as the reference direction, where the geometric figure is preset by the user, and the intersection or union of the pointing areas of the fingers of the two hands is used as the pointing area of the fingers.
In a possible implementation of the above first aspect, the geometric figure includes any of a trapezoid, a sector, a triangle, a circle, and a square.
In a possible implementation of the above first aspect, the face includes at least one preset ROI area; the electronic device, in response to the target hand motion, determining the detection target in at least a partial area of the user's face in the video frame further includes: from at least one of the multiple video frames, determining the intersection area between the pointing area of the finger in the target hand motion and the ROI areas included in the face, and determining the detection target within the intersection area. The ROI areas may include the forehead, the bridge of the nose, the philtrum, the chin, the cheeks, the under-eye area, the apple muscles, etc. Determining the detection target in the intersection of the finger pointing and the ROI areas of the user's face makes it possible to analyze the area designated by the user in combination with the ROI areas of the face, improving the accuracy of the electronic device's analysis of the detection target and making the interaction more interesting.
In a possible implementation of the above first aspect, when it is determined that more than two ROI areas are covered in the intersection area, the detection target is determined from the ROI area with the largest coverage area in the intersection area.
In a possible implementation of the above first aspect, when it is determined that more than two ROI areas are covered in the intersection area, one of the ROI areas is selected based on the preset priority of the ROI areas and/or the matching degree between the detection target of an ROI area and the feature standard model corresponding to the detection target, and the detection target is determined from the selected ROI area.
In a possible implementation of the above first aspect, when it is determined that more than two ROI areas are covered in the intersection area, the method further includes: the electronic device determines the detection target based on the user's first operation on a detection target in an ROI area. The user can directly confirm the detection target by clicking or similar means based on his or her own observation, which makes it easy for the user to choose according to subjective opinion and improves the interaction experience.
In a possible implementation of the above first aspect, the detection target includes one or more skin conditions among pimples, fine lines, pores, blackheads, acne, plaques, and red blood streaks, or the detection target includes one or more of the nose, mouth, eyes, eyebrows, facial contour, and skin color.
In a possible implementation of the above first aspect, the multiple video frames are consecutive video frames within a preset duration.
In a possible implementation of the above first aspect, the method further includes: the electronic device acquires a real-time image of the user through the camera of the electronic device, displays the real-time image of the user on the first interface, and acquires from the real-time image the video frames within the set duration that contain both the user's face and hands. The user can make hand motions while watching his or her own face, which makes the operation more intuitive and convenient.
In a possible implementation of the above first aspect, before acquiring the multiple video frames simultaneously including the user's face and hands, the method further includes: the electronic device determines, in response to the user's input operation, to execute the makeup mode or the skin care mode. That is, the user can first confirm whether he or she wants to apply makeup or care for the skin, and after the electronic device confirms the state designated by the user, it can give targeted makeup or skin care suggestions according to the user's gestures.
In a possible implementation of the above first aspect, the electronic device outputting the extended content includes: the electronic device displays a second interface that includes the detection target determined based on the target hand motion and the extended content corresponding to the shape of the detection target; or the electronic device voice-broadcasts the detection target determined based on the target hand motion and the extended content corresponding to the shape of the detection target. This presentation is more intuitive and makes it easier for the user to follow the skin care or makeup suggestions given.
In a possible implementation of the above first aspect, the extended content includes: one or more of problem analysis and care advice for the detection target in the skin care state, or one or more of makeup state analysis and makeup advice for the detection target in the makeup state.
第二方面,本申请还提供一种用于肌肤检测的装置,该装置包括:获取模块,用于获取同时包括用户的面部和手部的多个视频帧,如,用户的实时图像;处理模块,通过识别模块识别多个视频帧中的用户的手部相对于面部的动作,并确定目标手部动作;处理模块,响应于目标手部动作,确定视频帧中的用户的面部上至少部分区域中的检测目标,其中检测目标可以包括痘痘、细纹、毛孔、黑头、痤疮、斑块、红血丝、鼻子、嘴巴、眼睛、眉毛、脸部轮廓和皮肤颜色等中的一种或多种。处理模块,基于检测目标及检测目标的形态,如,眉毛的形态为弯月形、八字形等,从扩展内容库中确定与该检测目标及检测目标的形态关联的扩展内容,并输出扩展内容,扩展内容可以包括:基于护肤状态时的检测目标的问题分析和护理建议中的一种或多种,或者,基于化妆状态时的检测目标的化妆状态分析和化妆建议中的一种或多种。本申请实施例,能够通过用户在护肤或化妆过程中的实时图像,准确识别出用户的手势以及该手势表明的意图,给出该手势对应的皮肤状态信息及皮 肤护理或化妆的处理建议。使得用户与设备的交互过程更加自然流畅,提升用户的体验。
在上述第二方面的一种可能的实现中,确定目标手部动作,包括:处理模块确定视频帧中的手指的指尖所在位置距离面部所在位置小于预设距离,则确定手部动作为目标手部动作。通过确定手指与面部的距离的设定作为目标动作,可以避免一些不必要的手势的错误的识别,而将误判用户的意图。
在上述第二方面的一种可能的实现中,确定目标手部动作,包括:处理模块确定视频帧中的手指的指尖所在位置距离面部所在位置小于预设距离,并且确定视频帧中手指相对于面部相对静止持续的时间大于预设时间,则确定手部动作为目标手部动作。进一步提高基于手势判断用户意图的准确性。
在上述第二方面的一种可能的实现中,确定目标手部动作,包括:处理模块确定视频中包括两只手;并且确定视频帧中的两只手的手指的指尖所在位置距离面部所在位置小于预设距离,并且确定视频帧中两只手的手指相对于面部相对静止持续的时间大于预设时间,则确定手部动作为目标手部动作。
在上述第二方面的一种可能的实现中,当手指相对于面部的相对运动的幅度小于预设值时,处理模块确定视频帧中两只手的手指相对于面部相对静止。可以提高手势判断的准确性。
在上述第二方面的一种可能的实现中,处理模块确定视频帧中的手指所在位置距离面部所在位置小于预设距离,包括:视频帧中的用户的手指所在位置区域面部所在位置区域重叠,或者,视频帧中的手指与面部不重叠,但所属手指的指尖与面部距离手指指尖最近的边沿点之间的距离小于预设距离。
在上述第二方面的一种可能的实现中,处理模块响应于目标手部动作,确定视频帧中的用户的面部上至少部分区域中的检测目标,包括:从多个视频帧中的至少一个视频帧中,确定目标手部动作中的手指指向区域与面部所在区域的交集区域,并确定交集区域内的检测目标。通过确定交集区域的检查目标,用户可以直接通过手指的指向找到检测目标,便于用户直观的自然的感受,提升用户的交互体验。
在上述第二方面的一种可能的实现中,,手指的指向区域为以手指的指尖为基准点,手指的指向为基准方向确定的几何图形,并且几何图形具有用户预先设定的大小和轮廓。
在上述第二方面的一种可能的实现中,当确定视频中包括两只手;手指的指向区域为以手指的指尖为基准点,手指的指向为基准方向确定的几何图形,其中几何图形是用户预先设定的几何图形。两只手的手指的指向区域的交集或并集作为手指的指向区域。
在上述第二方面的一种可能的实现中,几何图形包括梯形、扇形、三角形、圆形、方形中的任一种。
在上述第二方面的一种可能的实现中,面部包括至少一个预设的ROI区域;处理模块响应于目标手部动作,确定视频帧中的用户的面部上至少部分区域中的检测目标,进一步包括:从多个视频帧中的至少一个视频帧中,确定目标手部动作中的手指指向区域与面部中包括的ROI区域的交集区域,并确定交集区域内的检测目标。其中,ROI区域可分为额头、鼻梁、人中、下巴、双颊、眼下、苹果肌等。通过在手指指向与用户脸部的ROI区域的交集区域确定检测目标,以便于在结合脸部的ROI区域,对用户的指定的区域进行分析,提高电子设备对检测目标分析的准确,以及提高交互的趣味性。
在上述第二方面的一种可能的实现中,当确定交集区域内覆盖两个以上ROI区域时,从交集区域内的面积最大的ROI区域内确定检测目标。
在上述第二方面的一种可能的实现中,当确定交集区域内覆盖两个以上ROI区域时,基于ROI区域的预设优先级和/或ROI区域的检测目标与检测目标对应的特征标准模型的匹配度,选择其中一个ROI区域,并从选择的ROI区域内确定检测目标。
在上述第二方面的一种可能的实现中，当确定交集区域内覆盖两个以上ROI区域时，进一步包括：处理模块基于用户对ROI区域内的检测目标的第一操作，确定检测目标。用户可以根据自己观察的结果直接通过点击等方式确认检测目标，便于用户根据自己的主观意见选择，提高用户的交互体验。
在上述第二方面的一种可能的实现中,检测目标包括痘痘、细纹、毛孔、黑头、痤疮、斑块和红血丝中的一种或多种皮肤状态,或者,检测目标包括鼻子、嘴巴、眼睛、眉毛、脸部轮廓、皮肤颜色中的一种或多种。
在上述第二方面的一种可能的实现中,多个视频帧是预设时长内连续的多个视频帧。
在上述第二方面的一种可能的实现中,该装置还包括:处理模块通过获取模块获取用户的实时影像,并在显示模块的第一界面显示用户的实时影像,以及从实时影像中获取同时具有用户的面部和手部的设定时长内的视频帧。用户可以一边观看自己的面部情况,一边作出手部动作,操作更加直观、便于操作。
在上述第二方面的一种可能的实现中,获取同时包括用户的面部和手部的多个视频帧之前,还包括:处理模块响应于用户的输入操作,确定执行化妆模式或执行护肤模式。也就是说,用户可以首先确认自己是要化妆或者是护肤,电子设备确认用户指定的状态后,以便于电子设备根据用户的手势给出针对性的化妆或护肤的建议。
在上述第二方面的一种可能的实现中，处理模块输出扩展内容，包括：通过显示模块显示第二界面，第二界面中包括基于目标手部动作确定的检测目标及检测目标的形态对应的扩展内容；或者，电子设备语音播报基于目标手部动作确定的检测目标及检测目标的形态对应的扩展内容。该展示方式，更加直观，便于向用户给出护肤或化妆的建议。
在上述第二方面的一种可能的实现中,扩展内容包括:基于护肤状态时的检测目标的问题分析和护理建议中的一种或多种,或者,基于化妆状态时的检测目标的化妆状态分析和化妆建议中的一种或多种。
第三方面,本申请实施例还提供了一种电子设备,包含一个或多个存储器,与存储器耦合的一个或多个处理器,以及一个或多个程序,其中一个或多个程序被存储在存储器中,电子设备用于执行上述第一方面实施例的方法。
第四方面,本申请实施例提供了一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序被处理器运行时,使得处理器执行上述第一方面实施例的方法。
第五方面,本申请实施例提供了一种包含指令的计算机程序产品,当该计算机程序产品在电子设备上运行时,使得处理器执行上述第一方面实施例的方法。
附图说明
图1为本申请一个实施例的用户使用手机的交互的场景图;
图2a为本申请一个实施例的用户使用手机的交互的场景图;
图2b为本申请一个实施例的用户单手指向面部区域的多个场景图;
图2c为本申请一个实施例的用户双手指向面部区域的多个场景图;
图3为本申请一个实施例的手机的结构示意图;
图4为本申请一个实施例的手机的软件结构框图;
图5为本申请一个实施例用于肌肤检测的电子设备的交互方法的流程图;
图6a为本申请一个实施例的用户选择模式的用户界面示意图;
图6b为本申请一个实施例的用户在化妆模式下的实时影像的用户界面示意图;
图6c为本申请一个实施例的单手手指的指尖与面部区域的位置关系示意图;
图6d为本申请一个实施例的单手手指的指尖与面部区域的位置关系示意图;
图7a为本申请一个实施例的单手手指的指向区域与面部的交集区域的示意图;
图7b为本申请一个实施例的双手手指的指向区域与面部的交集区域的示意图;
图8a为本申请一个实施例的人脸ROI区域划分的示意图;
图8b为本申请一个实施例的交集区域内覆盖多个ROI区域的示意图;
图8c为本申请一个实施例的手机的用户界面示意图;
图9a为本申请一个实施例的手机的用户界面示意图;
图9b为本申请一个实施例的痘痘的检测结果的用户界面示意图;
图9c为本申请一个实施例的眉毛的检测结果的用户界面示意图;
图10a为本申请一个实施例的痘痘分级图像及对应的描述语料的示意图;
图10b为本申请一个实施例的色斑类型的图像及对应的描述语料的示意图;
图10c为本申请一个实施例的红区图像及红区问题的描述语料的示意图;
图10d为本申请一个实施例的不同脸型对应的眉形的化妆的描述语料的示意图;
图10e为本申请一些实施例的虚拟的化妆后的用户的脸部图像的界面示意图;
图11为本申请一个实施例的电子设备的结构示意图;
图12为本申请一个实施例的设备的框图;
图13为本申请一实施例的SoC的框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
在本申请各实施例中,电子设备可以是手机、笔记本电脑、平板电脑、桌上型电脑、膝上型电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、手持计算机、上网本、个人数字助理(Personal Digital Assistant,PDA)、可穿戴电子设备、智能镜子等具有图像识别功能的设备。
下面以用户与手机的交互为例,结合具体的场景对本申请实施例进行说明。
图1和图2a示出了用户使用手机的交互的场景图，在该场景中用户想要通过与手机的交互实现护肤的效果。参考图1，该手机10设有与屏幕12位于同侧的前置摄像头11，通过摄像头11可以实时拍摄视频流或照片，拍摄的视频流实时在屏幕12上显示，用户可以通过屏幕12实时观察自己的手部动作和面部图像。参考图2a，用户用手指向自己面部的含有检测目标（图像特征）的面部区域，如指向含有痘痘的面部区域等，手机10获取预设时长的视频帧，并从该视频帧中识别出手指与脸部间的距离，以及保持这一距离维持的时间，进而确定用户是否想要依据该手部的动作了解护肤的状态。当确定该手部动作为目标手部动作，即该手部动作被确定为用户输入的想要护肤的动作指令，手机10进一步获取该目标手部动作对应的至少一张视频帧，并从该视频帧中的手指的指向区域与面部的交集区域内识别出图像特征，进而根据该图像特征输出对应的扩展内容。
在本申请的实施例中,检测目标可以包括从用户的视频帧中识别出的肤色、痘痘、细纹、毛孔、黑头、痤疮、斑块、红血丝、鼻子、眼睛、眉毛、嘴巴、下巴和额头等图像特征中的一种或多种。并且基于检测目标可以确定检测目标的形态,如眉毛形态、鼻子形态、嘴巴形状、下巴形状、额头形状和脸部轮廓等图像特征中的一种或多种。
在本申请的实施例中,手指的指向区域是指由电子设备以手指的指尖为基准点,手指的指向为基准方向确定的几何图形,该几何图形具有设定的大小和轮廓,其可以是用户预先设定,也可以是由电子设备预先存储的。该几何图形可以包括:梯形、扇形、三角形、圆形、方形中的任一种几何图形。在具体实现中,用户可根据实际情况自由定义大小和轮廓。
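为便于理解上述指向区域的构造方式，下面给出一段示意性的Python代码草图（其中的函数名与梯形尺寸参数均为示例性假设，并非本申请的正式实现），演示如何以指尖为基准点、手指指向为基准方向构造一个梯形的指向区域：

```python
import math

def pointing_region_trapezoid(tip, direction, near_w=20, far_w=80, length=120):
    """以指尖tip为基准点、手指指向direction为基准方向构造梯形指向区域，
    返回四个顶点坐标（像素）。near_w、far_w、length对应用户预先设定的
    大小和轮廓参数，此处取值仅为示例。"""
    dx, dy = direction
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm              # 指向方向的单位向量
    px, py = -uy, ux                           # 垂直于指向方向的单位向量
    x, y = tip
    fx, fy = x + ux * length, y + uy * length  # 梯形远端的中心点
    return [
        (x + px * near_w / 2, y + py * near_w / 2),  # 指尖一侧的两个顶点
        (x - px * near_w / 2, y - py * near_w / 2),
        (fx - px * far_w / 2, fy - py * far_w / 2),  # 远端的两个顶点
        (fx + px * far_w / 2, fy + py * far_w / 2),
    ]

# 用法示例：指尖位于(100, 200)，手指指向画面右上方
region = pointing_region_trapezoid((100, 200), (1, -1))
```

将几何图形换为扇形、三角形、圆形或方形时，只需按相同的基准点与基准方向生成对应轮廓即可。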
在本申请的实施例中，扩展内容可以包括基于护肤状态时的检测目标的状态分析和护理建议中的一种或多种，或者，基于化妆状态时的检测目标的化妆状态分析和化妆建议中的一种或多种。其可以存储在扩展内容库中，该扩展内容库可以存储在云端服务器中，电子设备与云端服务器进行通信，在电子设备需要扩展内容时可以从云端服务器中获取相应的内容。该扩展内容可以在云服务器端定期地更新，以向用户提供最前沿的化妆和护肤的知识点。在本申请的其他实施例中，扩展内容也可以直接存储在电子设备中，以便于电子设备随时调用。
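作为一种示意性的取用方式（以下代码中的键组织形式与fetch_from_cloud接口均为假设，仅用于说明“先查本地、未命中再查云端并回填”的思路），扩展内容库可以按（模式，检测目标，形态）为键组织：

```python
# 示意性草图：扩展内容库按 (模式, 检测目标, 形态) 为键组织，
# 本地未命中时再向云端服务器请求，fetch_from_cloud 为假设的接口。
LOCAL_LIBRARY = {
    ("护肤", "痘痘", "中度(II级)"): "炎性丘疹，建议外用抗生素……",
    ("化妆", "眉毛", "柔和眉形"): "可适当将眉峰拉高，尾部稍加修饰……",
}

def get_extended_content(mode, target, form, fetch_from_cloud=None):
    key = (mode, target, form)
    content = LOCAL_LIBRARY.get(key)
    if content is None and fetch_from_cloud is not None:
        content = fetch_from_cloud(key)    # 云端可定期更新知识点
        if content is not None:
            LOCAL_LIBRARY[key] = content   # 回填本地，便于电子设备随时调用
    return content
```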
在本申请的实施例中，用户的手部动作可以包括单手的手指指向检测目标所在的面部区域，参考图2b，图2b示出了用户单手指向面部区域的多个场景图。该场景中包括用户通过单手的手指指向面部的痘痘、指向色斑、指向皱纹及指向鼻子等手部动作。也可以包括双手的手指指向检测目标所在的面部区域，参考图2c，图2c示出了用户双手手指指向面部区域的多个场景图。如图2c所示，该场景包括用户通过双手手指指向面部的痘痘、指向色斑、指向皱纹或指向眉毛等手部动作。本申请的手指指向面部区域仅是示例性的说明，本申请还可以指向其他的部位，如嘴巴、下巴，指向红区、黑头等，在此并不作为限定。
根据本申请的实施例，手机可以通过用户的手部动作直接了解用户的意图，不需要对用户的整张图像进行检测，仅对用户指定的区域进行检测。用户在照镜子的状态下，通过手指指向面部的某个区域，就可以得到与该区域内的图像特征相关的知识点。如，在护肤时，获取皮肤状态（如痘痘）、级别和护理建议等；在化妆时，可以获取化妆的建议等。使得交互过程更加简单、顺畅、自然。
下面将结合附图介绍本申请以下实施例中提供的示例性手机。
图3示出了手机的结构示意图。该手机100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接头130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber  identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本申请实施例示意的结构并不构成对手机100的具体限定。在本申请另一些实施例中,手机100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
处理器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器，用于存储指令和数据。在一些实施例中，处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据，可从所述存储器中直接调用。避免了重复存取，减少了处理器110的等待时间，因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口。
可以理解的是,本申请实施例示意的各模块间的接口,只是示意性说明,并不构成对手机100的结构限定。在本申请另一些实施例中,手机100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
USB接头130是一种符合USB标准规范的连接器,可以用来连接手机100和***设备,具体可以是标准USB接头(例如Type C接头),Mini USB接头,Micro USB接头等。USB接头130可以用于连接充电器为手机100充电,也可以用于手机100与***设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接头还可以用于连接其他手机,例如AR设备等。在一些实施方案中,处理器110可以支持通用串行总线(Universal Serial Bus),通用串行总线的标准规范可以为USB1.x,USB2.0,USB3.x,USB4。
充电管理模块140用于从充电器接收充电输入。其中，充电器可以是无线充电器，也可以是有线充电器。在一些有线充电的实施例中，充电管理模块140可以通过USB接头130接收有线充电器的充电输入。在一些无线充电的实施例中，充电管理模块140可以通过手机100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时，还可以通过电源管理模块141为手机供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
手机100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。手机100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在手机100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
在一个实施例中,移动通信模块150可以与云端服务器通信连接,以使得处理器110从云端服务器获取与图像特征对应的扩展内容,如,基于护肤状态时的所述检测目标的问题分析和护理建议中的一种或多种,或者,基于化妆状态时的所述检测目标的化妆状态分析和化妆建议中的一种或多种等知识点。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在手机100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星***(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中，手机100的天线1和移动通信模块150耦合，天线2和无线通信模块160耦合，使得手机100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM)，通用分组无线服务(general packet radio service,GPRS)，码分多址接入(code division multiple access,CDMA)，宽带码分多址(wideband code division multiple access,WCDMA)，时分码分多址(time-division code division multiple access,TD-SCDMA)，长期演进(long term evolution,LTE)，BT，GNSS，WLAN，NFC，FM，和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS)，全球导航卫星系统(global navigation satellite system,GLONASS)，北斗卫星导航系统(beidou navigation satellite system,BDS)，准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
手机100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像，视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，手机100可以包括1个或N个显示屏194，N为大于1的正整数。
在一个实施例中,显示屏194可以用于显示用户的图像或视频,或者显示文字信息以提醒用户当前需要进行的动作,以使用户按照指示的文字信息面对摄像头作出对应的动作,使得处理器110根据摄像头获取的影像判断用户处于苏醒状态下,并将该状态下的用户的瞳孔信息保存,作为在手机100解锁的过程中用于比对的用户的瞳孔模型,其中,瞳孔信息可以是瞳孔的深度信息(如,3D图像数据),瞳孔模型可以是瞳孔深度模型(人脸3D模型)。也可以在处理器110响应于接收的用户的解锁指令时,显示未解锁时的界面,界面中可以包括人脸输入框,或提示用户解锁的文字信息等。也可以在处理器110在执行解锁操作后,显示用户可以直接操作的界面,或者处理器110在执行禁止解锁操作时,显示解锁失败的界面等。
手机100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体 (complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,手机100可以包括1个或N个摄像头193,N为大于1的正整数。
在一个实施例中，摄像头193采集同时具有用户的面部、手部动作等的视频或静态图像（视频帧），以使手机100能够从视频中的多个视频帧中确定用户的目标手部动作，以及目标手部动作指定的图像特征，处理器110根据图像特征调取对应的基于护肤状态时的所述检测目标的问题分析和护理建议中的一种或多种，或者，基于化妆状态时的所述检测目标的化妆状态分析和化妆建议中的一种或多种。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当手机100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。手机100可以支持一种或多种视频编解码器。这样,手机100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现手机100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
在一个实施例中,NPU可以实现手机100对瞳孔信息的识别、指纹的识别、步态识别或者声音识别等生物特征的识别,以使得手机100能够通过各种基于生物特征的识别技术实现对自身进行解锁或禁止解锁。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展手机100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码，所述可执行程序代码包括指令。内部存储器121可以包括存储程序区和存储数据区。其中，存储程序区可存储操作系统，至少一个功能所需的应用程序(比如声音播放功能，图像播放功能等)等。存储数据区可存储手机100使用过程中所创建的数据(比如音频数据，电话本等)等。此外，内部存储器121可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件，闪存器件，通用闪存存储器(universal flash storage,UFS)等。处理器110通过运行存储在内部存储器121的指令，和/或存储在设置于处理器中的存储器的指令，执行手机100的各种功能应用以及数据处理。
在一些实施例中，处理器110可以调用内部存储器121中的指令以使得所述手机100依次执行根据本申请的实施例的用于肌肤检测的电子设备的交互方法。该方法具体包括：通过开启摄像头193获取同时包括用户的面部和手部的多个视频帧；处理器110识别所述多个视频帧中的用户的手部相对于所述面部的动作，并确定目标手部动作；处理器110响应于所述目标手部动作，确定所述视频帧中的用户的面部上至少部分区域中的检测目标；处理器110基于所述检测目标及检测目标的形态，从扩展内容库中确定与该检测目标及检测目标的形态关联的扩展内容，并输出所述扩展内容。用户可以通过手部动作直接获取想要了解的知识点，交互过程简单、且更加顺畅。
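上述交互流程可以用如下的Python代码草图概括（read_frames、detect_gesture、detect_target与library均为调用方注入的假设接口，仅示意各步骤的先后关系，并非本申请的正式实现）：

```python
def interaction_loop(read_frames, detect_gesture, detect_target, library):
    """示意性主循环：read_frames()返回同时包括面部和手部的连续视频帧，
    detect_gesture(frames)返回目标手部动作或None，
    detect_target(frames, gesture)返回(检测目标, 形态)，
    library为以(检测目标, 形态)为键的扩展内容库。"""
    frames = read_frames()                         # 获取多个视频帧
    gesture = detect_gesture(frames)               # 识别手部相对面部的动作
    if gesture is None:
        return None                                # 未构成目标手部动作
    target, form = detect_target(frames, gesture)  # 面部至少部分区域中的检测目标
    return library.get((target, form))             # 输出关联的扩展内容
```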
上述的内部存储器121和/或外部存储区域内可以存储用户的扩展内容，处理器110确认检测目标后，可以直接调取与该检测目标对应的扩展内容，例如，基于护肤状态时的所述检测目标的问题分析和护理建议中的一种或多种，或者，基于化妆状态时的所述检测目标的化妆状态分析和化妆建议中的一种或多种。
手机100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。手机100可以通过扬声器170A收听音乐,或收听免提通话。
在一个实施例中,扬声器170A可以播放语音信息以告知用户当前的手部动作对应的检测目标及检测目标的形态对应的扩展内容。以便于用户通过语音了解护肤或化妆的知识点。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。手机100通过发光二极管向外发射红外光。手机100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定手机100附近有物体。当检测到不充分的反射光时,手机100可以确定手机100附近没有物体。手机100可以利用接近光传感器180G检测用户手持手机100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
在一些实施例中，当有物体（人脸或者手指等）靠近手机100时，接近光传感器180G感应到有物体靠近手机100，从而向手机100的处理器110发出有物体靠近的信号。处理器110接收该有物体靠近的信号，并控制显示屏194亮起，或者直接通过摄像头193采集物体的视频以便于处理器110根据这些视频判断目标手部动作，以及基于目标手部动作确定检测目标。
环境光传感器180L用于感知环境光亮度。手机100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测手机100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。手机100可以利用采集的指纹特性实现指纹解锁,以实现用户身份识别,并获得相应权限,例如访问应用锁,指纹拍照,指纹接听来电等。
手机100的软件系统可以采用分层架构，事件驱动架构，微核架构，微服务架构，或云架构。本申请实施例以分层架构的Android系统为例，示例性说明手机100的软件结构。
图4是本申请实施例的手机100的软件结构框图。
分层架构将软件分成若干个层，每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中，将Android系统分为四层，从上至下分别为应用程序层，应用程序框架层，安卓运行时(Android runtime)和系统库，以及内核层。
应用程序层可以包括一系列应用程序包。
如图4所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图4所示,应用程序框架层可以包括窗口管理器,内容提供器,视图***,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件，例如显示文字的控件，显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成。例如，包括短信通知图标的显示界面，可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供手机100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息，可以用于传达告知类型的消息，可以短暂停留后自动消失，无需用户交互。比如通知管理器被用于告知下载完成，消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知，例如后台运行的应用程序的通知，还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息，发出提示音，手机振动，指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如：表面管理器(surface manager)，媒体库(Media Libraries)，三维图形处理库(例如：OpenGL ES)，2D图形引擎(例如：SGL)等。
表面管理器用于对显示子系统进行管理，并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动，摄像头驱动，音频驱动，传感器驱动。
下面结合在手机100中进行化妆或护肤的场景,以及图3-图4的图示,示例性说明手机100软件以及硬件的工作流程。
该手机100具有摄像头193，当触摸传感器180K接收到触控操作，相应的硬件中断被发给内核层。内核层将触控操作加工成原始输入事件（包括触摸坐标，触控操作的时间戳等信息）。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件，识别该原始输入事件所对应的控件，以该触控操作是触摸单击操作，该单击操作所对应的控件为化妆和护肤应用图标的控件为例，护肤/化妆类应用程序调用应用框架层的接口，启动护肤/化妆类应用程序，进而通过调用内核层启动摄像头驱动，通过摄像头193捕获用户的面部和手部动作的静态图像或视频。在应用程序框架层进行图像识别，并确定检测目标，当检测目标确定后，系统库调用应用程序框架层的内容提供器以获取与检测目标对应的扩展内容，并调用内核层显示驱动，以使显示屏194显示与扩展内容相关的界面。
下面以具体的实施例来介绍根据本申请的用于肌肤检测的电子设备的交互方法,该方法应用于电子设备,以下将手机作为电子设备,对本申请的用于肌肤检测的电子设备的交互过程进行详细的描述。参考图5,图5示出了根据本申请的用于肌肤检测的电子设备的交互方法的流程图,如图5所示,该交互方法可以包括以下步骤:
步骤S500，手机处理器确定化妆或护肤模式。用户可以根据自身的需要手动选择化妆或者护肤的模式，并在处理器接收到用户选择的模式后，执行步骤S510。在根据本申请的实施例中，如上述的处理器确定化妆或护肤模式，以及执行如图5所示的各种处理时，实质上意欲表示的是处理器通过执行存储在手机存储区内的应用程序来执行各种处理。
步骤S510,摄像头获取用户的实时影像。
该实时影像可以通过前置摄像头获取,也可以通过后置摄像头获取,当用户想通过显示屏看到自己的面部时,可以采用前置摄像头,以便于用户可以看到自身的实时影像。
步骤S520,处理器从实时影像中获取同时包括用户的面部和手部的多个视频帧。
多个视频帧可以为5秒内的多个连续的视频帧，处理器通过获取5秒内的多个连续的视频帧，以便于识别出用户的手部动作。其中，面部和手部的识别可以采用现有的识别技术进行识别，本申请中不再详细介绍。
步骤S530，处理器判断视频帧中的手指的指尖所在位置距离面部所在位置是否小于预设距离，例如，手指的指尖所在位置距离面部所在位置是否小于2cm。其中，手指可以是单手的手指，也可以是双手的手指。当手指的指尖所在位置距离面部所在位置小于2cm，则手机执行步骤S540，当大于等于预设距离，则手机执行步骤S520。
步骤S540，处理器判断视频帧中手指相对于面部相对静止持续的时间是否大于预设时间。例如，判断视频帧中手指相对于面部相对静止持续的时间是否大于3秒，当大于3秒，则手机执行步骤S550，当小于等于3秒，则执行步骤S520。
步骤S550,处理器确定该手部动作为目标手部动作。当处理器确定目标手部动作后,将该目标手部动作作为用户输入的指令,以执行步骤S560。
步骤S560，处理器响应于目标手部动作确定检测目标。其中，确定检测目标的具体过程可在下面的实施例中详细说明，具体可参见下面实施例中对步骤S560的详细描述。步骤S570，处理器基于检测目标及检测目标的形态输出与其关联的扩展内容。扩展内容可以包括基于护肤状态时的检测目标的问题分析和护理建议中的一种或多种，或者，基于化妆状态时的检测目标的化妆状态分析和化妆建议中的一种或多种。
在本申请的具体实施方式中，如上述的连续视频帧的长度为5秒、手指的指尖所在位置距离面部所在位置是否小于2cm、手指相对于面部相对静止持续的时间是否大于3秒，这些参数为手机中应用程序在制作时预设的，这里的时间长度和距离的值不限于此，也可以是其他的值，诸如，连续视频帧的长度可以为10秒、20秒等，手指相对于面部相对静止持续的时间的预设值可以为2秒、4秒等，手指的指尖所在位置距离面部所在位置的预设距离可以为1cm、3cm等；应用程序也可以允许用户更改该预设时间的长度，或该预设距离的大小。
在本申请的另一个实施例中，上述方法的步骤中处理器也可以不执行步骤S540，也就是说，在步骤S530，当处理器确定手指的指尖所在位置距离面部所在位置小于预设距离2cm后，直接执行步骤S550。
根据本申请实施例的用于肌肤检测的电子设备的交互方法,能够通过用户在护肤或化妆过程中的实时图像,准确识别出用户的手势以及该手势表明的意图,给出该手势对应的皮肤状态信息及皮肤护理或化妆的处理建议。使得用户与设备的交互过程更加自然流畅,提升用户的体验。
下面结合附图和手机用户界面的具体实施例对上述步骤S500-S570进行详细的说明。
图5所示的各步骤可以在手机中实施。对于其中的各步骤、确定模式、判断等步骤由手机的处理器通过运行应用程序来执行,获取用户图像等步骤可以是手机摄像头在处理器的指示下执行。
参考图5,在步骤S500,确定化妆或护肤模式。
参考图6a，图6a示出了用户选择模式的用户界面示意图，如图6a所示，该用户界面610包括用户护肤或化妆的选项区611和导航栏612，选项区611中设有化妆和护肤的选项框，用户可以通过点选“我要化妆”，使手机进入化妆模式，或通过点选“我要护肤”进入护肤模式。导航栏612中可以设置有个人中心，返回主页面、搜索等图标，用户通过点选这些图标，进入相应的页面内，例如，通过点选个人中心进入个人的主页面，观察自己的个人信息、历史留存的化妆前后的照片和粉丝情况等。
在步骤S510，获取用户的实时影像。当手机确认步骤S500输入的模式后，通过开启摄像头获取用户的实时图像，并且为了便于用户与手机更好地交互，可以开启前置摄像头对用户的实时图像进行采集，同时将采集的实时图像通过界面进行显示。在本申请的实施例中也可以通过后置摄像头帮助别人或自己采集实时影像。在本申请中以前置摄像头为例进行说明。
参考图6b，以用户选择化妆模式为例，图6b示出了用户在化妆模式下的实时影像的用户界面示意图，如图6b所示，该用户界面620包括用户图像显示区621、用户动作提示区622、前置或后置摄像头选项图标623，图像显示区621可以显示用户的实时影像，以便于用户可以及时看到自己的实时图像，提示区622可以提示用户做出对应的动作，如，在化妆时，提示：“请用手指指向您的五官或皮肤”，用户按照提示完成目标动作的确定。前置或后置摄像头选项图标623可以实现用户通过前置摄像头获取自己的实时影像，或通过后置摄像头获取自己或朋友的实时影像等。
在步骤S520，从实时影像中获取同时包括用户的面部和手部的多个视频帧。也就是说，手机通过获取用户的面部和手部的多个视频帧，并从该多个视频帧中识别用户的手部相对于面部的动作，进而来确定目标手部动作。其中，手部的动作可以是单手的手指动作，也可以是双手的手指动作。
下面结合附图以单手的手指为例,对目标手部动作的确定过程进行描述。其中,目标手部动作的确定可以仅由步骤S530直接确定,也可以是步骤S530和步骤S540的结合确定。以下详细说明步骤S530和步骤540的结合确定目标手部动作的过程。
在步骤S530，判断视频帧中的手指的指尖所在位置距离面部所在位置是否小于预设距离。即，将指尖所在位置与面部区域所在位置接触时两者的距离设为0，当距离大于0时，表明指尖与面部之间有间距；当距离等于0时，则指尖与面部重合。
参考图6c，图6c示出了单手手指的指尖与面部区域的位置关系示意图，如图6c所示，用户的单手手指的指尖601与面部区域602重合，则手机处理器可以判断指尖601所在位置距离面部区域所在位置小于预设距离。或者，如图6d所示，用户的单手手指的指尖603距离面部区域602的距离为d，当距离d小于预设距离时，则手机处理器可以判断指尖603所在位置距离面部区域所在位置小于预设距离。其中，距离d是指手指的指尖与面部区域之间几何测量出的最短距离。在实际应用中，预设距离可以依据用户的身高、面部大小、手指长短等个人情况做动态调整，灵活设置。例如，当电子设备获取的用户的目标图像显示用户脸型较小时，可以将预设距离设置为符合脸型较小的预设距离，如2cm，略小于默认设置的3cm；在用户的手指指向面部区域时，当手指的指尖与面部区域的距离d小于或等于2cm时，即判断指尖所在位置距离面部区域所在位置小于预设距离。
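下面用一段示意性的Python代码说明该距离的一种计算方式（人脸轮廓点与指尖坐标假设已由现有识别技术给出，距离单位假设已换算为厘米，阈值取值沿用文中示例）：

```python
import math

def fingertip_face_distance(tip, face_contour):
    """取面部轮廓采样点中距指尖最近的边沿点，返回最短距离
    （对应图6d中的距离d）。face_contour为人脸轮廓坐标点列表。"""
    return min(math.hypot(tip[0] - x, tip[1] - y) for x, y in face_contour)

def tip_near_face(tip, face_contour, inside_face, preset_d=2.0):
    """inside_face指示指尖是否落在面部区域内（图6c的重合情形，距离记为0）；
    否则按最近边沿点计算距离并与预设距离preset_d（示例取2cm）比较。"""
    d = 0.0 if inside_face else fingertip_face_distance(tip, face_contour)
    return d <= preset_d
```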
本申请的另一个实施例中，当手指为用户的双手手指时，每只手的手指均满足上述单手的手指与面部的位置关系，其中，双手中的每只手的手指判断方法与上述单只手的手指判断方法相同，具体可以参考图6c和图6d中的单手的手指与面部位置关系的判断，在此不再赘述。
此外，本申请的另一个实施例中，当处理器识别出手指为用户的双手手指，且仅一只手的手指满足指尖所在位置距离面部区域所在位置小于预设距离时，则可以执行单根手指的判断方法。
基于上面的描述，当手机处理器判断视频帧中的手指的指尖所在位置距离面部所在位置小于预设距离时，进一步执行步骤S540。
在步骤S540,判断视频帧中手指相对于面部相对静止持续的时间是否大于预设时间。
具体地,可以包括以下任一种判断条件:
条件a1.当手指相对手机屏幕静止时,处理器判断用户的面部区域相对手机屏幕保持相对静止,即在预设时间内,面部相对手机屏幕的移动幅度小于预设值,例如3s内,面部区域相对手机屏幕的移动幅度小于1cm。其中,预设时间和预设值可以在实际应用中,依据场景,灵活设置,以达到最佳判断结果。
条件a2.当面部相对手机屏幕静止时，处理器判断用户的手指相对手机屏幕保持相对静止，即在预设时间内，手部动作相对手机屏幕的移动幅度小于预设值，例如3s内，手部动作移动幅度小于1cm，其中，预设时间和预设值可以在实际应用中，依据场景，灵活设置，以达到最佳判断结果。
条件a3.处理器判断手指和面部同时相对手机屏幕保持相对静止。即在预设时间内,面部和手部相对于手机屏幕的移动幅度小于预设值,例如3s内,面部和手部相对手机屏幕的移动幅度小于1cm。其中,预设时间和预设值可以在实际应用中,依据场景,灵活设置,以达到最佳判断结果。
当处理器判断手指和面部满足a1、a2和a3中的任一个条件时，则可以判断用户手指与面部相对静止的时长满足预设时间3秒，进而确定该手指的动作为目标手部动作。
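结合步骤S530与上述条件a1-a3，目标手部动作的判断可以写成如下的Python代码草图（阈值2cm、1cm、3秒沿用文中示例，坐标单位假设已换算为厘米，滑动窗口的写法是对a1-a3的一种简化处理）：

```python
import math
from collections import deque

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_target_gesture(samples, dist_thresh=2.0, move_thresh=1.0,
                      still_time_s=3.0, fps=30):
    """samples为逐帧的(指尖坐标, 距指尖最近的面部边沿点坐标)。
    判定条件：指尖距面部小于预设距离(步骤S530)，且在预设时间窗口内
    指尖相对面部的移动幅度小于预设值(条件a1-a3的一种简化)。"""
    window = deque(maxlen=int(still_time_s * fps))  # 预设时间对应的帧数窗口
    for tip, edge in samples:
        if dist(tip, edge) >= dist_thresh:
            window.clear()                          # 距离条件中断，重新计时
            continue
        rel = (tip[0] - edge[0], tip[1] - edge[1])  # 指尖相对面部的位置
        window.append(rel)
        if len(window) == window.maxlen and \
                max(dist(rel, r) for r in window) < move_thresh:
            return True                             # 相对静止持续满预设时间(S540/S550)
    return False
```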
在本申请的一个实施例中,目标手部动作的确定,还可以包括:当手部与面部满足上述a1、a2和a3中的任一种条件后,进一步包括:
条件a4,判断用户手指的指向区域与面部区域内的交集区域内是否关联有扩展内容,即是否存在护肤或化妆的知识点,当手机处理器判断交集区域内存在扩展内容,则确定手指的动作为目标手部动作。其中,手指的指向区域可以为以手指的指尖为基准点,手指的指向为基准方向确定的几何图形,其中几何图形是用户预先设定的几何图形。几何图形可以包括梯形、扇形、三角形、圆形、方形中的任一种几何图形。在具体实现中,用户可根据实际情况自由定义大小和轮廓。
当为单手手指时,参考图7a,图7a示出了单手手指的指向区域与面部的交集区域的示意图。如图7a所示,用户的单手手指的指向区域701为梯形,即该指向区域与面部的交集区域即为该梯形覆盖的区域。若该交集区域内关联有扩展内容,则处理器判断该手部的动作为目标手部动作。
当为双手手指时，参考图7b，图7b示出了双手手指的指向区域与面部的交集区域的示意图。如图7b所示，用户的双手的每只手的手指的指向区域702为梯形，两个手指的指向区域交集在一起，此时可以将两只手的手指的指向区域702的交集或并集作为手指的指向区域。若该交集或并集区域内关联有扩展内容，则处理器判断该手部的动作为目标手部动作。
需要说明的是,在本申请描述的实施例中,当没有条件a4的判断步骤时,可以默认为整张面部所在区域均关联有扩展内容。而手指的指向区域与面部区域的交集区域的确定可以参考图7a和图7b对应的描述。
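指向区域与面部（或其中的ROI区域）的交集可以借助现成的多边形运算库实现。下面是一段示意性的Python草图，其中选用shapely库做多边形运算属于实现上的一种假设选择，示例坐标也仅为演示：

```python
from shapely.geometry import Polygon  # 多边形运算库，此处仅作示意

def pointing_intersection(face_poly, finger_polys, combine="union"):
    """face_poly为面部所在区域的多边形；finger_polys为一只或两只手的
    指向区域多边形列表。双手时先对两个指向区域取并集或交集(对应图7b)，
    再与面部区域求交集(对应图7a)。"""
    region = finger_polys[0]
    for p in finger_polys[1:]:
        region = region.union(p) if combine == "union" else region.intersection(p)
    return face_poly.intersection(region)

# 用法示例：交集区域非空且其中关联有扩展内容时，可判定为目标手部动作(条件a4)
face = Polygon([(0, 0), (100, 0), (100, 140), (0, 140)])
finger = Polygon([(90, 70), (130, 60), (130, 84), (90, 74)])
print(pointing_intersection(face, [finger]).area > 0)   # True，存在交集区域
```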
基于上面的描述,手机处理器可以确定目标手部动作,并执行步骤S560。
在步骤S560，响应于该目标手部动作确定检测目标。根据检测目标进一步可以确定检测目标的形态。其中，检测目标为视频帧中的用户的面部上至少部分区域中的检测目标，具体可以通过以下几种方法的任一种或几种的结合进行确认。
第一种确认方法:
从多个视频帧中的至少一个视频帧中,确定目标手部动作中的手指的指向区域与面部所在区域的交集区域,并确定交集区域内的检测目标。其中,单手手指的指向区域与面部所在区域的交集区域可以参考图7a所示。双手手指的指向区域与面部所在区域的交集区域可以参考图7b所示,且两只手的手指的指向区域的交集或并集作为手指的指向区域。检测目标的识别可以通过现有的人脸识别技术进行识别和判断,在此不再详细介绍。
第二种确认方法:
首先，将面部区域划分为ROI区域，参考图8a，图8a示出了人脸ROI区域划分的示意图，如图8a所示，人脸的ROI区域包括额头801、脸颊802、下巴803、鼻梁804、人中805、苹果肌806、眼袋807等，其中，部分ROI区域可以有重叠部分，例如，苹果肌806区域与脸颊802区域重叠。在扩展内容库中，人脸的各个ROI区域分别关联有扩展内容，且该扩展内容可以将ROI区域的知识点融合。例如，在化妆模式中，用户手指的指向区域在眼袋处，该位置区域不仅含有皱纹还会包括与眼袋处相关的黑眼圈，因而将ROI区域结合，使得扩展内容更加丰富，分析更加准确，同时提高交互的趣味性。
其次,确定用户的手指的指向区域与ROI区域的交集区域。并从该交集区域中确定检测目标。
在本申请的一个实施例中，在上述第二种方法中，当交集区域内覆盖两个以上ROI区域时，从该交集区域内的覆盖面积最大的ROI区域内确定检测目标。其中，覆盖面积可以是交集区域和人脸ROI区域的覆盖面积的绝对面积，即覆盖面积为平面几何图形的真实面积，例如，真实面积为35平方厘米。也可以是交集区域和人脸ROI区域的覆盖面积与人脸大小的相对面积，例如，覆盖面积35平方厘米，人脸大小是350平方厘米，则相对面积为覆盖面积占人脸大小的比值，即0.1。
参考图8b，图8b示出了交集区域内覆盖多个ROI区域的示意图，如图8b所示，手指的指向区域808覆盖了ROI区域1，ROI区域2和ROI区域3，其中，在该交集区域内，ROI区域1的面积>ROI区域2的面积>ROI区域3的面积，因此可以从面积最大的ROI区域1内确定检测目标。
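沿用上面的多边形表示，从交集区域覆盖的多个ROI区域中选取覆盖面积最大者，可以写成如下草图（face_area为可选的人脸面积，传入时按相对面积比较，缺省时按绝对面积比较，函数名为示例性假设）：

```python
def pick_roi_by_area(intersection, rois, face_area=None):
    """intersection为手指指向区域与面部的交集多边形(shapely对象)，
    rois为{名称: ROI多边形}的划分(对应图8a)。当交集覆盖多个ROI时，
    返回覆盖面积最大的ROI名称(对应图8b)，再在该ROI内确定检测目标。"""
    def covered_area(roi_poly):
        area = intersection.intersection(roi_poly).area
        return area / face_area if face_area else area   # 相对面积或绝对面积
    return max(rois, key=lambda name: covered_area(rois[name]))
```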
第三种确认方法:
对ROI区域预先设置有优先级，从交集区域内优先级高的ROI区域中确定检测目标。例如，按照优先级从高到低依次为眼袋、脸颊、鼻梁等，当手指的指向区域覆盖三者时，则从眼袋所在的ROI区域内确定检测目标。
第四种确认方法:
对交集区域内的ROI区域设置权重值，例如，可以对交集区域内的ROI区域的面积、被检测的次数多少等参数设置权重，并检测交集区域内的所有ROI区域内的图像特征，再从这些图像特征中找出分值最高的图像特征作为检测目标。
下面以交集区域内的ROI区域的面积的权重为例,对图像特征分值的计算进行说明。
图像特征分值计算公式为：图像特征分值=图像特征与该图像特征对应的特征标准模型的匹配度*交集区域内的ROI区域的面积权重。其中，图像特征的确定可以基于扩展内容中的知识点进行确定，由于确定的图像特征与特征标准模型会有差异，因而通过求出图像特征与特征标准模型的匹配度，可以找到一个最接近特征标准模型的图像特征，同时与交集区域内的ROI区域的权重相乘后，可以找到综合分值较高的图像特征。例如，通过扩展内容的知识点找到的图像特征为痘痘，痘痘与痘痘标准模型的相似度是99%，也就是匹配度0.99。若该痘痘的所在ROI区域的面积权重为0.7，则痘痘的分值=0.7*0.99=0.693。以此计算其他图像特征分值，选择分值最高的图像特征作为检测目标。
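该加权计算可以用几行Python代码表示（数值沿用文中示例，数据结构为示例性假设）：

```python
def best_feature(features):
    """features为[(图像特征, 与特征标准模型的匹配度, 所在ROI区域的面积权重)]。
    图像特征分值 = 匹配度 * 面积权重，返回分值最高的图像特征作为检测目标。"""
    return max(features, key=lambda f: f[1] * f[2])

print(best_feature([("痘痘", 0.99, 0.7), ("细纹", 0.90, 0.5)]))
# 输出 ('痘痘', 0.99, 0.7)，对应分值0.7*0.99=0.693
```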
在本申请的上述第一、第二和第三种确认方法中，当处理器确定的交集区域存在多个图像特征时，可以将全部的图像特征作为检测目标，也可以通过用户进一步的点选等方式选择至少一个图像特征作为检测目标。参考图8c，图8c示出了手机的用户界面示意图，该用户界面810显示多个图像特征811，以及用户提示区812，图像特征811可以直观地展示交集区域内的图像特征，用户可以直接点击图像特征并确定为检测目标。用户提示区812用于显示提醒用户对图像特征进行选择的文字信息，例如，提示用户：请您点击图像特征并确定检测目标等文字。
在步骤S560,处理器确定检测目标后,执行步骤S570。
在步骤S570,基于检测目标及检测目标的形态输出与其关联的扩展内容。其中,输出的方式可以通过界面显示,或者语音播报、或者界面和语音的结合的方式等,以便于用户可以通过用户界面和/或语音播报得知与检测目标对应的扩展内容。
参考图9a,以手机的界面输出为例,图9a示出了手机的用户界面示意图,如图9a所示,用户界面910中包括图像显示区911和知识点显示区912,其中,图像显示区911用于显示检测目标,知识点显示区912用于显示与检测目标对应的描述语料(文字描述),该描述语料可以选择拟人化的描述语句表达护肤或化妆等知识点,并以各种文件格式存储在手机的内部存储器、外部存储器或云端,例如word格式、xml格式等,描述语料可以描述检测目标的状态,如,严重等级、专业分类、形状、大小、颜色等,以及如何护理或化妆的建议等,以使得用户更容易了解检测目标,并了解该如何护理或化妆。
参考图9b，当手机处于护肤模式，以检测目标为痘痘为例，图9b示出了痘痘的检测结果的用户界面示意图，如图9b所示，手机的界面在图像显示区911显示痘痘的图像，根据痘痘的图像特征给出对应的描述语料，结合用户的健康数据、生活习惯、使用产品给出个性化建议，在知识点显示区912显示与痘痘对应的护理的描述语料，如：“您当前的痘痘为：中度(II级)，临床表现为：炎性丘疹；建议您使用过氧化苯甲酰/外用抗生素，或过氧化苯甲酰+外用抗生素；最近尽量减少户外运动，正在使用的XXX产品可先暂停使用。”当描述的内容较多，用户可以通过翻动页面或向下滑动当前页面以获取完整的描述。
参考图9c，当手机处于化妆模式，以检测目标为眉毛为例，图9c示出了眉毛的检测结果的用户界面示意图，如图9c所示，界面930的图像显示区911显示用户的人脸图像，用户可以通过人脸图像观看到自己的眉毛、脸型、五官轮廓等。手机可以首先判断用户属于常见脸型中的哪一种，并根据用户的脸型和眉毛生长情况给出对应的描述语料，结合用户的面部肌肤数据、化妆习惯、使用产品给出个性化建议，并在知识点显示区912显示与眉毛对应的化妆的描述语料，如：“您属于鹅蛋脸，适合使用柔和的眉形，不破坏鹅蛋脸型原本的美感。可适当将现有眉形的眉峰拉高，尾部稍加修饰。推荐使用XXX眉笔的YY色号。”
此外，本申请中的一些实施例中，图像显示区911和知识点显示区912还可以展示如图10a-10d所示的检测目标的诊断信息，如图10a所示的痘痘分级图像及对应的描述语料，其中，扩展内容关于痘痘的知识点可以包括痘痘的专业分类，根据痘痘的形状、大小、颜色，可以分为四级三度，其中，痘痘在医学上的专业术语叫做痤疮，具体分类如下表1所示：
表1
痤疮一级：粉刺为主，少量的丘疹结节，总皮损小于30个
痤疮二级：粉刺和中等量丘疹脓疱，总皮损数量31-50个
痤疮三级：大量丘疹，脓疱，总皮损数50-100个
痤疮四级：结节/囊肿性暗疮或聚合性暗疮，总皮损大于100个，结节/囊肿大于3个
如图10b所示的色斑类型的图像，色斑分为雀斑、黄褐斑、妊娠斑、辐射斑、太田痣、铅汞斑、咖啡斑、老年斑和日晒斑等。
如图10c所示的红区问题分析及图像,问题分析可以包括,红区属于晒红、红血丝、炎症、玫瑰痤疮等。
如图10d所示的不同脸型对应的眉形的化妆的描述语料,包括:脸型的描述以及该脸型适应哪一种眉形的描述语料等。
在本申请的一个实施例中，当用户选择化妆模式，还可以在给出用户化妆建议后，以虚拟的图像向用户展示其化妆后的形象，以使用户可以了解化妆后的样子。参考图10e所示，手机界面1010可以显示虚拟的化妆后的用户的脸部图像，用户通过保存键1011保存该虚拟的脸部图像，以便于用户直观地感受到化妆后的模样，提高用户的体验。
根据本申请的一些实施例，上述用户面部区域的状态分析与建议可以通过语音播放的方式反馈给用户，或者，以界面显示与语音播报结合的方式向用户展示等，当用户在画眉毛不方便用手操作手机时，语音播报更有助于本申请的用于肌肤检测的电子设备的交互方法与用户进行交互。
参考图11，本申请实施例还提供了一种电子设备，电子设备1100包含一个或多个存储器1101，与存储器1101耦合的一个或多个处理器1102，与处理器1102连接的至少一个摄像头1103，以及一个或多个程序，其中一个或多个程序被存储在存储器1101中，电子设备1100用于执行以下步骤：
摄像头1103,获取同时包括用户的面部和手部的多个视频帧;
处理器1102识别多个视频帧中的用户的手部相对于面部的动作,并确定目标手部动作;
处理器1102响应于目标手部动作,确定视频帧中的用户的面部上至少部分区域中的检测目标;
处理器1102基于检测目标及检测目标的形态,从扩展内容库中确定与该检测目标及检测目标的形态关联的扩展内容,并通过显示屏输出扩展内容。
根据本申请的一个实施例,确定目标手部动作,包括:处理器1102确定视频帧中的手指的指尖所在位置距离面部所在位置小于预设距离,则确定手部动作为目标手部动作。
根据本申请的一个实施例,确定目标手部动作,包括:处理器1102确定视频帧中的手指的指尖所在位置距离面部所在位置小于预设距离,并且确定视频帧中手指相对于面部相对静止持续的时间大于预设时间,则确定手部动作为目标手部动作。
根据本申请的一个实施例,确定目标手部动作,包括:处理器1102确定视频中包括两只手;并且确定视频帧中的两只手的手指的指尖所在位置距离面部所在位置小于预设距离,并且确定视频帧中两只手的手指相对于面部相对静止持续的时间大于预设时间,则确定手部动作为目标手部动作。
根据本申请的一个实施例,当手指相对于面部的相对运动的幅度小于预设值时,确定视频帧中两只手的手指相对于面部相对静止。
根据本申请的一个实施例，处理器1102确定视频帧中的手指所在位置距离面部所在位置小于预设距离，包括：视频帧中的用户的手指所在位置区域与面部所在位置区域重叠，或者，视频帧中的手指与面部不重叠，但所属手指的指尖与面部距离手指指尖最近的边沿点之间的距离小于预设距离。
根据本申请的一个实施例,处理器1102响应于目标手部动作,确定视频帧中的用户的面部上至少部分区域中的检测目标,包括:从多个视频帧中的至少一个视频帧中,确定目标手部动作中的手指的指向区域与面部所在区域的交集区域,并确定交集区域内的检测目标。
根据本申请的一个实施例,手指的指向区域为以手指的指尖为基准点,手指的指向为基准方向确定的几何图形,并且几何图形具有用户预先设定的大小和轮廓。几何图形包括梯形、扇形、三角形、圆形、方形中的任一种。
根据本申请的一个实施例,该方法包括:确定视频中包括两只手;手指的指向区域为以手指的指尖为基准点,手指的指向为基准方向确定的几何图形,其中几何图形是用户预先设定的几何图形,两只手的手指的指向区域的交集或并集作为手指的指向区域。几何图形包括梯形、扇形、三角形、圆形、方形中的任一种。
根据本申请的一个实施例，面部包括至少一个预设的ROI区域；处理器1102响应于目标手部动作，确定视频帧中的用户的面部上至少部分区域中的检测目标，进一步包括：从多个视频帧中的至少一个视频帧中，确定目标手部动作中的手指的指向区域与面部中包括的ROI区域的交集区域，并确定交集区域内的检测目标。
根据本申请的一个实施例,当确定交集区域内覆盖两个以上ROI区域时,从交集区域内的覆盖面积最大的ROI区域内确定检测目标。
根据本申请的一个实施例,当确定交集区域内覆盖两个以上ROI区域时,基于ROI区域的预设权重和ROI区域的图像特征与图像特征对应的特征标准模型的匹配度,将ROI区域内分值最高的图像特征确定为检测目标。
根据本申请的一个实施例,当确定交集区域内覆盖两个以上ROI区域时,进一步包括:处理器基于用户对ROI区域内的检测目标的第一操作,确定检测目标。
根据本申请的一个实施例,检测目标包括肤色、痘痘、细纹、毛孔、黑头、痤疮、斑块和红血丝中的一种或多种皮肤状态,或者,检测目标包括鼻子、嘴巴、眼睛、眉毛、脸部轮廓、皮肤颜色中的一种或多种。
根据本申请的一个实施例,多个视频帧是预设时长内连续的多个视频帧。
根据本申请的一个实施例，该方法还包括：电子设备1100通过电子设备的摄像头1103获取用户的实时影像，并在第一界面显示用户的实时影像，以及从实时影像中获取同时具有用户的面部和手部的设定时长内的视频帧。
根据本申请的一个实施例,获取同时包括用户的面部和手部的多个视频帧之前,还包括:电子设备响应于用户的输入操作,确定执行化妆模式或执行护肤模式。
根据本申请的一个实施例,电子设备1100输出扩展内容,包括:显示屏显示第二界面,第二界面中包括基于目标手部动作确定的检测目标及检测目标的形态对应的扩展内容;或者,扬声器语音播报基于目标手部动作确定的检测目标及检测目标的形态对应的扩展内容。
根据本申请的一个实施例,扩展内容包括:基于护肤状态时的检测目标的问题分析和护理建议中的一种或多种,或者,基于化妆状态时的检测目标的化妆状态分析和化妆建议中的一种或多种。
本申请中，电子设备1100各部件及各部件的工作过程，在上述实施例中已经详细地说明，具体可参见上述图1-图10e示出的用于肌肤检测的电子设备的交互方法，在此不再赘述。
本申请实施例还提供了一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序被处理器运行时,使得处理器执行上述图1-图10e示出的用于肌肤检测的电子设备的交互方法。
本申请实施例还提供了一种包含指令的计算机程序产品,当该计算机程序产品在电子设备上运行时,使得处理器执行上述图1-图10e示出的用于肌肤检测的电子设备的交互方法。
现在参考图12,所示为根据本申请的一个实施例的设备1200的框图。设备1200可以包括耦合到控制器中枢1203的一个或多个处理器1201。对于至少一个实施例,控制器中枢1203经由诸如前端总线(Front Side Bus,FSB)之类的多分支总线、诸如快速通道互连(Quick Path Interconnect,QPI)之类的点对点接口、或者类似的连接1206与处理器1201进行通信。处理器1201执行控制一般类型的数据处理操作的指令。在一实施例中,控制器中枢1203包括,但不局限于,图形存储器控制器中枢(Graphics Memory Controller Hub,GMCH)(未示出)和输入/输出中枢(Input Output Hub,IOH)(其可以在分开的芯片上)(未示出),其中GMCH包括存储器和图形控制器并与IOH耦合。
设备1200还可包括耦合到控制器中枢1203的协处理器1202和存储器1204。或者，存储器和GMCH中的一个或两者可以被集成在处理器内（如本申请中所描述的），存储器1204和协处理器1202直接耦合到处理器1201以及控制器中枢1203，控制器中枢1203与IOH处于单个芯片中。存储器1204可以是例如动态随机存取存储器(Dynamic Random Access Memory,DRAM)、相变存储器(Phase Change Memory,PCM)或这两者的组合。在一个实施例中，协处理器1202是专用处理器，诸如例如高吞吐量MIC处理器(Many Integrated Core,MIC)、网络或通信处理器、压缩引擎、图形处理器、通用图形处理器(General Purpose Computing on GPU,GPGPU)、或嵌入式处理器等等。协处理器1202的任选性质用虚线表示在图12中。
存储器1204作为计算机可读存储介质,可以包括用于存储数据和/或指令的一个或多个有形的、非暂时性计算机可读介质。例如,存储器1204可以包括闪存等任何合适的非易失性存储器和/或任何合适的非易失性存储设备,例如一个或多个硬盘驱动器(Hard-Disk Drive,HDD(s)),一个或多个光盘(Compact Disc,CD)驱动器,和/或一个或多个数字通用光盘(Digital Versatile Disc,DVD)驱动器。
在一个实施例中,设备1200可以进一步包括网络接口(Network Interface Controller,NIC)1206。网络接口1206可以包括收发器,用于为设备1200提供无线电接口,进而与任何其他合适的设备(如前端模块,天线等)进行通信。在各种实施例中,网络接口1206可以与设备1200的其他组件集成。网络接口1206可以实现上述实施例中的通信单元的功能。
设备1200可以进一步包括输入/输出(Input/Output,I/O)设备1205。I/O 1205可以包括：用户界面，该设计使得用户能够与设备1200进行交互；系统组件接口的设计使得系统组件也能够与设备1200交互；和/或传感器设计用于确定与设备1200相关的环境条件和/或位置信息。
值得注意的是,图12仅是示例性的。即虽然图12中示出了设备1200包括处理器1201、控制器中枢1203、存储器1204等多个器件,但是,在实际的应用中,使用本申请各方法的设备,可以仅包括设备1200各器件中的一部分器件,例如,可以仅包含处理器1201和NIC1206。图12中可选器件的性质用虚线示出。
根据本申请的一些实施例，作为计算机可读存储介质的存储器1204上存储有指令，该指令在计算机上执行时使系统1200执行根据上述实施例中的计算方法，具体可参照上述实施例的图1-图10e示出的用于肌肤检测的电子设备的交互方法，在此不再赘述。
现在参考图13，所示为根据本申请的一实施例的SoC(System on Chip,片上系统)1300的框图。在图13中，相似的部件具有同样的附图标记。另外，虚线框是更先进的SoC的可选特征。在图13中，SoC1300包括：互连单元1350，其被耦合至应用处理器1310；系统代理单元1380；总线控制器单元1390；集成存储器控制器单元1340；一个或多个协处理器1320，其可包括集成图形逻辑、图像处理器、音频处理器和视频处理器；静态随机存取存储器(Static Random Access Memory,SRAM)单元1330；直接存储器存取(DMA)单元1360。在一个实施例中，协处理器1320包括专用处理器，诸如例如网络或通信处理器、压缩引擎、GPGPU、高吞吐量MIC处理器、或嵌入式处理器等。
在一个实施例中，静态随机存取存储器(SRAM)单元1330中可以包括用于存储数据和/或指令的一个或多个计算机可读介质。计算机可读存储介质中可以存储有指令，具体而言，存储有该指令的暂时和永久副本。该指令可以包括：由处理器中的至少一个单元执行时使SoC1300执行根据上述实施例中的图1-图10e示出的用于肌肤检测的电子设备的交互方法，具体可参照上述实施例的方法，在此不再赘述。
本申请公开的机制的各实施例可以被实现在硬件、软件、固件或这些实现方法的组合中。本申请的实施例可实现为在可编程系统上执行的计算机程序或程序代码，该可编程系统包括至少一个处理器、存储系统(包括易失性和非易失性存储器和/或存储元件)、至少一个输入设备以及至少一个输出设备。
可将程序代码应用于输入指令，以执行本申请描述的各功能并生成输出信息。可以按已知方式将输出信息应用于一个或多个输出设备。为了本申请的目的，处理系统包括具有诸如例如数字信号处理器(Digital Signal Processor,DSP)、微控制器、专用集成电路(Application Specific Integrated Circuit,ASIC)或微处理器之类的处理器的任何系统。
程序代码可以用高级程序化语言或面向对象的编程语言来实现,以便与处理***通信。在需要时,也可用汇编语言或机器语言来实现程序代码。事实上,本申请中描述的机制不限于任何特定编程语言的范围。在任一情形下,该语言可以是编译语言或解释语言。
在一些情况下，所公开的实施例可以以硬件、固件、软件或其任何组合来实现。所公开的实施例还可以被实现为由一个或多个暂时或非暂时性机器可读(例如，计算机可读)存储介质承载或存储在其上的指令，其可以由一个或多个处理器读取和执行。例如，指令可以通过网络或通过其他计算机可读介质分发。因此，机器可读介质可以包括用于以机器(例如，计算机)可读的形式存储或传输信息的任何机制，包括但不限于，软盘、光盘、光碟、光盘只读存储器(Compact Disc Read Only Memory,CD-ROMs)、磁光盘、只读存储器(Read Only Memory,ROM)、随机存取存储器(RAM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、电可擦除可编程只读存储器(Electrically Erasable Programmable Read Only Memory,EEPROM)、磁卡或光卡、闪存、或用于利用因特网以电、光、声或其他形式的传播信号来传输信息(例如，载波、红外信号、数字信号等)的有形的机器可读存储器。因此，机器可读介质包括适合于以机器(例如，计算机)可读的形式存储或传输电子指令或信息的任何类型的机器可读介质。
在附图中,可以以特定布置和/或顺序示出一些结构或方法特征。然而,应该理解,可能不需要这样的特定布置和/或排序。而是,在一些实施例中,这些特征可以以不同于说明书附图中所示的方式和/或顺序来布置。另外,在特定图中包括结构或方法特征并不意味着暗示在所有实施例中都需要这样的特征,并且在一些实施例中,可以不包括这些特征或者可以与其他特征组合。
需要说明的是,本申请各设备实施例中提到的各单元/模块都是逻辑单元/模块,在物理上,一个逻辑单元/模块可以是一个物理单元/模块,也可以是一个物理单元/模块的一部分,还可以以多个物理单元/模块的组合实现,这些逻辑单元/模块本身的物理实现方式并不是最重要的,这些逻辑单元/模块所实现的功能的组合才是解决本申请所提出的技术问题的关键。此外,为了突出本申请的创新部分,本申请上述各设备实施例并没有将与解决本申请所提出的技术问题关系不太密切的单元/模块引入,这并不表明上述设备实施例并不存在其它的单元/模块。
需要说明的是,在本专利的示例和说明书中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
虽然通过参照本申请的某些优选实施例,已经对本申请进行了图示和描述,但本领域的普通技术人员应该明白,可以在形式上和细节上对其作各种改变,而不偏离本申请的精神和范围。

Claims (23)

  1. 一种用于肌肤检测的电子设备的交互方法,其特征在于,包括:
    所述电子设备获取同时包括用户的面部和手部的多个视频帧;
    所述电子设备识别所述多个视频帧中的用户的手部相对于所述面部的动作,并确定目标手部动作;
    所述电子设备响应于所述目标手部动作,确定所述视频帧中的用户的面部上至少部分区域中的检测目标;
    所述电子设备基于所述检测目标及检测目标的形态,从扩展内容库中确定与该检测目标及检测目标的形态关联的扩展内容,并输出所述扩展内容。
  2. 根据权利要求1所述的方法,其特征在于,所述确定目标手部动作,包括:
    所述电子设备确定所述视频帧中的手指的指尖所在位置距离所述面部所在位置小于预设距离,则确定所述手部动作为目标手部动作。
  3. 根据权利要求1所述的方法,其特征在于,所述确定目标手部动作,包括:
    确定所述视频帧中的手指的指尖所在位置距离所述面部所在位置小于预设距离,并且
    确定所述视频帧中所述手指相对于所述面部相对静止持续的时间大于预设时间,则确定所述手部动作为目标手部动作。
  4. 根据权利要求1所述的方法,其特征在于,所述确定目标手部动作,包括:确定所述视频中包括两只手;并且
    确定所述视频帧中的两只手的手指的指尖所在位置距离所述面部所在位置小于预设距离,并且
    确定所述视频帧中所述两只手的手指相对于所述面部相对静止持续的时间大于预设时间,则确定所述手部动作为目标手部动作。
  5. 根据权利要求3或4所述的方法,其特征在于,当所述手指相对于所述面部的相对运动的幅度小于预设值时,确定所述视频帧中手指相对于所述面部相对静止。
  6. 根据权利要求3或4所述的方法,其特征在于,所述电子设备确定所述视频帧中的所述手指所在位置距离所述面部所在位置小于预设距离,包括:
    所述视频帧中的用户的手指所在位置区域与所述面部所在位置区域重叠，或者，
    所述视频帧中的所述手指与所述面部不重叠,但所属手指的指尖与所述面部距离手指指尖最近的边沿点之间的距离小于预设距离。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,所述电子设备响应于所述目标手部动作,确定所述视频帧中的用户的面部上至少部分区域中的检测目标,包括:
    从所述多个视频帧中的至少一个视频帧中,确定所述目标手部动作中的手指的指向区域与所述面部所在区域的交集区域,并确定所述交集区域内的所述检测目标。
  8. 根据权利要求7所述的方法,其特征在于,所述手指的指向区域为以手指的指尖为基准点,手指的指向为基准方向确定的几何图形,并且所述几何图形具有用户预先设定的大小和轮廓。
  9. 根据权利要求7所述的方法，其特征在于，包括：确定所述视频中包括两只手；所述手指的指向区域为以手指的指尖为基准点，手指的指向为基准方向确定的几何图形，其中所述几何图形是用户预先设定的几何图形；
    所述两只手的手指的指向区域的交集或并集作为所述手指的指向区域。
  10. 根据权利要求8或9所述的方法,其特征在于,所述几何图形包括梯形、扇形、三角形、圆形、方形中的任一种。
  11. 根据权利要求7所述的方法,其特征在于,所述面部包括至少一个预设的ROI区域;
    所述电子设备响应于所述目标手部动作,确定所述视频帧中的用户的面部上至少部分区域中的检测目标,进一步包括:
    从所述多个视频帧中的至少一个视频帧中,确定所述目标手部动作中的手指的指向区域与所述面部中包括的ROI区域的交集区域,并确定所述交集区域内的所述检测目标。
  12. 根据权利要求11所述的方法,其特征在于,当确定所述交集区域内覆盖两个以上ROI区域时,从所述交集区域内的覆盖面积最大的ROI区域内确定检测目标。
  13. 根据权利要求11所述的方法,其特征在于,当确定所述交集区域内覆盖两个以上ROI区域时,
    基于ROI区域的预设权重和ROI区域的图像特征与所述图像特征对应的特征标准模型的匹配度,将ROI区域内分值最高的图像特征确定为所述检测目标。
  14. 根据权利要求11所述的方法,其特征在于,当确定所述交集区域内覆盖两个以上ROI区域时,进一步包括:
    所述电子设备基于用户对所述ROI区域内的检测目标的第一操作,确定所述检测目标。
  15. 根据权利要求1所述的方法,其特征在于,所述检测目标包括肤色、痘痘、细纹、毛孔、黑头、痤疮、斑块和红血丝中的一种或多种皮肤状态,或者,所述检测目标包括鼻子、嘴巴、眼睛、眉毛、脸部轮廓、皮肤颜色中的一种或多种。
  16. 根据权利要求1所述的方法,其特征在于,多个视频帧是预设时长内连续的多个视频帧。
  17. 根据权利要求1所述的方法,其特征在于,还包括:
    所述电子设备通过所述电子设备的摄像头获取所述用户的实时影像,并在第一界面显示用户的实时影像,以及从所述实时影像中获取同时具有用户的面部和手部的设定时长内的视频帧。
  18. 根据权利要求1所述的方法,其特征在于,获取同时包括用户的面部和手部的多个视频帧之前,还包括:所述电子设备响应于用户的输入操作,确定执行化妆模式或执行护肤模式。
  19. 根据权利要求1所述的方法,其特征在于:所述电子设备输出所述扩展内容,包括:
    所述电子设备显示第二界面,所述第二界面中包括基于所述目标手部动作确定的所述检测目标及所述检测目标的形态对应的扩展内容;或者,
    所述电子设备语音播报基于所述目标手部动作确定的所述检测目标及所述检测目标的形态对应的扩展内容。
  20. 根据权利要求1所述的方法，其特征在于，所述扩展内容包括：
    基于护肤状态时的所述检测目标的问题分析和护理建议中的一种或多种,或者,
    基于化妆状态时的所述检测目标的化妆状态分析和化妆建议中的一种或多种。
  21. 一种电子设备,其特征在于,包含一个或多个存储器,与所述存储器耦合的一个或多个处理器,以及一个或多个程序,其中所述一个或多个程序被存储在所述存储器中,所述电子设备用于执行如权利要求1至20任一项所述的方法。
  22. 一种计算机可读存储介质,其特征在于,计算机可读存储介质存储有计算机程序,计算机程序被处理器运行时,使得处理器执行权利要求1-20任一项的方法。
  23. 一种包含指令的计算机程序产品,其特征在于,当该计算机程序产品在电子设备上运行时,使得处理器执行权利要求1-20任一项的方法。