WO2020224136A1 - Interface interaction method and device - Google Patents


Info

Publication number: WO2020224136A1
Authority: WIPO (PCT)
Application number: PCT/CN2019/104315
Other languages: French (fr), Chinese (zh)
Inventors: 叶唐陟, 吴棨贤, 陈嘉俊, 陈衡
Original assignee: 厦门美图之家科技有限公司
Application filed by 厦门美图之家科技有限公司
Publication of WO2020224136A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 - Feature extraction; Face representation

Definitions

  • the present disclosure relates to the technical field of interface interaction, and in particular to an interface interaction method and device.
  • Portrait beautification is one of the most commonly used functions in retouching applications. It can be used to modify and beautify the portrait in a picture, for example by skin smoothing, whitening, or face thinning, and it is widely used and well liked by users.
  • One of the objectives of the present disclosure is to provide an interface interaction method and device that address the above-mentioned deficiencies in the prior art.
  • an interface interaction method including:
  • identifying and acquiring multiple areas to be adjusted in the face part of an image to be processed; receiving an operation instruction from a user, where the operation instruction is used to select a target area in the image to be processed; and determining, according to position information of the target area, the area to be adjusted to which the target area belongs, and displaying the adjustment interface corresponding to that area to be adjusted, where each area to be adjusted corresponds to one adjustment interface and the adjustment interface contains function options corresponding to the area to be adjusted.
  • recognizing and acquiring multiple areas to be adjusted in the face part of the image to be processed includes: identifying the face part in the image to be processed; using a face key point detection model to detect the face part to obtain multiple face key points in the face part; and determining, according to the face key points, multiple areas to be adjusted in the face part.
  • each face key point has a face attribute, and the face attribute indicates the area to be adjusted to which the face key point belongs; according to the face key points, determining multiple areas to be adjusted in the face part includes:
  • determining the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having the face attribute includes:
  • a plurality of preset anchor points are calculated according to the coordinates of the face key points having the face attribute, and the area enclosed by the face key points having the face attribute and the plurality of preset anchor points is determined as the area to be adjusted corresponding to the face attribute.
  • determining the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having the face attribute includes:
  • using the face key points having the face attribute as anchor points corresponding to the face attribute; or calculating a plurality of preset anchor points according to the face key points having the face attribute, and using the face key points having the face attribute together with the plurality of preset anchor points as anchor points corresponding to the face attribute, where each preset anchor point carries a face attribute and the preset anchor points have the same face attribute as the face key points;
  • according to the anchor points of the face attribute, the area to be adjusted corresponding to the face attribute is determined.
  • calculating multiple preset anchor points according to the coordinates of the face key points having the face attribute includes: acquiring multiple face standard key points on a standard face image, and connecting the face standard key points end to end in a set order so that there is a line between every two connected face standard key points; setting a standard anchor point on at least one of the lines; calculating, according to the coordinates of the face standard key points at both ends of the line where each standard anchor point is located and an interpolation algorithm, an expression of the standard anchor point coordinates in terms of the face standard key points at both ends of that line; and calculating multiple preset anchor points according to the expression of the standard anchor point coordinates and the coordinates of the face key points.
  • said calculating and acquiring a plurality of preset anchor points according to the expression of the standard anchor point coordinates and the coordinates of the face key points includes: connecting the face key points having the face attribute end to end in the set order; and, according to the expression of the standard anchor point coordinates, calculating one preset anchor point from the coordinates of every two connected face key points.
  • using a face key point detection model to detect and acquire multiple face key points in the face part includes: correcting the direction of the face part to a preset direction to obtain a corrected face image.
  • the face key point detection model is used to detect and obtain multiple face key points in the corrected face image.
  • the method further includes: displaying each region to be adjusted on the image to be processed.
  • the present disclosure also provides an interface interaction device, including: a recognition module configured to recognize and acquire a plurality of areas to be adjusted in the face part of an image to be processed; a receiving module configured to receive an operation instruction from a user, where the operation instruction is used to select a target area in the image to be processed; and a first display module configured to determine, according to position information of the target area, the area to be adjusted to which the target area belongs, and to display the adjustment interface corresponding to that area, where each area to be adjusted corresponds to one adjustment interface and the adjustment interface contains function options corresponding to the area to be adjusted.
  • the recognition module is specifically configured to: recognize the face part in the image to be processed; use a face key point detection model to detect the face part to obtain multiple face key points in the face part; and determine, according to the face key points, multiple areas to be adjusted in the face part.
  • each face key point has a face attribute, and the face attribute indicates the area to be adjusted to which the face key point belongs;
  • the recognition module is specifically configured to: obtain, for each face key point, the face key point coordinates and the face key point feature value; obtain the face key points having each face attribute according to the face key point feature values and a preset mapping relationship between feature values and face attributes; and determine the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having that face attribute.
  • the identification module is specifically configured as:
  • use the face key points having the face attribute as anchor points corresponding to the face attribute; or calculate a plurality of preset anchor points according to the face key points having the face attribute, and use the face key points having the face attribute together with the plurality of preset anchor points as anchor points corresponding to the face attribute, where each preset anchor point carries a face attribute and the preset anchor points have the same face attribute as the face key points;
  • according to the anchor points of the face attribute, the area to be adjusted corresponding to the face attribute is determined.
  • the calculation module is configured to: obtain multiple face standard key points on the standard face image; set multiple standard anchor points on the lines between the face standard key points; calculate, according to the coordinates of the face standard key points at both ends of the line where each standard anchor point is located and an interpolation algorithm, an expression of the standard anchor point coordinates in terms of the face standard key points at both ends of that line; and calculate multiple preset anchor points according to the expression of the standard anchor point coordinates and the coordinates of the face key points.
  • the recognition module is specifically configured to: correct the direction of the face part to a preset direction to obtain a corrected face image; and use a face key point detection model to detect and obtain multiple face key points in the corrected face image.
  • it further includes a second display module configured to display each area to be adjusted on the image to be processed.
  • the present disclosure provides an electronic device including a processor, a storage medium, and a bus.
  • the storage medium stores machine-readable instructions executable by the processor.
  • the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of any method of the first aspect described above.
  • the present disclosure also provides a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of any one of the methods in the first aspect described above are executed.
  • FIG. 1 is a schematic flowchart of an interface interaction method provided by an embodiment of the disclosure
  • FIG. 2 is a schematic diagram of another flow of an interface interaction method provided by an embodiment of the disclosure.
  • FIG. 3 is a schematic diagram of an area to be adjusted in an interface interaction method provided by an embodiment of the disclosure
  • FIG. 4 is another schematic flowchart of the interface interaction method provided by the embodiments of the disclosure.
  • FIG. 5 is another schematic flowchart of an interface interaction method provided by an embodiment of the disclosure.
  • FIG. 6 is a schematic flowchart of another interface interaction method provided by an embodiment of the disclosure.
  • FIG. 7 is a schematic flowchart of another interface interaction method provided by an embodiment of the disclosure.
  • FIG. 8 is a schematic structural diagram of an interface interaction device provided by an embodiment of the disclosure.
  • FIG. 9 is a schematic diagram of another structure of an interface interaction device provided by an embodiment of the disclosure.
  • FIG. 10 is another schematic structural diagram of an interface interaction device provided by an embodiment of the disclosure.
  • FIG. 11 is a schematic diagram of a structure of an electronic device provided by an embodiment of the disclosure.
  • FIG. 1 is a schematic flowchart of an interface interaction method provided by an embodiment of the disclosure.
  • the interface interaction method may be executed by a terminal with image processing capabilities, such as a desktop computer, a notebook computer, a tablet computer, a smartphone, or a camera, which is not limited here.
  • the interface interaction method includes:
  • S110 Identify and acquire multiple regions to be adjusted in the face portion of the image to be processed.
  • the source of the image to be processed includes an image pre-stored in the device, a frame image of a video pre-stored in the device, an image obtained through an image acquisition device connected to the device, etc., which are not limited herein.
  • the image acquisition device may be a camera, an external camera, etc.
  • each area to be adjusted is used to represent a part of the facial features to be adjusted, such as eyes, nose, chin, forehead, cheeks, etc.
  • multiple areas to be adjusted can represent the same facial feature to be adjusted; for example, both the left-eye area and the right-eye area are used to represent the eyes.
  • the areas to be adjusted may be divided according to the beauty functions provided by the terminal running the interface interaction method. For example, if the terminal provides four beauty function options A, B, C, and D, the areas of the face image corresponding to these four options can be set as the areas to be adjusted.
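As an illustrative sketch (the region names and option labels below are hypothetical, not taken from the disclosure), the correspondence between areas to be adjusted and beauty function options can be held in a simple lookup table:

```python
# Hypothetical mapping from an area to be adjusted to the beauty
# function options its adjustment interface should offer.
REGION_FUNCTION_OPTIONS = {
    "eye":   ["enlarge eyes", "brighten eyes"],
    "nose":  ["slim nose", "lift nose bridge"],
    "chin":  ["slim face", "reshape chin"],
    "cheek": ["smooth skin", "add blush"],
}

def options_for_region(region_name):
    """Return the function options to show for the selected region
    (an empty list if the region has no configured options)."""
    return REGION_FUNCTION_OPTIONS.get(region_name, [])
```

The adjustment interface for a selected region would then simply render the result of `options_for_region(name)`.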
  • S120 Receive an operation instruction from a user, where the operation instruction is used to select a target area in the image to be processed.
  • depending on the interaction mode of the terminal, the received user operation instructions also differ.
  • for example, the user's operation instruction may be the instruction corresponding to a touch or click operation, and the target area in the image to be processed is selected according to the position of the touch or click;
  • alternatively, the user's operation instruction may be an instruction corresponding to a key operation, and the target area in the image to be processed is selected according to the operated key, but it is not limited to this.
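Mapping the tap position to the area to be adjusted it falls in can be done with a standard point-in-polygon (ray-casting) hit test against each region's enclosing points. This is an illustrative sketch, not an algorithm stated in the disclosure:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is point (x, y) inside the closed polygon
    given as a list of (px, py) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def region_for_tap(x, y, regions):
    """regions: dict mapping region name -> enclosing polygon of
    key/anchor points. Returns the name of the region containing the
    tap position, or None if the tap hits no region."""
    for name, polygon in regions.items():
        if point_in_polygon(x, y, polygon):
            return name
    return None
```

A tap outside every area to be adjusted simply yields `None`, so the terminal can ignore it or show a default interface.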
  • S130 Determine the area to be adjusted to which the target area belongs according to the position information of the target area, and display an adjustment interface corresponding to the area to be adjusted to which the target area belongs.
  • each area to be adjusted corresponds to an adjustment interface
  • the adjustment interface includes function options corresponding to the area to be adjusted.
  • the terminal may store the correspondence (or mapping relationship) between areas to be adjusted and function options; after determining the area to be adjusted to which the target area belongs, it can determine, according to the stored mapping relationship, which function options need to be provided to the user, and then show the user an adjustment interface that includes the determined function options.
  • the function option can be configured to adjust the corresponding area to be adjusted.
  • for example, if the area to be adjusted is an eye, the function option can be configured to perform eye-related adjustments, such as enlarging the eyes; if the area to be adjusted is the chin, the function option can be configured to perform face slimming, and so on, which is not restricted here.
  • the adjustment interface corresponding to the region to be adjusted to which the target region belongs is displayed.
  • the adjustment interface includes function options corresponding to the area to be adjusted, and intuitively displays the area to be adjusted that can be adjusted by the adjustment interface.
  • the terminal can intelligently provide the user with the corresponding image processing function options according to the area the user selects for adjustment, so the user can quickly get started without studying or experimenting with the application, effectively reducing interaction costs.
  • FIG. 2 is a schematic flowchart of another interface interaction method provided by the present disclosure
  • FIG. 3 is a schematic diagram of an area to be adjusted in the interface interaction method.
  • recognizing and acquiring multiple regions to be adjusted in the face portion of the image to be processed includes:
  • the face part in the image to be processed can be recognized through a face detection model.
  • the face detection model can include a You Only Look Once (YOLO) model, a Fast Region-based Convolutional Neural Network (Fast R-CNN) model, a Multi-task Cascaded Convolutional Networks (MTCNN) model, etc., but is not limited to these.
  • the face in the face frame may be straightened to obtain an aligned face.
  • the aligned face can then be used for face key point detection.
  • the coordinates of the two eyes recognized in the face part can be used as the reference for the alignment.
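Alignment based on the two eye coordinates can be sketched as computing the angle of the line through the eyes and rotating by its negative so that the line becomes horizontal. The helpers below are illustrative and stand in for the face correction performed by the disclosed models:

```python
import math

def eye_alignment_angle(left_eye, right_eye):
    """Angle (degrees) of the line through the two eyes relative to
    horizontal; rotating the image by its negative aligns the face.
    Coordinates are (x, y) pixel positions."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(point, center, angle_deg):
    """Rotate a point about a center by -angle_deg, so key point
    coordinates can be aligned the same way as the pixels."""
    rad = math.radians(-angle_deg)
    x, y = point[0] - center[0], point[1] - center[1]
    return (center[0] + x * math.cos(rad) - y * math.sin(rad),
            center[1] + x * math.sin(rad) + y * math.cos(rad))
```

In practice the same rotation would be applied to the whole face crop (e.g. with an affine warp) before key point detection.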
  • S112 Use a face key point detection model to detect and acquire multiple face key points in the face part.
  • Face attributes can refer to the description of a certain location area on the face, or the description of a certain feature, etc.
  • the key points of the face may be anchor points of the designated face contour and the contour of the facial features
  • the key point detection model of the face may include a Heatmap Regression model, an MTCNN model, etc.
  • the face key points 201 are obtained through the face key point detection model (marked by black dots in Fig. 3).
  • a face can yield tens to hundreds of face key points 201, and each face key point 201 corresponds to a face attribute, such as eyes, nose, chin, forehead, or cheeks.
  • the face key points are key feature points on the face image; each key feature point has a face attribute indicating the area to be adjusted on the face to which the key feature point belongs, for example the eye area, the nose area, the mouth area, the eyebrow area, or another facial part.
  • the acquired key points of the face may be the corner points of the eyes, the tip of the nose, the corner points of the mouth, the brow peak, and the contour points of various parts of the face.
  • S113 According to the key points of the human face, determine multiple regions to be adjusted in the human face.
  • multiple face key points 201 of the same face attribute enclose the area to be adjusted corresponding to the face attribute, and the same face attribute may correspond to more than one area to be adjusted.
  • the areas to be adjusted may include the chin area 203, the cheek area 204, the cheekbone area 205, the eye area 206, the forehead area 207, the nose area 208, etc., but are not limited to these.
  • FIG. 4 is a schematic diagram of another flow of an interface interaction method provided by an embodiment of the disclosure.
  • the foregoing determination of multiple areas to be adjusted in the face part according to the key points of the face includes:
  • the coordinates of the key points of the face may be the coordinates of the anchor points of the designated face contour and the contour of the facial features in the image to be processed, and the feature values of the key points of the face may be the number and order of the key points of the face. There is no restriction here.
  • the preset mapping relationship may be a mapping between face key point numbers and face attributes; for example, the face attribute of the face key points numbered 1-15 is the face contour, and the face attribute of the face key points numbered 16-20 is the nose, but it is not limited to this.
  • S1133 Determine a region to be adjusted corresponding to the face attribute according to the coordinates of the key points of the face having the face attribute.
  • the anchor points corresponding to the coordinates of the face key points numbered 1-15 enclose a closed area in the image to be processed, which is the face area, and the anchor points corresponding to the coordinates of the face key points numbered 16-20 enclose another closed area, which is the nose area.
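Grouping numbered face key points into attribute regions, using only the example numbering above (1-15 face contour, 16-20 nose), might look like the following sketch; the mapping values mirror the example in the text and are otherwise hypothetical:

```python
# Hypothetical mapping from key-point numbers (the "feature values" in
# the disclosure) to face attributes, mirroring the example numbering
# in the text: 1-15 face contour, 16-20 nose.
ATTRIBUTE_RANGES = {
    "face_contour": range(1, 16),
    "nose": range(16, 21),
}

def group_keypoints_by_attribute(keypoints):
    """keypoints: dict mapping key-point number -> (x, y) coordinate.
    Returns attribute -> ordered list of coordinates enclosing that
    attribute's area to be adjusted."""
    regions = {}
    for attribute, numbers in ATTRIBUTE_RANGES.items():
        regions[attribute] = [keypoints[n] for n in numbers if n in keypoints]
    return regions
```

Each returned list, taken in order, encloses the closed area to be adjusted for that attribute.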
  • Fig. 5 is another schematic flowchart of the interface interaction method provided by the disclosed embodiment.
  • the foregoing determination of the region to be adjusted corresponding to the face attribute according to the coordinates of the face key points having the face attribute includes:
  • S1133a Use the key points of the face with the face attributes as anchor points corresponding to the face attributes.
  • the implementation of S1133a follows the method in steps S1131 and S1132 above and will not be repeated here.
  • S1133b Calculate multiple preset anchor points according to the face key points having the face attribute, and use the face key points having the face attribute and the multiple preset anchor points as the anchor points corresponding to the face attribute.
  • the preset anchor points include face attributes, and the preset anchor points have the same face attributes as the face key points.
  • a white point is used to mark the preset anchor point 202 in FIG. 3.
  • the preset anchor points 202 can be used to supplement the face key points, so that the edge of the area to be adjusted determined by the face key points and the preset anchor points is smoother and its range more accurate.
  • S1133c Determine an area to be adjusted corresponding to the face attribute according to the anchor point of the face attribute.
  • the anchor points of the face attributes may include face feature points with the same face attributes and preset anchor points.
  • the area represented by the anchor point of the face attribute on the image to be processed is the area to be adjusted corresponding to the face attribute.
  • step S1133 can also be implemented in one of the following ways:
  • the area enclosed by the key points of the face having the face attribute may be determined as the area to be adjusted corresponding to the face attribute.
  • the area enclosed by the face key points whose face attribute is the face contour (such as the face key points numbered 1-15 above) can be determined as the area to be adjusted corresponding to the face contour, that is, the face area.
  • the area enclosed by each face key point whose face attribute is the nose (such as the aforementioned face key points numbered 16-20) can be determined as the area to be adjusted corresponding to the nose, that is, the nose area.
  • multiple preset anchor points may be calculated according to the coordinates of the face key points having the face attributes, and the face key points having the face attributes and the The area enclosed by a plurality of preset anchor points is determined as the area to be adjusted corresponding to the face attribute.
  • FIG. 6 is a schematic flowchart of another interface interaction method provided by an embodiment of the disclosure.
  • the calculation of multiple preset anchor points according to the face key points having the face attributes can be implemented through the following process:
  • the standard face image may be a face image that meets the requirements of the face key point detection model, and the face key points obtained on it may be used to calculate the expression of the preset anchor point.
  • each standard anchor point and the face standard key points at both ends of the line where it is located can be used to calculate the expression corresponding to the preset anchor point.
  • each standard anchor point corresponds to one preset anchor point, and the feature values of the face standard key points at both ends of the line where a standard anchor point is located are the same as the feature values of the face key points at both ends of the line where the corresponding preset anchor point is located.
  • denoting the face standard key points at the two ends of the line as A(X_A, Y_A) and B(X_B, Y_B), and letting λ be the ratio of the distance between the standard anchor point S and the face standard key point A to the distance between A and B, the standard anchor point S(X_s', Y_s') can be expressed as:
  • X_s' = (1 - λ)·X_A + λ·X_B, Y_s' = (1 - λ)·Y_A + λ·Y_B
  • S340 According to the expression of the standard anchor point coordinates and the coordinates of the key points of the human face, calculate and obtain multiple preset anchor points.
  • the face key points at both ends of the line where the preset anchor point is located can be obtained according to the feature values of the face standard key points at both ends of the line where the standard anchor point corresponding to each preset anchor point is located, and The coordinates of the key points of the face and the expression of the standard anchor point coordinates are calculated to obtain the coordinates of the preset anchor points.
  • the key points of the face with the face attributes may be connected end to end in sequence in a set order, and according to the expression of the standard anchor point coordinates, according to every two connected The coordinates of the key points of the human face are calculated to obtain one of the preset anchor points. In this way, multiple preset anchor points can be obtained.
  • the ordering rules used when connecting the face standard key points and when connecting the face key points are the same, which ensures that the face key points connected end to end in sequence conform to the expression of the standard anchor point coordinates.
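Because each standard anchor point is interpolated between the two key points at the ends of its line, transferring it onto a detected face amounts to reusing the same interpolation ratio on the detected key point coordinates. A minimal sketch, assuming (for illustration only) one preset anchor per segment at a fixed ratio `lam`:

```python
def preset_anchor(point_a, point_b, lam):
    """Linear interpolation between two connected face key points A and
    B. lam is the ratio |SA| / |AB| taken from the standard face image,
    so the same expression places the anchor on the detected face."""
    ax, ay = point_a
    bx, by = point_b
    return ((1 - lam) * ax + lam * bx, (1 - lam) * ay + lam * by)

def preset_anchors(keypoints, lam=0.5):
    """Connect the key points of one face attribute end to end and
    place one preset anchor on each segment (closing the loop back to
    the first point)."""
    n = len(keypoints)
    return [preset_anchor(keypoints[i], keypoints[(i + 1) % n], lam)
            for i in range(n)]
```

In the disclosure, a different ratio may apply to each segment (and some segments may carry no anchor); the fixed `lam` here only keeps the sketch short.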
  • FIG. 7 is a schematic flowchart of another interface interaction method provided by an embodiment of the disclosure.
  • the above-mentioned use of the face key point detection model to detect and acquire multiple face key points in the face part includes:
  • the face image in the face part may have a certain offset, which can be corrected by a face correction network to obtain a corrected face image.
  • the face key point detection model used in S1122 is the same as the above, and will not be repeated here.
  • the method further includes: displaying each region to be adjusted on the image to be processed.
  • displaying each area to be adjusted on the image to be processed can be achieved by highlighting or marking each area to be adjusted.
  • the areas to be adjusted corresponding to different face attributes can be covered with highlight layers of different colors, for example red for the eyes, brown for the nose, and yellow for the cheeks. In other embodiments, text, graphics, or other marks can be placed on the highlight layer, or directly on the area to be adjusted without using a highlight layer, so that areas with different face attributes are labeled with different text or graphics.
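The colored highlight layer described above amounts to alpha-blending a per-attribute color over the pixels of each area to be adjusted. A minimal per-pixel sketch, with hypothetical colors and blend factor:

```python
def highlight_pixel(pixel, color, alpha=0.4):
    """Alpha-blend a highlight color over one RGB pixel.
    pixel and color are (r, g, b) tuples with components in 0-255."""
    return tuple(round((1 - alpha) * p + alpha * c)
                 for p, c in zip(pixel, color))

# Hypothetical per-attribute highlight colors mirroring the examples
# in the text (red for eyes, brown for nose, yellow for cheeks).
HIGHLIGHT_COLORS = {
    "eye": (255, 0, 0),
    "nose": (150, 75, 0),
    "cheek": (255, 255, 0),
}
```

Applying `highlight_pixel` to every pixel inside a region's polygon produces the translucent colored overlay; real implementations would vectorize this over the image array.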
  • displaying each area to be adjusted on the image to be processed makes it possible to distinguish the different face areas clearly and quickly, so that the user can intuitively select the required adjustment function, reducing the interaction cost.
  • the area to be adjusted can also be used to indicate the adjustment of other parts of the human body in the image, for example, the legs, waist, etc. are marked by the area to be adjusted.
  • the selection method and the display method of the adjustment interface are the same as those for the human face and are not repeated here.
  • a photo including a face image can be taken through a smart phone, and the photo is stored in the flash memory of the smart phone.
  • after the user selects the photo, processing first takes place in the background of the smartphone: a corrected face image is obtained through the face detection model and the face correction network.
  • the corrected face image is then detected by the face key point detection model to obtain multiple face key points on the face image, and the preset anchor points are calculated from the obtained face key points.
  • the range coordinates of each area to be adjusted are obtained from the face key points and the calculated preset anchor points. These are then displayed on the front end of the smartphone, that is, the different areas to be adjusted are highlighted on the face image in the photo shown on the phone screen.
  • FIG. 8 is a schematic structural diagram of an interface interaction device provided by an embodiment of the disclosure.
  • an embodiment of the present disclosure also provides an interface interaction device, which includes: a recognition module 401 configured to recognize and acquire multiple regions to be adjusted in the face portion of the image to be processed.
  • the receiving module 402 is configured to receive a user's operation instruction, and the operation instruction is used to select a target area in the image to be processed.
  • the first display module 403 is configured to determine the area to be adjusted to which the target area belongs according to the location information of the target area, and display the adjustment interface corresponding to the area to be adjusted to which the target area belongs, wherein each area to be adjusted corresponds to an adjustment interface, and the adjustment The interface contains the function options corresponding to the area to be adjusted.
  • the recognition module 401 is specifically configured to recognize the face part in the image to be processed.
  • the face key point detection model is used to detect the face part to obtain multiple face key points in the face part; according to the face key points, multiple areas to be adjusted in the face part are determined.
  • each face key point has a face attribute, and the face attribute indicates the area to be adjusted to which the face key point belongs.
  • the recognition module 401 is specifically configured to: obtain the face key point coordinates and the face key point feature value of the face key point; for each face attribute, according to the face key point feature value and the face key point feature The preset mapping relationship between the value and the face attribute is used to obtain the face key point with the face attribute; according to the coordinates of the face key point with the face attribute, the area to be adjusted corresponding to the face attribute is determined.
  • the specific method for the recognition module 401 to determine the area to be adjusted corresponding to a face attribute according to the coordinates of the face key points having that face attribute may be:
  • the face key points having the face attribute are used as anchor points corresponding to the face attribute; or, multiple preset anchor points are calculated from those face key points, and the face key points together with the preset anchor points are used as anchor points corresponding to the face attribute, where each preset anchor point includes a face attribute and has the same face attribute as the face key points;
  • according to the anchor points of the face attribute, the area to be adjusted corresponding to the face attribute is determined.
  • FIG. 9 is a schematic structural diagram of an interface interaction device provided by an embodiment of the disclosure.
  • a calculation module 404 is further included.
  • the calculation module 404 is configured to: obtain multiple face standard key points on a standard face image; set multiple standard anchor points on the lines between the face standard key points; calculate, according to the coordinates of the face standard key points at the two ends of the line on which each standard anchor point lies and an interpolation algorithm, an expression for the standard anchor point coordinates in terms of the face standard key points at the two ends of that line; and calculate multiple preset anchor points according to the expression for the standard anchor point coordinates and the coordinates of the face key points.
  • the recognition module 401 may be specifically configured to correct the direction of the face part to a preset direction to obtain a corrected face image, and to use a face key point detection model to detect and obtain multiple face key points in the corrected face image.
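One plausible way to realize the direction correction described here is to rotate the face so that a reference line (for example, the line between the two eye centers) becomes horizontal before running the key point detector. The disclosure does not fix this choice; the sketch below assumes it and rotates coordinates rather than image pixels.

```python
import math

def correction_angle(left_eye, right_eye):
    """Angle (radians) by which the face must be rotated so that the eye
    line becomes horizontal -- one plausible 'preset direction'."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return -math.atan2(dy, dx)

def rotate_point(point, angle, center=(0.0, 0.0)):
    """Rotate a point around `center` by `angle` radians."""
    x, y = point[0] - center[0], point[1] - center[1]
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    return (x * cos_a - y * sin_a + center[0],
            x * sin_a + y * cos_a + center[1])

# Example: eyes tilted 45 degrees; after correction the eye line is horizontal.
left, right = (0.0, 0.0), (1.0, 1.0)
angle = correction_angle(left, right)
corrected_right = rotate_point(right, angle, center=left)
```

In a full implementation the same rotation would be applied to the image itself (e.g. with an affine warp) before detection, and its inverse applied to the detected key points to map them back to the original image.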
  • FIG. 10 is a schematic structural diagram of an interface interaction device provided by an embodiment of the disclosure.
  • a second display module 405 is further included, configured to display each area to be adjusted on the image to be processed.
  • the foregoing device is configured to execute the method provided in the foregoing embodiment, and its implementation principles and technical effects are similar, and will not be repeated here.
  • the above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application-specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), one or more microprocessors (digital signal processor, DSP for short), or one or more field-programmable gate arrays (Field Programmable Gate Array, FPGA for short), etc.
  • the processing element may be a general-purpose processor, such as a central processing unit (CPU for short) or other processors that can call program codes.
  • these modules can be integrated together and implemented in the form of a system-on-a-chip (SOC for short).
  • FIG. 11 is a schematic diagram of a structure of an electronic device provided by an embodiment of the disclosure.
  • the electronic device includes: a processor 501, a computer-readable storage medium 502, and a bus 503, where:
  • the electronic device may include one or more processors 501, a bus 503, and a computer-readable storage medium 502, where the computer-readable storage medium 502 is configured to store a program, and the processor 501 is communicatively connected to the computer-readable storage medium 502 through the bus 503.
  • the processor 501 invokes a program stored in the computer-readable storage medium 502 to execute the foregoing method embodiment.
  • the electronic device can be a general-purpose computer, a server, or a mobile terminal, etc., which is not limited here.
  • the electronic device is configured to implement the interface interaction method of the present disclosure.
  • the processor 501 may include one or more processing cores (for example, a single-core processor or a multi-core processor).
  • the processor may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction set computing (RISC) processor, or a microprocessor, etc., or any combination thereof.
  • the computer-readable storage medium 502 may include mass storage, removable storage, volatile read-write memory, or read-only memory (ROM), or any combination thereof.
  • mass storage may include magnetic disks, optical disks, solid-state drives, etc.
  • removable storage may include flash drives, floppy disks, optical disks, memory cards, zip disks, tapes, etc.
  • volatile read-write storage may include random access memory (RAM); RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor-based RAM (T-RAM), and zero-capacitor RAM (Zero-RAM), etc.
  • ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (PEROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), and digital versatile disc ROM, etc.
  • for ease of description, only one processor 501 is described in the electronic device. However, it should be noted that the electronic device in the present disclosure may also include multiple processors 501, so the steps described as performed by one processor may also be performed jointly or separately by multiple processors. For example, if the processor 501 of the electronic device executes step A and step B, step A and step B may also be executed by two different processors or executed separately within one processor; for example, the first processor performs step A and the second processor performs step B, or the first and second processors perform steps A and B together.
  • the present disclosure also provides a program product, such as a computer-readable storage medium, including a program, which is used to execute the foregoing method embodiments when executed by a processor.
  • the interface interaction method and device identify and acquire multiple regions to be adjusted in the face portion of the image to be processed, select the target region according to the user's operation instruction, and display the adjustment interface corresponding to the region to be adjusted to which the target region belongs; the adjustment interface contains the function options corresponding to that region and intuitively shows the region that the adjustment interface can adjust, so users can get started quickly without learning or trying out the application, effectively reducing the interaction cost.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit may be implemented in the form of hardware, or may be implemented in the form of hardware plus software functional units.
  • the above-mentioned integrated unit implemented in the form of a software functional unit may be stored in a computer readable storage medium.
  • the above-mentioned software functional unit is stored in a storage medium and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor execute part of the steps of the methods of the various embodiments of the present disclosure.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disk, etc.
  • the interface interaction method and device provided by the present disclosure enable the terminal to intelligently provide the user with corresponding image processing function options according to the area the user selects for adjustment, so the user can get started quickly without learning or trying out the application, effectively reducing interaction costs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to the field of interface interaction techniques, and more particularly, relates to an interface interaction method and device. The interface interaction method comprises: performing recognition, and acquiring multiple regions to undergo adjustment from a facial portion in an image to be processed (S110); receiving an operation instruction from a user, the operation instruction being used to select a target region in the image (S120); and determining, according to location information of the target region, the region to undergo adjustment containing the target region, and displaying an adjustment interface corresponding to the region to undergo adjustment containing the target region (S130), wherein each region to undergo adjustment corresponds to one adjustment interface, and the adjustment interface contains function options corresponding to the region to undergo adjustment. The invention realizes intuitive display of a region to undergo adjustment and an adjustment interface corresponding thereto, such that a user can operate an application easily without needing to learn or practice using the interface, thereby effectively reducing interaction costs.

Description

Interface interaction method and device

Cross-reference to related applications

This application claims priority to the Chinese patent application No. 2019103750591, titled "Interface Interaction Method and Device", filed with the Chinese Patent Office on May 7, 2019, the entire content of which is incorporated herein by reference.
Technical field

The present disclosure relates to the technical field of interface interaction, and in particular to an interface interaction method and device.

Background

Portrait beautification is one of the most commonly used functions in photo retouching applications. It can be used to modify and beautify portraits in pictures, for example by smoothing skin, whitening, or slimming the face; it is widely used and popular with users.

However, in existing beautification applications, to use each beautification function the user must locate the corresponding entry in a menu bar or option bar and click it to open the corresponding adjustment interface before the portrait picture can be modified and beautified.

It can be seen that existing beautification applications require users to learn and experiment in advance to become familiar with their usage and workflow, so the interaction cost is relatively high.
Summary

One of the objectives of the present disclosure is to provide an interface interaction method and device that address the above-mentioned deficiencies in the prior art.

To achieve the above objectives, the technical solutions adopted in the present disclosure are as follows:

In a first aspect, the present disclosure provides an interface interaction method, including:

identifying and acquiring multiple areas to be adjusted in the face part of an image to be processed; receiving an operation instruction from a user, the operation instruction being used to select a target area in the image to be processed; and determining, according to the position information of the target area, the area to be adjusted to which the target area belongs, and displaying the adjustment interface corresponding to that area, where each area to be adjusted corresponds to one adjustment interface, and the adjustment interface contains function options corresponding to the area to be adjusted.
Optionally, identifying and acquiring multiple areas to be adjusted in the face part of the image to be processed includes: identifying the face part in the image to be processed; detecting the face part with a face key point detection model to obtain multiple face key points in the face part; and determining the multiple areas to be adjusted in the face part according to the face key points.

Optionally, each face key point has a face attribute, and the face attribute indicates the area to be adjusted to which the face key point belongs; determining the multiple areas to be adjusted in the face part according to the face key points includes:

obtaining the face key point coordinates and face key point feature values of the face key points; for each face attribute, determining the face key points having that face attribute according to the face key point feature values and a preset mapping relationship between face key point feature values and face attributes; and determining the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having that face attribute.

Optionally, determining the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having the face attribute includes:

determining the area enclosed by the face key points having the face attribute as the area to be adjusted corresponding to the face attribute; or,

calculating multiple preset anchor points according to the coordinates of the face key points having the face attribute, and determining the area enclosed by those face key points and the preset anchor points as the area to be adjusted corresponding to the face attribute.

Optionally, determining the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having the face attribute includes:

using the face key points having the face attribute as anchor points corresponding to the face attribute; or, calculating multiple preset anchor points according to the face key points having the face attribute, and using those face key points together with the preset anchor points as anchor points corresponding to the face attribute, where each preset anchor point includes a face attribute and has the same face attribute as the face key points;

and determining, according to the anchor points of the face attribute, the area to be adjusted corresponding to the face attribute.

Optionally, calculating multiple preset anchor points according to the coordinates of the face key points having the face attribute includes: obtaining multiple face standard key points on a standard face image, and connecting the face standard key points end to end in a set order so that there is a connecting line between every two connected face standard key points; setting a standard anchor point on at least one of the connecting lines; calculating, according to the coordinates of the face standard key points at the two ends of the line on which each standard anchor point lies and an interpolation algorithm, an expression for the standard anchor point coordinates in terms of the face standard key points at the two ends of that line; and calculating the multiple preset anchor points according to the expression for the standard anchor point coordinates and the coordinates of the face key points.

Optionally, calculating the multiple preset anchor points according to the expression for the standard anchor point coordinates and the coordinates of the face key points includes: connecting the face key points having the face attribute end to end in the set order; and calculating one preset anchor point from the coordinates of every two connected face key points according to the expression for the standard anchor point coordinates.
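The standard-anchor computation described above amounts to expressing each anchor as an interpolation between the two key points at the ends of its line on the standard face, then re-evaluating that expression on two connected key points of the actual face. The disclosure does not fix a particular interpolation algorithm; the sketch below assumes simple linear interpolation, with the interpolation ratio derived from the standard face.

```python
def anchor_ratio(standard_a, standard_b, standard_anchor):
    """On the standard face, find t such that
    anchor = a + t * (b - a) along the line from a to b
    (the x-axis is used here; the standard line is assumed non-vertical)."""
    return (standard_anchor[0] - standard_a[0]) / (standard_b[0] - standard_a[0])

def preset_anchor(key_a, key_b, t):
    """Apply the standard-anchor expression to two connected face key
    points of the actual face to obtain one preset anchor point."""
    return (key_a[0] + t * (key_b[0] - key_a[0]),
            key_a[1] + t * (key_b[1] - key_a[1]))

# Standard face: the anchor sits 25% of the way from (0, 0) to (4, 0).
t = anchor_ratio((0.0, 0.0), (4.0, 0.0), (1.0, 0.0))
# Actual face: the same ratio applied between two detected key points.
anchor = preset_anchor((10.0, 20.0), (18.0, 28.0), t)
```

Repeating `preset_anchor` for every pair of connected key points with the same face attribute yields the multiple preset anchor points that, together with the key points, enclose the area to be adjusted.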
Optionally, detecting the face part with a face key point detection model to obtain the multiple face key points includes: correcting the direction of the face part to a preset direction to obtain a corrected face image; and detecting the corrected face image with the face key point detection model to obtain the multiple face key points in the corrected face image.
Optionally, after identifying and acquiring the multiple areas to be adjusted in the face part of the image to be processed, the method further includes: displaying each area to be adjusted on the image to be processed.

In a second aspect, the present disclosure also provides an interface interaction device, including: a recognition module configured to identify and acquire multiple areas to be adjusted in the face part of an image to be processed; a receiving module configured to receive an operation instruction from a user, the operation instruction being used to select a target area in the image to be processed; and a first display module configured to determine, according to the position information of the target area, the area to be adjusted to which the target area belongs, and to display the adjustment interface corresponding to that area, where each area to be adjusted corresponds to one adjustment interface, and the adjustment interface contains function options corresponding to the area to be adjusted.

Optionally, the recognition module is specifically configured to identify the face part in the image to be processed; detect the face part with a face key point detection model to obtain multiple face key points in the face part; and determine the multiple areas to be adjusted in the face part according to the face key points.

Optionally, each face key point has a face attribute, and the face attribute indicates the area to be adjusted to which the face key point belongs;

the recognition module is specifically configured to: obtain the face key point coordinates and face key point feature values of the face key points; obtain the face key points having a given face attribute according to the face key point feature values and a preset mapping relationship between face key point feature values and face attributes; and determine the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having that face attribute.

Optionally, the recognition module is specifically configured to:

use the face key points having the face attribute as anchor points corresponding to the face attribute; or, calculate multiple preset anchor points according to the face key points having the face attribute, and use those face key points together with the preset anchor points as anchor points corresponding to the face attribute, where each preset anchor point includes a face attribute and has the same face attribute as the face key points;

and determine, according to the anchor points of the face attribute, the area to be adjusted corresponding to the face attribute.

Optionally, the device further includes a calculation module configured to: obtain multiple face standard key points on a standard face image; set multiple standard anchor points on the lines between the face standard key points; calculate, according to the coordinates of the face standard key points at the two ends of the line on which each standard anchor point lies and an interpolation algorithm, an expression for the standard anchor point coordinates in terms of the face standard key points at the two ends of that line; and calculate multiple preset anchor points according to the expression for the standard anchor point coordinates and the coordinates of the face key points.

Optionally, the recognition module is specifically configured to: correct the direction of the face part to a preset direction to obtain a corrected face image; and detect the corrected face image with a face key point detection model to obtain multiple face key points in the corrected face image.

Optionally, the device further includes a second display module configured to display each area to be adjusted on the image to be processed.

In a third aspect, the present disclosure provides an electronic device including a processor, a storage medium, and a bus, where the storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the storage medium through the bus, and the processor executes the machine-readable instructions to perform the steps of any method of the first aspect.

In a fourth aspect, the present disclosure also provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of any method of the first aspect are executed.
Description of the drawings

In order to explain the technical solutions of the present disclosure more clearly, the drawings needed in the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of the present disclosure and should not be regarded as limiting its scope; those of ordinary skill in the art can obtain other related drawings from these drawings without creative work.

FIG. 1 is a schematic flowchart of an interface interaction method provided by an embodiment of the disclosure;

FIG. 2 is another schematic flowchart of the interface interaction method provided by an embodiment of the disclosure;

FIG. 3 is a schematic diagram of an area to be adjusted in the interface interaction method provided by an embodiment of the disclosure;

FIG. 4 is yet another schematic flowchart of the interface interaction method provided by an embodiment of the disclosure;

FIG. 5 is yet another schematic flowchart of the interface interaction method provided by an embodiment of the disclosure;

FIG. 6 is yet another schematic flowchart of the interface interaction method provided by an embodiment of the disclosure;

FIG. 7 is yet another schematic flowchart of the interface interaction method provided by an embodiment of the disclosure;

FIG. 8 is a schematic structural diagram of an interface interaction device provided by an embodiment of the disclosure;

FIG. 9 is another schematic structural diagram of the interface interaction device provided by an embodiment of the disclosure;

FIG. 10 is yet another schematic structural diagram of the interface interaction device provided by an embodiment of the disclosure;

FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
Detailed description

To make the objectives, technical solutions, and advantages of the present disclosure clearer, the technical solutions in the present disclosure are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them.

FIG. 1 is a schematic flowchart of an interface interaction method provided by an embodiment of the disclosure. The execution subject of the interface interaction method may be a terminal with image processing capabilities, such as a desktop computer, a notebook computer, a tablet computer, a smartphone, or a camera, which is not limited here.

As shown in FIG. 1, the interface interaction method includes:
S110: Identify and acquire multiple areas to be adjusted in the face part of the image to be processed.

In some implementations, the sources of the image to be processed include images pre-stored on the device, frame images of videos pre-stored on the device, and images obtained through an image acquisition apparatus connected to the device, such as a built-in camera or an external camera, which is not limited here.

It should be noted that each area to be adjusted represents a facial feature to be adjusted, such as the eyes, nose, chin, forehead, or cheeks, and multiple areas to be adjusted may represent the same facial feature; for example, the left-eye and right-eye areas both represent the eyes.

In one example, the areas to be adjusted may be divided according to the beautification functions provided by the terminal running the interface interaction method. For example, if the terminal provides four beautification function options A, B, C, and D, the areas of the face image corresponding to these four function options may each be set as an area to be adjusted.
S120、接收用户的操作指令,操作指令用于选择待处理图像中的目标区域。S120: Receive an operation instruction from a user, where the operation instruction is used to select a target area in the image to be processed.
一些实施方式中,由于执行该界面交互方法的设备不同,接收到的用户的操作指令也有所不同。例如,对于触控设备,该用户的操作指令可以是用户的触控点击操作对应的指令,可以根据触控点击操作的位置选择待处理图像中的目标区域;对于通过如键盘、鼠标、按键进行控制的设备,该用户的操作指令可以是按键操作对应的指令,可以根据***作的按键选择待处理图像中的目标区域,但不以此为限。In some embodiments, due to different devices that execute the interface interaction method, the received user operation instructions are also different. For example, for a touch device, the user's operation instruction may be the instruction corresponding to the user's touch and click operation, and the target area in the image to be processed can be selected according to the position of the touch and click operation; For the controlled device, the user's operation instruction may be an instruction corresponding to a key operation, and the target area in the image to be processed may be selected according to the operated key, but is not limited to this.
S130: Determine, according to the position information of the target area, the area to be adjusted to which the target area belongs, and display the adjustment interface corresponding to that area to be adjusted.
Each area to be adjusted corresponds to one adjustment interface, and the adjustment interface contains the function options corresponding to that area.
Optionally, the terminal may store a correspondence (or mapping) between the areas to be adjusted and the function options. After determining the area to be adjusted to which the target area belongs, the terminal may determine, according to the stored mapping, the function options to be provided to the user, and then display an adjustment interface including the determined function options.
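The stored correspondence between areas to be adjusted and function options can be sketched as a simple lookup table; the region names and option names below are assumptions for illustration only, since the actual options depend on the terminal.

```python
REGION_OPTIONS = {  # hypothetical mapping; actual options depend on the terminal
    "eyes": ["enlarge_eyes", "brighten_eyes"],
    "chin": ["slim_face"],
    "nose": ["narrow_nose"],
}

def adjustment_options(region):
    """Function options to show in the adjustment interface for a region."""
    return REGION_OPTIONS.get(region, [])
```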
Exemplarily, a function option may be configured to adjust the corresponding area to be adjusted. For example, if the area to be adjusted is the eyes, the function option may be configured to perform eye-related adjustments such as eye enlargement; if the area to be adjusted is the chin, the function option may be configured to perform face slimming, and so on, without limitation here.
In this embodiment, multiple areas to be adjusted in the face portion of the image to be processed are recognized and acquired, the target area is selected according to the user's operation instruction, and the adjustment interface corresponding to the area to be adjusted to which the target area belongs is displayed. Because the adjustment interface contains the function options corresponding to that area, it intuitively shows which area it can adjust. In other words, with the interface interaction method provided in this embodiment, the terminal can intelligently provide the user with the image processing function options corresponding to the area the user selects for adjustment, so the user can get started quickly without having to learn or experiment with the application, which effectively reduces the interaction cost.
FIG. 2 is a schematic flowchart of another interface interaction method provided by the present disclosure, and FIG. 3 is a schematic diagram of the areas to be adjusted in the interface interaction method.
Optionally, as shown in FIG. 2, recognizing and acquiring multiple areas to be adjusted in the face portion of the image to be processed includes:
S111: Recognize the face portion in the image to be processed.
In some embodiments, the face portion in the image to be processed may be recognized by a face detection model, which may include, without limitation, a You Only Look Once (YOLO) model, a Fast Region-based Convolutional Neural Network (Fast R-CNN) model, or a Multi-task Cascaded Convolutional Neural Network (MTCNN) model.
Optionally, after the position of the face frame is determined by the face detection model, because the face in the frame may be tilted at an angle, the face in the frame may be straightened to obtain an aligned face, which can then be used for face key point detection. Exemplarily, the coordinates of the two eyes recognized in the face portion may serve as the alignment reference.
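A minimal sketch of using the two eye coordinates as the alignment reference: the angle of the line through the eyes gives the rotation needed to straighten the face crop. This is the standard two-point angle formula; the disclosure does not mandate a specific one.

```python
import math

def alignment_angle(left_eye, right_eye):
    """Angle (degrees) by which the inter-eye line deviates from horizontal.

    Rotating the face crop by -angle about its center straightens the face.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```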
S112: Use a face key point detection model to detect and acquire multiple face key points in the face portion.
Each face key point corresponds to a face attribute. A face attribute may refer to the description of a location on the face, the description of a feature, or the like.
In one possible implementation, the face key points may be anchor points that locate the face contour and the contours of the facial features, and the face key point detection model may include a heatmap regression model, an MTCNN model, or the like. Referring to FIG. 3, face key points 201 (marked with black dots in FIG. 3) are acquired by the face key point detection model; typically tens to hundreds of face key points 201 can be acquired for one face, and each face key point 201 corresponds to a face attribute, such as the eyes, nose, chin, forehead, or cheeks.
In another possible implementation, the face key points are key feature points on the facial image, and each key feature point has a face attribute indicating the area to be adjusted to which it belongs on the face, for example, the eye area, nose area, mouth area, eyebrow area, or another facial-part area. Correspondingly, the acquired face key points may be eye corner points, the nose tip, mouth corner points, eyebrow peaks, contour points of the facial parts, and the like.
S113: Determine multiple areas to be adjusted in the face portion according to the face key points.
In some embodiments, multiple face key points 201 of the same face attribute enclose the area to be adjusted corresponding to that attribute, and one face attribute may correspond to more than one area to be adjusted. Referring to FIG. 3, the areas to be adjusted may include, without limitation, a chin area 203, a cheek area 204, a cheekbone area 205, an eye area 206, a forehead area 207, and a nose area 208.
FIG. 4 is another schematic flowchart of the interface interaction method provided by an embodiment of the present disclosure.
Optionally, as shown in FIG. 4, determining multiple areas to be adjusted in the face portion according to the face key points includes:
S1131: Acquire the coordinates and the feature values of the face key points.
In some embodiments, the coordinates of a face key point may be the coordinates, in the image to be processed, of an anchor point locating the face contour or a facial-feature contour, and the feature value of a face key point may be its number or order, without limitation here.
S1132: For each face attribute, acquire the face key points having that attribute according to the feature values of the face key points and a preset mapping relationship between feature values and face attributes.
In some embodiments, taking the case where the feature value of a face key point is its number as an example, the preset mapping relationship may map key point numbers to face attributes; for example, the face attribute of the key points numbered 1-15 is the face contour and the face attribute of the key points numbered 16-20 is the nose, but this is not limiting.
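Following the numbering example above (1-15 for the face contour, 16-20 for the nose, both hypothetical), the preset mapping and the lookup of key points by attribute can be sketched as:

```python
# Hypothetical numbering following the example in the text; real key point
# detection models use their own numbering schemes.
ATTRIBUTE_NUMBERS = {
    "face_contour": range(1, 16),  # key points numbered 1-15
    "nose": range(16, 21),         # key points numbered 16-20
}

def attribute_of(number):
    """Look up the face attribute of a key point by its number (feature value)."""
    for attribute, numbers in ATTRIBUTE_NUMBERS.items():
        if number in numbers:
            return attribute
    return None

def keypoints_with_attribute(attribute, keypoints):
    """Filter (number, (x, y)) pairs down to those having the given attribute."""
    return [kp for kp in keypoints if attribute_of(kp[0]) == attribute]
```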
S1133: Determine the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having that attribute.
Continuing the previous example, the anchor points corresponding to the coordinates of the key points numbered 1-15 enclose a closed region in the image to be processed, which is the face area, and the anchor points corresponding to the coordinates of the key points numbered 16-20 enclose another closed region, which is the nose area.
FIG. 5 is another schematic flowchart of the interface interaction method provided by an embodiment of the present disclosure.
Optionally, as shown in FIG. 5, determining the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having that attribute includes:
S1133a: Use the face key points having the face attribute as the anchor points corresponding to that attribute.
S1133a is the method described in steps S1131 and S1132 above and is not repeated here.
Or it includes:
S1133b: Calculate multiple preset anchor points according to the face key points having the face attribute, and use the face key points having the attribute together with the multiple preset anchor points as the anchor points corresponding to that attribute.
The preset anchor points carry face attributes, and a preset anchor point has the same face attribute as the face key points from which it is computed.
It should be noted that, referring to FIG. 3, the preset anchor points 202 are marked with white dots in FIG. 3. The preset anchor points 202 supplement the face key points, so that the edge of the area to be adjusted determined by the face key points and the preset anchor points is smoother and its extent is more accurate.
S1133c: Determine the area to be adjusted corresponding to the face attribute according to the anchor points of that attribute.
The anchor points of a face attribute may include the face feature points and the preset anchor points having the same attribute. The region that the anchor points of a face attribute delimit on the image to be processed is the area to be adjusted corresponding to that attribute.
Optionally, in this embodiment, step S1133 may also be implemented in one of the following ways:
In the first way, the region enclosed by the face key points having the face attribute may be determined as the area to be adjusted corresponding to that attribute. For example, the region enclosed by the key points whose face attribute is the face contour (such as the key points numbered 1-15 above) may be determined as the area to be adjusted corresponding to the face contour, that is, the face area. For another example, the region enclosed by the key points whose face attribute is the nose (such as the key points numbered 16-20 above) may be determined as the area to be adjusted corresponding to the nose, that is, the nose area.
In the second way, multiple preset anchor points may be calculated according to the coordinates of the face key points having the face attribute, and the region enclosed by those face key points together with the multiple preset anchor points is determined as the area to be adjusted corresponding to that attribute.
FIG. 6 is another schematic flowchart of the interface interaction method provided by an embodiment of the present disclosure.
Optionally, as shown in FIG. 6, calculating multiple preset anchor points according to the face key points having the face attribute may be implemented through the following process:
S310: Acquire multiple face standard key points on a standard face image, and connect them end to end in a set order, so that a line segment exists between every two connected face standard key points.
In some embodiments, the standard face image may be a face image that meets the requirements of the face key point detection model, and the face standard key points acquired on it are used to calculate the expressions of the preset anchor points.
S320: Set a standard anchor point on at least one of the line segments.
Multiple standard anchor points may be set according to the actual application. Each standard anchor point, together with the face standard key points at the two ends of the line segment on which it lies, is used to calculate the expression of the corresponding preset anchor point, and each standard anchor point corresponds to one preset anchor point, where the feature values of the face standard key points at the two ends of the segment on which the standard anchor point lies are the same as the feature values of the face key points at the two ends of the segment on which the preset anchor point lies.
S330: According to the coordinates of the face standard key points at the two ends of the line segment on which each standard anchor point lies and an interpolation algorithm, calculate an expression for the standard anchor point coordinates in terms of the face standard key points at the two ends of the segment.
In some embodiments, if the coordinates of a standard anchor point S are (X_s, Y_s) and the coordinates of the face standard key points A and B at the two ends of the line segment on which S lies are (X_a, Y_a) and (X_b, Y_b), the expression for the standard anchor point S(X_s', Y_s') may be:

X_s' = (X_a + λ·X_b) / (1 + λ)
Y_s' = (Y_a + λ·Y_b) / (1 + λ)

where λ is the ratio of the distance |AS| between the standard anchor point S and the face standard key point A to the distance |SB| between the standard anchor point S and the face standard key point B, that is, λ = |AS| / |SB|.
λ is calculated from the coordinates as:

λ = √((X_s − X_a)² + (Y_s − Y_a)²) / √((X_b − X_s)² + (Y_b − Y_s)²)
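Assuming the standard internal-division interpolation implied by λ = |AS|/|SB|, the ratio measured on the standard face and the resulting preset anchor on a new face can be sketched as follows (`math.dist` requires Python 3.8+):

```python
import math

def anchor_ratio(a, s, b):
    """lambda = |AS| / |SB|, measured on the standard face image."""
    return math.dist(a, s) / math.dist(s, b)

def preset_anchor(a, b, lam):
    """Internal division of segment AB in ratio lam:1, i.e. (A + lam*B)/(1 + lam)."""
    return ((a[0] + lam * b[0]) / (1 + lam),
            (a[1] + lam * b[1]) / (1 + lam))
```

Applying `preset_anchor` to the corresponding key point pair of a new face, with the λ learned on the standard face, reproduces the anchor at the same relative position along the segment.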
S340: Calculate and acquire multiple preset anchor points according to the expressions for the standard anchor point coordinates and the coordinates of the face key points.
In some embodiments, the face key points at the two ends of the segment on which a preset anchor point lies may be acquired according to the feature values of the face standard key points at the two ends of the segment on which the corresponding standard anchor point lies, and the coordinates of the preset anchor point are then calculated from the coordinates of those face key points and the expression for the standard anchor point coordinates.
In other embodiments, the face key points having the face attribute may be connected end to end in the set order, and one preset anchor point is calculated, according to the expression for the standard anchor point coordinates, from the coordinates of every two connected face key points. In this way, multiple preset anchor points can be obtained.
The order rule used when connecting the face standard key points is the same as that used when connecting the face key points, to ensure that the face key points connected end to end conform to the expression for the standard anchor point coordinates.
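The end-to-end scheme above can be sketched as walking consecutive key point pairs of a closed chain and emitting one preset anchor per pair, reusing the per-edge ratios λ obtained from the standard face; the function name and the closed-chain assumption are illustrative.

```python
def preset_anchors(keypoints, ratios):
    """One preset anchor per edge of the closed key point chain.

    ratios[i] is the lambda learned for edge i on the standard face image.
    """
    anchors = []
    n = len(keypoints)
    for i in range(n):
        (xa, ya), (xb, yb) = keypoints[i], keypoints[(i + 1) % n]
        lam = ratios[i]
        # Internal division of the edge in ratio lam:1.
        anchors.append(((xa + lam * xb) / (1 + lam),
                        (ya + lam * yb) / (1 + lam)))
    return anchors
```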
FIG. 7 is another schematic flowchart of the interface interaction method provided by an embodiment of the present disclosure.
Optionally, as shown in FIG. 7, using the face key point detection model to detect and acquire multiple face key points in the face portion includes:
S1121: Correct the direction of the face portion to a preset direction to obtain a corrected face image.
In some embodiments, the face image in the face portion may have a certain offset, which can be corrected by a face correction network to obtain a corrected face image.
S1122: Use the face key point detection model to detect and acquire multiple face key points in the corrected face image.
The face key point detection model used in S1122 is the same as that described above and is not repeated here.
Optionally, after recognizing and acquiring the multiple areas to be adjusted in the face portion of the image to be processed, the method further includes: displaying each area to be adjusted on the image to be processed.
In some embodiments, displaying each area to be adjusted on the image to be processed may be implemented by highlighting or by marking each area. For example, according to the extent of each area to be adjusted, the areas corresponding to different face attributes may be covered with highlight layers of different colors, for example, red for the eyes, brown for the nose, and yellow for the cheeks. In other embodiments, the areas to be adjusted with different face attributes may be annotated with different text or graphics, either on the highlight layer or directly on the area without a highlight layer; for example, different areas to be adjusted may be labeled directly with Chinese characters, or with icons corresponding to the areas, but this is not limiting.
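Per-attribute highlight layers can be realized by alpha-blending a color over the pixels of each region; the palette below merely mirrors the example colors in the text and is not mandated by the method.

```python
HIGHLIGHT_COLORS = {  # example palette from the text; purely illustrative
    "eyes": (255, 0, 0),      # red
    "nose": (139, 69, 19),    # brown
    "cheeks": (255, 255, 0),  # yellow
}

def highlight_pixel(base_rgb, attribute, alpha=0.35):
    """Alpha-blend the attribute's highlight color over one image pixel."""
    overlay = HIGHLIGHT_COLORS[attribute]
    return tuple(round((1 - alpha) * b + alpha * o)
                 for b, o in zip(base_rgb, overlay))
```

In practice the blend is applied to every pixel inside the region polygon (or vectorized over the region's mask).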
It should be noted that the purpose of displaying each area to be adjusted on the image to be processed is to distinguish the different face areas clearly and quickly, so that the user can intuitively select the required adjustment function, which reduces the interaction cost.
Optionally, when the image to be processed also contains parts of a human body other than the face, the areas to be adjusted may also be used to indicate other body parts to adjust, for example, the legs or the waist. The selection method and the way the adjustment interface is displayed are the same as for the face portion and are not repeated here.
In the following, taking a smartphone as the execution body, an application example of the interface interaction method provided by the present disclosure is given to explain the method.
A photo containing a face image may be taken with the smartphone and stored in its flash memory. After the user selects the photo, processing first runs in the background of the smartphone: a corrected face image is obtained through the face detection model and the face correction network; the face key point detection model then detects the face image to acquire multiple face key points; preset anchor points are calculated from the acquired face key points; and the range coordinates of each area to be adjusted are obtained from the face key points and the calculated preset anchor points. After the range coordinates of the areas to be adjusted are obtained, the areas are presented on the front end of the smartphone, that is, the different areas to be adjusted are highlighted on the face image in the photo displayed on the screen.
The device then waits for the user's operation instruction. If the area selected by the operation instruction falls within an area to be adjusted, the display jumps to the adjustment interface containing the function options corresponding to that area, and these function options may be configured to adjust the area to achieve a beautification effect. If the selected area does not fall within any area to be adjusted, the highlight layer may flash to prompt the user to select again.
The above example is only one possible implementation and is not mandatory. It should be clear to those skilled in the art that various modifications and variations can be made to the solution of the present disclosure. Any modifications and variations made within the spirit and principles of the disclosed solution shall fall within the protection scope of the present disclosure.
FIG. 8 is a schematic structural diagram of an interface interaction apparatus provided by an embodiment of the present disclosure.
As shown in FIG. 8, an embodiment of the present disclosure further provides an interface interaction apparatus, including: a recognition module 401 configured to recognize and acquire multiple areas to be adjusted in the face portion of the image to be processed; a receiving module 402 configured to receive an operation instruction from a user, where the operation instruction is used to select a target area in the image to be processed; and a first display module 403 configured to determine, according to the position information of the target area, the area to be adjusted to which the target area belongs and display the adjustment interface corresponding to that area, where each area to be adjusted corresponds to one adjustment interface and the adjustment interface contains the function options corresponding to the area.
Optionally, the recognition module 401 is specifically configured to: recognize the face portion in the image to be processed; detect the face portion using the face key point detection model to obtain multiple face key points in the face portion; and determine multiple areas to be adjusted in the face portion according to the face key points.
Optionally, each face key point has a face attribute, and the face attribute indicates the area to be adjusted to which the face key point belongs.
In this case, the recognition module 401 is specifically configured to: acquire the coordinates and feature values of the face key points; for each face attribute, acquire the face key points having that attribute according to the feature values of the face key points and the preset mapping relationship between feature values and face attributes; and determine the area to be adjusted corresponding to the attribute according to the coordinates of the face key points having it.
Optionally, the specific way in which the recognition module 401 determines the area to be adjusted corresponding to the face attribute according to the coordinates of the face key points having the attribute may be:
using the face key points having the face attribute as the anchor points corresponding to the attribute; or calculating multiple preset anchor points according to the face key points having the attribute, and using the face key points having the attribute together with the multiple preset anchor points as the anchor points corresponding to the attribute, where the preset anchor points carry face attributes and have the same face attribute as the face key points;
and determining the area to be adjusted corresponding to the face attribute according to the anchor points of that attribute.
FIG. 9 is a schematic structural diagram of an interface interaction apparatus provided by an embodiment of the present disclosure.
Optionally, as shown in FIG. 9, the apparatus further includes a calculation module 404 configured to: acquire multiple face standard key points on a standard face image; set multiple standard anchor points on the line segments between the face standard key points; calculate, according to the coordinates of the face standard key points at the two ends of the segment on which each standard anchor point lies and an interpolation algorithm, an expression for the standard anchor point coordinates in terms of the face standard key points at the two ends of the segment; and calculate and acquire multiple preset anchor points according to the expressions for the standard anchor point coordinates and the coordinates of the face key points.
Optionally, the recognition module 401 may be specifically configured to correct the direction of the face portion to a preset direction to obtain a corrected face image, and to use the face key point detection model to detect and acquire multiple face key points in the corrected face image.
FIG. 10 is a schematic structural diagram of an interface interaction apparatus provided by an embodiment of the present disclosure.
Optionally, as shown in FIG. 10, the apparatus further includes a second display module 405 configured to display each area to be adjusted on the image to be processed.
The above apparatus is configured to execute the method provided in the foregoing embodiments; its implementation principles and technical effects are similar and are not repeated here.
The above modules may be one or more integrated circuits configured to implement the above methods, for example, one or more Application Specific Integrated Circuits (ASICs), one or more digital signal processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). As another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor capable of calling program code. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
FIG. 11 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
As shown in FIG. 11, the electronic device includes a processor 501, a computer-readable storage medium 502, and a bus 503.
The electronic device may include one or more processors 501, a bus 503, and a computer-readable storage medium 502, where the computer-readable storage medium 502 is configured to store a program, the processor 501 is communicatively connected to the computer-readable storage medium 502 through the bus 503, and the processor 501 calls the program stored in the computer-readable storage medium 502 to execute the above method embodiments.
The electronic device may be a general-purpose computer, a server, a mobile terminal, or the like, without limitation here. The electronic device is configured to implement the interface interaction method of the present disclosure.
It should be noted that the processor 501 may include one or more processing cores (for example, a single-core processor or a multi-core processor). By way of example only, the processor may include a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), an Application Specific Instruction-set Processor (ASIP), a Graphics Processing Unit (GPU), a Physics Processing Unit (PPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a microcontroller unit, a Reduced Instruction Set Computing (RISC) processor, a microprocessor, or the like, or any combination thereof.
The computer-readable storage medium 502 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), or the like, or any combination thereof. By way of example, mass storage may include magnetic disks, optical discs, and solid-state drives; removable storage may include flash drives, floppy disks, optical discs, memory cards, zip disks, and magnetic tapes; volatile read-write memory may include random-access memory (RAM), where RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), and zero-capacitor RAM (Z-RAM). By way of example, ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), and digital versatile disc ROM.
For ease of description, only one processor 501 is described in the electronic device. However, it should be noted that the electronic device in the present disclosure may also include multiple processors 501, so the steps described in the present disclosure as being performed by one processor may also be performed jointly or separately by multiple processors. For example, if the processor 501 of the electronic device executes step A and step B, it should be understood that step A and step B may also be executed jointly by two different processors or separately within one processor, for example, a first processor executes step A and a second processor executes step B, or the first processor and the second processor execute steps A and B together.
可选地,本公开还提供一种程序产品,例如计算机可读存储介质,包括程序,该程序在被处理器执行时用于执行上述方法实施例。Optionally, the present disclosure also provides a program product, such as a computer-readable storage medium, including a program, which is used to execute the foregoing method embodiments when executed by a processor.
In summary, the interface interaction method and device provided by the present disclosure identify and obtain multiple regions to be adjusted in the face part of an image to be processed, select a target region according to the user's operation instruction, and display the adjustment interface corresponding to the region to be adjusted to which the target region belongs. Because the adjustment interface contains the function options corresponding to that region, it intuitively shows which region the interface can adjust, so users can get started quickly without learning or trying out the application, effectively reducing the interaction cost.
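The interaction flow summarized above can be sketched as follows. This is a hypothetical illustration only, not the patent's implementation: the region names, the box-shaped region model, and the panel contents are all invented for the example.

```python
# Each region to be adjusted, modeled (for illustration only) as an
# axis-aligned bounding box: name -> (x_min, y_min, x_max, y_max).
REGIONS = {
    "left_eye":  (120, 140, 180, 170),
    "right_eye": (220, 140, 280, 170),
    "mouth":     (160, 240, 240, 280),
}

# Each region to be adjusted corresponds to one adjustment interface with
# its own function options (hypothetical option names).
PANELS = {
    "left_eye":  ["enlarge", "brighten"],
    "right_eye": ["enlarge", "brighten"],
    "mouth":     ["reshape", "recolor"],
}

def region_of(point):
    """Return the region to be adjusted that contains the tapped point, if any."""
    x, y = point
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def panel_for(point):
    """Resolve a user's tap to the function options of the matching panel."""
    name = region_of(point)
    return PANELS.get(name) if name else None
```

A tap at (150, 150) would land in the left-eye box and surface that region's options, while a tap outside every region surfaces nothing.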
In the several embodiments provided in the present disclosure, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and in an actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the various embodiments of the present disclosure may be integrated into one processing unit, may each exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform part of the steps of the methods described in the various embodiments of the present disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Industrial Applicability
The interface interaction method and device provided by the present disclosure enable a terminal to intelligently offer the user the image processing function options corresponding to the region the user has selected for adjustment, so users can get started quickly without learning or trying out the application, effectively reducing the interaction cost.

Claims (16)

  1. An interface interaction method, characterized in that the method comprises:
    identifying and obtaining a plurality of regions to be adjusted in a face part of an image to be processed;
    receiving an operation instruction from a user, the operation instruction being used to select a target region in the image to be processed; and
    determining, according to position information of the target region, the region to be adjusted to which the target region belongs, and displaying an adjustment interface corresponding to the region to be adjusted to which the target region belongs, wherein each region to be adjusted corresponds to one adjustment interface, and the adjustment interface includes function options corresponding to the region to be adjusted.
  2. The method according to claim 1, characterized in that the identifying and obtaining a plurality of regions to be adjusted of the face part in the image to be processed comprises:
    identifying the face part in the image to be processed;
    detecting the face part by using a face key point detection model to obtain a plurality of face key points in the face part; and
    determining the plurality of regions to be adjusted of the face part according to the face key points.
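As a hedged illustration of the last step of claim 2, deriving the regions to be adjusted from detected key points might, in the simplest case, reduce to a bounding box per group of key points. The grouping-by-index input and the box-shaped region model are assumptions of this sketch, not the claimed method:

```python
def regions_from_keypoints(keypoints, groups):
    """Derive one region per group of face key points.

    keypoints: list of (x, y) coordinates from a key point detection model.
    groups: mapping of region name -> list of indices into `keypoints`
            (a hypothetical grouping; claim 3 derives it from attributes).
    Returns region name -> bounding box (x_min, y_min, x_max, y_max).
    """
    boxes = {}
    for name, idxs in groups.items():
        xs = [keypoints[i][0] for i in idxs]
        ys = [keypoints[i][1] for i in idxs]
        boxes[name] = (min(xs), min(ys), max(xs), max(ys))
    return boxes
```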
  3. The method according to claim 2, characterized in that each face key point has a face attribute, the face attribute indicating the region to be adjusted to which the face key point belongs; and
    the determining the plurality of regions to be adjusted of the face part according to the face key points comprises:
    obtaining face key point coordinates and face key point feature values of the face key points;
    for each face attribute, determining the face key points having the face attribute according to the face key point feature values and a preset mapping relationship between face key point feature values and face attributes; and
    determining, according to the coordinates of the face key points having the face attribute, the region to be adjusted corresponding to the face attribute.
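The preset mapping from key point feature values to face attributes in claim 3 might be realized as a simple lookup followed by a group-by; the concrete feature values and attribute names below are invented for illustration:

```python
# Hypothetical preset mapping: feature value -> face attribute
# (i.e. the region to be adjusted the key point belongs to).
FEATURE_TO_ATTRIBUTE = {0: "eyebrow", 1: "eye", 2: "nose", 3: "mouth", 4: "contour"}

def group_by_attribute(keypoints):
    """keypoints: list of (x, y, feature_value) detected in the face part.
    Returns face attribute -> list of (x, y) key points having that attribute."""
    groups = {}
    for x, y, fv in keypoints:
        attr = FEATURE_TO_ATTRIBUTE[fv]
        groups.setdefault(attr, []).append((x, y))
    return groups
```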
  4. The method according to claim 3, characterized in that the determining, according to the coordinates of the face key points having the face attribute, the region to be adjusted corresponding to the face attribute comprises:
    determining the region enclosed by the face key points having the face attribute as the region to be adjusted corresponding to the face attribute; or
    calculating a plurality of preset anchor points according to the coordinates of the face key points having the face attribute, and determining the region enclosed by the face key points having the face attribute and the plurality of preset anchor points as the region to be adjusted corresponding to the face attribute.
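Once a region is the polygon enclosed by ordered key points (optionally padded with preset anchor points), deciding whether a selected point falls inside it can use a standard ray-casting test. This sketch assumes the enclosing points are already given in boundary order; it is one possible realization, not the claimed one:

```python
def point_in_polygon(point, polygon):
    """Ray-casting point-in-polygon test.

    point: (x, y) to test, e.g. the position of a user's tap.
    polygon: the enclosing key points (and any preset anchor points),
             listed in boundary order.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        # Edge crosses the horizontal line through `point`?
        if (y0 > y) != (y1 > y):
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < x_cross:
                inside = not inside
    return inside
```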
  5. The method according to claim 3, characterized in that the determining, according to the coordinates of the face key points having the face attribute, the region to be adjusted corresponding to the face attribute comprises:
    taking the face key points having the face attribute as anchor points corresponding to the face attribute; or calculating a plurality of preset anchor points according to the coordinates of the face key points having the face attribute, and taking the face key points having the face attribute and the plurality of preset anchor points as anchor points corresponding to the face attribute, wherein the preset anchor points include the face attribute, and the preset anchor points have the same face attribute as the face key points; and
    determining, according to the anchor points of the face attribute, the region to be adjusted corresponding to the face attribute.
  6. The method according to claim 4 or 5, characterized in that the calculating a plurality of preset anchor points according to the coordinates of the face key points having the face attribute comprises:
    obtaining a plurality of standard face key points on a standard face image, and connecting the plurality of standard face key points end to end in a set order, so that there is a connecting line between every two connected standard face key points;
    setting a standard anchor point on at least one of the connecting lines;
    calculating, according to the coordinates of the standard face key points at both ends of the connecting line on which each standard anchor point is located and an interpolation algorithm, an expression for the coordinates of the standard anchor point based on the standard face key points at both ends of the connecting line; and
    calculating and obtaining the plurality of preset anchor points according to the expression for the standard anchor point coordinates and the coordinates of the face key points.
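For a simple linear interpolation (one possibility covered by the claim's "interpolation algorithm"; the linear form and function names are assumptions of this sketch), the expression of claim 6 reduces to a single parameter t per standard anchor point, recovered once on the standard face image and re-applied to the actual key points:

```python
def anchor_expression(p_std, q_std, a_std):
    """Recover t such that a_std = p_std + t * (q_std - p_std), where p_std
    and q_std are standard face key points at the ends of a connecting line
    and a_std is a standard anchor point set on that line."""
    (px, py), (qx, qy) = p_std, q_std
    ax, ay = a_std
    # Use the axis with the larger span to avoid dividing by zero.
    if abs(qx - px) >= abs(qy - py):
        return (ax - px) / (qx - px)
    return (ay - py) / (qy - py)

def preset_anchor(p, q, t):
    """Apply the stored expression to two connected actual face key points."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
```

With t fixed from the standard image, each pair of connected key points on a real face yields one preset anchor point, as claim 7 describes.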
  7. The method according to claim 6, characterized in that the calculating and obtaining the plurality of preset anchor points according to the expression for the standard anchor point coordinates and the coordinates of the face key points comprises:
    connecting the face key points having the face attribute end to end in sequence according to the set order; and
    calculating one preset anchor point from the coordinates of every two connected face key points according to the expression for the standard anchor point coordinates.
  8. The method according to any one of claims 2-7, characterized in that the detecting and obtaining a plurality of face key points in the face part by using a face key point detection model comprises:
    correcting the orientation of the face part to a preset direction to obtain a corrected face image; and
    detecting and obtaining a plurality of the face key points in the corrected face image by using the face key point detection model.
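One way to realize the orientation correction of claim 8 is to rotate the face about a reference point until a chosen reference line is horizontal. This sketch rotates key-point coordinates rather than image pixels, and its use of the two eye centers as the reference is an assumption, not part of the claim:

```python
import math

def correct_orientation(points, eye_left, eye_right):
    """Rotate all face points about the midpoint between the eyes so the
    inter-eye line becomes horizontal (one possible 'preset direction')."""
    cx = (eye_left[0] + eye_right[0]) / 2
    cy = (eye_left[1] + eye_right[1]) / 2
    angle = math.atan2(eye_right[1] - eye_left[1], eye_right[0] - eye_left[0])
    c, s = math.cos(-angle), math.sin(-angle)
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return out
```

After correction, the two eye points share (up to floating-point error) the same y coordinate, so the detection model always sees upright faces.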
  9. The method according to any one of claims 1-8, characterized in that, after the identifying and obtaining a plurality of regions to be adjusted in the face part of the image to be processed, the method further comprises:
    displaying each of the regions to be adjusted on the image to be processed.
  10. An interface interaction device, characterized in that the device comprises:
    a recognition module, configured to identify and obtain a plurality of regions to be adjusted in a face part of an image to be processed;
    a receiving module, configured to receive an operation instruction from a user, the operation instruction being used to select a target region in the image to be processed; and
    a first display module, configured to determine, according to position information of the target region, the region to be adjusted to which the target region belongs, and to display an adjustment interface corresponding to the region to be adjusted to which the target region belongs, wherein each region to be adjusted corresponds to one adjustment interface, and the adjustment interface includes function options corresponding to the region to be adjusted.
  11. The device according to claim 10, characterized in that the recognition module is specifically configured to:
    identify the face part in the image to be processed;
    detect the face part by using a face key point detection model to obtain a plurality of face key points in the face part; and
    determine the plurality of regions to be adjusted of the face part according to the face key points.
  12. The device according to claim 11, characterized in that each face key point has a face attribute, the face attribute indicating the region to be adjusted to which the face key point belongs; and the recognition module is specifically configured to:
    obtain face key point coordinates and face key point feature values of the face key points;
    for each face attribute, determine the face key points having the face attribute according to the face key point feature values and a preset mapping relationship between face key point feature values and face attributes; and
    determine, according to the coordinates of the face key points having the face attribute, the region to be adjusted corresponding to the face attribute.
  13. The device according to claim 12, characterized in that the specific manner in which the recognition module determines, according to the coordinates of the face key points having the face attribute, the region to be adjusted corresponding to the face attribute is:
    taking the face key points having the face attribute as anchor points corresponding to the face attribute; or calculating a plurality of preset anchor points according to the face key points having the face attribute, and taking the face key points having the face attribute and the plurality of preset anchor points as anchor points corresponding to the face attribute, wherein the preset anchor points include the face attribute, and the preset anchor points have the same face attribute as the face key points; and
    generating, according to the anchor points of the face attribute, the region to be adjusted corresponding to the face attribute.
  14. The device according to claim 12 or 13, characterized by further comprising a calculation module;
    the calculation module is configured to: obtain a plurality of standard face key points on a standard face image; set a plurality of standard anchor points on the connecting lines between the plurality of standard face key points; calculate, according to the coordinates of the standard face key points at both ends of the connecting line on which each standard anchor point is located and an interpolation algorithm, an expression for the coordinates of the standard anchor point based on the standard face key points at both ends of the connecting line; and calculate and obtain a plurality of preset anchor points according to the expression for the standard anchor point coordinates and the coordinates of the face key points.
  15. The device according to any one of claims 11-14, characterized in that the recognition module is specifically configured to:
    correct the orientation of the face part to a preset direction to obtain a corrected face image; and
    detect and obtain a plurality of the face key points in the corrected face image by using the face key point detection model.
  16. The device according to any one of claims 10-15, characterized by further comprising a second display module configured to display each of the regions to be adjusted on the image to be processed.
PCT/CN2019/104315 2019-05-07 2019-09-04 Interface interaction method and device WO2020224136A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910375059.1A CN110084219B (en) 2019-05-07 2019-05-07 Interface interaction method and device
CN201910375059.1 2019-05-07

Publications (1)

Publication Number Publication Date
WO2020224136A1

Family

ID=67418999

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/104315 WO2020224136A1 (en) 2019-05-07 2019-09-04 Interface interaction method and device

Country Status (2)

Country Link
CN (1) CN110084219B (en)
WO (1) WO2020224136A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613357A (en) * 2020-12-08 2021-04-06 深圳数联天下智能科技有限公司 Face measurement method, face measurement device, electronic equipment and medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084219B (en) * 2019-05-07 2022-06-24 厦门美图之家科技有限公司 Interface interaction method and device
CN114185628B (en) * 2021-11-19 2024-04-12 北京奇艺世纪科技有限公司 Picture adjustment method, device and equipment of iOS (integrated operation system) and computer readable medium
CN114546211A (en) * 2022-02-21 2022-05-27 深圳硬盒交互设计科技有限公司 Image processing method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140023231A1 (en) * 2012-07-19 2014-01-23 Canon Kabushiki Kaisha Image processing device, control method, and storage medium for performing color conversion
CN105825486A (en) * 2016-04-05 2016-08-03 北京小米移动软件有限公司 Beautifying processing method and apparatus
CN108021308A (en) * 2016-10-28 2018-05-11 中兴通讯股份有限公司 Image processing method, device and terminal
CN108550117A (en) * 2018-03-20 2018-09-18 维沃移动通信有限公司 A kind of image processing method, device and terminal device
CN109064388A (en) * 2018-07-27 2018-12-21 北京微播视界科技有限公司 Facial image effect generation method, device and electronic equipment
CN110084219A (en) * 2019-05-07 2019-08-02 厦门美图之家科技有限公司 Interface alternation method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101130817B1 (en) * 2011-09-27 2012-04-16 (주)올라웍스 Face recognition method, apparatus, and computer-readable recording medium for executing the method
CN105787878B (en) * 2016-02-25 2018-12-28 杭州格像科技有限公司 A kind of U.S. face processing method and processing device
CN109559288A (en) * 2018-11-30 2019-04-02 深圳市脸萌科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN109614951A (en) * 2018-12-27 2019-04-12 广东金杭科技有限公司 Portrait compares task distribution processor algorithm

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613357A (en) * 2020-12-08 2021-04-06 深圳数联天下智能科技有限公司 Face measurement method, face measurement device, electronic equipment and medium
CN112613357B (en) * 2020-12-08 2024-04-09 深圳数联天下智能科技有限公司 Face measurement method, device, electronic equipment and medium

Also Published As

Publication number Publication date
CN110084219B (en) 2022-06-24
CN110084219A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
WO2020224136A1 (en) Interface interaction method and device
US11250241B2 (en) Face image processing methods and apparatuses, and electronic devices
US10990803B2 (en) Key point positioning method, terminal, and computer storage medium
WO2020199906A1 (en) Facial keypoint detection method, apparatus and device, and storage medium
WO2020207191A1 (en) Method and apparatus for determining occluded area of virtual object, and terminal device
US10599914B2 (en) Method and apparatus for human face image processing
WO2017193906A1 (en) Image processing method and processing system
WO2022012085A1 (en) Face image processing method and apparatus, storage medium, and electronic device
EP3454250A1 (en) Facial image processing method and apparatus and storage medium
WO2020001013A1 (en) Image processing method and device, computer readable storage medium, and terminal
EP3992919B1 (en) Three-dimensional facial model generation method and apparatus, device, and medium
US20160328825A1 (en) Portrait deformation method and apparatus
US10311295B2 (en) Heuristic finger detection method based on depth image
CN106778453B (en) Method and device for detecting glasses wearing in face image
CN105096353B (en) Image processing method and device
WO2021083125A1 (en) Call control method and related product
US11238569B2 (en) Image processing method and apparatus, image device, and storage medium
JP6694465B2 (en) Display method and electronic device for recommending eyebrow shape
WO2021062998A1 (en) Image processing method, apparatus and electronic device
WO2019237747A1 (en) Image cropping method and apparatus, and electronic device and computer-readable storage medium
WO2019095117A1 (en) Facial image detection method and terminal device
WO2020244160A1 (en) Terminal device control method and apparatus, computer device, and readable storage medium
WO2020223940A1 (en) Posture prediction method, computer device and storage medium
CN106875332A (en) A kind of image processing method and terminal
WO2020124442A1 (en) Pushing method and related product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19927808

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19927808

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/05/2022)
