CN114554097A - Display method, display device, electronic apparatus, and readable storage medium - Google Patents


Publication number
CN114554097A
Authority
CN
China
Prior art keywords
preview image
input
target
display
half area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210206069.4A
Other languages
Chinese (zh)
Inventor
唐文丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202210206069.4A priority Critical patent/CN114554097A/en
Publication of CN114554097A publication Critical patent/CN114554097A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/633 Control by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/635 Region indicators; Field of view indicators

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a display method, a display device, an electronic device, and a readable storage medium, belonging to the technical field of display. The display method comprises the following steps: collecting and displaying a preview image; receiving a first input of a user to a first half area of the preview image, wherein the preview image is divided into the first half area and a second half area by a symmetry axis, and the symmetry axis is determined according to at least one pair of feature points at symmetrical positions in the preview image; and in response to the first input, displaying a target contour in the second half area of the preview image, wherein the target contour is obtained by mirror-flipping the contour of a target feature in the first half area along the symmetry axis.

Description

Display method, display device, electronic apparatus, and readable storage medium
Technical Field
The application belongs to the technical field of display, and particularly relates to a display method, a display device, an electronic device and a readable storage medium.
Background
At present, with the improvement of the imaging capability of mobile terminals, a user can preview and display images with the camera anytime and anywhere to check and tidy his or her appearance. However, the conventional image preview function generally displays only the whole picture; limited by the display size of the mobile terminal, the picture details are not clear enough, which is inconvenient to use.
Disclosure of Invention
Embodiments of the present application aim to provide a display method, a display device, an electronic device, and a readable storage medium, which can solve the problem that the existing image preview function generally displays only the whole picture and, limited by the display size of the mobile terminal, shows picture details that are not clear enough and is inconvenient to use.
In a first aspect, an embodiment of the present application provides a display method, where the method includes:
collecting and displaying a preview image;
receiving a first input of a user to a first half area of the preview image, wherein the preview image is divided into the first half area and a second half area by a symmetry axis, and the symmetry axis is determined according to at least one pair of feature points at symmetrical positions in the preview image;
and in response to the first input, displaying a target contour in the second half area of the preview image, wherein the target contour is obtained by mirror-flipping the contour of a target feature in the first half area along the symmetry axis.
In a second aspect, an embodiment of the present application provides a display device, including:
a first display module, configured to collect and display a preview image;
a first receiving module, configured to receive a first input of a user to a first half area of the preview image, wherein the preview image is divided into the first half area and a second half area by a symmetry axis, and the symmetry axis is determined according to at least one pair of feature points at symmetrical positions in the preview image;
and a second display module, configured to display, in response to the first input, a target contour in the second half area of the preview image, wherein the target contour is obtained by mirror-flipping the contour of a target feature in the first half area along the symmetry axis.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiments of the present application, a preview image is continuously collected and displayed; a makeup reference area is selected through a first input to the first half area of the preview image, and a target contour is displayed in the second half area of the preview image. That is, the contour of the target feature in the first half area is displayed symmetrically in the second half area, so that makeup can conveniently be applied according to the target contour. This meets the user's need for a symmetrical makeup effect and improves the user experience.
Drawings
Fig. 1 is a schematic flowchart of a display method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a preview image provided in an embodiment of the present application;
FIG. 3 is a schematic view of an axis of symmetry provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a first input provided by an embodiment of the present application;
FIG. 5 is a schematic view of a target profile provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of a display device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be appreciated that data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of objects; for example, a first object can be one or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates that the preceding and succeeding objects are in an "or" relationship.
The display method, the display apparatus, the electronic device, and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, fig. 1 is a schematic flow chart of a display method according to an embodiment of the present disclosure. As shown in fig. 1, an embodiment of an aspect of the present application provides a display method applied to an electronic device, where the display method includes the following steps:
step 101: and collecting and displaying the preview image.
Optionally, before step 101, the display method further includes: receiving an opening input of a user; that is, step 101 is performed in response to the opening input. The opening input may be an input for opening a cosmetic mirror function of the electronic device, for example a click input, a long-press input, or a drag input on a cosmetic mirror control of the camera interface; the opening input may also be an input for opening a cosmetic mirror application. In response to the opening input, the electronic device opens the cosmetic mirror function or the cosmetic mirror application, and collects and displays a preview image on its screen through a default camera, which is usually the front camera of the electronic device. The preview image is dynamically updated in real time.
Referring to fig. 2, fig. 2 is a schematic view of a preview image according to an embodiment of the present disclosure. As shown in fig. 2, the electronic device displays the preview image captured by the camera on its screen in full screen.
Step 102: receiving a first input of a user to a first half area in the preview image, wherein the preview image is divided into the first half area and a second half area by a symmetry axis, and the symmetry axis is determined according to at least one pair of feature points with symmetrical positions in the preview image;
step 103: and responding to the first input, displaying a target contour in a second half area of the preview image, wherein the target contour is obtained by mirror-turning the contour of a target characteristic part in the first half area along the symmetry axis.
In step 102, optionally, the first input may be an input of clicking or long-pressing the first half area of the preview image displayed on the screen of the electronic device, that is, an input on the screen itself. The first input may also be a gesture input, for example a gesture in which a user's finger points at or touches the user's real face; the corresponding finger image then also appears in the preview image, in a mirror-image relationship. In this way, the user can select the first half area of the preview image through the first input and make up the second half area with the first half area as the makeup reference area, that is, with reference to the first half area.
In some embodiments of the present application, the symmetry axis may be determined according to at least one pair of feature points at symmetrical positions in the preview image. For example, the preview image includes a face image and the feature points include face feature points: face feature points may be extracted from the collected preview image, at least one pair of face feature points at symmetrical positions may then be determined, and finally the symmetry axis of the preview image may be determined from the at least one pair of symmetrical face feature points, for example by straight-line fitting. The symmetry axis divides the preview image into a first half area and a second half area arranged side by side.
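The straight-line fitting mentioned above can be sketched roughly in Python. This is an illustrative assumption, not code from the patent; `fit_symmetry_axis` is a hypothetical helper. The idea: the midpoint of each symmetric pair lies on the symmetry axis, so the axis can be fitted through the midpoints.

```python
import numpy as np

def fit_symmetry_axis(point_pairs):
    """Fit a symmetry axis from pairs of symmetric feature points.

    Each pair (p, q) contributes its midpoint; the axis is the line fitted
    through all midpoints (principal direction via SVD). With a single
    pair, the perpendicular bisector of the pair is used instead.
    Returns (point_on_axis, unit_direction).
    """
    pairs = np.asarray(point_pairs, dtype=float)  # shape (N, 2, 2)
    midpoints = pairs.mean(axis=1)                # (N, 2) points on the axis
    centroid = midpoints.mean(axis=0)
    if len(midpoints) == 1:
        p, q = pairs[0]
        d = q - p
        direction = np.array([-d[1], d[0]])       # perpendicular to the pair
    else:
        # Largest-variance direction of the midpoints is the axis direction.
        _, _, vt = np.linalg.svd(midpoints - centroid)
        direction = vt[0]
    return centroid, direction / np.linalg.norm(direction)

# Example: outer eye corners and mouth corners of a nearly frontal face.
pairs = [((100, 120), (200, 120)),
         ((120, 180), (180, 180))]
point, direction = fit_symmetry_axis(pairs)
print(point, direction)  # axis near x = 150, roughly vertical
```

A production implementation would weight the landmark pairs by detection confidence; the equal-weight fit above is the simplest form of the idea.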
Referring to fig. 3, fig. 3 is a schematic view of a symmetry axis according to an embodiment of the present disclosure. As shown in fig. 3, the symmetry axis divides the preview image into left and right half-regions, the left half-region corresponding to the first half-region and the right half-region corresponding to the second half-region. It can be seen that the preview image is not in fact completely symmetrical, e.g. the right eyebrow is slightly lower than the left eyebrow.
Referring to fig. 4, fig. 4 is a schematic diagram of a first input provided in an embodiment of the present application. As shown in fig. 4, when the user's finger 301 points at the left half area of the preview image, the electronic device may detect the finger 301 by an object recognition or gesture recognition technique and then determine whether the area pointed at or touched by the finger 301 is the first half area or the second half area of the preview image. In fig. 4, the user selects the first half area (the left half area) as the makeup reference area to assist in making up the second half area (the right half area).
In step 103, upon receiving the first input, the electronic device displays, in response to the first input, a target contour in the second half area of the preview image. The target contour is obtained by mirror-flipping the contour of the target feature located in the first half area along the symmetry axis; that is, the position of the target contour in the second half area is symmetric, about the symmetry axis, to the position of the target feature in the first half area. Since the face image in the preview image is usually not strictly symmetrical, mirroring the target feature of the first half area into the second half area lets the user conveniently view the positional difference between the corresponding feature in the second half area and the target contour, making it easier to obtain a symmetrical makeup effect.
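The mirror-flip along the symmetry axis is plain 2-D reflection of contour points across a line. A minimal illustrative sketch (not from the patent; `reflect_across_axis` is a hypothetical name):

```python
import numpy as np

def reflect_across_axis(points, axis_point, axis_dir):
    """Mirror contour points across the line through axis_point with
    (unit) direction axis_dir; the component along the axis is kept and
    the perpendicular component is negated."""
    p = np.asarray(points, dtype=float)
    a = np.asarray(axis_point, dtype=float)
    d = np.asarray(axis_dir, dtype=float)
    d = d / np.linalg.norm(d)
    v = p - a                          # vectors from axis point to contour points
    proj = (v @ d)[:, None] * d        # components along the axis
    return a + 2 * proj - v            # reflected points

# Mirror a left-eyebrow contour across a vertical axis at x = 150:
left_brow = [(110, 100), (125, 95), (140, 98)]
target = reflect_across_axis(left_brow, (150, 0), (0, 1))
print(target)  # mirrored x-coordinates 190, 175, 160; y unchanged
```

The reflected polyline is what would be drawn in the second half area as the target contour.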
In some embodiments of the present application, the target feature may be detected and identified by facial-feature recognition, object recognition, and similar techniques. The target feature may be one of the facial features such as the eyes, or any contoured part of the face, such as the eyebrows, the cheekbones, freckles, or acne marks.
Referring to fig. 5, fig. 5 is a schematic diagram of a target contour provided in an embodiment of the present application. As shown in fig. 5, after the first half area is selected as the makeup reference area, the contour of the target feature 302 in the first half area is displayed symmetrically in the second half area as the target contour 303, and the second half area serves as the area to be made up, so that the user can use the target contour 303 as a guide when applying makeup. In fig. 5, for example, the target contour is an eyebrow.
Therefore, in the embodiments of the present application, a preview image is continuously collected and displayed; a makeup reference area is selected through a first input to the first half area of the preview image, and a target contour is displayed in the second half area, that is, the contour of the target feature in the first half area is displayed symmetrically in the second half area. Makeup can thus conveniently be applied according to the target contour, meeting the user's need for a symmetrical makeup effect and improving the user experience.
In some embodiments of the present application, the displaying the target contour in the second half of the preview image comprises:
and displaying the target contour in the second half area of the preview image according to the preset transparency.
In this embodiment, the second half area usually displays the feature corresponding to the target feature of the first half area. So that the user can view the details of the target contour without obscuring that corresponding feature, the target contour may be displayed at a preset transparency, that is, displayed in a faded manner.
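Displaying the contour at a preset transparency amounts to alpha-blending a rendered contour layer over the preview frame. A hedged sketch, assuming 8-bit RGB frames and a boolean mask of contour pixels (function and parameter names are hypothetical):

```python
import numpy as np

def overlay_with_transparency(frame, contour_layer, mask, alpha=0.4):
    """Blend a rendered contour layer onto the preview frame at a preset
    transparency; only pixels inside the contour mask are affected."""
    out = frame.astype(float)
    m = mask.astype(bool)
    out[m] = (1 - alpha) * out[m] + alpha * contour_layer[m].astype(float)
    return out.astype(np.uint8)

frame = np.full((4, 4, 3), 200, dtype=np.uint8)   # light preview pixels
layer = np.zeros((4, 4, 3), dtype=np.uint8)       # black contour colour
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True                                 # a single contour pixel
blended = overlay_with_transparency(frame, layer, mask, alpha=0.4)
print(blended[1, 1], blended[0, 0])  # contour pixel darkened, others unchanged
```

`alpha` here plays the role of the preset transparency: closer to 0 fades the contour further into the underlying preview.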
In other embodiments of the application, after the displaying the target contour in the second half area of the preview image, the method further includes:
receiving a second input of the user to a first target contour in the second half area;
and in response to the second input, adjusting a display position or a target parameter of the first target contour, the target parameter comprising at least one of display size, display shape, contrast, and transparency.
In this embodiment, if the display position or a target parameter of any target contour in the second half area does not meet the user's requirement, the user may adjust it through the second input. For example, a second input of the user to a first target contour in the second half area is received. The second input may be an input of clicking or long-pressing the first target contour in the preview image displayed on the screen of the electronic device, that is, an input on the screen itself. The second input may also be a gesture input, for example a gesture in which a user's finger points at or touches the user's real face; the corresponding finger image then also appears at the position of the first target contour in the preview image, in a mirror-image relationship. In response to the second input, the electronic device adjusts the display position or a target parameter of the first target contour in the second half area, that is, moves the first target contour to the position specified by the user, or adjusts its display size, display shape, contrast, transparency, and the like, so that the user can make up according to the adjusted first target contour.
In some embodiments of the present application, after displaying the target contour in the second half of the preview image, the method further includes:
receiving a third input of the user to the first position in the preview image;
and in response to the third input, displaying an image of a first area in an enlarged manner, wherein the first area is determined according to the first position.
In this embodiment, optionally, the third input may be an input of clicking or long-pressing the first position in the preview image displayed on the screen of the electronic device, that is, an input on the screen itself. The third input may also be a gesture input, for example a gesture in which a user's finger points at or touches the user's real face; the corresponding finger image then also appears in the preview image, in a mirror-image relationship. The user can therefore select, through the third input, any area of the preview image to be viewed enlarged, so as to clearly view the details of any position of the face, improving the user experience in scenarios such as applying makeup.
Optionally, when the image of the first area is displayed in an enlarged manner, a window may pop up and the image of the first area may be displayed enlarged within the window.
In some embodiments of the present application, when the first area is determined according to the first position, an area within a preset range centered on the first position may be determined as the first area. The preset range may be, for example, a circular or rectangular area centered on the first position, and parameters such as its size may be set according to actual requirements.
Since a fixed circular or rectangular frame usually also selects regions the user does not want to focus on, enlarging such a frame cannot effectively highlight the part the user wants to view. Therefore, in other embodiments of the present application, when the first area is determined according to the first position, it may first be detected whether a feature exists within a preset area range centered on the first position, the size of which may be set in advance. If a feature is detected within that range, the first area may be determined according to the contour of the feature: for example, the area enclosed by the contour boundary of the feature is used as the first area, or the area enclosed by the smallest circle or smallest rectangle that frames the contour of the feature is used as the first area. The size of the first area is thus determined from the contour of the specific feature the user wants to view enlarged, which avoids enlarging parts the user is not focused on and highlights the feature better.
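The two strategies for choosing the first area, contour bounding box when a feature is found versus a fixed square around the tap, could be combined as in this illustrative sketch (a hypothetical helper, not the patent's implementation):

```python
def first_region_from_contour(contour, fallback_center, fallback_half=40):
    """Pick the enlargement region as (left, top, right, bottom).

    If a feature contour was detected near the tapped position, use the
    smallest rectangle framing the contour; otherwise fall back to a
    fixed square centred on the tap.
    """
    if contour:
        xs = [p[0] for p in contour]
        ys = [p[1] for p in contour]
        return min(xs), min(ys), max(xs), max(ys)
    cx, cy = fallback_center
    return cx - fallback_half, cy - fallback_half, cx + fallback_half, cy + fallback_half

# Feature found: the region hugs the eyebrow contour instead of a blind square.
region = first_region_from_contour([(110, 90), (145, 84), (150, 95)], (130, 90))
print(region)    # (110, 84, 150, 95)

# No feature detected: fixed 80x80 square around the tapped position.
fallback = first_region_from_contour([], (130, 90))
print(fallback)  # (90, 50, 170, 130)
```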
In still other embodiments of the present application, the displaying the image of the first area in an enlarged manner includes:
displaying the image of the first area, at a preset magnification, in a first window and in a second window respectively;
wherein the display picture in the first window is fixed as the image of the first area in a target frame, the display picture in the second window is updated in real time, neither the first window nor the second window overlaps the first area, and the target frame is determined according to the time at which the third input is received.
In this embodiment, optionally, after the user selects the first area, two windows may be used for a comparison display, so that the user can compare the state of the first area before and after makeup. When the image of the first area is displayed in the first window at the preset magnification, the picture in the first window is fixed, that is, fixed as the image of the first area in the target frame; the target frame may be determined according to the time at which the electronic device receives the third input, for example the preview frame corresponding to that time. When the image of the first area is displayed in the second window at the preset magnification, the picture in the second window is dynamically updated in real time, that is, the camera collects the image of the first area in real time and displays it enlarged in the second window; if the user applies makeup in the first area, the corresponding makeup effect is visible in the second window. The user can therefore conveniently compare the states of the first area before and after makeup to decide whether the effect is satisfactory, which improves the experience of the cosmetic mirror function.
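The frozen "before" window and the live "after" window described above can be sketched as follows. This is an illustrative assumption with frames simplified to nested lists; the class and method names are hypothetical:

```python
class ComparisonWindows:
    """Freeze the 'before' crop once, keep updating the 'after' crop.

    first_window holds the crop of the frame current at the moment the
    third input was received (the target frame); second_window is
    refreshed with every subsequent camera frame.
    """
    def __init__(self, region):
        self.region = region            # (left, top, right, bottom)
        self.first_window = None        # frozen reference picture
        self.second_window = None       # live picture

    def _crop(self, frame):
        left, top, right, bottom = self.region
        return [row[left:right] for row in frame[top:bottom]]

    def on_new_frame(self, frame):
        crop = self._crop(frame)
        if self.first_window is None:   # first frame after the third input
            self.first_window = crop
        self.second_window = crop       # always track the latest frame

frame_a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # "before makeup" frame
frame_b = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]   # later, "after makeup" frame
win = ComparisonWindows(region=(0, 0, 2, 2))
win.on_new_frame(frame_a)   # target frame: freezes the 'before' picture
win.on_new_frame(frame_b)   # later frames only refresh the live window
print(win.first_window)     # [[1, 2], [4, 5]]
print(win.second_window)    # [[9, 8], [6, 5]]
```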
In other embodiments of the present application, the preset magnification refers to the magnification at which the image of the first area is displayed; it may be preset by the user or take a default value.
In some embodiments of the present application, after the enlarging and displaying the image of the first area, the method further includes:
in a case that the gesture of a user's finger is detected to meet a first preset gesture condition, determining the image of the first area corresponding to the gesture;
and adjusting the magnification of the image of the first area corresponding to the gesture to a target magnification, wherein the target magnification is determined according to the gesture.
In this embodiment, the user may have enlarged the images of one or more first areas through the third input. The user then does not need to touch the screen of the electronic device: the magnification of the image of a first area may conveniently be adjusted through a gesture, so that the user can better view the details of the corresponding area. If the electronic device currently displays the image of only one first area, that image is the one corresponding to the gesture. If the electronic device currently displays the images of at least two first areas, the image in which the user's finger appears may be determined as the one corresponding to the gesture; optionally, if the user's fingers appear in several of the images, the image containing the most fingers may be determined as the one corresponding to the gesture, which solves the problem that the image the user wants to zoom cannot be recognized accurately. Likewise, when the images of the first areas are enlarged in windows, for example in multiple windows whose contents correspond to different first areas, the target window corresponding to the gesture, that is, the object whose magnification is to be adjusted, can be determined in the same manner.
After the image of the first area corresponding to the gesture of the user's finger is determined, its magnification may be adjusted to a target magnification determined according to the gesture. For example, if the gesture for adjusting the magnification is defined by the included angle formed by two of the user's fingers relative to a preset angle threshold, the included angle may be set to be positively correlated with the target magnification: the smaller the included angle formed by the two fingers, the smaller the target magnification; the larger the included angle, the larger the target magnification.
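The positively correlated mapping from finger angle to target magnification might be implemented as a simple clamped linear map; the range values below are assumptions for illustration, not taken from the patent:

```python
def angle_to_magnification(angle_deg, min_mag=1.0, max_mag=8.0, max_angle=90.0):
    """Map the included angle between two fingers to a zoom factor:
    a wider angle gives a larger magnification (positive correlation),
    clamped to [min_mag, max_mag]."""
    t = max(0.0, min(angle_deg, max_angle)) / max_angle
    return min_mag + t * (max_mag - min_mag)

print(angle_to_magnification(0))    # 1.0 (fingers together: minimum zoom)
print(angle_to_magnification(45))   # 4.5 (halfway)
print(angle_to_magnification(90))   # 8.0 (fully spread: maximum zoom)
```

The same shape of mapping would serve the display-size gesture described below, with the target size replacing the target magnification.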
In other embodiments of the present application, after the displaying the image of the first area in an enlarged manner, the method further includes:
in a case that the gesture of a user's finger is detected to meet a second preset gesture condition, determining the image of the first area corresponding to the gesture;
and adjusting the display size of the image of the first area corresponding to the gesture to a target size, wherein the target size is determined according to the gesture.
In this embodiment, the user may enlarge and display the image of one or more first areas through the third input, and may then adjust the display size of the enlarged image through a mid-air gesture, without having to touch the screen of the electronic device, so that the details of the corresponding area can be viewed more easily. For example, a second preset gesture condition for adjusting the display size of the image of the first area may be preset: the gesture for increasing the display size may be defined as the included angle between the two non-adjacent fingers among three extended fingers exceeding a preset angle threshold, and the gesture for decreasing the display size as that angle falling below the threshold; alternatively, a single finger pointing in a third preset orientation may increase the display size, and a single finger pointing in a fourth preset orientation may decrease it, and so on. When the gesture of the user's fingers is detected to satisfy the second preset gesture condition, the image of the first area corresponding to the gesture is determined. If the electronic device currently displays only one image of a first area, that image is the one corresponding to the gesture. If at least two images of first areas are displayed, the image in which the user's fingers appear may be determined as the image corresponding to the gesture; optionally, if the user's fingers appear in two or more of those images, the image containing the largest number of the user's fingers may be selected, which resolves the ambiguity of which first-area image the user intends to enlarge. Likewise, when the image of the target area is enlarged and displayed in a window and multiple windows show images of different target areas, the target window corresponding to the gesture, i.e., the object whose display size is to be adjusted, can be determined in the same manner.
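The ambiguity rule above — when the user's fingers appear in several enlarged windows, the window containing the most fingertips wins — can be sketched as follows. This is a minimal illustration; the window ids and fingertip counts are hypothetical inputs, not part of the original embodiment.

```python
def select_gesture_window(finger_counts):
    """Pick the enlarged-region window a mid-air gesture refers to.

    finger_counts: dict mapping window id -> number of the user's
    fingertips detected inside that window's displayed image.
    Returns the id of the targeted window, or None if the gesture
    overlaps no window.
    """
    if len(finger_counts) == 1:            # only one window displayed:
        return next(iter(finger_counts))   # it is the target by default
    # Keep only windows that actually contain at least one fingertip.
    candidates = {w: n for w, n in finger_counts.items() if n > 0}
    if not candidates:
        return None
    # Ambiguity rule from the embodiment: the window containing the
    # largest number of the user's fingers is the target.
    return max(candidates, key=candidates.get)
```

A real device would obtain the fingertip counts from hand tracking on the camera feed; here they are supplied directly.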
After the image of the first area corresponding to the gesture of the user's fingers is determined, its display size may be adjusted to a target size, where the target size is determined according to the gesture. For example, when the gesture for increasing the display size is defined by the included angle between the two non-adjacent fingers among three extended fingers exceeding a preset angle threshold, that included angle may be set to be positively correlated with the target size: the smaller the angle, the smaller the target size, and the larger the angle, the larger the target size.
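As a minimal sketch of the positive correlation described above, the included angle can be mapped linearly onto a display size. The threshold, base size, and maximum size below are illustrative assumptions, not values taken from the embodiment.

```python
def target_size_from_angle(angle_deg, angle_threshold=30.0,
                           base_size=200, max_size=800):
    """Map the included angle between the two non-adjacent fingers
    to a target display size (in pixels, illustratively).

    Angles at or below `angle_threshold` do not satisfy the second
    preset gesture condition, so the base size is kept. Above it, the
    target size grows linearly with the angle (positive correlation),
    clamped to `max_size`.
    """
    if angle_deg <= angle_threshold:
        return base_size                   # condition not met
    span = 180.0 - angle_threshold         # widest physically sensible angle
    scale = (angle_deg - angle_threshold) / span
    return min(max_size, int(base_size + scale * (max_size - base_size)))
```

Any monotonically increasing mapping would satisfy the stated correlation; linear is simply the easiest to reason about.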
In still other embodiments of the present application, after displaying the target area in the window according to the preset magnification, the method further includes:
recording and saving the enlarged and displayed image of the first area.
For example, when the image of the target area is enlarged and displayed in the form of a window, the display picture in the window may be recorded; the recorded window is one whose picture is dynamically updated in real time. By recording and saving the display picture in the window, a makeup video file of the first area can be obtained. Because this file preserves the user's makeup steps for the first area, the user can conveniently share it, for example by sending it to other users or uploading it to a target social platform.
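The recording behaviour — buffering each real-time refresh of the enlarged window while recording is active, then saving the result as a shareable file — might be organised as follows. This is a simplified sketch: frames are treated as opaque objects, and the returned "video file" is a stand-in summary rather than an actually encoded video.

```python
class WindowRecorder:
    """Record the live picture shown in an enlarged-region window."""

    def __init__(self):
        self.frames = []
        self.recording = False

    def start(self):
        """Begin recording the window's display picture."""
        self.recording = True

    def on_window_updated(self, frame):
        # The window updates its picture in real time; every refresh
        # that occurs while recording contributes one frame.
        if self.recording:
            self.frames.append(frame)

    def stop_and_save(self):
        """Stop recording and 'save' the makeup video file.

        A real implementation would encode the buffered frames into a
        video file (e.g. for sharing to other users or a social
        platform); here a summary dict stands in for that file.
        """
        self.recording = False
        return {"kind": "makeup_video", "frame_count": len(self.frames)}
```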
In still other embodiments of the present application, after displaying the target contour at the target position of the second half area in the preview image, the method further includes:
receiving a fourth input of the user to the second half area in the preview image;
in response to the fourth input, removing the target contour displayed in the second half.
That is, the user may autonomously cancel the display of any target contour. For example, a fourth input of the user to any target contour in the second half area may be received, such as tapping the target contour, long-pressing it, or dragging it to a preset position; the fourth input may also be a gesture input that does not contact the screen of the electronic device, and the gesture may be preset. In response to the fourth input, the electronic device removes the target contour. When removing, the display of all target contours in the second half area may be cancelled, or only that of a specific target contour, for example only the target contour determined by the fourth input.
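A minimal sketch of the two removal behaviours described above: cancelling every target contour at once, or cancelling only the contour determined by the fourth input. The contour-id mapping is a hypothetical representation of the displayed contours, not a structure defined by the embodiment.

```python
def remove_contours(displayed, target_id=None):
    """Cancel display of target contours in the second half area.

    displayed: dict mapping contour id -> contour data.
    target_id: the contour determined by the fourth input; None means
    the fourth input requests removal of all target contours.
    Returns the contours that remain displayed.
    """
    if target_id is None:
        return {}  # cancel display of every target contour
    # Cancel only the contour the fourth input determined.
    return {cid: c for cid, c in displayed.items() if cid != target_id}
```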
In summary, in the embodiments of the application, a preview image is continuously acquired and displayed; the trimming reference area is selected through the first input to the first half area of the preview image, and the target contour is then displayed in the second half area, i.e., the contour of the target feature part in the first half area is displayed symmetrically in the second half area. Makeup can thus be conveniently applied according to the target contour, the user's requirement for a symmetrical makeup effect is met, and user experience is improved.
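The core geometric step summarized above — deriving the symmetry axis from at least one pair of position-symmetric feature points and mirror-flipping a contour across it — can be sketched as follows, under the simplifying assumption of a vertical axis and contours given as lists of (x, y) points:

```python
def symmetry_axis_x(feature_pairs):
    """Estimate a vertical symmetry axis x = const from pairs of
    position-symmetric feature points (e.g. the two inner eye corners).

    The axis passes through the mean of the pair midpoints.
    """
    mids = [(x_left + x_right) / 2
            for (x_left, _), (x_right, _) in feature_pairs]
    return sum(mids) / len(mids)

def mirror_contour(contour, axis_x):
    """Mirror-flip a contour from the first half area across the
    vertical symmetry axis, yielding the target contour to display in
    the second half area."""
    return [(2 * axis_x - x, y) for (x, y) in contour]
```

A tilted face would require reflecting across an arbitrary line rather than a vertical one; the vertical case keeps the idea visible.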
For the display method provided by the embodiments of the application, the execution subject may be a display device. In the embodiments of the present application, a display device executing the display method is taken as an example to describe the display device provided by the embodiments of the present application.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a display device according to an embodiment of the present disclosure. As shown in fig. 6, another embodiment of the present application further provides a display device, where the display device 600 includes:
the first display module 601 is used for acquiring and displaying a preview image;
a first receiving module 602, configured to receive a first input of the user to a first half area in the preview image, where the preview image is divided into the first half area and a second half area by a symmetry axis, and the symmetry axis is determined according to at least one pair of feature points with symmetric positions in the preview image;
a second display module 603, configured to, in response to the first input, display a target contour in a second half of the preview image, where the target contour is obtained by mirror-flipping a contour of a target feature in the first half along the symmetry axis.
Optionally, the second display module comprises:
and the transparency unit is used for displaying the target contour in the second half area of the preview image according to the preset transparency.
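Displaying the target contour at a preset transparency amounts to alpha-blending contour pixels over the preview image. A minimal per-pixel sketch follows; the transparency convention (1 fully transparent, 0 fully opaque) and 8-bit RGB pixels are assumptions for illustration.

```python
def blend_with_transparency(base_px, contour_px, transparency):
    """Overlay one contour pixel on a preview-image pixel.

    base_px, contour_px: (R, G, B) tuples with 0-255 components.
    transparency: preset transparency in [0, 1]; 1 leaves the preview
    pixel unchanged, 0 shows the contour pixel fully opaque.
    """
    alpha = 1.0 - transparency
    # Standard linear blend, per colour channel.
    return tuple(round(alpha * c + (1 - alpha) * b)
                 for b, c in zip(base_px, contour_px))
```

In practice the blend would run over the whole contour region at once (e.g. with vectorised image operations) rather than pixel by pixel.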
Optionally, the display device further comprises:
a second receiving module, configured to receive a second input of the first target contour in the second half area from the user;
and the third display module is used for responding to the second input and adjusting the display position of the first target outline or target parameters, wherein the target parameters comprise at least one of display size, display shape, contrast and transparency.
Optionally, the display device further comprises:
the third receiving module is used for receiving a third input of the user to the first position in the preview image;
and the fourth display module is used for responding to the third input and displaying the image of the first area in an enlarged mode, wherein the first area is determined according to the first position.
In the embodiments of the application, the preview image is continuously acquired and displayed; the trimming reference area is selected through the first input to the first half area of the preview image, and the target contour is then displayed in the second half area, i.e., the contour of the target feature part in the first half area is displayed symmetrically in the second half area. Makeup can thus be conveniently applied according to the target contour, the user's requirement for a symmetrical makeup effect is met, and user experience is improved.
The display device in the embodiment of the present application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a mobile internet device (MID), an augmented reality (AR)/virtual reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and may also be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not particularly limited thereto.
The display device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The display device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 5, and is not described here again to avoid repetition.
Optionally, as shown in fig. 7, an electronic device 700 is further provided in this embodiment of the present application, including a processor 701 and a memory 702, where the memory 702 stores a program or an instruction that can be executed on the processor 701; when the program or the instruction is executed by the processor 701, the steps of the foregoing display method embodiment are implemented, and the same technical effects can be achieved, which are not described again here to avoid repetition.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 800 includes, but is not limited to: a radio frequency unit 801, a network module 802, an audio output unit 803, an input unit 804, a sensor 805, a display unit 806, a user input unit 807, an interface unit 808, a memory 809, and a processor 8010.
Those skilled in the art will appreciate that the electronic device 800 may further comprise a power supply (e.g., a battery) for supplying power to the various components; the power supply may be logically coupled to the processor 8010 via a power management system, so that charging, discharging, and power-consumption management are handled through the power management system. The electronic device structure shown in fig. 8 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described again here.
The sensor 805 is used for acquiring a preview image;
a display unit 806 for displaying the preview image;
a user input unit 807 for receiving a first input of a user to a first half area in the preview image, wherein the preview image is divided into the first half area and a second half area by a symmetry axis determined according to at least one pair of feature points that are position-symmetric in the preview image;
the display unit 806 is further configured to display, in response to the first input, a target contour in a second half of the preview image, where the target contour is obtained by mirror-flipping a contour of a target feature in the first half along the symmetry axis.
Optionally, the display unit 806 is further configured to display the target contour in the second half area of the preview image according to a preset transparency.
Optionally, a user input unit 807 for receiving a second input of the first target contour in the second half area from the user;
the display unit 806 is further configured to adjust a display position of the first target contour or a target parameter in response to the second input, where the target parameter includes at least one of a display size, a display shape, a contrast, and a transparency.
Optionally, a user input unit 807 further configured to receive a third input from the user to the first position in the preview image;
the display unit 806 is further configured to display an enlarged image of a first area in response to the third input, where the first area is determined according to the first position.
In the embodiments of the application, the preview image is continuously acquired and displayed; the trimming reference area is selected through the first input to the first half area of the preview image, and the target contour is then displayed in the second half area, i.e., the contour of the target feature part in the first half area is displayed symmetrically in the second half area. Makeup can thus be conveniently applied according to the target contour, the user's requirement for a symmetrical makeup effect is met, and user experience is improved.
It should be understood that, in the embodiment of the present application, the input unit 804 may include a graphics processing unit (GPU) 8041 and a microphone 8042; the graphics processing unit 8041 processes image data of a still picture or a video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 806 may include a display panel 8061, and the display panel 8061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 807 includes at least one of a touch panel 8071 and other input devices 8072. The touch panel 8071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 8072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 809 may be used to store software programs and various data. The memory 809 may mainly include a first storage area storing programs or instructions and a second storage area storing data, where the first storage area may store an operating system, and application programs or instructions required for at least one function (such as a sound playing function, an image playing function, etc.). Further, the memory 809 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate SDRAM (DDR SDRAM), an enhanced SDRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct Rambus RAM (DRRAM). The memory 809 in the present embodiment includes, but is not limited to, these and any other suitable types of memory.
The processor 8010 may include one or more processing units; optionally, the processor 8010 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into the processor 8010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements the processes of the display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read-only memory, a random access memory, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the display method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing display method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved, e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A display method, comprising:
collecting and displaying a preview image;
receiving a first input of a user to a first half area in the preview image, wherein the preview image is divided into the first half area and a second half area by a symmetry axis, and the symmetry axis is determined according to at least one pair of feature points with symmetrical positions in the preview image;
and responding to the first input, and displaying a target contour in a second half area of the preview image, wherein the target contour is obtained by mirror image inversion of the contour of the target characteristic part in the first half area along the symmetry axis.
2. The display method according to claim 1, wherein the displaying the target contour in the second half area of the preview image comprises:
and displaying the target contour in the second half area of the preview image according to the preset transparency.
3. The method according to claim 1, further comprising, after displaying the target contour in the second half of the preview image:
receiving a second input of the first target contour in the second half area from the user;
in response to the second input, adjusting a display position of the first target contour or a target parameter, the target parameter comprising at least one of a display size, a display shape, a contrast, a transparency.
4. The method according to claim 1, further comprising, after displaying the target contour in the second half of the preview image:
receiving a third input of a user to a first position in the preview image;
and in response to the third input, displaying an image of a first area in an enlarged manner, wherein the first area is determined according to the first position.
5. A display device, comprising:
the first display module is used for acquiring and displaying the preview image;
a first receiving module, configured to receive a first input of a user to a first half area in the preview image, wherein the preview image is divided into the first half area and a second half area by a symmetry axis, and the symmetry axis is determined according to at least one pair of feature points with symmetrical positions in the preview image;
and the second display module is used for responding to the first input and displaying a target contour in a second half area of the preview image, wherein the target contour is obtained by mirror image turning of the contour of the target characteristic part in the first half area along the symmetry axis.
6. The display device according to claim 5, wherein the second display module comprises:
and the transparency unit is used for displaying the target contour in the second half area of the preview image according to the preset transparency.
7. The display device according to claim 5, further comprising:
a second receiving module, configured to receive a second input of the first target contour in the second half area from the user;
and the third display module is used for responding to the second input and adjusting the display position of the first target outline or target parameters, wherein the target parameters comprise at least one of display size, display shape, contrast and transparency.
8. The display device according to claim 5, further comprising:
the fourth receiving module is used for receiving a third input of the user to the first position in the preview image;
and the fourth display module is used for responding to the third input and displaying the image of the first area in an enlarged mode, wherein the first area is determined according to the first position.
9. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the steps of the display method according to any one of claims 1-4.
10. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the display method according to any one of claims 1-4.
CN202210206069.4A 2022-02-28 2022-02-28 Display method, display device, electronic apparatus, and readable storage medium Pending CN114554097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210206069.4A CN114554097A (en) 2022-02-28 2022-02-28 Display method, display device, electronic apparatus, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210206069.4A CN114554097A (en) 2022-02-28 2022-02-28 Display method, display device, electronic apparatus, and readable storage medium

Publications (1)

Publication Number Publication Date
CN114554097A true CN114554097A (en) 2022-05-27

Family

ID=81660918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210206069.4A Pending CN114554097A (en) 2022-02-28 2022-02-28 Display method, display device, electronic apparatus, and readable storage medium

Country Status (1)

Country Link
CN (1) CN114554097A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160357578A1 (en) * 2015-06-03 2016-12-08 Samsung Electronics Co., Ltd. Method and device for providing makeup mirror
US20170185824A1 (en) * 2015-12-27 2017-06-29 Asustek Computer Inc. Electronic device, computer readable storage medium and face image display method
CN109583261A (en) * 2017-09-28 2019-04-05 丽宝大数据股份有限公司 Biological information analytical equipment and its auxiliary ratio are to eyebrow type method
CN110246206A (en) * 2019-06-21 2019-09-17 北京新氧科技有限公司 Assist the method, apparatus and system of thrush
WO2020108261A1 (en) * 2018-11-28 2020-06-04 维沃移动通信(杭州)有限公司 Photographing method and terminal
CN113079316A (en) * 2021-03-26 2021-07-06 维沃移动通信有限公司 Image processing method, image processing device and electronic equipment
CN113468934A (en) * 2020-04-30 2021-10-01 海信集团有限公司 Symmetry detection method and intelligent equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination