CN110730298A - Display control method and electronic equipment


Info

Publication number
CN110730298A
Authority
CN
China
Prior art keywords
electronic device
camera
display
identifier
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910901746.2A
Other languages
Chinese (zh)
Inventor
程青
巩宇龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910901746.2A
Publication of CN110730298A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/62: Control of parameters via user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a display control method and an electronic device, relates to the technical field of communications, and aims to solve the problem that image acquisition by an electronic device takes a long time and the man-machine interaction performance is poor. The method comprises the following steps: receiving a target input of a user; and responding to the target input, turning on a camera located below a display screen of the electronic device, and displaying a first identifier, wherein the display position of the first identifier corresponds to a first position, and the first position is the position of the camera below the display screen. The method is applied to image acquisition scenarios.

Description

Display control method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a display control method and electronic equipment.
Background
With the development of terminal technology, the screen-to-body ratio of electronic devices keeps increasing. Typically, a front-facing camera may be disposed below the screen of the electronic device (i.e., an under-screen camera) so that the electronic device can achieve a full screen. In addition, the under-screen camera cannot be seen by the user, so the display effect of the screen of the electronic device can be ensured.
However, when the user triggers the electronic device to acquire an image through the under-screen camera, for example to capture a face for face recognition, the under-screen camera is invisible to the user, so the user may need to adjust many times during face recognition to align with the under-screen camera, that is, to bring the face within the acquisition range of the camera. As a result, the electronic device may be unable to acquire the user's image accurately and quickly, so image acquisition takes a long time and the man-machine interaction performance is poor.
Disclosure of Invention
The embodiment of the invention provides a display control method and electronic equipment, and aims to solve the problems that the time for acquiring images by the electronic equipment is long and the man-machine interaction performance is poor.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present invention provides a display control method applied to an electronic device, where the method includes: receiving a target input of a user; and, in response to the target input, turning on a camera located below a display screen of the electronic device and displaying a first identifier; the display position of the first identifier corresponds to a first position, and the first position is the position where the camera is located below the display screen.
In a second aspect, an embodiment of the present invention provides an electronic device, including a receiving module, a processing module, and a display module. The receiving module is used for receiving a target input of a user; the processing module is used for turning on a camera located below a display screen of the electronic device in response to the target input received by the receiving module; the display module is used for displaying a first identifier in response to the target input received by the receiving module; the display position of the first identifier corresponds to a first position, and the first position is the position where the camera is located below the display screen.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the display control method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the display control method of the first aspect.
In an embodiment of the present invention, an electronic device may receive a target input of a user; and, in response to the target input, turn on a camera located below a display screen of the electronic device and display a first identifier, where the display position of the first identifier corresponds to a first position, and the first position is the position of the camera below the display screen. Through this scheme, when the user triggers the electronic device to acquire an image through the camera, the electronic device can display the first identifier for indicating the position of the camera relative to the display screen of the electronic device, so that the user can quickly learn the position of the camera below the display screen and quickly aim the camera at the shooting object, and the electronic device can quickly acquire an image of the shooting object. Therefore, the time for the electronic device to acquire images can be shortened, and the man-machine interaction performance can be improved.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
Fig. 2 is a first schematic diagram of a display control method according to an embodiment of the present invention;
Fig. 3 is a first schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 4 is a second schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 5 is a third schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 6 is a fourth schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 7 is a second schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 8 is a third schematic diagram of the display control method according to an embodiment of the present invention;
Fig. 9 is a fifth schematic interface diagram of an application of the display control method according to an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 11 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," etc. herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first region and the second region, etc. are for distinguishing different regions, and are not for describing a particular order of the regions.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or more advantageous than other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
Some of the nouns or terms referred to in the claims and the specification of the present application will be explained first.
Under-screen camera: a camera arranged below the display screen of the electronic device (with the back of the electronic device facing downwards and the display screen facing upwards). Typically, the under-screen camera is not visible to the user.
Acquisition range of the camera: the range within which the camera can capture a clear image of an object (e.g., a person or a thing). For example, assuming the acquisition range of the camera is range A, the camera can acquire images of objects within range A, but cannot acquire images of objects outside range A.
In practical implementation, the acquisition range of the camera is usually characterized by the horizontal field angle: the smaller the horizontal field angle, the smaller the acquisition range of the camera; the larger the horizontal field angle, the larger the acquisition range. The horizontal field angle can in turn be characterized by the focal length: the larger the focal length f, the smaller the horizontal field angle; the smaller the focal length f, the larger the horizontal field angle.
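By way of illustration only (this sketch is not part of the original disclosure), the relationship between focal length and horizontal field angle can be expressed with a simple pinhole-camera model; the sensor width and focal length values below are hypothetical:

```python
import math

def horizontal_fov_degrees(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field angle of a pinhole camera model.

    A larger focal length f gives a smaller field angle (a smaller
    acquisition range); a smaller f gives a larger field angle.
    """
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Hypothetical example: a 4.8 mm wide sensor.
print(horizontal_fov_degrees(4.8, 4.0))  # ~61.9 degrees
print(horizontal_fov_degrees(4.8, 8.0))  # ~33.4 degrees (longer f, narrower view)
```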
Screen-off state: the state in which the display screen of the electronic device is turned off. For example, when the electronic device is in the bright-screen state, the user may press the power key to trigger the electronic device to turn off the backlight, so that the display screen of the electronic device turns off, i.e., the electronic device enters the screen-off state.
Bright-screen state: the state in which the display screen of the electronic device is lit.
Screen-locked state: the state in which the display screen of the electronic device is lit and a lock-screen interface is displayed; namely, the state of the electronic device when the display screen is lit but not unlocked.
Display state: the state in which the display screen of the electronic device is lit and a user interface is displayed; namely, the state of the electronic device when the display screen is lit and unlocked.
It can be understood that, when the display screen of the electronic device is lit, the electronic device may specifically be in either the display state or the screen-locked state.
The embodiment of the invention provides a display control method and electronic equipment, wherein the electronic equipment can receive target input of a user; and responding to the target input, starting a camera positioned below a display screen of the electronic equipment, and displaying a first identifier, wherein the display position of the first identifier corresponds to a first position, and the first position is the position of the camera positioned below the display screen. Through the scheme, when the user triggers the electronic equipment to acquire the image through the camera, the electronic equipment can display the first identification used for indicating the position of the camera relative to the display screen of the electronic equipment, so that the user can quickly know the specific position of the camera below the display screen, the user can quickly align the camera with the shooting object, and the electronic equipment can quickly acquire the image of the shooting object. Therefore, the time for the electronic equipment to acquire the images can be shortened, and the man-machine interaction performance is improved.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiment of the present invention.
The following takes an android operating system as an example to describe a software environment to which the display control method provided by the embodiment of the present invention is applied.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the display control method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the interface display control method may operate based on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the display control method provided by the embodiment of the present invention by running the software program in the android operating system.
The electronic equipment in the embodiment of the invention can be a mobile terminal or a non-mobile terminal. For example, the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile terminal may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiment of the present invention is not limited in particular.
An execution main body of the display control method provided in the embodiment of the present invention may be the electronic device, or may also be a functional module and/or a functional entity capable of implementing the display control method in the electronic device, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The following takes an electronic device as an example to exemplarily explain a display control method provided by an embodiment of the present invention.
The display control method provided by the embodiment of the invention can be executed in the following three scenes. The three scenarios are described below as examples.
In a first scenario, the electronic device is in the screen-off state, and the user needs the electronic device to acquire a facial image through the camera located below the display screen of the electronic device so as to unlock the electronic device.
In the first scenario, the user may, through an input (e.g., the target input in the embodiment of the present invention), trigger the electronic device to keep itself in the screen-off state, turn on the camera located below the display screen of the electronic device, and display, on the display screen, an identifier (e.g., the first identifier in the embodiment of the present invention) for indicating the position of the camera relative to the display screen, so that the user can quickly align with the camera according to the identifier and multiple blind adjustments by the user can be avoided. Therefore, the time for the electronic device to perform face recognition can be shortened, and the man-machine interaction performance can be improved.
In a second scenario, the electronic device is in the screen-locked state, and the user needs the electronic device to acquire a facial image through the camera located below the display screen of the electronic device so as to perform face recognition and unlock the electronic device.
In the second scenario, the user may, through an input (e.g., the target input in the embodiment of the present invention), trigger the electronic device to keep itself in the screen-locked state, turn on the camera located below the display screen of the electronic device, and display, on the display screen, an identifier (e.g., the first identifier in the embodiment of the present invention) for indicating the position of the camera relative to the display screen, so that the user can quickly align with the camera according to the identifier and multiple blind adjustments by the user can be avoided. Therefore, the time for the electronic device to acquire the image can be shortened, the time for face recognition can be shortened, and the man-machine interaction performance can be improved.
In a third scenario, the electronic device is in the display state, and the user needs the electronic device to acquire an image through the camera located below the display screen of the electronic device so as to unlock a certain application program or folder in the electronic device, perform a certain function in an application program (e.g., payment), or acquire a self-portrait image.
In the third scenario, the user may, through an input (e.g., the target input in the embodiment of the present invention), trigger the electronic device to turn on the camera located below the display screen of the electronic device and display an identifier (e.g., the first identifier in the embodiment of the present invention) on the display screen, so that the user can quickly align with the camera according to the identifier and multiple blind adjustments by the user can be avoided. Therefore, the time for the electronic device to acquire the image can be shortened, and the man-machine interaction performance can be improved.
Optionally, in the embodiment of the present invention, the first scenario, the second scenario, and the third scenario are only exemplary illustrations of the applicable scenarios of the display control method provided in the embodiment of the present invention; in actual implementation, the display control method provided in the embodiment of the present invention may also be applied to any other possible scenario.
The following describes an exemplary display control method according to an embodiment of the present invention with reference to the drawings.
As shown in fig. 2, an embodiment of the present invention provides a display control method, which may include S201 and S202 described below.
S201, the electronic equipment receives target input of a user.
Optionally, in the embodiment of the present invention, the target input may be an input by the user in a second area of the display screen of the electronic device, an input by the user on a physical key, or an input by the user on a target identifier. This may be determined according to actual use requirements, and the embodiment of the invention is not limited.
Specifically, the second area may be an area of the display screen corresponding to a fingerprint identification module arranged below the display screen (here, the display screen of the electronic device faces upward and the back of the electronic device faces downward). The physical key may be at least one of a power key and a volume key. The target identifier may be a program identifier of an application program, or a function identifier of a specific function (e.g., a payment function) in an application program.
Optionally, in the embodiment of the present invention, the target object of the target input may differ depending on the state of the electronic device before the user performs the target input.
Optionally, in the embodiment of the present invention, the electronic device may be in the screen-off state or the bright-screen state. The bright-screen state may specifically include the screen-locked state and the display state.
For the descriptions of the screen-off state, the bright-screen state, the screen-locked state and the display state, reference may be made to the related descriptions in the noun explanation part above, and details are not repeated here.
Specifically, when the electronic device is in the screen-off state or the screen-locked state, the target input may be an input on a physical key or an input in the second area by the user. When the electronic device is in the display state, the target input may be an input on the target identifier by the user.
Optionally, in this embodiment of the present invention, when the target input is an input by the user in the second area or an input on the target identifier, the target input may be any possible form of input, such as a click input, a long-press input, a heavy-press input, a drag input, or a slide input, which may be determined according to actual use requirements, and this embodiment of the present invention is not limited.
Optionally, in this embodiment of the present invention, when the target input is an input of a physical key by a user, the target input may be a long-press input.
Optionally, in the embodiment of the present invention, the click input may be a single click, a double click, or a preset number of continuous clicks. The long-press input may be an input in which the contact lasts for a preset duration. The heavy-press input, also referred to as a pressure touch input, refers to an input in which the user presses with a pressure value greater than or equal to a pressure threshold. The drag input may be an input of dragging in any direction. The slide input may be an input of sliding in any direction.
In the embodiment of the present invention, the preset times, the preset duration, and the pressure threshold may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
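By way of illustration only, the following sketch shows one way the input forms named above could be distinguished by contact duration, press pressure, and tap count; the threshold values and event fields are hypothetical, since the disclosure leaves them to actual use requirements:

```python
from dataclasses import dataclass

# Hypothetical thresholds; the disclosure leaves them to actual use requirements.
LONG_PRESS_SECONDS = 1.0   # "preset duration"
PRESSURE_THRESHOLD = 0.6   # "pressure threshold", normalized to 0..1
MULTI_CLICK_COUNT = 2      # "preset number" of continuous clicks

@dataclass
class TouchEvent:
    duration_s: float  # how long the contact lasted
    pressure: float    # normalized peak pressure of the contact
    tap_count: int     # consecutive taps at the same spot
    moved: bool        # whether the contact moved across the screen

def classify_target_input(e: TouchEvent) -> str:
    """Map a raw touch event to one of the input forms named in the text."""
    if e.moved:
        return "slide or drag input"
    if e.pressure >= PRESSURE_THRESHOLD:
        return "heavy-press (pressure touch) input"
    if e.duration_s >= LONG_PRESS_SECONDS:
        return "long-press input"
    if e.tap_count >= MULTI_CLICK_COUNT:
        return "multi-click input"
    return "click input"
```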
For the description of the under-screen camera, reference may be made to the related description of the under-screen camera in the noun explanation section, and details are not repeated here.
S202, the electronic device, in response to the target input, turns on a camera located below the display screen of the electronic device and displays the first identifier.
The display position of the first identifier may correspond to a first position, and the first position may be the position where the camera is located below the display screen of the electronic device. In the embodiment of the invention, the first identifier may be used to indicate the position of the camera below the display screen of the electronic device.
Optionally, in this embodiment of the present invention, the first position may be any position below a display screen of the electronic device.
Optionally, in the embodiment of the present invention, the electronic device may display the first identifier at any position of the display screen.
Optionally, in the embodiment of the present invention, the electronic device may display the first identifier in the target area, or may display the first identifier in an area other than the target area in a display screen of the electronic device, which may be determined specifically according to an actual use requirement, and the embodiment of the present invention is not limited.
Optionally, in an embodiment of the present invention, the target area may include at least one of the following: a first area, a periphery of the first area. This may be determined according to actual use requirements, and the embodiment of the invention is not limited.
For example, the first area may be the area indicated by 30 in (a) of fig. 3, and the periphery of the first area may be the area indicated by 31 in (a) of fig. 3.
The first area may be an area on the display screen of the electronic device corresponding to the first position. For example, the first position may be a position below the area indicated by 30 in (a) of fig. 3, that is, the camera may be located below the area indicated by 30 in (a) of fig. 3.
Optionally, in the embodiment of the present invention, when displaying the first identifier in the target area, the electronic device may display the first identifier in the first area, on the periphery of the first area, or in both the first area and the periphery of the first area.
Illustratively, assume that the first identifier is a green flashing cursor (which may be implemented by displaying a pattern of a green flashing cursor or by providing an indicator light that emits green light). Then, as shown in fig. 3, when displaying the first identifier in the first area, the electronic device may display the green flashing cursor in the area indicated by 30 in (a) of fig. 3; when displaying the first identifier on the periphery of the first area, the electronic device may display the green flashing cursor in the area indicated by 31 in (a) of fig. 3; and when displaying the first identifier in both the first area and its periphery, the electronic device may display the green flashing cursor in the areas indicated by 30 and 31 in (a) of fig. 3.
Optionally, in the embodiment of the present invention, when the electronic device displays the first identifier in a region other than the target region in the display screen, the electronic device may display an arrow pointing to the first region on the display screen of the electronic device, so as to prompt a user that the camera is located at a position below the display screen, so that the user can align with the camera quickly according to the arrow. Illustratively, as shown in fig. 3 (b), the first position (i.e., where the camera is located when it is below the display screen of the electronic device) may be opposite a first area 30 of the display screen of the electronic device, and then the electronic device may display an arrow 32 (i.e., a first logo) pointing to the first area outside the first area.
In the embodiment of the invention, the electronic equipment can display the first identifier in different areas and different forms, so that the flexibility and the diversity of displaying the first identifier can be increased, and the man-machine interaction performance can be improved.
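As an illustration of how the display region of the first identifier could be derived from the camera's position below the screen (a sketch under assumed screen coordinates; the function and parameter names are not from the disclosure):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    top: float
    width: float
    height: float

def first_area(cam_x: float, cam_y: float, cam_diameter: float) -> Rect:
    """Screen region directly above the under-screen camera (the first area)."""
    r = cam_diameter / 2.0
    return Rect(cam_x - r, cam_y - r, cam_diameter, cam_diameter)

def marker_region(cam_x: float, cam_y: float, cam_diameter: float,
                  mode: str = "periphery", ring_width: float = 8.0) -> Rect:
    """Region in which to draw the first identifier: the first area itself,
    or a slightly larger region covering its periphery (the 'both' option is
    approximated here by the same enclosing rectangle)."""
    area = first_area(cam_x, cam_y, cam_diameter)
    if mode == "first_area":
        return area
    return Rect(area.left - ring_width, area.top - ring_width,
                area.width + 2 * ring_width, area.height + 2 * ring_width)
```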
Optionally, in the embodiment of the present invention, the electronic device may display the first identifier when the electronic device is in the screen-off state, or when the electronic device is in the bright-screen state. The bright-screen state may include the screen-locked state and the display state.
Optionally, in the embodiment of the present invention, corresponding to the first scenario (that is, the electronic device receives the target input of the user in the screen-off state), the electronic device, in response to the target input of the user, may turn on the camera located below the display screen of the electronic device and display the first identifier while the electronic device remains in the screen-off state (implementation mode one). Or, corresponding to the second scenario (that is, the electronic device receives the target input of the user in the screen-locked state), the electronic device, in response to the target input of the user, may turn on the camera located below the display screen of the electronic device and display the first identifier while the electronic device remains in the screen-locked state (implementation mode two). Or, corresponding to the third scenario (that is, the electronic device receives the target input of the user in the display state), the electronic device, in response to the target input of the user, may turn on the camera located below the display screen of the electronic device and display the first identifier while the electronic device remains in the display state (implementation mode three).
Implementation mode one
Optionally, in the embodiment of the present invention, the electronic device may turn on a camera located below a display screen of the electronic device when the electronic device is in a screen-off state, and display the first identifier with the first brightness.
In this embodiment of the present invention, displaying the first identifier at the first brightness means that the electronic device partially lights up, at the first brightness, the area of the display screen used for displaying the first identifier, and displays the first identifier in that area. In this case, since the first identifier is displayed only after the user's triggering, that is, the electronic device does not display the first identifier before the user triggers it, the power consumption of the electronic device in the screen-off state can be reduced.
Implementation mode two
Optionally, in the embodiment of the present invention, the electronic device may keep itself in the screen-locked state and display the first identifier at the second brightness.
It should be noted that, in the embodiment of the present invention, the electronic device displaying the first identifier at the second brightness may be understood as the electronic device displaying the first identifier at the second brightness on the lock-screen interface of the electronic device.
The first brightness may be less than or equal to the second brightness.
The specific values of the first brightness and the second brightness may be determined according to actual use requirements, and the embodiment of the present invention is not limited.
Implementation mode three
Optionally, in this embodiment of the present invention, the electronic device may display the first identifier with the second brightness when the electronic device is in the display state.
It should be noted that, in the embodiment of the present invention, whatever state the electronic device is in before it receives the target input, the electronic device may keep that state while displaying the first identifier.
In the embodiment of the invention, because the user can trigger the electronic equipment to display the first identifier under the condition that the electronic equipment is in different states, when the electronic equipment is in different states, the user can know the position of the camera relative to the display screen according to the first identifier and can align with the camera according to the first identifier, so that the convenience and the flexibility of the electronic equipment for displaying the first identifier can be improved, and the man-machine interaction performance can be further improved.
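A minimal sketch of how the brightness used for the first identifier could be chosen according to the device state; the state names and brightness values are assumptions, and the disclosure only requires that the first brightness be less than or equal to the second brightness:

```python
def marker_brightness(device_state: str,
                      first_brightness: float = 0.2,
                      second_brightness: float = 0.6) -> float:
    """Brightness for drawing the first identifier.

    In the screen-off state only the marker region is partially lit, at the
    lower (or equal) first brightness; in the screen-locked or display state
    the marker is drawn at the second brightness.
    """
    assert first_brightness <= second_brightness
    if device_state == "screen_off":
        return first_brightness   # implementation mode one
    if device_state in ("screen_locked", "display"):
        return second_brightness  # implementation modes two and three
    raise ValueError(f"unknown device state: {device_state}")
```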
The following describes an exemplary display control method provided by an embodiment of the present invention with reference to fig. 4 to 6.
For example, assume that the electronic device is in the screen-off state and the target input is a long-press input on the power key by the user; further assume that, when the electronic device is in the screen-off state, the electronic device displays the first identifier at the first brightness on the periphery of the first area. Then, as shown in (a) of fig. 4, when the electronic device is in the screen-off state, the user may long-press the power key 40, that is, the electronic device receives the target input of the user. In response to the target input, as shown in (b) of fig. 4, the electronic device may keep itself in the screen-off state, turn on the camera located below the display screen of the electronic device, light up the periphery of the first area (41 shown in (b) of fig. 4) at the first brightness, and display, on the periphery of the first area, a first identifier 42 for indicating the position of the camera relative to the display screen of the electronic device; that is, the first position is the position below the display screen of the electronic device corresponding to the area indicated by 41 in (b) of fig. 4.
As another example, assume that the electronic device is in the screen-locked state and the target input is a click input by the user in the second area; further assume that, when the electronic device is in the screen-locked state, the electronic device displays the first identifier at the second brightness in the first area and on its periphery. Then, as shown in (a) of fig. 5, when the electronic device is in the screen-locked state, the user may click the second area 50, that is, the electronic device receives the target input of the user. In response to the target input, as shown in (b) of fig. 5, the electronic device may keep itself in the screen-locked state, turn on the camera located below the display screen of the electronic device, and display, in the first area (the area indicated by 51 in (b) of fig. 5) and on the periphery 52 of the first area, a first identifier for indicating the position of the camera relative to the display screen of the electronic device; that is, the first position is the position below the display screen of the electronic device corresponding to the area indicated by 51 in (b) of fig. 5.
As another example, assume that the electronic device is in the display state and the target input is a click input by the user on the application icon of an album application; further assume that, when the electronic device is in the display state, the electronic device displays the first identifier at the second brightness in the first area. Then, as shown in (a) of fig. 6, when the electronic device is in the display state, the user may click the application icon of the album application, that is, the electronic device receives the target input of the user. In response to the target input, as shown in (b) of fig. 6, the electronic device may keep itself in the display state, turn on the camera located below the display screen of the electronic device, and display, in the first area, a first identifier 61 for indicating the position of the camera relative to the display screen of the electronic device; that is, the first position is the position below the display screen of the electronic device corresponding to the area indicated by 61 in (b) of fig. 6.
In the display control method provided by the embodiment of the invention, when the user triggers the electronic device to acquire an image through the camera, the electronic device can display the first identifier for indicating the position of the camera relative to the display screen of the electronic device, so that the user can quickly learn the specific position of the camera and quickly align his or her face with the camera, and the electronic device can quickly acquire the image. Therefore, the time for the electronic device to perform face recognition can be shortened, and the man-machine interaction performance can be improved.
Optionally, in the embodiment of the present invention, the camera may be a movable camera. For example, the camera may move along a preset trajectory under a screen of the electronic device.
Further, the camera may extend from below a screen of the electronic device to outside the electronic device, and may retract inside the electronic device.
Optionally, in this embodiment of the present invention, when the camera is a movable camera, a display position of the first identifier may be changed along with a change in a position of the camera below the display screen.
It can be understood that, when the camera extends out of the electronic device, the electronic device may display the first identifier at the position of the display screen corresponding to the position from which the camera extends, or may cancel the display of the first identifier. This may be determined according to actual use requirements, and the embodiment of the invention is not limited.
In the embodiment of the invention, the first mark can be changed along with the change of the position of the camera below the display screen, so that the flexibility and convenience for displaying the first mark can be improved, and the man-machine interaction performance is improved.
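For the movable-camera case, the following sketch illustrates keeping the first identifier in sync with the camera's position; the callback and display-interface names are assumptions for illustration, not APIs from the disclosure:

```python
class MarkerController:
    """Keeps the first identifier's display position in sync with a movable
    under-screen camera (a sketch only)."""

    def __init__(self, display):
        # `display` is assumed to expose show_marker(x, y) and hide_marker().
        self.display = display

    def on_camera_moved(self, new_x: float, new_y: float) -> None:
        # The display position of the first identifier changes with the
        # camera's position below the display screen.
        self.display.show_marker(new_x, new_y)

    def on_camera_extended(self) -> None:
        # When the camera extends out of the device, the marker may either be
        # shown at the position it extended from or cancelled; here we cancel.
        self.display.hide_marker()
```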
Optionally, in the embodiment of the present invention, after the electronic device turns on the camera located below the display screen of the electronic device, an image may be acquired by the camera.
Optionally, in the embodiment of the present invention, the electronic device may acquire an image of any possible object through the camera. For example, the electronic device may acquire a facial image of a user through the camera, or the electronic device may also acquire an image of a book or a landscape or an image of other parts of the user, such as a body and a fingerprint, through the camera, which may be determined according to actual usage requirements, and the embodiment of the present invention is not limited.
Optionally, in the embodiment of the present invention, the electronic device may store the facial image acquired by the camera in the electronic device as a self-portrait image, or the electronic device may use the facial image acquired by the camera for face recognition.
In the embodiment of the invention, when the user triggers the electronic equipment to acquire the image through the camera, the electronic equipment can display the first identifier for indicating the position of the camera relative to the display screen, so that the user can quickly align the camera with the acquired object (such as the face of the user) according to the first identifier, and the electronic equipment can quickly and accurately acquire the image of the acquired object.
Optionally, in the first implementation mode and the second implementation mode of the present invention, the display screen of the electronic device is in a locked state (i.e., not yet unlocked), so the electronic device may acquire a facial image of the user through the camera and recognize the facial image to unlock the display screen of the electronic device. In the third implementation mode, the electronic device is in the display state (that is, the electronic device may display a user interface), so the electronic device may acquire a facial image of the user through the camera and recognize the facial image to perform a target operation. The target operation may include: payment, unlocking a certain application program or folder, and the like.
The method for the electronic device to recognize the facial image may specifically be: the electronic device may acquire facial feature information from the facial image acquired by the camera, and compare the facial feature information with preset facial feature information in the electronic device to determine whether the facial feature information conforms to the preset facial feature information. If the facial feature information conforms to the preset facial feature information (that is, the two are the same, or the matching degree between them is greater than a preset threshold), it can be understood that face recognition of the electronic device succeeds; if the facial feature information does not conform to the preset facial feature information (that is, the two are different, or the matching degree between them is smaller than the preset threshold), it can be understood that face recognition of the electronic device fails.
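As an illustration of the matching step described above (a sketch only; the disclosure does not fix a particular feature representation, similarity metric, or threshold value):

```python
import math

def matching_degree(features: list[float], preset: list[float]) -> float:
    """Cosine similarity used here as one possible 'matching degree' between
    the acquired facial feature vector and the preset one."""
    dot = sum(a * b for a, b in zip(features, preset))
    norm = math.sqrt(sum(a * a for a in features)) * math.sqrt(sum(b * b for b in preset))
    return dot / norm if norm else 0.0

def face_recognition_succeeds(features: list[float], preset: list[float],
                              threshold: float = 0.8) -> bool:
    """Recognition succeeds when the acquired features conform to the preset
    ones, i.e. the matching degree reaches the preset threshold."""
    return matching_degree(features, preset) >= threshold
```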
Optionally, in the embodiment of the present invention, in the process of acquiring the facial image of the user through the camera, the electronic device may first detect the image acquired by the camera to determine whether a complete facial image exists in it, and perform face recognition when a complete facial image is determined to exist. Specifically, when the electronic device detects that the facial image of the user captured by the camera is incomplete, the electronic device may determine that the user (specifically, the face of the user) is offset relative to the camera, that is, the user is not aligned with the camera; at this time, the electronic device may display an identifier (for example, a second identifier described below) to instruct the user to move in a certain direction (for example, a target direction described below) relative to the camera so as to align with the camera.
For example, in the embodiment of the present invention, with reference to fig. 2, as shown in fig. 7, after S202, the display control method provided in the embodiment of the present invention may further include S203 described below.
S203, the electronic device displays a second identifier.
The second identifier may be used to instruct the user to move in a target direction relative to the camera. It can be understood that the user can adjust his or her position relative to the camera as indicated by the second identifier, so that the user's face can be quickly aligned with the camera.
Optionally, in the embodiment of the present invention, the target direction may be any direction relative to the camera, for example, an upward, downward, leftward, or rightward direction relative to the camera. This may be determined according to actual use requirements, and the embodiment of the invention is not limited.
Optionally, in the embodiment of the present invention, in the process that the user moves relative to the camera in the target direction, the electronic device may continuously acquire images through the camera, and may continuously detect the images acquired by the camera, so as to determine whether a complete facial image exists in the images acquired by the camera. Specifically, if the electronic device detects a complete facial image in an image captured by the camera, the electronic device may cancel displaying the second identifier, or control the second identifier to stop moving cyclically in the target direction, so as to prompt the user that the user is aligned with the camera. If the electronic device does not detect a complete facial image in the image acquired by the camera, the electronic device may acquire the image again by the camera and perform facial image detection on the image acquired by the camera until the electronic device detects the complete facial image in the image acquired by the camera.
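The capture-and-detect loop described above could be sketched as follows, assuming hypothetical camera, detector, and UI interfaces (none of these names come from the disclosure):

```python
import time

def acquire_until_face_complete(camera, detector, ui, target_direction,
                                poll_interval_s: float = 0.1,
                                timeout_s: float = 10.0) -> bool:
    """Prompt the user with the second identifier until a complete facial
    image is detected in the camera frames, then stop prompting."""
    ui.show_second_identifier(target_direction)  # cyclically moving prompt
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frame = camera.capture()
        if detector.has_complete_face(frame):
            ui.hide_second_identifier()          # or stop its cyclic movement
            return True                          # user is aligned with the camera
        time.sleep(poll_interval_s)
    ui.hide_second_identifier()
    return False
```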
In the embodiment of the invention, the second identifier can intuitively indicate the process of aligning the user with the camera in real time, so that the man-machine interaction performance can be improved.
In the embodiment of the invention, the second identifier can be used for instructing the user to move relative to the camera along the target direction, so that the user can quickly adjust his or her position relative to the camera under the prompt of the second identifier, thereby avoiding the user blindly adjusting the position relative to the camera many times, simplifying the process of aligning the user with the camera, and improving the man-machine interaction performance.
Optionally, in this embodiment of the present invention, the second identifier may be a dynamic identifier that moves circularly along a target direction to indicate a direction in which a user needs to move, so that the user can know whether the moving direction of the user is correct in real time, and thus the user can align with the camera quickly.
Optionally, in the embodiment of the present invention, the electronic device may specifically display the second identifier according to an offset direction of a face image in an image acquired by the camera with respect to the camera.
Optionally, in the embodiment of the present invention, with reference to fig. 7, as shown in fig. 8, after S202 and before S203, the display control method provided in the embodiment of the present invention may further include S204 and S205 described below. S203 described above can be specifically realized by S203a described below.
S204, the electronic device acquires a target image through the camera.
For the description in S204, reference may be specifically made to the related description in the foregoing embodiments, and details are not repeated here.
S205, the electronic equipment determines the offset direction of the user relative to the camera according to the target image.
S203a, the electronic device displays the second identifier according to the offset direction.
Wherein the offset direction may be opposite to the target direction.
Optionally, in the embodiment of the present invention, after the electronic device acquires the target image through the camera, the target image may be compared with a reference image preset in the electronic device, so as to determine an offset direction of the user relative to the camera.
Specifically, the electronic device may acquire a first face image from the target image and map the first face image into a reference image (an image containing a reference face image). The electronic device may then select a reference object (e.g., an eye, the forehead, etc.) and determine the position of the reference object in the reference face image (hereinafter referred to as the reference position) and its position in the mapped first face image (hereinafter referred to as the mapped position); finally, the electronic device may determine that the offset direction of the user relative to the camera is the direction pointing from the reference position to the mapped position. It is to be understood that the selected reference object should be an object present in both the reference face image and the first face image.
Optionally, in the embodiment of the present invention, the reference face image may be any schematic image of a face contour and/or facial features.
In the embodiment of the present invention, the purpose of comparing the first face image with the reference face image is to determine which part of the face the first face image specifically corresponds to, not to confirm whether the first face image matches the reference face image. For example, after comparing the first face image with the reference face image, the electronic device only needs to determine that the first face image is the forehead and the part above the forehead, and does not need to determine whether the first face image matches the forehead and the part above it in the reference face image.
Next, the above-mentioned S203a, S204, and S205 will be exemplarily described with reference to fig. 9.
Illustratively, as shown in fig. 9, (a) in fig. 9 is the target image and (b) in fig. 9 is the reference image. Assume that the first face image is an image of the mouth and the part below the mouth, and the reference face image is a schematic diagram including a face contour and facial features; further assume that the reference object selected by the electronic device is the mouth. Then, the electronic device may acquire the first face image (A shown in (a) of fig. 9) from the target image and map it into the reference image; the electronic device may then select the mouth as the reference object and determine that the position of the mouth in the reference face image is C and the position of the mouth in the mapped first face image (A' shown in (c) of fig. 9) is D. The electronic device may then determine that the offset direction of the user relative to the camera is the direction pointing from C to D, and may display the second identifier along this offset direction, i.e., the electronic device displays the second identifier according to the offset direction.
In the embodiment of the invention, the electronic equipment can determine the offset direction of the user relative to the camera according to the target image acquired by the camera, so that the electronic equipment can determine the opposite target direction according to the offset direction and display the second mark for indicating the target direction to indicate the user to adjust the position of the user relative to the camera along the target direction so as to align the user with the camera, thereby simplifying the process of aligning the user with the camera and improving the man-machine interaction performance.
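Putting the offset-direction determination together with the fig. 9 example, a minimal sketch (the coordinate convention and the sample positions are assumed; screen y grows downward):

```python
def offset_and_target_direction(ref_pos_in_reference, ref_pos_in_mapped):
    """Offset of the user relative to the camera, derived from one reference
    object (e.g. the mouth) located at position C in the reference face image
    and at position D in the mapped first face image, as in the fig. 9
    example; the target direction indicated by the second identifier is the
    opposite of the offset direction."""
    cx, cy = ref_pos_in_reference  # C
    dx, dy = ref_pos_in_mapped     # D
    offset = (dx - cx, dy - cy)    # direction pointing from C to D
    target = (-offset[0], -offset[1])
    return offset, target

# Hypothetical example: the mapped mouth lies well above its reference
# position, so the user is offset upward and should move downward.
print(offset_and_target_direction((50, 80), (50, 20)))
# -> ((0, -60), (0, 60))
```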
In the embodiments of the present invention, the display control methods shown in the above method drawings are each described with reference to one drawing by way of example. In specific implementation, the display control methods shown in the above method drawings may also be implemented in combination with any other drawings that can be combined, as illustrated in the above embodiments, and details are not repeated here.
As shown in fig. 10, an embodiment of the present invention provides an electronic device 900, and the electronic device 900 may include a receiving module 901, a processing module 902, and a display module 903. The receiving module 901 may be used to receive a target input of a user; the processing module 902 may be configured to turn on a camera located below a display screen of the electronic device in response to the target input received by the receiving module 901; the display module 903 may be configured to display a first identifier in response to the target input received by the receiving module 901. The display position of the first identifier corresponds to a first position, and the first position may be the position where the camera is located below the display screen.
Optionally, in this embodiment of the present invention, the display module 903 may be specifically configured to display the first identifier in a target area, where the target area may include at least one of the following: a first region, a periphery of the first region. The first region may be an area on the display screen of the electronic device corresponding to the first position.
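Purely as an illustration of the "first region or its periphery" option, the following Kotlin sketch models the first region as a circle over the camera and the periphery as a ring around it. The circular shape, the names, and the numeric values are assumptions of this example; the embodiment does not fix the shape of the first region.

import kotlin.math.hypot

// Where the first identifier may be drawn relative to the first region.
enum class TargetArea { FIRST_REGION, PERIPHERY, FIRST_REGION_AND_PERIPHERY }

// Returns true if pixel (x, y) lies in the chosen target area, assuming the
// first region is a circle of radius r centred on the camera at (cx, cy) and
// the periphery is a ring of width ringWidth around that circle.
fun inTargetArea(
    x: Double, y: Double,
    cx: Double, cy: Double,
    r: Double, ringWidth: Double,
    area: TargetArea
): Boolean {
    val dist = hypot(x - cx, y - cy)
    return when (area) {
        TargetArea.FIRST_REGION -> dist <= r
        TargetArea.PERIPHERY -> dist > r && dist <= r + ringWidth
        TargetArea.FIRST_REGION_AND_PERIPHERY -> dist <= r + ringWidth
    }
}

fun main() {
    // Camera centred at (540, 1800) with a 30-pixel first region and a 10-pixel ring.
    println(inTargetArea(545.0, 1805.0, 540.0, 1800.0, 30.0, 10.0, TargetArea.FIRST_REGION))  // true
    println(inTargetArea(575.0, 1800.0, 540.0, 1800.0, 30.0, 10.0, TargetArea.PERIPHERY))     // true
}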
Optionally, in the embodiment of the present invention, a display position of the first identifier may be changed along with a change of a position of the camera below the display screen.
Optionally, in this embodiment of the present invention, the display module 903 may be further configured to display a second identifier after the processing module 902 turns on the camera located below the display screen of the electronic device, where the second identifier may be used to instruct the user to move along the target direction relative to the camera.
Optionally, in this embodiment of the present invention, the processing module 902 may be further configured to acquire a target image through a camera before the display module 903 displays the second identifier, and determine an offset direction of the user relative to the camera according to the target image; the display module 903 may be specifically configured to display the second identifier according to the offset direction, where the offset direction may be opposite to the target direction.
Optionally, in this embodiment of the present invention, the second identifier may be a dynamic identifier that moves circularly along the target direction.
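A non-limiting Kotlin sketch of such a cyclically moving identifier is given below; it computes the identifier's position at a given time so that the identifier repeatedly travels a fixed distance along the target direction and then restarts. The names, period, and distance are assumptions chosen only for the example.

data class Vec(val x: Double, val y: Double)

// Position of the second identifier at time nowMs: it moves `distance` pixels
// along the (unit) target direction over one period and then jumps back to the
// start, producing a cyclic "move this way" animation.
fun identifierPosition(start: Vec, targetDir: Vec, distance: Double,
                       periodMs: Long, nowMs: Long): Vec {
    val phase = (nowMs % periodMs).toDouble() / periodMs   // 0.0 <= phase < 1.0, repeats each period
    return Vec(
        start.x + targetDir.x * distance * phase,
        start.y + targetDir.y * distance * phase
    )
}

fun main() {
    val dir = Vec(1.0, 0.0)   // example target direction: move to the right
    for (t in 0L..2000L step 500L) {
        println("t=${t}ms -> ${identifierPosition(Vec(100.0, 100.0), dir, 80.0, 1000L, t)}")
    }
}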
Optionally, in this embodiment of the present invention, the display module 903 may be specifically configured to display the first identifier with a first brightness when the electronic device is in a screen-off state, or to display the first identifier with a second brightness when the electronic device is in a bright-screen state. The first brightness may be less than the second brightness, or the first brightness may be equal to the second brightness.
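A minimal Kotlin sketch of this brightness choice is given below. The concrete brightness values are assumptions chosen for the example; the embodiment only requires that the first brightness not exceed the second brightness.

enum class ScreenState { SCREEN_OFF, BRIGHT_SCREEN }

// Selects the brightness used to draw the first identifier: a first brightness
// in the screen-off state and a second brightness in the bright-screen state,
// where the first brightness is less than or equal to the second brightness.
fun firstIdentifierBrightness(
    state: ScreenState,
    firstBrightness: Float = 0.3f,
    secondBrightness: Float = 0.8f
): Float {
    require(firstBrightness <= secondBrightness) { "first brightness must not exceed second brightness" }
    return when (state) {
        ScreenState.SCREEN_OFF -> firstBrightness
        ScreenState.BRIGHT_SCREEN -> secondBrightness
    }
}

fun main() {
    println(firstIdentifierBrightness(ScreenState.SCREEN_OFF))     // 0.3
    println(firstIdentifierBrightness(ScreenState.BRIGHT_SCREEN))  // 0.8
}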
The electronic device 900 provided in the embodiment of the present invention can implement each process implemented by the electronic device shown in the foregoing method embodiment, and is not described here again to avoid repetition.
An embodiment of the present invention provides an electronic device that can receive a target input of a user and, in response to the target input, turn on a camera located below a display screen of the electronic device and display a first identifier, where the display position of the first identifier corresponds to a first position, and the first position is the position of the camera below the display screen. In this way, when the user triggers the electronic device to capture an image through the camera, the electronic device can display the first identifier indicating the position of the camera relative to the display screen, so the user can quickly learn the specific position of the camera below the display screen and quickly align his or her face with the camera. The electronic device can therefore capture the image quickly, which shortens the time needed to capture the image and improves human-computer interaction performance.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention. As shown in fig. 11, the electronic device 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 11 does not constitute a limitation of electronic devices, which may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
Here, the user input unit 107 may be used to receive a target input of a user; the display unit 106 may be configured to turn on a camera located below a display screen of the electronic device in response to the target input received by the user input unit 107, and to display the first identifier; and the processor 110 may be configured to turn on the camera located below the display screen of the electronic device in response to the target input received by the user input unit 107. The display position of the first identifier corresponds to the first position, and the first position may be the position of the camera below the display screen.
It can be understood that, in the embodiment of the present invention, the receiving module 901 in the structural schematic diagram of the electronic device (for example, fig. 10) may be implemented by the user input unit 107. The display module 903 in the structural schematic diagram of the electronic device (for example, fig. 10) may be implemented by the display unit 106. The processing module 902 in the structural schematic diagram of the electronic device (for example, fig. 10) may be implemented by the processor 110.
An embodiment of the present invention provides an electronic device that can receive a target input of a user and, in response to the target input, turn on a camera located below a display screen of the electronic device and display a first identifier, where the display position of the first identifier corresponds to a first position, and the first position is the position of the camera below the display screen. Therefore, when the user triggers the electronic device to capture an image through the camera, the electronic device can display the first identifier used to indicate the position of the camera relative to the display screen of the electronic device, so that the user can quickly learn the specific position of the camera and quickly align his or her face with the camera. The electronic device can thus capture the image quickly, which shortens the time for the electronic device to capture the image and improves human-computer interaction performance.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process; specifically, downlink data received from a base station is delivered to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, and the image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 11 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes the processor 110 shown in fig. 11, the memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements the processes of the foregoing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the foregoing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (16)

1. A display control method is applied to electronic equipment, and is characterized by comprising the following steps:
receiving a target input of a user;
and responding to the target input, starting a camera positioned below a display screen of the electronic equipment, and displaying a first identifier, wherein the display position of the first identifier corresponds to a first position, and the first position is the position of the camera when the camera is positioned below the display screen.
2. The method of claim 1, wherein displaying the first indicator comprises:
displaying the first identifier in a target area, the target area including at least one of: a first region, a periphery of the first region;
the first region is an area corresponding to the first position on a display screen of the electronic equipment.
3. The method of claim 1 or 2, wherein the display position of the first marker changes as the position of the camera below the display screen changes.
4. The method of claim 1, wherein after the turning on a camera located below a display screen of the electronic device, the method further comprises:
and displaying a second identifier, wherein the second identifier is used for indicating that the user moves along the target direction relative to the camera.
5. The method of claim 4, wherein prior to displaying the second indicia, the method further comprises:
collecting a target image through the camera;
determining the offset direction of a user relative to the camera according to the target image;
the displaying the second identifier includes:
and displaying the second identification according to the offset direction, wherein the offset direction is opposite to the target direction.
6. The method of claim 4, wherein the second marker is a dynamic marker that moves cyclically in the target direction.
7. The method of claim 1, wherein displaying the first indicator comprises:
under the condition that the electronic equipment is in a screen-off state, displaying the first identifier with a first brightness;
or,
under the condition that the electronic equipment is in a bright screen state, displaying the first identifier with a second brightness;
wherein the first brightness is less than or equal to the second brightness.
8. An electronic device, characterized in that the electronic device comprises: the device comprises a receiving module, a display module and a processing module;
the receiving module is used for receiving target input of a user;
the processing module is used for responding to the target input received by the receiving module and starting a camera positioned below a display screen of the electronic equipment;
the display module is used for responding to the target input received by the receiving module and displaying a first identifier;
the display position of the first mark corresponds to a first position, and the first position is the position where the camera is located below the display screen.
9. The electronic device of claim 8,
the display module is specifically configured to display the first identifier in a target area, where the target area includes at least one of the following: a first region, a periphery of the first region;
the first region is an area corresponding to the first position on a display screen of the electronic device.
10. The electronic device according to claim 8 or 9, wherein a display position of the first mark changes as a position of the camera below the display screen changes.
11. The electronic device of claim 10,
the display module is further configured to display a second identifier after the processing module starts the camera located below the display screen of the electronic device, where the second identifier is used to indicate that the user moves in a target direction relative to the camera.
12. The electronic device of claim 11, wherein the processing module is further configured to capture a target image through the camera before the display module displays the second identifier, and determine an offset direction of a user relative to the camera according to the target image;
the display module is specifically configured to display the second identifier according to the offset direction, where the offset direction is opposite to the target direction.
13. The electronic device of claim 11, wherein the second indicator is a dynamic indicator that moves cyclically along the target direction.
14. The electronic device of claim 8,
the display module is specifically configured to display the first identifier with first brightness when the electronic device is in a screen-off state; or, under the condition that the electronic equipment is in a bright screen state, displaying the first identifier with second brightness;
wherein the first brightness is less than or equal to the second brightness.
15. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the display control method according to any one of claims 1 to 7.
16. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the display control method according to any one of claims 1 to 7.
CN201910901746.2A 2019-09-23 2019-09-23 Display control method and electronic equipment Pending CN110730298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910901746.2A CN110730298A (en) 2019-09-23 2019-09-23 Display control method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910901746.2A CN110730298A (en) 2019-09-23 2019-09-23 Display control method and electronic equipment

Publications (1)

Publication Number Publication Date
CN110730298A true CN110730298A (en) 2020-01-24

Family

ID=69218235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910901746.2A Pending CN110730298A (en) 2019-09-23 2019-09-23 Display control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110730298A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112148185A (en) * 2020-09-17 2020-12-29 维沃移动通信(杭州)有限公司 Image display method and device
CN112445342A (en) * 2020-11-26 2021-03-05 维沃移动通信有限公司 Display screen control method and device and electronic equipment
CN112954215A (en) * 2021-02-10 2021-06-11 维沃移动通信有限公司 Control method and device, electronic equipment and storage medium
CN113067985A (en) * 2021-03-31 2021-07-02 Oppo广东移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN113965690A (en) * 2020-07-20 2022-01-21 珠海格力电器股份有限公司 Method, device and equipment for opening off-screen camera and storage medium
WO2022081283A1 (en) * 2020-10-12 2022-04-21 Qualcomm Incorporated Under-display camera and sensor control
JP7345022B1 (en) 2022-07-12 2023-09-14 レノボ・シンガポール・プライベート・リミテッド Information processing device and control method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878564A (en) * 2017-03-07 2017-06-20 广东欧珀移动通信有限公司 A kind of control method and device of display screen, display screen and mobile terminal
CN108174108A (en) * 2018-03-08 2018-06-15 广州三星通信技术研究有限公司 The method and apparatus and mobile terminal for effect of taking pictures are adjusted in the terminal
CN108366186A (en) * 2018-02-09 2018-08-03 广东欧珀移动通信有限公司 Electronic device, display screen and camera control method
CN208257874U (en) * 2018-06-07 2018-12-18 信利光电股份有限公司 A kind of comprehensive screen mobile phone
CN109302569A (en) * 2018-09-27 2019-02-01 维沃移动通信有限公司 A kind of image imaging method and device of mobile terminal
CN109348123A (en) * 2018-10-25 2019-02-15 努比亚技术有限公司 Photographic method, mobile terminal and computer readable storage medium
WO2019062179A1 (en) * 2017-09-30 2019-04-04 云谷(固安)科技有限公司 Terminal and display screen
CN109714532A (en) * 2018-12-29 2019-05-03 联想(北京)有限公司 Image-pickup method, treating method and apparatus
CN109740519A (en) * 2018-12-29 2019-05-10 联想(北京)有限公司 Control method and electronic equipment
CN110049154A (en) * 2019-03-27 2019-07-23 武汉华星光电半导体显示技术有限公司 A kind of comprehensive screen display device
CN110134459A (en) * 2019-05-15 2019-08-16 Oppo广东移动通信有限公司 Using starting method and Related product

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878564A (en) * 2017-03-07 2017-06-20 广东欧珀移动通信有限公司 A kind of control method and device of display screen, display screen and mobile terminal
WO2019062179A1 (en) * 2017-09-30 2019-04-04 云谷(固安)科技有限公司 Terminal and display screen
CN108366186A (en) * 2018-02-09 2018-08-03 广东欧珀移动通信有限公司 Electronic device, display screen and camera control method
CN108174108A (en) * 2018-03-08 2018-06-15 广州三星通信技术研究有限公司 The method and apparatus and mobile terminal for effect of taking pictures are adjusted in the terminal
CN208257874U (en) * 2018-06-07 2018-12-18 信利光电股份有限公司 A kind of comprehensive screen mobile phone
CN109302569A (en) * 2018-09-27 2019-02-01 维沃移动通信有限公司 A kind of image imaging method and device of mobile terminal
CN109348123A (en) * 2018-10-25 2019-02-15 努比亚技术有限公司 Photographic method, mobile terminal and computer readable storage medium
CN109714532A (en) * 2018-12-29 2019-05-03 联想(北京)有限公司 Image-pickup method, treating method and apparatus
CN109740519A (en) * 2018-12-29 2019-05-10 联想(北京)有限公司 Control method and electronic equipment
CN110049154A (en) * 2019-03-27 2019-07-23 武汉华星光电半导体显示技术有限公司 A kind of comprehensive screen display device
CN110134459A (en) * 2019-05-15 2019-08-16 Oppo广东移动通信有限公司 Using starting method and Related product

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965690A (en) * 2020-07-20 2022-01-21 珠海格力电器股份有限公司 Method, device and equipment for opening off-screen camera and storage medium
CN112148185A (en) * 2020-09-17 2020-12-29 维沃移动通信(杭州)有限公司 Image display method and device
WO2022081283A1 (en) * 2020-10-12 2022-04-21 Qualcomm Incorporated Under-display camera and sensor control
US11706520B2 (en) 2020-10-12 2023-07-18 Qualcomm Incorporated Under-display camera and sensor control
CN112445342A (en) * 2020-11-26 2021-03-05 维沃移动通信有限公司 Display screen control method and device and electronic equipment
CN112445342B (en) * 2020-11-26 2023-07-21 维沃移动通信有限公司 Display screen control method and device and electronic equipment
CN112954215A (en) * 2021-02-10 2021-06-11 维沃移动通信有限公司 Control method and device, electronic equipment and storage medium
CN113067985A (en) * 2021-03-31 2021-07-02 Oppo广东移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
JP7345022B1 (en) 2022-07-12 2023-09-14 レノボ・シンガポール・プライベート・リミテッド Information processing device and control method
US11948490B2 (en) 2022-07-12 2024-04-02 Lenovo (Singapore) Pte. Ltd. Information processing apparatus and control method

Similar Documents

Publication Publication Date Title
CN110913132B (en) Object tracking method and electronic equipment
CN108495029B (en) Photographing method and mobile terminal
CN110730298A (en) Display control method and electronic equipment
CN109743498B (en) Shooting parameter adjusting method and terminal equipment
CN110769155B (en) Camera control method and electronic equipment
CN111142991A (en) Application function page display method and electronic equipment
CN109190356B (en) Screen unlocking method and terminal
CN111163260B (en) Camera starting method and electronic equipment
CN108848256B (en) Key control method of double-screen terminal and double-screen terminal
CN109495616B (en) Photographing method and terminal equipment
CN111339515A (en) Application program starting method and electronic equipment
CN110703972B (en) File control method and electronic equipment
CN109246351B (en) Composition method and terminal equipment
CN111352547A (en) Display method and electronic equipment
CN110753155A (en) Proximity detection method and terminal equipment
CN110944139A (en) Display control method and electronic equipment
CN110908750B (en) Screen capturing method and electronic equipment
CN110944113B (en) Object display method and electronic equipment
CN110058686B (en) Control method and terminal equipment
CN109859718B (en) Screen brightness adjusting method and terminal equipment
CN109104573B (en) Method for determining focusing point and terminal equipment
CN109189514B (en) Terminal device control method and terminal device
CN111246105B (en) Photographing method, electronic device, and computer-readable storage medium
CN110913133B (en) Shooting method and electronic equipment
CN109359460B (en) Face recognition method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200124)