WO2021253992A1 - Display screen control method, device, computer equipment and storage medium - Google Patents

Display screen control method, device, computer equipment and storage medium

Info

Publication number
WO2021253992A1
Authority
WO
WIPO (PCT)
Prior art keywords
display screen
sound signal
terminal
control instruction
response
Prior art date
Application number
PCT/CN2021/090003
Other languages
English (en)
French (fr)
Inventor
冯东杰
Original Assignee
Oppo广东移动通信有限公司
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2021253992A1

Classifications

    • H04M1/0268 Details of the structure or mounting of specific components for a display module assembly including a flexible display panel
    • H04M1/02 Constructional features of telephone sets
    • H04M1/72433 User interfaces specially adapted for cordless or mobile telephones, with means for local support of applications that increase the functionality, with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones, with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones, with means for adapting the functionality of the device according to context-related or environment-related conditions
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G08C23/02 Non-electrical signal transmission systems, e.g. optical systems, using infrasonic, sonic or ultrasonic waves
    • G09F9/30 Indicating arrangements for variable information in which the desired character or characters are formed by combining individual elements
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue

Description

  • This application relates to the technical field of terminal control, and in particular to a display screen control method, device, computer equipment, and storage medium.
  • Users can control the expansion and contraction of the flexible display screen of a terminal by manual operation or by pressing keys (including virtual keys and physical keys). For example, for a terminal in which part of the display screen is exposed outside the terminal and part of the display screen is curled and retracted inside the terminal, the user can manually drag out the retracted part so that the display screen of the terminal becomes larger and, combined with the part of the display screen outside the terminal, displays the application interface.
  • the embodiments of the present application provide a display screen control method, device, computer equipment, and storage medium, which can improve the flexibility of a terminal when displaying a retractable display screen.
  • the technical solution is as follows:
  • an embodiment of the present application provides a method for controlling a display screen, the method being executed by a terminal that includes a retractable display screen, and the method including: receiving an input first sound signal; in response to the first sound signal meeting a specified condition, acquiring a display screen control instruction, where the display screen control instruction is used to instruct to control the expansion or contraction of the display screen; and controlling the expansion or contraction of the display screen according to the display screen control instruction.
  • an embodiment of the present application provides a display screen control device, the device is used in a terminal, the terminal includes a retractable display screen, and the device includes:
  • the sound signal receiving module is used to receive the input first sound signal
  • a control instruction acquiring module configured to acquire a display screen control instruction in response to the first sound signal meeting a specified condition, where the display screen control instruction is used to instruct to control the expansion or contraction of the display screen;
  • the display screen control module is used to control the expansion or contraction of the display screen according to the display screen control instruction.
  • an embodiment of the present application provides a computer device.
  • the computer device includes a processor and a memory.
  • the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the display screen control method described above.
  • an embodiment of the present application provides a computer-readable storage medium that stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the above-mentioned display screen control method.
  • an embodiment of the present application provides a computer program product.
  • the computer program product includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the display screen control method provided in the above aspect.
  • the terminal performs a condition judgment on the input first sound signal.
  • when the condition is met, the display screen control instruction is obtained, so that the expansion and contraction of the terminal's display screen are controlled automatically; the user can thus complete the expansion and contraction control of the terminal's display screen simply by inputting a sound signal, which improves the flexibility of the terminal when displaying with a retractable display screen.
  • FIGS. 1 to 5 are schematic diagrams of a terminal structure involved in an exemplary embodiment of the present application.
  • Fig. 6 is a method flowchart of a display screen control method provided by an exemplary embodiment of the present application.
  • FIG. 7 is a method flowchart of a display screen control method provided by an exemplary embodiment of the present application.
  • FIG. 8 is a method flowchart of a display screen control method provided by an exemplary embodiment of the present application.
  • FIG. 9 is a method flowchart of a display screen control method provided by an exemplary embodiment of the present application.
  • FIG. 10 is a schematic diagram of a user inputting voice to a mobile phone according to an exemplary embodiment of the present application.
  • Fig. 11 is a structural block diagram of a display screen control device provided by an exemplary embodiment of the present application.
  • Fig. 12 is a schematic structural diagram of a computer device provided by an exemplary embodiment of the present application.
  • the solution provided in this application can be applied to real-life scenarios in which the flexible display screen of a terminal is controlled to expand or contract while people use a terminal with a flexible display screen in their daily lives.
  • first, the terms involved and the structure of the terminal are briefly introduced.
  • Voice wake-up: the user wakes up the terminal by speaking a wake-up word, so that the terminal turns on the voice dialogue function and enters a state of waiting for voice instructions, or directly executes a predetermined voice instruction.
  • the terminal 100 in the embodiment of the present application includes a housing assembly 10, a flexible display 30, a driving member 50, and a driving mechanism 70.
  • the housing assembly 10 is a hollow structure; components such as the driving member 50, the driving mechanism 70, and the camera 60 can all be arranged in the housing assembly 10.
  • the terminal 100 in the embodiment of the present application includes, but is not limited to, mobile terminals such as mobile phones and tablets, or other portable electronic devices.
  • the terminal 100 is a mobile phone as an example for description.
  • the housing assembly 10 includes a first housing 12 and a second housing 14, and the first housing 12 and the second housing 14 can move relatively.
  • the first housing 12 and the second housing 14 are slidably connected, that is, the second housing 14 can slide relative to the first housing 12.
  • the first housing 12 and the second housing 14 jointly form an accommodating space 16.
  • the accommodating space 16 can be used to place components such as the driving member 50, the camera 60, and the driving mechanism 70.
  • the housing assembly 10 may further include a back cover 18, and the back cover 18 and the first housing 12 and the second housing 14 together form an accommodating space 16.
  • the driving member 50 is disposed in the second housing 14; one end of the flexible display screen 30 is disposed in the first housing 12, the flexible display screen 30 bypasses the driving member 50, and the other end of the flexible display screen is disposed in the accommodating space 16. Part of the flexible display screen 30 is hidden in the accommodating space 16, and the part of the flexible display screen 30 hidden in the accommodating space 16 may not be lit.
  • when the first housing 12 and the second housing 14 move away from each other, the flexible display screen 30 can be driven by the driving member 50 to expand, so that more of the flexible display screen 30 is exposed outside the accommodating space 16.
  • the flexible display screen 30 exposed outside the accommodating space 16 is lighted up, so that the display area presented by the terminal 100 becomes larger.
  • the driving member 50 is a rotating shaft structure with teeth 52 on the outside, and the flexible display screen 30 is linked with the driving member 50 through meshing or the like.
  • the driving member 50 drives a part of the flexible display screen 30 engaged on the driving member 50 to move and unfold.
  • the driving member 50 can also be a round shaft without teeth 52.
  • the driving member 50 unwinds the part of the flexible display screen 30 wound on the driving member 50, so that more of the flexible display screen is exposed outside the accommodating space 16 and lies in a flat state.
  • the driving member 50 is rotatably disposed on the second housing 14, and when the flexible display screen 30 is gradually expanded, the driving member 50 can rotate with the movement of the flexible display screen 30.
  • the driving member 50 may also be fixed on the second housing 14, and the driving member 50 has a smooth surface. When the flexible display screen 30 is expanded, the driving member 50 can slidably contact with the flexible display screen 30 through its smooth surface.
  • the terminal 100 further includes a resetting member (not shown). One end of the flexible display screen housed in the accommodating space 16 is linked with the resetting member. When the first housing 12 and the second housing 14 move closer to each other, the resetting member drives the flexible display screen 30 to reset, so that part of the flexible display screen is retracted into the accommodating space 16.
  • the driving mechanism 70 may be disposed in the accommodating space 16 and linked with the second housing 14, and the driving mechanism 70 is used to drive the second housing 14 to move away from the first housing 12, thereby driving the flexible display screen 30 to stretch. It can be understood that the driving mechanism 70 may also be omitted, and the user may directly move the first housing and the second housing relative to each other manually.
  • the terminal 100 may be a terminal with a voice wake-up function.
  • the terminal may be a mobile phone, a tablet computer, an e-book reader, smart glasses, a smart watch, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, a laptop portable computer, and the like.
  • the user can enable the voice wake-up function in the terminal 100 in advance, and input a corresponding wake-up word to the terminal, so that the terminal executes the voice command corresponding to the wake-up word.
  • the terminal can collect the wake-up voice and wake itself up accordingly.
  • FIG. 6 shows a method flowchart of a display screen control method provided by an exemplary embodiment of the present application. The method can be applied to the terminal with a retractable display screen shown in FIGS. 1 to 5. As shown in FIG. 6, the display screen control method may include the following steps:
  • Step 601 Receive the input first sound signal.
  • the user may input the first sound signal through the microphone of the terminal, and the terminal collects the first sound signal input by the user through its own microphone.
  • the terminal may also collect the environmental sound signal of its surrounding environment through a microphone, and the environmental sound signal may also be the first sound signal.
  • Step 602 In response to the first sound signal meeting the specified condition, obtain a display screen control instruction, where the display screen control instruction is used to instruct to control the expansion or contraction of the display screen.
  • the terminal may determine whether the first sound signal satisfies a specified condition according to the received first sound signal, and obtain a display screen control instruction when the first sound signal satisfies the specified condition.
  • the specified conditions can be preset in the terminal.
  • the specified condition may refer to a condition to be satisfied by a first attribute of the first sound signal, where the first attribute is any one or more of the sound amplitude attribute, the sound timbre attribute, and the text content attribute of the first sound signal.
  • Step 603 Control the expansion or contraction of the display screen according to the display screen control instruction.
  • the terminal controls its own retractable display to expand or contract.
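  • As a minimal, hedged sketch of steps 601 to 603 (the types, keyword strings with example threshold values, and function names below are illustrative assumptions, not part of this application), the flow can be expressed as follows:

```kotlin
// Sketch of the three-step flow: receive the first sound signal, check the specified
// condition, obtain a display screen control instruction, and control the display screen.
data class SoundSignal(val recognizedText: String?, val volumeDb: Double)

enum class ScreenAction { EXPAND, SHRINK }

// Specified condition: a text-content condition or a volume-amplitude condition (assumed values).
fun specifiedConditionMet(signal: SoundSignal): Boolean {
    val text = signal.recognizedText ?: ""
    return text in setOf("expand the display screen", "shrink the display screen") ||
        signal.volumeDb < 20.0
}

// Step 602: obtain the display screen control instruction when the condition is met.
fun controlInstructionFor(signal: SoundSignal): ScreenAction? = when {
    !specifiedConditionMet(signal) -> null                               // keep the display unchanged
    signal.recognizedText == "shrink the display screen" -> ScreenAction.SHRINK
    else -> ScreenAction.EXPAND
}

// Step 603: placeholder for the driver call that actually moves the display screen.
fun controlDisplay(action: ScreenAction) = println("display action: $action")

fun onFirstSoundSignal(signal: SoundSignal) {
    controlInstructionFor(signal)?.let { controlDisplay(it) }
}
```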
  • the terminal receives an input first sound signal; in response to the first sound signal meeting a specified condition, it obtains a display screen control instruction, where the display screen control instruction is used to instruct to control the expansion or contraction of the display screen; and it controls the expansion or contraction of the display screen according to the display screen control instruction.
  • the terminal performs a condition judgment on the input first sound signal.
  • when the condition is met, the display screen control instruction is obtained, so that the expansion and contraction of the terminal's display screen are controlled automatically; the user can thus complete the expansion and contraction control of the terminal's display screen simply by inputting a sound signal, which improves the flexibility of the terminal when displaying with a retractable display screen.
  • in the following, the above-mentioned specified condition refers to a condition to be satisfied by the first attribute of the first sound signal, where the first attribute is at least one of the sound amplitude attribute and the text content attribute of the first sound signal, and the solution shown in FIG. 6 is described using this as an example. That is, the terminal uses at least one of a condition to be satisfied by the voice recognition result of the first sound signal and a condition to be satisfied by the volume amplitude of the first sound signal to determine whether to obtain the display screen control instruction and control the terminal's retractable display screen to stretch or contract.
  • FIG. 7 shows a method flowchart of a method for controlling a display screen provided by an exemplary embodiment of the present application. This method can be executed by the terminal shown in Fig. 1 to Fig. 5. As shown in Fig. 7, the display screen control method can include the following steps:
  • Step 701 In response to the display screen being in the off state, it is determined that the specified condition includes the condition to be satisfied by the voice recognition result of the first sound signal.
  • when the terminal executes the embodiment shown in this application, it can first determine whether its own retractable display screen is in the off (screen-off) state.
  • when the terminal determines that its display screen is in the off state, it can determine that the specified condition is a condition to be satisfied by the voice recognition result of the first sound signal, that is, a condition to be satisfied by the text content included in the voice recognition result obtained after performing voice recognition on the first sound signal.
  • Step 702 Receive the input second sound signal.
  • the user can enable the voice wake-up function in the terminal in advance.
  • the terminal can wake itself up after receiving the wake-up voice, in order to perform the other steps of this application.
  • the user can set the on and off of the voice wake-up function in the setting interface of the terminal. In this step, the user turns on the voice wake-up function of the terminal.
  • when the voice wake-up function is enabled on the terminal, the user can input a sound signal to the terminal by speaking into the microphone of the terminal, and accordingly, the microphone of the terminal collects the sound signal input by the user.
  • the voice input by the user before the terminal is awakened can be collectively referred to as the second sound signal.
  • Step 703 In response to the wake-up word contained in the second sound signal, control the terminal to enter a voice wake-up state.
  • a target detection algorithm may be preset in the terminal, and the target detection algorithm may extract and detect the voiceprint feature of the voice input by the user.
  • after the terminal obtains the voiceprint feature of the second sound signal input by the user, it compares the voiceprint feature of the second sound signal with the voiceprint feature of the preset wake-up word. If the two voiceprint features are the same, the terminal regards the voiceprint feature of the second sound signal as matching the voiceprint feature of the preset wake-up word, determines that the second sound signal contains the wake-up word, and executes step 704. Otherwise, the terminal continues to receive the second sound signal input by the user.
  • the preset wake-up word may be entered in the terminal in advance by the user.
  • the terminal can prompt the user to input a preset wake-up word by voice, and record the voiceprint characteristics corresponding to the preset wake-up word input by the user as the voiceprint of the preset wake-up word feature.
  • taking the preset wake-up word "hello, hello" as an example, the terminal can collect the wake-up voice and process it by itself: the wake-up word contained in the wake-up voice is recognized, and the voiceprint feature of the wake-up voice is further obtained.
  • the terminal recognizes which user said it according to the voiceprint feature of the voice information, thereby deciding whether to activate the corresponding function.
  • there may also be multiple voiceprint features of the preset wake-up word.
  • for example, the terminal stores the voiceprint feature of the preset wake-up word of user A and the voiceprint feature of the preset wake-up word of user B; that is, the voiceprint features of the preset wake-up word stored in the terminal may include both the voiceprint feature of user A and the voiceprint feature of user B.
  • the terminal may compare the voiceprint characteristics of the second sound signal with the voiceprint characteristics of each preset wake-up word stored in the terminal.
  • if the voiceprint feature of the second sound signal is the same as the voiceprint feature of any one of the preset wake-up words, it can also be regarded as the voiceprint feature of the second sound signal matching the voiceprint feature of the preset wake-up word, and it is determined that the second sound signal contains the wake-up word; after determining that the second sound signal contains the wake-up word, the terminal can control itself to enter the voice wake-up state.
  • the terminal matches the acquired voiceprint feature of the second sound signal with the voiceprint feature of the preset wake-up word, and when the voiceprint feature of the second sound signal belongs to the voiceprint features of the preset wake-up word, the terminal can respond to the corresponding voice, activate the voice dialogue function, and activate the voice recognition module in the terminal.
  • the voice recognition module can be used to recognize the voice input by the user, and obtain the text corresponding to the voice, that is, the voice content.
  • the voiceprint feature of the preset wake-up word includes the voiceprint feature corresponding to the voice of "Hello, hello" of user A and the voiceprint feature of the voice of "Hello, hello" of user B.
  • when user A inputs the second sound signal "Hello, hello" to the terminal, the terminal can process the second sound signal through the target detection algorithm to obtain the voiceprint feature corresponding to user A's voice "Hello, hello", match that voiceprint feature against the voiceprint features of the preset wake-up word stored in the terminal, and learn that the voiceprint feature of the second sound signal obtained this time is the voiceprint feature of user A among the voiceprint features of the preset wake-up word, so as to wake up the terminal.
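  • The wake-up matching described above can be illustrated with a brief sketch; the voiceprint representation (a plain feature vector), the cosine-similarity measure, and the threshold value are assumptions made for illustration and are not specified by this application:

```kotlin
import kotlin.math.sqrt

// A voiceprint feature is represented here as a plain feature vector (an assumption).
typealias Voiceprint = DoubleArray

fun cosineSimilarity(a: Voiceprint, b: Voiceprint): Double {
    require(a.size == b.size) { "voiceprints must have the same dimension" }
    var dot = 0.0; var normA = 0.0; var normB = 0.0
    for (i in a.indices) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return if (normA == 0.0 || normB == 0.0) 0.0 else dot / (sqrt(normA) * sqrt(normB))
}

/**
 * Compares the voiceprint feature of the second sound signal with the stored voiceprint
 * features of the preset wake-up word (e.g. one per user, such as user A and user B).
 * Returns the matching entry's identifier, or null when no wake-up word is detected.
 */
fun matchWakeWord(
    captured: Voiceprint,
    storedWakeWordPrints: Map<String, Voiceprint>,  // e.g. "userA" -> voiceprint of "hello, hello"
    threshold: Double = 0.85                        // assumed similarity threshold
): String? = storedWakeWordPrints.entries
    .firstOrNull { (_, stored) -> cosineSimilarity(captured, stored) >= threshold }
    ?.key
```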
  • Step 704 Receive a first sound signal input when the terminal is in a voice wake-up state.
  • the terminal in the voice awakening state may continue to receive the first sound signal input by the user.
  • the sound signal input by the user received after the terminal is awakened is collectively referred to as the first sound signal. That is, the user may input the first sound signal through the microphone of the terminal, and the terminal collects the sound signal input by the user through its own microphone.
  • the input sound signal can be "open the display screen", “shrink the display screen” and so on.
  • Step 705 In response to the specified conditions including the conditions to be met by the voice recognition result of the first sound signal, perform voice recognition on the first sound signal to obtain a voice recognition result.
  • since the terminal has been awakened, the terminal activates the voice recognition function and can perform voice recognition on the first sound signal input by the user to obtain the first voice content corresponding to the first sound signal. For example, if the voice input by the user is "please open the first application", the terminal recognizes through the voice recognition function that the first voice content of the first sound signal input this time is "please open the first application"; if the voice input by the user is "expand the display screen", the terminal recognizes that the first voice content of the first sound signal input this time is "expand the display screen".
  • Step 706 In response to the voice recognition result being matched with the first keyword, it is determined that the first sound signal satisfies a specified condition.
  • the first keyword is any keyword in the target keyword set
  • each keyword in the target keyword set corresponds to a control instruction
  • the control instruction is an instruction that can control the expansion and contraction of the display screen of the terminal.
  • the target keyword set may be pre-entered in the terminal by the developer.
  • for example, the terminal stores a target keyword set containing keywords such as "expand the display screen", "shrink the display screen", "open to half", and "close to half".
  • the "expand display” keyword corresponds to the display of the control terminal
  • the control command for the screen expansion corresponds to the control command for the display screen of the control terminal to shrink
  • the keyword “open to half” corresponds to the screen expansion of the control terminal to half of the full length of the display screen
  • the keyword “close to half” corresponds to a control instruction that the display screen of the control terminal shrinks to half of the full length of the display screen.
  • when the voice recognition result matches the first keyword, the terminal may determine that the first sound signal satisfies the specified condition. For example, after the user inputs the first sound signal, the terminal recognizes the first sound signal, and the first voice content in the obtained voice recognition result is "expand the display screen"; it is determined that the first voice content is the same as the keyword "expand the display screen" in the target keyword set, so the terminal learns that the voice recognition result matches the first keyword and thereby determines that the first sound signal meets the specified condition.
  • Step 707 In response to the first sound signal meeting the specified condition, obtain a display screen control instruction corresponding to the first keyword.
  • the terminal may obtain a display screen control instruction corresponding to the first keyword. For example, if the first voice content acquired in step 706 is "expand the display screen", which is the same as a keyword in the target keyword set, the terminal can consider that the acquired first voice content matches the first keyword. In this step, the terminal can obtain the control instruction, corresponding to the first keyword, for controlling the expansion and contraction of the display screen of the terminal. If the first voice content is "expand the display screen", the terminal can obtain a control instruction for controlling the expansion of the display screen of the terminal; if the first voice content is "shrink the display screen", the terminal may obtain a control instruction for controlling the shrinkage of the display screen of the terminal.
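  • The correspondence between the target keyword set and display screen control instructions described in steps 706 and 707 can be sketched as a simple lookup; the enum and function names are assumptions, while the keyword strings follow the examples in the text:

```kotlin
// Each keyword in the target keyword set corresponds to one display screen control instruction.
enum class DisplayInstruction { EXPAND_FULL, SHRINK_FULL, EXPAND_TO_HALF, SHRINK_TO_HALF }

val targetKeywords: Map<String, DisplayInstruction> = mapOf(
    "expand the display screen" to DisplayInstruction.EXPAND_FULL,
    "shrink the display screen" to DisplayInstruction.SHRINK_FULL,
    "open to half" to DisplayInstruction.EXPAND_TO_HALF,
    "close to half" to DisplayInstruction.SHRINK_TO_HALF,
)

/**
 * Steps 706-707: if the voice recognition result matches a keyword in the target keyword
 * set, the first sound signal satisfies the specified condition and the corresponding
 * display screen control instruction is obtained; null means no matching keyword (do nothing).
 */
fun instructionForRecognizedText(recognizedText: String): DisplayInstruction? =
    targetKeywords[recognizedText.trim().lowercase()]
```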
  • Step 708 In response to the terminal being in a designated state, determine that the specified condition includes a condition to be satisfied by the volume amplitude of the first sound signal.
  • the designated state includes at least one of the following states: the display screen is in the on state; and the terminal is in the target scene.
  • the target scene is any one of a voice call scene, a recording scene, a video call scene, a voice message sending scene, and a video playback scene.
  • when the terminal judges whether its own retractable display screen is in the off state and finds that the display screen is in the on state, the terminal is in the designated state at this time.
  • the terminal can also determine whether its current use scene is the target scene. If the display screen of the terminal is in the off state and the current use scene of the terminal itself is the target scene, the terminal is in the designated state at this time.
  • when the retractable display screen is on, it means that the retractable display screen is not completely off; when part of the retractable display screen is in the on state, the retractable display screen can also be regarded as being in the lit state.
  • the terminal may determine the usage scenario in which it is located in the following manner. For example, the terminal obtains the current usage scenario of the terminal according to the program name of the running application.
  • the terminal may also obtain the current usage scenario of the terminal according to the program name of the application program it is running.
  • the terminal may store a correspondence table between application program names and usage scenarios. Please refer to Table 1, which shows a table of correspondences between application names and usage scenarios provided by an exemplary embodiment of the present application.
  • the usage scenarios of the terminal can correspond to the running applications one-to-one or one-to-many. For example, if the terminal obtains that the program name of the running application is application one, then, through the above-mentioned Table 1, the terminal can obtain that the current use scenario is use scenario one.
  • for example, if the third application in Table 1 is a phone application in the terminal and the phone application is running, the current use scenario acquired by the terminal is a voice call scenario.
  • if the third application in Table 1 is a recording application in the terminal and the recording application is running, the current use scene acquired by the terminal is the recording scene.
  • if the third application in Table 1 is a video playback application in the terminal and the video playback application is running, the current use scene acquired by the terminal is a video playback scene.
  • the terminal After acquiring its current use scene, the terminal can continue to determine whether its use scene is the target scene. If it is the target scene, it is determined in this step that the specified condition includes the condition to be satisfied by the volume amplitude of the first sound signal.
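  • The determination of the designated state in step 708 (display screen lit, or the current usage scenario looked up from the running application being a target scene, as in Table 1) can be sketched as follows; the package names and scenario names are illustrative assumptions:

```kotlin
enum class UsageScenario { VOICE_CALL, RECORDING, VIDEO_CALL, VOICE_MESSAGE, VIDEO_PLAYBACK, OTHER }

// Sketch of Table 1: correspondence between running application names and usage scenarios.
val appToScenario: Map<String, UsageScenario> = mapOf(
    "com.example.phone" to UsageScenario.VOICE_CALL,        // hypothetical package names
    "com.example.recorder" to UsageScenario.RECORDING,
    "com.example.videoplayer" to UsageScenario.VIDEO_PLAYBACK,
)

val targetScenes = setOf(
    UsageScenario.VOICE_CALL, UsageScenario.RECORDING, UsageScenario.VIDEO_CALL,
    UsageScenario.VOICE_MESSAGE, UsageScenario.VIDEO_PLAYBACK,
)

/** Step 708: the terminal is in the designated state when the display screen is lit,
 *  or when the scenario looked up from the running application is a target scene. */
fun isDesignatedState(displayIsOn: Boolean, runningApp: String): Boolean {
    val scenario = appToScenario[runningApp] ?: UsageScenario.OTHER
    return displayIsOn || scenario in targetScenes
}
```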
  • Step 709 Acquire the volume amplitude of the first sound signal in response to the specified condition including the condition to be satisfied by the volume amplitude of the first sound signal.
  • the terminal may also obtain the volume amplitude of the first sound signal.
  • the volume amplitude of the first sound signal may have been acquired through the microphone in advance when the first sound signal input by the user was received, and in this step the previously acquired volume amplitude of the first sound signal is used directly.
  • the volume amplitude can also be regarded as the sound intensity.
  • the terminal may also calculate the average value of the volume amplitude of the first sound signal, and use the average value of the volume amplitude of the first sound signal as the volume amplitude of the first sound signal.
  • the time used in the calculation of the average value can be set in the terminal in advance by the user. For example, if the user sets the time used to calculate the average value of the volume amplitude of the first sound signal to the duration of the first sound signal, then the terminal can divide the volume amplitude of the first sound signal (that is, the sum of the volume amplitudes at each sampling point of the first sound signal) by the duration of the first sound signal to obtain the average value of the volume amplitude of the first sound signal.
  • alternatively, the terminal can divide the acquired first sound signal into segments of 2 seconds, obtain the volume amplitude of each segment (that is, the sum of the volume amplitudes at the sampling points in that segment), and divide each of these volume amplitudes by 2 seconds to obtain the respective average values of the volume amplitudes of the multiple segments of the first sound signal.
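  • The two averaging approaches above (averaging over the whole duration, or over fixed 2-second segments) can be sketched as follows, assuming the signal is available as per-sample volume amplitudes in decibels; the function names and the sample-rate parameter are assumptions:

```kotlin
// Average the volume amplitude of the first sound signal over its whole duration.
fun averageAmplitudeDb(samplesDb: DoubleArray): Double =
    if (samplesDb.isEmpty()) 0.0 else samplesDb.average()

// Split the signal into fixed-length segments (2 seconds by default) and average each
// segment separately; sampleRate and windowSeconds are illustrative parameters.
fun segmentAverages(samplesDb: DoubleArray, sampleRate: Int, windowSeconds: Int = 2): List<Double> =
    samplesDb.toList()
        .chunked(sampleRate * windowSeconds)
        .map { segment -> segment.average() }
```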
  • Step 710 In response to the magnitude relationship between the volume amplitude and the amplitude threshold satisfying the specified magnitude relationship, it is determined that the first sound signal satisfies the specified condition.
  • the terminal may also obtain the magnitude relationship between the volume amplitude of the first sound signal and the amplitude threshold.
  • when the magnitude relationship between the volume amplitude of the first sound signal and the amplitude threshold satisfies the specified magnitude relationship, the terminal determines that the first sound signal meets the specified condition.
  • the amplitude threshold includes a first amplitude threshold and a second amplitude threshold, wherein the specified magnitude relationship is that the volume amplitude is less than the first amplitude threshold. That is, in response to the volume amplitude being less than the first amplitude threshold, the terminal determines that the first sound signal satisfies the specified condition.
  • the terminal may directly compare the acquired volume amplitude of the first sound signal with the first amplitude threshold to obtain the magnitude relationship between the two; corresponding to the foregoing possible implementation manner, the specified magnitude relationship is that the volume amplitude is less than the first amplitude threshold.
  • the terminal may calculate the average value of the volume amplitude of the first sound signal; detect whether the average value of the volume amplitude of the first sound signal is less than the first amplitude threshold. When the average value of the volume amplitude of the first sound signal is less than the first amplitude threshold, it is determined that the magnitude relationship between the two satisfies the specified magnitude relationship.
  • both the specified magnitude relationship and the first amplitude threshold may be preset in the terminal by the developer or operation and maintenance personnel.
  • for example, the first amplitude threshold is 20 decibels. If the average value of the volume amplitude of the first sound signal acquired by the terminal is 15 decibels, the magnitude relationship between the volume amplitude and the first amplitude threshold meets the specified magnitude relationship, and accordingly the terminal determines that the first sound signal satisfies the specified condition. If the average value of the volume amplitude of the first sound signal acquired by the terminal is 25 decibels, the relationship between the volume amplitude and the first amplitude threshold does not satisfy the specified relationship, and accordingly the terminal determines that the first sound signal does not meet the specified condition.
  • the specified magnitude relationship is that the volume amplitude is greater than the second amplitude threshold. That is, in response to the volume amplitude being greater than the second amplitude threshold, the terminal determines that the first sound signal satisfies the specified condition.
  • the terminal may directly compare the acquired volume amplitude of the first sound signal with the second amplitude threshold to obtain the magnitude relationship between the two; corresponding to the foregoing possible implementation manner, the specified magnitude relationship is that the volume amplitude is greater than the second amplitude threshold.
  • the terminal may calculate the average value of the volume amplitude of the first sound signal; detect whether the average value of the volume amplitude of the first sound signal is greater than the second amplitude threshold. When the average value of the volume amplitude of the first sound signal is greater than the second amplitude threshold, it is determined that the magnitude relationship between the two satisfies the specified magnitude relationship.
  • the second amplitude threshold may also be preset in the terminal by developers or operation and maintenance personnel.
  • for example, the second amplitude threshold is 20 decibels. If the average value of the volume amplitude of the first sound signal acquired by the terminal is 25 decibels, the magnitude relationship between the volume amplitude and the second amplitude threshold meets the specified magnitude relationship, and accordingly the terminal determines that the first sound signal satisfies the specified condition. If the average value of the volume amplitude of the first sound signal acquired by the terminal is 15 decibels, the relationship between the volume amplitude and the second amplitude threshold does not satisfy the specified relationship, and accordingly the terminal determines that the first sound signal does not meet the specified condition.
  • the magnitude of the first amplitude threshold and the second amplitude threshold may be the same or different.
  • Step 711 In response to the volume amplitude being less than the first amplitude threshold, a first control instruction is obtained, where the first control instruction is used to instruct to control the stretching of the display screen.
  • the terminal determines that the volume amplitude is less than the first amplitude threshold, it can obtain the first control instruction to control the display screen to stretch.
  • this step can be replaced with: in response to the volume amplitude being greater than the second amplitude threshold, a second control instruction is acquired, where the second control instruction is used to instruct control of the shrinkage of the display screen. That is, when the terminal determines that the volume amplitude is greater than the second amplitude threshold, it can obtain the second control instruction to control the display screen to shrink.
  • the terminal may keep the size of the display screen unchanged and not perform subsequent steps. That is, in response to the first sound signal not meeting the specified condition, the terminal may not perform expansion and contraction processing on the display screen, and keep the original expansion and contraction state unchanged.
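  • Steps 710 and 711, the replacement described above, and the keep-unchanged case amount to comparing the average volume amplitude against the two thresholds; a hedged sketch, reusing the 20-decibel example values from the text (the enum and function names are assumptions), might look like this:

```kotlin
enum class AmplitudeAction { STRETCH, SHRINK }

/**
 * Volume amplitude below the first amplitude threshold -> first control instruction (stretch);
 * volume amplitude above the second amplitude threshold -> second control instruction (shrink);
 * otherwise the specified condition is not met and the display size is kept unchanged (null).
 */
fun instructionForAmplitude(
    averageDb: Double,
    firstThresholdDb: Double = 20.0,   // example value from the text
    secondThresholdDb: Double = 20.0,  // may equal or differ from the first threshold
): AmplitudeAction? = when {
    averageDb < firstThresholdDb -> AmplitudeAction.STRETCH
    averageDb > secondThresholdDb -> AmplitudeAction.SHRINK
    else -> null
}
```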
  • Step 712 Control the expansion or contraction of the display screen according to the display screen control instruction.
  • the terminal calls the driver chip interface function for controlling the expansion and contraction of the display screen according to the acquired display screen control instruction, thereby controlling the expansion and contraction of the display screen.
  • for example, if the display screen control instruction corresponding to the first keyword obtained in step 707 is used to shrink the display screen, the terminal can control the shrinkage of the display screen through the display screen control instruction. If the display screen control instruction corresponding to the first keyword obtained in step 707 is used to expand the display screen, the terminal can control the expansion of the display screen through the display screen control instruction. Correspondingly, if the first control instruction is acquired in step 711, the terminal can control the expansion of the display screen through the first control instruction; if step 711 is replaced with the situation described therein and the terminal obtains the second control instruction, the terminal can control the shrinkage of the display screen through the second control instruction.
  • the display screen control instruction is to expand the display screen.
  • before the terminal controls the expansion of the display screen, the terminal can also detect whether the display screen of the terminal is in the maximum expansion state; in response to the display screen of the terminal not being in the maximum expansion state, the terminal controls the display screen to stretch. That is, before the terminal expands the display screen, it can detect whether the display screen has already been expanded to the maximum. If it has been expanded to the maximum, the terminal can keep the size of the display screen unchanged; if the display screen is not expanded to the maximum, the terminal can control the display screen to expand.
  • correspondingly, the terminal can also detect whether the terminal's display screen is in the maximum contraction state before the terminal controls the shrinkage of the display screen; in response to the terminal's display screen not being in the maximum contraction state, the terminal controls the display screen to shrink. That is, before the terminal shrinks the display screen, it can detect whether the display screen has already been shrunk to the maximum. If it has been shrunk to the maximum, the terminal can keep the size of the display screen unchanged; if the display screen is not shrunk to the maximum, the terminal can control the display screen to shrink.
  • the display screen control instruction corresponding to the first keyword obtained in step 707 also has a corresponding target state in the terminal.
  • the terminal obtains the target state corresponding to the display screen control instruction according to the display screen control instruction; the terminal can then detect whether the display screen of the terminal is in the target state and, in response to the display screen of the terminal not being in the target state, control the display screen of the terminal to stretch or contract according to the display screen control instruction, so that the display screen of the terminal stretches or contracts to the target state.
  • the terminal may pre-store the corresponding relationship between the target state and the display screen control instruction. Please refer to Table 2, which shows a corresponding relationship table between a display screen control instruction and a target state involved in an exemplary embodiment of the present application.
  • the target state may indicate the length of the terminal display screen in the expansion or contraction direction.
  • for example, when the display screen of the terminal is fully expanded, its length in the expansion direction is 20 cm; control instruction one corresponds to the instruction to fully expand the display screen of the terminal, so the target state corresponding to control instruction one can be a display screen length of 20 cm. When the display screen of the terminal is expanded to half, its length in the expansion direction is 15 cm; control instruction two corresponds to the instruction to expand the display screen of the terminal to half, so the target state corresponding to control instruction two can be a display screen length of 15 cm.
  • after obtaining the target state, the terminal can judge whether its own display screen is already in the corresponding state. If it is in the corresponding state, the terminal does not need to do any processing; if it is not, the terminal controls the display screen of the terminal to stretch or contract according to the display screen control instruction.
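  • The target-state check described above (Table 2, with the 20 cm and 15 cm example lengths) can be sketched as follows; driveDisplayTo() stands in for the driver chip interface function and, like the instruction identifiers, is an assumption:

```kotlin
// Sketch of Table 2: each control instruction maps to a target length of the display
// screen in the expansion direction (example values from the text).
val targetLengthCm: Map<Int, Double> = mapOf(
    1 to 20.0,  // control instruction one: fully expand the display screen
    2 to 15.0,  // control instruction two: expand the display screen to half
)

// Placeholder for the driver chip interface function that moves the display screen.
fun driveDisplayTo(lengthCm: Double) = println("driving display to $lengthCm cm")

/** Check whether the display screen is already in the target state; only drive it when
 *  it is not, otherwise keep the size of the display screen unchanged. */
fun applyInstruction(instructionId: Int, currentLengthCm: Double) {
    val target = targetLengthCm[instructionId] ?: return
    if (currentLengthCm != target) driveDisplayTo(target)
}
```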
  • the terminal receives an input first sound signal; in response to the first sound signal meeting a specified condition, it obtains a display screen control instruction, where the display screen control instruction is used to instruct to control the expansion or contraction of the display screen; and it controls the expansion or contraction of the display screen according to the display screen control instruction.
  • the terminal performs a condition judgment on the input first sound signal.
  • when the condition is met, the display screen control instruction is obtained, so that the expansion and contraction of the terminal's display screen are controlled automatically; the user can thus complete the expansion and contraction control of the terminal's display screen simply by inputting a sound signal, which improves the flexibility of the terminal when displaying with a retractable display screen.
  • when the terminal is in the target scene, by determining the relationship between the volume amplitude and the first amplitude threshold and the second amplitude threshold, the corresponding display screen control instruction is obtained, so that the terminal can actively change the size of the display screen in the target scene and thereby flexibly improve the effect of the terminal's microphone acquiring sound, or of the terminal playing sound, in the target scene.
  • when the specified condition includes both the condition to be satisfied by the speech recognition result of the first sound signal and the condition to be satisfied by the volume amplitude of the first sound signal, that is, when both conditions are included in the specified condition, the terminal obtains not only the speech recognition result but also the volume amplitude of the first sound signal.
  • FIG. 8 shows a method flowchart of a method for controlling a display screen provided by an exemplary embodiment of the present application. This method can be executed by the terminal shown in Fig. 1 to Fig. 5. As shown in Fig. 8, the display screen control method can include the following steps:
  • Step 801 Start the voice wake-up function of the terminal.
  • Step 802 Receive the input second sound signal.
  • Step 803 In response to the wake-up word contained in the second sound signal, control the terminal to enter a voice wake-up state.
  • Step 804 Receive the first sound signal input when the terminal is in the voice wake-up state.
  • Step 805 Perform voice recognition on the first sound signal to obtain a voice recognition result.
  • For step 802 to step 805, reference may be made to the descriptions of step 702 to step 705 in the embodiment of FIG. 7, which will not be repeated here.
  • Step 806 Acquire the volume amplitude of the first sound signal.
  • For step 806, reference may be made to the description of step 709 in the embodiment of FIG. 7, which will not be repeated here.
  • Step 807 In response to the voice recognition result matching the second keyword, and the magnitude relationship between the volume amplitude and the amplitude threshold meets the specified magnitude relationship, it is determined that the first sound signal meets the specified condition.
  • the second keyword and the above-mentioned first keyword may be the same, and both are keywords in the target keyword set.
  • for the manner in which the terminal determines whether the voice recognition result matches the second keyword and whether the magnitude relationship between the volume amplitude and the amplitude threshold meets the specified magnitude relationship, reference may be made to the descriptions of step 706 and step 710 in the embodiment of FIG. 7 respectively, which will not be repeated here.
  • Step 808 In response to the first sound signal meeting the specified condition, obtain a display screen control instruction.
  • in one possible implementation, in response to the volume amplitude being greater than a third amplitude threshold, a display screen control instruction corresponding to the second keyword is acquired. That is, when the terminal determines that the first sound signal satisfies the specified condition, it can also judge the volume amplitude obtained above again to determine the magnitude relationship between the volume amplitude and the third amplitude threshold; if the volume amplitude is greater than the third amplitude threshold, the terminal obtains the display screen control instruction corresponding to the second keyword. For example, if the third amplitude threshold is 25 decibels and the volume amplitude acquired by the terminal is 30 decibels, the terminal can acquire the display screen control instruction corresponding to the second keyword.
  • otherwise, if the volume amplitude is not greater than the third amplitude threshold, the terminal may not perform expansion or contraction processing on the display screen, that is, it does not obtain a display screen control instruction and keeps the size of the display screen unchanged.
  • in another possible implementation, in response to the voice recognition result matching the second keyword, the amplitude interval in which the volume amplitude is located is acquired, and the display screen control instruction corresponding to the amplitude interval is acquired. That is, when determining that the first sound signal satisfies the specified condition, the terminal may also obtain, for the amplitude interval corresponding to the acquired volume amplitude, the display screen control instruction corresponding to that amplitude interval. For example, the terminal may pre-store the correspondence between amplitude intervals and display screen control instructions. Please refer to Table 3, which shows a correspondence table between amplitude intervals and display screen control instructions involved in an exemplary embodiment of the present application.
  • Table 3:
    Amplitude interval          Display screen control instruction
    Amplitude interval one      Display screen control instruction one
    Amplitude interval two      Display screen control instruction two
    Amplitude interval three    Display screen control instruction three
    ...                         ...
  • different amplitude intervals can correspond to their own display screen control instructions.
  • after the terminal obtains the amplitude interval in which the volume amplitude is located, it can obtain the display screen control instruction corresponding to that amplitude interval by querying Table 3 above. For example, if the amplitude interval in which the volume amplitude is located, as acquired by the terminal, is amplitude interval two, the display screen control instruction finally acquired by the terminal is display screen control instruction two.
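  • The interval lookup of Table 3 can be sketched with a small list of interval-to-instruction pairs; the interval boundaries and instruction identifiers below are illustrative assumptions, not values given by this application:

```kotlin
data class AmplitudeInterval(val range: ClosedFloatingPointRange<Double>, val instructionId: Int)

// Sketch of Table 3: correspondence between amplitude intervals and display screen
// control instructions (boundaries are assumed example values, in decibels).
val amplitudeIntervals = listOf(
    AmplitudeInterval(0.0..20.0, instructionId = 1),
    AmplitudeInterval(20.0..40.0, instructionId = 2),
    AmplitudeInterval(40.0..60.0, instructionId = 3),
)

/** Step 808 (second implementation): find the amplitude interval containing the volume
 *  amplitude and return the corresponding display screen control instruction identifier. */
fun instructionForInterval(volumeDb: Double): Int? =
    amplitudeIntervals.firstOrNull { volumeDb in it.range }?.instructionId
```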
  • Step 809 Control the expansion or contraction of the display screen according to the display screen control instruction.
  • For step 809, reference may be made to the description of step 712 in the embodiment of FIG. 7, which will not be repeated here.
  • the terminal receives an input first sound signal; in response to the first sound signal meeting a specified condition, it obtains a display screen control instruction, where the display screen control instruction is used to instruct to control the expansion or contraction of the display screen; and it controls the expansion or contraction of the display screen according to the display screen control instruction.
  • the terminal performs a condition judgment on the input first sound signal.
  • when the condition is met, the display screen control instruction is obtained, so that the expansion and contraction of the terminal's display screen are controlled automatically; the user can thus complete the expansion and contraction control of the terminal's display screen simply by inputting a sound signal, which improves the flexibility of the terminal when displaying with a retractable display screen.
  • in addition, this application judges the target scene; when the terminal is in the target scene, the sound amplitude and the distance between the terminal and the user are used to further determine whether an expansion or contraction operation needs to be performed on the terminal, which avoids miscontrol of the terminal's display screen in the target scene.
  • when the terminal determines that the sound amplitude does not meet the third amplitude threshold condition, it does not perform an expansion or contraction operation on the display screen, which can reduce the noise generated when the terminal expands or contracts the display screen in the target scene and improve the sound quality in the target scene.
  • FIG. 9 shows a method flowchart of a display screen control method provided by an exemplary embodiment of the present application. The method is executed by a mobile phone used in daily life. As shown in FIG. 9, the display screen control method may include the following steps:
  • Step 901 Turn on the voice wake-up function in the mobile phone.
  • Step 902 Pre-enter the wake-up word, the first keyword, and the amplitude threshold that can be detected by the voice chip in the mobile phone.
  • Step 903 Detect whether the display screen of the mobile phone is in an off state.
  • If yes, go to step 904; if not, go to step 911.
  • Step 904 Receive the first voice.
  • FIG. 10 shows a schematic diagram of a user inputting a voice to a mobile phone according to an exemplary embodiment of the present application.
  • the user can speak to the terminal 1000, so that the terminal 1000 collects the user's voice.
  • Step 905 Detect whether the first voice contains a wake-up word through the voice chip.
  • If yes, go to step 906; otherwise, go to step 904.
  • Step 906 The voice chip turns on the voice recognition mode and wakes up the terminal.
  • the voice chip can be regarded as being in a light sleep state before waking up the terminal, that is, the voice recognition function is not turned on, and voice recognition of the input voice cannot be performed temporarily.
  • the voice chip can start its own voice recognition function, which can be regarded as being in a voice recognition mode, and can perform voice recognition on the voice input by the user.
  • Step 907 Receive the second voice.
  • Step 908 The voice chip recognizes the voice content of the second voice and reports it.
  • Step 909 Match the reported voice content with the first keyword.
  • When the reported voice content matches the first keyword, step 910 is executed; otherwise, step 907 is executed.
  • Step 910 The terminal obtains a control instruction corresponding to the second voice.
  • Step 911 Detect whether the terminal is in the target scene.
  • If yes, go to step 912; otherwise, go to step 917.
  • Step 912 Obtain the volume amplitude of the sound collected by the microphone.
  • Step 913 Detect whether the volume amplitude is greater than the amplitude threshold.
  • If yes, go to step 914; otherwise, go to step 917.
  • Step 914 Obtain a control instruction.
  • Step 915 Detect whether the display screen is in the maximum expansion state or the maximum contraction state.
  • If yes, go to step 916; otherwise, go to step 917.
  • Step 916 Control the display screen to expand or contract.
  • Step 917 Keep the size of the display screen of the terminal unchanged.
  • For example, if the algorithm logic corresponding to the above second voice is to expand the terminal in FIG. 10, the display screen of the terminal can be expanded in the direction of the arrow shown in FIG. 10.
  • In this case, the dotted line in FIG. 10 represents the position of the display screen before unfolding, and the solid line represents the position of the display screen after unfolding.
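  • For illustration only, the flow of FIG. 9 can be condensed into the following minimal Kotlin sketch; the types and the example keyword strings are assumptions of this sketch rather than APIs or values defined by the present application, and step 915 is read here as "only move the screen when it is not already at the requested limit".

```kotlin
// Minimal Kotlin sketch of the FIG. 9 decision flow (steps 903-917).
// Voice, ScreenDriver, DisplayController and the example keywords are assumptions of this sketch.

enum class Direction { EXPAND, CONTRACT }

data class Voice(val text: String, val amplitude: Double, val containsWakeWord: Boolean)

interface ScreenDriver {
    val isScreenOff: Boolean
    fun atLimit(direction: Direction): Boolean   // already fully expanded / fully contracted
    fun move(direction: Direction)               // step 916: expand or contract the display screen
}

class DisplayController(
    private val screen: ScreenDriver,
    private val inTargetScene: () -> Boolean,               // step 911
    private val amplitudeThreshold: Double,                  // pre-entered in step 902
    private val amplitudePolicy: (Double) -> Direction?,     // direction chosen in the lit-screen branch
    private val keywords: Map<String, Direction> = mapOf(    // example first keywords (step 902)
        "expand display" to Direction.EXPAND,
        "contract display" to Direction.CONTRACT
    )
) {
    /** Returns the executed direction, or null when the screen size is kept unchanged (step 917). */
    fun onVoice(wakeVoice: Voice, commandVoice: Voice): Direction? {
        val direction: Direction? = if (screen.isScreenOff) {
            // Steps 904-910: wake-word gate, then keyword matching of the second voice.
            if (wakeVoice.containsWakeWord) keywords[commandVoice.text] else null
        } else {
            // Steps 911-914: lit screen -> target scene -> amplitude gate.
            if (inTargetScene() && commandVoice.amplitude > amplitudeThreshold)
                amplitudePolicy(commandVoice.amplitude)
            else null
        }
        if (direction == null || screen.atLimit(direction)) return null  // steps 915 and 917
        screen.move(direction)                                           // step 916
        return direction
    }
}
```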
  • In summary, the terminal receives an input first sound signal; in response to the first sound signal meeting a specified condition, it obtains a display screen control instruction, where the display screen control instruction is used to instruct control of the expansion or contraction of the display screen; and it controls the expansion or contraction of the display screen according to the display screen control instruction.
  • In this application, the terminal performs condition judgment on the input first sound signal, and obtains the display screen control instruction when the first sound signal meets the specified condition, so as to automatically control the expansion and contraction of the display screen of the terminal. In this way, the user can complete the telescopic control of the terminal display screen simply by inputting a sound signal, which improves the flexibility of the terminal when displaying the retractable display screen.
  • FIG. 11 shows a structural block diagram of a display screen control device provided by an exemplary embodiment of the present application.
  • The display screen control device can be used in a terminal that includes a retractable display screen to perform all or part of the steps performed by the terminal in the method provided in the embodiment shown in FIG. 6, FIG. 7, FIG. 8, or FIG. 9.
  • the display screen control device may include: a sound signal receiving module 1101, a control instruction acquisition module 1102, and a display screen control module 1103.
  • the sound signal receiving module 1101 is configured to receive the input first sound signal
  • the control instruction obtaining module 1102 is configured to obtain a display screen control instruction in response to the first sound signal meeting a specified condition, where the display screen control instruction is used to instruct to control the expansion or contraction of the display screen;
  • the display screen control module 1103 is configured to control the expansion or contraction of the display screen according to the display screen control instruction.
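  • For orientation only, the module split of FIG. 11 can be pictured as the following interfaces; the reference numerals follow FIG. 11, while the Kotlin signatures are assumptions of this sketch and reuse the Voice and Direction types declared in the earlier sketch.

```kotlin
// Illustrative sketch of the three modules in FIG. 11; signatures are assumptions of this sketch.

interface SoundSignalReceivingModule {            // module 1101
    fun receiveFirstSoundSignal(): Voice
}

interface ControlInstructionObtainingModule {     // module 1102
    // Returns a control instruction only when the first sound signal meets the specified condition.
    fun obtainInstruction(signal: Voice): Direction?
}

interface DisplayScreenControlModule {            // module 1103
    fun control(instruction: Direction)           // expand or contract the display screen accordingly
}
```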
  • In summary, the terminal receives an input first sound signal; in response to the first sound signal meeting a specified condition, it obtains a display screen control instruction, where the display screen control instruction is used to instruct control of the expansion or contraction of the display screen; and it controls the expansion or contraction of the display screen according to the display screen control instruction.
  • In this application, the terminal performs condition judgment on the input first sound signal, and obtains the display screen control instruction when the first sound signal meets the specified condition, so as to automatically control the expansion and contraction of the display screen of the terminal. In this way, the user can complete the telescopic control of the terminal display screen simply by inputting a sound signal, which improves the flexibility of the terminal when displaying the retractable display screen.
  • Optionally, the specified condition includes at least one of the following conditions: a condition to be satisfied by the voice recognition result of the first sound signal; and a condition to be satisfied by the volume amplitude of the first sound signal.
  • the device further includes:
  • The first obtaining module is configured to, before the control instruction obtaining module 1102 obtains the display screen control instruction in response to the first sound signal satisfying the specified condition, and in response to the specified condition including a condition to be satisfied by the voice recognition result of the first sound signal, perform voice recognition on the first sound signal to obtain the voice recognition result;
  • the first determining module is configured to determine that the first sound signal satisfies the specified condition in response to the voice recognition result matching the first keyword.
  • control instruction obtaining module 1102 is configured to obtain the display screen control instruction corresponding to the first keyword in response to the first sound signal satisfying a specified condition.
  • the device further includes:
  • the second determining module is configured to determine that the specified condition includes a condition to be satisfied by the voice recognition result of the first sound signal in response to the display screen being in the off state.
  • the device further includes:
  • the first receiving module is configured to receive the input second sound signal before the sound signal receiving module 1101 receives the input first sound signal
  • the first control module is configured to control the terminal to enter the voice wake-up state in response to the wake-up word contained in the second sound signal;
  • the sound signal receiving module 1101 is configured to receive the first sound signal input when the terminal is in the voice wake-up state.
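  • As a rough sketch of the wake-up gating described above (assuming the Voice type from the earlier sketch and a hypothetical WakeWordDetector interface): input received before the wake word is treated only as the second sound signal, and only input received in the voice wake-up state is passed on as the first sound signal.

```kotlin
// Sketch of the wake-word gate; WakeWordDetector and WakeGate are assumptions of this sketch.

interface WakeWordDetector {
    fun containsWakeWord(signal: Voice): Boolean
}

class WakeGate(private val detector: WakeWordDetector) {
    private var awake = false   // whether the terminal is in the voice wake-up state

    /** Feeds raw microphone input; returns the signal as a first sound signal only while awake. */
    fun feed(signal: Voice): Voice? {
        if (!awake) {
            // Before waking, input is the second sound signal and is only scanned for the wake word.
            if (detector.containsWakeWord(signal)) awake = true
            return null
        }
        return signal
    }
}
```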
  • the device further includes:
  • The second obtaining module is configured to, before the control instruction obtaining module 1102 obtains the display screen control instruction in response to the first sound signal meeting the specified condition, and in response to the specified condition including a condition to be satisfied by the volume amplitude of the first sound signal, obtain the volume amplitude of the first sound signal;
  • the second determining module is configured to determine that the first sound signal satisfies the specified condition in response to the magnitude relationship between the volume amplitude and the amplitude threshold satisfying a specified magnitude relationship.
  • Optionally, the control instruction obtaining module 1102 is configured to obtain a first control instruction in response to the volume amplitude being less than a first amplitude threshold, where the first control instruction is used to instruct control of the expansion of the display screen.
  • Optionally, the control instruction obtaining module 1102 is configured to obtain a second control instruction in response to the volume amplitude being greater than a second amplitude threshold, where the second control instruction is used to instruct control of the contraction of the display screen.
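  • The amplitude branch just described can be condensed into a small policy function; the 20 dB values are only the example thresholds mentioned in the description, not mandated values, and Direction reuses the type from the earlier sketch.

```kotlin
// Sketch of the volume-amplitude policy: below the first threshold -> first (expansion) instruction,
// above the second threshold -> second (contraction) instruction, otherwise keep the screen as it is.
fun instructionFromAmplitude(
    amplitude: Double,
    firstThreshold: Double = 20.0,   // example value from the description, in dB
    secondThreshold: Double = 20.0   // example value from the description, in dB
): Direction? = when {
    amplitude < firstThreshold -> Direction.EXPAND     // first control instruction
    amplitude > secondThreshold -> Direction.CONTRACT  // second control instruction
    else -> null                                       // magnitude relationship not met
}
```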
  • the device further includes:
  • a third determining module configured to determine that the specified condition includes a condition to be satisfied by the volume amplitude of the first sound signal in response to the terminal being in a specified state
  • the specified state includes at least one of the following states:
  • the display screen is in a lighted state; and, the terminal is in a target scene.
  • the target scene is any one of a voice call scene, a recording scene, a video call scene, a voice message sending scene, and a video playback scene.
  • the device further includes:
  • The first obtaining module is configured to, before the control instruction obtaining module 1102 obtains the display screen control instruction in response to the first sound signal satisfying the specified condition, perform voice recognition on the first sound signal to obtain the voice recognition result;
  • the third acquiring module is configured to acquire the volume amplitude of the first sound signal
  • The fourth determining module is configured to determine that the first sound signal satisfies the specified condition in response to the voice recognition result matching the second keyword and the magnitude relationship between the volume amplitude and the amplitude threshold meeting the specified magnitude relationship.
  • control instruction obtaining module 1102 includes: a first obtaining unit or a second obtaining unit;
  • the first obtaining unit is configured to obtain the display screen control instruction corresponding to the second keyword in response to the volume amplitude being greater than a third amplitude threshold;
  • The second obtaining unit is configured to, in response to the voice recognition result matching the second keyword, obtain the amplitude interval in which the volume amplitude is located, and obtain the display screen control instruction corresponding to the amplitude interval.
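  • The second obtaining unit can be sketched as a lookup from amplitude interval to control instruction (compare the interval table in the description); the interval bounds and the direction assigned to each interval are illustrative assumptions of this sketch, chosen to mirror the thresholds above.

```kotlin
// Sketch of the interval lookup used by the second obtaining unit; table contents are assumptions.
data class AmplitudeInterval(val from: Double, val to: Double, val instruction: Direction)

val intervalTable = listOf(
    AmplitudeInterval(0.0, 20.0, Direction.EXPAND),     // quieter input -> expand (mirrors the first threshold)
    AmplitudeInterval(20.0, 120.0, Direction.CONTRACT)  // louder input  -> contract (mirrors the second threshold)
)

fun instructionFromInterval(
    recognizedText: String,
    secondKeyword: String,
    amplitude: Double,
    table: List<AmplitudeInterval> = intervalTable
): Direction? {
    if (recognizedText != secondKeyword) return null    // the recognition result must match the second keyword
    return table.firstOrNull { amplitude >= it.from && amplitude < it.to }?.instruction
}
```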
  • the computer device 1200 may include a processor 1201, a receiver 1202, a transmitter 1203, a memory 1204, and a bus 1205.
  • the processor 1201 includes one or more processing cores, and the processor 1201 executes various functional applications and information processing by running software programs and modules.
  • the receiver 1202 and the transmitter 1203 may be implemented as a communication component, and the communication component may be a communication chip.
  • the communication chip can also be called a transceiver.
  • the memory 1204 is connected to the processor 1201 through a bus 1205.
  • the memory 1204 may be used to store a computer program, and the processor 1201 is used to execute the computer program to implement each step executed by the computer device in the foregoing method embodiment.
  • the memory 1204 can be implemented by any type of volatile or non-volatile storage device or a combination thereof.
  • the volatile or non-volatile storage device includes, but is not limited to: magnetic disks or optical disks, electrically erasable and programmable Read-Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), Erasable Programmable Read-Only Memory (EPROM), Static Random Access Memory (SRAM), Read-Only Memory (Read Only Memory, ROM), magnetic memory, flash memory, and Programmable Read Only Memory (PROM).
  • In an exemplary embodiment, the computer device includes a processor and a memory.
  • The processor is configured to: receive an input first sound signal; in response to the first sound signal meeting a specified condition, obtain a display screen control instruction, where the display screen control instruction is used to instruct control of the expansion or contraction of the display screen; and control the expansion or contraction of the display screen according to the display screen control instruction.
  • the processor 1201 may be configured to execute all or part of the steps in the embodiment shown in FIG. 6, FIG. 7 or FIG. 8.
  • The embodiments of the present application also provide a computer-readable medium that stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement all or part of the steps performed by the terminal in the display screen control method described in each of the above embodiments.
  • the embodiments of the present application also provide a computer program product.
  • the computer program product includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the display screen control method provided in the various optional implementation manners of the foregoing various embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This application discloses a display screen control method and apparatus, a computer device, and a storage medium, belonging to the technical field of terminal control. The method is used in a terminal that includes a retractable display screen, and includes: receiving an input first sound signal; in response to the first sound signal meeting a specified condition, obtaining a display screen control instruction, where the display screen control instruction is used to instruct control of the expansion or contraction of the display screen; and controlling the expansion or contraction of the display screen according to the display screen control instruction. In this application, the terminal performs condition judgment on the input first sound signal and obtains the display screen control instruction when the first sound signal meets the specified condition, so as to automatically control the expansion and contraction of the display screen of the terminal, so that the user can complete the telescopic control of the terminal display screen simply by inputting a sound signal, which improves the flexibility of the terminal when displaying the retractable display screen.

Description

显示屏控制方法、装置、计算机设备以及存储介质
本申请要求于2020年06月18日提交的申请号为202010561945.6、发明名称为“显示屏控制方法、装置、计算机设备以及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及终端控制技术领域,特别涉及一种显示屏控制方法、装置、计算机设备以及存储介质。
背景技术
随着科学技术的快速发展,在人们的日常生活中,各种各样的终端已经出现,其中,卷曲屏、折叠屏等一系列具有可伸缩显示屏(也称柔性显示屏)的终端受到了广大用户的喜爱。
其中,在这些具有可伸缩显示屏的终端中,用户可以手动操作或者按键(包括虚拟按键和实体按键)操作控制终端的柔性显示屏展开、收缩。比如,以一部分显示屏展示在终端外,一部分显示屏卷曲收缩在终端内为例,用户可以通过手动拖拉的方式,将卷曲收缩在终端中的那部分显示屏展示出来,使得终端的显示屏幕变得更大,与展示在终端外那部分显示屏结合显示应用界面。
由于对上述可收缩在终端中的可伸缩显示屏进行展示时,需要用户手动操作或者通过按键进行控制,导致终端展示可伸缩显示屏时的灵活性低。
发明内容
本申请实施例提供了一种显示屏控制方法、装置、计算机设备以及存储介质,可以提高终端展示可伸缩显示屏时的灵活性。所述技术方案如下:
一个方面,本申请实施例提供了一种显示屏控制方法,所述方法由终端执行,所述终端中包含可伸缩的显示屏,所述方法包括:
接收输入的第一声音信号;
响应于所述第一声音信号满足指定条件,获取显示屏控制指令,所述显示屏控制指令用于指示对所述显示屏的伸展或者收缩进行控制;
根据所述显示屏控制指令,控制所述显示屏的伸展或者收缩。
另一方面,本申请实施例提供了一种显示屏控制装置,所述装置用于终端中,所述终端中包含可伸缩的显示屏,所述装置包括:
声音信号接收模块,用于接收输入的第一声音信号;
控制指令获取模块,用于响应于所述第一声音信号满足指定条件,获取显示屏控制指令,所述显示屏控制指令用于指示对所述显示屏的伸展或者收缩进行控制;
显示屏控制模块,用于根据所述显示屏控制指令,控制所述显示屏的伸展或者收缩。
另一方面,本申请实施例提供了一种计算机设备,所述计算机设备包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如上所述的显示屏控制方法。
另一方面,本申请实施例提供了一种计算机可读存储介质,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如上所述的显示屏控制方法。
另一方面,本申请实施例提供了一种计算机程序产品,所述计算机程序产品包括计算机 指令,所述计算机指令存储在计算机可读存储介质中。计算机设备的处理器从所述计算机可读存储介质读取所述计算机指令,所述处理器执行所述计算机指令,使得所述计算机设备执行上述一个方面提供的显示屏控制方法。
本申请实施例提供的技术方案带来的有益效果至少包括:
通过终端对输入的第一声音信号进行条件判断,在第一声音信号满足指定条件时,获取显示屏控制指令,从而自动控制终端的显示屏伸展和收缩,实现了用户通过输入声音信号,便可以完成对终端显示屏的伸缩控制,提高了终端展示可伸缩显示屏时的灵活性。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1至图5是本申请一示例性实施例涉及的一种终端结构示意图;
图6是本申请一示例性实施例提供的一种显示屏控制方法的方法流程图;
图7是本申请一示例性实施例提供的一种显示屏控制方法的方法流程图;
图8是本申请一示例性实施例提供的一种显示屏控制方法的方法流程图;
图9是本申请一示例性实施例提供的一种显示屏控制方法的方法流程图;
图10是本申请一示例性实施例涉及的一种用户对手机输入语音的示意图;
图11是本申请一示例性实施例提供的显示屏控制装置的结构框图;
图12是本申请一示例性实施例提供的一种计算机设备的结构示意图。
具体实施方式
这里将详细地对示例性实施例进行说明,其示例表示在附图中。下面的描述涉及附图时,除非另有表示,不同附图中的相同数字表示相同或相似的要素。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本申请的一些方面相一致的装置和方法的例子。
本申请提供的方案,可以用于人们在日常生活中使用具有柔性显示屏的终端时,控制终端的柔性显示屏展开或者收缩的现实场景中,为了便于理解,下面首先对本申请实施例涉及的一些名词以及终端的结构进行简单介绍。
语音唤醒:指用户通过说出唤醒词来唤醒终端,使终端开启语音对话功能,进入到等待语音指令的状态或者使终端直接执行预定语音指令。
请结合图1至图5,其示出了本申请一示例性实施例涉及的一种终端结构示意图。本申请实施例中的终端100包括壳体组件10、柔性显示屏30、带动件50及驱动机构70。壳体组件10为中空结构;带动件50、驱动机构70以及摄像头60等组件均可设置在壳体组件10。可以理解的是,本申请实施例中的终端100包括但不限于手机、平板等移动终端或者其它便携式电子设备,在本文中,以终端100为手机为例进行说明。
在本申请实施中,壳体组件10包括第一壳体12和第二壳体14,第一壳体12和第二壳体14能够相对运动。在一种可能的实施方式中,第一壳体12和第二壳体14滑动连接,也即是说,第二壳体14能够相对第一壳体12滑动。
可选的,请参阅图4及图5,第一壳体12与第二壳体14共同形成有容置空间16。容置空间16可用于放置带动件50、摄像头60及驱动机构70等部件。壳体组件10还可包括后盖18,后盖18与第一壳体12与第二壳体14共同形成容置空间16。
可选的,带动件50设置于第二壳体14,柔性显示屏30的一端设置于第一壳体12,柔性显示屏30绕过带动件50,且柔性显示屏的另一端设置于容置空间16内,以使部分柔性显示屏30隐藏于容置空间16内,隐藏于容置空间16内的部分柔性显示屏30可不点亮。第一 壳体12和第二壳体14相对远离,可通过带动件50带动柔性显示屏30展开,以使得更多的柔性显示屏30暴露于容置空间16外。点亮暴露于容置空间16外部的柔性显示屏30,以使得终端100所呈现的显示区域变大。
可选的,带动件50为外部带有齿52的转轴结构,柔性显示屏30通过啮合等方式与带动件50相联动,第一壳体12和第二壳体14相对远离时,通过带动件50带动啮合于带动件50上的部分柔性显示屏30移动并展开。
可以理解,带动件50还可为不附带齿52的圆轴,第一壳体12和第二壳体14相对远离时,通过带动件50将卷绕于带动件50上的部分柔性显示屏30撑开,以使更多的柔性显示屏暴露于容置空间16外,并处于平展状态。可选的,带动件50可转动地设置于第二壳体14,在逐步撑开柔性显示屏30时,带动件50可随柔性显示屏30的移动而转动。在其它实施例中,带动件50也可固定在第二壳体14上,带动件50具备光滑的表面。在将柔性显示屏30撑开时,带动件50通过其光滑的表面与柔性显示屏30可滑动接触。
当第一壳体12和第二壳体14相对靠近时,柔性显示屏可通过带动件50带动收回。或者,终端100还包括复位件(图未示),柔性显示屏收容于容置空间16的一端与复位件联动,在第一壳体12和第二壳体14相对靠近时,复位件带动柔性显示屏30复位,进而使得部分柔性显示屏收回于容置空间16内。
在本申请实施例中,驱动机构70可设置在容置空间16内,驱动机构70可与第二壳体14相联动,驱动机构70用于驱动第二壳体14相对于第一壳体12做相离运动,进而带动柔性显示屏组件30伸展。可以理解,驱动机构70也可以省略,用户可以直接通过手动等方式来使得第一壳体和第二壳体相对运动。
可选的,终端100可以是具有语音唤醒功能的终端,比如,该终端可以是手机、平板电脑、电子书阅读器、智能眼镜、智能手表、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、笔记本电脑、膝上型便携计算机等等。
可选的,用户可以提前在终端100中开启语音唤醒功能,并对终端输入相应的唤醒词,从而使得终端执行该唤醒词对应的语音指令。例如,以唤醒词为“你好,你好”为例,当用户对终端的麦克风说出:“你好,你好”语音时,终端可以得到该唤醒语音,并将终端唤醒。
对于上述图1至图5所示的终端来说,终端中的柔性显示屏需要展开或者收缩时,往往需要通过用户手动拖拽的方式实现,或者,通过用户对终端中具有控制柔性显示屏展开或收缩的控件进行操作而实现,从而使得终端中的柔性显示屏可以由图4展开至图5或者由图5收缩至图4。在这两种实现方式下,不仅需要用户手动操作,终端中还需要设置相应的硬件器件,增加了终端硬件设计的复杂程度,降低了终端展示可伸缩显示屏的灵活性。
为了提高终端展示可伸缩显示屏时的灵活性,扩展控制终端中可伸缩显示屏伸缩的方式,本申请提供了一种解决方案,请参考图6,其示出了本申请一示例性实施例提供的一种显示屏控制方法的方法流程图。该方法可以应用于上述图1至图5所示的具有可伸缩显示屏的终端中。如图6所示,该显示屏控制方法可以包括以下几个步骤:
步骤601,接收输入的第一声音信号。
可选的,用户可以通过终端的麦克风输入第一声音信号,终端通过自身的麦克风采集用户输入的第一声音信号。或者,终端也可以通过麦克风采集到自身周围环境的环境声音信号,该环境声音信号也可以是第一声音信号。
步骤602,响应于第一声音信号满足指定条件,获取显示屏控制指令,显示屏控制指令用于指示对显示屏的伸展或者收缩进行控制。
可选地,终端可以根据接收到的第一声音信号,判断该第一声音信号是否满足指定条件,在第一声音信号满足指定条件时,获取显示屏控制指令。其中,指定条件可以是预先设置在 终端中的。
可选地,指定条件可以是指第一声音信号的第一属性所要满足的条件,第一属性是第一声音信号的声音幅值属性、声音音色属性、文字内容属性中的任意一种属性或者多种属性。
步骤603,根据显示屏控制指令,控制显示屏的伸展或者收缩。
终端根据获取到的显示屏控制指令,进而控制自身的可收缩显示屏展开或者收缩。
综上所述,终端通过接收输入的第一声音信号;响应于第一声音信号满足指定条件,获取显示屏控制指令,显示屏控制指令用于指示对显示屏的伸展或者收缩进行控制;根据显示屏控制指令,控制显示屏的伸展或者收缩。本申请通过终端对输入的第一声音信号进行条件判断,在第一声音信号满足指定条件时,获取显示屏控制指令,从而自动控制终端的显示屏伸展和收缩,实现了用户通过输入声音信号,便可以完成对终端显示屏的伸缩控制,提高了终端展示可伸缩显示屏时的灵活性。
在一种可能实现的方式中,以上述指定条件指的是第一声音信号的第一属性所要满足的条件,第一属性是第一声音信号的声音幅值属性以及第一声音信号的文字内容属性中的至少一种为例,对上述图6所示的方案进行举例介绍。即,终端通过第一声音信号的语音识别结果所要满足的条件以及第一声音信号的音量幅值所要满足的条件这两者中的至少一项,确定是否获取显示屏控制指令,并控制终端的可伸缩显示屏进行伸展或者收缩。
请参考图7,其示出了本申请一示例性实施例提供的一种显示屏控制方法的方法流程图。该方法可以由上述图1至图5所示的终端执行,如图7所示,该显示屏控制方法可以包括以下几个步骤:
步骤701,响应于显示屏处于熄灭状态,确定指定条件包括第一声音信号的语音识别结果所要满足的条件。
可选地,终端在执行本申请所示的实施例时,可以判断自身的可伸缩显示屏是否处于熄灭状态,即,显示屏处于灭屏状态。在终端确定自身的显示屏处于熄灭状态时,可以确定指定条件是第一声音信号的声音识别结果所要满足的条件。即,对第一声音信号进行语音识别后,得到的语音识别结果中包含的文字内容所要满足的条件。
步骤702,接收输入的第二声音信号。
可选的,用户可以提前在终端中开启语音唤醒功能,终端在开启语音唤醒功能的情况下,接收到唤醒语音后,可以将终端唤醒,从而执行本申请的其他步骤。在一种可能实现的方式中,用户可以在终端的设置界面中设置语音唤醒功能的开启和关闭,在本步骤中,用户将终端的语音唤醒功能开启。
可选的,在终端开启了语音唤醒功能的情况下,用户可以通过对终端的麦克风讲话,向终端输入声音信号,相应的,终端的麦克风可以采集用户输入的声音信号。其中,在本申请中,终端在被唤醒之前,用户输入的语音可以统称为第二声音信号。
步骤703,响应于第二声音信号中包含唤醒词,控制终端进入语音唤醒状态。
在一种可能实现的方式中,终端中可以预先设置有目标检测算法,该目标检测算法可以提取并检测用户输入的语音的声纹特征。终端得到用户输入的第二声音信号的声纹特征后,将第二声音信号的声纹特征与预设唤醒词的声纹特征进行比对,如果两个声纹特征相同,终端视为第二声音信号的声纹特征与预设唤醒词的声纹特征匹配,那么,终端可以确定第二声音信号中包含唤醒词,执行步骤704。否则,终端继续接收用户输入的第二声音信号。
可选的,预设唤醒词可以是用户预先录入在终端中的。比如,在上述用户开启语音唤醒功能时,终端可以提示用户通过语音输入预设唤醒词,并将用户通过语音输入的预设唤醒词对应的声纹特征记录下来,作为预设唤醒词的声纹特征。例如,以预设唤醒词为“你好,你好”为例,当用户对终端的麦克风说出:“你好,你好”语音时,终端可以得到该唤醒语音,通过自身对唤醒语音进行识别得到该唤醒语音中包含的唤醒词,进一步获取该唤醒语音的声 纹特征,终端根据该语音信息的声纹特征,识别是哪个用户说的,从而决定是否启动相应的功能。
可选的,预设唤醒词的声纹特征的数量也可以是多个。例如,终端中存储有用户A的预设唤醒词的声纹特征,也存储有用户B的预设唤醒词的声纹特征,则预设唤醒词的声纹特征可以包含用户A的声纹特征以及用户B的声纹特征。可选的,当终端得到用户输入的第二声音信号后,终端可以将该第二声音信号的声纹特征与自身存储的各个预设唤醒词的声纹特征进行比对。当该第二声音信号的声纹特征与其中任意一个预设唤醒词的声纹特征相同时,也可以看做第二声音信号的声纹特征与预设唤醒词的声纹特征匹配,确定该第二声音信号中包含唤醒词,终端在确定第二声音信号中包含唤醒词后,可以控制终端进入语音唤醒状态。
即,在本步骤中,终端对获取到的第二声音信号的声纹特征与上述预设唤醒词的声纹特征进行匹配,当第二语音的声纹特征是属于该预设唤醒词的声纹特征中任意一个声纹特征时,终端都可以响应相应的语音,激活语音对话功能,并可以激活终端中的语音识别模块。该语音识别模块可以用于对用户输入的语音进行识别,得到该语音对应的文字,即语音内容。例如,预设唤醒词的声纹特征包含用户A的“你好,你好”语音对应的声纹特征以及用户B的“你好,你好”语音对应的声纹特征,用户A通过上述步骤对终端输入了第二声音信号“你好,你好”,终端可以通过目标检测模型对该第二声音信号进行处理,得到用户A的“你好,你好”语音对应的声纹特征,通过该声纹特征与终端中存储的预设唤醒词的声纹特征进行匹配,得知此次得到第二声音信号的声纹特征是预设唤醒词的声纹特征中对应用户A的声纹特征,从而唤醒终端。
步骤704,接收在终端处于语音唤醒状态时输入的第一声音信号。
可选的,在唤醒终端后,处于语音唤醒状态的终端可以继续接收用户输入的第一声音信号。其中,本申请中,终端被唤醒后接收到的用户输入的声音信号统称为第一声音信号。即,用户可以通过终端的麦克风输入第一声音信号,终端通过自身的麦克风采集用户输入的声音信号。例如,输入的声音信号可以是“打开显示屏”,“收缩显示屏”等。
步骤705,响应于指定条件包括第一声音信号的语音识别结果所要满足的条件,对第一声音信号进行语音识别,获得语音识别结果。
由于终端被唤醒,终端激活了语音识别功能,可以对用户输入的第一声音信号进行语音识别,得到该第一声音信号对应的第一语音内容。比如,用户输入的语音为“请打开第一应用程序”,那么,终端通过语音识别功能,识别到用户此次输入的第一声音信号的第一语音内容是“请打开第一应用程序”。如果用户输入的语音为“展开显示屏”,那么,终端通过语音识别功能,识别到用户此次输入的第一声音信号的第一语音内容是“展开显示屏”。
步骤706,响应于语音识别结果与第一关键词匹配,确定第一声音信号满足指定条件。
其中,第一关键词是目标关键词集合中的任意一个关键词,目标关键词集合中的各个关键词对应有控制指令,该控制指令是可以控制终端的显示屏伸缩的指令。可选的,目标关键词集合可以是开发人员预先录入在终端中的。比如,终端中存储有“展开显示屏”、“收缩显示屏”、“打开至一半”、“关闭至一半”等关键词组成的集合,该“展开显示屏”关键词对应有控制终端的显示屏展开的控制指令,该“收缩显示屏”关键词对应有控制终端的显示屏收缩的控制指令,该“打开至一半”关键词对应有控制终端的显示屏展开至显示屏全部长度的一半的控制指令,该“关闭至一半”关键词对应有控制终端的显示屏收缩至显示屏全部长度的一半的控制指令。
可选的,在终端对第一声音信号进行识别后,得到的语音识别结果中第一语音内容与第一关键词匹配时,终端可以确定第一声音信号满足指定条件。比如,用户输入的第一声音信号后,终端对第一声音信号进行识别,得到的语音识别结果中第一语音内容是“展开显示屏”,确定出该第一语音内容与目标关键词集合中的“展开显示屏”这个关键词相同,终端得知语音识别结果与第一关键词匹配,从而确定第一声音信号满足指定条件。
步骤707,响应于第一声音信号满足指定条件,获取与第一关键词相对应的显示屏控制指令。
可选地,终端在确定第一声音信号满足指定条件时,可以获取第一关键词相对应的显示屏控制指令。比如,上述步骤706中获取的第一语音内容是“展开显示屏”,与目标关键词集合中的一个关键词相同,那么终端可以视为获取的第一语音内容与第一关键词匹配。本步骤中终端可以获取到第一关键词对应的控制终端的显示屏伸缩的控制指令。如果第一语音内容是“展开显示屏”,终端可以获取到控制终端的显示屏展开的控制指令。如果第一语音内容是“收缩显示屏”,终端可以获取到控制终端的显示屏收缩的控制指令。
步骤708,响应于终端处于指定状态,确定指定条件包括第一声音信号的音量幅值所要满足的条件。
其中,指定状态包括以下状态中的至少一项:显示屏处于点亮状态;以及,终端处于目标场景。可选地,目标场景是语音通话场景、录音场景、视频通话场景、发送语音消息场景以及视频播放场景中的任意一种场景。
在一种可能实现的方式中,终端在上述判断自身的可伸缩显示屏是否处于熄灭状态时,发现显示屏是处于点亮状态的,此时终端处于指定状态。终端还可以判断自身当前的使用场景是否是目标场景,如果终端的显示屏处于熄灭状态,并且终端自身当前的使用场景是目标场景,此时终端处于指定状态。可选地,可伸缩显示屏处于点亮状态是指可伸缩显示屏不处于完全熄灭的状态,可伸缩显示屏中的部分显示屏处于点亮状态时,也可以视为该可伸缩显示屏处于点亮状态。
可选地,终端判断自身所处的使用场景可以通过如下方式进行判断。例如,终端根据正在运行的应用程序的程序名称,获取终端当前的使用场景。
可选的,终端的显示屏在点亮状态下,终端还可以根据自身正在运行的应用程序的程序名称,获取终端当前的使用场景。在一种可能实现的方式中,终端可以存储有应用程序名称与使用场景之间的对应关系表。请参考表1,其示出了本申请一示例性实施例提供的一种应用程序名称与使用场景之间的对应关系表。
应用程序名称 使用场景
应用程序一 使用场景一
应用程序二 使用场景二
应用程序三 使用场景三
应用程序一、应用程序二 使用场景一
应用程序一、应用程序三 使用场景三
…… ……
表1
如表1所示,终端的使用场景可以与正在运行的应用程序一一对应,也可以是一对多。例如,上述终端获取到自身正在运行的应用程序的程序名称是应用程序一,那么,通过上述表1,终端可以获取到当前的使用场景是使用场景一。
在一种可能实现的方式中,上述表1中的应用程序三是终端中的电话应用程序,当该电话应用程序运行时,终端获取到的当前的使用场景是语音通话场景。上述表1中的应用程序三是终端中的录音应用程序,当该录音应用程序运行时,终端获取到的当前的使用场景是录音场景。上述表1中的应用程序三是终端中的视频播放应用程序,当该视频播放应用程序运行时,终端获取到的当前的使用场景是视频场景。
终端在获取到自身当前的使用场景后,可以继续判断自身的使用场景是否是目标场景,如果是目标场景,则在本步骤中确定指定条件包括第一声音信号的音量幅值所要满足的条件。
步骤709,响应于指定条件包括第一声音信号的音量幅值所要满足的条件,获取第一声音信号的音量幅值。
可选地,终端在确定指定条件包括第一声音信号的音量幅值所要满足的条件后,终端还可以获取第一声音信号的音量幅值。可选的,该第一声音信号的音量幅值可以在上述接收到用户输入的第一声音信号时预先通过麦克风获取,本步骤中直接获取已经获取到的第一声音信号的音量幅值。可选的,该音量幅值也可以看做是声音强度。
可选地,终端也可以计算第一声音信号的音量幅值的平均值,将第一声音信号的音量幅值的平均值作为第一声音信号的音量幅值。可选地,平均值计算时采用的时间可以由用户预先在终端中设定。比如,用户设定计算第一声音信号的音量幅值的平均值时采用的时间是第一声音信号的持续时长,那么,终端可以根据第一声音信号的音量幅值(即第一声音信号在各个采样点上的音量幅值之和)除以第一声音信号的持续时长,得到第一声音信号的音量幅值的平均值。或者,用户设定计算第一声音信号的音量幅值的平均值时采用的时间是固定的2秒,那么,终端可以将获取到的第一声音信号的音量幅值按照2秒进行分割,得到多个第一声音信号的音量幅值(即多段采样点上的音量幅值之和),对这些音量幅值均除以2秒,得到多个第一声音信号的音量幅值各自的平均值。
步骤710,响应于音量幅值与幅值阈值之间的大小关系满足指定大小关系,确定第一声音信号满足指定条件。
可选地,终端还可以获取第一声音信号的音量幅值与幅值阈值之间的大小关系,当第一声音信号的音量幅值与幅值阈值之间的大小关系满足指定大小关系时,终端确定第一声音信号满足指定条件。
在一种可能实现的方式中,幅值阈值包括第一幅值阈值和第二幅值阈值,其中,指定大小关系是音量幅值小于第一幅值阈值。即,终端响应于音量幅值小于第一幅值阈值,确定第一声音信号满足指定条件。
可选的,终端可以直接将获取到的第一声音信号的音量幅值与第一幅值阈值进行比较,得到两者之间的大小关系;对应上述一种可能实现的方式,指定大小关系是音量幅值小于第一幅值阈值。终端可以计算第一声音信号的音量幅值的平均值;检测第一声音信号的音量幅值的平均值是否小于第一幅值阈值。在第一声音信号的音量幅值的平均值小于第一幅值阈值时,确定两者之间的大小关系满足指定大小关系。可选的,该指定大小关系和第一幅值阈值都可以有开发人员或者运维人员预先设置在终端中。
比如,第一幅值阈值为20分贝,若终端获取到的第一声音信号的音量幅值的平均值是15分贝,那么说明音量幅值与第一幅值阈值之间的大小关系满足指定大小关系,相应的,终端确定第一声音信号满足指定条件。若终端获取到的第一声音信号的音量幅值的平均值是25分贝,那么说明音量幅值与第一幅值阈值之间的大小关系不满足指定大小关系,相应的,终端确定第一声音信号不满足指定条件。
在一种可能实现的方式中,指定大小关系是音量幅值大于第二幅值阈值。即,终端响应于音量幅值大于第二幅值阈值,确定第一声音信号满足指定条件。
可选的,终端可以直接将获取到的第一声音信号的音量幅值与第二幅值阈值进行比较,得到两者之间的大小关系;对应上述一种可能实现的方式,指定大小关系是音量幅值大于第二幅值阈值。终端可以计算第一声音信号的音量幅值的平均值;检测第一声音信号的音量幅值的平均值是否大于第二幅值阈值。在第一声音信号的音量幅值的平均值大于第二幅值阈值时,确定两者之间的大小关系满足指定大小关系。可选的,该第二幅值阈值也可以由开发人员或者运维人员预先设置在终端中。
比如,第一幅值阈值为20分贝,若终端获取到的第一声音信号的音量幅值的平均值是25分贝,那么说明音量幅值与第一幅值阈值之间的大小关系满足指定大小关系,相应的,终端确定第一声音信号满足指定条件。若终端获取到的第一声音信号的音量幅值的平均值是15分贝,那么说明音量幅值与第一幅值阈值之间的大小关系不满足指定大小关系,相应的,终端确定第一声音信号不满足指定条件。可选地,上述第一幅值阈值与第二幅值阈值的大小可 以相同也可以不同。
步骤711,响应于音量幅值小于第一幅值阈值,获取第一控制指令,第一控制指令用于指示对显示屏的伸展进行控制。
即,如果上述指定大小关系是音量幅值小于第一幅值阈值,终端判断出音量幅值小于第一幅值阈值时,可以获取第一控制指令,从而控制显示屏伸展。
可选地,如果上述指定大小关系是音量幅值大于第二幅值阈值,本步骤可以替换为:响应于音量幅值大于第二幅值阈值,获取第二控制指令,第二控制指令用于指示对显示屏的收缩进行控制。即,终端判断出音量幅值大于第二幅值阈值时,可以获取第二控制指令,从而控制显示屏收缩。
在一种可能实现的方式中,如果上述步骤710中,终端确定第一声音信号不满足指定条件,终端可以保持显示屏的大小不变,不执行后续的步骤。即,响应于第一声音信号不满足指定条件,终端可以不对显示屏做伸缩处理,保持原来的伸缩状态不变。
步骤712,根据显示屏控制指令,控制显示屏的伸展或者收缩。
终端根据获取到的显示屏控制指令,调用用于控制显示屏伸展和收缩的驱动芯片接口函数,从而控制显示屏的伸展和收缩。
在一种可能实现的方式中,在上述步骤707中获取到的与第一关键词相对应的显示屏控制指令,是用于将显示屏进行收缩,那么,终端可以通过该显示屏控制指令,控制显示屏的收缩。如果在上述步骤707中获取到的与第一关键词相对应的显示屏控制指令,是用于将显示屏进行展开,那么,终端可以通过该显示屏控制指令,控制显示屏的展开。相应的,如果在上述步骤711中获取到第一控制指令,那么,终端可以通过该第一控制指令,控制显示屏的展开。如果在上述步骤711替换为其中描述的一种情况,终端获取到第二控制指令,那么,终端可以通过该第二控制指令,控制显示屏的收缩。
在一种可能实现的方式中,显示屏控制指令是将显示屏进行展开,终端在控制显示屏的伸展之前,终端还可以检测终端的显示屏是否处于最大展开状态;响应于终端的显示屏未处于最大展开状态,控制终端的显示屏伸展。即,终端在展开显示屏之前,终端可以检测显示屏是否已经展开到最大,如果已经展开到最大,那么终端可以保持显示屏的大小不变,如果显示屏未展开到最大,那么,终端可以控制显示屏展开。
相应的,如果显示屏控制指令是将显示屏进行收缩,终端在控制显示屏的伸展之前,终端还可以检测终端的显示屏是否处于最大收缩状态;响应于终端的显示屏未处于最大收缩状态,控制终端的显示屏伸展。即,终端在收缩显示屏之前,终端可以检测显示屏是否已经收缩到最大,如果已经收缩到最大,那么终端可以保持显示屏的大小不变,如果显示屏未收缩到最大,那么,终端可以控制显示屏收缩。
在一种可能实现的方式中,在步骤707中获取到的与第一关键词相对应的显示屏控制指令在终端中还具有对应的目标状态。终端根据显示屏控制指令,获取显示屏控制指令对应的目标状态;终端可以检测终端的显示屏是否处于目标状态,并响应于终端的显示屏未处于目标状态,根据显示屏控制指令,控制终端的显示屏伸展或者收缩,使得终端的显示屏伸展或者收缩至目标状态。可选的,终端可以预先存储有目标状态与显示屏控制指令之间的对应关系。请参考表2,其示出了本申请一示例性实施例涉及的一种显示屏控制指令与目标状态之间的对应关系表。
显示屏控制指令 目标状态
控制指令一 目标状态一
控制指令二 目标状态二
控制指令三 目标状态三
…… ……
表2
如表2所示,不同的显示屏控制指令可以对应有自己的目标状态。可选的,目标状态可以指示终端显示屏在伸缩或者收缩方向上的长度。比如,终端的显示屏全部展开时,在展开方向的长度为20厘米,控制指令一是对应将终端的显示屏全部展开的指令,那么,该控制指令一对应的目标状态一可以是显示屏长度为20厘米的状态。或者,终端的显示屏展开至一半时,在展开方向的长度为15厘米,控制指令二是对应将终端的显示屏至一半的指令,那么,该控制指令二对应的目标状态一可以是显示屏长度为15厘米的状态。相应的,终端可以判断此时自身的显示屏是否处于相应的状态,如果处于相应的状态,终端可以不做处理,如果不处于相应的状态,终端还可以根据显示屏控制指令,控制终端的显示屏伸展或者收缩。
综上所述,终端通过接收输入的第一声音信号;响应于第一声音信号满足指定条件,获取显示屏控制指令,显示屏控制指令用于指示对显示屏的伸展或者收缩进行控制;根据显示屏控制指令,控制显示屏的伸展或者收缩。本申请通过终端对输入的第一声音信号进行条件判断,在第一声音信号满足指定条件时,获取显示屏控制指令,从而自动控制终端的显示屏伸展和收缩,实现了用户通过输入声音信号,便可以完成对终端显示屏的伸缩控制,提高了终端展示可伸缩显示屏时的灵活性。
另外,在终端处于目标场景下时,通过确定音量幅值与第一幅值阈值和第二幅值阈值的大小关系,获取对应的显示屏控制指令,使得终端在目标场景下可以主动变换显示屏的大小,灵活改善目标场景下终端的麦克风获取或者播放声音的效果。
在一种可能实现的方式中,在上述图7所示的实施例中,指定条件包括第一声音信号的语音识别结果所要满足的条件以及第一声音信号的音量幅值所要满足的条件,即,第一声音信号的语音识别结果所要满足的条件以及第一声音信号的音量幅值所要满足的条件都存在指定条件中时,终端不仅获得语音识别结果,还获取第一声音信号的音量幅值。
请参考图8,其示出了本申请一示例性实施例提供的一种显示屏控制方法的方法流程图。该方法可以由上述图1至图5所示的终端执行,如图8所示,该显示屏控制方法可以包括以下几个步骤:
步骤801,启动终端的语音唤醒功能。
步骤802,接收输入的第二声音信号。
步骤803,响应于第二声音信号中包含唤醒词,控制终端进入语音唤醒状态。
步骤804,接收在终端处于语音唤醒状态时输入的第一声音信号。
步骤805,对第一声音信号进行语音识别,获得语音识别结果。
可选地,步骤802至步骤805的实现方式可以参照上述图7实施例中的步骤702至步骤705的描述,此处不再赘述。
步骤806,获取第一声音信号的音量幅值。
可选地,步骤806的实现方式可以参照上述图7实施例中步骤709的描述,此处不再赘述。
步骤807,响应于语音识别结果与第二关键词匹配,且音量幅值与幅值阈值之间的大小关系满足指定大小关系,确定第一声音信号满足指定条件。
可选地,第二关键词和上述第一关键词可以相同,均是目标关键词集合中的关键词。其中,终端判断语音识别结果与第二关键词匹配是否匹配,以及音量幅值与幅值阈值之间的大小关系是否满足指定大小关系分别可以参照上述图7实施例中步骤706和步骤710的描述,此处不再赘述。
步骤808,响应于第一声音信号满足指定条件,获取显示屏控制指令。
在一种可能实现的方式中,响应于音量幅值大于第三幅值阈值,获取与第二关键词相对应的显示屏控制指令。即,终端在确定第一语音信号满足指定条件时,还可以对上述获取到的音量幅值再次判断,确定音量幅值与第三幅值阈值之间的大小关系,如果音量幅值大于第 三幅值阈值,获取与第二关键词相对应的显示屏控制指令。例如,第三幅值阈值为25分贝,终端获取到的音量幅值为30分贝,那么,终端可以获取与第二关键词相对应的显示屏控制指令。可选地,如果音量幅值不大于第三幅值阈值,终端可以不对显示屏做伸展或者收缩处理,即并不获取显示屏控制指令,保持显示屏大小不变。
在一种可能实现的方式中,响应于语音识别结果与第二关键词匹配,获取音量幅值所在的幅值区间,获取与幅值区间相对应的显示屏控制指令。即,终端在确定第一语音信号满足指定条件时,还可以对上述获取音量幅值对应的幅值区间,获取与幅值区间相对应的显示屏控制指令。例如,终端可以预先存储有幅值区间与显示屏控制指令之间的对应关系。请参考表3,其示出了本申请一示例性实施例涉及的一种幅值区间与显示屏控制指令之间的对应关系表。
幅值区间 显示屏控制指令
幅值区间一 显示屏控制指令一
幅值区间二 显示屏控制指令二
幅值区间三 显示屏控制指令三
…… ……
表3
如表3所示,不同的幅值区间可以对应有自己的显示屏控制指令。终端在获取到音量幅值所在的幅值区间后,可以通过查询上述表3得到幅值区间相对应的显示屏控制指令。比如,终端获取到音量幅值所在的幅值区间是幅值区间二,那么,终端最终获取到的显示屏控制指令是显示屏控制指令二。
步骤809,根据显示屏控制指令,控制显示屏的伸展或者收缩。
可选地,步骤809的实现方式可以参照上述图7实施例中步骤712的描述,此处不再赘述。
综上所述,终端通过接收输入的第一声音信号;响应于第一声音信号满足指定条件,获取显示屏控制指令,显示屏控制指令用于指示对显示屏的伸展或者收缩进行控制;根据显示屏控制指令,控制显示屏的伸展或者收缩。本申请通过终端对输入的第一声音信号进行条件判断,在第一声音信号满足指定条件时,获取显示屏控制指令,从而自动控制终端的显示屏伸展和收缩,实现了用户通过输入声音信号,便可以完成对终端显示屏的伸缩控制,提高了终端展示可伸缩显示屏时的灵活性。
另外,本申请通过对目标场景的判断,在终端处于目标场景下时,通过声音幅值进一步确定是否需要对终端执行伸缩操作,以及通过终端与用户之间的距离进一步确定是否需要对终端执行伸缩操作,避免了终端在目标场景下,对终端的显示屏的误控制。而且,本申请中,终端在判断出声音幅值不满足第三幅值阈值条件时,并不对终端执行伸缩操作,可以减少在目标场景下终端伸缩显示屏时产生的噪音,提高目标场景下的音质。
在一种可能实现的方式中,以上述终端是手机,手机中包含语音芯片,该手机中的语音芯片执行上述图7所示的关于声音信号的获取、检测、识别等步骤为例,对上述图6、图7或图8的方法实施例进行举例说明。请参考图9,其示出了本申请一示例性实施例提供的一种显示屏控制方法的方法流程图。该方法是由日常生活中的手,执行的,如图9所示,该显示屏控制方法可以包括以下几个步骤:
步骤901,在手机中开启语音唤醒功能。
步骤902,在手机中预先录入语音芯片所能检测到的唤醒词、第一关键词以及幅值阈值。
步骤903,检测手机的显示屏是否处于熄灭状态。
若是,执行步骤904,若否,执行步骤911。
步骤904,接收第一语音。
请参考图10,其示出了本申请一示例性实施例涉及的一种用户对手机输入语音的示意图。如图10所示,用户可以对终端1000讲话,使得终端1000采集到用户的语音。
步骤905,通过语音芯片检测第一语音是否包含唤醒词。
若是,执行步骤906,否则执行步骤904。
步骤906,语音芯片开启语音识别模式,唤醒终端。
可选地,语音芯片在唤醒终端之前,可以看做处于浅睡眠状态,即,未开启语音识别功能,暂时不能对输入的语音进行语音识别。在唤醒终端后,语音芯片可以启动自身的语音识别功能,可以看做是处于语音识别模式下,并可以对用户输入的语音进行语音识别。
步骤907,接收第二语音。
步骤908,语音芯片识别第二语音的语音内容进行上报。
步骤909,通过上报的语音内容与第一关键字进行匹配。
当上报的语音内容与第一关键字匹配时执行步骤910,否则执行步骤907。
步骤910,终端获取第二语音对应的控制指令。
步骤911,检测终端是否处于目标场景。
若是,执行步骤912,否则,执行步骤917。
步骤912,获取麦克风采集到声音的音量幅值。
步骤913,检测音量幅值是否大于幅值阈值。
若是,执行步骤914,否则执行步骤917。
步骤914,获取控制指令。
步骤915,检测显示屏是否处于最大展开状态或者最大收缩状态。
若是,执行步骤916,否则执行步骤917。
步骤916,控制显示屏展开或者收缩。
步骤917,保持终端的显示屏大小不变。
例如,上述第二语音对应的算法逻辑是将图10中的终端展开,终端的显示屏可以按照图10所示的箭头方向进行展开。其中,图10中虚线代表显示屏展开前的位置,实线代表显示屏展开后的位置。如果上述第二语音对应的算法逻辑是将图10中的终端收缩,终端的显示屏可以按照图10所示的箭头方向的反方向进行收缩。此时,图10中实线代表显示屏收缩前的位置,虚线代表显示屏收缩后的位置。
综上所述,终端通过接收输入的第一声音信号;响应于第一声音信号满足指定条件,获取显示屏控制指令,显示屏控制指令用于指示对显示屏的伸展或者收缩进行控制;根据显示屏控制指令,控制显示屏的伸展或者收缩。本申请通过终端对输入的第一声音信号进行条件判断,在第一声音信号满足指定条件时,获取显示屏控制指令,从而自动控制终端的显示屏伸展和收缩,实现了用户通过输入声音信号,便可以完成对终端显示屏的伸缩控制,提高了终端展示可伸缩显示屏时的灵活性。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图11,其示出了本申请一示例性实施例提供的显示屏控制装置的结构框图。该显示屏控制装置可以用于终端中,该终端中包含可伸缩的显示屏,以执行图6、图7、图8或者图9所示实施例提供的方法中由终端执行的全部或者部分步骤。该显示屏控制装置可以包括:声音信号接收模块1101,控制指令获取模块1102以及显示屏控制模块1103。
声音信号接收模块1101,用于接收输入的第一声音信号;
控制指令获取模块1102,用于响应于所述第一声音信号满足指定条件,获取显示屏控制指令,所述显示屏控制指令用于指示对所述显示屏的伸展或者收缩进行控制;
显示屏控制模块1103,用于根据所述显示屏控制指令,控制所述显示屏的伸展或者收缩。
综上所述,终端通过接收输入的第一声音信号;响应于第一声音信号满足指定条件,获取显示屏控制指令,显示屏控制指令用于指示对显示屏的伸展或者收缩进行控制;根据显示屏控制指令,控制显示屏的伸展或者收缩。本申请通过终端对输入的第一声音信号进行条件判断,在第一声音信号满足指定条件时,获取显示屏控制指令,从而自动控制终端的显示屏伸展和收缩,实现了用户通过输入声音信号,便可以完成对终端显示屏的伸缩控制,提高了终端展示可伸缩显示屏时的灵活性。
可选的,所述指定条件包括以下条件中的至少一项:
所述第一声音信号的语音识别结果所要满足的条件;
以及,所述第一声音信号的音量幅值所要满足的条件。
可选的,所述装置还包括:
第一获取模块,用于在所述控制指令获取模块1102响应于所述第一声音信号满足指定条件,获取显示屏控制指令之前,响应于所述指定条件包括所述第一声音信号的语音识别结果所要满足的条件,对所述第一声音信号进行语音识别,获得所述语音识别结果;
第一确定模块,用于响应于所述语音识别结果与第一关键词匹配,确定所述第一声音信号满足所述指定条件。
可选地,所述控制指令获取模块1102,用于响应于所述第一声音信号满足指定条件,获取与所述第一关键词相对应的所述显示屏控制指令。
可选地,所述装置还包括:
第二确定模块,用于响应于所述显示屏处于熄灭状态,确定所述指定条件包括所述第一声音信号的语音识别结果所要满足的条件。
可选地,所述装置还包括:
第一接收模块,用于在所述声音信号接收模块1101接收输入的第一声音信号之前,接收输入的第二声音信号;
第一控制模块,用于响应于所述第二声音信号中包含唤醒词,控制所述终端进入语音唤醒状态;
所述声音信号接收模块1101,用于接收在所述终端处于所述语音唤醒状态时输入的所述第一声音信号。
可选地,所述装置还包括:
第二获取模块,用于在所述控制指令获取模块1102响应于第一声音信号满足指定条件,获取显示屏控制指令之前,响应于所述指定条件包括所述第一声音信号的音量幅值所要满足的条件,获取所述第一声音信号的音量幅值;
第二确定模块,用于响应于所述音量幅值与幅值阈值之间的大小关系满足指定大小关系,确定所述第一声音信号满足所述指定条件。
可选地,所述控制指令获取模块1102,用于响应于所述音量幅值小于第一幅值阈值,获取第一控制指令,所述第一控制指令用于指示对所述显示屏的伸展进行控制。
可选地,所述控制指令获取模块1102,用于响应于所述音量幅值大于第二幅值阈值,获取第二控制指令,所述第二控制指令用于指示对所述显示屏的收缩进行控制。
可选地,所述装置还包括:
第三确定模块,用于响应于所述终端处于指定状态,确定所述指定条件包括所述第一声音信号的音量幅值所要满足的条件;
所述指定状态包括以下状态中的至少一项:
所述显示屏处于点亮状态;以及,所述终端处于目标场景。
可选地,所述目标场景是语音通话场景、录音场景、视频通话场景、发送语音消息场景以及视频播放场景中的任意一种场景。
可选地,所述装置还包括:
第一获得模块,用于在所述控制指令获取模块1102响应于所述第一声音信号满足指定条件,获取显示屏控制指令之前,对所述第一声音信号进行语音识别,获得所述语音识别结果;
第三获取模块,用于获取所述第一声音信号的音量幅值;
第四确定模块,用于响应于所述语音识别结果与第二关键词匹配,且所述音量幅值与幅值阈值之间的大小关系满足指定大小关系,确定所述第一声音信号满足所述指定条件。
可选地,所述控制指令获取模块1102,包括:第一获取单元或者第二获取单元;
所述第一获取单元,用于响应于所述音量幅值大于第三幅值阈值,获取与所述第二关键词相对应的所述显示屏控制指令;
或者,
所述第二获取单元,用于响应于所述语音识别结果与所述第二关键词匹配,获取所述音量幅值所在的幅值区间,获取与所述幅值区间相对应的所述显示屏控制指令。
请参考图12,其示出了本申请一示例性实施例提供的一种计算机设备的结构示意图。该计算机设备1200可以包括:处理器1201、接收器1202、发射器1203、存储器1204和总线1205。
处理器1201包括一个或者一个以上处理核心,处理器1201通过运行软件程序以及模块,从而执行各种功能应用以及信息处理。
接收器1202和发射器1203可以实现为一个通信组件,该通信组件可以是一块通信芯片。该通信芯片也可以称为收发器。
存储器1204通过总线1205与处理器1201相连。
存储器1204可用于存储计算机程序,处理器1201用于执行该计算机程序,以实现上述方法实施例中的计算机设备执行的各个步骤。
此外,存储器1204可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,易失性或非易失性存储设备包括但不限于:磁盘或光盘,电可擦除可编程只读存储器(Electrically Erasable Programmable Read-Only Memory,EEPROM),可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM),静态随时存取存储器(Static Random Access Memory,SRAM),只读存储器(Read Only Memory,ROM),磁存储器,快闪存储器,可编程只读存储器(Programmable Read Only Memory,PROM)。
在示例性实施例中,所述计算机设备包括处理器和存储器;
所述处理器用于,
接收输入的第一声音信号;
响应于所述第一声音信号满足指定条件,获取显示屏控制指令,所述显示屏控制指令用于指示对所述显示屏的伸展或者收缩进行控制;
根据所述显示屏控制指令,控制所述显示屏的伸展或者收缩。
处理器1201可以用于执行如上述图6、图7或图8所示实施例中的全部或者部分步骤。
本申请实施例还提供了一种计算机可读介质,该计算机可读介质存储有至少一条指令,所述至少一条指令由所述处理器加载并执行以实现如上各个实施例所述的显示屏控制方法中,由终端执行的全部或部分步骤。
本申请实施例还提供了一种计算机程序产品,该计算机程序产品包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述各个实施例的各种可选实现方式中提供的显示屏控制方法。
需要说明的是:上述实施例提供的显示屏控制装置在执行上述显示屏控制方法时,仅以上述各实施例进行举例说明,实际程序中,可以根据需要而将上述功能分配由不同的功能模 块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。
以上所述仅为本申请可选的实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (16)

  1. 一种显示屏控制方法,其特征在于,所述方法由终端执行,所述终端中包含可伸缩的显示屏,所述方法包括:
    接收输入的第一声音信号;
    响应于所述第一声音信号满足指定条件,获取显示屏控制指令,所述显示屏控制指令用于指示对所述显示屏的伸展或者收缩进行控制;
    根据所述显示屏控制指令,控制所述显示屏的伸展或者收缩。
  2. 根据权利要求1所述的方法,其特征在于,所述指定条件包括以下条件中的至少一项:
    所述第一声音信号的语音识别结果所要满足的条件;
    以及,所述第一声音信号的音量幅值所要满足的条件。
  3. 根据权利要求2所述的方法,其特征在于,所述响应于所述第一声音信号满足指定条件,获取显示屏控制指令之前,还包括:
    响应于所述指定条件包括所述第一声音信号的语音识别结果所要满足的条件,对所述第一声音信号进行语音识别,获得所述语音识别结果;
    响应于所述语音识别结果与第一关键词匹配,确定所述第一声音信号满足所述指定条件。
  4. 根据权利要求3所述的方法,其特征在于,所述响应于所述第一声音信号满足指定条件,获取显示屏控制指令,包括:
    响应于所述第一声音信号满足指定条件,获取与所述第一关键词相对应的所述显示屏控制指令。
  5. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    响应于所述显示屏处于熄灭状态,确定所述指定条件包括所述第一声音信号的语音识别结果所要满足的条件。
  6. 根据权利要求3所述的方法,其特征在于,在所述接收输入的第一声音信号之前,还包括:
    接收输入的第二声音信号;
    响应于所述第二声音信号中包含唤醒词,控制所述终端进入语音唤醒状态;
    所述接收输入的第一声音信号,包括:
    接收在所述终端处于所述语音唤醒状态时输入的所述第一声音信号。
  7. 根据权利要求2所述的方法,其特征在于,所述响应于所述第一声音信号满足指定条件,获取显示屏控制指令之前,还包括:
    响应于所述指定条件包括所述第一声音信号的音量幅值所要满足的条件,获取所述第一声音信号的音量幅值;
    响应于所述音量幅值与幅值阈值之间的大小关系满足指定大小关系,确定所述第一声音信号满足所述指定条件。
  8. 根据权利要求7所述的方法,其特征在于,所述响应于所述第一声音信号满足指定条件,获取显示屏控制指令,包括:
    响应于所述音量幅值小于第一幅值阈值,获取第一控制指令,所述第一控制指令用于指示对所述显示屏的伸展进行控制。
  9. 根据权利要求8所述的方法,其特征在于,所述响应于所述第一声音信号满足指定条件,获取显示屏控制指令,包括:
    响应于所述音量幅值大于第二幅值阈值,获取第二控制指令,所述第二控制指令用于指示对所述显示屏的收缩进行控制。
  10. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    响应于所述终端处于指定状态,确定所述指定条件包括所述第一声音信号的音量幅值所要满足的条件;
    所述指定状态包括以下状态中的至少一项:
    所述显示屏处于点亮状态;以及,所述终端处于目标场景。
  11. 根据权利要求10所述的方法,其特征在于,所述目标场景是语音通话场景、录音场景、视频通话场景、发送语音消息场景以及视频播放场景中的任意一种场景。
  12. 根据权利要求2所述的方法,其特征在于,所述响应于所述第一声音信号满足指定条件,获取显示屏控制指令之前,还包括:
    对所述第一声音信号进行语音识别,获得所述语音识别结果;
    获取所述第一声音信号的音量幅值;
    响应于所述语音识别结果与第二关键词匹配,且所述音量幅值与幅值阈值之间的大小关系满足指定大小关系,确定所述第一声音信号满足所述指定条件。
  13. 根据权利要求12所述的方法,其特征在于,所述响应于所述第一声音信号满足指定条件,获取显示屏控制指令,包括:
    响应于所述音量幅值大于第三幅值阈值,获取与所述第二关键词相对应的所述显示屏控制指令;
    或者,
    响应于所述语音识别结果与所述第二关键词匹配,获取所述音量幅值所在的幅值区间,获取与所述幅值区间相对应的所述显示屏控制指令。
  14. 一种显示屏控制装置,其特征在于,所述装置用于终端中,所述终端中包含可伸缩的显示屏,所述装置包括:
    声音信号接收模块,用于接收输入的第一声音信号;
    控制指令获取模块,用于响应于所述第一声音信号满足指定条件,获取显示屏控制指令,所述显示屏控制指令用于指示对所述显示屏的伸展或者收缩进行控制;
    显示屏控制模块,用于根据所述显示屏控制指令,控制所述显示屏的伸展或者收缩。
  15. 一种计算机设备,其特征在于,所述计算机设备包含处理器和存储器,所述存储器中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由所述处理器加载并执行以实现如权利要求1至13任一所述的显示屏控制方法。
  16. 一种计算机可读存储介质,其特征在于,所述存储介质中存储有至少一条指令、至少一段程序、代码集或指令集,所述至少一条指令、所述至少一段程序、所述代码集或指令集由处理器加载并执行以实现如权利要求1至13任一所述的显示屏控制方法。
PCT/CN2021/090003 2020-06-18 2021-04-26 显示屏控制方法、装置、计算机设备以及存储介质 WO2021253992A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010561945.6 2020-06-18
CN202010561945.6A CN113905110B (zh) 2020-06-18 2020-06-18 显示屏控制方法、装置、计算机设备以及存储介质

Publications (1)

Publication Number Publication Date
WO2021253992A1 true WO2021253992A1 (zh) 2021-12-23

Family

ID=79186116

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090003 WO2021253992A1 (zh) 2020-06-18 2021-04-26 显示屏控制方法、装置、计算机设备以及存储介质

Country Status (2)

Country Link
CN (1) CN113905110B (zh)
WO (1) WO2021253992A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170050270A (ko) * 2015-10-30 2017-05-11 엘지전자 주식회사 롤러블 이동 단말기 및 그 제어 방법
CN108377279A (zh) * 2018-02-09 2018-08-07 维沃移动通信有限公司 一种移动终端及控制方法
CN109087649A (zh) * 2018-09-05 2018-12-25 努比亚技术有限公司 终端、终端控制方法及计算机可读存储介质
CN109286706A (zh) * 2018-10-12 2019-01-29 京东方科技集团股份有限公司 显示设备
CN110752973A (zh) * 2018-07-24 2020-02-04 Tcl集团股份有限公司 一种终端设备的控制方法、装置和终端设备
CN111385393A (zh) * 2018-12-29 2020-07-07 Oppo广东移动通信有限公司 一种电子设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010025887A (ko) * 1999-09-01 2001-04-06 공관식 마이크 겸용 안테나를 가진 셀룰라폰
US20140149216A1 (en) * 2013-09-24 2014-05-29 Peter McGie Voice Recognizing Digital Messageboard System and Method
KR20180024927A (ko) * 2016-08-31 2018-03-08 삼성전자주식회사 디스플레이 장치 및 디스플레이 장치의 제어 방법
CN109656498B (zh) * 2018-10-30 2021-12-03 努比亚技术有限公司 一种显示控制方法、柔性屏终端及计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170050270A (ko) * 2015-10-30 2017-05-11 엘지전자 주식회사 롤러블 이동 단말기 및 그 제어 방법
CN108377279A (zh) * 2018-02-09 2018-08-07 维沃移动通信有限公司 一种移动终端及控制方法
CN110752973A (zh) * 2018-07-24 2020-02-04 Tcl集团股份有限公司 一种终端设备的控制方法、装置和终端设备
CN109087649A (zh) * 2018-09-05 2018-12-25 努比亚技术有限公司 终端、终端控制方法及计算机可读存储介质
CN109286706A (zh) * 2018-10-12 2019-01-29 京东方科技集团股份有限公司 显示设备
CN111385393A (zh) * 2018-12-29 2020-07-07 Oppo广东移动通信有限公司 一种电子设备

Also Published As

Publication number Publication date
CN113905110B (zh) 2022-11-18
CN113905110A (zh) 2022-01-07

Similar Documents

Publication Publication Date Title
US10838765B2 (en) Task execution method for voice input and electronic device supporting the same
US11670302B2 (en) Voice processing method and electronic device supporting the same
US10978048B2 (en) Electronic apparatus for recognizing keyword included in your utterance to change to operating state and controlling method thereof
WO2019214361A1 (zh) 语音信号中关键词的检测方法、装置、终端及存储介质
TWI525532B (zh) Set the name of the person to wake up the name for voice manipulation
EP3528243A1 (en) System for processing user utterance and controlling method thereof
CN109192210B (zh) 一种语音识别的方法、唤醒词检测的方法及装置
WO2019153999A1 (zh) 一种基于语音控制的动向投影方法、装置及动向投影***
CN108810280B (zh) 语音采集频率的处理方法、装置、存储介质及电子设备
WO2019242414A1 (zh) 语音处理方法、装置、存储介质及电子设备
KR102369083B1 (ko) 음성 데이터 처리 방법 및 이를 지원하는 전자 장치
KR20190109916A (ko) 전자 장치 및 상기 전자 장치로부터 수신된 데이터를 처리하는 서버
CN115312068B (zh) 语音控制方法、设备及存储介质
US20210183388A1 (en) Voice recognition method and device, photographing system, and computer-readable storage medium
WO2022147692A1 (zh) 一种语音指令识别方法、电子设备以及非瞬态计算机可读存储介质
US20210020177A1 (en) Device for processing user voice input
CN112651235A (zh) 一种诗歌生成的方法及相关装置
WO2021169711A1 (zh) 指令执行方法、装置、存储介质及电子设备
WO2021253992A1 (zh) 显示屏控制方法、装置、计算机设备以及存储介质
CN110337030B (zh) 视频播放方法、装置、终端和计算机可读存储介质
EP4293664A1 (en) Voiceprint recognition method, graphical interface, and electronic device
WO2019242415A1 (zh) 位置提示方法、装置、存储介质及电子设备
CN109584877A (zh) 语音交互控制方法和装置
WO2021147417A1 (zh) 语音识别方法、装置、计算机设备及计算机可读存储介质
CN112017662B (zh) 控制指令确定方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21826257

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21826257

Country of ref document: EP

Kind code of ref document: A1