WO2020259674A1 - Method for preventing accidental touches on a curved screen, and electronic device - Google Patents

Method for preventing accidental touches on a curved screen, and electronic device

Info

Publication number
WO2020259674A1
WO2020259674A1 (PCT/CN2020/098446)
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
user
touch
preset
camera
Prior art date
Application number
PCT/CN2020/098446
Other languages
English (en)
French (fr)
Inventor
陈兰昊
徐世坤
崔建伟
于飞
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP20833347.6A (EP3961358B1)
Publication of WO2020259674A1
Priority to US17/562,397 (US11782554B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/0418Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/0418Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
    • G06F3/04186Touch location disambiguation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04104Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/12Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/22Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/52Details of telephonic subscriber devices including functional features of a camera

Definitions

  • the embodiments of the present application relate to the field of terminal technology, and in particular to a method for preventing accidental touch of a curved screen and an electronic device.
  • More and more electronic devices adopt curved screens, such as curved screen mobile phones.
  • The side of a curved screen mobile phone is a curved touch area; therefore, when the user holds the phone, a finger easily touches the curved side, which may cause accidental touches on the curved screen.
  • The embodiments of the present application provide a method for preventing accidental touches on a curved screen, and an electronic device, which can prevent accidental touches on the side of the curved screen and improve anti-mistouch accuracy, thereby improving the user experience of curved screen devices.
  • an embodiment of the present application provides a method for preventing accidental touches of a curved screen.
  • the method can be applied to an electronic device.
  • the touch screen of the electronic device is a curved screen with a curved side.
  • The method includes: the electronic device obtains the angle between the touch screen and the horizontal plane; in response to that angle being within a first preset angle range, the electronic device activates the camera; in response to the camera collecting a face image, the electronic device obtains the distance between the electronic device and the user, and the user's face yaw; and in response to the distance between the electronic device and the user being less than a first distance threshold and the face yaw being within a second preset angle range, the electronic device performs anti-mistouch processing on the user's preset mistouch operation on the touch screen.
  • The face yaw is the left-right rotation angle of the user's face orientation relative to a first connection line.
  • The first connection line is the line between the camera and the user's head.
  • If the camera can collect a face image, the distance between the electronic device and the user is less than the first distance threshold, and the user's face yaw is within the second preset angle range, the user is most likely using the electronic device in scene (1) or scene (2).
  • Scene (1): the user lies flat on his or her back and holds the phone with one hand.
  • Scene (2): the user lies on his or her side and holds the phone with one hand.
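The activation conditions above can be sketched as a simple predicate. This is an illustrative sketch only, not the patent's implementation; the distance threshold and the yaw range `k = 10` are assumed values picked from the ranges the text lists later.

```python
# Illustrative assumptions, not values from the patent.
FIRST_DISTANCE_THRESHOLD_CM = 30         # "first distance threshold"
SECOND_PRESET_RANGE_DEG = (-10.0, 10.0)  # [-k, k] with k = 10


def should_enable_side_antimistouch(angle_in_first_range, face_detected,
                                    distance_cm, face_yaw_deg):
    """True when all conditions of the method hold, i.e. the user is
    likely holding the phone one-handed in scene (1) or scene (2)."""
    # The camera is only started once the screen angle is in range,
    # and the face checks only run once a face image was collected.
    if not (angle_in_first_range and face_detected):
        return False
    lo, hi = SECOND_PRESET_RANGE_DEG
    return (distance_cm < FIRST_DISTANCE_THRESHOLD_CM
            and lo <= face_yaw_deg <= hi)
```

A face close to the device and looking nearly straight at it (small yaw) is what distinguishes the lying-down grip scenes from, say, a phone lying on a table.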
  • The preset mistouch operation is an accidental touch caused by the user's grip in the left arc area (e.g., the second side arc area) and the right arc area (e.g., the first side arc area) of the touch screen when holding the electronic device.
  • The preset mistouch operation can be recognized and given anti-mistouch processing, which improves the accuracy of the anti-mistouch function.
  • Since the electronic device can recognize the preset mistouch operation as an accidental touch, it can still respond to the user's other, non-accidental touch operations on the curved screen while the first contact surface and the second contact surface exist. The problem of user taps failing therefore does not arise, which improves the user experience of curved screen devices.
  • The first contact surface is the contact surface, collected by the electronic device, between the touch screen and the user's hand when the user holds the electronic device.
  • The second contact surface is the contact surface, collected by the electronic device, between the touch screen and the user's fingers when the user holds the electronic device. That is, the electronic device can determine whether a touch operation is a preset mistouch operation according to whether the position of the touch operation on the touch screen falls in the first side arc area or the second side arc area, and according to the shape of the operation's contact surface on the touch screen.
  • The duration of a preset mistouch operation produced when the user holds the electronic device is generally long, while the duration of a normal touch operation on the touch screen is generally short.
  • Therefore, when recognizing a preset mistouch operation, the electronic device can refer not only to the shape of the contact surface of the touch operation, but also to whether the duration of the touch operation exceeds a preset time.
  • Specifically, the preset mistouch operation may include: a touch operation, collected while the user holds the electronic device, whose contact duration in the first side arc area is longer than a first preset time and whose moving distance within the first side arc area is less than a second distance threshold.
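The recognition rule just described (long-lived, nearly stationary contact in the side arc area) can be written as a small predicate. The default thresholds below are illustrative assumptions; the patent leaves the first preset time and second distance threshold unspecified.

```python
def is_preset_mistouch(in_first_side_arc, contact_duration_s, moved_mm,
                       first_preset_time_s=1.0, second_distance_mm=5.0):
    """Classify a touch as a preset mistouch operation: a contact in the
    first side arc area that has lasted longer than the first preset time
    and has moved less than the second distance threshold is treated as
    part of the user's grip rather than an intentional input."""
    return (in_first_side_arc
            and contact_duration_s > first_preset_time_s
            and moved_mm < second_distance_mm)
```

A short tap (small duration) or a swipe (large movement) in the same area fails the predicate and is still delivered as a normal touch.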
  • The above-mentioned electronic device performing anti-mistouch processing on the user's preset mistouch operation on the touch screen includes: the electronic device receives the user's first touch operation on the touch screen; using a preset anti-mistouch algorithm, it recognizes that the first touch operation is a preset mistouch operation; and the electronic device does not respond to the first touch operation.
  • When the electronic device determines that a touch operation is a preset mistouch operation, it may misjudge the operation, which affects the accuracy of the anti-mistouch processing.
  • To reduce misjudgments, the electronic device can continue to determine whether the touch operation identified above as a preset mistouch operation subsequently moves a large distance.
  • The method of the embodiment of the present application therefore also includes: the electronic device determines that the moving distance of the first touch operation within a second preset time is less than or equal to a third distance threshold. The second preset time is a time period that starts when the electronic device recognizes the first touch operation as a preset mistouch operation and lasts for a first preset duration.
  • In this case, the electronic device performs anti-mistouch processing on the touch operation, that is, it does not respond to the first touch operation.
  • the preset mistouch operation (ie, the first touch operation) recognized by the electronic device may include one or more touch operations.
  • the aforementioned first touch operation may include a touch operation corresponding to a first contact surface and a touch operation corresponding to x second contact surfaces.
  • The method of the embodiment of the present application further includes: the electronic device determines that the moving distance of at least one touch operation in the first touch operation within the second preset time is greater than the third distance threshold; in response, the electronic device executes the event corresponding to that at least one touch operation. The electronic device does not respond to the touch operations in the first touch operation other than the at least one touch operation.
  • If the electronic device determines that the moving distance of at least one touch operation in the first touch operation within the second preset time is greater than the third distance threshold, it means that the electronic device misjudged the at least one touch operation.
  • In this case, the electronic device can correct the misjudgment so that a valid touch is not falsely rejected: it executes the corresponding event in response to the at least one touch operation. For the other touch operations in the first touch operation, no misjudgment occurred, and the electronic device does not respond to them.
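The re-check described above can be sketched as follows: after the second preset time, touches classified as mistouches are partitioned by how far they have since moved. The threshold value and the tuple shape of the input are illustrative assumptions.

```python
THIRD_DISTANCE_MM = 8.0  # illustrative assumption


def recheck_mistouch_window(touches):
    """touches: iterable of (touch_id, moved_mm) pairs, where moved_mm is
    the distance each suppressed touch has moved during the second preset
    time. A large movement means the mistouch classification was a
    misjudgment, so that touch should be responded to after all; the rest
    stay suppressed. Returns (respond_ids, ignore_ids)."""
    respond = [tid for tid, moved in touches if moved > THIRD_DISTANCE_MM]
    ignore = [tid for tid, moved in touches if moved <= THIRD_DISTANCE_MM]
    return respond, ignore
```

Because the first touch operation may bundle one palm contact and up to four finger contacts, the partition is per-contact: one finger starting a swipe is recovered while the grip contacts stay suppressed.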
  • The method of the embodiment of the present application further includes: if the electronic device receives the user's second touch operation in the second side arc area after a third preset time, the electronic device executes, in response to the second touch operation, the event corresponding to the second touch operation.
  • the third preset time is a time period after the electronic device recognizes that the first touch operation is a preset mistouch operation, and the duration is the second preset duration.
  • The aforementioned first preset angle range includes at least one of [-n°, n°] and [90°-m°, 90°+m°].
  • The value range of n includes at least any one of (0, 20), (0, 15) or (0, 10); the value range of m includes at least any one of (0, 20), (0, 15) or (0, 10).
  • The second preset angle range is [-k°, k°], where the value range of k includes at least any one of (0, 15), (0, 10) or (0, 5).
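The two angle ranges can be checked directly. In this sketch, `n = m = k = 10` are assumed example values from the listed options: a screen angle near 0° matches the lying-flat scene (1) and an angle near 90° matches the lying-on-one's-side scene (2).

```python
def angle_in_first_preset_range(angle_deg, n=10.0, m=10.0):
    """Screen-vs-horizontal angle check: [-n, n] covers a roughly
    horizontal screen (user lying on the back, phone held overhead);
    [90-m, 90+m] covers a roughly vertical screen (user lying on a
    side). n and m are illustrative choices from the listed ranges."""
    return (-n <= angle_deg <= n) or (90.0 - m <= angle_deg <= 90.0 + m)


def yaw_in_second_preset_range(yaw_deg, k=10.0):
    """Face yaw check: the user is looking almost straight at the
    camera when the yaw is within [-k, k]."""
    return -k <= yaw_deg <= k
```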
  • the above-mentioned electronic device obtains the angle between the touch screen and the horizontal plane, including: the electronic device obtains the angle between the touch screen and the horizontal plane through one or more sensors.
  • the one or more sensors may at least include a gyroscope sensor.
  • The above electronic device further includes a structured light camera module. The structured light camera module includes a light projector, a first camera and a second camera, and the distance between the first camera and the second camera is a first length.
  • The electronic device obtaining, in response to the face image collected by the camera, the distance between the electronic device and the user includes: in response to the face image collected by the camera, the electronic device emits light information through the light projector, collects first image information of the user's face corresponding to the face image through the first camera, and collects second image information of the face through the second camera.
  • The first image information and the second image information include features of the face. The electronic device calculates depth information of the face using the first image information, the second image information, the first length, the focal length of the lens of the first camera and the focal length of the lens of the second camera; according to the depth information of the face, the electronic device then calculates the distance between the electronic device and the user, and the user's face yaw.
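The depth computation described above is standard binocular triangulation. As a hedged sketch (the patent does not give the formula explicitly): with two cameras separated by the first length T, a facial feature seen at horizontally shifted positions in the two images has disparity d, and its depth is Z = f·T/d for focal length f expressed in pixels.

```python
def face_depth_mm(focal_px, baseline_mm, disparity_px):
    """Binocular triangulation sketch: Z = f * T / d, where f is the
    focal length in pixels, T the baseline (the 'first length' between
    the two cameras) and d the disparity of the same facial feature
    between the first and second image information. Units: mm out for
    mm baseline in."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

For example, with an assumed 40 mm baseline, a 1000 px focal length and a 100 px disparity, the feature lies about 400 mm from the device; per-feature depths over the face give the depth information from which distance and yaw are derived.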
  • the above electronic device further includes a distance sensor.
  • the electronic device obtains the distance between the electronic device and the user, including: in response to the face image collected by the camera, the electronic device obtains the distance between the electronic device and the user through a distance sensor.
  • an embodiment of the present application provides an electronic device, which includes a processor, a memory, a touch screen, and a camera.
  • the touch screen is a curved screen with a curved side.
  • The processor is used to obtain the angle between the touch screen and the horizontal plane and, in response to that angle being within the first preset angle range, to start the camera. The camera is used to collect images. The processor is also used to obtain, in response to the face image collected by the camera, the distance between the electronic device and the user, and the user's face yaw.
  • The face yaw is the left-right rotation angle of the user's face orientation relative to a first connection line.
  • The first connection line is the line between the camera and the user's head. In response to the distance between the electronic device and the user being less than the first distance threshold, and the face yaw being within the second preset angle range, the processor performs anti-mistouch processing on the user's preset mistouch operation on the touch screen.
  • The contact surfaces of the user's hand and the touch screen are: a first contact surface in the arc area on the first side of the touch screen, and x second contact surfaces in the arc area on the second side of the touch screen, where 1 ≤ x ≤ 4 and x is a positive integer.
  • The first contact surface is the contact surface, collected by the electronic device, between the touch screen and the user's hand when the user holds the electronic device;
  • the second contact surface is the contact surface, collected by the electronic device, between the touch screen and the user's fingers when the user holds the electronic device.
  • The preset mistouch operation includes: a touch operation, collected while the user holds the electronic device, whose contact duration in the first side arc area is longer than the first preset time and whose moving distance in the first side arc area is less than the second distance threshold.
  • The processor being used to perform anti-mistouch processing on the user's preset mistouch operation on the touch screen includes: the processor is specifically used to receive the user's first touch operation on the touch screen, recognize the first touch operation as a preset mistouch operation using a preset anti-mistouch algorithm, and not respond to the first touch operation.
  • The above-mentioned processor is further configured to determine, after recognizing the first touch operation as a preset mistouch operation using the preset anti-mistouch algorithm and before declining to respond to the first touch operation, that the moving distance of the first touch operation within the second preset time is less than or equal to the third distance threshold.
  • the second preset time is a time period after the electronic device recognizes that the first touch operation is a preset mistouch operation, and the duration is the first preset duration.
  • the first touch operation includes one or more touch operations.
  • The above-mentioned processor is further configured to determine that the moving distance of at least one touch operation in the first touch operation within the second preset time is greater than the third distance threshold, execute the event corresponding to the at least one touch operation in response to it, and not respond to the touch operations in the first touch operation other than the at least one touch operation.
  • The above-mentioned processor is further configured to, if the user's second touch operation in the second side arc area is received after the third preset time, execute the event corresponding to the second touch operation in response to the second touch operation.
  • the third preset time is a time period after the electronic device recognizes that the first touch operation is a preset mistouch operation, and the duration is the second preset duration.
  • The aforementioned first preset angle range includes at least one of [-n°, n°] and [90°-m°, 90°+m°];
  • the value range of n includes at least any one of (0, 20), (0, 15) or (0, 10);
  • the value range of m includes at least any one of (0, 20), (0, 15) or (0, 10).
  • The second preset angle range is [-k°, k°], where the value range of k includes at least any one of (0, 15), (0, 10) or (0, 5).
  • the above electronic device further includes: one or more sensors, and the one or more sensors include at least a gyroscope sensor.
  • the processor is configured to obtain the angle between the touch screen and the horizontal plane, and includes: the processor is specifically configured to obtain the angle between the touch screen and the horizontal plane through the one or more sensors.
  • The above electronic device further includes a structured light camera module.
  • The structured light camera module includes a light projector, a first camera and a second camera, and the distance between the first camera and the second camera is the first length.
  • The processor being used to obtain, in response to the face image collected by the camera, the distance between the electronic device and the user includes: the processor is specifically used to, in response to the face image collected by the camera, emit light information through the light projector, collect first image information of the user's face corresponding to the face image through the first camera, and collect second image information of the face through the second camera.
  • The first image information and the second image information include features of the face. The processor calculates depth information of the face based on the first image information, the second image information, the first length, the focal length of the lens of the first camera and the focal length of the lens of the second camera; according to the depth information of the face, it calculates the distance between the electronic device and the user, and the user's face yaw.
  • the above electronic device further includes a distance sensor.
  • The processor being used to obtain, in response to the face image collected by the camera, the distance between the electronic device and the user includes: the processor is specifically used to obtain, in response to the face image collected by the camera, the distance between the electronic device and the user through the distance sensor.
  • The present application provides a chip system applied to an electronic device that includes a touch screen. The chip system includes one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected. The interface circuit is used to receive a signal from the memory of the electronic device and send the signal to the processor, the signal including the computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device executes the method described in the first aspect and any of its possible designs.
  • An embodiment of the present application provides a computer storage medium that includes computer instructions.
  • When the computer instructions are executed on an electronic device, the electronic device is caused to execute the method for preventing accidental touches on a curved screen according to the first aspect and any of its possible designs.
  • The embodiments of the present application provide a computer program product which, when run on a computer, causes the computer to execute the method for preventing accidental touches on a curved screen according to the first aspect and any of its possible designs.
  • The electronic device described in the second aspect and its possible designs, the chip system described in the third aspect, the computer storage medium described in the fourth aspect, and the computer program product described in the fifth aspect are all used to implement the corresponding methods provided above. For the beneficial effects they can achieve, refer to the beneficial effects of the corresponding methods, which are not repeated here.
  • FIG. 1 is a schematic diagram of a product form of a curved screen mobile phone provided by an embodiment of this application;
  • FIG. 2 is a schematic diagram of a curved screen mobile phone provided by an embodiment of this application being held by a user;
  • FIG. 3 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application;
  • FIG. 4 is a schematic diagram of the software system architecture of an electronic device provided by an embodiment of this application;
  • FIG. 5 is a flowchart of a method for preventing accidental touches on a curved screen provided by an embodiment of this application;
  • FIG. 6 is a schematic diagram of a gyroscope coordinate system and a geographic coordinate system provided by an embodiment of this application;
  • FIG. 7 is a schematic diagram of the calculation principle of depth information provided by an embodiment of this application;
  • FIG. 8 is a schematic diagram of a face yaw provided by an embodiment of this application;
  • FIG. 9 is a schematic diagram of the algorithm logic architecture of a preset anti-mistouch algorithm provided by an embodiment of this application;
  • FIG. 10 is another schematic diagram of the algorithm logic architecture of a preset anti-mistouch algorithm provided by an embodiment of this application;
  • FIG. 11 is a schematic structural diagram of a chip system provided by an embodiment of this application.
  • The terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or as implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments, unless otherwise specified, "plurality" means two or more.
  • the embodiment of the present application provides a method for preventing accidental touch of a curved screen, which can be applied to an electronic device, and the touch screen of the electronic device is a curved screen with a curved side.
  • the electronic device as the curved screen mobile phone shown in FIG. 1 as an example.
  • FIG. 1 shows a perspective view and a front view of the curved screen mobile phone 100.
  • the touch screen of the mobile phone 100 is a curved screen with a curvature on the left side 10 and the right side 20.
  • The touch screen of a curved screen mobile phone is a curved screen with curvature on the sides; therefore, when the user holds the phone, the user's fingers touch the curved areas of the touch screen over a large area.
  • the user holds a curved screen mobile phone with the right hand as an example.
  • The contact surface between the curved area on the left side of the curved screen and the web of the user's right hand (between the thumb and index finger) together with the thumb is contact surface 1, and the contact surface between the other fingers of the user's right hand and the curved area on the right side of the curved screen is contact surface 2.
  • Contact surface 2 may include 1-4 contact points; in FIG. 2, contact surface 2 includes 4 contact points as an example.
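The grip pattern just described can be sketched as a check over the raw contact list. Everything here is an illustrative assumption (region labels, tuple shape, the 50 mm² cut-off that distinguishes the large palm/web contact from a fingertip); the patent itself works on contact-surface shape and position without giving concrete numbers.

```python
def matches_right_hand_grip(contacts):
    """contacts: list of (region, area_mm2) tuples with region in
    {'left_arc', 'right_arc'}. Returns True for the holding pattern of
    FIG. 2: exactly one large contact in the left arc area (contact
    surface 1) and one to four finger contacts in the right arc area
    (contact surface 2)."""
    large_left = [a for r, a in contacts if r == "left_arc" and a >= 50.0]
    right = [a for r, a in contacts if r == "right_arc"]
    return len(large_left) == 1 and 1 <= len(right) <= 4
```

A mirrored check with the regions swapped would cover a left-hand grip.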
  • In a conventional anti-mistouch solution, after the mobile phone collects the user's touch operations on the touch screen, it applies anti-mistouch processing to small-area contact points on the sides of the touch screen, but not to large-area contact surfaces such as contact surface 1 and contact surface 2 shown in FIG. 2. As a result, with the conventional solution the mobile phone 100 does not perform anti-mistouch processing on the touch operations corresponding to contact surface 1 and contact surface 2, which increases the possibility of misoperation. That is, the conventional anti-mistouch solution is not suitable for electronic devices with the above-mentioned curved screen.
  • Scene (1) A scene where the user lies flat and holds the phone with one hand.
  • Scene (2) A scene where the user lies on his side and holds the phone with one hand.
• the mobile phone 100 does not perform anti-mistouch processing on the touch operations corresponding to contact surface 1 and contact surface 2; that is, the mobile phone 100 recognizes the touch operations corresponding to contact surface 1 and contact surface 2 as normal touch operations (that is, non-mistouch operations). Then, while contact surface 1 and contact surface 2 exist, the mobile phone 100 cannot respond to the user's other normal touch operations on the curved screen, so the problem of user clicks failing occurs, which affects the user experience.
• the electronic device can identify the scene in which it is located; when the electronic device recognizes that it is in a preset mistouch scene, it can activate the preset anti-mistouch algorithm.
• using the preset anti-mistouch algorithm can realize anti-mistouch on the side of the curved screen. That is, by adopting the preset anti-mistouch algorithm, the electronic device can recognize the touch operations corresponding to contact surface 1 and contact surface 2 shown in (b) of FIG. 2 as mistouch operations, which can improve the accuracy of mistouch prevention.
• because the electronic device can recognize the touch operations corresponding to contact surface 1 and contact surface 2 shown in (b) of FIG. 2 as mistouch operations, the electronic device can respond to the user's other normal touch operations on the curved screen; the problem of user clicks failing does not occur, and the user experience of the curved screen device can be improved.
• when a user uses an electronic device in the aforementioned scene (1) or scene (2), the electronic device may be in the aforementioned preset mistouch scene.
• for a detailed introduction of the preset mistouch scene, refer to the descriptions in the following embodiments; details are not repeated here in the embodiments of this application.
• the electronic devices in the embodiments of the present application may be mobile phones, tablet computers, desktop computers, laptop computers, handheld computers, notebook computers, ultra-mobile personal computers (UMPC), netbooks, cellular phones, and the like.
  • FIG. 3 is a schematic structural diagram of an electronic device 300 according to an embodiment of this application.
  • the electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charging management module 340, a power management module 341, and a battery 342 , Antenna 1, antenna 2, mobile communication module 350, wireless communication module 360, audio module 370, speaker 370A, receiver 370B, microphone 370C, earphone interface 370D, sensor module 380, buttons 390, motor 391, indicator 392, camera 393 , Display screen 394, subscriber identification module (SIM) card interface 395, etc.
  • the sensor module 380 may include pressure sensor 380A, gyroscope sensor 380B, air pressure sensor 380C, magnetic sensor 380D, acceleration sensor 380E, distance sensor 380F, proximity light sensor 380G, fingerprint sensor 380H, temperature sensor 380J, touch sensor 380K, environment Light sensor 380L, bone conduction sensor 380M, etc.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 300.
  • the electronic device 300 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 310 may include one or more processing units.
• the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 300.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 310 to store instructions and data.
  • the memory in the processor 310 is a cache memory.
• the memory can store instructions or data that the processor 310 has just used or uses cyclically. If the processor 310 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 310, and improves system efficiency.
  • the processor 310 may include one or more interfaces.
• the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in this embodiment is merely a schematic description, and does not constitute a structural limitation of the electronic device 300.
  • the electronic device 300 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 340 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 340 may receive the charging input of the wired charger through the USB interface 330.
  • the charging management module 340 may receive the wireless charging input through the wireless charging coil of the electronic device 300. While the charging management module 340 charges the battery 342, it can also supply power to the electronic device through the power management module 341.
  • the power management module 341 is used to connect the battery 342, the charging management module 340 and the processor 310.
  • the power management module 341 receives input from the battery 342 and/or the charge management module 340, and supplies power to the processor 310, the internal memory 321, the external memory, the display screen 394, the camera 393, and the wireless communication module 360.
  • the power management module 341 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 341 may also be provided in the processor 310.
  • the power management module 341 and the charging management module 340 may also be provided in the same device.
  • the wireless communication function of the electronic device 300 can be realized by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 300 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 350 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 300.
  • the mobile communication module 350 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on.
  • the mobile communication module 350 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 350 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation by the antenna 1.
  • at least part of the functional modules of the mobile communication module 350 may be provided in the processor 310.
  • at least part of the functional modules of the mobile communication module 350 and at least part of the modules of the processor 310 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 370A, a receiver 370B, etc.), or displays an image or video through the display screen 394.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 310 and be provided in the same device as the mobile communication module 350 or other functional modules.
• the wireless communication module 360 can provide wireless communication solutions applied to the electronic device 300, including wireless local area network (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 360 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 360 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 310.
  • the wireless communication module 360 may also receive the signal to be sent from the processor 310, perform frequency modulation, amplify, and convert it into electromagnetic waves through the antenna 2 and radiate it out.
  • the antenna 1 of the electronic device 300 is coupled with the mobile communication module 350, and the antenna 2 is coupled with the wireless communication module 360, so that the electronic device 300 can communicate with the network and other devices through wireless communication technology.
• the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite-based augmentation systems (SBAS).
  • the electronic device 300 implements a display function through a GPU, a display screen 394, and an application processor.
  • the GPU is an image processing microprocessor, which is connected to the display screen 394 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 394 is used to display images, videos, etc.
  • the display screen 394 is a touch screen.
  • the touch screen is a curved screen with curved sides.
  • the display screen 394 includes a display panel.
• the display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 300 can implement a shooting function through an ISP, a camera 393, a video codec, a GPU, a display screen 394, and an application processor.
  • the ISP is used to process the data fed back by the camera 393. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 393.
  • the camera 393 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats.
  • the electronic device 300 may include 1 or N cameras 393, and N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 300 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 300 may support one or more video codecs. In this way, the electronic device 300 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 300, such as image recognition, face recognition, voice recognition, text understanding, etc.
  • the external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 300.
  • the external memory card communicates with the processor 310 through the external memory interface 320 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 321 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 310 executes various functional applications and data processing of the electronic device 300 by running instructions stored in the internal memory 321.
• the processor 310 can execute the instructions stored in the internal memory 321 to respond to the user's first operation or second operation on the display screen 394 (that is, the curved screen), and display the corresponding content on the display screen 394.
  • the internal memory 321 may include a storage program area and a storage data area. Among them, the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 300.
  • the internal memory 321 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the electronic device 300 can implement audio functions through an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, a headphone interface 370D, and an application processor. For example, music playback, recording, etc.
  • the audio module 370 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 370 can also be used to encode and decode audio signals.
  • the audio module 370 may be provided in the processor 310, or part of the functional modules of the audio module 370 may be provided in the processor 310.
• the speaker 370A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 300 can listen to music through the speaker 370A, or listen to a hands-free call.
  • the receiver 370B also called “earpiece”, is used to convert audio electrical signals into sound signals.
• when the electronic device 300 answers a call or receives a voice message, the receiver 370B can be brought close to the human ear to receive the voice.
• the microphone 370C, also called a "mic", is used to convert sound signals into electrical signals.
• when making a sound, the user can speak with the mouth close to the microphone 370C to input the sound signal into the microphone 370C.
  • the electronic device 300 may be provided with at least one microphone 370C. In some other embodiments, the electronic device 300 can be provided with two microphones 370C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 300 may also be provided with three, four or more microphones 370C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 370D is used to connect wired earphones.
  • the earphone interface 370D may be a USB interface 330, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the pressure sensor 380A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • the pressure sensor 380A may be provided on the display screen 394.
  • the capacitive pressure sensor may include at least two parallel plates with conductive material. When a force is applied to the pressure sensor 380A, the capacitance between the electrodes changes.
  • the electronic device 300 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 300 detects the intensity of the touch operation according to the pressure sensor 380A.
  • the electronic device 300 may also calculate the touched position according to the detection signal of the pressure sensor 380A.
  • touch operations that act on the same touch location but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
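The force-dependent dispatch in the short-message example above can be sketched as follows; the threshold value and function name are hypothetical:

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized touch force

def sms_icon_action(touch_force):
    """Map the force of a touch on the short message icon to an instruction,
    as in the example above: light press views, firm press composes."""
    if touch_force < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # force below the first threshold
    return "create_new_short_message"      # force at or above the threshold
```

The same touch location thus yields different operation instructions depending only on the measured pressure.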
  • the gyro sensor 380B can be used to determine the movement posture of the electronic device 300.
• the angular velocity of the electronic device 300 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 380B.
  • the gyro sensor 380B can be used for shooting anti-shake.
• the display screen 394 (i.e., the curved screen) of the electronic device 300 may include a gyroscope sensor (such as the above-mentioned gyroscope sensor 380B) for measuring the orientation of the display screen 394 (i.e., the direction vector of the orientation).
• the orientation of the display screen 394 can be used to determine the angle between the display screen 394 and the horizontal plane.
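One way to derive the screen-to-horizontal angle from the orientation vector is sketched below, assuming the vector is the normal of the display plane (an assumption for illustration; the patent does not specify the computation):

```python
import math

def screen_angle_with_horizontal(normal):
    """Angle in degrees between the display plane and the horizontal plane.

    `normal` is the direction vector of the screen's orientation. When the
    normal points straight up, the screen lies flat (angle 0); when the
    normal is horizontal, the screen stands vertical (angle 90).
    """
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    # The plane-to-horizontal angle equals the angle between the normal and
    # the vertical axis; abs() makes face-up and face-down equivalent.
    return math.degrees(math.acos(abs(nz) / norm))
```

A phone lying flat on a table, for example, reports a normal of (0, 0, 1) and an angle of 0 degrees.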
  • the magnetic sensor 380D includes a Hall sensor.
  • the electronic device 300 can use the magnetic sensor 380D to detect the opening and closing of the flip holster.
  • the acceleration sensor 380E can detect the magnitude of the acceleration of the electronic device 300 in various directions (generally three axes). When the electronic device 300 is stationary, the magnitude and direction of gravity can be detected.
  • the electronic device 300 can measure the distance by infrared or laser.
  • the electronic device 300 may measure the distance between the electronic device 300 and the human face through the distance sensor 380F.
  • the proximity light sensor 380G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 300 emits infrared light to the outside through the light emitting diode.
  • the electronic device 300 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 300. When insufficient reflected light is detected, the electronic device 300 can determine that there is no object near the electronic device 300.
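The presence decision described above reduces to comparing the measured reflected light against a threshold; the value here is hypothetical:

```python
REFLECTED_LIGHT_THRESHOLD = 0.3  # hypothetical "sufficient reflection" level

def object_nearby(reflected_light):
    """True if the photodiode reading indicates an object near the device,
    mirroring the sufficient/insufficient reflected-light rule above."""
    return reflected_light >= REFLECTED_LIGHT_THRESHOLD
```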
  • the ambient light sensor 380L is used to sense the brightness of the ambient light.
  • the electronic device 300 can adaptively adjust the brightness of the display screen 394 according to the perceived brightness of the ambient light.
  • the ambient light sensor 380L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 380L can also cooperate with the proximity light sensor 380G to detect whether the electronic device 300 is in the pocket to prevent accidental touch.
  • the fingerprint sensor 380H is used to collect fingerprints.
  • the electronic device 300 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • the temperature sensor 380J is used to detect temperature.
• the electronic device 300 uses the temperature detected by the temperature sensor 380J to execute its temperature processing strategy. For example, when the temperature reported by the temperature sensor 380J exceeds a threshold, the electronic device 300 reduces the performance of a processor located near the temperature sensor 380J, so as to reduce power consumption and implement thermal protection.
  • the electronic device 300 when the temperature is lower than another threshold, the electronic device 300 heats the battery 342 to avoid abnormal shutdown of the electronic device 300 due to low temperature.
  • the electronic device 300 boosts the output voltage of the battery 342 to avoid abnormal shutdown caused by low temperature.
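The temperature processing strategy in the last few paragraphs is a set of threshold rules. A minimal sketch, with temperatures and action names as hypothetical placeholders:

```python
HIGH_TEMP_C = 45.0  # hypothetical overheating threshold
LOW_TEMP_C = 0.0    # hypothetical low-temperature threshold

def thermal_policy(temp_c):
    """Pick an action from the temperature reported by the temperature sensor."""
    if temp_c > HIGH_TEMP_C:
        # Throttle the nearby processor to cut power consumption and heat.
        return "reduce_processor_performance"
    if temp_c < LOW_TEMP_C:
        # Heat the battery, or boost its output voltage, to avoid an
        # abnormal low-temperature shutdown.
        return "heat_battery_or_boost_voltage"
    return "normal"
```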
  • Touch sensor 380K also known as "touch panel”.
  • the touch sensor 380K can be arranged on the display screen 394, and the touch screen is composed of the touch sensor 380K and the display screen 394, which is also called a “touch screen”.
  • the touch sensor 380K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 394.
  • the touch sensor 380K may also be disposed on the surface of the electronic device 300, which is different from the position of the display screen 394.
  • the bone conduction sensor 380M can acquire vibration signals.
  • the bone conduction sensor 380M can acquire the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 380M can also contact the human pulse and receive blood pressure pulse signals.
  • the button 390 includes a power button, a volume button and so on.
  • the button 390 may be a mechanical button. It can also be a touch button.
  • the electronic device 300 may receive key input, and generate key signal input related to user settings and function control of the electronic device 300.
  • the motor 391 can generate vibration prompts.
  • the motor 391 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • the indicator 392 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 395 is used to connect to the SIM card. The SIM card can be inserted into the SIM card interface 395 or pulled out from the SIM card interface 395 to achieve contact and separation with the electronic device 300.
  • FIG. 4 takes an Android system as an example to show a schematic diagram of a software system architecture of an electronic device 300 provided by an embodiment of the present application.
  • the possible software system of the electronic device 300 includes a framework (Framework) layer, a hardware abstraction layer (HAL) and a kernel (Kernel) layer.
  • the display screen (such as a touch screen) of a mobile phone includes a touch sensor, which is used to collect a user's touch operation on the touch screen and obtain touch information corresponding to the touch operation (such as touch information 1).
  • the touch information may include size and position information of the touch surface corresponding to the touch operation, and information such as the pressing force of the touch operation.
  • the driver of the Kernel layer can report the touch information 1 to the hardware abstraction layer HAL (ie, execute S1 shown in FIG. 4).
  • the hardware abstraction layer HAL includes multiple TP algorithms, such as a calibration algorithm and the anti-mistouch algorithm preset in the embodiment of the present application.
  • the TP algorithm in the hardware abstraction layer HAL can process the touch information 1 (including false touch prevention processing) to obtain the touch information 2. Subsequently, the hardware abstraction layer HAL can send the touch information 2 to the input system of the Kernel layer (that is, execute S2 shown in FIG. 4). After the input system of the Kernel layer receives the touch information 2, it can report the touch information to the upper layer (such as the Framework layer), and the upper layer responds to the aforementioned touch operation according to the touch information.
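The S1/S2 flow above can be sketched as a small pipeline. The filtering rule, field names, and return values are hypothetical; only the layering (driver → HAL TP algorithms → input system → Framework) follows the description:

```python
def kernel_report(raw_event):
    # S1: the Kernel-layer driver packages touch information 1 for the HAL.
    return {"zone": raw_event["zone"], "area": raw_event["area"],
            "force": raw_event["force"]}

def hal_tp_process(info1, mistouch_enabled):
    # The HAL's TP algorithms turn touch information 1 into touch
    # information 2. With the preset anti-mistouch algorithm active, edge
    # contacts are dropped (a hypothetical stand-in for the real rule).
    if mistouch_enabled and info1["zone"] in ("left_edge", "right_edge"):
        return None
    return info1

def dispatch_touch(raw_event, mistouch_enabled=False):
    info2 = hal_tp_process(kernel_report(raw_event), mistouch_enabled)
    if info2 is None:
        return "suppressed"
    # S2: the input system reports touch information 2 to the Framework
    # layer, which responds to the touch operation.
    return "reported_to_framework"
```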
• the mobile phone can activate the preset anti-mistouch algorithm among the multiple TP algorithms when the mobile phone is in the preset mistouch scene, and the hardware abstraction layer HAL uses the preset anti-mistouch algorithm to perform, on the touch information 1, the anti-mistouch processing described in the embodiments of this application.
• that the mobile phone is in the preset mistouch scene may specifically mean: the angle between the touch screen of the mobile phone and the horizontal plane is within the first preset angle range; the camera of the mobile phone collects the user's face image; the distance between the touch screen of the mobile phone and the user's face is less than the first distance threshold; and the face yaw of the user is within the second preset angle range.
• the face yaw of the user is the left-right rotation angle of the user's face orientation relative to a first line, where the first line is the line connecting the camera of the mobile phone and the user's head.
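The four conditions of the preset mistouch scene combine into a single predicate. A minimal sketch, with all threshold values hypothetical (the patent names the ranges but not their bounds):

```python
FIRST_ANGLE_RANGE_DEG = (0.0, 30.0)   # hypothetical first preset angle range
SECOND_YAW_BOUND_DEG = 20.0           # hypothetical |face yaw| bound
FIRST_DISTANCE_THRESHOLD_CM = 40.0    # hypothetical first distance threshold

def in_preset_mistouch_scene(screen_angle_deg, face_detected,
                             face_distance_cm, face_yaw_deg):
    """All four conditions must hold before the preset anti-mistouch
    algorithm is activated."""
    return (FIRST_ANGLE_RANGE_DEG[0] <= screen_angle_deg <= FIRST_ANGLE_RANGE_DEG[1]
            and face_detected
            and face_distance_cm < FIRST_DISTANCE_THRESHOLD_CM
            and abs(face_yaw_deg) <= SECOND_YAW_BOUND_DEG)
```

For example, a phone lying nearly flat with the user's face detected close by and roughly facing the camera satisfies the predicate; a phone held upright does not.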
• the following illustrates, with reference to the software system architecture shown in FIG. 4, the working principle by which the electronic device (such as a mobile phone) in the embodiments of this application starts the preset anti-mistouch algorithm: when the mobile phone is powered on, a system service (system server, i.e., system_server, SS for short) process can be started through the zygote process; the SS process is used to determine whether the mobile phone is in the preset mistouch scene. If the SS process determines that the phone is in the preset mistouch scene, the preset anti-mistouch algorithm can be activated.
  • the gravity angle detection module (such as a gyroscope sensor) of the mobile phone can collect the angle between the curved screen of the mobile phone and the horizontal plane.
  • the SS process monitors the change of the included angle collected by the gravity angle detection module. If the SS process monitors that the angle between the curved screen of the mobile phone and the horizontal plane is within the first preset angle range, the 3D structured light detection module can be activated.
  • the 3D structured light detection module may include a camera, a distance sensor, and an infrared light sensor. The 3D structured light detection module collects face images, the distance between the face and the phone, and the angle between the face and the phone (ie, the yaw of the face).
  • the SS process can enable the preset anti-mistouch algorithm, that is, execute S0 as shown in Figure 4.
  • For example, the preset anti-mistouch algorithm may be the AFT 2SA algorithm.
  • The preset anti-mistouch algorithm can be integrated in the TP daemon of the HAL layer.
  • The preset anti-mistouch algorithm is used to identify, according to the position and shape of the contact surface corresponding to the touch operation collected by the TP, the preset mistouch operation (such as the touch operations corresponding to the above contact surface 1 and contact surface 2).
  • the init process is first started, and then the init process loads the Android file system, creates a system directory, initializes the attribute system, and starts some daemon processes.
  • the zygote process is the most important daemon process.
  • the SS process is the first process fork of the zygote process, and it is also the core process of the entire Android system.
  • the touch screen of the mobile phone is a curved screen with curved sides.
  • The method for preventing accidental touch on the curved screen may include: (1) a preset anti-mistouch scene judgment process; (2) an anti-mistouch processing flow.
  • the mobile phone can first execute (1) the preset anti-mistouch scene judgment process to determine whether the mobile phone is in the preset anti-mistouch scene. If the mobile phone is in a preset anti-incorrect touch scene, the mobile phone can execute (2) anti-inadvertent touch processing flow, start the preset anti-incorrect touch algorithm, and perform anti-inadvertent touch processing on the preset accidental touch operation.
  • the above-mentioned (1) preset anti-mistouch scene judgment process may include S501-S508:
  • S501: The mobile phone obtains the angle θ between the touch screen (that is, the curved screen) of the mobile phone and the horizontal plane.
  • the mobile phone may include a gyroscope sensor.
  • the gyroscope sensor is used to measure the orientation of the touch screen (that is, the direction vector of the orientation).
  • the mobile phone can determine the angle between the touch screen of the mobile phone (that is, the curved screen) and the horizontal plane according to the orientation of the touch screen.
  • The following describes, as an example, how the gyroscope sensor measures the orientation of the touch screen (that is, the direction vector a), and how the mobile phone calculates the angle θ between the touch screen (that is, the curved screen) of the mobile phone and the horizontal plane according to that orientation.
  • the coordinate system of the gyroscope sensor is the geographic coordinate system.
  • The origin O of the geographic coordinate system is at the point where the carrier (that is, the device containing the gyroscope sensor, such as a mobile phone) is located; the x-axis points east (E) along the local latitude line; the y-axis points north (N) along the local meridian line; and the z-axis points upward along the local geographic vertical line, forming a right-handed rectangular coordinate system with the x-axis and y-axis.
  • The plane formed by the x-axis and the y-axis (that is, the xOy plane) is the local horizontal plane, and the plane formed by the y-axis and the z-axis (that is, the yOz plane) is the local meridian plane.
  • In other words, the coordinate system of the gyroscope sensor is: taking the gyroscope sensor as the origin O, pointing east along the local latitude line as the x-axis, pointing north along the local meridian line as the y-axis, and pointing upward along the local geographic vertical line (that is, the opposite direction of the geographic vertical) as the z-axis.
  • The mobile phone uses the gyroscope sensor to measure the direction vector (that is, vector a) of the orientation of the touch screen (that is, the curved screen) in the coordinate system of the gyroscope sensor.
  • The mobile phone can calculate the angle θ between the vector a and the horizontal plane.
  • That is, the mobile phone can determine the angle θ between the curved screen and the horizontal plane according to the measured direction vector (that is, vector a) of the orientation of the curved screen in the coordinate system of the gyroscope sensor.
  • the method for the mobile phone to obtain the angle between the touch screen (ie curved screen) of the mobile phone and the horizontal plane includes but is not limited to the above method for obtaining the angle by the gyroscope sensor.
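The angle calculation described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the orientation vector a is the screen's normal expressed in the geographic (x east, y north, z up) coordinate system, so the angle between the screen plane and the horizontal plane equals the angle between a and the z-axis.

```python
import math

def screen_horizontal_angle(a):
    """Angle (degrees) between the touch-screen plane and the horizontal plane.

    `a` is the direction vector of the screen's orientation (its normal) in
    the gyroscope's geographic coordinate system.  The angle between the
    screen plane and the horizontal plane equals the angle between the
    screen normal `a` and the vertical z-axis.
    """
    norm = math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
    if norm == 0:
        raise ValueError("orientation vector must be non-zero")
    # cos(angle between a and the z-axis) = a_z / |a|, clamped for safety
    cos_theta = max(-1.0, min(1.0, a[2] / norm))
    return math.degrees(math.acos(cos_theta))

# Phone lying flat with the screen facing up: normal along +z -> 0 degrees.
print(screen_horizontal_angle((0.0, 0.0, 1.0)))  # 0.0
# Phone held vertically: normal is horizontal -> 90 degrees (up to rounding).
print(screen_horizontal_angle((1.0, 0.0, 0.0)))
```

A tilted phone gives an intermediate value, e.g. `a = (0, 1, 1)` yields 45 degrees.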
  • S502: The mobile phone judges whether the included angle θ is within the first preset angle range.
  • The value range of the angle θ between the touch screen of the mobile phone and the horizontal plane when users use the mobile phone in the above-mentioned scenes (1) and (2) can be counted to determine the first preset angle range.
  • the aforementioned first preset angle range may be an angle range with a value around 0°.
  • the first preset angle range may be [-n°, n°].
  • the value range of n can be (0, 10), (0, 5), (0, 20) or (0, 15), etc.
  • the above-mentioned first preset angle range may be an angle range taking a value around 90°.
  • the first preset angle range may be [90°-m°, 90°+m°].
  • the value range of m can be (0, 10), (0, 5), (0, 20) or (0, 15), etc.
  • the aforementioned first preset angle range may include two angle ranges [-n°, n°] and [90°-m°, 90°+m°].
  • The mobile phone can determine whether the included angle θ is within [-n°, n°] or [90°-m°, 90°+m°]; if the included angle θ is within [-n°, n°] or within [90°-m°, 90°+m°], the mobile phone can continue to execute S503.
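The two-interval check above can be expressed compactly. A small sketch with illustrative values of n and m (the text offers several candidate ranges; these are not values fixed by the patent):

```python
def in_first_preset_range(theta, n=10, m=10):
    """Check whether the screen/horizontal angle theta (degrees) falls in
    [-n, n] or [90 - m, 90 + m].  n and m are illustrative values."""
    return -n <= theta <= n or (90 - m) <= theta <= (90 + m)

print(in_first_preset_range(3))    # True  (phone roughly flat)
print(in_first_preset_range(85))   # True  (phone roughly vertical)
print(in_first_preset_range(45))   # False
```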
  • the mobile phone can determine whether the relative state of the user and the mobile phone meets the preset conditions. Wherein, if the relative state of the user and the mobile phone meets the preset condition, it means that the mobile phone is held by the user in the above-mentioned scene (1) or scene (2). At this time, the mobile phone can start the preset anti-incorrect touch algorithm, and use the preset anti-incorrect touch algorithm to prevent inadvertent touch on the side of the curved screen.
  • The relative state of the user and the mobile phone satisfying a preset condition may specifically be: the mobile phone can collect a face image through a camera, and the yaw degree of the user's face corresponding to the face image is within the second preset angle range.
  • The user's face yaw is the left-right rotation angle of the user's face orientation with respect to the first line, where the first line is the line connecting the camera of the mobile phone and the user's head.
  • Alternatively, the relative state of the user and the mobile phone satisfying the preset condition may specifically be: the mobile phone can collect a face image through the camera, the distance between the face corresponding to the face image and the mobile phone is less than the first distance threshold, and the yaw degree of the user's face corresponding to the aforementioned face image is within the second preset angle range.
  • the mobile phone can perform S503-S509:
  • S503: The mobile phone turns on the camera and collects images through the camera.
  • S504 The mobile phone recognizes whether the image collected by the camera includes a face image.
  • For the method by which the mobile phone turns on the camera, collects images through the camera, and identifies whether the collected image includes a face image, reference may be made to the specific methods in the conventional technology; details are not repeated here in the embodiments of this application.
  • If the image collected by the camera includes a face image, the mobile phone continues to execute S505; if the image collected by the camera does not include a face image, the mobile phone executes S501.
  • S505: The mobile phone obtains the distance between the face and the mobile phone.
  • the aforementioned camera may be a structured light camera module.
  • the structured light camera module includes a light projector and two cameras (such as a first camera and a second camera).
  • the light projector is used to emit light information to a target object (such as a human face).
  • the first camera and the second camera are used to photograph the target object.
  • the first camera and the second camera may also be referred to as binocular cameras.
  • the mobile phone can calculate the depth information of the target object (such as a face) collected by the binocular camera; then, determine the distance between the target object (such as the face) and the mobile phone according to the depth information of the target object.
  • a target object (such as a human face) is an object with a three-dimensional shape.
  • the distances between the various features on the target object (such as the tip of a human nose and eyes) and the camera may be different.
  • the distance between each feature on the target object and the camera is called the depth of the feature (or the point where the feature is located).
  • the depth of each point on the target object constitutes the depth information of the target object.
  • the depth information of the target object can represent the three-dimensional characteristics of the target object.
  • In the embodiments of this application, the distance between each feature on the target object and the camera may be: the vertical distance from the point of each feature on the target object to the line connecting the two cameras.
  • For example, the depth of the feature P is the vertical distance Z from P to the line O_LO_R, where O_L is the position of the first camera and O_R is the position of the second camera.
  • The mobile phone can calculate the depth of each feature on the target object according to the parallax of the binocular cameras for the same feature, combined with the hardware parameters of the binocular cameras and using the triangulation principle, to obtain the depth information of the target object.
  • the method for calculating the depth information by the mobile phone according to the disparity is described as an example:
  • the positions of the first camera and the second camera in the structured light camera module are different.
  • O_L is the position of the first camera, O_R is the position of the second camera, and the distance between O_L and O_R is a first length T, that is, O_LO_R = T.
  • the focal lengths of the lenses of the first camera and the second camera are both f.
  • the feature P is a feature of the target object.
  • the vertical distance between the point of the feature P and the line connecting the first camera and the second camera is Z. That is, the depth information of P is Z.
  • The first camera collects image 1 of the target object, and the feature P is at the P_L point of image 1; the second camera collects image 2 of the target object, and the feature P is at the P_R point of image 2. The points P_L in image 1 and P_R in image 2 correspond to the same feature P of the target object.
  • P_LP_R = O_LO_R - B_LP_L - P_RB_R, where O_LO_R = T, B_LP_L = x_L - x/2, and P_RB_R = x/2 - x_R (x is the image width, and x_L and x_R are the x-axis coordinates of P_L and P_R in their respective images). Hence P_LP_R = T - (x_L - x_R) = T - d, where d = x_L - x_R is the parallax.
  • By similar triangles, P_LP_R / O_LO_R = (Z - f) / Z, that is, (T - d) / T = (Z - f) / Z. Therefore the depth Z of the feature P can be calculated from the distance T between the two cameras, the lens focal length f of the two cameras, and the parallax d: Z = f·T / d.
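The triangulation relation Z = f·T/d can be sketched directly. A minimal illustration (units are assumptions: f in pixels, T in millimeters, image coordinates in pixels, so Z comes out in millimeters):

```python
def feature_depth(f, T, x_left, x_right):
    """Depth Z of a feature from binocular parallax: Z = f * T / d.

    f: lens focal length (pixels); T: baseline between the two cameras;
    x_left / x_right: x-coordinates of the same feature in image 1 / image 2.
    Z comes out in the same unit as T.
    """
    d = x_left - x_right  # parallax
    if d <= 0:
        raise ValueError("parallax must be positive for a visible feature")
    return f * T / d

# f = 1000 px, baseline T = 10 mm, feature at x_L = 150 px, x_R = 100 px:
print(feature_depth(1000, 10, 150, 100))  # 200.0 (mm)
```

Note how a larger parallax (a feature closer to the cameras) yields a smaller depth, as the formula requires.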
  • The mobile phone recognizes the same features in the image information collected by the first camera (that is, the first image information, such as image 1) and the image information collected by the second camera (that is, the second image information, such as image 2).
  • The more same features the mobile phone recognizes, the more points whose depth the mobile phone can calculate. Since the depth information of the target object is composed of the depths of multiple points (that is, features) of the target object, the more point depths the mobile phone calculates, the more accurate the depth information of the target object.
  • the same feature in the above two images refers to information corresponding to the same feature in the two images.
  • For example, the A_L point in image 1 shown in FIG. 7 corresponds to the left eye corner of a human face, and the A_R point in image 2 also corresponds to the left eye corner of the same human face; the B_L point in image 1 corresponds to the right eye corner of the aforementioned human face, and the B_R point in image 2 also corresponds to that right eye corner.
  • The parallax of the binocular cameras for the left eye corner of the above face is x_L1 - x_R1, and the parallax for the right eye corner of the above face is x_L2 - x_R2.
  • The mobile phone can use the depth of any of the above-mentioned facial features (such as the nose tip, left eye corner, brow center, left eyeball, right eye corner, left mouth corner, or right mouth corner) as the distance between the face and the mobile phone.
  • Alternatively, the mobile phone may calculate the average value of the depths of multiple features, and use the average value as the distance between the face and the mobile phone.
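The averaging alternative is straightforward; a tiny sketch with made-up depth values (centimeters assumed):

```python
def face_distance(feature_depths):
    """Use the mean depth of several facial features (e.g. nose tip, eye
    corners, mouth corners) as the face-to-phone distance."""
    if not feature_depths:
        raise ValueError("need at least one feature depth")
    return sum(feature_depths) / len(feature_depths)

print(face_distance([19.0, 21.0, 20.0]))  # 20.0
```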
  • In this design, in S503, the mobile phone turning on the camera may specifically be: the mobile phone turns on the above-mentioned structured light camera module; the structured light camera module is used to collect images, and when the collected images include a face image, it is used by the mobile phone to calculate the depth information of the face.
  • Alternatively, in S503 the mobile phone may first turn on only one camera (such as the first camera); if the mobile phone recognizes that the image collected by the first camera includes a face image, it can turn on the light projector and the other camera (such as the second camera) for the mobile phone to calculate the depth information of the face.
  • Alternatively, the mobile phone may include another camera (such as a third camera) in addition to the above-mentioned structured light camera module; in S503, the mobile phone can turn on the third camera, and if the mobile phone recognizes that the image collected by the third camera includes a face image, it can turn on the structured light camera module for the mobile phone to calculate the depth information of the face.
  • Alternatively, the mobile phone can obtain the distance between the face and the mobile phone through a distance sensor (such as the distance sensor 380F).
  • the distance sensor is used to measure distance.
  • the distance sensor is used to transmit and receive infrared or laser.
  • the mobile phone can measure the distance based on the infrared or laser (such as the energy of infrared or laser) received by the distance sensor.
  • S506 The mobile phone judges whether the distance between the face and the mobile phone is less than a first distance threshold.
  • In scene (1) and scene (2), the first distance threshold may be different. For example, in scene (1) the first distance threshold may be 20 centimeters (cm), and in scene (2) the first distance threshold may be 15 cm.
  • the aforementioned first distance threshold is not limited to 15 cm and 20 cm.
  • the first distance threshold may be obtained by calculating the distance between the mobile phone and the face when a large number of users use the mobile phone in the above-mentioned scene (1) and scene (2).
  • the first distance threshold may be set by the user in the mobile phone.
  • the mobile phone if the distance between the face and the mobile phone is less than the first distance threshold, the mobile phone continues to perform S507; if the distance between the face and the mobile phone is greater than or equal to the first distance threshold, the mobile phone performs S501.
  • S507: The mobile phone acquires the yaw degree of the user's face corresponding to the aforementioned face image.
  • The user's face yaw is the left-right rotation angle of the user's face orientation with respect to the first line, where the first line is the line connecting the camera of the mobile phone and the user's head.
  • the yaw of the human face is the deviation angle between the user's face orientation and the "line between the camera and the user's head" (ie, the first line).
  • the human face yaw degree may also be the left-right rotation angle of the user's face orientation relative to the first line.
  • the connection between the camera and the user's head may be a connection between the camera and any organ of the user's head (such as the nose or mouth).
  • For example, as shown in FIG. 8, O_PO_A is the line connecting the camera and the head of user A, and X_AO_A represents the face orientation of user A; user A's face yaw β_A is the angle between X_AO_A and O_PO_A.
  • O_PO_B is the line connecting the camera and the head of user B, and X_BO_B represents the face orientation of user B; user B's face yaw β_B is the angle between X_BO_B and O_PO_B.
  • O_PO_C is the line connecting the camera and the head of user C, and X_CO_C represents the face orientation of user C; user C's face yaw β_C is the angle between X_CO_C and O_PO_C.
  • O_PO_D is the line connecting the camera and the head of user D, and X_DO_D represents the face orientation of user D; user D's face yaw β_D is the angle between X_DO_D and O_PO_D.
  • O_PO_E is the line connecting the camera and the head of user E, and X_EO_E represents the face orientation of user E; user E's face yaw β_E is the angle between X_EO_E and O_PO_E.
  • Similarly, user F's face yaw β_F is the angle between X_FO_F and O_PO_F.
  • S508: The mobile phone judges whether the yaw degree of the human face is within the second preset angle range.
  • From FIG. 8(a) and FIG. 8(b), it can be seen that the closer the face yaw is to 0°, the higher the possibility that the user is paying attention to the touch screen of the mobile phone.
  • As shown in FIG. 8(a), the face yaw of user A is β_A, the face yaw of user B is β_B, and the face yaw of user C is β_C; the absolute values of β_A, β_B, and β_C are all close to 0°. Therefore, user A, user B, and user C shown in FIG. 8(a) are highly likely to be paying attention to the touch screen of the mobile phone.
  • From FIG. 8(a) and FIG. 8(b), it can also be seen that the greater the absolute value of the face yaw, the lower the possibility that the user is paying attention to the touch screen of the mobile phone.
  • As shown in FIG. 8(b), the absolute values of the face yaw β_D of user D, the face yaw β_E of user E, and the face yaw β_F of user F are all larger. Therefore, user D, user E, and user F shown in FIG. 8(b) are less likely to be paying attention to the touch screen of the mobile phone.
  • the second preset angle range may be an angle range that takes a value around 0°.
  • the second preset angle range may be [-k°, k°].
  • the value range of k can be (0, 10) or (0, 5).
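The whole scene-judgment flow S501-S508 reduces to a conjunction of the four checks described above. A condensed Python sketch (the actual implementation runs in the SS process on the device, not in Python); all thresholds here are illustrative picks from the candidate ranges the text lists, not values fixed by the patent:

```python
def in_preset_anti_mistouch_scene(theta, face_detected, face_distance_cm,
                                  face_yaw, n=10, m=10,
                                  first_distance_threshold_cm=20, k=10):
    """True only if every check of S501-S508 passes:
    - screen/horizontal angle theta in [-n, n] or [90-m, 90+m] degrees;
    - a face image was collected;
    - face-to-phone distance below the first distance threshold;
    - face yaw within the second preset angle range [-k, k]."""
    angle_ok = -n <= theta <= n or (90 - m) <= theta <= (90 + m)
    yaw_ok = -k <= face_yaw <= k
    return (angle_ok and face_detected
            and face_distance_cm < first_distance_threshold_cm and yaw_ok)

# Phone flat on a table, face detected 15 cm away, user looking at the screen:
print(in_preset_anti_mistouch_scene(2, True, 15, 3))    # True
# Same posture, but the user's face is turned away (yaw = 40 degrees):
print(in_preset_anti_mistouch_scene(2, True, 15, 40))   # False
```

If any check fails, the flow returns to S501 rather than starting the anti-mistouch algorithm.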
  • the mobile phone may acquire the facial features of the facial image collected by the camera (such as the above-mentioned third camera) by means of face detection.
  • the facial features may include the aforementioned yaw degree of the human face.
  • the face feature may also include face position information (faceRect), face feature point information (landmarks), and face pose information.
  • face posture information may include a face pitch angle (pitch), an in-plane rotation angle (roll), and a face yaw degree (that is, the left and right rotation angle, yaw).
  • For example, the mobile phone can provide an interface (such as a Face Detector interface) that can receive pictures taken by the camera. Then, a processor (such as the NPU) of the mobile phone can perform face detection on the picture to obtain the aforementioned facial features. Finally, the mobile phone can return the detection result (a JSON object), that is, the aforementioned facial features.
  • One picture may include one or more face images, and the mobile phone can allocate different IDs to the one or more face images to identify them.
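Consuming such a detection result might look as follows. This is a hypothetical sketch: the text only says the result is a JSON object containing face position (faceRect), landmarks, and pose (pitch/roll/yaw) per face with an ID each; the exact field names below are invented for illustration.

```python
import json

# Invented detection result with two faces; only face 0 looks at the camera.
raw = """{"faces": [
  {"id": 0, "faceRect": [120, 80, 200, 260],
   "pose": {"pitch": 5, "roll": -2, "yaw": 4}},
  {"id": 1, "faceRect": [400, 90, 180, 240],
   "pose": {"pitch": 1, "roll": 0, "yaw": 35}}
]}"""

K = 10  # second preset angle range [-K, K], illustrative value
result = json.loads(raw)
# Keep the IDs of faces whose yaw falls inside the second preset angle range.
attentive = [face["id"] for face in result["faces"]
             if abs(face["pose"]["yaw"]) <= K]
print(attentive)  # [0]
```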
  • the human face yaw degree is within the second preset angle range, it means that the rotation angle of the user's face orientation relative to the connection line between the camera and the user's head is small.
  • Therefore, if the face yaw is within the second preset angle range, the mobile phone can activate the preset anti-mistouch algorithm and use it to prevent inadvertent touches on the side of the curved screen; specifically, the mobile phone executes (2) the anti-mistouch processing flow (that is, S509). After S508, if the face yaw is not within the second preset angle range, the mobile phone executes S501.
  • S509: The mobile phone performs anti-mistouch processing on the user's preset mistouch operation on the touch screen.
  • The mobile phone collects the contact surfaces between the touch screen and the user's hand: a first contact surface in the arc area on the first side of the touch screen (the right arc area 20 shown in FIG. 1(a)), and x second contact surfaces in the arc area on the second side of the touch screen (the left arc area 10 shown in FIG. 1(a)), where 1 ≤ x ≤ 4 and x is a positive integer.
  • The first contact surface is the contact surface, collected by the mobile phone, between the touch screen and the purlicue (the "tiger's mouth" between the thumb and index finger) of the user's hand when the mobile phone is held by the user. The shape of the first contact surface is similar to the shape of the contact area between the purlicue of the user's hand and the touch screen when the user holds the mobile phone.
  • the first contact surface may be the contact surface 1 shown in (b) in FIG. 2.
  • the second contact surface is the contact surface between the touch screen collected by the mobile phone and the user's finger when the mobile phone is held by the user.
  • the shape of the second contact surface is similar to the shape of the contact area between the user's finger and the touch screen when the user holds the mobile phone.
  • the second contact surface may be the contact surface 2 shown in (b) in FIG. 2.
  • the mobile phone may receive the user's touch operation on the touch screen.
  • S509 may specifically include: the mobile phone adopts a preset anti-mistouch algorithm, and recognizes that the user's first touch operation on the touch screen is a preset mistouch operation; the mobile phone does not respond to the first touch operation.
  • the mobile phone can collect the user's touch operation on the touch screen in real time, and the touch operation may include: preset mis-touch operation and normal operation of the user on the touch screen (such as the user's click operation on the icon displayed on the touch screen).
  • the mobile phone can recognize that the first touch operation in the collected touch operations is a preset mis-touch operation; then, the first touch operation is processed to prevent mis-touch, that is, it does not respond to the first touch operation.
  • the mobile phone can recognize the preset mistouch operation by recognizing the user's touch operation on the touch screen and the position and shape of the contact surface of the touch screen.
  • The mobile phone recognizes a touch operation (that is, the first touch operation) whose contact surfaces are the first contact surface in the arc area on the first side of the touch screen (the right arc area 20 shown in FIG. 1(a)) and the x second contact surfaces in the arc area on the second side of the touch screen (the left arc area 10 shown in FIG. 1(a)) as a preset mistouch operation.
  • In the embodiments of this application, the first side arc area being the right arc area of the touch screen and the second side arc area being the left arc area of the touch screen is used as an example to illustrate the method of the embodiments of this application.
  • the first side arc area may also be the left arc area of the touch screen
  • the second side arc area may be the right arc area of the touch screen. The embodiment of the application does not limit this.
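The contact-surface pattern described above can be sketched as a classifier over simultaneous contacts. This is a simplified illustration, not the patent's algorithm: each contact is reduced to a (region, size) pair, where the sizes stand in for the real position-and-shape analysis of the contact surfaces.

```python
def split_touches(contacts, x_max=4):
    """Split simultaneous contacts into (preset mistouch, normal).

    The preset mistouch pattern: exactly one large contact (the "tiger's
    mouth") in the first (right) side arc area plus 1..x_max small finger
    contacts in the second (left) side arc area.  Contacts elsewhere
    (e.g. a deliberate tap on the flat area) stay normal.
    """
    grip = [c for c in contacts
            if c == ("right_arc", "large") or c == ("left_arc", "small")]
    n_right = sum(1 for c in grip if c[0] == "right_arc")
    n_left = sum(1 for c in grip if c[0] == "left_arc")
    if n_right == 1 and 1 <= n_left <= x_max:
        normal = [c for c in contacts if c not in grip]
        return grip, normal
    return [], list(contacts)

held = [("right_arc", "large"), ("left_arc", "small"), ("left_arc", "small"),
        ("flat", "small")]  # grip plus a deliberate tap on the flat area
mistouch, normal = split_touches(held)
print(len(mistouch), normal)  # 3 [('flat', 'small')]
```

Note that the flat-area tap survives the split, matching the requirement that the phone still responds to non-mistouch operations while the grip contacts exist.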
  • the HAL layer of the mobile phone recognizes the preset mis-touch operation, and performs the mis-touch prevention processing on the preset mis-touch operation. Specifically, after the TP of the mobile phone collects the user's touch operation on the touch screen, it reports the touch information of the touch operation to the HAL layer.
  • the touch information of the touch operation may include: preset touch information corresponding to the wrong touch operation, and/or touch information corresponding to the normal operation of the user on the touch screen.
  • the touch information may include the position, shape, and size of the contact surface corresponding to the touch operation.
  • the HAL layer can be configured with multiple TP algorithms, such as the TP algorithm 1, a preset anti-mistouch algorithm (such as the AFT 2 algorithm), and the TP algorithm 2, as shown in FIG. 9.
  • Each TP algorithm can process the touch information reported by the bottom layer.
  • the preset anti-mistouch algorithm (such as the AFT 2 algorithm) is used to identify that the first touch operation is a preset mistouch operation.
  • the preset anti-mistouch algorithm can recognize the preset mistouch operation by executing 901 and 902 shown in FIG. 9.
  • When the HAL layer reports the touch information of the touch operation to the upper layer (such as the Framework layer), it can report only the touch information corresponding to the user's normal operations on the touch screen, instead of reporting the touch information corresponding to the preset mistouch operation.
  • the Framework layer does not receive the user's preset mis-touch operation on the touch screen, nor does it need to respond to the preset mis-touch operation, and can realize the anti-mis-touch processing for the preset mis-touch operation.
  • The HAL layer can receive the touch information 3 reported by the bottom layer (such as the kernel layer); the TP algorithm 1 can process the touch information 3 to obtain the touch information 1; then, the preset anti-mistouch algorithm processes the touch information 1 to prevent accidental touch.
  • Specifically, the preset anti-mistouch algorithm can identify the preset mistouch operation by recognizing the first contact surface in the arc area on the first side of the touch screen (that is, the large contact surface of the tiger's mouth) and the second contact surfaces in the arc area on the second side of the touch screen (that is, the small four-finger contact surfaces).
  • the touch information corresponding to the preset mistouch operation can be ignored (or intercepted).
  • That is, the touch information 2 sent by the preset anti-mistouch algorithm to the TP algorithm 2 does not include the touch information corresponding to the preset mistouch operation; it includes only the touch information in touch information 1 other than that corresponding to the preset mistouch operation.
  • The TP algorithm 2 can process the touch information 2 sent by the preset anti-mistouch algorithm to obtain the touch information 4, and report the touch information 4 to the upper layer.
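The interception step in this reporting chain amounts to filtering per-contact events before they reach the upper layer. A minimal sketch; the event dicts and ID scheme are invented for illustration and do not reflect the real HAL data format:

```python
def anti_mistouch_filter(touch_info, mistouch_ids):
    """Drop (intercept) the events classified as the preset mistouch so that
    only normal operations are reported to the upper (Framework) layer."""
    return [event for event in touch_info if event["id"] not in mistouch_ids]

touch_info_1 = [{"id": 7, "pos": (2, 340)},    # tiger's-mouth contact, right arc
                {"id": 8, "pos": (358, 300)},  # finger contact, left arc
                {"id": 9, "pos": (180, 500)}]  # deliberate tap on the flat area
touch_info_2 = anti_mistouch_filter(touch_info_1, mistouch_ids={7, 8})
print([event["id"] for event in touch_info_2])  # [9]
```

Because the Framework layer never sees events 7 and 8, it has nothing to respond to, which is exactly the anti-mistouch behavior described above.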
  • The embodiment of this application provides a method for preventing accidental touch on a curved screen. If the angle between the touch screen and the horizontal plane is within the first preset angle range, the camera can collect a face image, the distance between the mobile phone and the user is less than the first distance threshold, and the user's face yaw is within the second preset angle range, then the user is very likely to be using the mobile phone in scene (1) or scene (2). In this way, it can be determined that the mobile phone is in the preset anti-mistouch scene.
  • Since the mobile phone can recognize the preset mistouch operation as a mistouch operation, the mobile phone can respond to the user's other, non-mistouch operations on the curved screen even while the first contact surface and the second contact surfaces exist. This can avoid the problem of the user's taps failing to take effect and can improve the user experience of curved-screen devices.
  • the duration of the aforementioned preset mistouch operation generated when the user holds the mobile phone is generally longer, and the duration of the normal operation of the user on the touch screen is generally shorter.
  • Therefore, when the mobile phone recognizes the preset mistouch operation, it can not only refer to the shape of the contact surface corresponding to the touch operation, but also determine whether the duration of the touch operation is greater than a preset time (such as the first preset time).
  • the preset mistouch operation is limited as follows.
  • The preset mistouch operation may include: when the mobile phone is held by the user, a touch operation whose duration of contact with the first side arc area is greater than the first preset time and whose moving distance within the first side arc area is less than the second distance threshold.
  • Specifically, the mobile phone receives the user's touch operation on the touch screen, and if the touch operation meets the following two conditions, it can be determined that the touch operation is a preset mistouch operation.
  • Condition (1): the contact surfaces corresponding to the touch operation received by the mobile phone are: the first contact surface in the arc area on the first side of the touch screen, and x second contact surfaces in the arc area on the second side of the touch screen.
  • Condition (2): the duration of the touch operation in the first side arc area is greater than the first preset time, and the moving distance of the touch operation in the first side arc area is less than the second distance threshold.
  • That is, when recognizing the preset mistouch operation, the preset anti-mistouch algorithm can not only execute 901 shown in FIG. 9 to determine whether the touch operation meets condition (1), but can also execute 902 to determine whether the touch operation meets condition (2).
  • the aforementioned second distance threshold may be 6 millimeters (mm), 5 mm, 2 mm, or 3 mm.
  • the first preset time may be 2 seconds (s), 3s, 1s, etc.
  • the duration of the aforementioned preset mistouch operation generated when the user holds the mobile phone is generally longer, and the duration of the user's normal operation of the touch screen is generally shorter. Therefore, when the mobile phone receives a touch operation, if the contact surface corresponding to the touch operation lasts longer in the first side arc area (for example, greater than the first preset time), and the movement distance in the first side arc area is longer If it is small (for example, less than the second distance threshold), it means that the touch operation is more likely to be a false touch operation caused by the mobile phone being held. Therefore, the mobile phone can improve the accuracy of the mobile phone to recognize the preset mistouch operation through the double judgment of the above condition (1) and the condition (2).
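The duration-plus-movement test of condition (2) is a simple predicate. A sketch with 2 s and 3 mm picked from the example values the text lists (1 s to 3 s, and 2 mm to 6 mm); these are illustrative, not fixed by the patent:

```python
def meets_condition_2(duration_s, move_dist_mm,
                      first_preset_time_s=2.0,
                      second_distance_threshold_mm=3.0):
    """Condition (2): a grip contact rests in the side arc area, i.e. it
    lasts longer than the first preset time and barely moves."""
    return (duration_s > first_preset_time_s
            and move_dist_mm < second_distance_threshold_mm)

print(meets_condition_2(5.0, 0.5))   # True:  static grip contact
print(meets_condition_2(0.3, 0.5))   # False: too short -- a normal tap
print(meets_condition_2(5.0, 12.0))  # False: moved too far -- a slide gesture
```

A touch operation is treated as the preset mistouch only when condition (1) on the contact-surface pattern and this condition both hold, which is the double judgment described above.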
  • When the mobile phone determines that the first touch operation is a preset mistouch operation, the first touch operation may have been misjudged, which affects the accuracy of mistouch prevention.
  • To address this, the mobile phone can continuously check whether the identified first touch operation moves a large distance. Specifically, after the mobile phone recognizes the first touch operation as a preset mistouch operation, and before it declines to respond to that operation, the mobile phone can determine whether the movement distance of the first touch operation within the second preset time is greater than the third distance threshold.
  • the second preset time may be a time period after the mobile phone recognizes that the first touch operation is a preset mistouch operation, and the duration is the first preset duration.
  • the first preset duration may be 2s, 3s, 1s, 0.5s, etc.
  • the third distance threshold may be 7mm, 5mm, 3mm, 2mm, etc.
  • the third distance threshold and the second distance threshold may be the same or different.
  • the first touch operation may include one or more touch operations.
  • the aforementioned first touch operation may include a touch operation corresponding to a first contact surface and a touch operation corresponding to x second contact surfaces.
  • If the mobile phone determines that the movement distance of the first touch operation (that is, of every touch operation within it) in the second preset time is less than or equal to the third distance threshold, the mobile phone's classification of the first touch operation as a preset mistouch operation was not a misjudgment. In this case, the mobile phone does not respond to the first touch operation.
  • the mobile phone may determine that the movement distance of at least one of the first touch operations within the second preset time is greater than the third distance threshold. In this way, it means that the mobile phone misjudged the at least one touch operation.
  • The mobile phone can then perform false-rejection recovery ("anti-manslaughter") processing on the at least one touch operation; that is, the mobile phone responds to it and executes the corresponding event.
  • For the other touch operations in the first touch operation (those besides the at least one touch operation), the mobile phone made no misjudgment, so it does not respond to them.
  • In this way, after the mobile phone recognizes the first touch operation as a preset mistouch operation, it can further determine whether the operation was misjudged. This improves the accuracy with which the mobile phone recognizes the preset mistouch operation and, in turn, the accuracy of mistouch prevention.
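The re-judgment step above splits a provisionally ignored first touch operation into touches to revive and touches to keep ignoring. The sketch below is a hypothetical illustration of that split; the data shape (a mapping from touch id to movement distance within the second preset time) and the 5 mm threshold are assumptions, not the patent's actual data structures.

```python
def recheck_mistouch(touches: dict, third_dist_threshold_mm: float = 5.0):
    """Split a provisionally-ignored first touch operation into touches to
    revive (false-rejection recovery) and touches to keep ignoring, based
    on how far each moved within the second preset time.
    `touches` maps a touch id to its movement distance in mm."""
    revive = [t for t, d in touches.items() if d > third_dist_threshold_mm]
    keep_ignoring = [t for t, d in touches.items() if d <= third_dist_threshold_mm]
    return revive, keep_ignoring
```

A gripping contact barely moves and stays ignored, while a contact that starts sliding (a deliberate swipe) exceeds the third distance threshold and is revived.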
  • After that, the mobile phone may receive another touch operation from the user on the touch screen (such as the second touch operation).
  • A second touch operation received in this case is more likely to be a normal touch operation by which the user triggers the mobile phone to perform a corresponding event (that is, not a mistouch caused by holding the phone). Therefore, if the mobile phone receives the user's second touch operation in the second side arc area after the third preset time, it may respond to the second touch operation and execute the corresponding event; that is, the mobile phone can perform false-rejection recovery processing on the second touch operation.
  • The third preset time is a period of the second preset duration starting from when the electronic device detects the preset mistouch operation.
  • the second preset duration may be 2s, 3s, 1s, 0.5s, etc.
  • the first preset duration and the second preset duration may be the same or different.
  • With reference to FIG. 10, the algorithm logic of the preset anti-mistouch algorithm in the embodiment of the present application is introduced below.
  • the HAL layer can receive the touch information 3 reported by the bottom layer (such as the kernel layer); the TP algorithm 1 can process the touch information 3 to obtain the touch information 1. Then, the touch information 1 is processed by the preset anti-mistouch algorithm.
  • The preset anti-mistouch algorithm can determine whether the contact surface of the first touch operation on the touch screen includes the first contact surface in the first side arc area (that is, execute 901). If the contact surface of the first touch operation on the touch screen does not include the first contact surface in the first side arc area, the preset anti-mistouch algorithm can directly send the touch information 1 to the TP algorithm 2.
  • Otherwise, the preset anti-mistouch algorithm can execute 902 to determine whether the contact surface of the first touch operation on the touch screen includes the second contact surface in the second side arc area. If it does not, the preset anti-mistouch algorithm can directly send the touch information 1 to the TP algorithm 2.
  • Otherwise, the preset anti-mistouch algorithm can execute 903 to determine whether the duration of the touch operation in the first side arc area, within the first touch operation, is greater than the first preset time, and whether its movement distance is less than the second distance threshold. If the duration of the touch operation in the first side arc area is less than or equal to the first preset time, and the movement distance is greater than or equal to the second distance threshold, the preset anti-mistouch algorithm can directly send the touch information 1 to the TP algorithm 2.
  • Otherwise, the preset anti-mistouch algorithm can preliminarily determine that the first touch operation is a preset mistouch operation and, when sending touch information to the TP algorithm 2, ignore the touch information of the first touch operation (that is, execute 1003).
  • The preset anti-mistouch algorithm can also perform false-rejection recovery on the first touch operation that was identified as a preset mistouch operation.
  • Specifically, the preset anti-mistouch algorithm can determine whether the movement distance of the first touch operation within the second preset time is greater than the third distance threshold (that is, execute 1001). If the movement distance of part of the first touch operation (such as the at least one touch operation mentioned above) within the second preset time is greater than the third distance threshold, the at least one touch operation is not a mistouch operation, and the contact surface or contact point corresponding to it cannot be ignored. At this time, the preset anti-mistouch algorithm may send the touch information 5, which includes the touch information of the at least one touch operation, to the TP algorithm 2.
  • the preset anti-mistouch algorithm may send touch information 2 that does not include the touch information of the at least one touch operation to the TP algorithm 2.
  • The preset anti-mistouch algorithm can also execute 1002. If the mobile phone receives the user's new touch operation in the second side arc area (that is, the second touch operation) within the third preset time, the corresponding contact surface is ignored; at this time, the preset anti-mistouch algorithm may send the touch information 2, which does not include the touch information of the second touch operation, to the TP algorithm 2. If the mobile phone receives the second touch operation after the third preset time, the corresponding contact surface is not ignored; at this time, the preset anti-mistouch algorithm may send the touch information 5, which includes the touch information of the second touch operation, to the TP algorithm 2.
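The 901 → 902 → 903 → 1003 filtering described above can be sketched as a pipeline over one frame of touch information. This is a schematic reading of FIG. 10, not the actual HAL code; the field names, the flat dictionary layout, and the numeric thresholds are all assumptions for illustration.

```python
def filter_touch_info(touch_info: dict) -> list:
    """Sketch of the 901 -> 902 -> 903 pipeline: forward touch information 1
    unchanged to TP algorithm 2 unless the first touch operation has a
    first contact surface in the first side arc area (901), a second
    contact surface in the second side arc area (902), and the arc-area
    touch passes the duration/distance test (903); only then are the
    mistouch contact surfaces dropped (1003)."""
    t = touch_info
    if not t["has_first_contact"]:                          # 901 fails
        return t["contacts"]
    if not t["has_second_contact"]:                         # 902 fails
        return t["contacts"]
    if not (t["duration_ms"] > 2000 and t["move_mm"] < 5):  # 903 fails
        return t["contacts"]
    # 1003: preliminarily a preset mistouch -- ignore arc-area contacts
    return [c for c in t["contacts"] if not c["in_arc_area"]]
```

Only when all three checks pass are the side arc contacts stripped before the remaining touch information is handed to TP algorithm 2.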
  • the electronic device includes a processor, a memory, a touch screen, and a camera.
  • the touch screen is a curved screen with curved sides.
  • the memory, the touch screen, and the camera are coupled with the processor, and the memory is used to store computer program code.
  • the computer program code includes computer instructions.
  • When the processor executes the computer instructions, the electronic device executes the various functions or steps performed by the electronic device (such as a mobile phone) in the foregoing method embodiments.
  • the above-mentioned electronic device further includes one or more sensors.
  • the one or more sensors include at least a gyroscope sensor.
  • the one or more sensors are used to collect the direction vector of the orientation of the touch screen.
  • the direction vector of the orientation of the touch screen is used to calculate the angle between the touch screen and the horizontal plane.
  • the electronic device may further include a structured light camera module.
  • the structured light camera module includes a light projector, a first camera, and a second camera. The distance between the first camera and the second camera is the first length.
  • the electronic device may further include a distance sensor.
  • the distance sensor is used to obtain the distance between the electronic device and the user.
  • the chip system includes at least one processor 1101 and at least one interface circuit 1102.
  • the processor 1101 and the interface circuit 1102 may be interconnected by wires.
  • the interface circuit 1102 can be used to receive signals from other devices (such as the memory of an electronic device).
  • the interface circuit 1102 may be used to send signals to other devices (such as the processor 1101 or the touch screen of an electronic device).
  • the interface circuit 1102 may read instructions stored in the memory, and send the instructions to the processor 1101.
  • the electronic device can execute the steps in the foregoing embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiment of the present application.
  • the computer storage medium includes computer instructions.
  • When the computer instructions run on an electronic device, the electronic device executes each function or step performed by the electronic device (such as a mobile phone) in the foregoing method embodiments.
  • Another embodiment of the present application provides a computer program product, which when the computer program product runs on a computer, causes the computer to execute various functions or steps performed by an electronic device (such as a mobile phone) in the foregoing method embodiments.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • The functional units in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • The technical solution of this embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium.
  • the aforementioned storage media include: flash memory, mobile hard disk, read-only memory, random access memory, magnetic disk or optical disk and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide an anti-mistouch method for a curved screen and an electronic device, relating to the field of terminal technologies, which can implement mistouch prevention on the sides of a curved screen and improve the accuracy of mistouch prevention. A specific solution includes: the electronic device obtains the angle between the touch screen (a curved screen with curved sides) and the horizontal plane; in response to the angle between the touch screen and the horizontal plane being within a first preset angle range, the camera is started; in response to the camera capturing a face image, the distance between the electronic device and the user and the user's face yaw are obtained; in response to the distance between the electronic device and the user being less than a first distance threshold and the face yaw being within a second preset angle range, anti-mistouch processing is performed on the user's preset mistouch operation on the touch screen. When the user performs the preset mistouch operation, the contact surfaces between the hand and the touch screen are: a first contact surface in the arc area on one side of the touch screen, and x second contact surfaces in the arc area on the other side of the touch screen, where 1 ≤ x ≤ 4 and x is a positive integer.

Description

Anti-mistouch method for a curved screen and electronic device
This application claims priority to Chinese Patent Application No. 201910578705.4, filed with the China National Intellectual Property Administration on June 28, 2019 and entitled "Anti-mistouch Method for Curved Screen and Electronic Device", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to the field of terminal technologies, and in particular, to an anti-mistouch method for a curved screen and an electronic device.
Background
With the popularization and expansion of intelligent terminals, existing terminals are no longer limited to flat touch screens. Some terminals on the market use curved screens, such as curved-screen mobile phones. Because the sides of a curved-screen mobile phone are part of the touch screen and are curved, when a user holds the phone, the fingers easily touch the sides of the curved screen and cause mistouches on the curved screen.
In the use of a curved-screen mobile phone, how to prevent the user from mistouching the sides of the curved screen is an urgent problem to be solved.
Summary
Embodiments of the present application provide an anti-mistouch method for a curved screen and an electronic device, which can implement mistouch prevention on the sides of the curved screen, improve the accuracy of mistouch prevention, and thereby improve the user's experience with curved-screen devices.
To achieve the foregoing objectives, the embodiments of the present application adopt the following technical solutions:
According to a first aspect, an embodiment of the present application provides an anti-mistouch method for a curved screen, applicable to an electronic device whose touch screen is a curved screen with curved sides. The method includes: the electronic device obtains the angle between the touch screen and the horizontal plane; in response to the angle being within a first preset angle range, the electronic device starts the camera; in response to the camera capturing a face image, the electronic device obtains the distance between the electronic device and the user, and the user's face yaw; in response to the distance between the electronic device and the user being less than a first distance threshold and the face yaw being within a second preset angle range, the electronic device performs anti-mistouch processing on the user's preset mistouch operation on the touch screen.
The face yaw is the left-right rotation angle of the user's facial orientation relative to a first connecting line, where the first connecting line is the line between the camera and the user's head. When the user performs the preset mistouch operation, the contact surfaces between the user's hand and the touch screen are: a first contact surface in the first side arc area of the touch screen, and x second contact surfaces in the second side arc area of the touch screen, where 1 ≤ x ≤ 4 and x is a positive integer.
In this embodiment of the present application, if the angle between the touch screen and the horizontal plane is within the first preset angle range, the camera can capture a face image, the distance between the electronic device and the user is less than the first distance threshold, and the user's face yaw is within the second preset angle range, then the user is highly likely to be using the electronic device in scenario (1) or scenario (2). Scenario (1): the user lies flat and holds the phone with one hand. Scenario (2): the user lies on one side and holds the phone with one hand. In this embodiment of the present application, when the electronic device meets the above conditions, the electronic device is said to be in a preset anti-mistouch scenario.
In scenarios (1) and (2), the way the user holds the electronic device is relatively fixed, the grip force is large, and the contact areas between the user's fingers and the left and right arc areas of the curved screen are large. When holding the electronic device, the user is more likely to mistouch the sides of the curved screen. With a conventional anti-mistouch solution, the electronic device cannot perform anti-mistouch processing on the touch operations corresponding to those contact surfaces.
The preset mistouch operation is a mistouch on the left arc area (for example, the second side arc area) and the right arc area (for example, the first side arc area) of the touch screen produced when the user holds the electronic device. In this embodiment of the present application, when the electronic device is in the preset anti-mistouch scenario, it can recognize the preset mistouch operation and perform anti-mistouch processing on it, which can improve the accuracy of mistouch prevention.
Moreover, if the electronic device can recognize the preset mistouch operation as a mistouch operation, then while the first contact surface and the second contact surfaces exist, the electronic device can still respond to the user's other, non-mistouch operations on the curved screen; the problem of user taps failing does not occur, which can improve the user's experience with curved-screen devices.
With reference to the first aspect, in a possible design, the first contact surface is the contact surface, collected by the electronic device when it is held by the user, between the touch screen and the purlicue (the web between thumb and index finger) of the user's hand. The second contact surface is the contact surface, collected by the electronic device when it is held by the user, between the touch screen and the user's fingers. That is, the electronic device can determine whether a touch operation is the preset mistouch operation according to whether the position of the touch operation input by the user on the touch screen is in the first side arc area or the second side arc area, and according to the shape of the contact surface of the touch operation on the touch screen.
With reference to the first aspect, in another possible design, the preset mistouch operation produced when the user holds the electronic device generally lasts a long time, while the user's normal operation of the touch screen generally lasts a short time. To improve the accuracy with which the electronic device recognizes the preset mistouch operation, when recognizing it the electronic device can refer not only to the shape of the contact surface corresponding to the touch operation, but also to whether the duration of the touch operation is greater than a preset time. Specifically, the preset mistouch operation may include: a touch operation, collected by the electronic device when it is held by the user, whose contact duration with the first side arc area is greater than a first preset time and whose movement distance in the first side arc area is less than a second distance threshold.
With reference to the first aspect, in another possible design, the electronic device performing anti-mistouch processing on the user's preset mistouch operation on the touch screen includes: the electronic device receives the user's first touch operation on the touch screen; the electronic device uses a preset anti-mistouch algorithm to recognize that the user's first touch operation on the touch screen is the preset mistouch operation; and the electronic device does not respond to the first touch operation.
With reference to the first aspect, in another possible design, when the electronic device determines that a touch operation is the preset mistouch operation, a misjudgment of the touch operation may occur, affecting the accuracy of mistouch prevention. To improve this accuracy, the electronic device can continuously determine whether the recognized preset mistouch operation moves a large distance. Specifically, after the electronic device uses the preset anti-mistouch algorithm to recognize that the user's first touch operation on the touch screen is the preset mistouch operation, and before the electronic device declines to respond to the first touch operation, the method of this embodiment further includes: the electronic device determines that the movement distance of the first touch operation within a second preset time is less than or equal to a third distance threshold, where the second preset time is a period of a first preset duration starting from when the electronic device recognizes the first touch operation as the preset mistouch operation. In other words, if the movement distance of the recognized first touch operation within the second preset time is less than or equal to the third distance threshold, the electronic device has not misjudged the first touch operation, and can perform anti-mistouch processing on it, that is, not respond to it.
With reference to the first aspect, in another possible design, the preset mistouch operation (that is, the first touch operation) recognized by the electronic device may include one or more touch operations. For example, the first touch operation may include the touch operation corresponding to the first contact surface and the touch operations corresponding to the x second contact surfaces. To avoid the electronic device misjudging some of these one or more touch operations as preset mistouch operations, the method of this embodiment further includes: when the electronic device determines that the movement distance of at least one touch operation in the first touch operation within the second preset time is greater than the third distance threshold, the electronic device responds to the at least one touch operation and executes the event corresponding to it, and does not respond to the other touch operations in the first touch operation besides the at least one touch operation.
It can be understood that if the electronic device determines that the movement distance of at least one touch operation in the first touch operation within the second preset time is greater than the third distance threshold, the electronic device has misjudged the at least one touch operation. The electronic device can then perform false-rejection recovery processing on the at least one touch operation, that is, respond to it and execute the corresponding event. For the other touch operations in the first touch operation besides the at least one touch operation, no misjudgment occurred, and the electronic device does not respond to them.
With reference to the first aspect, in another possible design, the method of this embodiment further includes: if the electronic device receives the user's second touch operation in the second side arc area after a third preset time, the electronic device, in response to the second touch operation, executes the event corresponding to the second touch operation, where the third preset time is a period of a second preset duration starting from when the electronic device recognizes the first touch operation as the preset mistouch operation.
With reference to the first aspect, in another possible design, the first preset angle range includes at least one of [-n°, n°] and [90°-m°, 90°+m°], where the value range of n includes at least any one of (0, 20), (0, 15), or (0, 10), and the value range of m includes at least any one of (0, 20), (0, 15), or (0, 10). The second preset angle range is [-k°, k°], where the value range of k includes at least any one of (0, 15), (0, 10), or (0, 5).
With reference to the first aspect, in another possible design, the electronic device obtaining the angle between the touch screen and the horizontal plane includes: the electronic device obtains the angle between the touch screen and the horizontal plane through one or more sensors, where the one or more sensors include at least a gyroscope sensor.
With reference to the first aspect, in another possible design, the electronic device further includes a structured light camera module, which includes a light projector, a first camera, and a second camera, the distance between the first camera and the second camera being a first length. The electronic device obtaining the distance between the electronic device and the user in response to the camera capturing a face image includes: in response to the camera capturing the face image, the electronic device emits light information through the light projector, collects first image information of the face of the user corresponding to the face image through the first camera, and collects second image information of the face through the second camera, where the first image information and the second image information include features of the face; the electronic device calculates depth information of the face according to the first image information, the second image information, the first length, and the lens focal lengths of the first camera and the second camera; and the electronic device calculates the distance between the electronic device and the user, and the user's face yaw, according to the depth information of the face.
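The dual-camera depth computation above follows the standard binocular triangulation relation (depth = focal length × baseline / disparity), which is presumably what FIG. 7 illustrates. The sketch below shows the per-feature depth for one facial feature point; the function name, the pixel-unit focal length, and the treatment of both cameras as sharing one focal length are simplifying assumptions.

```python
def feature_depth(f_px: float, baseline_mm: float,
                  x_left_px: float, x_right_px: float) -> float:
    """Binocular triangulation sketch: depth of one facial feature from
    its horizontal positions in the two cameras' images. f_px is the
    lens focal length in pixels, baseline_mm is the first length between
    the first camera and the second camera, and the disparity is the
    horizontal shift of the same feature between the two images."""
    disparity = x_left_px - x_right_px
    return f_px * baseline_mm / disparity
```

Repeating this over many matched face features yields the depth information of the face, from which the device-to-user distance and face yaw can then be derived.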
With reference to the first aspect, in another possible design, the electronic device further includes a distance sensor. The electronic device obtaining the distance between the electronic device and the user in response to the camera capturing a face image includes: in response to the camera capturing the face image, the electronic device obtains the distance between the electronic device and the user through the distance sensor.
According to a second aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a touch screen, and a camera, where the touch screen is a curved screen with curved sides. The processor is configured to obtain the angle between the touch screen and the horizontal plane and, in response to the angle being within a first preset angle range, start the camera. The camera is configured to capture images. The processor is further configured to: in response to the camera capturing a face image, obtain the distance between the electronic device and the user and the user's face yaw, where the face yaw is the left-right rotation angle of the user's facial orientation relative to a first connecting line, and the first connecting line is the line between the camera and the user's head; and, in response to the distance between the electronic device and the user being less than a first distance threshold and the face yaw being within a second preset angle range, perform anti-mistouch processing on the user's preset mistouch operation on the touch screen. When the user performs the preset mistouch operation, the contact surfaces between the user's hand and the touch screen are: a first contact surface in the first side arc area of the touch screen, and x second contact surfaces in the second side arc area of the touch screen, where 1 ≤ x ≤ 4 and x is a positive integer.
With reference to the second aspect, in another possible design, the first contact surface is the contact surface, collected by the electronic device when held by the user, between the touch screen and the purlicue of the user's hand; the second contact surface is the contact surface, collected by the electronic device when held by the user, between the touch screen and the user's fingers.
With reference to the second aspect, in another possible design, the preset mistouch operation includes: a touch operation, collected by the electronic device when held by the user, whose contact duration with the first side arc area is greater than the first preset time and whose movement distance in the first side arc area is less than the second distance threshold.
With reference to the second aspect, in another possible design, the processor performing anti-mistouch processing on the user's preset mistouch operation on the touch screen includes: the processor is specifically configured to receive the user's first touch operation on the touch screen, use the preset anti-mistouch algorithm to recognize that the first touch operation is the preset mistouch operation, and not respond to the first touch operation.
With reference to the second aspect, in another possible design, the processor is further configured to: after recognizing, using the preset anti-mistouch algorithm, that the first touch operation is the preset mistouch operation, and before declining to respond to it, determine that the movement distance of the first touch operation within the second preset time is less than or equal to the third distance threshold, where the second preset time is a period of the first preset duration starting from when the electronic device recognizes the first touch operation as the preset mistouch operation.
With reference to the second aspect, in another possible design, the first touch operation includes one or more touch operations. The processor is further configured to: when determining that the movement distance of at least one touch operation in the first touch operation within the second preset time is greater than the third distance threshold, respond to the at least one touch operation and execute the event corresponding to it, and not respond to the other touch operations in the first touch operation besides the at least one touch operation.
With reference to the second aspect, in another possible design, the processor is further configured to: if the user's second touch operation in the second side arc area is received after the third preset time, respond to the second touch operation and execute the event corresponding to it, where the third preset time is a period of the second preset duration starting from when the electronic device recognizes the first touch operation as the preset mistouch operation.
With reference to the second aspect, in another possible design, the first preset angle range includes at least one of [-n°, n°] and [90°-m°, 90°+m°], where the value range of n includes at least any one of (0, 20), (0, 15), or (0, 10), and the value range of m includes at least any one of (0, 20), (0, 15), or (0, 10). The second preset angle range is [-k°, k°], where the value range of k includes at least any one of (0, 15), (0, 10), or (0, 5).
With reference to the second aspect, in another possible design, the electronic device further includes one or more sensors, including at least a gyroscope sensor. The processor obtaining the angle between the touch screen and the horizontal plane includes: the processor is specifically configured to obtain the angle between the touch screen and the horizontal plane through the one or more sensors.
With reference to the second aspect, in another possible design, the electronic device further includes a structured light camera module, which includes a light projector, a first camera, and a second camera, the distance between the first camera and the second camera being the first length. The processor obtaining the distance between the electronic device and the user in response to the camera capturing a face image includes: the processor is specifically configured to, in response to the camera capturing the face image, emit light information through the light projector, collect first image information of the face of the user corresponding to the face image through the first camera, and collect second image information of the face through the second camera, where the first image information and the second image information include features of the face; calculate depth information of the face according to the first image information, the second image information, the first length, and the lens focal lengths of the first camera and the second camera; and calculate the distance between the electronic device and the user, and the user's face yaw, according to the depth information of the face.
With reference to the second aspect, in another possible design, the electronic device further includes a distance sensor. The processor obtaining the distance between the electronic device and the user in response to the camera capturing a face image includes: the processor is specifically configured to, in response to the camera capturing the face image, obtain the distance between the electronic device and the user through the distance sensor.
According to a third aspect, this application provides a chip system applied to an electronic device including a touch screen. The chip system includes one or more interface circuits and one or more processors, interconnected by wires. The interface circuit is configured to receive a signal from the memory of the electronic device and send the signal to the processor, where the signal includes computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device performs the method described in the first aspect and any of its possible designs.
According to a fourth aspect, an embodiment of the present application provides a computer storage medium including computer instructions. When the computer instructions run on an electronic device, the electronic device performs the anti-mistouch method for a curved screen described in the first aspect and any of its possible designs.
According to a fifth aspect, an embodiment of the present application provides a computer program product. When the computer program product runs on a computer, the computer performs the anti-mistouch method for a curved screen described in the first aspect and any of its possible designs.
It can be understood that the electronic device of the second aspect and its possible designs, the chip system of the third aspect, the computer storage medium of the fourth aspect, and the computer program product of the fifth aspect are all used to perform the corresponding methods provided above. For their beneficial effects, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the product form of a curved-screen mobile phone according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a curved-screen mobile phone held by a user according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the software system architecture of an electronic device according to an embodiment of the present application;
FIG. 5 is a flowchart of an anti-mistouch method for a curved screen according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a gyroscope coordinate system and a geographic coordinate system according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the calculation principle of depth information according to an embodiment of the present application;
FIG. 8 is a schematic diagram of face yaw according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the algorithm logic architecture of a preset anti-mistouch algorithm according to an embodiment of the present application;
FIG. 10 is a schematic diagram of the algorithm logic architecture of a preset anti-mistouch algorithm according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description
Hereinafter, the terms "first" and "second" are used only for descriptive purposes and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of this embodiment, unless otherwise specified, "a plurality of" means two or more.
Embodiments of the present application provide an anti-mistouch method for a curved screen, applicable to an electronic device whose touch screen is a curved screen with curved sides. For example, take the case in which the electronic device is the curved-screen mobile phone shown in FIG. 1. Part (a) of FIG. 1 shows a perspective view of the curved-screen mobile phone 100, and part (b) of FIG. 1 shows a front view of it. As shown in (a) and (b) of FIG. 1, the touch screen of the mobile phone 100 is a curved screen whose left side 10 and right side 20 are curved.
Because the touch screen of a curved-screen mobile phone is a curved screen with curved sides, when the user holds the phone, the user's fingers contact the arc areas of the touch screen over a large area. For example, as shown in (a) of FIG. 2, take the user holding the curved-screen phone with the right hand. As shown in (b) of FIG. 2, the contact surface between the purlicue and thumb of the user's right hand and the left arc area of the curved screen is contact surface 1, and the contact surface between the other fingers of the right hand and the right arc area of the curved screen is contact surface 2. Contact surface 2 may include 1 to 4 contact points; in FIG. 2, contact surface 2 includes 4 contact points as an example.
In a conventional anti-mistouch solution, after collecting the user's touch operation on the touch screen, the phone can perform anti-mistouch processing on small-area contact points between the user and the sides of the touch screen, but not on larger-area contact points such as contact surface 1 and contact surface 2 shown in FIG. 2. Thus, with a conventional anti-mistouch solution, the mobile phone 100 does not perform anti-mistouch processing on the touch operations corresponding to contact surface 1 and contact surface 2, increasing the possibility of user misoperation. That is, conventional anti-mistouch solutions are not applicable to electronic devices with the above curved screen.
In particular, the above problem is especially obvious in the following scenarios (1) and (2). Scenario (1): the user lies flat and holds the phone with one hand. Scenario (2): the user lies on one side and holds the phone with one hand.
Specifically, in scenarios (1) and (2), the way the user holds the phone is relatively fixed, the grip force is large, and the contact areas between the user's fingers and the left and right arc areas of the curved screen are large. Therefore, in these two scenarios, the user holding the curved-screen phone is more likely to mistouch the sides of the curved screen. With a conventional anti-mistouch solution, the phone cannot perform anti-mistouch processing on the touch operations corresponding to those contact surfaces.
Moreover, if the mobile phone 100 does not perform anti-mistouch processing on the touch operations corresponding to contact surfaces 1 and 2, that is, it recognizes them as normal touch operations (non-mistouch operations), then while contact surfaces 1 and 2 exist, the phone 100 cannot respond to the user's other non-mistouch operations on the curved screen, so the problem of user taps failing occurs, affecting the user experience.
In the anti-mistouch method for a curved screen provided in the embodiments of the present application, the electronic device can identify the scenario it is in; when the electronic device identifies that it is in the preset mistouch scenario, it can start the preset anti-mistouch algorithm, which implements mistouch prevention on the sides of the curved screen. That is, using the preset anti-mistouch algorithm, the electronic device can recognize the touch operations corresponding to contact surfaces 1 and 2 shown in (b) of FIG. 2 as mistouch operations, which can improve the accuracy of mistouch prevention.
Moreover, if the electronic device can recognize the touch operations corresponding to contact surfaces 1 and 2 shown in (b) of FIG. 2 as mistouch operations, then while contact surfaces 1 and 2 exist, the electronic device can respond to the user's other non-mistouch operations on the curved screen; the problem of user taps failing does not occur, which can improve the user's experience with curved-screen devices.
For example, when the user uses the electronic device in scenario (1) or scenario (2), the electronic device may be in the above preset mistouch scenario. For a detailed description of the preset mistouch scenario, refer to the following embodiments; details are not repeated here.
Exemplarily, the electronic device in the embodiments of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or another device including the above curved screen. The specific form of the electronic device is not specially limited in the embodiments of the present application.
The implementation of the embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Refer to FIG. 3, a schematic structural diagram of an electronic device 300 according to an embodiment of the present application. As shown in FIG. 3, the electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, a headset jack 370D, a sensor module 380, a button 390, a motor 391, an indicator 392, a camera 393, a display screen 394, a subscriber identification module (SIM) card interface 395, and the like. The sensor module 380 may include a pressure sensor 380A, a gyroscope sensor 380B, a barometric pressure sensor 380C, a magnetic sensor 380D, an acceleration sensor 380E, a distance sensor 380F, a proximity light sensor 380G, a fingerprint sensor 380H, a temperature sensor 380J, a touch sensor 380K, an ambient light sensor 380L, a bone conduction sensor 380M, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 300. In other embodiments, the electronic device 300 may include more or fewer components than shown, combine some components, split some components, or use a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 310 may include one or more processing units. For example, the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 300. The controller can generate operation control signals according to the instruction operation code and timing signals, and complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache. The memory can hold instructions or data that the processor 310 has just used or uses cyclically. If the processor 310 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 310, and thus improves system efficiency.
In some embodiments, the processor 310 may include one or more interfaces, which may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
It can be understood that the interface connection relationships among the modules illustrated in this embodiment are merely illustrative and do not constitute a structural limitation on the electronic device 300. In other embodiments, the electronic device 300 may also use interface connection modes different from those in the foregoing embodiment, or a combination of multiple interface connection modes.
The charging management module 340 is configured to receive charging input from a charger, which may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 340 may receive the charging input of a wired charger through the USB interface 330. In some wireless charging embodiments, the charging management module 340 may receive wireless charging input through the wireless charging coil of the electronic device 300. While charging the battery 342, the charging management module 340 may also supply power to the electronic device through the power management module 341.
The power management module 341 is configured to connect the battery 342, the charging management module 340, and the processor 310. The power management module 341 receives input from the battery 342 and/or the charging management module 340, and supplies power to the processor 310, the internal memory 321, the external memory, the display screen 394, the camera 393, the wireless communication module 360, and the like. The power management module 341 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance). In some other embodiments, the power management module 341 may also be provided in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may also be provided in the same device.
The wireless communication function of the electronic device 300 can be implemented through the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 300 can be used to cover one or more communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with a tuning switch.
The mobile communication module 350 can provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 300. The mobile communication module 350 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 350 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 350 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation. In some embodiments, at least some functional modules of the mobile communication module 350 may be provided in the processor 310. In some embodiments, at least some functional modules of the mobile communication module 350 and at least some modules of the processor 310 may be provided in the same device.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal, and then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 370A and the receiver 370B) or displays an image or video through the display screen 394. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 310 and provided in the same device as the mobile communication module 350 or other functional modules.
The wireless communication module 360 can provide wireless communication solutions applied to the electronic device 300, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like. The wireless communication module 360 may be one or more devices integrating at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor 310. The wireless communication module 360 can also receive a signal to be sent from the processor 310, frequency-modulate and amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, the antenna 1 of the electronic device 300 is coupled with the mobile communication module 350, and the antenna 2 is coupled with the wireless communication module 360, so that the electronic device 300 can communicate with networks and other devices through wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 300 implements the display function through the GPU, the display screen 394, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 394 is used to display images, videos, and the like. The display screen 394 is a touch screen, and the touch screen is a curved screen with curved sides. The display screen 394 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flex light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
The electronic device 300 can implement the shooting function through the ISP, the camera 393, the video codec, the GPU, the display screen 394, the application processor, and the like.
The ISP is used to process the data fed back by the camera 393. For example, when taking a photo, the shutter is opened, light is transmitted through the lens to the camera photosensitive element, the light signal is converted into an electrical signal, and the camera photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 393.
The camera 393 is used to capture still images or videos. An object generates an optical image through the lens and projects it onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal, and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 300 may include 1 or N cameras 393, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 300 selects a frequency point, the digital signal processor is used to perform Fourier transform and the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 300 can support one or more video codecs. In this way, the electronic device 300 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example, the transmission mode between neurons in the human brain, it rapidly processes input information and can continuously self-learn. Through the NPU, applications such as intelligent cognition of the electronic device 300 can be implemented, for example, image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 320 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 300. The external memory card communicates with the processor 310 through the external memory interface 320 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
The internal memory 321 may be used to store computer executable program code, which includes instructions. By running the instructions stored in the internal memory 321, the processor 310 executes various functional applications and data processing of the electronic device 300. For example, in this embodiment of the present application, the processor 310 may, by executing the instructions stored in the internal memory 321, display corresponding display content on the display screen 394 in response to the user's first or second operation on the display screen 394. The internal memory 321 may include a program storage area and a data storage area. The program storage area can store the operating system and applications required by at least one function (such as a sound playback function and an image playback function). The data storage area can store data created during the use of the electronic device 300 (such as audio data and a phone book). In addition, the internal memory 321 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, universal flash storage (UFS), etc.
The electronic device 300 can implement audio functions, such as music playback and recording, through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the headset jack 370D, the application processor, and the like.
The audio module 370 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The audio module 370 can also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be provided in the processor 310, or some functional modules of the audio module 370 may be provided in the processor 310. The speaker 370A, also called the "horn", is used to convert audio electrical signals into sound signals; the electronic device 300 can play music or hands-free calls through the speaker 370A. The receiver 370B, also called the "earpiece", is used to convert audio electrical signals into sound signals; when the electronic device 300 answers a call or a voice message, the voice can be heard by placing the receiver 370B close to the ear. The microphone 370C, also called the "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call, sending a voice message, or triggering the electronic device 300 to perform certain functions through a voice assistant, the user can speak close to the microphone 370C to input the sound signal. The electronic device 300 may be provided with at least one microphone 370C. In other embodiments, the electronic device 300 may be provided with two microphones 370C to implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 300 may be provided with three, four, or more microphones 370C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
The headset jack 370D is used to connect wired headsets. The headset jack 370D may be the USB interface 330, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 380A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 380A may be provided on the display screen 394. There are many types of pressure sensors 380A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 380A, the capacitance between the electrodes changes, and the electronic device 300 determines the strength of the pressure according to the change in capacitance. When a touch operation acts on the display screen 394, the electronic device 300 detects the strength of the touch operation according to the pressure sensor 380A, and can also calculate the touch position according to the detection signal of the pressure sensor 380A. In some embodiments, touch operations acting on the same touch position but with different touch operation strengths may correspond to different operation instructions. For example, when a touch operation whose strength is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose strength is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyroscope sensor 380B may be used to determine the motion posture of the electronic device 300. In some embodiments, the angular velocities of the electronic device 300 around three axes (that is, the x, y, and z axes) may be determined through the gyroscope sensor 380B. The gyroscope sensor 380B can be used for image stabilization. In this embodiment of the present application, the display screen 394 (that is, the curved screen) of the electronic device 300 may include a gyroscope sensor (such as the gyroscope sensor 380B) for measuring the orientation of the display screen 394 (that is, the direction vector of the orientation). The orientation of the display screen 394 can be used to determine the angle between the display screen 394 and the horizontal plane.
The magnetic sensor 380D includes a Hall sensor. The electronic device 300 can use the magnetic sensor 380D to detect the opening and closing of a flip leather case. The acceleration sensor 380E can detect the magnitude of acceleration of the electronic device 300 in various directions (generally three axes), and can detect the magnitude and direction of gravity when the electronic device 300 is stationary.
The distance sensor 380F is used to measure distance. The electronic device 300 can measure distance by infrared or laser. For example, in this embodiment of the present application, the electronic device 300 can measure the distance between the electronic device 300 and the face through the distance sensor 380F.
The proximity light sensor 380G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 300 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 300; when insufficient reflected light is detected, the electronic device 300 can determine that there is no object near it.
The ambient light sensor 380L is used to sense the brightness of ambient light. The electronic device 300 can adaptively adjust the brightness of the display screen 394 according to the perceived ambient light brightness. The ambient light sensor 380L can also be used to automatically adjust the white balance when taking pictures, and can cooperate with the proximity light sensor 380G to detect whether the electronic device 300 is in a pocket, to prevent mistouch.
The fingerprint sensor 380H is used to collect fingerprints. The electronic device 300 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and the like.
The temperature sensor 380J is used to detect temperature. In some embodiments, the electronic device 300 uses the temperature detected by the temperature sensor 380J to execute a temperature processing policy. For example, when the temperature reported by the temperature sensor 380J exceeds a threshold, the electronic device 300 reduces the performance of the processor located near the temperature sensor 380J to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 300 heats the battery 342 to avoid abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is lower than yet another threshold, the electronic device 300 boosts the output voltage of the battery 342 to avoid abnormal shutdown caused by low temperature.
The touch sensor 380K is also called a "touch panel". The touch sensor 380K may be provided on the display screen 394, and the touch sensor 380K and the display screen 394 form a touch screen, also called a "touch-controlled screen". The touch sensor 380K is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the touch event type, and visual output related to the touch operation can be provided through the display screen 394. In other embodiments, the touch sensor 380K may also be provided on the surface of the electronic device 300, at a position different from that of the display screen 394.
The bone conduction sensor 380M can acquire vibration signals. In some embodiments, the bone conduction sensor 380M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 380M can also contact the human pulse and receive the blood pressure beat signal.
The buttons 390 include a power button, volume buttons, and the like. The buttons 390 may be mechanical buttons or touch buttons. The electronic device 300 can receive button input and generate key signal input related to the user settings and function control of the electronic device 300. The motor 391 can generate vibration prompts, and can be used for incoming call vibration prompts and touch vibration feedback. The indicator 392 may be an indicator light, which can be used to indicate the charging state and battery level changes, and can also be used to indicate messages, missed calls, notifications, etc. The SIM card interface 395 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 395, or pulled out of the SIM card interface 395, to achieve contact with and separation from the electronic device 300.
The methods in the following embodiments can all be implemented in the electronic device 300 with the above hardware structure.
Refer to FIG. 4, which, taking the Android system as an example, shows a schematic diagram of the software system architecture of an electronic device 300 according to an embodiment of the present application. As shown in FIG. 4, the software system of the electronic device 300 may include a framework (Framework) layer, a hardware abstraction layer (HAL), and a kernel (Kernel) layer.
The display screen (such as the touch screen) of the mobile phone includes a touch sensor for collecting the user's touch operations on the touch screen and obtaining the touch information corresponding to a touch operation (such as touch information 1). The touch information may include the size and position of the touch surface corresponding to the touch operation, the pressing force of the touch operation, and other information. After the touch sensor collects touch information 1, the driver of the Kernel layer (such as the touch sensor driver) can report the touch information 1 to the hardware abstraction layer HAL (that is, execute S1 shown in FIG. 4). The hardware abstraction layer HAL includes multiple TP algorithms, such as a calibration algorithm and the preset anti-mistouch algorithm in this embodiment of the present application. The TP algorithms in the HAL can process touch information 1 (including anti-mistouch processing) to obtain touch information 2. The HAL can then send touch information 2 to the input system of the Kernel layer (that is, execute S2 shown in FIG. 4). After receiving touch information 2, the input system of the Kernel layer can report the touch information to the upper layer (such as the Framework layer), which responds to the touch operation according to the touch information.
In this embodiment of the present application, when the mobile phone is in the preset mistouch scenario, the phone can start the preset anti-mistouch algorithm among the above multiple TP algorithms, and the hardware abstraction layer HAL uses the preset anti-mistouch algorithm to perform, on touch information 1, the anti-mistouch processing described in this embodiment of the present application.
Exemplarily, the phone being in the preset anti-mistouch scenario may specifically be: the angle between the phone's touch screen and the horizontal plane is within the first preset angle range; the phone's camera captures the user's face image; the distance between the phone's touch screen and the user's face is less than the first distance threshold; and the user's face yaw is within the second preset angle range. The user's face yaw is the left-right rotation angle of the user's facial orientation relative to the first connecting line, which is the line between the phone's camera and the user's head.
With reference to the software system architecture shown in FIG. 4, the working principle by which the electronic device (such as a mobile phone) in this embodiment starts the preset anti-mistouch algorithm is described here: when the phone is powered on, a system server (system_server, SS for short) process can be started through the zygote process; the SS process is used to determine whether the phone is in the preset mistouch scenario. If the SS process determines that the phone is in the preset mistouch scenario, the preset anti-mistouch algorithm can be enabled.
For example, after the phone is powered on, the gravity angle detection module (such as the gyroscope sensor) of the phone can collect the angle between the phone's curved screen and the horizontal plane. As shown in FIG. 4, the SS process monitors changes in the angle collected by the gravity angle detection module. If the SS process detects that the angle between the curved screen and the horizontal plane is within the first preset angle range, the 3D structured light detection module can be started. The 3D structured light detection module may include a camera, a distance sensor, an infrared light sensor, and the like, and performs the collection of the face image, the distance between the face and the phone, and the angle between the face and the phone (that is, the face yaw). Specifically, if the phone's camera captures the user's face image, the distance between the curved screen and the user's face is less than the first distance threshold, and the user's face yaw is within the second preset angle range, the SS process can enable the preset anti-mistouch algorithm, that is, execute S0 shown in FIG. 4.
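The condition chain the SS process monitors can be condensed into one predicate. This is an illustrative sketch only: the function name and all threshold values (n = m = 10 for the angle ranges, a 200 mm first distance threshold, k = 10 for the yaw range) are assumptions chosen from the example ranges given elsewhere in the text.

```python
def in_preset_scene(angle_deg: float, face_detected: bool,
                    distance_mm: float, yaw_deg: float,
                    n: float = 10.0, m: float = 10.0,
                    d1_mm: float = 200.0, k: float = 10.0) -> bool:
    """Preset anti-mistouch scenario check: screen/horizontal angle within
    [-n, n] or [90-m, 90+m], a face image captured, face distance below
    the first distance threshold, and face yaw within [-k, k]."""
    angle_ok = (-n <= angle_deg <= n) or (90 - m <= angle_deg <= 90 + m)
    return (angle_ok and face_detected
            and distance_mm < d1_mm and -k <= yaw_deg <= k)
```

In the described flow the checks are staged (the camera module only starts once the angle test passes), but the overall enable decision is the conjunction shown here.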
Exemplarily, the preset anti-mistouch algorithm may be an AFT 2SA algorithm (Algorithm). The preset anti-mistouch algorithm can be integrated in the TP daemon of the HAL layer. The preset anti-mistouch algorithm is used to judge the position and shape of the contact surface corresponding to the touch operations collected by the TP, that is, to identify preset touch operations (such as the touch operations corresponding to contact surfaces 1 and 2 above) from the touch operations collected by the TP.
When the Android system starts, after the Linux kernel is loaded, the init process is started first; the init process then loads the Android file system, creates system directories, initializes the property system, and starts some daemon processes, among which the zygote process is the most important. The SS process is the first process forked by the zygote process and is the core process of the entire Android system.
The technical solutions provided in the embodiments of the present application are described in detail below by taking the electronic device being a mobile phone as an example. The touch screen of the phone is a curved screen with curved sides. The anti-mistouch method for the curved screen may include: (1) a preset anti-mistouch scenario judgment procedure; and (2) an anti-mistouch processing procedure.
The phone can first execute (1) the preset anti-mistouch scenario judgment procedure to determine whether the phone is in the preset anti-mistouch scenario. If the phone is in the preset anti-mistouch scenario, the phone can execute (2) the anti-mistouch processing procedure, start the preset anti-mistouch algorithm, and perform anti-mistouch processing on the preset mistouch operation.
如图5所示,上述(1)预设的防误触场景判断流程,可以包括S501-S508:
S501、手机获取手机的触摸屏(即曲面屏)与水平面的夹角α。
示例性的,手机中可以包括陀螺仪传感器。该陀螺仪传感器用于测量触摸屏的朝向(即朝向的方向向量)。手机可根据触摸屏的朝向,确定手机的触摸屏(即曲面屏)与水平面的夹角。
本申请实施例这里,对陀螺仪传感器测量触摸屏的朝向(即朝向的方向向量a),以及手机根据触摸屏的朝向计算手机的触摸屏(即曲面屏)与水平面的夹角α的原理进行说明。
其中，陀螺仪传感器的坐标系是地理坐标系。如图6中的(b)所示，地理坐标系的原点O位于运载体(即包含陀螺仪传感器的设备，如手机)所在的点，x轴沿当地纬线指向东(E)，y轴沿当地子午线指向北(N)，z轴沿当地地理垂线指向上，并与x轴和y轴构成右手直角坐标系。其中，x轴与y轴构成的平面即为当地水平面，y轴与z轴构成的平面即为当地子午面。例如，如图6中的(a)所示，xOy面是当地水平面，yOz面即为当地子午面。
因此，可以理解的是，陀螺仪传感器的坐标系是：以陀螺仪传感器为原点O，沿当地纬线指向东为x轴，沿当地子午线指向北为y轴，沿当地地理垂线指向上(即地理垂线的反方向)为z轴。
手机利用触摸屏中设置的陀螺仪传感器,可测量得到触摸屏在其设置的陀螺仪传感器的坐标系中的朝向的方向向量。例如,参考图6中的(a)所示的手机的立体图,手机测量得到的触摸屏(即曲面屏)在陀螺仪传感器的坐标系中的朝向的方向向量为向量a。手机可计算得到向量a与水平面的夹角θ。
又根据图6中的(a)可知,由于向量a与曲面屏垂直,因此,可以得到曲面屏与水平面的夹角α=90°-θ。即手机根据测量得到的曲面屏在陀螺仪传感器的坐标系中的朝向的方向向量(即向量a),便可确定出曲面屏与水平面的夹角α。
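上述由方向向量求夹角α的换算，可用如下Python片段示意(非本申请方案的一部分，函数名与坐标约定均为示例假设)：

```python
import math

def screen_angle_with_horizontal(a):
    """根据曲面屏朝向的方向向量a(与屏幕垂直)计算屏幕与水平面的夹角α(单位:度)。

    坐标系采用正文所述地理坐标系:x轴指东,y轴指北,z轴沿地理垂线指向上。
    向量a与水平面(xOy面)的夹角θ满足sinθ=|a_z|/|a|;
    由于向量a与曲面屏垂直,故α=90°-θ。
    """
    ax, ay, az = a
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    theta = math.degrees(math.asin(abs(az) / norm))  # 向量a与水平面的夹角θ
    return 90.0 - theta  # 曲面屏与水平面的夹角α
```

例如，手机平放在桌面上时a≈(0,0,1)，算得α≈0°；手机竖直放置时a的z分量为0，算得α≈90°。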
需要注意的是,本申请实施例中,手机获取手机的触摸屏(即曲面屏)与水平面的夹角的方法,包括但不限于上述通过陀螺仪传感器获取夹角的方法。
S502、手机判断夹角α是否在第一预设角度范围内。
本申请实施例中,可以统计用户在上述场景(1)和场景(2)使用手机时,手机的触摸屏与水平面的夹角α的取值范围,确定第一预设角度范围。
在场景(1)中,用户平躺单手握持手机。一般而言,用户平躺握持手机时,手机的触摸屏与水平面的夹角α接近于0°。因此,针对场景(1),上述第一预设角度范围可以为在0°左右取值的角度范围。例如,第一预设角度范围可以为[-n°,n°]。例如,n的取值范围可以为(0,10),(0,5),(0,20)或者(0,15)等。例如,n的取值范围为(0,20)时,n=10,n=15,或者n=5等。
在场景(2)中,用户侧躺单手握持手机。一般而言,用户侧躺握持手机时,手机的触摸屏与水平面的夹角α接近于90°。因此,针对场景(2),上述第一预设角度范围可以为在90°左右取值的角度范围。例如,第一预设角度范围可以为[90°-m°,90°+m°]。例如,m的取值范围可以为(0,10),(0,5),(0,20)或者(0,15)等。例如,m的取值范围为(0,15)时,m=5,m=8,或者m=12等。
综上所述,上述第一预设角度范围可以包括[-n°,n°]和[90°-m°,90°+m°]两个角度范围。手机可以判断夹角α是否在[-n°,n°]或[90°-m°,90°+m°]任一角度范围内;如果夹角α在[-n°,n°]或[90°-m°,90°+m°]内,手机则可以继续执行S503。
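上述对夹角α是否落在任一角度范围内的判断，可用如下Python片段示意(n、m的具体取值仅为示例假设)：

```python
def in_first_preset_range(alpha, n=10, m=5):
    """判断夹角α是否在第一预设角度范围内,
    即是否落在[-n°, n°]或[90°-m°, 90°+m°]任一角度范围内。"""
    return (-n <= alpha <= n) or (90 - m <= alpha <= 90 + m)
```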
可以理解,如果夹角α在[-n°,n°]或[90°-m°,90°+m°]内,则表示手机在上述场景(1)或场景(2)中被用户握持的可能性较高。在这种情况下,用户握持手机对曲面屏侧边产生误触的可能性较高。因此,手机可以判断用户与手机的相对状态是否满足预设条件。其中,如果用户与手机的相对状态满足预设条件,则表示手机在上述场景(1)或场景(2)中被用户握持。此时,手机可启动预设的防误触算法,采用预设的防误触算法实现对曲面屏侧边的防误触。
在一些实施例中,用户与手机的相对状态满足预设条件,具体可以为:手机通过摄像头可采集到人脸图像,上述人脸图像对应的用户的人面偏航度在第二预设角度范围内。
其中,用户的人面偏航度是该用户的面部朝向相对于第一连线的左右旋转角度,第一连线是手机的摄像头与该用户的头部的连线。需要注意的是,人面偏航度的详细描述可以参考后续实施例中的内容,本申请实施例这里不予赘述。
在另一些实施例中,用户与手机的相对状态满足预设条件,具体可以为:手机通过摄像头可采集到人脸图像,上述人脸图像对应的人脸与手机之间的距离小于第一距离阈值,上述人脸图像对应的用户的人面偏航度在第二预设角度范围内。
如图5所示,在S502之后,如果夹角α在第一预设角度范围内,手机则可以执行S503-S509:
S503、手机开启摄像头,通过摄像头采集图像。
S504、手机识别摄像头采集的图像中是否包括人脸图像。
需要说明的是，手机开启摄像头，通过摄像头采集图像，以及识别摄像头采集的图像中是否包括人脸图像的方法可以参考常规技术中的具体方法，本申请实施例这里不予赘述。
具体的,如果摄像头采集的图像中包括人脸图像,手机则继续执行S505;如果摄像头采集的图像中不包括人脸图像,手机则执行S501。
S505、手机获取人脸与手机之间的距离。
在一些实施例中,上述摄像头可以是结构光摄像头模组。该结构光摄像头模组包括光投射器和两个摄像头(如第一摄像头和第二摄像头)。其中,光投射器用于向目标对象(如人脸)发射光信息。第一摄像头和第二摄像头用于拍摄目标对象。其中,第一摄像头和第二摄像头也可以称为双目摄像头。手机可以根据双目摄像头采集的目标对象(如人脸)的图像,计算该目标对象的深度信息;然后,根据目标对象的深度信息确定目标对象(如人脸)与手机之间的距离。
一般而言,目标对象(如人脸)是具备三维立体形态的物体。手机的摄像头拍摄该目标对象时,该目标对象上的各个特征(如人的鼻尖和眼睛)与摄像头之间的距离可能不同。目标对象上每个特征与摄像头之间的距离称为该特征(或者该特征所在点)的深度。目标对象上的各个点的深度组成该目标对象的深度信息。目标对象的深度信息可以表征目标对象的三维特征。
对于第一摄像头和第二摄像头而言，上述目标对象上的各个特征与摄像头之间的距离(即点的深度)可以为：该目标对象上的各个特征所在点与两个摄像头连线的垂直距离。例如，如图7所示，假设P为目标对象上的一个特征，特征P的深度为P到O_LO_R的垂直距离Z。其中，O_L为第一摄像头的位置，O_R为第二摄像头的位置。
手机可以根据双目摄像头对同一特征的视差,结合双目摄像头的硬件参数,采用三角定位原理计算目标对象上每一个特征的深度,得到目标对象的深度信息。
本申请实施例这里对手机根据视差计算深度信息的方法进行举例说明:
其中，结构光摄像头模组中第一摄像头和第二摄像头的位置不同。例如，如图7所示，O_L为第一摄像头的位置，O_R为第二摄像头的位置，O_L与O_R之间的距离为第一长度T，即O_LO_R=T。第一摄像头和第二摄像头的镜头焦距均为f。
特征P为目标对象的一个特征。特征P所在点与第一摄像头和第二摄像头连线的垂直距离为Z，即P的深度信息为Z。第一摄像头采集到目标对象的图像1，特征P在图像1的P_L点。第二摄像头采集到目标对象的图像2，特征P在图像2的P_R点。其中，图像1中的P_L点与图像2中的P_R点所对应的特征都是目标对象的特征P。
如图7所示，A_LC_L=A_RC_R=x，A_LB_L=B_LC_L=A_RB_R=B_RC_R=x/2。其中，特征P_L与A_L之间的距离为x_L，即特征P_L距离图像1的最左端的距离为x_L，即A_LP_L=x_L。特征P_R与A_R之间的距离为x_R，即特征P_R距离图像2的最左端的距离为x_R，即A_RP_R=x_R。A_LP_L与A_RP_R的差值为第一摄像头与第二摄像头对特征P的视差，即特征P的视差d=x_L-x_R。
由于P_LP_R平行于O_LO_R；因此，按照三角形原理可以得出以下公式(1)：

P_LP_R/O_LO_R=(Z-f)/Z    公式(1)

其中，P_LP_R=O_LO_R-B_LP_L-P_RB_R。O_LO_R=T，B_LP_L=A_LP_L-A_LB_L=x_L-x/2，P_RB_R=x/2-x_R。P_LP_R=T-(x_L-x/2)-(x/2-x_R)=T-(x_L-x_R)=T-d。
将P_LP_R=T-d，O_LO_R=T代入公式(1)可以得到：

(T-d)/T=(Z-f)/Z

即Z=f×T/d。
可知:特征P的深度Z可以通过两个摄像头之间的距离T、两个摄像头的镜头焦距f,以及视差d计算得到。
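上述Z=f·T/d的计算可用如下Python片段示意(单位换算与标定从略，函数名与参数均为示例假设)：

```python
def feature_depth(f, T, x_l, x_r):
    """按Z = f*T/d由双目视差计算特征P所在点的深度。

    f: 镜头焦距; T: 第一摄像头与第二摄像头之间的距离(第一长度);
    x_l/x_r: 特征在图像1/图像2中距图像最左端的距离; 视差d = x_l - x_r。
    """
    d = x_l - x_r  # 视差
    if d <= 0:
        raise ValueError("视差d应大于0")
    return f * T / d
```

例如，f=2、T=50、x_l=12、x_r=8时，视差d=4，算得深度Z=25(单位与输入一致)。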
由上述描述可知:上述第一摄像头采集的图像信息(即第一图像信息,如上述图像1)和第二摄像头采集的图像信息(即第二图像信息如上述图像2)中的特征越多越明显,手机识别到图像1和图像2中相同的特征则越多。手机识别到的相同的特征越多,手机则可以计算得到越多特征所在点的深度。由于该目标对象的深度信息由目标对象的多个点(即特征)的深度组成;因此,手机计算得到的点的深度越多,目标对象的深度信息则越准确。
其中，上述两个图像中相同的特征指的是：两个图像中对应同一个特征的信息。例如，图7所示的图像1中的A_L点对应人脸的左眼角，图像2中的A_R点也对应同一人脸的左眼角。B_L点对应上述人脸的右眼角，B_R点也对应该人脸的右眼角。双目摄像头对上述人脸的左眼角的视差为x_L1-x_R1。双目摄像头对上述人脸的右眼角的视差为x_L2-x_R2。
在一些实施例中,手机可将上述人脸中任一特征(如鼻头、左眼角、眉心、左眼球、右眼角、左嘴角或右嘴角等)的深度作为人脸与手机之间的距离。
在另一些实施例中,手机可以在获取到上述人脸的深度信息(包括人脸的多个特征的深度)后,计算该多个特征的深度的平均值,将该平均值作为人脸与手机之间的距离。
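将多个特征的深度的平均值作为人脸与手机之间的距离，可用如下Python片段示意(函数名为示例假设)：

```python
def face_distance(feature_depths):
    """计算人脸多个特征的深度的平均值,作为人脸与手机之间的距离。"""
    if not feature_depths:
        raise ValueError("深度列表不能为空")
    return sum(feature_depths) / len(feature_depths)
```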
需要注意的是,S503中手机开启摄像头,具体可以为:手机开启上述结构光摄像头模组,该结构光摄像头模组用于采集图像,并在采集的图像包括人脸图像时,用于手机计算人脸的深度信息。或者,S503中手机也可以只开启一个摄像头(如第一摄像头),如果手机识别第一摄像头采集的图像中包括人脸图像,则可以开启光投射器和另一个摄像头(如第二摄像头),用于手机计算人脸的深度信息。或者,手机中除上述结构光摄像头模组之外还可以包括一个摄像头(如第三摄像头);S503中,手机可开启该第三摄像头,如果手机识别第三摄像头采集的图像中包括人脸图像,则可以开启结构光摄像头模组,用于手机计算人脸的深度信息。
在另一些实施例中,手机可以通过距离传感器(如距离传感器380F)获取人脸与手机之间的距离。其中,该距离传感器用于测量距离。该距离传感器用于发射和接收红外或激光,手机可根据距离传感器接收的上述红外或激光(如红外或激光的能量)测量距离。
S506、手机判断人脸与手机之间的距离是否小于第一距离阈值。
其中,在上述场景(1)和场景(2)中,第一距离阈值不同。例如,在场景(1)中,第一距离阈值可以为20厘米(cm);而在场景(2)中,第一距离阈值可以为15cm。当然,上述第一距离阈值并不限于15cm和20cm。第一距离阈值可以是统计大量用户在上述场景(1)和场景(2)使用手机时,手机与人脸的距离得到的。或者,第一距离阈值可以由用户在手机中设置。
具体的,如果人脸与手机之间的距离小于第一距离阈值,手机则继续执行S507;如果人脸与手机之间的距离大于或等于第一距离阈值,手机则执行S501。
S507、手机获取上述人脸图像对应的用户的人面偏航度。
其中,用户的人面偏航度是该用户的面部朝向相对于第一连线的左右旋转角度,第一连线是手机的摄像头与该用户的头部的连线。
人面偏航度是用户的面部朝向和“摄像头与用户头部的连线”(即第一连线)的偏离角度。人面偏航度也可以是用户的面部朝向相对于第一连线的左右旋转角度。例如,摄像头与用户头部的连线可以为摄像头与用户头部的任一器官(如鼻子或者嘴巴等)的连线。
例如，如图8中的(a)所示，以用户A为例。O_PO_A是摄像头与用户A头部的连线，X_AO_A表示用户A的面部朝向。L_AO_A与用户A的面部朝向所在直线X_AO_A垂直，η_A=90°。用户A的人面偏航度β_A是X_AO_A与O_PO_A的夹角。以用户B为例。O_PO_B是摄像头与用户B头部的连线，X_BO_B表示用户B的面部朝向。L_BO_B与用户B的面部朝向所在直线X_BO_B垂直，η_B=90°。用户B的人面偏航度β_B是X_BO_B与O_PO_B的夹角。以用户C为例。O_PO_C是摄像头与用户C头部的连线，X_CO_C表示用户C的面部朝向。L_CO_C与用户C的面部朝向所在直线X_CO_C垂直，η_C=90°。用户C的人面偏航度β_C是X_CO_C与O_PO_C的夹角。
又例如，如图8中的(b)所示，以用户D为例。O_PO_D是摄像头与用户D头部的连线，X_DO_D表示用户D的面部朝向。L_DO_D与用户D的面部朝向所在直线X_DO_D垂直，η_D=90°。用户D的人面偏航度β_D是X_DO_D与O_PO_D的夹角。以用户E为例。O_PO_E是摄像头与用户E头部的连线，X_EO_E表示用户E的面部朝向。L_EO_E与用户E的面部朝向所在直线X_EO_E垂直，η_E=90°。用户E的人面偏航度β_E是X_EO_E与O_PO_E的夹角。以用户F为例。O_PO_F是摄像头与用户F头部的连线，X_FO_F表示用户F的面部朝向。L_FO_F与用户F的面部朝向所在直线X_FO_F垂直，η_F=90°。用户F的人面偏航度β_F是X_FO_F与O_PO_F的夹角。
S508、手机判断人面偏航度是否在第二预设角度范围内。
参考图8中的(a)和图8中的(b)可知：人面偏航度越接近于0°，用户关注手机触摸屏的可能性越高。例如，如图8中的(a)所示，用户C的人面偏航度β_C=0°，用户A的人面偏航度β_A和用户B的人面偏航度β_B均接近于0°。因此，图8中的(a)所示的用户A、用户B和用户C关注手机的触摸屏的可能性很高。
参考图8中的(a)和图8中的(b)可知：人面偏航度的绝对值越大，用户关注手机的触摸屏的可能性越低。例如，用户D的人面偏航度β_D的绝对值、用户E的人面偏航度β_E的绝对值，以及用户F的人面偏航度β_F的绝对值均较大。因此，图8中的(b)所示的用户D、用户E和用户F关注手机的触摸屏的可能性较低。
由上述描述可知:上述第二预设角度范围可以为在0°左右取值的角度范围。示例性的,第二预设角度范围可以为[-k°,k°]。例如,k的取值范围可以为(0,10)或者(0,5)等。例如,k=2,或者k=1,或者k=3等。
示例性的,手机可以通过人脸检测的方式获取摄像头(如上述第三摄像头)采集的人脸图像的人脸特征。该人脸特征可以包括上述人面偏航度。具体的,该人脸特征还可以包括人脸位置信息(faceRect)、人脸特征点信息(landmarks)和人脸姿态信息。该人脸姿态信息可以包括人面俯仰角度(pitch)、平面内旋转角度(roll)和人面偏航度(即左右旋转角度,yaw)。
其中,手机可以提供一个接口(如Face Detector接口),该接口可以接收摄像头拍摄的图片。然后,手机的处理器(如NPU)可以对该图片进行人脸检测,得到上述人脸特征。最后,手机可以返回检测结果(JSON Object),即上述人脸特征。
例如,以下为本申请实施例中,手机返回的检测结果(JSON)示例。
{
    "id": 0,
    "height": 1795,
    "left": 761,
    "top": 1033,
    "width": 1496,
    "pitch": -2.9191732,
    "roll": 2.732926,
    "yaw": 0.44898167
}
其中，上述代码中，“"id":0”表示上述人脸特征对应的人脸ID为0。其中，一张图片中可以包括一个或多个人脸图像。手机可以为该一个或多个人脸图像分配不同的ID，以标识人脸图像。
“"height":1795”表示人脸图像(即人脸图像在图片中所在的人脸区域)的高度为1795个像素点。“"left":761”表示人脸图像与图片左边界的距离为761个像素点。“"top":1033”表示人脸图像与图片上边界的距离为1033个像素点。“"width":1496”表示人脸图像的宽度为1496个像素点。“"pitch":-2.9191732”表示人脸ID为0的人脸图像的人面俯仰角度为-2.9191732°。“"roll":2.732926”表示人脸ID为0的人脸图像的平面内旋转角度为2.732926°。
“"yaw":0.44898167”表示人脸ID为0的人脸图像的人面偏航度(即左右旋转角度)β=0.44898167°。由β=0.44898167°,0.44898167°>0°可知,用户的面部朝向相对于摄像头与该用户头部的连线向右旋转0.44898167°。假设上述k=2,即上述第二预设角度范围为[-2°,2°]。由于β=0.44898167°,且0.44898167°∈[-2°,2°];因此,手机可以确定人面偏航度在第二预设角度范围内。
可以理解,如果人面偏航度在第二预设角度范围内,则表示用户的面部朝向相对于摄像头与用户头部之间的连线的旋转角度较小。此时,用户关注(在看或者凝视)手机的触摸屏的可能性较高,用户在上述场景(1)和场景(2)中使用手机的可能性较高。在这种情况下,手机可启动预设的防误触算法,采用预设的防误触算法实现对曲面屏侧边的防误触。具体的,如图5所示,在S508之后,如果人面偏航度在第二预设角度范围内,手机则执行(2)防误触处理流程(即S509)。在S508之后,如果人面偏航度不在第二预设角度范围内,手机则执行S501。
S509、手机对用户在触摸屏的预设误触操作进行防误触处理。
其中,用户执行预设误触操作时,手机采集到触摸屏与用户的手的接触面为:在触摸屏第一侧弧度区域(如图1中的(a)所示的右侧弧度区域20)的第一接触面,以及在触摸屏第二侧弧度区域(如图1中的(a)所示的左侧弧度区域10)的x个第二接触面,1≤x≤4,x为正整数。
其中,本申请实施例中,第一接触面是手机被用户握持时,手机采集的触摸屏与用户的手的虎口的接触面。该第一接触面的形状与用户握持手机时,用户的手的虎口与触摸屏的接触区域的形状类似。例如,第一接触面可以为图2中的(b)所示的接触面1。第二接触面是手机被用户握持时,手机采集的触摸屏与用户的手指的接触面。该第二接触面的形状与用户握持手机时,用户的手指与触摸屏的接触区域的形状类似。例如,第二接触面可以为图2中的(b)所示的接触面2。
示例性的,在S509之前,手机可接收用户对触摸屏的触摸操作。S509具体可以包括:手机采用预设的防误触算法,识别出用户对触摸屏的第一触摸操作是预设误触操作;手机不响应该第一触摸操作。
可以理解，手机可以实时采集用户在触摸屏上的触摸操作，该触摸操作可以包括：预设误触操作，以及用户对触摸屏的正常操作(如用户对触摸屏显示的图标的点击操作)。如此，手机则可识别出采集的触摸操作中的第一触摸操作是预设误触操作；然后，对该第一触摸操作进行防误触处理，即不响应该第一触摸操作。
示例性的,手机可通过识别用户对触摸屏的触摸操作与触摸屏的接触面的位置和形状,识别出预设误触操作。手机可将接触面为在触摸屏第一侧弧度区域(如图1中的(a)所示的右侧弧度区域20)的第一接触面,以及在触摸屏第二侧弧度区域(如图1中的(a)所示的左侧弧度区域10)的x个第二接触面的触摸操作(即第一触摸操作)识别为预设误触操作。
需要注意的是,上述实施例中以第一侧弧度区域是触摸屏右侧弧度区域,第二侧弧度区域是触摸屏左侧弧度区域为例,对本申请实施例的方法进行说明。当然,第一侧弧度区域也可以是触摸屏左侧弧度区域,第二侧弧度区域可以是触摸屏右侧弧度区域。本申请实施例对此不作限制。
本申请实施例中,由手机的HAL层识别预设误触操作,并对预设误触操作进行防误触处理。具体的,手机的TP采集到用户在触摸屏上的触摸操作后,向HAL层上报该触摸操作的触摸信息。其中,触摸操作的触摸信息可以包括:预设误触操作对应的触摸信息,和/或用户对触摸屏的正常操作对应触摸信息。例如,触摸信息可以包括触摸操作对应的接触面的位置、形状和大小等。
其中,HAL层可以配置有多个TP算法,如图9所示的TP算法1、预设的防误触算法(如AFT 2算法)和TP算法2等。每个TP算法都可以对底层上报的触摸信息进行处理。预设的防误触算法(如AFT 2算法)用于识别出第一触摸操作是预设误触操作。其中,预设的防误触算法可以通过执行图9所示的901和902,识别预设误触操作。如此,HAL层在向上层(如Framework层)上报触摸操作的触摸信息时,则可以仅上报用户对触摸屏的正常操作对应触摸信息,而不上报上述预设误触操作对应的触摸信息。如此,Framework层则不会接收到用户对触摸屏的预设误触操作,也不需要响应该预设误触操作,可以实现对预设误触操作防误触处理。
例如，如图9所示，HAL层可接收底层(如内核层)上报的触摸信息3；TP算法1可对触摸信息3进行处理，得到触摸信息1；然后，由预设的防误触算法对触摸信息1进行防误触处理。其中，预设的防误触算法可进行触摸屏第一侧弧度区域的第一接触面(即虎口大接触面)的识别，以及触摸屏第二侧弧度区域的第二接触面(即四指小接触面)的识别，以识别出预设误触操作。如此，预设的防误触算法向下一级TP算法或者上层(如Framework层)发送触摸信息时，则可以忽略(或拦截)预设误触操作对应的触摸信息。例如，如图9所示，预设的防误触算法向TP算法2发送的触摸信息2中不包括预设误触操作对应的触摸信息，仅包括触摸信息1中除预设误触操作对应的触摸信息之外的其他触摸信息。最后，TP算法2可对预设的防误触算法发送的触摸信息2进行处理，得到触摸信息4，并向上层上报触摸信息4。
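HAL层"拦截误触触摸信息、仅上报其余触摸信息"的过滤行为，可用如下Python片段示意(数据结构与判定函数均为示例假设)：

```python
def filter_touch_infos(touch_infos, is_preset_mistouch):
    """模拟预设的防误触算法对触摸信息的过滤:
    忽略(拦截)被判定为预设误触操作的触摸信息,仅向上层传递其余触摸信息。"""
    return [info for info in touch_infos if not is_preset_mistouch(info)]
```

例如，将"接触面是否位于侧边弧度区域"等判定逻辑作为is_preset_mistouch传入，返回值即为可继续上报的触摸信息。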
本申请实施例提供一种曲面屏的防误触方法,如果触摸屏与水平面的夹角在第一预设角度范围内,摄像头可采集到人脸图像,手机与用户之间的距离小于第一距离阈值,且用户的人面偏航度在第二预设角度范围内,那么用户在场景(1)和场景(2)中使用手机的可能性较高。如此,则可以确定手机处于预设的防误触场景。
在场景(1)和场景(2)中，用户握持电子设备的方式比较固定，用户握持电子设备的力度较大，且用户手指与曲面屏的左侧弧度区域和右侧弧度区域的接触面的面积较大。用户握持电子设备，更容易产生对曲面屏侧边的误触。采用常规的防误触方案，电子设备无法对上述接触面对应的触摸操作进行防误触处理。而本申请实施例中，手机处于预设的防误触场景时，可识别出该预设误触操作，并对预设误触操作进行防误触处理，可以提升防误触的准确性。
并且,如果手机可将预设误触操作识别为误触操作,那么在第一接触面和第二接触面存在的同时,手机便可以响应用户对曲面屏的其他非误触操作,则不会出现用户点击失效的问题,可以提升用户对曲面屏设备的使用体验。
可以理解，用户握持手机时所产生的上述预设误触操作的持续时间一般较长，而用户对触摸屏的正常操作的持续时间一般较短。为了提高手机识别预设误触操作的准确性，手机识别预设误触操作时，不仅可以参考触摸操作对应的接触面的形状，还可以判断触摸操作的持续时间是否大于预设时间(如第一预设时间)。具体的，本申请实施例中对预设误触操作做出如下限定。预设误触操作可以包括：手机被用户握持时，手机采集的、与第一侧弧度区域接触的持续时间大于第一预设时间，且在第一侧弧度区域的移动距离小于第二距离阈值的触摸操作。
也就是说，手机接收到用户对触摸屏的触摸操作，如果该触摸操作满足以下两个条件，便可以确定该触摸操作是预设误触操作。条件(1)：手机接收到的触摸操作对应的接触面为：在触摸屏第一侧弧度区域的第一接触面，以及在触摸屏第二侧弧度区域的x个第二接触面。条件(2)：手机接收到的触摸操作接触第一侧弧度区域的持续时间(即第一接触面的持续时间)大于第一预设时间，且在第一侧弧度区域的移动距离(即第一接触面的移动距离)小于第二距离阈值。例如，如图9所示，预设的防误触算法识别预设误触操作时，不仅可以执行901和902，判断触摸操作是否满足条件(1)；还可以执行903，判断触摸操作是否满足条件(2)。
例如,上述第二距离阈值可以为6毫米(mm)、5mm、2mm或者3mm等。第一预设时间可以为2秒(s)、3s或1s等。
可以理解,由于用户握持手机时所产生的上述预设误触操作的持续时间一般较长,而用户对触摸屏的正常操作的持续时间一般较短。因此,手机接收到一个触摸操作时,如果该触摸操作对应的接触面在第一侧弧度区域的持续时间较长(如大于第一预设时间),且在第一侧弧度区域的移动距离较小(如小于第二距离阈值),则表示该触摸操作是手机被握持产生的误触操作的可能性较高。因此,手机通过上述条件(1)和条件(2)的双重判定,可以提升手机识别预设误触操作的准确度。
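条件(1)与条件(2)的双重判定可用如下Python片段示意(阈值取值与数据结构均为示例假设)：

```python
def is_preset_mistouch(contact_faces, duration_s, move_mm, t1_s=2.0, d2_mm=5.0):
    """判断一次触摸操作是否同时满足条件(1)和条件(2)。

    contact_faces: 形如{"first_side": 1, "second_side": x}的接触面统计;
    duration_s: 第一侧弧度区域接触面的持续时间(秒);
    move_mm: 第一侧弧度区域接触面的移动距离(毫米);
    t1_s: 第一预设时间; d2_mm: 第二距离阈值。
    """
    cond1 = (contact_faces.get("first_side", 0) == 1
             and 1 <= contact_faces.get("second_side", 0) <= 4)  # 条件(1)
    cond2 = duration_s > t1_s and move_mm < d2_mm  # 条件(2)
    return cond1 and cond2
```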
在一些情况下，手机确定第一触摸操作是预设误触操作时，可能会存在对第一触摸操作的误判，影响防误触的准确性。为了提升防误触的准确性，手机可以持续判断上述识别出的第一触摸操作是否发生较大距离的移动。具体的，在手机识别出第一触摸操作是预设误触操作之后，不响应第一触摸操作之前，手机可判断上述第一触摸操作在第二预设时间内的移动距离是否大于第三距离阈值。其中，第二预设时间可以是从手机识别出第一触摸操作是预设误触操作开始，时长为第一预设时长的时间段。例如，第一预设时长可以为2s、3s、1s或0.5s等。第三距离阈值可以为7mm、5mm、3mm或者2mm等。其中，第三距离阈值与第二距离阈值可以相同，也可以不同。其中，第一触摸操作可包括一个或多个触摸操作。例如，上述第一触摸操作可包括第一接触面对应的触摸操作和x个第二接触面对应的触摸操作。
在一些情况下,手机可以确定第一触摸操作(即第一触摸操作中的所有触摸操作)在第二预设时间内的移动距离小于或等于第三距离阈值,则表示手机确定第一触摸操作是预设误触操作不是误判。如此,手机则可不响应该第一触摸操作。
在另一些情况下，手机可确定第一触摸操作中的至少一个触摸操作在第二预设时间内的移动距离大于第三距离阈值。如此，则表示手机对该至少一个触摸操作进行了误判。这种情况下，手机则可以对该至少一个触摸操作进行防误杀处理，即手机可响应该至少一个触摸操作，执行该至少一个触摸操作对应的事件。而对于第一触摸操作中、除该至少一个触摸操作之外的其他触摸操作，手机并未进行误判。如此，手机则可以不响应第一触摸操作中、除至少一个触摸操作之外的其他触摸操作。
本申请实施例中,手机识别出第一触摸操作是预设误触操作后,可以进一步判断是否对该第一触摸操作进行了误判。如此,可以提升手机识别预设误触操作的准确度,进而可以提升手机进行防误触处理的准确度。
在一些实施例中,从手机识别到第一触摸操作是预设误触操作开始的一段时间(如第三预设时间)后,手机可能会接收到用户对触摸屏的触摸操作(如第二触摸操作)。
可以理解，用户握持手机一段时间后，用户对触摸屏的第二触摸操作是用户触发手机执行相应事件的正常触摸操作(即非用户握持手机所产生的误触操作)的可能性较高。因此，如果手机在第三预设时间后接收到用户在第二侧弧度区域的第二触摸操作，手机可响应于该第二触摸操作，执行该第二触摸操作对应的事件。即手机可对该第二触摸操作进行防误杀处理。其中，所述第三预设时间是从所述电子设备检测到所述预设误触操作开始、时长为第二预设时长的时间段。例如，第二预设时长可以为2s、3s、1s或0.5s等。其中，第一预设时长与第二预设时长可以相同，也可以不同。
示例性的,本申请实施例这里结合图10,对本申请实施例中预设的防误触算法的算法逻辑进行介绍。如图10所示,HAL层可接收底层(如内核层)上报的触摸信息3;TP算法1可对触摸信息3进行处理,得到触摸信息1。然后,由预设的防误触算法对触摸信息1进行防误触处理。
具体的,预设的防误触算法可判断第一触摸操作在触摸屏的接触面是否包括在第一侧弧度区域的第一接触面(即执行901)。如果该第一触摸操作在触摸屏的接触面不包括在第一侧弧度区域的第一接触面,预设的防误触算法则可以直接向TP算法2发送触摸信息1。
如果该第一触摸操作在触摸屏的接触面包括在第一侧弧度区域的第一接触面,预设的防误触算法则可执行902,判断该第一触摸操作在触摸屏的接触面是否包括在第二侧弧度区域的第二接触面。如果该第一触摸操作在触摸屏的接触面不包括在第二侧弧度区域的第二接触面,预设的防误触算法则可以直接向TP算法2发送触摸信息1。
如果该第一触摸操作在触摸屏的接触面包括在第二侧弧度区域的第二接触面，预设的防误触算法则可执行903，判断第一触摸操作中第一侧弧度区域的触摸操作的持续时间是否大于第一预设时间，且移动距离是否小于第二距离阈值。如果第一侧弧度区域的触摸操作的持续时间小于或等于第一预设时间，或移动距离大于或等于第二距离阈值，预设的防误触算法则可以直接向TP算法2发送触摸信息1。
如果第一侧弧度区域的触摸操作的持续时间大于第一预设时间，且移动距离小于第二距离阈值，预设的防误触算法可以初步确定该第一触摸操作是预设误触操作，向TP算法2发送触摸信息时，可忽略该第一触摸操作的触摸信息(即执行1003)。
为了提升防误触的准确性,避免将部分触摸操作误判为预设误触操作,如图10所示,预设的防误触算法还可以对初步确定为预设误触操作的第一触摸操作进行防误杀处理。
具体的,预设的防误触算法可以判断第一触摸操作在第二预设时间内的移动距离是否大于第三距离阈值(即执行1001)。如果第一触摸操作中的部分触摸操作(如上述至少一个触摸操作)在第二预设时间内的移动距离大于第三距离阈值,则表示该至少一个触摸操作不是误触操作,该至少一个触摸操作对应的接触面或接触点不可忽略。此时,预设的防误触算法可向TP算法2发送包括该至少一个触摸操作的触摸信息的触摸信息5。
如果第一触摸操作中的部分触摸操作(如上述至少一个触摸操作)在第二预设时间内的移动距离小于或等于第三距离阈值,则表示该至少一个触摸操作是误触操作,该至少一个触摸操作对应的接触面可忽略。此时,预设的防误触算法可向TP算法2发送不包括该至少一个触摸操作的触摸信息的触摸信息2。
进一步的，预设的防误触算法还可以执行1002，对手机在第三预设时间内接收到的用户在第二侧弧度区域的新的触摸操作对应的接触面进行忽略。此时，预设的防误触算法可向TP算法2发送不包括该触摸操作的触摸信息的触摸信息2。而对于手机在第三预设时间后接收到的用户在第二侧弧度区域的新的触摸操作(即第二触摸操作)，预设的防误触算法执行1002时不忽略其对应的接触面。此时，预设的防误触算法可向TP算法2发送包括该第二触摸操作的触摸信息的触摸信息5。
本申请另一实施例提供一种电子设备(如上述手机),该电子设备包括:处理器、存储器、触摸屏和摄像头。该触摸屏是侧边有弧度的曲面屏。存储器、触摸屏和摄像头与处理器耦合,存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令,当处理器执行该计算机指令时,该电子设备执行上述方法实施例中电子设备(如手机)所执行的各个功能或者步骤。
在另一些实施例中,上述电子设备还包括一个或多个传感器。该一个或多个传感器至少包括陀螺仪传感器。该一个或多个传感器用于采集触摸屏的朝向的方向向量。触摸屏的朝向的方向向量用于计算触摸屏与水平面的夹角。
在另一些实施例中,电子设备还可以包括结构光摄像头模组,结构光摄像头模组包括光投射器、第一摄像头和第二摄像头,第一摄像头和第二摄像头之间的距离为第一长度。其中,结构光摄像头模组中各个器件的功能可以参考上述实施例中的相关描述,本申请实施例这里不予赘述。
在另一些实施例中,电子设备还可以包括距离传感器。该距离传感器用于获取电子设备与用户之间的距离。
本申请实施例还提供一种芯片系统，如图11所示，该芯片系统包括至少一个处理器1101和至少一个接口电路1102。处理器1101和接口电路1102可通过线路互联。例如，接口电路1102可用于从其它装置(例如电子设备的存储器)接收信号。又例如，接口电路1102可用于向其它装置(例如处理器1101或者电子设备的触摸屏)发送信号。示例性的，接口电路1102可读取存储器中存储的指令，并将该指令发送给处理器1101。当所述指令被处理器1101执行时，可使得电子设备执行上述实施例中的各个步骤。当然，该芯片系统还可以包含其他分立器件，本申请实施例对此不作具体限定。
本申请另一实施例提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得电子设备执行上述方法实施例中电子设备(如手机)所执行的各个功能或者步骤。
本申请另一实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中电子设备(如手机)所执行的各个功能或者步骤。
通过以上的实施方式的描述，所属领域的技术人员可以清楚地了解到，为描述的方便和简洁，仅以上述各功能模块的划分进行举例说明，实际应用中，可以根据需要而将上述功能分配由不同的功能模块完成，即将装置的内部结构划分成不同的功能模块，以完成以上描述的全部或者部分功能。上述描述的系统、装置和单元的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。
在本申请所提供的几个实施例中，应该理解到，所揭露的系统、装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述模块或单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外，在本申请各个实施例中的各功能单元可以集成在一个处理单元中，也可以是各个单元单独物理存在，也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现，也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)或处理器执行各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述，仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何在本申请揭露的技术范围内的变化或替换，都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以所述权利要求的保护范围为准。

Claims (23)

  1. 一种曲面屏的防误触方法,其特征在于,应用于电子设备,所述电子设备的触摸屏是侧边有弧度的曲面屏,所述方法包括:
    所述电子设备获取所述触摸屏与水平面的夹角;
    响应于所述触摸屏与水平面的夹角在第一预设角度范围内,所述电子设备启动摄像头;
    响应于所述摄像头采集到人脸图像,所述电子设备获取所述电子设备与用户之间的距离,以及所述用户的人面偏航度,所述人面偏航度是所述用户的面部朝向相对于第一连线的左右旋转角度,所述第一连线是所述摄像头与所述用户的头部的连线;
    响应于所述电子设备与所述用户之间的距离小于第一距离阈值,且所述人面偏航度在第二预设角度范围内,所述电子设备对用户在所述触摸屏的预设误触操作进行防误触处理;
    其中,用户执行所述预设误触操作时,用户的手与所述触摸屏的接触面为:在所述触摸屏第一侧弧度区域的第一接触面,以及在所述触摸屏第二侧弧度区域的x个第二接触面,1≤x≤4,x为正整数。
  2. 根据权利要求1所述的方法,其特征在于,所述第一接触面是所述电子设备被用户握持时,所述电子设备采集的所述触摸屏与用户的手的虎口的接触面;所述第二接触面是所述电子设备被用户握持时,所述电子设备采集的所述触摸屏与用户的手指的接触面。
  3. 根据权利要求1或2所述的方法,其特征在于,所述预设误触操作包括:所述电子设备被用户握持时,所述电子设备采集的、与所述第一侧弧度区域接触的持续时间大于第一预设时间,且在所述第一侧弧度区域的移动距离小于第二距离阈值的触摸操作。
  4. 根据权利要求1-3中任一项所述的方法,其特征在于,所述电子设备对用户在所述触摸屏的预设误触操作进行防误触处理,包括:
    所述电子设备接收用户对所述触摸屏的第一触摸操作;
    所述电子设备采用预设的防误触算法,识别出所述第一触摸操作是所述预设误触操作;
    所述电子设备不响应所述第一触摸操作。
  5. 根据权利要求4所述的方法,其特征在于,在所述电子设备采用预设的防误触算法,识别出用户对所述触摸屏的第一触摸操作是所述预设误触操作之后,所述电子设备不响应所述第一触摸操作之前,所述方法还包括:
    所述电子设备确定所述第一触摸操作在第二预设时间内的移动距离小于或等于第三距离阈值;所述第二预设时间是从所述电子设备识别出所述第一触摸操作是所述预设误触操作开始,时长为第一预设时长的时间段。
  6. 根据权利要求5所述的方法,其特征在于,所述第一触摸操作包括一个或多个触摸操作;所述方法还包括:
    所述电子设备确定所述第一触摸操作中的至少一个触摸操作在所述第二预设时间内的移动距离大于所述第三距离阈值,所述电子设备响应所述至少一个触摸操作,执行所述至少一个触摸操作对应的事件;
    所述电子设备不响应所述第一触摸操作中、除所述至少一个触摸操作之外的其他触摸操作。
  7. 根据权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    如果所述电子设备在第三预设时间后接收到用户在所述第二侧弧度区域的第二触摸操作,所述电子设备响应于所述第二触摸操作,执行所述第二触摸操作对应的事件;
    其中，所述第三预设时间是从所述电子设备识别出所述第一触摸操作是所述预设误触操作开始，时长为第二预设时长的时间段。
  8. 根据权利要求1-7中任一项所述的方法,其特征在于,所述第一预设角度范围包括:[-n°,n°]和[90°-m°,90°+m°]中的至少一个;其中,n的取值范围至少包括:(0,20),(0,15)或者(0,10)中的任一个;m的取值范围至少包括:(0,20),(0,15)或者(0,10)中的任一个;
    所述第二预设角度范围为[-k°,k°];其中,k的取值范围至少包括:(0,15),(0,10)或者(0,5)中的任一个。
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,所述电子设备获取所述触摸屏与水平面的夹角,包括:
    所述电子设备通过一个或多个传感器,获取所述触摸屏与水平面的夹角;
    其中,所述一个或多个传感器至少包括陀螺仪传感器。
  10. 根据权利要求1-9中任一项所述的方法,其特征在于,所述电子设备还包括结构光摄像头模组,所述结构光摄像头模组包括光投射器、第一摄像头和第二摄像头,所述第一摄像头和所述第二摄像头之间的距离为第一长度;
    所述响应于所述摄像头采集到人脸图像,所述电子设备获取所述电子设备与用户之间的距离,包括:
    响应于所述摄像头采集到人脸图像,所述电子设备通过所述光投射器发射光信息,通过所述第一摄像头采集所述人脸图像对应的用户的人脸的第一图像信息,通过所述第二摄像头采集所述人脸的第二图像信息,所述第一图像信息和所述第二图像信息包括所述人脸的特征;
    所述电子设备根据所述第一图像信息、所述第二图像信息、所述第一长度、以及所述第一摄像头的镜头焦距和所述第二摄像头的镜头焦距,计算所述人脸的深度信息;
    所述电子设备根据所述人脸的深度信息,计算所述电子设备与用户之间的距离。
  11. 一种电子设备,其特征在于,所述电子设备包括:处理器、存储器、触摸屏和摄像头,所述触摸屏是侧边有弧度的曲面屏;
    所述处理器,用于获取所述触摸屏与水平面的夹角;响应于所述触摸屏与水平面的夹角在第一预设角度范围内,所述电子设备启动摄像头;
    所述摄像头,用于采集图像;
    所述处理器,还用于响应于所述摄像头采集到人脸图像,获取所述电子设备与用户之间的距离,以及所述用户的人面偏航度,所述人面偏航度是所述用户的面部朝向相对于第一连线的左右旋转角度,所述第一连线是所述摄像头与所述用户的头部的连线;响应于所述电子设备与所述用户之间的距离小于第一距离阈值,且所述人面偏航度在第二预设角度范围内,对用户在所述触摸屏的预设误触操作进行防误触处理;
    其中,用户执行所述预设误触操作时,用户的手与所述触摸屏的接触面为:在所述触摸屏第一侧弧度区域的第一接触面,以及在所述触摸屏第二侧弧度区域的x个第二接触面,1≤x≤4,x为正整数。
  12. 根据权利要求11所述的电子设备,其特征在于,所述第一接触面是所述电子设备被用户握持时,所述电子设备采集的所述触摸屏与用户的手的虎口的接触面;所述第二接触面是所述电子设备被用户握持时,所述电子设备采集的所述触摸屏与用户的手指的接触面。
  13. 根据权利要求11或12所述的电子设备,其特征在于,所述预设误触操作包括:所述电子设备被用户握持时,所述电子设备采集的、与所述第一侧弧度区域接触的持续时间大于第一预设时间,且在所述第一侧弧度区域的移动距离小于第二距离阈值的触摸操作。
  14. 根据权利要求11-13中任一项所述的电子设备,其特征在于,所述处理器,用于对用户在所述触摸屏的预设误触操作进行防误触处理,包括:
    所述处理器,具体用于:
    接收用户对所述触摸屏的第一触摸操作;采用预设的防误触算法,识别出所述第一触摸操作是所述预设误触操作;不响应所述第一触摸操作。
  15. 根据权利要求14所述的电子设备,其特征在于,所述处理器,还用于在采用所述预设的防误触算法,识别出所述第一触摸操作是所述预设误触操作之后,不响应所述第一触摸操作之前,确定所述第一触摸操作在第二预设时间内的移动距离小于或等于第三距离阈值;
    其中,所述第二预设时间是从所述电子设备识别出所述第一触摸操作是所述预设误触操作开始,时长为第一预设时长的时间段。
  16. 根据权利要求15所述的电子设备,其特征在于,所述第一触摸操作包括一个或多个触摸操作;
    所述处理器,还用于确定所述第一触摸操作中的至少一个触摸操作在所述第二预设时间内的移动距离大于所述第三距离阈值,所述电子设备响应所述至少一个触摸操作,执行所述至少一个触摸操作对应的事件;不响应所述第一触摸操作中、除所述至少一个触摸操作之外的其他触摸操作。
  17. 根据权利要求11-16中任一项所述的电子设备,其特征在于,所述处理器,还用于如果在第三预设时间后接收到用户在所述第二侧弧度区域的第二触摸操作,响应于所述第二触摸操作,执行所述第二触摸操作对应的事件;
    其中,所述第三预设时间是从所述电子设备识别出所述第一触摸操作是所述预设误触操作开始,时长为第二预设时长的时间段。
  18. 根据权利要求11-17中任一项所述的电子设备,其特征在于,所述第一预设角度范围包括:[-n°,n°]和[90°-m°,90°+m°]中的至少一个;其中,n的取值范围至少包括:(0,20),(0,15)或者(0,10)中的任一个;m的取值范围至少包括:(0,20),(0,15)或者(0,10)中的任一个;
    所述第二预设角度范围为[-k°,k°];其中,k的取值范围至少包括:(0,15),(0,10)或者(0,5)中的任一个。
  19. 根据权利要求11-18中任一项所述的电子设备,其特征在于,所述电子设备还包括:一个或多个传感器,所述一个或多个传感器至少包括陀螺仪传感器;
    其中,所述处理器,用于获取所述触摸屏与水平面的夹角,包括:
    所述处理器,具体用于通过所述一个或多个传感器,获取所述触摸屏与水平面的夹角。
  20. 根据权利要求11-19中任一项所述的电子设备,其特征在于,所述电子设备还包括结构光摄像头模组,所述结构光摄像头模组包括光投射器、第一摄像头和第二摄像头,所述第一摄像头和所述第二摄像头之间的距离为第一长度;
    所述处理器,用于响应于所述摄像头采集到人脸图像,获取所述电子设备与用户之间的距离,包括:
    所述处理器，具体用于响应于所述摄像头采集到人脸图像，通过所述光投射器发射光信息，通过所述第一摄像头采集所述人脸图像对应的用户的人脸的第一图像信息，通过所述第二摄像头采集所述人脸的第二图像信息，所述第一图像信息和所述第二图像信息包括所述人脸的特征；根据所述第一图像信息、所述第二图像信息、所述第一长度、以及所述第一摄像头的镜头焦距和所述第二摄像头的镜头焦距，计算所述人脸的深度信息；根据所述人脸的深度信息，计算所述电子设备与用户之间的距离。
  21. 一种芯片系统，其特征在于，所述芯片系统应用于包括触摸屏的电子设备；所述芯片系统包括一个或多个接口电路和一个或多个处理器；所述接口电路和所述处理器通过线路互联；所述接口电路用于从所述电子设备的存储器接收信号，并向所述处理器发送所述信号，所述信号包括所述存储器中存储的计算机指令；当所述处理器执行所述计算机指令时，所述电子设备执行如权利要求1-10中任一项所述的方法。
  22. 一种计算机存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-10中任一项所述的方法。
  23. 一种计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求1-10中任一项所述的方法。
PCT/CN2020/098446 2019-06-28 2020-06-28 一种曲面屏的防误触方法及电子设备 WO2020259674A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20833347.6A EP3961358B1 (en) 2019-06-28 2020-06-28 False touch prevention method for curved screen, and eletronic device
US17/562,397 US11782554B2 (en) 2019-06-28 2021-12-27 Anti-mistouch method of curved screen and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910578705.4 2019-06-28
CN201910578705.4A CN110456938B (zh) 2019-06-28 2019-06-28 一种曲面屏的防误触方法及电子设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/562,397 Continuation US11782554B2 (en) 2019-06-28 2021-12-27 Anti-mistouch method of curved screen and electronic device

Publications (1)

Publication Number Publication Date
WO2020259674A1 true WO2020259674A1 (zh) 2020-12-30

Family

ID=68481747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098446 WO2020259674A1 (zh) 2019-06-28 2020-06-28 一种曲面屏的防误触方法及电子设备

Country Status (4)

Country Link
US (1) US11782554B2 (zh)
EP (1) EP3961358B1 (zh)
CN (1) CN110456938B (zh)
WO (1) WO2020259674A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113986047A (zh) * 2021-12-23 2022-01-28 荣耀终端有限公司 识别误触信号的方法和装置
US11531426B1 (en) 2021-10-29 2022-12-20 Beijing Xiaomi Mobile Software Co., Ltd. Edge anti-false-touch method and apparatus, electronic device and computer-readable storage medium

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN110456938B (zh) * 2019-06-28 2021-01-29 华为技术有限公司 一种曲面屏的防误触方法及电子设备
CN111064847B (zh) * 2019-12-14 2021-05-07 惠州Tcl移动通信有限公司 一种防误触方法、装置、存储介质及电子设备
CN111147667A (zh) * 2019-12-25 2020-05-12 华为技术有限公司 一种熄屏控制方法及电子设备
CN111309179A (zh) * 2020-02-10 2020-06-19 北京小米移动软件有限公司 触控屏控制方法及装置、终端及存储介质
CN112286391B (zh) * 2020-10-29 2024-05-24 深圳市艾酷通信软件有限公司 显示方法及装置
CN113676574B (zh) * 2021-08-12 2024-02-27 维沃移动通信有限公司 电子设备
CN114554078B (zh) * 2022-01-10 2023-01-31 荣耀终端有限公司 一种摄像头调用方法和电子设备
CN114879894B (zh) * 2022-04-20 2024-06-11 华为技术有限公司 功能启动方法、用户界面及电子设备
US11682368B1 (en) * 2022-06-24 2023-06-20 Arm Limited Method of operating a mobile device
CN115639905B (zh) * 2022-08-25 2023-10-27 荣耀终端有限公司 一种手势控制方法及电子设备

Citations (6)

Publication number Priority date Publication date Assignee Title
JP2014102557A (ja) * 2012-11-16 2014-06-05 Sharp Corp 携帯端末
CN103970277A (zh) * 2014-05-27 2014-08-06 福建天晴数码有限公司 基于手持电子设备的健康阅读监测方法及其装置
CN104182154A (zh) * 2014-08-18 2014-12-03 广东欧珀移动通信有限公司 握持触摸屏的避免误操作的方法、装置及移动设备
CN104541231A (zh) * 2014-08-29 2015-04-22 华为技术有限公司 防误触触摸屏的方法和装置
CN106708263A (zh) * 2016-12-16 2017-05-24 广东欧珀移动通信有限公司 一种触摸屏的防误触方法、装置及移动终端
CN110456938A (zh) * 2019-06-28 2019-11-15 华为技术有限公司 一种曲面屏的防误触方法及电子设备

Family Cites Families (21)

Publication number Priority date Publication date Assignee Title
US8723824B2 (en) * 2011-09-27 2014-05-13 Apple Inc. Electronic devices with sidewall displays
CN103116403A (zh) * 2013-02-16 2013-05-22 广东欧珀移动通信有限公司 一种屏幕切换方法及移动智能终端
US9047002B2 (en) * 2013-03-15 2015-06-02 Elwha Llc Systems and methods for parallax compensation
CN104423656B (zh) 2013-08-20 2018-08-17 南京中兴新软件有限责任公司 误触摸识别方法和装置
CN104007932B (zh) * 2014-06-17 2017-12-29 华为技术有限公司 一种触摸点识别方法及装置
CN104238948B (zh) * 2014-09-29 2018-01-16 广东欧珀移动通信有限公司 一种智能手表点亮屏幕的方法及智能手表
CN105243345B (zh) * 2015-10-30 2018-08-17 维沃移动通信有限公司 一种电子设备防误触的方法及电子设备
CN105700709B (zh) * 2016-02-25 2019-03-01 努比亚技术有限公司 一种移动终端及控制移动终端不可触控区域的方法
CN105892920B (zh) * 2016-03-31 2020-10-27 内蒙古中森智能终端技术研发有限公司 显示控制方法和装置
CN106020698A (zh) * 2016-05-25 2016-10-12 努比亚技术有限公司 移动终端及其单手模式的实现方法
CN106572299B (zh) * 2016-10-31 2020-02-28 北京小米移动软件有限公司 摄像头开启方法及装置
CN106775404A (zh) * 2016-12-16 2017-05-31 广东欧珀移动通信有限公司 一种显示界面的防误触方法、装置及移动终端
CN106681638B (zh) * 2016-12-16 2019-07-23 Oppo广东移动通信有限公司 一种触摸屏控制方法、装置及移动终端
CN107317918B (zh) * 2017-05-26 2020-05-08 Oppo广东移动通信有限公司 参数设置方法及相关产品
CN107450778B (zh) * 2017-09-14 2021-03-19 维沃移动通信有限公司 一种误触识别方法及移动终端
CN107613119B (zh) * 2017-09-15 2021-06-15 努比亚技术有限公司 一种防止终端误操作的方法、设备及计算机存储介质
CN108200340A (zh) * 2018-01-12 2018-06-22 深圳奥比中光科技有限公司 能够检测眼睛视线的拍照装置及拍照方法
AU2019100486B4 (en) * 2018-05-07 2019-08-01 Apple Inc. Devices and methods for measuring using augmented reality
CN109635539B (zh) * 2018-10-30 2022-10-14 荣耀终端有限公司 一种人脸识别方法及电子设备
CN109782944A (zh) * 2018-12-11 2019-05-21 华为技术有限公司 一种触摸屏的响应方法及电子设备
CN109710080B (zh) * 2019-01-25 2021-12-03 华为技术有限公司 一种屏幕控制和语音控制方法及电子设备


Cited By (4)

Publication number Priority date Publication date Assignee Title
US11531426B1 (en) 2021-10-29 2022-12-20 Beijing Xiaomi Mobile Software Co., Ltd. Edge anti-false-touch method and apparatus, electronic device and computer-readable storage medium
EP4174620A1 (en) * 2021-10-29 2023-05-03 Beijing Xiaomi Mobile Software Co., Ltd. Edge anti-false-touch method, electronic device and computer-readable storage medium
CN113986047A (zh) * 2021-12-23 2022-01-28 荣耀终端有限公司 识别误触信号的方法和装置
CN113986047B (zh) * 2021-12-23 2023-10-27 荣耀终端有限公司 识别误触信号的方法和装置

Also Published As

Publication number Publication date
EP3961358A1 (en) 2022-03-02
EP3961358B1 (en) 2023-10-25
CN110456938B (zh) 2021-01-29
CN110456938A (zh) 2019-11-15
US20220121316A1 (en) 2022-04-21
US11782554B2 (en) 2023-10-10
EP3961358A4 (en) 2022-06-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20833347

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020833347

Country of ref document: EP

Effective date: 20211125

NENP Non-entry into the national phase

Ref country code: DE