WO2019169644A1 - Method and device for inputting signal - Google Patents

Method and device for inputting signal

Info

Publication number
WO2019169644A1
WO2019169644A1 (PCT/CN2018/078642)
Authority
WO
WIPO (PCT)
Prior art keywords
user
limb
operation instruction
hand
input signal
Prior art date
Application number
PCT/CN2018/078642
Other languages
French (fr)
Chinese (zh)
Inventor
宋卿
葛凯麟
Original Assignee
彼乐智慧科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 彼乐智慧科技(北京)有限公司 filed Critical 彼乐智慧科技(北京)有限公司
Priority to CN201880091030.4A priority Critical patent/CN112567319A/en
Priority to PCT/CN2018/078642 priority patent/WO2019169644A1/en
Publication of WO2019169644A1 publication Critical patent/WO2019169644A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials

Definitions

  • the present invention belongs to the field of information technology, and in particular, to a method and device for signal input.
  • in traditional human-computer interaction, a user inputs signals mainly by touch, buttons, or voice control.
  • for a conventional numeric keypad, the user has to type keys by hand or press virtual buttons on a touch screen.
  • for an input device such as a dance mat, the user has to press the corresponding button on the mat with a foot.
  • the present invention provides a method and a device for signal input, which solve the problem that human-computer interaction signal input in the prior art is limited to a single mode and is inefficient.
  • the present invention provides an apparatus for signal input, including:
  • one or more buttons;
  • one or more sensors for acquiring images;
  • one or more processors configured to:
  • optionally, when the limb is the user's hand, the processor is configured to perform feature recognition on the limb image, including:
  • locating the hand and identifying the positional relationship between the hand position and the one or more buttons;
  • when the hand is above the one or more buttons, recognizing the hand shape to confirm whether it is the left or the right hand, and/or identifying the specific finger, the hand movement trajectory, or the gesture;
  • a corresponding operation instruction is then generated according to the correspondence between the recognition result, the input signal, and the operation instructions.
  • optionally, the signal input device further includes a pressure sensor configured to acquire the force with which the user presses the one or more buttons;
  • the processor is then further configured to: acquire the pressing force collected by the pressure sensor, and generate the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
  • optionally, the signal input device further includes a fingerprint sensor configured to collect the user's fingerprint and identify the user's identity;
  • the processor is then further configured to: acquire the user identity information, and generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction;
  • the image sensor is further configured to: collect face information
  • the processor is further configured to: identify the user's identity according to the collected face information;
  • the corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • optionally, the signal input device further includes a laser emitter for continuously emitting laser light; the laser emitter is at an angle to the image sensor, and the image sensor is a laser sensor. When the laser emitter emits laser light onto the user's limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals;
  • the processor is further configured to:
  • when the laser emitter emits linear beams in different directions and the image sensor receives reflected light from those directions, the processor receives the plurality of reflected-light response signals and uses triangulation, based on those signals, to calculate the distances between the image sensor and different sections of the user's limb;
  • the corresponding operation instruction is generated by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • optionally, the feature recognition on the limb image further includes: establishing a correspondence between color blocks and limb features, detecting a specific color block on the user's limb, determining the limb feature corresponding to that color block according to its detected RGB value, and outputting the recognition result.
  • the processor is further configured to: when the user operates while wearing a glove equipped with a sensing chip, receive the sensing signal sent by the glove;
  • the embodiment of the invention further provides a method for signal input, comprising:
  • the signal input device captures the movement trajectory of the user's limb and collects the limb image, the user's limb including the user's four limbs;
  • optionally, when the limb is the user's hand, performing feature recognition on the limb image includes:
  • locating the hand and identifying the positional relationship between the hand position and the one or more buttons;
  • when the hand is above the one or more buttons, recognizing the hand shape to confirm whether it is the left or the right hand, and/or identifying the specific finger, the hand movement trajectory, or the gesture;
  • a corresponding operation instruction is then generated according to the correspondence between the recognition result, the input signal, and the operation instructions.
  • optionally, the method further includes: acquiring the force with which the user presses the button, and generating the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
  • optionally, the method further includes: performing face recognition or fingerprint recognition to acquire user identity information, and generating the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • optionally, the method further includes: emitting laser light toward the user's limb, measuring distances by triangulation, and generating the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • optionally, the feature recognition on the limb image further includes: establishing a correspondence between color blocks and limb features, detecting a specific color block on the user's limb, determining the limb feature corresponding to that color block according to its detected RGB value, and outputting the recognition result.
  • optionally, the method further includes: receiving a sensing signal from a glove equipped with a sensing chip when the user operates while wearing it, and generating the operation instruction by combining the sensing signal, the recognition result, and the input signal.
  • in the embodiments of the present invention, the signal input device collects the user's limb information and the key input signal synchronously or asynchronously, and outputs a corresponding operation instruction according to the correspondence between the recognized limb information, the key input signal, and the operation instructions.
  • with the technical solution provided by the present invention, which hand or foot the user uses, which finger is used, and which gesture is used to press a button can all be distinguished at a finer granularity, and different limb information pressing the same button produces different response signals.
  • moreover, different limb information can be combined with different buttons to define a large number of shortcut operations. That is, the present invention defines a completely new way of interaction that enables operation instructions to be triggered quickly. Compared with the prior art, the invention improves signal input efficiency, enriches the signal input modes, and improves the user experience.
  • FIG. 1 is a schematic structural diagram of a signal input device in an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of identifying left and right hand pressing buttons in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of identifying a specific finger pressing button in the embodiment of the present invention.
  • FIG. 4 is a schematic diagram of gesture recognition in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of performing three-dimensional modeling after two-dimensional ranging in an embodiment of the present invention.
  • FIG. 6 is a flow chart of a signal input method in an embodiment of the present invention.
  • the present invention provides a device 11 for signal input, the device comprising:
  • the sensor 102 is controlled to capture a trajectory of a user's limb movement, and the limb image is acquired.
  • the user's limb includes the user's four limbs; that is, the user's hands or feet.
  • the button 101 can be a physical button or a virtual button.
  • the number of buttons is not limited.
  • the button may be one or more buttons on a conventional numeric keypad, a single stand-alone button, or one or more virtual buttons displayed on a touch screen.
  • the button may also be all or part of a touch screen or display, so that the user is considered to have touched the button no matter which position on the display is pressed.
  • for example, in conventional drawing software, the user draws with a finger on the drawing display area of the touch screen; at this time, the user is considered to have pressed the drawing button, and the position of the touch point after the button is pressed is acquired.
  • in this embodiment of the present invention, the sensor 102 may be a visible-light or non-visible-light image sensor, such as a CCD or CMOS image sensor, or an infrared/ultraviolet sensor for receiving infrared/ultraviolet light.
  • the processor 103 is configured to perform feature recognition on the limb image.
  • positioning the hand and identifying the positional relationship between the hand position and the one or more buttons: for example, the hand may be directly above the button, or to the left or right of the button.
  • the hand can be located with conventional image processing methods, for example an algorithm based on binarization and contour recognition that extracts the shape of the hand in an image and its position in the photo. Because the image sensor can be fixed in the same place and shoot periodically, the background of the photo (everything other than the hand can be defined as background) is constant; the only change is the position of the hand in the photo. The hand can therefore be located, and its movement trajectory identified, from the change of the hand across different photos.
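  • As an illustration only (not code from the patent), the fixed-background localization step described above might look like the following sketch using a generic image-processing library; the function names, threshold, and area values are assumptions.

```python
import cv2

def locate_hand(background_gray, frame_gray, min_area=1500):
    """Locate the hand in a frame from a fixed, periodically shooting camera."""
    # Everything except the hand is assumed static, so differencing against
    # the stored background image isolates the hand region.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)      # binarization
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)        # contour recognition
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < min_area:                           # ignore noise blobs
        return None
    m = cv2.moments(hand)
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])      # hand centroid (x, y)
```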
  • the hand shape is then recognized to confirm whether the hand is the left hand or the right hand. For example, when the hand is above the button, the shape of the hand is recognized to distinguish whether the user is about to press, or has pressed, the button with the left hand or the right hand; combined with the signal that the button is pressed, a different operation-command response is produced depending on whether the user presses with the left or the right hand.
  • the operation command can be the output of a piece of text, sound, or an image, or it can be a command of a certain program, and it can be customized by the user or preset by the signal input device. For example, when the user presses the button with the left hand a piece of text is output, and when the user presses it with the right hand a piece of sound is output.
  • FIG. 2 is a schematic diagram of recognizing the left and right hands. As shown in FIG. 2, when the user prepares to press the button 101 on the keyboard 100, the image sensor 102 captures the hand in real time and distinguishes, before the button is pressed, whether it is the left hand or the right hand that is operating.
  • the signal input device may preset a correspondence table in which different fingers and different buttons are arranged and combined, each combination outputting a different operation instruction.
  • for example, the index finger and the thumb together with buttons A and B can form 7 different states: no press, the index finger alone pressing A, the index finger alone pressing B, the thumb alone pressing A, the thumb alone pressing B, the index finger pressing A while the thumb presses B, and the index finger pressing B while the thumb presses A.
  • these correspond to 6 different operation commands (no command is issued when neither A nor B is pressed), and each operation command can be user-defined or preset in advance.
  • furthermore, the thumb of the left hand and the thumb of the right hand pressing the same button can also trigger different operation responses. That is, left/right hand, index finger/thumb, and buttons A and B can be combined into a more complex correspondence table to output more complex operation responses.
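  • Purely as an illustration of such a correspondence table (the entries below are invented, not taken from the patent), the mapping from recognized limb information plus pressed button to an operation command could be a simple lookup:

```python
# (hand, finger, button) -> operation command; all entries are illustrative only.
COMMAND_TABLE = {
    ("left",  "index", "A"): "output_text_snippet",
    ("left",  "thumb", "A"): "play_sound_clip",
    ("right", "index", "A"): "open_drawing_tool",
    ("right", "index", "B"): "undo",
    ("right", "thumb", "B"): "zoom_in",
}

def dispatch(hand, finger, button):
    """Combine the feature-recognition result with the key input signal."""
    return COMMAND_TABLE.get((hand, finger, button))  # None -> no shortcut defined

print(dispatch("right", "index", "A"))  # -> "open_drawing_tool"
```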
  • FIG. 3 shows a specific example of a user typing on a keyboard.
  • the image sensor 102 captures the hand movement trajectory and the hand image in real time, thereby distinguishing which finger the user uses when pressing the keyboard, and different response commands are output and displayed on the display 105.
  • the embodiment of the present invention can also distinguish the hand movement trajectory: as mentioned above, moving to a position above the button from top to bottom, from bottom to top, or from diagonally above toward diagonally below, and so on; the operation instructions corresponding to different movement directions can also differ.
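  • A minimal, assumed sketch of how the movement direction toward a button might be classified from two successive hand positions (image coordinates, y pointing down) is:

```python
def movement_direction(prev_xy, curr_xy, dead_zone=5):
    """Coarsely classify the hand's movement direction between two frames."""
    dx, dy = curr_xy[0] - prev_xy[0], curr_xy[1] - prev_xy[1]
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "stationary"
    if abs(dx) >= dead_zone and abs(dy) >= dead_zone:
        return "diagonal"                                   # e.g. approaching obliquely
    if abs(dy) >= abs(dx):
        return "top_to_bottom" if dy > 0 else "bottom_to_top"
    return "left_to_right" if dx > 0 else "right_to_left"
```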
  • the gesture of the hand is recognized; in addition to the above-mentioned left and right hand recognition, finger recognition, and hand movement trajectory recognition, the embodiment of the present invention can also implement gesture recognition.
  • gesture recognition can be similar to Apple's multi-touch technology, with interaction methods such as pinching (zoom-out/zoom-in instructions, as shown in FIG. 4) and multi-finger rotation (picture-rotation instructions).
  • unlike that technology, the present invention does not need to capture multi-point movement tracks on a touch screen; it captures multiple frames and identifies the hand shape and its change across those frames, thereby determining the user's current gesture.
  • gesture recognition here is a comprehensive technique that combines finger recognition with finger-movement-trajectory recognition.
  • gesture recognition can be implemented with existing machine-learning algorithms and is not described further here.
  • the processor is configured to generate an operation instruction corresponding to the combination of the recognition result and the input signal, specifically by: establishing a correspondence between operation instructions and the identified positional relationship between the hand and the one or more buttons, the hand shape, and the input signal;
  • and generating the corresponding operation instruction according to that correspondence.
  • the above description takes the hand as an example; when the limb is the user's foot, a typical button is the button on a dance mat.
  • for such a button, the foot can be located through the image sensor to distinguish whether the user is stepping on a button with the left foot or the right foot.
  • the movement trajectory of the foot when the button is pressed can also be determined, for example from top to bottom, from left to right, or diagonally, and the operation instructions differ for different directions.
  • the new interaction method defined by the present invention has a wide range of uses.
  • in the game field, a game controller usually has only a few buttons; different gesture/finger combinations pressed on different buttons provide different shortcut keys for game characters, giving the game high playability and a good user experience.
  • in the education field, when the user uses different fingers/gestures to click/press/touch different buttons, different teaching contents or teaching effects can be triggered. For example, in drawing, the color and thickness of the drawn lines can differ depending on whether the user paints on the LCD screen with the index finger or with the thumb.
  • optionally, the signal input device further includes a pressure sensor configured to acquire the force with which the user presses the one or more buttons;
  • the processor is then further configured to: acquire the pressing force collected by the pressure sensor, and generate the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
  • application-grade pressure sensors are already widely available on the market; in this embodiment a pressure sensor can be built in (for example, placed inside the button) to collect the force with which the user presses the key. According to different force thresholds, the force is divided into different levels, such as low, medium, and high, and each level can correspond to a different operation instruction, similar to Apple's 3D Touch.
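  • A minimal sketch of the threshold split into force levels is shown below; the threshold values and units are assumptions, not values given in the patent.

```python
def force_level(pressure_n, low_max=1.0, medium_max=3.0):
    """Map a raw pressure reading (e.g. in newtons) to a discrete force level."""
    if pressure_n <= low_max:
        return "low"
    if pressure_n <= medium_max:
        return "medium"
    return "high"
```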
  • the signal input device further includes a fingerprint sensor, configured to collect a user fingerprint and identify the user identity;
  • the processor is then further configured to: acquire the user identity information, and generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • the button of the invention may be coupled with the pressure sensor and/or the fingerprint sensor; the fingerprint sensor can likewise be built into the button.
  • when the user presses the button, the fingerprint is recognized automatically, so the device can determine which user is operating, and the corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • the image sensor is further configured to: collect face information
  • the processor is further configured to: identify the user's identity according to the collected face information;
  • the corresponding operation instruction is generated by combining the user identity information, the input signal, and the corresponding relationship between the recognition result and the operation instruction.
  • the difference from the fingerprint scheme is that the latter collects the user identity by means of face recognition through the image sensor 102. Face recognition belongs to the prior art, and its specific implementation is not repeated here.
  • the face-recognition scheme can be applied to voting, for example elections, entertainment programs, or other application scenarios involving a public vote; current voting systems may suffer from malicious repeat voting and missed votes.
  • by combining face recognition with button-press voting, it is possible to identify which user has pressed the current button, so the number of votes is accurately matched to each user, which facilitates statistics.
  • optionally, the signal input device further includes a laser emitter for continuously emitting laser light; the laser emitter is at an angle to the image sensor, and the image sensor is a laser sensor. When the laser emitter emits laser light onto the user's limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals.
  • the laser emitter can emit dot (lattice) light or a linear beam, for example by expanding the dot light into one or more linear beams through a built-in beam expander.
  • a linear beam yields more data points than dot light, so the ranging is more accurate; the embodiment of the present invention therefore preferably uses a linear beam.
  • the processor is further configured to:
  • when the image sensor receives reflected light in the same direction, the processor receives the one reflected-light response signal and uses triangulation, based on that signal, to calculate the distance between the image sensor and a section of the user's limb; the corresponding operation instruction is generated by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • this scheme uses a two-dimensional ranging technique to measure the distance between the image sensor and the user's limb.
  • the principle of two-dimensional ranging is to send a dot beam or a linear beam to the user's limb through the laser, receive the light reflected from the user's limb through an image sensor (for example, an infrared sensor), and use triangulation to calculate the current distance between the limb and the image sensor; based on that distance, a relationship between different distances and different operation commands can be defined.
  • triangulation is a commonly used measurement method in the field of optical ranging. The method is as follows: by calculating the position of the centroid of the reflected-light region on the sensor, together with the known relative angle and spacing of the laser emitting device and the image sensor, the distance from the target to the image sensor can be estimated.
  • here the centroid position is taken along the column direction of the sensor and z is the measured distance; from the triangulation relation, the measured distance depends only on the column-direction position of the centroid and is independent of the row.
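  • For reference only, a generic textbook form of such a single-point laser-triangulation relation (not a formula reproduced from the patent) is given below, where b is the baseline between the laser emitter and the image sensor, f is the focal length of the sensor optics, and x is the column-direction position of the centroid of the reflected spot on the sensor plane:

$$ z \;\approx\; \frac{b\,f}{x} $$

  • Consistent with the statement above, the measured distance z depends only on the column-direction centroid position x and not on the row.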
  • when the laser emitter emits linear beams in different directions and the image sensor receives reflected light from those directions, the processor receives the plurality of reflected-light response signals and uses triangulation to calculate the distances between the image sensor and different sections of the user's limb;
  • the corresponding operation instruction is generated by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • the above solution is an optical three-dimensional ranging technology. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles through a rotating shaft, so that the image sensor collects reflected light in different directions.
  • triangulation can then be used to measure the three-dimensional distance of different sections of the limb, and the three-dimensional data of the different sections can be superimposed in three-dimensional space to complete the three-dimensional modeling, as shown in FIG. 5.
  • the image sensor receives the different reflected beams, images them on the sensor panel, and generates reflected-light response signals; a three-dimensional reconstruction is obtained from the different reflected beams, thereby obtaining richer and more precise limb information.
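  • As a rough, assumed sketch of the slice-stacking idea (not the patent's own algorithm), per-slice triangulation results could be converted into a point cloud with the image sensor at the origin:

```python
import math

def slices_to_points(slice_profiles):
    """Stack per-slice ranging results into a 3-D point cloud.

    slice_profiles: list of (emit_angle_rad, [(column_angle_rad, distance), ...]),
                    one entry per rotating-shaft position of the laser emitter.
    """
    points = []
    for emit_angle, profile in slice_profiles:
        for col_angle, dist in profile:
            # Direction within the slice, then the slice's own tilt about the shaft.
            x = dist * math.cos(emit_angle) * math.sin(col_angle)
            y = dist * math.sin(emit_angle)
            z = dist * math.cos(emit_angle) * math.cos(col_angle)
            points.append((x, y, z))
    return points
```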
  • optionally, feature recognition on the limb image can also proceed as follows: a correspondence between color blocks and limb features is established; for example, one color block represents the user's thumb and another color block represents the user's index finger, and so on.
  • a specific color block on the user's limb is detected, the limb feature corresponding to that color block is determined from its RGB value, and the recognition result is output.
  • this adds recognition parameters that speed up recognition. For example, when a user applies nail polish of a particular color, or wears gloves of particular colors (or with different color blocks), the processor locates and tracks that particular color, determines the RGB value of the color, and, according to the correspondence between RGB values and the user's limb features, determines which limb feature the color block represents, thereby identifying the user's limb features more quickly and efficiently.
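  • The following sketch, assuming an OpenCV-style pipeline, illustrates how such a color-block lookup might be implemented; the particular colors, tolerances, and finger assignments are invented for the example.

```python
import cv2
import numpy as np

# Illustrative mapping from a nominal BGR color block to the limb feature it marks.
COLOR_TO_FEATURE = {
    (0, 0, 200): "thumb",         # red-ish block worn on the thumb
    (200, 0, 0): "index_finger",  # blue-ish block worn on the index finger
}

def detect_color_blocks(frame_bgr, tolerance=40, min_area=200):
    """Return {limb_feature: (x, y) centroid} for each tracked color block."""
    found = {}
    for bgr, feature in COLOR_TO_FEATURE.items():
        lower = np.clip(np.array(bgr) - tolerance, 0, 255).astype(np.uint8)
        upper = np.clip(np.array(bgr) + tolerance, 0, 255).astype(np.uint8)
        mask = cv2.inRange(frame_bgr, lower, upper)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        block = max(contours, key=cv2.contourArea)
        if cv2.contourArea(block) < min_area:
            continue
        m = cv2.moments(block)
        found[feature] = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
    return found
```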
  • optionally, the processor is further configured to: when the user operates while wearing a glove equipped with a sensing chip, receive the sensing signal sent by the glove; and, based on the sensing signal, the feature-recognition result of the limb image, and the input signal, generate and output the operation instruction corresponding to that combination.
  • with the sensing signal from the glove, the specific operating finger/gesture of the user can be determined more quickly and conveniently; for example, different fingers can carry different sensing chips, such as NFC (near-field communication) chips.
  • the input signal of the button can also be detected according to the glove sensing signal, and, combined with the image recognition, the recognition result is more accurate and more robust.
  • in the embodiment of the present invention, the corresponding operation instruction is determined by combining image acquisition with button-press detection, which solves the problem that human-computer interaction signal input in the prior art is limited to a single mode and is inefficient.
  • with the technical solution provided by the present invention, which hand or foot the user uses, which finger is used, and which gesture is used to press a button can all be distinguished at a finer granularity, and different limb information pressing the same button produces different response signals.
  • moreover, different limb information can be combined with different buttons to define a large number of shortcut operations. That is, the present invention defines a completely new way of interaction that enables operation instructions to be triggered quickly. Compared with the prior art, the invention improves signal input efficiency, enriches the signal input modes, and improves the user experience.
  • An embodiment of the present invention provides a method for inputting a signal. As shown in FIG. 6, the method includes:
  • S201: the signal input device captures the movement trajectory of the user's limb and collects the limb image, where the user's limb includes the user's four limbs;
  • S202: feature recognition is performed on the limb image, and the recognition result is obtained;
  • S203: an input signal is received after the user presses the one or more buttons;
  • the recognition result is then combined with the input signal, and an operation instruction corresponding to the combination is generated and output.
  • for the buttons and sensors, refer to the examples described in Embodiment 1; the details are not repeated here.
  • steps S202 and S203 are not limited in order: the button-press signal may be received first and the limb image recognized afterwards, or vice versa; the final processing result of the embodiment of the present invention is not affected.
  • when the limb is the user's hand, the feature recognition performed on the limb image in S202 may be as follows:
  • positioning the hand and identifying the positional relationship between the hand position and the one or more buttons: for example, the hand may be directly above the button, or to the left or right of the button.
  • the hand can be located with conventional image processing methods, for example an algorithm based on binarization and contour recognition that extracts the shape of the hand in an image and its position in the photo. Because the image sensor can be fixed in the same place and shoot periodically, the background of the photo (everything other than the hand can be defined as background) is constant; the only change is the position of the hand in the photo. The hand can therefore be located, and its movement trajectory identified, from the change of the hand across different photos.
  • the hand shape is then recognized to confirm whether the hand is the left hand or the right hand. For example, when the hand is above the button, the shape of the hand is recognized to distinguish whether the user is about to press, or has pressed, the button with the left hand or the right hand; combined with the signal that the button is pressed, a different operation-command response is produced depending on whether the user presses with the left or the right hand.
  • the operation command can be the output of a piece of text, sound, or an image, or it can be a command of a certain program, and it can be customized by the user or preset by the signal input device. For example, when the user presses the button with the left hand a piece of text is output, and when the user presses it with the right hand a piece of sound is output.
  • the signal input device may preset a correspondence table in which different fingers and different buttons are arranged and combined, each combination outputting a different operation instruction.
  • for example, the index finger and the thumb together with buttons A and B can form 7 different states: no press, the index finger alone pressing A, the index finger alone pressing B, the thumb alone pressing A, the thumb alone pressing B, the index finger pressing A while the thumb presses B, and the index finger pressing B while the thumb presses A.
  • these correspond to 6 different operation commands (no command is issued when neither A nor B is pressed), and each operation command can be user-defined or preset in advance.
  • furthermore, the thumb of the left hand and the thumb of the right hand pressing the same button can also trigger different operation responses. That is, left/right hand, index finger/thumb, and buttons A and B can be combined into a more complex correspondence table to output more complex operation responses.
  • the embodiment of the present invention can also distinguish the hand movement trajectory: as mentioned above, moving to a position above the button from top to bottom, from bottom to top, or from diagonally above toward diagonally below, and so on; the operation instructions corresponding to different movement directions can also differ.
  • the gesture of the hand is recognized; in addition to the above-mentioned left and right hand recognition, finger recognition, and hand movement trajectory recognition, the embodiment of the present invention can also implement gesture recognition.
  • gesture recognition can be similar to Apple's multi-touch technology, with interaction methods such as pinching (zoom-out/zoom-in instructions) and multi-finger rotation (picture-rotation instructions).
  • unlike that technology, the present invention does not need to capture multi-point movement tracks on a touch screen; it captures multiple frames and identifies the hand shape and its change across those frames, thereby determining the user's current gesture.
  • gesture recognition here is a comprehensive technique that combines finger recognition with finger-movement-trajectory recognition.
  • gesture recognition can be implemented with existing machine-learning algorithms and is not described further here.
  • the operation instruction corresponding to the combination of the recognition result and the input signal is generated specifically by: establishing a correspondence between operation instructions and the identified positional relationship between the hand and the one or more buttons, the hand shape, and the input signal;
  • and generating the corresponding operation instruction according to that correspondence.
  • the above description takes the hand as an example; when the limb is the user's foot, a typical button is the button on a dance mat.
  • for such a button, the foot can be located through the image sensor to distinguish whether the user is stepping on a button with the left foot or the right foot.
  • the movement trajectory of the foot when the button is pressed can also be determined, for example from top to bottom, from left to right, or diagonally, and the operation instructions differ for different directions.
  • the new interaction method defined by the present invention has a wide range of uses.
  • in the game field, a game controller usually has only a few buttons; different gesture/finger combinations pressed on different buttons provide different shortcut keys for game characters, giving the game high playability and a good user experience.
  • in the education field, when the user uses different fingers/gestures to click/press/touch different buttons, different teaching contents or teaching effects can be triggered. For example, in drawing, the color and thickness of the drawn lines can differ depending on whether the user paints on the LCD screen with the index finger or with the thumb.
  • the embodiment of the present invention may further include: acquiring the force with which the user presses the one or more buttons, and generating the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
  • the force can be divided into different levels according to different force thresholds, and each level can correspond to a different operation instruction, similar to Apple's 3D Touch.
  • the embodiment of the present invention may further include: collecting the user's fingerprint and identifying the user's identity; acquiring the user identity information and generating the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction. When the user presses the button, the fingerprint is recognized automatically, so the device determines which user is operating, and the corresponding operation instruction is generated accordingly.
  • alternatively, face information is collected and the user's identity is identified from it; the corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • the face-recognition scheme can be applied to voting, for example elections, entertainment programs, or other application scenarios involving a public vote; current voting systems may suffer from malicious repeat voting and missed votes.
  • by combining face recognition with button-press voting, it is possible to identify which user has pressed the current button, so the number of votes is accurately matched to each user, which facilitates statistics.
  • the embodiment of the present invention may further include: emitting laser light toward the limb through the laser emitter and receiving the reflected beam; when the reflected light is received in the same direction, a reflected-light response signal is generated from the reflected light.
  • the response signal is then used, by triangulation, to calculate the distance between the signal input device and the illuminated section of the user's limb.
  • this scheme uses a two-dimensional ranging technique to measure the distance between the image sensor and the user's limb.
  • the principle of two-dimensional ranging is to send a dot beam or a linear beam to the user's limb through the laser, receive the light reflected from the user's limb through an image sensor (for example, an infrared sensor), and use triangulation to calculate the current distance between the limb and the image sensor; based on that distance, a relationship between different distances and different operation commands can be defined, and the corresponding operation instruction is generated by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • triangulation is a commonly used measurement method in the field of optical ranging: by calculating the position of the centroid of the reflected-light region on the sensor, together with the known relative angle and spacing of the laser emitting device and the image sensor, the distance from the target to the image sensor can be estimated.
  • when the emitted laser light consists of linear beams in different directions and reflected light is received from those directions, the plurality of reflected-light response signals are received, and triangulation is used to calculate the distances between the signal input device and different sections of the user's limb;
  • the corresponding operation instruction is generated by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
  • the above solution is an optical three-dimensional ranging technology. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles through a rotating shaft, so that the image sensor collects reflected light in different directions.
  • triangulation can then be used to measure the three-dimensional distance of different sections of the limb, and the three-dimensional data of the different sections can be superimposed in three-dimensional space to complete the three-dimensional modeling.
  • the image sensor receives the different reflected beams, and a three-dimensional reconstruction can be obtained from them, thereby obtaining more precise limb information.
  • optionally, feature recognition on the limb image can also proceed as follows: a correspondence between color blocks and limb features is established; for example, one color block represents the user's thumb and another color block represents the user's index finger, and so on.
  • a specific color block on the user's limb is detected, the limb feature corresponding to that color block is determined from its RGB value, and the recognition result is output.
  • this adds recognition parameters that speed up recognition. For example, when a user applies nail polish of a particular color, or wears gloves of particular colors (or with different color blocks), the processor locates and tracks that particular color, determines the RGB value of the color, and, according to the correspondence between RGB values and the user's limb features, determines which limb feature the color block represents, thereby identifying the user's limb features more quickly and efficiently.
  • optionally, the method further includes: receiving a sensing signal from the glove when the user operates while wearing a glove equipped with a sensing chip; and, based on the sensing signal, the feature-recognition result of the limb image, and the input signal, generating and outputting the operation instruction corresponding to that combination.
  • with the sensing signal from the glove, the specific operating finger/gesture of the user can be determined more quickly and conveniently; for example, different fingers can carry different sensing chips, such as NFC (near-field communication) chips.
  • the input signal of the button can also be detected according to the glove sensing signal, and, combined with the image recognition, the recognition result is more accurate and more robust.
  • in the embodiment of the present invention, the corresponding operation instruction is determined by combining image acquisition with button-press detection, which solves the problem that human-computer interaction signal input in the prior art is limited to a single mode and is inefficient.
  • with the technical solution provided by the present invention, which hand or foot the user uses, which finger is used, and which gesture is used to press a button can all be distinguished at a finer granularity, and different limb information pressing the same button produces different response signals.
  • moreover, different limb information can be combined with different buttons to define a large number of shortcut operations. That is, the present invention defines a completely new way of interaction that enables operation instructions to be triggered quickly. Compared with the prior art, the invention improves signal input efficiency, enriches the signal input modes, and improves the user experience.
  • it should be understood that, in the various embodiments of the present application, the magnitude of the sequence numbers of the processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • modules and method steps of the various examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

Disclosed is a device for inputting a signal, comprising: one or more buttons; one or more sensors used for acquiring images; one or more processors, said one or more processors being used for: controlling the sensor to capture the movement trajectory of limbs of a user, and acquiring an image of the limbs, wherein the limbs of the user comprise four limbs of the user; performing feature recognition on the image of the limbs and acquiring a post-recognition result; receiving an input signal after the user presses the one or more buttons; and combining the result of the feature recognition with the input signal, generating an operation instruction that corresponds to the combination of the result and the input signal and outputting the same. Accordingly, disclosed is a method for inputting a signal, which solves the problem in existing technology in which the mode for inputting human-machine interaction signals is unvaried and the efficiency is low.

Description

Method and device for signal input
Technical field
The present invention belongs to the field of information technology, and in particular relates to a method and device for signal input.
Background art
In the field of traditional human-computer interaction, a user inputs signals mainly by touch, buttons, or voice control. For example, a conventional numeric keypad requires the user to type keys by hand or to press virtual buttons on a touch screen, while an input device such as a dance mat requires the user to press the corresponding button on the mat with a foot.
However, although the traditional human-computer interaction methods are simple enough that the user can complete signal input with limbs or voice, some shortcut operations require the user to interact more than once. This is especially true for smart terminals, which have few buttons but complicated operations, so the user often needs several operations to find the desired function. A more efficient signal input method is therefore urgently needed to solve the problem that signal input in current human-computer interaction is limited to a single mode and is inefficient.
Summary of the invention
The present invention provides a method and a device for signal input, which solve the problem that human-computer interaction signal input in the prior art is limited to a single mode and is inefficient.
In order to achieve the above object, the present invention provides an apparatus for signal input, including:
one or more buttons;
one or more sensors for acquiring images; and
one or more processors, the one or more processors being configured to:
control the sensor to capture the movement trajectory of a user's limb and acquire the limb image, the user's limb including the user's four limbs;
perform feature recognition on the limb image and obtain the recognition result;
receive an input signal after the user presses the one or more buttons; and
combine the result of the feature recognition with the input signal, and generate and output an operation instruction corresponding to the combination of the result and the input signal.
Optionally, when the limb is the user's hand, the processor is configured to perform feature recognition on the limb image by:
locating the hand position and identifying the positional relationship between the hand position and the one or more buttons;
when the hand position is above the one or more buttons, recognizing the hand shape to confirm whether the hand is the left hand or the right hand, and/or
identifying the specific finger above the one or more buttons, and/or
identifying the hand movement trajectory, and/or
recognizing the gesture of the hand;
generating the operation instruction corresponding to the combination of the result and the input signal then includes:
establishing a correspondence between operation instructions and the identified positional relationship between the hand and the one or more buttons, the hand shape, and the input signal;
generating the corresponding operation instruction according to the correspondence.
Optionally, the signal input device further includes a pressure sensor configured to acquire the force with which the user presses the one or more buttons;
the processor is then further configured to:
acquire the pressing force collected by the pressure sensor, and generate the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
Optionally, the signal input device further includes a fingerprint sensor configured to collect the user's fingerprint and identify the user's identity;
the processor is then further configured to:
acquire the user identity information, and generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction;
or,
the image sensor is further configured to collect face information;
and the processor is further configured to:
identify the user's identity according to the collected face information;
generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
Optionally, the signal input device further includes a laser emitter for continuously emitting laser light; the laser emitter is at an angle to the image sensor, and the image sensor is a laser sensor. When the laser emitter emits laser light onto the user's limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals;
the processor is then further configured to:
when the image sensor receives reflected light in the same direction, receive the one reflected-light response signal, and calculate, by triangulation based on the reflected-light response signal, the distance between the image sensor and a section of the user's limb;
generate the corresponding operation instruction by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction; or,
when the laser emitter emits linear beams in different directions and the image sensor receives reflected light in different directions, receive the plurality of reflected-light response signals, and calculate, by triangulation based on the reflected-light response signals, the distances between the image sensor and different sections of the user's limb;
perform three-dimensional modeling of the user's limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different sections of the user's limb;
perform gesture reconstruction on the three-dimensionally modeled information to identify the current gesture information and the distance from the image sensor;
generate the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
Optionally, performing feature recognition on the limb image further includes:
establishing a correspondence between color blocks and limb features;
detecting a specific color block on the user's limb;
determining the limb feature corresponding to the color block according to the detected RGB value of the color block, and outputting the recognition result.
Optionally, the processor is further configured to: receive a sensing signal sent by a glove equipped with a sensing chip when the user operates while wearing the glove;
based on the sensing signal, the feature-recognition result of the limb image, and the input signal, generate and output an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal.
An embodiment of the present invention further provides a method for signal input, including:
capturing, by a signal input device, the movement trajectory of a user's limb and acquiring the limb image, the user's limb including the user's four limbs;
performing feature recognition on the limb image and obtaining the recognition result;
receiving an input signal after the user presses the one or more buttons; and
combining the result of the feature recognition with the input signal, and generating and outputting an operation instruction corresponding to the combination of the result and the input signal.
Optionally, when the limb is the user's hand, performing feature recognition on the limb image includes:
locating the hand position and identifying the positional relationship between the hand position and the one or more buttons;
when the hand position is above the one or more buttons, recognizing the hand shape to confirm whether the hand is the left hand or the right hand, and/or
identifying the specific finger above the one or more buttons, and/or
identifying the hand movement trajectory, and/or
recognizing the gesture of the hand;
generating the operation instruction corresponding to the combination of the result and the input signal then includes:
establishing a correspondence between operation instructions and the identified positional relationship between the hand and the one or more buttons, the hand shape, and the input signal;
generating the corresponding operation instruction according to the correspondence.
Optionally, the method further includes:
acquiring the force with which the user presses the button, and generating the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
Optionally, the method further includes:
performing face recognition or fingerprint recognition to acquire user identity information, and generating the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
Optionally, the method further includes:
emitting laser light onto the user's limb; when reflected light is received in the same direction, receiving the one reflected-light response signal, and calculating, by triangulation based on the reflected-light response signal, the distance between the signal input device and a section of the user's limb;
generating the corresponding operation instruction by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction; or,
when the emitted laser light consists of linear beams in different directions and reflected light is received in different directions, receiving the plurality of reflected-light response signals, and calculating, by triangulation based on the reflected-light response signals, the distances between the signal input device and different sections of the user's limb;
performing three-dimensional modeling of the user's limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different sections of the user's limb;
performing gesture reconstruction on the three-dimensionally modeled information to identify the current gesture information and the distance from the image sensor;
generating the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
Optionally, performing feature recognition on the limb image further includes:
establishing a correspondence between color blocks and limb features;
detecting a specific color block on the user's limb;
determining the limb feature corresponding to the color block according to the detected RGB value of the color block, and outputting the recognition result.
Optionally, the method further includes:
receiving a sensing signal sent by a glove equipped with a sensing chip when the user operates while wearing the glove;
based on the sensing signal, the feature-recognition result of the limb image, and the input signal, generating and outputting an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal.
The method and system of the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, the signal input device collects the user's limb information and the key input signal synchronously or asynchronously, and outputs a corresponding operation instruction according to the correspondence between the recognized limb information, the key input signal, and the operation instructions. With the technical solution provided by the present invention, which hand or foot, which finger, and which gesture the user uses to press a button can all be distinguished at a finer granularity; the response signals produced when different limb information presses the same button are different, and different limb information can be combined with different buttons to define a large number of shortcut operations. That is, the present invention defines a completely new way of interaction that enables operation instructions to be triggered quickly. Compared with the prior art, the invention improves signal input efficiency, enriches the signal input modes, and improves the user experience.
Brief description of the drawings
FIG. 1 is a schematic structural diagram of a signal input device in an embodiment of the present invention;
FIG. 2 is a schematic diagram of identifying left and right hand pressing buttons in an embodiment of the present invention;
FIG. 3 is a schematic diagram of identifying a specific finger pressing a button in an embodiment of the present invention;
FIG. 4 is a schematic diagram of gesture recognition in an embodiment of the present invention;
FIG. 5 is a schematic diagram of three-dimensional modeling after two-dimensional ranging in an embodiment of the present invention;
FIG. 6 is a flow chart of a signal input method in an embodiment of the present invention.
具体实施方式Detailed ways
为了使本发明的目的、技术方案及优点更加清楚,以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。此外,下面所描述的本发明各个实施方式中所涉及到的技术特征只要彼此之间未构成冲突就可以相互组合。The present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It is understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. Further, the technical features involved in the various embodiments of the present invention described below may be combined with each other as long as they do not constitute a conflict with each other.
To achieve the above objects, as shown in FIG. 1, the present invention provides a signal input apparatus 11, the apparatus including:
one or more keys 101, one or more sensors 102 for acquiring images, and one or more processors 103, the one or more processors 103 being configured to:
control the sensor 102 to capture a movement trajectory of a user's limb and acquire a limb image, the user's limb including the user's limbs, that is, the user's hands or feet;
perform feature recognition on the limb image to obtain a recognition result;
receive an input signal generated after the user presses the one or more keys 101;
combine the feature recognition result with the input signal, and generate and output an operation instruction corresponding to the combination of the result and the input signal.
The key 101 may be a physical key or a virtual key, and the number of keys is not limited; for example, it may be one or more keys on a conventional numeric keypad, a single stand-alone key, or one or more virtual keys displayed on a touch screen.
It should be noted that the key may also be all or part of a touch screen or display; wherever on the screen the user presses, the key is considered to have been touched. For example, in conventional drawing software, when the user draws with a finger on the drawing display area of the touch screen, it is determined that the user has pressed the drawing key, and the position information of the touch point after the key is pressed is acquired.
In the embodiments of the present invention, the sensor 102 may be a visible-light or non-visible-light image sensor, such as a CCD or CMOS image sensor, or an infrared/ultraviolet sensor for receiving infrared/ultraviolet light.
When the limb is the user's hand, the processor 103 being configured to perform feature recognition on the limb image may specifically include:
locating the position of the hand and identifying the positional relationship between the hand position and the one or more keys; for example, the hand may be directly above a key, or to the left or right of the key. The hand can be located using conventional image processing methods, for example an image processing algorithm based on binarization and contour recognition, to obtain the shape of the hand in an image and its position in the picture. Since the image sensor can be fixed in the same place and capture images periodically, the background of the picture (everything other than the hand can be defined as the background) is unchanged, and the only change is the position of the hand in the picture; the hand can therefore be located, and its movement trajectory identified, from the changes of the hand across different pictures.
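As a rough illustration of the background-subtraction and contour-recognition approach described above, the following sketch locates the hand in a frame taken by a fixed camera. It assumes the OpenCV and NumPy libraries are available; the thresholds, function names, and the hand/key position test are illustrative assumptions rather than part of the claimed device.

```python
# Illustrative sketch only: locate a hand against a fixed background by
# frame differencing, binarization and contour extraction (OpenCV assumed).
import cv2
import numpy as np

def locate_hand(background_gray, frame_bgr, min_area=2000):
    """Return the bounding box (x, y, w, h) of the largest moving region,
    taken to be the hand, or None if nothing large enough has moved."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(background_gray, gray)                   # static background
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # binarization
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)                   # largest contour = hand
    if cv2.contourArea(hand) < min_area:
        return None
    return cv2.boundingRect(hand)

def hand_over_key(hand_box, key_box):
    """Very rough positional-relationship test: is the horizontal centre
    of the hand within the key's x-range?"""
    hx, hy, hw, hh = hand_box
    kx, ky, kw, kh = key_box
    centre_x = hx + hw / 2
    return kx <= centre_x <= kx + kw
```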
When the hand position is above the one or more keys, recognizing the hand shape and confirming whether the hand is the left hand or the right hand. For example, when the hand is above the key, the hand shape can be recognized to distinguish whether the user is about to press, or has already pressed, the key with the left hand or the right hand, and, combined with the signal that the key is pressed, different operation command responses are made for a left-hand press and a right-hand press. The operation command may be the output of a piece of text, sound, or image, or a command of a certain program, and may be user-defined or preset by the signal input device. For example, when the user presses the key with the left hand, a piece of text is output, and when the user presses with the right hand, a piece of sound is output. FIG. 2 is a schematic diagram of recognizing the left and right hands. As shown in FIG. 2, when the user is about to press the key 101 on the keyboard 100, the image sensor 102 captures the hand in real time and distinguishes whether the left hand or the right hand is operating when, or before, the user presses the key.
and/or,
recognizing the specific finger above the one or more keys, that is, not only distinguishing the user's left and right hands but also distinguishing the user's fingers; for example, pressing the same key with the thumb and with the index finger may correspond to different operation instructions. The same key pressed by different fingers corresponds in advance to different operation instructions. Optionally, the signal input device may preset a correspondence table in which different fingers and different keys are arranged and combined, with different output operation instructions. For example, the index finger and the thumb together with keys A and B can form seven different states: no press, the index finger alone pressing key A, the index finger alone pressing key B, the thumb alone pressing key A, the thumb alone pressing key B, the index finger pressing key A while the thumb presses key B, and the index finger pressing key B while the thumb presses key A, corresponding to six different operation instructions (there is no operation instruction when neither key A nor key B is pressed), and each operation instruction can be user-defined or preset. In addition, pressing the same key with the left thumb and with the right thumb may also produce different operation responses. That is, left and right hands, index finger and thumb, and keys A and B can be combined into a more complex correspondence table to output more and more complex operation responses. For example, FIG. 3 is a specific example of a user typing on a keyboard: while the user types on the keyboard 100, the image sensor 102 captures the hand movement trajectory and hand images in real time, distinguishes which finger the user uses to press the keyboard, outputs different response commands, and displays them on the display 105.
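A minimal sketch of such a preset correspondence table is given below. The specific operation instruction names are hypothetical placeholders; the point is only that each (finger, key) combination maps to its own instruction, with undefined combinations producing no instruction.

```python
# Illustrative sketch only: a preset correspondence table mapping
# (finger, key) combinations to operation instructions; the instruction
# names are hypothetical placeholders.
OPERATION_TABLE = {
    ("index", "A"): "insert_text",
    ("index", "B"): "play_sound",
    ("thumb", "A"): "show_image",
    ("thumb", "B"): "undo",
    ("index+thumb", "A+B"): "save_document",
    ("index+thumb", "B+A"): "open_menu",
}

def lookup_instruction(finger, key):
    """Return the operation instruction for a finger/key combination,
    or None when the combination is not defined (e.g. no key pressed)."""
    return OPERATION_TABLE.get((finger, key))

print(lookup_instruction("thumb", "A"))   # -> "show_image"
print(lookup_instruction("middle", "A"))  # -> None (undefined combination)
```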
and/or,
recognizing the hand movement trajectory; in addition to distinguishing the left and right hands and the fingers, the embodiments of the present invention can also distinguish the hand movement trajectory, such as moving from top to bottom onto the key, from bottom to top onto the key, or diagonally from above to below onto the key as mentioned above; the operation instructions corresponding to different movement directions may also be different.
and/or,
recognizing a gesture of the hand; in addition to the above left/right-hand recognition, finger recognition, and hand movement trajectory recognition, the embodiments of the present invention can also implement gesture recognition. Gesture recognition can be similar to Apple's multi-touch technology, implementing interaction modes such as pinching (zoom-out/zoom-in instructions, as shown in FIG. 4) and multi-finger rotation (picture rotation instructions). Unlike Apple's multi-touch, however, the present invention does not need to capture multi-point movement trajectories on a touch screen; it can capture multiple frames of pictures and recognize the hand shape and its changes across the frames, thereby determining the user's current posture. For example, when it is detected that the user's hand is above a certain key and the user makes a pinch gesture before or after pressing the key, the corresponding operation instruction can be jointly determined by the pinch gesture and the press of the key (physical or virtual). Gesture recognition here is a comprehensive technique combining finger recognition and finger movement trajectory recognition; the gestures can be recognized using existing machine learning algorithms, which are not described again here.
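One possible, simplified way to detect a pinch or spread across captured frames is sketched below, assuming the fingertip coordinates of the thumb and index finger have already been extracted by the recognition step; the ratios and function names are illustrative assumptions.

```python
# Illustrative sketch only: classify a pinch/spread gesture from the distance
# between the thumb tip and the index tip across several captured frames.
import math

def fingertip_distance(thumb_xy, index_xy):
    return math.dist(thumb_xy, index_xy)

def classify_gesture(frames, shrink_ratio=0.7, grow_ratio=1.3):
    """frames: list of (thumb_xy, index_xy) tuples in time order.
    Returns 'pinch' (zoom out), 'spread' (zoom in) or 'none'."""
    if len(frames) < 2:
        return "none"
    first = fingertip_distance(*frames[0])
    last = fingertip_distance(*frames[-1])
    if first == 0:
        return "none"
    if last / first < shrink_ratio:
        return "pinch"       # fingertips moved together
    if last / first > grow_ratio:
        return "spread"      # fingertips moved apart
    return "none"
```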
The processor being configured to generate an operation instruction corresponding to the combination of the result and the input signal may specifically include:
establishing a correspondence between an operation instruction and the three items of the positional relationship between the recognized hand position and the one or more keys, the hand shape, and the input signal;
generating the corresponding operation instruction according to the correspondence.
Similarly, when the limb is a foot, for a typical key such as a key on a dance mat, the position of the foot can be located by the image sensor to distinguish whether the user steps on a key with the left foot or the right foot. Likewise, the movement trajectory of the foot when the key is pressed can be determined from the foot's movement, for example from top to bottom, from left to right, or diagonally from above to below; the operation instructions are different for different directions.
The new interaction mode defined by the present invention has wide applications. For example, in the gaming field, a gamepad usually has only a few keys, and pressing different keys with different gesture/finger combinations provides different character shortcuts, making games more playable and improving the user experience. In the field of education, when the user clicks/presses/touches different keys with different fingers/gestures, different teaching content or teaching effects can be triggered; in drawing, for example, the lines drawn when the user paints on an LCD screen with the index finger and with the thumb can differ in color and thickness.
Optionally, the signal input device further includes a pressure sensor configured to acquire the force with which the user presses the one or more keys;
the processor is then further configured to:
acquire the pressing force collected by the pressure sensor, and generate the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
At present, application-grade pressure sensors are widely available. In the embodiments of the present invention, such a pressure sensor may be built in (for example, placed inside a key) to collect the force with which the user presses the key; the force is divided into levels such as low, medium, and high according to different force thresholds, and each level may correspond to a different operation instruction, similar to Apple's 3D Touch. The embodiments of the present invention creatively propose building a correspondence table between the operation instructions and the pressing force collected by the pressure sensor, the input signal, and the recognition result, and outputting the corresponding operation instruction according to this correspondence on the basis of the multiple collected parameters.
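A minimal sketch of such a force-level classification combined with the correspondence lookup follows; the normalized thresholds and instruction names are illustrative assumptions, not values prescribed by the embodiments.

```python
# Illustrative sketch only: map a raw pressure reading to a low/medium/high
# force level and combine it with the finger and key to look up an instruction.
FORCE_THRESHOLDS = (0.3, 0.7)   # normalised pressure thresholds (assumed)

def force_level(pressure):
    if pressure < FORCE_THRESHOLDS[0]:
        return "low"
    if pressure < FORCE_THRESHOLDS[1]:
        return "medium"
    return "high"

PRESSURE_TABLE = {
    ("index", "A", "low"): "preview",
    ("index", "A", "medium"): "open",
    ("index", "A", "high"): "open_in_new_window",
}

def instruction_for(finger, key, pressure):
    return PRESSURE_TABLE.get((finger, key, force_level(pressure)))

print(instruction_for("index", "A", 0.85))   # -> "open_in_new_window"
```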
Optionally, the signal input device further includes a fingerprint sensor configured to collect a user fingerprint and identify the user identity;
the processor is then further configured to:
acquire user identity information, and generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction. As with the pressure sensor, the present invention may add a pressure sensor and/or a fingerprint sensor; the fingerprint sensor may also be built inside a key, and when the user presses the key, the fingerprint is automatically recognized to determine which specific user is operating, and the corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
Or,
the image sensor is further configured to collect face information;
the processor is further configured to:
identify the user identity according to the collected face information;
generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
This is similar to the former case; the difference is that here user identity is collected by performing face recognition through the image sensor 102. Face recognition belongs to the prior art, and its specific implementation is not described again.
For example, in the embodiments of the present invention, face recognition can be applied to voting, such as elections, entertainment programs, or other programs involving public voting. Current voting systems suffer from users maliciously casting repeated votes or missing votes, which makes statistics difficult. By combining face recognition with key-press voting, the specific user who pressed the current key can be located, so that votes correspond exactly to users, which facilitates statistics.
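As a small illustration of the one-vote-per-user idea, the sketch below assumes that a face-recognition step returns a stable user identifier for each key press; the class and method names are hypothetical.

```python
# Illustrative sketch only: one-vote-per-recognised-user tallying.
from collections import defaultdict

class VoteTally:
    def __init__(self):
        self.votes = {}                     # user_id -> chosen key

    def cast(self, user_id, key):
        """Record or overwrite this user's single vote."""
        self.votes[user_id] = key

    def counts(self):
        result = defaultdict(int)
        for key in self.votes.values():
            result[key] += 1
        return dict(result)

tally = VoteTally()
tally.cast("user_42", "A")
tally.cast("user_42", "A")   # repeated press by the same user counts once
tally.cast("user_7", "B")
print(tally.counts())        # -> {'A': 1, 'B': 1}
```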
Optionally, the signal input device further includes a laser emitter for continuously emitting laser light, the laser emitter being at an angle to the image sensor, and the image sensor being a laser sensor. When the laser emitter emits laser light onto the user's limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals. The laser emitter may emit dot-matrix light or a linear beam; for example, the dot-matrix light can be expanded into one or more linear beams by a built-in beam expander. Compared with dot-matrix light, a linear beam collects more data and therefore gives more accurate ranging, so a linear beam is preferred in the embodiments of the present invention.
The processor is further configured to:
when the image sensor receives reflected light in the same direction, receive the one reflected-light response signal, and calculate, according to the reflected-light response signal and using triangulation, the distance between the image sensor and a section of the user's limb;
generate the corresponding operation instruction by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction;
This method uses two-dimensional ranging to measure the distance between the image sensor and the user's limb. The principle of two-dimensional ranging is to send a dot beam or linear beam to the user's limb by laser, receive the light reflected from the user's limb through an image sensor (for example, an infrared sensor), and calculate by triangulation the distance between the limb and the image sensor at the current moment. On the basis of this distance, relationships between different distances and different operation instructions can be defined, and the corresponding operation instruction is generated by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction. Triangulation is a measurement method commonly used in the field of optical ranging: by calculating the centroid position of the region and using the known relative angle and spacing between the laser emitting device and the image sensor, the distance of the target from the image sensor can be estimated. The basic measurement formula of triangulation is z = b*f/x, where b is the spacing between the laser emitting device and the image sensor, f is the focal length of the lens used by the image sensor, x is the column-coordinate centroid position of the reflected light projected on the image sensor, and z is the measured distance. It can be seen from the formula that the measured distance depends only on the centroid position in the column direction and is independent of the number of rows; the photosensitive array can therefore be arranged mainly along the column direction, with only one row or a very small number of rows in the row direction. In addition, the measurement error formula is e = 1/(b*f/(n*z) + 1), where n is the centroid extraction error; it can be seen that the measurement error e is inversely proportional to b and f and proportional to n and z. Therefore, for given b, f, and n, a lens with a relatively long focal length needs to be selected to reduce the error at different distances.
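The two formulas above translate directly into code. The following sketch evaluates them with made-up, internally consistent values; the numbers are illustrative only.

```python
# Illustrative sketch only: laser triangulation distance and error estimate,
# following the formulas z = b*f/x and e = 1/(b*f/(n*z) + 1) quoted above.
def triangulation_distance(b, f, x):
    """b: emitter/sensor spacing, f: lens focal length,
    x: column-coordinate centroid of the reflected spot on the sensor."""
    return b * f / x

def triangulation_error(b, f, n, z):
    """n: centroid-extraction error, z: measured distance."""
    return 1.0 / (b * f / (n * z) + 1.0)

# Example with made-up values (units only need to be consistent):
b, f = 0.05, 0.008          # 5 cm baseline, 8 mm focal length
x = 0.0004                  # 0.4 mm centroid offset on the sensor
z = triangulation_distance(b, f, x)
print(z)                                     # -> 1.0 (metre)
print(triangulation_error(b, f, 1e-5, z))    # error grows with n and z
```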
Or,
when the laser emitter emits linear beams in different directions and the image sensor receives reflected light in different directions, receive the plurality of reflected-light response signals, and calculate, according to the reflected-light response signals and using triangulation, the distances between the image sensor and different sections of the user's limb;
perform three-dimensional modeling of the user's limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different sections of the user's limb;
perform gesture reconstruction on the three-dimensionally modeled information, and identify the current gesture information and the distance from the image sensor;
generate the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
The above scheme is an optical three-dimensional ranging technique. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles through a rotating shaft, so that the image sensor collects reflected light in different directions. The three-dimensional distances of different sections of the limb can be measured by triangulation, and the three-dimensional data of the different sections are superimposed in three-dimensional space to complete the three-dimensional modeling, as shown in FIG. 5. When different linear beams are emitted onto the surface of the limb, the image sensor receives different reflected beams, images them on the sensor panel, and generates the reflected-light response signals; three-dimensional image reconstruction can be obtained from the different reflected beams, thereby obtaining more, and more precise, limb information.
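The superposition of per-section distances into a rough point cloud might look like the sketch below. The geometry (how an emitter angle and a per-column distance map to 3D coordinates) depends on the actual emitter/sensor arrangement and is heavily simplified here as an assumption; all names are illustrative.

```python
# Illustrative sketch only: build a crude 3D point cloud of the limb by
# stacking per-slice triangulation distances taken at different emitter
# angles, in a frame with the image sensor at the origin.
import math

def slice_to_points(distances, angle_rad, column_spacing=0.001):
    """distances: per-column distances (metres) for one linear-beam slice,
    angle_rad: emitter angle of this slice about the rotation axis."""
    points = []
    for col, z in enumerate(distances):
        x = col * column_spacing            # position along the beam line
        y = z * math.sin(angle_rad)         # offset of this slice from the axis
        depth = z * math.cos(angle_rad)
        points.append((x, y, depth))
    return points

def build_point_cloud(slices):
    """slices: list of (distances, angle_rad) pairs; returns all 3D points."""
    cloud = []
    for distances, angle in slices:
        cloud.extend(slice_to_points(distances, angle))
    return cloud
```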
Optionally, performing feature recognition on the limb image may also specifically include:
establishing a correspondence between color blocks and limb features; for example, one color block represents the user's thumb, another color block represents the user's index finger, and so on.
detecting specific color blocks on the user's limb;
determining, according to the RGB values of the detected color blocks, the limb features corresponding to the color blocks, and outputting the recognition result.
In addition to recognizing ordinary hand images, a further recognition parameter (color) can be added to speed up recognition. For example, when the user applies nail polish of a specific color, or wears gloves of a specific color (or with different color blocks), the processor locates and tracks that specific color to determine its RGB values, and determines the limb feature represented by the color block according to the correspondence between the RGB values and the user's limb features, so that the user's limb features are recognized more quickly and efficiently.
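A minimal sketch of such color-based identification follows (OpenCV and NumPy assumed; the marker colors, tolerance, and pixel-count threshold are illustrative assumptions).

```python
# Illustrative sketch only: identify which finger is in view by matching the
# RGB values of a detected colour block against a preset correspondence table.
import cv2
import numpy as np

COLOUR_TO_FINGER = {
    (220, 40, 40): "thumb",    # e.g. red nail polish / marker
    (40, 220, 40): "index",    # e.g. green marker
}

def finger_from_colour(frame_bgr, tolerance=40, min_pixels=500):
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    for target, finger in COLOUR_TO_FINGER.items():
        lower = np.clip(np.array(target) - tolerance, 0, 255).astype(np.uint8)
        upper = np.clip(np.array(target) + tolerance, 0, 255).astype(np.uint8)
        mask = cv2.inRange(rgb, lower, upper)
        if cv2.countNonZero(mask) > min_pixels:   # enough pixels of this colour
            return finger
    return None
```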
Optionally, the processor is further configured to: when the user operates while wearing a glove equipped with a sensing chip, receive a sensing signal emitted by the glove;
based on the sensing signal, the feature recognition result of the limb image, and the input signal, generate and output an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal.
Likewise, when the user wears a glove fitted with sensing chips, the user's specific operating finger/gesture can be determined more quickly and conveniently from the glove's sensing signals. For example, different sensing chips (such as NFC near-field communication chips) are mounted in different fingers of the glove. When the user presses the key with a certain finger, in addition to detecting the key's input signal, the specific pressing finger can also be detected from the glove sensing signal; combined with the limb feature recognition result, the three together determine the specific finger/gesture currently pressing the key, and the corresponding content is output. This makes the recognition result more accurate and more robust.
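One simple way to fuse the three inputs is sketched below, under the assumption that the glove reading takes precedence when the image-based result disagrees; the table contents and the precedence rule are illustrative assumptions.

```python
# Illustrative sketch only: fuse the glove sensing signal, the image-based
# recognition result and the key input before looking up the instruction.
COMBINED_TABLE = {
    ("index", "A"): "type_letter",
    ("thumb", "A"): "switch_layout",
}

def resolve_finger(glove_finger, image_finger):
    """Prefer the glove's chip reading; fall back to image recognition."""
    return glove_finger if glove_finger is not None else image_finger

def instruction(glove_finger, image_finger, key):
    finger = resolve_finger(glove_finger, image_finger)
    return COMBINED_TABLE.get((finger, key))

print(instruction("thumb", "index", "A"))  # glove wins -> "switch_layout"
print(instruction(None, "index", "A"))     # image used -> "type_letter"
```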
In the embodiments of the present invention, the corresponding operation instruction is determined by combining image acquisition with key-press detection, which solves the problem in the prior art that the input mode of human-computer interaction signals is single and inefficient. With the technical solution provided by the present invention, a key press can be distinguished at a finer granularity according to which hand or foot the user uses, which finger is used, and which gesture is made; pressing the same key with different limb information produces different response signals, and different limb information can be combined with different keys to define a large number of shortcut operations. That is, the present invention defines a completely new interaction mode that enables operation instructions to be triggered quickly. Compared with the prior art, the present invention improves signal input efficiency, enriches signal input modes, and improves the user experience.
Embodiment 2
An embodiment of the present invention provides a signal input method. As shown in FIG. 6, the method includes:
S201: the signal input device captures a movement trajectory of a user's limb and acquires a limb image, the user's limb including the user's limbs;
S202: performing feature recognition on the limb image to obtain a recognition result;
S203: receiving an input signal generated after the user presses the one or more keys;
S204: combining the feature recognition result with the input signal, and generating and outputting an operation instruction corresponding to the combination of the result and the input signal.
For the keys and sensors, reference may be made to the examples described in Embodiment 1, which are not repeated here.
It should be noted that there is no fixed order between steps S202 and S203; the key-press signal may be received first and the limb image recognized afterwards, or vice versa, with no effect on the final processing result of the embodiments of the present invention.
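The S201–S204 flow can be summarized in the following sketch, with the capture, recognition, and key-reading steps stubbed out; every helper name here is a hypothetical placeholder rather than an interface defined by the method.

```python
# Illustrative sketch only: the overall S201-S204 flow of the signal input
# method, with the individual steps stubbed out as placeholders.
def capture_limb_image(sensor):             # S201
    return sensor.read_frame()

def recognise_features(limb_image):         # S202
    return {"hand": "left", "finger": "index"}   # placeholder result

def read_key_input(keypad):                 # S203
    return keypad.pressed_key()

def generate_instruction(result, key, table):    # S204
    return table.get((result["hand"], result["finger"], key))

def signal_input_method(sensor, keypad, table):
    image = capture_limb_image(sensor)
    result = recognise_features(image)      # S202 and S203 may run in either order
    key = read_key_input(keypad)
    return generate_instruction(result, key, table)
```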
When the limb is the user's hand, performing feature recognition on the limb image in S202 may specifically include:
locating the position of the hand and identifying the positional relationship between the hand position and the one or more keys; for example, the hand may be directly above a key, or to the left or right of the key. The hand can be located using conventional image processing methods, for example an image processing algorithm based on binarization and contour recognition, to obtain the shape of the hand in an image and its position in the picture. Since the image sensor can be fixed in the same place and capture images periodically, the background of the picture (everything other than the hand can be defined as the background) is unchanged, and the only change is the position of the hand in the picture; the hand can therefore be located, and its movement trajectory identified, from the changes of the hand across different pictures.
When the hand position is above the one or more keys, recognizing the hand shape and confirming whether the hand is the left hand or the right hand. For example, when the hand is above the key, the hand shape can be recognized to distinguish whether the user is about to press, or has already pressed, the key with the left hand or the right hand, and, combined with the signal that the key is pressed, different operation command responses are made for a left-hand press and a right-hand press. The operation command may be the output of a piece of text, sound, or image, or a command of a certain program, and may be user-defined or preset by the signal input device. For example, when the user presses the key with the left hand, a piece of text is output, and when the user presses with the right hand, a piece of sound is output.
and/or,
recognizing the specific finger above the one or more keys, that is, not only distinguishing the user's left and right hands but also distinguishing the user's fingers; for example, pressing the same key with the thumb and with the index finger may correspond to different operation instructions. The same key pressed by different fingers corresponds in advance to different operation instructions. Optionally, the signal input device may preset a correspondence table in which different fingers and different keys are arranged and combined, with different output operation instructions. For example, the index finger and the thumb together with keys A and B can form seven different states: no press, the index finger alone pressing key A, the index finger alone pressing key B, the thumb alone pressing key A, the thumb alone pressing key B, the index finger pressing key A while the thumb presses key B, and the index finger pressing key B while the thumb presses key A, corresponding to six different operation instructions (there is no operation instruction when neither key A nor key B is pressed), and each operation instruction can be user-defined or preset. In addition, pressing the same key with the left thumb and with the right thumb may also produce different operation responses. That is, left and right hands, index finger and thumb, and keys A and B can be combined into a more complex correspondence table to output more and more complex operation responses.
and/or,
recognizing the hand movement trajectory; in addition to distinguishing the left and right hands and the fingers, the embodiments of the present invention can also distinguish the hand movement trajectory, such as moving from top to bottom onto the key, from bottom to top onto the key, or diagonally from above to below onto the key as mentioned above; the operation instructions corresponding to different movement directions may also be different.
and/or,
recognizing a gesture of the hand; in addition to the above left/right-hand recognition, finger recognition, and hand movement trajectory recognition, the embodiments of the present invention can also implement gesture recognition. Gesture recognition can be similar to Apple's multi-touch technology, implementing interaction modes such as pinching (zoom-out/zoom-in instructions) and multi-finger rotation (picture rotation instructions). Unlike Apple's multi-touch, however, the present invention does not need to capture multi-point movement trajectories on a touch screen; it can capture multiple frames of pictures and recognize the hand shape and its changes across the frames, thereby determining the user's current posture. For example, when it is detected that the user's hand is above a certain key and the user makes a pinch gesture before or after pressing the key, the corresponding operation instruction can be jointly determined by the pinch gesture and the press of the key (physical or virtual). Gesture recognition here is a comprehensive technique combining finger recognition and finger movement trajectory recognition; the gestures can be recognized using existing machine learning algorithms, which are not described again here.
The processor being configured to generate an operation instruction corresponding to the combination of the result and the input signal may specifically include:
establishing a correspondence between an operation instruction and the three items of the positional relationship between the recognized hand position and the one or more keys, the hand shape, and the input signal;
generating the corresponding operation instruction according to the correspondence.
Similarly, when the limb is a foot, for a typical key such as a key on a dance mat, the position of the foot can be located by the image sensor to distinguish whether the user steps on a key with the left foot or the right foot. Likewise, the movement trajectory of the foot when the key is pressed can be determined from the foot's movement, for example from top to bottom, from left to right, or diagonally from above to below; the operation instructions are different for different directions.
The new interaction mode defined by the present invention has wide applications. For example, in the gaming field, a gamepad usually has only a few keys, and pressing different keys with different gesture/finger combinations provides different character shortcuts, making games more playable and improving the user experience. In the field of education, when the user clicks/presses/touches different keys with different fingers/gestures, different teaching content or teaching effects can be triggered; in drawing, for example, the lines drawn when the user paints on an LCD screen with the index finger and with the thumb can differ in color and thickness.
Optionally, the embodiments of the present invention further include: acquiring the force with which the user presses the one or more keys, and generating the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
In the embodiments of the present invention, the force can be divided into levels such as low, medium, and high according to different force thresholds, and each level may correspond to a different operation instruction, similar to Apple's 3D Touch. The embodiments of the present invention creatively propose building a correspondence table between the operation instructions and the pressing force collected by the pressure sensor, the input signal, and the recognition result, and outputting the corresponding operation instruction according to this correspondence on the basis of the multiple collected parameters.
Optionally, the embodiments of the present invention further include: collecting a user fingerprint and identifying the user identity; acquiring user identity information, and generating the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction. When the user presses a key, the fingerprint is automatically recognized to determine which specific user is operating, and the corresponding operation instruction is generated by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
Or,
collecting face information through the image sensor;
identifying the user identity according to the collected face information;
generating the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
This is similar to the former case; the difference is that here user identity is collected by performing face recognition through the image sensor. Face recognition belongs to the prior art, and its specific implementation is not described again.
For example, in the embodiments of the present invention, face recognition can be applied to voting, such as elections, entertainment programs, or other programs involving public voting. Current voting systems suffer from users maliciously casting repeated votes or missing votes, which makes statistics difficult. By combining face recognition with key-press voting, the specific user who pressed the current key can be located, so that votes correspond exactly to users, which facilitates statistics.
Optionally, the embodiments of the present invention further include: emitting laser light onto the limb through the laser emitter and receiving the reflected beam; when reflected light in the same direction is received, generating a reflected-light response signal, and calculating, according to the reflected-light response signal and using triangulation, the distance between the signal input device and a section of the user's limb;
generating the corresponding operation instruction by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction;
This method uses two-dimensional ranging to measure the distance between the image sensor and the user's limb. The principle of two-dimensional ranging is to send a dot beam or linear beam to the user's limb by laser, receive the light reflected from the user's limb through an image sensor (for example, an infrared sensor), and calculate by triangulation the distance between the limb and the image sensor at the current moment. On the basis of this distance, relationships between different distances and different operation instructions can be defined, and the corresponding operation instruction is generated by combining the distance information, the input signal, and the correspondence between the recognition result and the operation instruction. Triangulation is a measurement method commonly used in the field of optical ranging: by calculating the centroid position of the region and using the known relative angle and spacing between the laser emitting device and the image sensor, the distance of the target from the image sensor can be estimated.
Or,
when the emitted laser light consists of linear beams in different directions and reflected light in different directions is received, receiving the plurality of reflected-light response signals, and calculating, according to the reflected-light response signals and using triangulation, the distances between the signal input device and different sections of the user's limb;
performing three-dimensional modeling of the user's limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different sections of the user's limb;
performing gesture reconstruction on the three-dimensionally modeled information, and identifying the current gesture information and the distance from the image sensor;
generating the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the correspondence between the recognition result and the operation instruction.
The above scheme is an optical three-dimensional ranging technique. It differs from two-dimensional ranging in that the laser emitter can emit laser light at different angles through a rotating shaft, so that the image sensor collects reflected light in different directions. The three-dimensional distances of different sections of the limb can be measured by triangulation, and the three-dimensional data of the different sections are superimposed in three-dimensional space to complete the three-dimensional modeling. When different linear beams are emitted onto the surface of the limb, the image sensor receives different reflected beams, and three-dimensional image reconstruction can be obtained from the different reflected beams, thereby obtaining more, and more precise, limb information.
Optionally, performing feature recognition on the limb image may also specifically include:
establishing a correspondence between color blocks and limb features; for example, one color block represents the user's thumb, another color block represents the user's index finger, and so on.
detecting specific color blocks on the user's limb;
determining, according to the RGB values of the detected color blocks, the limb features corresponding to the color blocks, and outputting the recognition result.
In addition to recognizing ordinary hand images, a further recognition parameter (color) can be added to speed up recognition. For example, when the user applies nail polish of a specific color, or wears gloves of a specific color (or with different color blocks), the processor locates and tracks that specific color to determine its RGB values, and determines the limb feature represented by the color block according to the correspondence between the RGB values and the user's limb features, so that the user's limb features are recognized more quickly and efficiently.
Optionally, the method further includes: when the user operates while wearing a glove equipped with a sensing chip, receiving a sensing signal emitted by the glove;
based on the sensing signal, the feature recognition result of the limb image, and the input signal, generating and outputting an operation instruction corresponding to the combination of the sensing signal, the recognition result, and the input signal.
Likewise, when the user wears a glove fitted with sensing chips, the user's specific operating finger/gesture can be determined more quickly and conveniently from the glove's sensing signals. For example, different sensing chips (such as NFC near-field communication chips) are mounted in different fingers of the glove. When the user presses the key with a certain finger, in addition to detecting the key's input signal, the specific pressing finger can also be detected from the glove sensing signal; combined with the limb feature recognition result, the three together determine the specific finger/gesture currently pressing the key, and the corresponding content is output. This makes the recognition result more accurate and more robust.
In the embodiments of the present invention, the corresponding operation instruction is determined by combining image acquisition with key-press detection, which solves the problem in the prior art that the input mode of human-computer interaction signals is single and inefficient. With the technical solution provided by the present invention, a key press can be distinguished at a finer granularity according to which hand or foot the user uses, which finger is used, and which gesture is made; pressing the same key with different limb information produces different response signals, and different limb information can be combined with different keys to define a large number of shortcut operations. That is, the present invention defines a completely new interaction mode that enables operation instructions to be triggered quickly. Compared with the prior art, the present invention improves signal input efficiency, enriches signal input modes, and improves the user experience.
It should be understood that, in the various embodiments of the present application, the magnitude of the sequence numbers of the processes does not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the modules and method steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of the present application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, device, and modules described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
The various parts of this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, the device and system embodiments are described relatively simply since they are substantially similar to the method embodiments; for relevant parts, reference may be made to the description of the method embodiments.
Finally, it should be noted that the above description is only a preferred embodiment of the technical solution of the present application and is not intended to limit the scope of protection of the present application. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the scope of the application. Any modifications, equivalent replacements, improvements, and the like made within the scope of the claims of the present application and their equivalent technologies shall be included in the scope of protection of the present application.

Claims (10)

  1. A signal input apparatus, characterized by comprising:
    one or more keys;
    one or more sensors for acquiring images;
    one or more processors, the one or more processors being configured to:
    control the sensor to capture a movement trajectory of a user's limb and acquire a limb image, the user's limb including the user's limbs;
    perform feature recognition on the limb image to obtain a recognition result;
    receive an input signal generated after the user presses the one or more keys;
    combine the feature recognition result with the input signal, and generate and output an operation instruction corresponding to the combination of the result and the input signal.
  2. The apparatus according to claim 1, wherein the limb is the user's hand, and the processor being configured to perform feature recognition on the limb image comprises:
    locating the hand position and identifying the positional relationship between the hand position and the one or more keys;
    when the hand position is above the one or more keys, recognizing the hand shape and confirming whether the hand is the left hand or the right hand, and/or,
    recognizing the specific finger above the one or more keys, and/or,
    recognizing the hand movement trajectory, and/or,
    recognizing a gesture of the hand;
    and the generating the operation instruction corresponding to the combination of the result and the input signal comprises:
    establishing a correspondence between an operation instruction and the three items of the positional relationship between the recognized hand position and the one or more keys, the hand shape, and the input signal;
    generating the corresponding operation instruction according to the correspondence.
  3. The apparatus according to claim 1, wherein the signal input apparatus further comprises a pressure sensor configured to acquire the force with which the user presses the one or more keys;
    and the processor is further configured to:
    acquire the pressing force collected by the pressure sensor, and generate the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between the recognition result and the operation instruction.
  4. The apparatus according to claim 1, wherein the signal input apparatus further comprises a fingerprint sensor configured to collect a user fingerprint and identify the user identity;
    and the processor is further configured to:
    acquire user identity information, and generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction;
    or,
    the image sensor is further configured to collect face information;
    and the processor is further configured to:
    identify the user identity according to the collected face information;
    generate the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between the recognition result and the operation instruction.
  5. The device according to any one of claims 1 to 4, wherein the signal input device further comprises a laser emitter configured to continuously emit laser light, the laser emitter is arranged at an angle to the image sensor, and the image sensor is a laser sensor; when the laser emitter emits laser light onto the user's limb, the image sensor receives the reflected laser light and generates one or more reflected-light response signals;
    and the processor is further configured to:
    when the image sensor receives reflected light from a single direction, receive the one reflected-light response signal and, according to that response signal, calculate by triangulation the distance between the image sensor and a section of the user's limb;
    generate the corresponding operation instruction by combining the distance information, the input signal, and the correspondence between recognition results and operation instructions; or,
    when the laser emitter emits linear beams in different directions and the image sensor receives reflected light from different directions, receive the plurality of reflected-light response signals and, according to those response signals, calculate by triangulation the distances between the image sensor and different sections of the user's limb;
    build a three-dimensional model of the user's limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different sections of the user's limb;
    perform gesture reconstruction on the three-dimensional model to identify the current gesture information and the distance from the image sensor; and
    generate the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the correspondence between recognition results and operation instructions.
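Claim 5 computes the sensor-to-limb distance by triangulation from the reflected laser. The claim does not fix the geometry; the sketch below assumes one common arrangement in which the emitter and the sensor sit at the two ends of a known baseline and each measures the angle toward the laser spot on the limb (law of sines):

```python
import math

def triangulate_distance(baseline_m: float,
                         emitter_angle_rad: float,
                         sensor_angle_rad: float) -> float:
    """Perpendicular distance from the emitter-sensor baseline to the laser
    spot on the limb. Angles are measured between the baseline and the rays
    toward the spot at the emitter and sensor ends respectively."""
    apex = math.pi - emitter_angle_rad - sensor_angle_rad  # angle at the spot
    spot_to_sensor = baseline_m * math.sin(emitter_angle_rad) / math.sin(apex)
    return spot_to_sensor * math.sin(sensor_angle_rad)

# Example: 5 cm baseline, 60-degree angles at both ends -> about 4.3 cm.
print(triangulate_distance(0.05, math.radians(60), math.radians(60)))
```

Repeating this for beams emitted in different directions yields distances to different sections of the limb, which the claim then uses for three-dimensional modeling and gesture reconstruction.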
  6. A method of signal input, comprising:
    capturing, by a signal input device, a movement trajectory of a user's limb and collecting an image of the limb, the user's limb comprising the user's extremities;
    performing feature recognition on the limb image to obtain a recognition result;
    receiving an input signal generated after the user presses one or more buttons; and
    combining the feature recognition result with the input signal to generate and output an operation instruction corresponding to the combination of the result and the input signal.
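Claim 6 describes an image-then-key pipeline: capture the limb image, recognize its features, read the key-press signal, and only then resolve the operation instruction. A minimal sketch of that flow, where `device` and its methods (`capture_limb_image`, `recognize_features`, `read_key_signal`, `lookup_instruction`, `output`) are hypothetical names and not an API from the disclosure:

```python
def run_input_cycle(device):
    """One capture-recognize-press-resolve cycle of the claimed method."""
    image = device.capture_limb_image()               # capture the limb image
    recognition = device.recognize_features(image)    # feature recognition result
    key_signal = device.read_key_signal()             # input signal from the buttons
    if key_signal is None:
        return None                                   # no key pressed this cycle
    # combine the recognition result and the input signal into an instruction
    instruction = device.lookup_instruction(recognition, key_signal)
    device.output(instruction)
    return instruction
```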
  7. The method according to claim 6, wherein the limb is the user's hand, and performing feature recognition on the limb image comprises:
    locating the hand position and identifying the positional relationship between the hand position and the one or more buttons;
    when the hand position is above the one or more buttons, recognizing the hand shape to determine whether the hand is a left hand or a right hand, and/or,
    identifying the specific finger positioned above the one or more buttons, and/or,
    identifying the movement trajectory of the hand, and/or,
    recognizing a gesture of the hand;
    and generating the operation instruction corresponding to the combination of the result and the input signal comprises:
    establishing a correspondence between operation instructions and the combination of the identified positional relationship between the hand position and the one or more buttons, the hand shape, and the input signal; and
    generating the corresponding operation instruction according to the correspondence.
  8. The method according to claim 6, further comprising:
    acquiring the force with which the user presses the button, and generating the corresponding operation instruction according to the pressing force, the input signal, and the correspondence between recognition results and operation instructions.
  9. The method according to claim 6, further comprising:
    performing face recognition or fingerprint recognition to acquire user identity information, and generating the corresponding operation instruction by combining the user identity information, the input signal, and the correspondence between recognition results and operation instructions.
  10. The method according to any one of claims 6 to 9, further comprising:
    emitting laser light onto the user's limb; when reflected light from a single direction is received, receiving the one reflected-light response signal and, according to that response signal, calculating by triangulation the distance between the signal input device and a section of the user's limb;
    generating the corresponding operation instruction by combining the distance information, the input signal, and the correspondence between recognition results and operation instructions; or,
    when the emitted laser light consists of linear beams in different directions and reflected light from different directions is received, receiving the plurality of reflected-light response signals and, according to those response signals, calculating by triangulation the distances between the signal input device and different sections of the user's limb;
    building a three-dimensional model of the user's limb in a three-dimensional space with the image sensor as the origin, using the calculated distances between the image sensor and the different sections of the user's limb;
    performing gesture reconstruction on the three-dimensional model to identify the current gesture information and the distance from the image sensor; and
    generating the corresponding operation instruction by combining the gesture information, the distance information, the input signal, and the correspondence between recognition results and operation instructions.
PCT/CN2018/078642 2018-03-09 2018-03-09 Method and device for inputting signal WO2019169644A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880091030.4A CN112567319A (en) 2018-03-09 2018-03-09 Signal input method and device
PCT/CN2018/078642 WO2019169644A1 (en) 2018-03-09 2018-03-09 Method and device for inputting signal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/078642 WO2019169644A1 (en) 2018-03-09 2018-03-09 Method and device for inputting signal

Publications (1)

Publication Number Publication Date
WO2019169644A1 (en) 2019-09-12

Family

ID=67846832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078642 WO2019169644A1 (en) 2018-03-09 2018-03-09 Method and device for inputting signal

Country Status (2)

Country Link
CN (1) CN112567319A (en)
WO (1) WO2019169644A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902354A (en) * 2012-08-20 2013-01-30 华为终端有限公司 Terminal operation method and terminal
CN103052928A (en) * 2010-08-04 2013-04-17 惠普发展公司,有限责任合伙企业 System and method for enabling multi-display input
CN103176594A (en) * 2011-12-23 2013-06-26 联想(北京)有限公司 Method and system for text operation
US20130215038A1 (en) * 2012-02-17 2013-08-22 Rukman Senanayake Adaptable actuated input device with integrated proximity detection
CN104899494A (en) * 2015-05-29 2015-09-09 努比亚技术有限公司 Multifunctional key based operation control method and mobile terminal
CN105353873A (en) * 2015-11-02 2016-02-24 深圳奥比中光科技有限公司 Gesture manipulation method and system based on three-dimensional display
CN106227336A (en) * 2016-07-15 2016-12-14 深圳奥比中光科技有限公司 Body-sensing map method for building up and set up device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614422B1 (en) * 1999-11-04 2003-09-02 Canesta, Inc. Method and apparatus for entering data using a virtual input device
US20130257734A1 (en) * 2012-03-30 2013-10-03 Stefan J. Marti Use of a sensor to enable touch and type modes for hands of a user via a keyboard

Also Published As

Publication number Publication date
CN112567319A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
US9600078B2 (en) Method and system enabling natural user interface gestures with an electronic system
JP6348211B2 (en) Remote control of computer equipment
US11009961B2 (en) Gesture recognition devices and methods
US9760214B2 (en) Method and apparatus for data entry input
US10209881B2 (en) Extending the free fingers typing technology and introducing the finger taps language technology
US9274551B2 (en) Method and apparatus for data entry input
US8593402B2 (en) Spatial-input-based cursor projection systems and methods
US8180114B2 (en) Gesture recognition interface system with vertical display
TW201814438A (en) Virtual reality scene-based input method and device
KR20100106203A (en) Multi-telepointer, virtual object display device, and virtual object control method
CN101901106A (en) The method and the device that are used for the data input
US8948493B2 (en) Method and electronic device for object recognition, and method for acquiring depth information of an object
KR20120068253A (en) Method and apparatus for providing response of user interface
TW201135517A (en) Cursor control device, display device and portable electronic device
CN108027648A (en) The gesture input method and wearable device of a kind of wearable device
US20130229348A1 (en) Driving method of virtual mouse
TW201439813A (en) Display device, system and method for controlling the display device
WO2019169644A1 (en) Method and device for inputting signal
KR101860138B1 (en) Apparatus for sharing data and providing reward in accordance with shared data
KR101506197B1 (en) A gesture recognition input method using two hands
Annabel et al. Design and Development of Multimodal Virtual Mouse
CN108021238A (en) New concept touch system keyboard
Bhowmik 39.2: invited paper: natural and intuitive user interfaces: technologies and applications
WO2020078223A1 (en) Input device
KR101671831B1 (en) Apparatus for sharing data and providing reward in accordance with shared data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18908395

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/12/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18908395

Country of ref document: EP

Kind code of ref document: A1