CN105242888B - A kind of system control method and electronic equipment - Google Patents

A kind of system control method and electronic equipment Download PDF

Info

Publication number
CN105242888B
CN105242888B CN201410328111.5A CN201410328111A CN105242888B CN 105242888 B CN105242888 B CN 105242888B CN 201410328111 A CN201410328111 A CN 201410328111A CN 105242888 B CN105242888 B CN 105242888B
Authority
CN
China
Prior art keywords
eye
point
dynamic image
unit
polygon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410328111.5A
Other languages
Chinese (zh)
Other versions
CN105242888A (en
Inventor
滕鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410328111.5A priority Critical patent/CN105242888B/en
Publication of CN105242888A publication Critical patent/CN105242888A/en
Application granted granted Critical
Publication of CN105242888B publication Critical patent/CN105242888B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a kind of system control method and electronic equipments, including:Obtain the facial dynamic image of human body face;Obtain at least three face feature points;Obtain at least one polygon;Determine a special characteristic point;Determine a position reference point;Determine relative position and deformation extent value;Based on relative position and deformation extent value, it generates and executes the first control instruction for controlling position sign, technical solution in the application only needs one dynamic image harvester of setting, solves position instruction mark location low precision in the prior art, the low problem of efficiency, and it does not need user and significantly rotates head in use, or significantly mobile electronic device relatively, therefore, technical solution in the embodiment of the present application has position instruction mark location precision high, high sensitivity, operating efficiency is high, and user experiences good technique effect.

Description

System control method and electronic equipment
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to a system control method and an electronic device.
Background
Currently, as the display screen of the electronic device terminal is enlarged, the size of the display screen is usually more than 4 inches, and some display screens even reach 6 inches and 7 inches. In the practical application process, for example, on a bus, when one hand of a user holds the electronic device terminal and the other hand is holding the handle bar on the bus, it is difficult to operate all display objects on the display screen of the electronic device terminal with one hand due to the limitation of the length of the fingers of the hand holding the electronic device terminal.
For this reason, in the prior art, the control of the electronic device is realized by tracking the movement of the nose tip of the user in a manner of acquiring the nose tip position on the face of the person in real time. Specifically, the method uses two cameras placed on the left and the right to simultaneously obtain facial images of a user, determines the position of the nose tip of the user through a facial disparity map, and corresponds the cursor indication position of the electronic equipment to the movement position of the nose tip of the user, so that the electronic equipment is controlled.
However, in the process of the inventor of the present application in the embodiment of the present application, it is found that the above prior art has at least the following technical problems:
the prior art needs to deploy at least two cameras in the implementation process, which is usually not easy to implement on the existing portable equipment, and the implementation principle of the scheme is that firstly, the relative change position of the nose tip in the camera view is determined, so that the relative change position is related to the cursor indication display position of the electronic equipment, when the nose tip makes a motion parallel to the connecting line of the position points of the two cameras, the relative change position is difficult to recognize, so that the positioning of the cursor indication position is influenced, and due to the position limitation of the two cameras, if the display screen of the electronic equipment is large, a user needs to greatly move the head (or relatively greatly move the electronic equipment).
Therefore, the electronic equipment in the prior art has the defects of poor cursor indication positioning precision and low efficiency;
and because the user is required to move the head greatly or move the electronic device relatively greatly in the using process, the user experience is poor.
Disclosure of Invention
The application provides a system control method and electronic equipment, which are used for solving the technical problems of poor indication and positioning precision, low efficiency and poor user interaction experience of the electronic equipment in the prior art.
One aspect of the present application provides a system control method applied to an electronic device, where the electronic device includes a dynamic image capturing unit and a display unit, and the method includes:
obtaining a face dynamic image of the human face obtained by the dynamic image acquisition unit;
identifying the dynamic face image to obtain at least three facial feature points which are not on the same straight line in the human face;
obtaining at least one polygon from the at least three facial feature points;
determining a specific characteristic point on the polygon;
determining a position reference point on a display area of the display unit;
when the specific characteristic point moves, determining the relative position of the specific characteristic point relative to the position reference point and the deformation degree value of the polygon;
and generating and executing a first control instruction for controlling a position indication mark on the display unit based on the relative position and the deformation degree value.
Preferably, said polygon is in particular a triangle or a trapezoid.
Preferably, the facial feature points are specifically:
the left eye inner canthus position point, the right eye inner canthus position point and the nose tip position point are positioned on the triangle; or the position point of the outer canthus of the left eye, the position point of the outer canthus of the right eye and the position point of the nose tip are positioned on the triangle.
Preferably, the nose tip position point is determined as the specific feature point.
Preferably, a shift direction of the specific feature point with respect to the position reference point is obtained, and based on the shift direction, a movement control instruction for controlling a movement direction of the position indication mark is generated and executed.
Preferably, the deformation degree value includes a first deformation degree value and a second deformation degree value, the first deformation degree value represents the deformation degree of the polygon in a first direction, the second deformation degree value represents the deformation degree of the polygon in a second direction, and the first direction is different from the second direction.
Preferably, the first direction is perpendicular to the second direction, and the first direction is a horizontal direction.
Preferably, a first moving distance for controlling the position indicator to move in the first direction is obtained according to the first deformation degree value; obtaining a second movement distance for controlling the position indication mark to move in the second direction according to the second deformation degree value; according to the first moving distance and the second moving distance, obtaining a direct moving distance for controlling the position indication mark on the display area; and generating and executing a direct movement control instruction for controlling the position indication mark to move the direct movement distance on the display area according to the direct movement distance.
Preferably, the method further comprises: obtaining a left eye dynamic image of the human body obtained by the dynamic image acquisition unit; identifying the dynamic image of the left eye to obtain a first time of the left eye from an eye-opening state to an eye-closing state and a second time of the left eye from the eye-closing state to the eye-opening state after the first time of the left eye; judging whether the left eye interval duration between the left eye first time and the left eye second time is greater than or equal to a preset first duration or not, and obtaining a first judgment result; and when the first judgment result is yes, generating and executing a left-click control instruction for controlling the electronic equipment to perform left-click.
Preferably, the method further comprises: obtaining a right eye dynamic image of the human body obtained by the dynamic image acquisition unit; identifying the dynamic image of the right eye to obtain a first right eye time when the right eye is in an open eye state to a closed eye state, and a second right eye time when the right eye is in the closed eye state to the open eye state after the first right eye time; judging whether the right-eye interval time between the first right-eye time and the second right-eye time is greater than or equal to a preset second time to obtain a second judgment result; and when the second judgment result is yes, generating and executing a right click control instruction for controlling the electronic equipment to click right.
Based on the system control method, an embodiment of the present application further provides an electronic device, including a dynamic image acquisition unit and a display unit, further including:
and the dynamic image acquisition unit is used for acquiring a face dynamic image of the human face acquired by the dynamic image acquisition unit.
And the facial feature recognition unit is used for recognizing the dynamic facial image to obtain at least three facial feature points which are not on the same straight line in the human face.
And the image acquisition unit is used for acquiring at least one polygon according to the at least three facial feature points.
The specific feature point obtaining unit is used for determining a specific feature point on the polygon.
A position reference point acquiring unit for determining a position reference point on the display area of the display unit.
A relative position determination unit configured to determine a relative position of the specific feature point with respect to the position reference point when the specific feature point moves.
And the deformation degree value judging unit is used for determining the deformation degree value of the polygon when the specific characteristic point moves.
And the control instruction generating unit is used for generating and executing a first control instruction for controlling the position indication mark on the display unit based on the relative position and the deformation degree value.
Preferably, the graph acquiring unit specifically includes: to obtain at least one triangle or trapezoid from the at least three facial feature points.
Preferably, the facial feature recognition unit is specifically: the face dynamic image is identified to obtain a left eye inner canthus position point, a right eye inner canthus position point and a nose tip position point which are positioned on the triangle; or obtaining a left eye external canthus position point, a right eye external canthus position point and a nose tip position point which are positioned on the triangle.
Preferably, the specific feature point obtaining unit is specifically: to determine the nose tip location point as the specific feature point.
Preferably, the relative position determination unit is specifically: to determine a relative offset direction of the particular feature point with respect to the location reference point.
Preferably, the deformation degree value determination unit specifically includes: a first deformation degree value determination unit for determining a first deformation degree value of the polygon in a first direction; a second deformation degree value determination unit for determining a second deformation degree value of the polygon in a second direction; and, the first direction is different from the second direction.
Preferably, the first direction is perpendicular to the second direction, and the first direction is a horizontal direction.
Preferably, the control instruction generating unit further includes: a first direction movement distance determination unit configured to obtain a first movement distance for controlling the position indicator to move in the first direction according to the first deformation degree value; a second direction movement distance determination unit configured to obtain a second movement distance for controlling the position indicator to move in the second direction according to the second deformation degree value; a direct movement distance determination unit, configured to obtain a direct movement distance for controlling the position indicator on the display area according to the first movement distance and the second movement distance; and the direct movement control unit is used for generating and executing a direct movement control instruction for controlling the position indication mark to move the direct movement distance based on the direct movement distance.
Preferably, the electronic device further includes: the left eye dynamic image acquisition unit is used for acquiring a left eye dynamic image of the human body acquired by the dynamic image acquisition unit; a left eye dynamic image recognition unit for recognizing the left eye dynamic image to obtain a first time of a left eye from an eye-open state to an eye-closed state and a second time of the left eye from the eye-closed state to the eye-open state after the first time of the left eye; the left-eye interval duration judging unit is used for judging whether the left-eye interval duration between the left-eye first moment and the left-eye second moment is greater than or equal to a preset first duration or not to obtain a first judgment result; and the left-click control instruction operating unit is used for generating and executing a left-click control instruction for controlling the electronic equipment to perform left-click when the first judgment result is yes.
Preferably, the electronic device further includes: the right eye dynamic image acquisition unit is used for acquiring a right eye dynamic image of the human body acquired by the dynamic image acquisition unit; a right eye dynamic image recognition unit for recognizing the right eye dynamic image to obtain a right eye first time when the right eye is in an open eye state to a closed eye state, and a right eye second time when the right eye is in the closed eye state to the open eye state after the right eye first time; the right-eye interval duration judging unit is used for judging whether the right-eye interval duration between the first right-eye time and the second right-eye time is greater than or equal to a preset second duration or not to obtain a second judgment result; and the right click control instruction operating unit is used for generating and executing a right click control instruction for controlling the electronic equipment to perform right click when the second judgment result is yes.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
the technical proposal in the embodiment of the application realizes the multi-directional positioning of the position indicator by adopting the mode of combining the offset position of the specific characteristic point and the deformation degree value of the polygon, solves the problem of poor positioning precision of the position indicator in the prior art, and since the moving distance of the position indicator is determined according to the deformation degree value of the polygon, in the implementation process, the control of the position indicator to move a larger distance through a smaller amplitude face rotation can be realized through corresponding conversion, therefore, in the practical application process, the user does not need to rotate the head greatly, or the position indication mark can be moved by a larger distance by relatively moving the electronic equipment greatly, and compared with the prior art, the technical scheme in the application has the technical effects of higher operation efficiency and higher sensitivity.
On the other hand, the technical scheme in the embodiment of the application can ensure that the position indication mark cannot move when the face moves in parallel relative to the display interface, and can effectively avoid the situation that the position indication mark moves disorderly due to inevitable face shaking of a user in the application process of moving environments such as riding, walking and the like, so that the technical scheme in the application has the technical effect of high applicability.
Further, the technical scheme in the application includes that the number of the preset facial feature points to be identified is three: the inner canthus position point of the left eye, the inner canthus position point of the right eye and the nose tip position point; or, the left eye outer canthus position point, the right eye outer canthus position point and the nose tip position point, because the edge shapes of the four characteristic points are sharp, and the color difference between the inner edge of the canthus and the outer edge of the canthus is very obvious, the prior art has very high recognition rate and high recognition accuracy rate for the four characteristic points, and therefore, the technical scheme in the application also has the technical effects of high recognition rate and high recognition accuracy rate.
Further, in the technical solution provided in the embodiment of the present application, the polygon is specifically configured as a triangle or a trapezoid, and since the number of points forming the triangle is minimum, the number of facial feature points to be identified in the implementation process is also minimum, and parameters for determining the deformation degree value are also obvious, so that the identification efficiency and the operation efficiency of the technical solution can be effectively improved.
On the other hand, in the technical solution, theoretically, the more the number of recognized facial feature points is, the more sides of the polygon formed by the recognized facial feature points are, the more accurate the judgment of the deformation degree value is, and when a trapezoid is adopted as another specific polygon structure, the less the number of facial feature points to be recognized is ensured, and the accuracy of judging the deformation degree value can also be improved, so the technical solution in the embodiment of the present application also has the technical effect of high accuracy of judging the deformation degree value.
Further, in the technical solution of the embodiment of the application, the first direction is perpendicular to the second direction, and the first direction is a horizontal direction, and by adopting the method, a coordinate type conversion method can be adopted in a conversion process of a moving position of the position indication mark, so that simplicity in the conversion process of the position is improved, and the position indication mark is closer to a visual feeling of a user in an actual operation process, thereby realizing a technical effect of higher conversion efficiency of the position indication mark.
Further, the technical scheme in the embodiment of the application has the technical effects, so that the interactive experience of a user when using the electronic equipment is effectively improved.
Drawings
Fig. 1 is a flowchart of a system control method according to an embodiment of the present invention;
fig. 2 is a structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The application provides a system control method and electronic equipment, which are used for solving the technical problems of poor positioning accuracy and low efficiency of position indication marks of the electronic equipment in the prior art.
In order to solve the technical problems, the general idea of the embodiment of the application is as follows:
in the embodiment of the application, a dynamic image of a face of a human body is obtained through a dynamic image acquisition unit, the dynamic image of the face is identified to obtain at least three facial feature points which are not on the same straight line in the human body, the at least three facial feature points are composed to obtain at least one polygon, a specific feature point is determined on the polygon, a position reference point is determined on a display area of a display unit, and when the specific feature point moves, a first control instruction for controlling a position indication mark on the display unit is generated and executed according to the relative position of the specific feature point relative to the position reference point and the deformation degree value of the polygon.
Therefore, in the embodiment of the application, the electronic device collects the dynamic image of the human face, so as to identify the facial feature points according to the dynamic image of the face, form at least one polygon according to the positions of the facial feature points, determine a specific feature point on the edge of the polygon, determine a position reference point on the display interface, control the moving direction and the moving distance of the position indicator of the electronic device according to the relative position of the specific feature point relative to the position reference point and the deformation degree value of the polygon in the process of rotating the face, thereby achieving the purpose of controlling the position of the position indicator of the electronic device by rotating the face, solving the problems of poor positioning accuracy and low efficiency of the position indicator in the prior art, and adopting the mode of determining the moving distance of the position indicator according to the deformation degree value of the polygon, in the implementation process, the purpose of controlling the position indication mark to move for a larger distance through the face rotation with a smaller amplitude can be achieved through corresponding conversion, so that the position indication mark can move for a larger distance without the need of rotating the head by a large amplitude of a user in the actual application process or moving the position indication mark by a large amplitude of mobile electronic equipment relatively.
The technical solutions of the present application are described in detail below with reference to the drawings and specific embodiments, and it should be understood that the specific features in the embodiments and examples of the present application are detailed descriptions of the technical solutions of the present application, and are not limitations of the technical solutions of the present application, and the technical features in the embodiments and examples of the present application may be combined with each other without conflict.
Example one
Referring to fig. 1, an embodiment of the present application provides a system control method applied in an electronic device, where the electronic device includes a dynamic image capturing unit and a display unit, and the method includes:
step 101: a face dynamic image of the face of the human body obtained by the dynamic image capturing unit is obtained.
Specifically, the dynamic image acquisition unit can be a camera, and dynamic image acquisition is carried out to human facial feature through the angle that is more comprehensive and obvious to user's facial feature, also can adopt a plurality of cameras, carries out diversified multi-angle's shooting, and then obtains diversified dynamic image and be used for the discernment to human facial feature point, makes the recognition result more accurate to also can make the position change analysis of facial feature point more accurate in operation process. Certainly, in the specific implementation process, dynamic imaging devices such as a continuous imaging sensor, a camcorder, an infrared recorder or a camera can also be used as a dynamic image acquisition unit to acquire a dynamic image of the face of the user, and then the dynamic image is identified and analyzed to obtain the identification result of the facial feature points and the position changes of the facial feature points of the human body. In summary, in the embodiment of the present application, a plurality of dynamic image capturing devices may be adopted to capture images in a plurality of ways, and here, the present application does not exemplify one by one.
Step 102: and identifying the dynamic face image to obtain at least three facial feature points which are not on the same straight line in the human face.
Specifically, after the dynamic image of the face of the human body is obtained, in a specific implementation process of step 102, the facial feature points are preset specific points, for example, the system is preset to identify three feature points of the left inner corner of the eye, the right inner corner of the eye, and the tip of the nose in the facial feature of the human body, and then after the dynamic image of the face is obtained, the three feature points of the left inner corner of the eye, the right inner corner of the eye, and the tip of the nose in the dynamic image of the face are analyzed and identified.
In the process of identifying the facial feature points, firstly, the dynamic image can be subjected to image preprocessing including smoothing, transformation, enhancement, restoration, filtering and the like of the image, and then the extraction and selection of the image features are carried out, wherein the purpose of identifying the image can be achieved through computer programming by adopting two modes of template matching or pattern identification.
For example, templates of individual facial feature points of a human body, such as: the method comprises the steps of extracting and selecting key features of each part of an obtained human face dynamic image, comparing the key features with pre-stored face feature points to obtain matching degree values of the key features of each part, and obtaining a recognition result of the part as a specific face feature point when the matching degree value of the part in the face dynamic image and the specific face feature point is larger than a set threshold value.
For another example, the shape, curve, and color contrast difference of the human face position point in the dynamic face image may be evaluated by a computer program using a specific algorithm, and if the evaluation score is higher than a preset threshold of a certain face feature point, the recognition result that the certain face position point is the certain face feature point is obtained.
More specifically, in the implementation process of step 102, the preset facial feature points are three, including a left eye inner canthus position point, a right eye inner canthus position point, and a nose tip position point; or the left eye external canthus position point, the right eye external canthus position point and the nose tip position point.
The four facial feature points, namely the left eye inner canthus, the left eye outer canthus, the right eye inner canthus and the right eye outer canthus, are sharp in edge shape, and the color difference between the inner edge of the canthus and the outer edge of the canthus is obvious.
Step 103: at least one polygon is obtained from the at least three facial feature points.
Specifically, in the implementation process of step 103, the polygon may be constructed in such a way that at least three facial feature points are connected by lines between adjacent points, so that the constructed polygon passes through each facial feature point.
For example, if the facial feature points are four position points of a left eyebrow tip, a right eyebrow tip, a left lip corner and a right lip corner, a line can be connected between the left eyebrow tip and the right eyebrow tip, the right eyebrow tip and the right lip corner, the right lip corner and the left lip corner, and the left lip corner and the left eyebrow tip, or a line can be connected between the left eyebrow tip and the right lip corner, and the right eyebrow tip and the left lip corner, as long as a polygon passing through each facial feature point is finally formed.
For another example, if the facial feature points are eight position points in total, namely, the left eyebrow tip, the right eyebrow tip, the left lip corner, the right lip corner, and the left inner canthus corner, the right inner canthus corner, the left alar point, and the right alar point, two quadrangles may be constructed by connecting lines between the left eyebrow tip and the right eyebrow tip, the right eyebrow tip and the right lip corner, the right lip corner and the left lip corner, the left lip corner and the left eyebrow tip, and connecting lines between the left inner canthus corner and the left alar point, the left alar point and the right alar point, the right inner canthus corner, and the right inner canthus corner and the left inner canthus corner, and the two quadrangles pass through the eight position points.
Meanwhile, the polygon may be structured in a manner of performing a polygon structure within a certain error range according to the positions of the at least three facial feature points.
For example, a left inner canthus, a left outer canthus, a right inner canthus, a right outer canthus, and a nose tip are taken, and a total of five facial feature points are obtained, at this time, a straight line segment may be constructed according to the positions of the left inner canthus, the right outer canthus, the right inner canthus, and the right outer canthus, and the straight line segment passes through the positions of the left inner canthus, the right outer canthus, the right inner canthus, and the right outer canthus within a certain error range, and then both ends of the straight line segment are connected with the nose tip position point, so that a triangle passing through the five facial feature points within a certain error range is formed.
It can be seen that, in the implementation of step 103, the number of polygons formed by the positions of the facial feature points is at least one, and may be multiple, which allows the polygon formation within a certain error range, as long as the formed polygons pass through all the facial feature points within the allowable error range.
Specifically, in the implementation process of step 103, the polygon is specifically configured as a triangle or a trapezoid, and since the number of points forming the triangle is the minimum, the number of facial feature points to be recognized in the implementation process is the minimum, and the parameters for determining the deformation degree value are also obvious, so that the recognition efficiency and the operation efficiency of the present technical solution can be effectively improved.
On the other hand, in the technical solution, theoretically, the more the number of recognized facial feature points is, the more sides of the polygon formed by the recognized facial feature points are, the more accurate the judgment of the deformation degree value is, and when a trapezoid is adopted as another specific polygon structure, the less the number of facial feature points to be recognized is ensured, and the accuracy of judging the deformation degree value can also be improved, so the technical solution in the embodiment of the present application also has the technical effect of high accuracy of judging the deformation degree value.
More specifically, in the implementation process of step 103, a polygonal shape may be displayed in the display area of the display unit to conveniently and intuitively view the deformation degree of the polygon, and certainly, a multi-deformation shape may not be displayed, so as to make the display screen more concise and improve the user experience.
Step 104: a specific feature point is determined on the polygon.
Any point on the polygon may be selected as a specific feature point during the implementation of step 104.
It should be noted that any point on the polygon includes points on the sides of the polygon, as well as all points in the area enclosed by the sides of the polygon.
Specifically, the nose tip position point can be determined as the specific feature point, because the nose tip is the most prominent part of the human face, and when the head rotates, the nose tip can generate the largest relative displacement in the dynamic image, which plays a role in improving the operation efficiency.
Step 105: a position reference point is determined on a display area of the display unit.
Specifically, an arbitrary point on the display area of the display unit may be used as the position reference point.
Further specifically, the center point of the display area of the display unit may be determined as the position reference point, whereby it is possible to ensure that the determination of the relative position of the specific feature point in each orientation and the maximum distance is performed, and it is also possible to ensure that the determination of the degree of deformation of the polygon within the maximum acceptable range is achieved.
Step 106: when the specific characteristic point moves, determining the relative position of the specific characteristic point relative to the position reference point and the deformation degree value of the polygon.
In the process of controlling the position of the position indication mark of the electronic device by rotating the face, the specific feature point and the polygon are subjected to position deviation and shape change along with the shaking of the human face.
In a specific implementation of step 106, when the specific feature point moves, a moving direction, a moving distance, or both of the position indication identifier of the electronic device and the moving distance may be determined according to a relative position of the specific feature point with respect to the position reference point.
For example, when the specific feature point moves, a direction in which the specific feature point deviates from the position reference point may be used as a moving direction of the position indicator, a distance in which the specific feature point deviates from the position reference point may be used as one of the parameters for determining the moving distance of the position indicator, and when the distance in which the specific feature point deviates from the position reference point is larger, the moving distance of the position indicator may be determined to be larger.
It should be particularly noted that, in the implementation of step 106, two ways of determining the relative position of the specific feature point may be adopted:
the first is to take a form in which the location reference point is fixed once determined.
For example, the central point of the display area is determined as a position reference point, the position of the position reference point is the central point no matter how the face is shaken, and the movement mode corresponding to the position indication mark is determined by performing relative position conversion on the distance and angle change of a specific feature point relative to the position reference point (central point) through computer programming.
The second is to adopt a manner in which the position reference point changes as the face shakes.
Specifically, a certain point in the display area is first determined as a position reference point, the conversion of the movement position of the position indicator is performed using the certain point as the position reference point when the first face shake is performed, the position reference point is changed to the position of the specific feature point at the end of the first face shake after the first face shake is performed, and the conversion of the movement position of the position indicator is performed using the position of the specific feature point at the end of the first face shake as the position reference point when the second face shake is performed.
For example: firstly, the central point of the display area is determined as a position reference point, the display area is used as a first face shake when the face shake is started, the first face shake at the moment uses the central point as the position reference point to perform the first conversion of the moving position of the position indication mark, when the face shake with different directions occurs, the display area is judged to enter into a second face shake, the position of the specific characteristic point when the face shake direction changes is used as the position reference point, and therefore the conversion of the moving position of the position indication mark is continued.
It should be noted that the criterion for determining whether different face shakes occur is as follows: when the direction of the face shake changes, it means that a different face shake is performed.
Meanwhile, after the relative position of the specific feature point with respect to the position reference point is determined, it is also necessary to determine a deformation degree value of the polygon, so as to further determine the moving position of the position indicator, for example, the moving position of the position indicator may be determined based on an evaluation result of the deformation degree value obtained by evaluating a deformation degree value of an area change, a shape change, a length change, a width change, a length change of each side, and the like of the polygon and a combination of the above changes.
It should be noted that, similar to the way of determining the relative position of the specific feature point, the way of determining the deformation degree value may also be two ways:
the first is a fixed manner of referencing the polygons for the degree of deformation once determined.
For example, after the execution of step 103 is completed, an original polygon constructed by the identified facial feature points is obtained, the polygon used as a reference for the degree of deformation is the original polygon regardless of the way the face is shaken, and then the degree of change of the polygon relative to the original polygon during the face shaking process is determined by evaluating the combination of the change, such as the change in area, the change in shape, the change in length, the change in width, the change in length of each side, and the like of the polygon relative to the original polygon during the face shaking process by computer programming, so as to determine the moving position of the position indication mark.
Of course, the original polygon may be not only a certain polygon constructed by the facial feature points, but also a polygon preset in the system.
The second is a way in which the polygon of the reference deformation degree changes as the face shakes.
Specifically, an original polygon is first identified, a relative change degree value is evaluated based on parameters such as the area, shape, side length, and width of the original polygon during a first face shake, the original polygon is then changed to a polygon at the end of the first face shake after the first face shake, that is, the relative change degree value is evaluated based on parameters such as the area, shape, side length, and width of the polygon at the end of the first face shake during a second face shake, and the moving position of the position indicator is converted based on the change degree value.
Here, the criterion for determining whether different face shakes occur is still: when the direction of the face shake changes, it means that a different face shake is performed.
For example: firstly, an initially determined polygon is determined as an original polygon, and the initially determined polygon is used as a first face shake when face shake is started, the first face shake takes parameters such as the area, the shape, the side length, the length and the width of the original polygon as reference parameters for evaluating a deformation degree value, so that the moving position of a position indication mark is converted for the first time, when face shakes with different directions occur, the initially determined polygon is judged to enter into a second face shake, and the parameters such as the area, the shape, the side length, the length and the width of the polygon when the face shake direction changes are used as reference parameters for evaluating the deformation degree value, so that the moving position of the position indication mark is converted for the second time.
The mode that the moving distance of the position indication mark is determined according to the deformation degree value of the polygon is adopted, and the purpose that the position indication mark is controlled to move a larger distance through small-amplitude face rotation can be achieved through corresponding conversion in the implementation process, so that the situation that the position indication mark needs to be controlled to move a larger distance through large-amplitude head rotation in the use process or through relatively large-amplitude mobile electronic equipment when the position indication mark is applied to a larger display screen in the prior art is avoided, and the technical scheme in the embodiment of the application has the technical effect of high operation efficiency.
In addition, in the prior art, only the movement determination of the position indicator is performed according to the movement position of a certain point in the facial features, because the position and the angle of the dynamic image acquisition device are limited, particularly when the movement is performed relative to the direction of the dynamic image acquisition device, it is inevitable that a large error is generated in the determination of the relative displacement in the direction.
On the other hand, the moving distance of the position indication mark is judged through the deformation degree value, so that the position indication mark cannot move when the face moves in parallel relative to the display interface, and the situation that the position indication mark moves disorderly due to inevitable face shaking of a user in the application process of moving environments such as riding, walking and the like can be effectively avoided.
Specifically, the deformation degree values include a first deformation degree value and a second deformation degree value, the first deformation degree value represents a deformation degree of the polygon in a first direction, the second deformation degree value represents a deformation degree of the polygon in a second direction, the first direction is different from the second direction, that is, the deformation degree values are determined by two deformation degree values of the polygon in different directions, and in a specific implementation process, the first direction and the second direction are both directions parallel to a display plane of the display unit.
Further specifically, a first movement distance for controlling the position indicator to move in the first direction may be obtained according to the first deformation degree value; obtaining a second movement distance for controlling the position indication mark to move in the second direction according to the second deformation degree value; then, according to the first moving distance and the second moving distance, obtaining a direct moving distance for controlling the position indication mark on the display area; and generating and executing a direct movement control instruction for controlling the position indication mark to move the direct movement distance on the display area according to the direct movement distance.
That is, the distance that the position indicator moves in the first direction is determined by the deformation degree value of the polygon in the first direction, the distance that the position indicator moves in the second direction is determined by the deformation degree value of the polygon in the second direction, a direct movement distance is calculated by comprehensive conversion according to the distance that the position indicator moves in the first direction and the distance that the position indicator moves in the second direction, and finally, a control command for controlling the position indicator to move the direct movement distance is generated and executed.
For example, when the polygon is a triangle, the length of the triangle in the horizontal direction with respect to the display unit is 6 cm, and the width in the vertical direction with respect to the display unit is 4 cm, when the length of the triangle in the horizontal direction is deformed from 6 cm to 3 cm, the deformation degree value in the horizontal direction is ((3/6) × Q), and when the width of the triangle in the vertical direction is also deformed from 4 back to 3 cm, the deformation degree value in the vertical direction with respect to the horizontal direction is ((3/4) × P), wherein Q and P are deformation degree value conversion formulas, Q and P may be changed according to different algorithms for the deformation degree values, or the moving distance weight of the position indication mark may be different according to the deformation degree in different directions, q and P are distinguished in the calculation process, of course, the calculation process and the mode of Q and P are the same under the general condition, and in the example, it is obvious that the deformation degree value of the triangle in the horizontal direction is larger than that in the vertical direction.
After the deformation degree values of the triangle in the horizontal direction and the direction perpendicular to the horizontal direction are obtained, then the deformation degree values in different directions can be multiplied by a distance calculation parameter to obtain a moving distance a (moving distance in a first direction) in the horizontal direction and a moving distance b (moving distance in a second direction) perpendicular to the horizontal direction, and finally the values of a and b are substituted into a direct distance algorithm formula for conversion to obtain a direct moving distance result c, here, as an example, the direct movement distance result may be obtained by using a pythagorean theorem formula, where the movement distance in the horizontal direction is a long side of a right triangle, and the movement distance perpendicular to the horizontal direction is a wide side of the right triangle, and the following conversion is performed:
c2=a2+b2
and finally obtaining the value of c by the direct distance algorithm formula to obtain the direct movement distance of the position indication mark.
Still further specifically, the first direction is perpendicular to the second direction, and the first direction is a horizontal direction, and with the adoption of the method, a coordinate type conversion method can be adopted in the conversion process of the moving position of the position indication mark, so that the simplicity in the conversion process of the position is improved, and the position indication mark is closer to the visual feeling of a user in the actual operation process, and therefore the technical effects of higher conversion efficiency of the position indication mark and better feeling of the user can be realized.
Step 107: and generating and executing a first control instruction for controlling a position indication mark on the display unit based on the relative position and the deformation degree value.
In the specific implementation of step 107, the marker moving direction of the position indicator can be determined according to the deviation direction of the specific feature point relative to the position reference point, or the marker moving distance of the position indicator can be determined based on the deviation distance of the specific feature point relative to the position reference point, or both the marker moving direction and the marker moving distance of the position indicator can be determined based on the deviation direction and the deviation distance of the specific feature point relative to the position reference point, and the corresponding instruction for controlling the movement of the position indicator can be generated and executed.
For example, the marker moving direction of the position indicator may be determined according to a deviation moving direction of the specific feature point relative to the position reference point, the marker moving direction may be the same as the deviation moving direction or opposite to the deviation moving direction, and of course, the marker moving direction may be a preset angle with the deviation moving direction, so that the instruction for controlling the moving direction of the position indicator may be generated and executed based on the deviation moving direction.
For another example, the moving distance of the position indicator may be determined according to a deviation moving distance of the specific feature point relative to the position reference point, it is noted that the deviation moving distance is not numerically identical to the moving distance of the position indicator, and is only one of parameters determining the moving distance of the position indicator, and needs to be converted together with a deformation degree value of a polygon constructed by the facial feature points so as to derive the specific moving distance of the position indicator, and thus, an instruction for controlling the moving distance of the position indicator may be generated and executed based on the deviation moving distance, and in a general case, the moving distance of the position indicator is increased correspondingly when the deviation moving distance is larger.
For another example, the marker moving direction and the marker moving distance of the position indicator may be determined simultaneously from the deviating moving direction and the deviating moving distance of the specific feature point with respect to the position reference point, that is, the marker moving direction may be determined simultaneously from the deviating moving direction, and the marker moving distance may be determined as one of the parameters deciding the marker moving distance, whereby the instruction to control the moving direction and the moving distance of the position indicator may be generated and executed based on the deviating moving direction and the deviating moving distance.
Specifically, while the moving position of the position indication identifier is controlled through face shaking, the technical solution in the embodiment of the application may also generate and implement an instruction of left-click operation or right-click operation through the electronic device controlled by facial features.
Further specifically, in the implementation process of the technical solution in the embodiment of the present application, the instruction for implementing the left-click operation by the electronic device controlled by the facial features may be implemented by the following method:
first, a left-eye dynamic image of a human body obtained by the dynamic image capturing unit is obtained.
Then, the left-eye dynamic image is identified, and a first time of the left eye from the eye-open state to the eye-closed state and a second time of the left eye from the eye-closed state to the eye-open state after the first time of the left eye are obtained.
Then, judging whether the left eye interval duration between the left eye first time and the left eye second time is greater than or equal to a preset first duration or not, and obtaining a first judgment result;
and finally, when the first judgment result is yes, generating and executing a left-click control instruction for controlling the electronic equipment to perform left-click.
For example, a left-eye dynamic image is obtained through a camera, and then the left-eye dynamic image is recognized, after a first time that the left eye is opened at the left eye is recognized to be in a closed-eye state, and a second time that the left eye is changed from the closed-eye state to the opened-eye state after the first time that the left eye is opened is recognized, a left-eye interval time length from the first time that the left eye is opened to the second time that the left eye is opened is determined, the left-eye interval time length is assumed to be a, a is compared with a preset left-eye time length a, when the preset time length a is assumed to be 1 second, a left-click control instruction for controlling the electronic device to click left is generated and executed when the preset time length a is greater than or equal to 1 second, and the left-click control instruction is consistent with the effect of a control instruction for pressing a left key by using a mouse.
Similarly, in the implementation process of the technical solution in the embodiment of the present application, the instruction for implementing the right click operation by the electronic device controlled by the facial features may also be implemented by the same method:
firstly, obtaining a right eye dynamic image of a human body obtained by the dynamic image acquisition unit;
then, the dynamic image of the right eye is identified, and a first right eye time when the right eye is in an open eye state to a closed eye state and a second right eye time when the right eye is in the closed eye state to the open eye state after the first right eye time are obtained;
then, judging whether the right-eye interval time between the first right-eye time and the second right-eye time is greater than or equal to a preset second time to obtain a second judgment result;
and finally, when the second judgment result is yes, generating and executing a right click control instruction for controlling the electronic equipment to click right.
For example, a right eye moving image is first obtained by a camera, and then the right eye moving image is recognized, after a right eye first time point at which the right eye changes from an open eye state to a closed eye state is recognized, and a right eye second time point at which the right eye changes from the closed eye state to the open eye state is recognized after the right eye first time point, a right eye interval time period from the right eye first time point to the right eye second time point is determined, the right eye interval time period is assumed to be B, B is compared with a right eye preset time period B, and if the preset time period B is assumed to be 1.5 seconds, when B is greater than or equal to 1.5 seconds, a right click control command for controlling the electronic device to perform right click is generated and executed, and the right click control command is consistent with the effect of a control command for pressing a right key with a mouse.
It should be noted that the left-eye preset duration and the right-eye preset duration are usually set to the same value on a given electronic device.
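As a minimal Python sketch of the blink-interval logic just described: the per-frame eye-state detector (`detect_eye_open`), the callback wiring, and the 1-second and 1.5-second thresholds taken from the examples above are illustrative assumptions rather than the patent's prescribed implementation.

```python
# Minimal sketch of the blink-interval click logic described above.
import time

class BlinkClicker:
    """Turns a deliberate blink (eye closed at least `preset_s` seconds) into a click."""

    def __init__(self, preset_s, on_click):
        self.preset_s = preset_s    # preset first/second duration
        self.on_click = on_click   # issues the left/right click control instruction
        self.closed_since = None    # "first time": open -> closed transition

    def update(self, eye_open, now=None):
        """Feed the per-frame eye state from the recognized dynamic image."""
        now = time.monotonic() if now is None else now
        if not eye_open and self.closed_since is None:
            self.closed_since = now                 # eye just closed (first time)
        elif eye_open and self.closed_since is not None:
            interval = now - self.closed_since      # second time minus first time
            self.closed_since = None
            if interval >= self.preset_s:           # judgment result is "yes"
                self.on_click()                     # generate and execute the click

left = BlinkClicker(1.0, lambda: print("left click"))    # A >= a (1 s)
right = BlinkClicker(1.5, lambda: print("right click"))  # B >= b (1.5 s)

# Per captured frame, with `detect_eye_open` a hypothetical recognizer:
#   left.update(detect_eye_open(frame, "left"))
#   right.update(detect_eye_open(frame, "right"))
```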
Thus, in the technical solution of the embodiment of the present application, a dynamic image of the human face is captured; facial feature points are recognized from the facial dynamic image; at least one polygon is formed from the positions of the facial feature points; a specific feature point is determined on an edge of the polygon; a position reference point is determined on the display interface; and the moving direction and moving distance of the position indicator of the electronic device are controlled according to the relative position of the specific feature point with respect to the position reference point and the deformation degree value of the polygon as the face rotates, so that the position of the position indicator is controlled simply by rotating the face. The technical solution requires only a single dynamic image capturing device, which solves the prior-art problems of requiring additional equipment, poor positioning accuracy of the position indicator and low efficiency, and the user neither needs to rotate the head through a large angle during use nor to move the electronic device through a large relative displacement.
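For concreteness, the landmark-to-triangle step of this pipeline might be sketched as follows; `detect_landmarks` is a hypothetical stand-in for any facial landmark detector returning (x, y) pixel coordinates, since the patent does not name one.

```python
# Sketch: form the feature triangle and the relative offset of the specific
# feature point (the nose tip) against the position reference point.

def feature_triangle(landmarks):
    """Three non-collinear facial feature points forming the polygon (a triangle).

    `landmarks` maps names to (x, y) points, e.g. the output of a
    hypothetical detect_landmarks(frame) call.
    """
    return (landmarks["left_inner_canthus"],
            landmarks["right_inner_canthus"],
            landmarks["nose_tip"])

def offset_direction(nose_tip, reference_point):
    """Relative offset of the specific feature point with respect to the
    position reference point (e.g. the centre of the display area); its
    signs give the movement direction of the position indicator."""
    return (nose_tip[0] - reference_point[0],
            nose_tip[1] - reference_point[1])

# Usage with a hypothetical detector and a 1920x1080 display:
#   tri = feature_triangle(detect_landmarks(frame))
#   dx, dy = offset_direction(tri[2], (960, 540))
```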
Example two
Referring to fig. 2, a second embodiment of the present invention provides an electronic device that includes a dynamic image acquisition unit and a display unit. The electronic device is, for example, a mobile phone, a tablet computer or a notebook computer, and specifically includes:
a dynamic image obtaining unit 201, configured to obtain the facial dynamic image of the human face captured by the dynamic image acquisition unit.
A facial feature recognition unit 202, configured to recognize the facial dynamic image and obtain at least three facial feature points of the human face that do not lie on the same straight line.
Specifically, the facial feature recognition unit 202 is configured to recognize the facial dynamic image and obtain a left-eye inner canthus position point, a right-eye inner canthus position point and a nose tip position point lying on the triangle; or a left-eye outer canthus position point, a right-eye outer canthus position point and a nose tip position point lying on the triangle.
A graph obtaining unit 203, configured to obtain at least one polygon according to the at least three facial feature points.
Specifically, the graph obtaining unit 203 is configured to obtain at least one triangle or trapezoid according to the at least three facial feature points.
A specific feature point obtaining unit 204, configured to determine a specific feature point on an edge of the polygon.
Specifically, the specific feature point obtaining unit 204 is configured to determine the nose tip position point as the specific feature point.
A position reference point acquiring unit 205 configured to determine a position reference point on the display area of the display unit.
A relative position determination unit 206, configured to determine a relative position of the specific feature point with respect to the position reference point when the specific feature point moves.
Specifically, the relative position determination unit 206 is configured to determine a relative offset direction of the specific feature point with respect to the position reference point when the specific feature point moves.
A deformation degree value determining unit 207, configured to determine a deformation degree value of the polygon when the specific feature point moves.
Specifically, the deformation degree value determination unit 207 specifically includes:
a first deformation degree value determining unit 2071, configured to determine a first deformation degree value of the polygon in the first direction when the specific feature point moves.
A second deformation degree value determining unit 2072, configured to determine a second deformation degree value of the polygon in the second direction when the specific feature point moves.
And, the first direction is different from the second direction.
Further specifically, the first direction is perpendicular to the second direction, and the first direction is a horizontal direction.
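The patent does not fix a formula for the deformation degree values. One plausible reading, sketched below purely as an assumption, measures how much the feature triangle's horizontal and vertical extents shrink relative to a front-facing reference capture: turning the head left or right shrinks the horizontal extent, while nodding up or down shrinks the vertical extent.

```python
# Assumed measure of the first/second deformation degree values: relative
# shrinkage of the feature triangle's extents along the first (horizontal)
# and second (vertical) directions, against a front-facing reference.

def extents(tri):
    """Width and height of the triangle's bounding box, in pixels."""
    xs = [p[0] for p in tri]
    ys = [p[1] for p in tri]
    return max(xs) - min(xs), max(ys) - min(ys)

def deformation_values(tri, reference_tri):
    """Per-direction deformation of `tri` versus a calibration capture."""
    ref_w, ref_h = extents(reference_tri)   # triangle while facing the screen
    cur_w, cur_h = extents(tri)             # triangle in the current frame
    first = 1.0 - cur_w / ref_w             # first direction (horizontal turn)
    second = 1.0 - cur_h / ref_h            # second direction (vertical nod)
    return first, second
```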
A control instruction generating unit 208, configured to generate and execute a first control instruction for controlling the position indicator on the display unit based on the relative position and the deformation degree value.
Specifically, the control instruction generation unit 208 further includes:
a first direction movement distance determination unit 2081, configured to obtain a first movement distance for controlling the position indicator to move in the first direction according to the first deformation degree value.
A second direction movement distance determination unit 2082, configured to obtain a second movement distance for controlling the position indicator to move in the second direction according to the second deformation degree value.
A direct movement distance determination unit 2083, configured to obtain a direct movement distance for controlling the position indicator on the display area according to the first movement distance and the second movement distance.
A direct movement control unit 2084, configured to generate and execute a direct movement control instruction for controlling the position indicator to move the direct movement distance based on the direct movement distance.
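Under the same assumptions, the movement-distance units 2081 through 2084 might be sketched as follows; the gain factor, the sign wiring from the offset direction, and the Euclidean combination are illustrative choices, the last being one natural reading of obtaining a direct movement distance from the first and second movement distances.

```python
import math

GAIN_PX = 400.0   # assumed scale: deformation degree value -> pixels

def direct_movement(first_val, second_val, offset_dx, offset_dy):
    """Combine per-direction movement distances into one direct movement.

    offset_dx/offset_dy are the relative offset of the specific feature point
    with respect to the position reference point; only their signs are used.
    """
    dx = math.copysign(first_val * GAIN_PX, offset_dx)    # first movement distance
    dy = math.copysign(second_val * GAIN_PX, offset_dy)   # second movement distance
    direct = math.hypot(dx, dy)   # direct movement distance on the display area
    return dx, dy, direct

# e.g. move the position indicator by (dx, dy) in one step:
#   dx, dy, dist = direct_movement(first, second, offset_dx, offset_dy)
#   move_position_indicator(dx, dy)   # hypothetical cursor-moving call
```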
Specifically, the electronic device in the embodiment of the present application further includes:
a left-eye dynamic image obtaining unit 209, configured to obtain a left-eye dynamic image of the human body captured by the dynamic image acquisition unit.
A left-eye dynamic image recognition unit 210, configured to recognize the left-eye dynamic image and obtain a left-eye first time, at which the left eye changes from the eye-open state to the eye-closed state, and a left-eye second time after the left-eye first time, at which the left eye changes from the eye-closed state to the eye-open state.
The left-eye interval duration determining unit 211 is configured to determine whether a left-eye interval duration between the left-eye first time and the left-eye second time is greater than or equal to a preset first duration, so as to obtain a first determination result.
And a left-click control instruction operating unit 212, configured to generate and execute a left-click control instruction for controlling the electronic device to perform a left click when the first determination result is yes.
Further specifically, the electronic device in the embodiment of the present application further includes:
a right-eye dynamic image obtaining unit 213, configured to obtain a right-eye dynamic image of the human body captured by the dynamic image acquisition unit.
A right-eye dynamic image recognition unit 214, configured to recognize the right-eye dynamic image and obtain a right-eye first time, at which the right eye changes from the eye-open state to the eye-closed state, and a right-eye second time after the right-eye first time, at which the right eye changes from the eye-closed state to the eye-open state.
The right-eye interval duration determining unit 215 is configured to determine whether a right-eye interval duration between the first right-eye time and the second right-eye time is greater than or equal to a preset second duration, so as to obtain a second determination result.
A right click control instruction operating unit 216, configured to generate and execute a right click control instruction for controlling the electronic device to perform a right click when the second determination result is yes.
Thus, in the technical solution of the embodiment of the present application, a dynamic image of the human face is captured; facial feature points are recognized from the facial dynamic image; at least one polygon is formed from the positions of the facial feature points; a specific feature point is determined on an edge of the polygon; a position reference point is determined on the display interface; and the moving direction and moving distance of the position indicator of the electronic device are controlled according to the relative position of the specific feature point with respect to the position reference point and the deformation degree value of the polygon as the face rotates, thereby achieving the aim of controlling the position of the position indicator of the electronic device by rotating the face.
The embodiment of the application has at least the following technical effects or advantages:
the technical solution of the embodiment of the present application requires only a single dynamic image capturing device, which solves the prior-art problems of requiring additional equipment, poor positioning accuracy of the position indicator and low efficiency, and the user neither needs to rotate the head through a large angle during use nor to move the electronic device through a large relative displacement.
Further, in the technical solution of the present application, the preset facial feature points to be recognized number three: the left-eye inner canthus position point, the right-eye inner canthus position point and the nose tip position point; or the left-eye outer canthus position point, the right-eye outer canthus position point and the nose tip position point. Because the canthus points have sharp edge shapes and the color difference between the inner edge and the outer edge of the canthus is very pronounced, existing techniques recognize these feature points at a very high rate and with high accuracy; the technical solution of the present application therefore also has the technical effects of a high recognition rate and high recognition accuracy.
Further, in the technical solution provided by the embodiment of the present application, the polygon is specifically configured as a triangle or a trapezoid. A triangle is formed from the smallest possible number of points, so the number of facial feature points to be recognized during implementation is also the smallest and the parameters for determining the deformation degree value are obvious, which effectively improves the recognition efficiency and the operating efficiency of the technical solution. On the other hand, the more facial feature points are recognized, the more sides the resulting polygon has and, in theory, the more accurate the judgment of the deformation degree value becomes; adopting a trapezoid as the other specific polygon structure keeps the number of facial feature points to be recognized small while still improving the accuracy of the deformation judgment. The technical solution of the embodiment of the present application therefore also has the technical effect of judging the deformation degree value with high accuracy.
Furthermore, in the technical solution of the embodiment of the present application, the first direction is perpendicular to the second direction and the first direction is horizontal. The movement of the position indicator can therefore be converted using a Cartesian-coordinate-style conversion, which simplifies the position conversion and, during actual operation, matches the user's visual perception more closely, so that the position conversion of the position indicator is more efficient and the user experience is better.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Specifically, the system control method in the embodiment of the present application is applied to an electronic device having a first dynamic image capturing device or a second dynamic image capturing device. Computer program instructions corresponding to the method may be stored in a storage medium such as an optical disc, a hard disk or a USB flash drive, and when the computer program instructions corresponding to the system control method in the storage medium are read and executed by an electronic device, the following steps are performed:
obtaining at least one polygon from the at least three facial feature points;
determining a specific characteristic point on the edge of the polygon;
determining a position reference point on a display area of the display unit;
determining the relative position of the specific feature point relative to the position reference point and the deformation degree value of the polygon.
Optionally, the storage medium further stores other computer instructions. These computer instructions are executed after the step of determining the relative position of the specific feature point with respect to the position reference point and the deformation degree value of the polygon, and when executed they perform the following steps:
obtaining a first movement distance for controlling the position indication mark to move in the first direction according to the first deformation degree value;
obtaining a second movement distance for controlling the position indication mark to move in the second direction according to the second deformation degree value;
according to the first moving distance and the second moving distance, obtaining a direct moving distance for controlling the position indication mark on the display area;
and generating and executing a direct movement control instruction for controlling the position indication mark to move the direct movement distance on the display area according to the direct movement distance.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (22)

1. A system control method, applied to an electronic device that comprises a dynamic image acquisition unit and a display unit, the method comprising:
obtaining a facial dynamic image of the human face captured by the dynamic image acquisition unit;
identifying the dynamic face image to obtain at least three facial feature points which are not on the same straight line in the human face;
obtaining at least one polygon from the at least three facial feature points;
determining a specific feature point on the polygon;
determining a position reference point on a display area of the display unit;
when the specific feature point moves, determining a relative position of the specific feature point with respect to the position reference point and a deformation degree value of the polygon;
and generating and executing a first control instruction for controlling a position indication mark on the display unit based on the relative position and the deformation degree value.
2. The method according to claim 1, wherein the polygon is in particular a triangle or a trapezoid.
3. The method according to claim 2, characterized in that said facial feature points are in particular:
the left eye inner canthus position point, the right eye inner canthus position point and the nose tip position point are positioned on the triangle;
or,
the position point of the left eye external canthus, the position point of the right eye external canthus and the position point of the nose tip are positioned on the triangle.
4. The method according to claim 3, wherein said determining a specific feature point on said polygon is:
and determining the nose tip position point as the specific characteristic point.
5. The method according to claim 1, wherein said determining a location reference point on the display area of the display unit is in particular:
and determining the central point of the display area as the position reference point.
6. The method according to claim 1, wherein the generating and executing a first control instruction for controlling a position indicator on the display unit based on the relative position and the deformation degree value includes:
obtaining an offset direction of the specific feature point relative to the position reference point;
and generating and executing a movement control instruction for controlling the movement direction of the position indication mark based on the offset direction.
7. The method of claim 1, wherein the deformation level values include a first deformation level value characterizing a degree of deformation of the polygon in a first direction and a second deformation level value characterizing a degree of deformation of the polygon in a second direction, the first direction being different from the second direction.
8. The method of claim 7, wherein the first direction is perpendicular to the second direction and the first direction is a horizontal direction.
9. The method according to claim 7, wherein the generating and executing a first control instruction for controlling a position indicator on the display unit based on the relative position and the deformation degree value specifically includes:
obtaining a first movement distance for controlling the position indication mark to move in the first direction according to the first deformation degree value;
obtaining a second movement distance for controlling the position indication mark to move in the second direction according to the second deformation degree value;
according to the first moving distance and the second moving distance, obtaining a direct moving distance for controlling the position indication mark on the display area;
and generating and executing a direct movement control instruction for controlling the position indication mark to move the direct movement distance on the display area according to the direct movement distance.
10. The method of claim 1, wherein the method further comprises:
obtaining a left-eye dynamic image of the human body captured by the dynamic image acquisition unit;
recognizing the left-eye dynamic image to obtain a left-eye first time, at which the left eye changes from the eye-open state to the eye-closed state, and a left-eye second time after the left-eye first time, at which the left eye changes from the eye-closed state to the eye-open state;
judging whether the left-eye interval duration between the left-eye first time and the left-eye second time is greater than or equal to a preset first duration, to obtain a first judgment result;
and when the first judgment result is yes, generating and executing a left-click control instruction for controlling the electronic equipment to perform left-click.
11. The method of claim 1, wherein the method further comprises:
obtaining a right-eye dynamic image of the human body captured by the dynamic image acquisition unit;
recognizing the right-eye dynamic image to obtain a right-eye first time, at which the right eye changes from the eye-open state to the eye-closed state, and a right-eye second time after the right-eye first time, at which the right eye changes from the eye-closed state to the eye-open state;
judging whether the right-eye interval duration between the right-eye first time and the right-eye second time is greater than or equal to a preset second duration, to obtain a second judgment result;
and when the second judgment result is yes, generating and executing a right click control instruction for controlling the electronic equipment to click right.
12. An electronic device, comprising a dynamic image acquisition unit and a display unit, and further comprising:
a dynamic image obtaining unit, configured to obtain a facial dynamic image of the human face captured by the dynamic image acquisition unit;
the facial feature recognition unit is used for recognizing the dynamic facial image to obtain at least three facial feature points which are not on the same straight line in the human face;
a graph obtaining unit, configured to obtain at least one polygon according to the at least three facial feature points;
a specific feature point obtaining unit, configured to determine a specific feature point on the polygon;
a position reference point acquisition unit configured to determine a position reference point on a display area of the display unit;
a relative position determination unit configured to determine a relative position of the specific feature point with respect to the position reference point when the specific feature point moves;
a deformation degree value determination unit for determining a deformation degree value of the polygon when the specific feature point moves;
and the control instruction generating unit is used for generating and executing a first control instruction for controlling the position indication mark on the display unit based on the relative position and the deformation degree value.
13. The electronic device according to claim 12, wherein the graph obtaining unit is specifically configured:
to obtain at least one triangle or trapezoid from the at least three facial feature points.
14. The electronic device of claim 13, wherein the facial feature recognition unit is specifically configured:
to recognize the facial dynamic image and obtain a left-eye inner canthus position point, a right-eye inner canthus position point and a nose tip position point lying on the triangle;
or,
to obtain a left-eye outer canthus position point, a right-eye outer canthus position point and a nose tip position point lying on the triangle.
15. The electronic device according to claim 14, wherein the specific feature point obtaining unit is specifically:
to determine the nose tip location point as the specific feature point.
16. The electronic device according to claim 12, wherein the position reference point obtaining unit is specifically:
the central point of the display area is determined as the position reference point.
17. The electronic device according to claim 12, wherein the relative position determination unit is specifically:
to determine a relative offset direction of the particular feature point with respect to the location reference point.
18. The electronic device according to claim 12, wherein the deformation degree value determination unit specifically includes:
a first deformation degree value determination unit for determining a first deformation degree value of the polygon in a first direction;
a second deformation degree value determination unit for determining a second deformation degree value of the polygon in a second direction;
the first direction is different from the second direction.
19. The electronic device of claim 18, wherein the first direction is perpendicular to the second direction, and the first direction is a horizontal direction.
20. The electronic device of claim 18, wherein the control instruction generation unit further comprises:
a first direction movement distance determination unit configured to obtain a first movement distance for controlling the position indicator to move in the first direction according to the first deformation degree value;
a second direction movement distance determination unit configured to obtain a second movement distance for controlling the position indicator to move in the second direction according to the second deformation degree value;
a direct movement distance determination unit, configured to obtain a direct movement distance for controlling the position indicator on the display area according to the first movement distance and the second movement distance;
and the direct movement control unit is used for generating and executing a direct movement control instruction for controlling the position indication mark to move the direct movement distance based on the direct movement distance.
21. The electronic device of claim 12, wherein the electronic device further comprises:
a left-eye dynamic image obtaining unit, configured to obtain a left-eye dynamic image of the human body captured by the dynamic image acquisition unit;
a left-eye dynamic image recognition unit, configured to recognize the left-eye dynamic image and obtain a left-eye first time, at which the left eye changes from the eye-open state to the eye-closed state, and a left-eye second time after the left-eye first time, at which the left eye changes from the eye-closed state to the eye-open state;
the left-eye interval duration judging unit is used for judging whether the left-eye interval duration between the left-eye first moment and the left-eye second moment is greater than or equal to a preset first duration or not to obtain a first judgment result;
and the left-click control instruction operating unit is used for generating and executing a left-click control instruction for controlling the electronic equipment to perform left-click when the first judgment result is yes.
22. The electronic device of claim 12, wherein the electronic device further comprises:
a right-eye dynamic image obtaining unit, configured to obtain a right-eye dynamic image of the human body captured by the dynamic image acquisition unit;
a right-eye dynamic image recognition unit, configured to recognize the right-eye dynamic image and obtain a right-eye first time, at which the right eye changes from the eye-open state to the eye-closed state, and a right-eye second time after the right-eye first time, at which the right eye changes from the eye-closed state to the eye-open state;
the right-eye interval duration judging unit is used for judging whether the right-eye interval duration between the first right-eye time and the second right-eye time is greater than or equal to a preset second duration or not to obtain a second judgment result;
and the right click control instruction operating unit is used for generating and executing a right click control instruction for controlling the electronic equipment to perform right click when the second judgment result is yes.
CN201410328111.5A 2014-07-10 2014-07-10 A kind of system control method and electronic equipment Active CN105242888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410328111.5A CN105242888B (en) 2014-07-10 2014-07-10 A kind of system control method and electronic equipment


Publications (2)

Publication Number Publication Date
CN105242888A CN105242888A (en) 2016-01-13
CN105242888B 2018-10-12

Family

ID=55040551

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410328111.5A Active CN105242888B (en) 2014-07-10 2014-07-10 A kind of system control method and electronic equipment

Country Status (1)

Country Link
CN (1) CN105242888B (en)


Also Published As

Publication number Publication date
CN105242888A (en) 2016-01-13


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant