Specific embodiments
To describe the technical content, structural features, objectives, and effects of the invention in detail, embodiments are explained below in conjunction with the accompanying drawings.
Referring to Fig. 1, which is a schematic hardware diagram of a virtual touch operation system in an embodiment of the present invention, the system 100 includes a virtual touch operation device 10, two camera devices 20, and a display device 21, and implements touch input by detecting user gestures.
Referring to Fig. 2, which is a schematic functional block diagram of the virtual touch operation device in an embodiment of the present invention, the device 10 includes an ambient brightness sensing unit 101, a view recognition unit 102, a finger number judging unit 103, an operation mode judging unit 104, a horizontal-plane two-dimensional coordinate establishing unit 105, a vertical-plane two-dimensional coordinate establishing unit 106, a three-dimensional coordinate calculating unit 107, an action judging unit 108, a chart drawing unit 109, a display control unit 110, and a display unit 111. The device 10 may be an electronic device such as a camera, a mobile phone, or a tablet computer. The camera devices 20 communicate with the device 10 over a network whose transmission medium may be a wireless medium such as Bluetooth, ZigBee, or WiFi.
Each camera device 20 includes a first camera 201 and a second camera 202, serving respectively as a longitudinal camera device and a horizontal camera device. The first camera 201, serving as the longitudinal camera device, may be a mobile portable electronic device such as smart glasses that can be positioned above the user's hand; the second camera 202, serving as the horizontal camera device, may be a mobile portable electronic device such as a smart bracelet that can be placed in front of the user. Further, the first camera 201 and the second camera 202 of each camera device 20 are respectively an ordinary camera and an infrared camera. The ordinary camera performs image acquisition of the user's operation actions under good lighting conditions and sends the images to the device 10 for analysis; the infrared camera performs image acquisition of the user's operation actions under poor lighting conditions and sends the images to the device 10 for analysis. The view recognition unit 102 includes a longitudinal view recognition subunit 1021 and a transverse view recognition subunit 1022, arranged to correspond respectively to the first camera 201 serving as the longitudinal camera device and the second camera 202 serving as the horizontal camera device, and to perform recognition processing on the images they acquire.
In the initial state, the two pairs of cameras (a pair of ordinary cameras and a pair of infrared cameras) are used together, with their shooting directions set perpendicular to each other so that hand actions in both the vertical and horizontal directions can be captured simultaneously. Generally, the two cameras in the smart glasses (one ordinary camera and one infrared camera) face downward, while the two cameras in the smart bracelet or smartphone (one ordinary camera and one infrared camera) are placed horizontally. The rectangular regions photographed by the two pairs of cameras jointly form the image capture region.
The ambient brightness sensing unit 101 senses the brightness value of the environment and sends the ambient brightness value to the view recognition unit 102. The view recognition unit 102 determines, according to a preset brightness threshold, whether to use the ordinary camera or the infrared camera. For example, if the brightness sensing range is 1 to 100 and the threshold is 50, the ordinary camera is used when the ambient brightness value is greater than 50, and the infrared camera is used when the ambient brightness value is less than 50.
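The camera-selection rule above can be sketched as follows; this is a minimal illustration using the example figures from the text (range 1 to 100, threshold 50), and the behavior exactly at the threshold is an assumption, since the text does not specify it.

```python
# Camera selection by ambient brightness, per the example in the text.
BRIGHTNESS_THRESHOLD = 50  # example threshold from the text

def select_camera(ambient_brightness: int) -> str:
    """Return which camera type to use for the given ambient brightness.

    Brightness strictly above the threshold selects the ordinary
    (visible-light) camera; otherwise the infrared camera is used.
    The tie-breaking at exactly 50 is an assumption.
    """
    if ambient_brightness > BRIGHTNESS_THRESHOLD:
        return "ordinary"
    return "infrared"

print(select_camera(80))  # good lighting
print(select_camera(20))  # poor lighting
```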
After the camera type to use has been determined from the ambient brightness value, the initial positioning operation starts, as follows. During initial positioning by the device 10, the user clenches the hand to be used for operation into a fist and holds it, suspended and flat, at a position that both selected cameras can photograph, i.e. in the image capture region, keeping it still for a certain time to complete the initialization flow of the user's hand position, so that the device 10 can recognize and locate the initial position of the hand for subsequent operation. The principle by which the device 10 recognizes and locates the hand position is described in detail below.
During interactive operation, the user holds the operating hand (hereinafter, the single hand) suspended and flat within the image capture region. The longitudinal view recognition subunit 1021 determines, from the ambient brightness value detected by the ambient brightness sensing unit 101, whether to use the ordinary camera or the infrared camera, and after the camera to use has been determined, performs hand recognition on the image data acquired by the ordinary camera or infrared camera above the single hand serving as the longitudinal camera device, to determine the position of the hand center point in the image. The transverse view recognition subunit 1022 likewise determines, from the ambient brightness value detected by the ambient brightness sensing unit 101, whether to use the ordinary camera or the infrared camera, and after the camera to use has been determined, performs hand recognition on the image data acquired by the ordinary camera or infrared camera in front of the single hand serving as the horizontal camera device, to determine the position of the hand center point in the image.
The hand center point position in the image determined by the longitudinal view recognition subunit 1021 is the position in the image of the hand center point pixel on the XZ coordinate plane; for example, the hand center point pixel is located at row a, column b of the XZ-plane image. The hand center point position in the image determined by the transverse view recognition subunit 1022 is the position in the image of the hand center point pixel on the YZ coordinate plane.
Further, methods of determining the hand center point with the ordinary camera include a monochromatic background method and a colored glove assisting method. In the monochromatic background method, the environmental background of the hand operation must be relatively simple and uniform in color, so the hand image can be extracted directly using the color interval of human skin tone; the row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points. In the colored glove assisting method, the user wears special pure-red gloves; because ordinary cameras all sample in RGB (red, green, blue), the pure-red region can be extracted directly (green or blue may also be used as the glove finger endpoint color). The row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points.
Methods of determining the hand center point with the infrared camera include a temperature filtering method and a colored glove assisting method. In the temperature filtering method, the hand image is extracted directly by exploiting the fact that the surface temperature of the human body is higher than the ambient temperature; the row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points. In the colored glove assisting method, the user wears special gloves whose surface generates heat, so the hot region in the image can be extracted directly; the row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points.
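All four extraction methods above reduce to the same center-point computation once the hand pixels have been segmented. A minimal sketch, assuming the segmented hand is given as a list of (row, column) pixel positions (the segmentation step itself is omitted):

```python
# Hand center point from an extracted hand-pixel set, as described above:
# row = average of the highest and lowest mask rows,
# column = average of the leftmost and rightmost mask columns.

def hand_center(mask_pixels):
    """Return (row, col) of the hand center from segmented hand pixels."""
    rows = [r for r, _ in mask_pixels]
    cols = [c for _, c in mask_pixels]
    center_row = (min(rows) + max(rows)) // 2
    center_col = (min(cols) + max(cols)) // 2
    return center_row, center_col

# A toy 3x3 block of "hand" pixels whose center should be (5, 7):
pixels = [(r, c) for r in range(4, 7) for c in range(6, 9)]
print(hand_center(pixels))
```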
The horizontal-plane two-dimensional coordinate establishing unit 105 converts the hand center point pixel position into a two-dimensional coordinate value on the XZ coordinate plane, according to the position in the image of the hand center point identified by the longitudinal view recognition subunit 1021 and the pixel resolution of the camera. The vertical-plane two-dimensional coordinate establishing unit 106 converts the hand center point pixel position into a two-dimensional coordinate value on the YZ coordinate plane, according to the position in the image of the hand center point identified by the transverse view recognition subunit 1022 and the pixel resolution of the camera.
Referring to Fig. 3, the principle of converting the hand center point pixel position into a two-dimensional coordinate value on the XZ coordinate plane is as follows. The pixel at the lower-left corner of the image is set as the origin 0 of the two-dimensional coordinate system, and the ratio of the coordinate value range to the row and column counts of the image is computed from the image resolution and the coordinate value range after conversion. For example, if the XZ-plane image resolution is 2000 pixels wide by 1000 pixels high and the coordinate value range of the two-dimensional XZ plane coordinate system is 1 to 150 on the X axis and 1 to 100 on the Z axis, then the ratio of the Z-axis coordinate range to the image row count is 100/1000 and the ratio of the X-axis coordinate range to the image column count is 150/2000. Multiplying the pixel position of the hand center point by the computed row and column ratios yields its two-dimensional coordinate value after conversion. For example, if the pixel position of a hand center point is row 300, column 200, the Z-axis coordinate of the hand center point is 300*100/1000=30 and its X-axis coordinate is 200*150/2000=15. The principle of converting the hand center point pixel position into a two-dimensional coordinate value on the YZ coordinate plane is the same and is not repeated here.
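The conversion above can be written directly; this sketch uses the example resolution and coordinate ranges from the text, and assumes (as the worked example implies) that the row index is already counted from the lower-left origin:

```python
def pixel_to_xz(row, col, img_w=2000, img_h=1000, x_range=150, z_range=100):
    """Convert a hand-center pixel (row, col) in the XZ-plane image to
    2D (x, z) coordinates using the row/column ratios from the text."""
    z = row * z_range / img_h   # Z-axis row ratio, e.g. 100/1000
    x = col * x_range / img_w   # X-axis column ratio, e.g. 150/2000
    return x, z

# Worked example from the text: pixel at row 300, column 200.
print(pixel_to_xz(300, 200))  # -> (15.0, 30.0)
```

The YZ-plane conversion is identical in form, with the Y-axis range in place of the X-axis range.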
The three-dimensional coordinate calculating unit 107 establishes the coordinate value of the hand center point in the XYZ three-dimensional coordinate system from the hand center point two-dimensional coordinate values on the XZ and YZ coordinate planes determined respectively by the horizontal-plane two-dimensional coordinate establishing unit 105 and the vertical-plane two-dimensional coordinate establishing unit 106.
The operating principle of establishing the coordinate value of the hand center point in the XYZ three-dimensional coordinate system is as follows. Because the XZ and YZ coordinate planes share a common Z axis, the Z values of the coordinate endpoints on the XZ plane and the Z values of the coordinate endpoints on the YZ plane are all extracted and compared; endpoints whose Z-axis coordinate values are identical or closest can be regarded as the same endpoint, and the coordinate value on the XZ plane and the coordinate value on the YZ plane judged to belong to the same endpoint are merged into one coordinate endpoint, which serves as the coordinate value in the XYZ three-dimensional coordinate system. Because the two Z values may differ, the Z value of the newly produced three-dimensional coordinate is the Z value on the XZ plane plus the Z value on the YZ plane, divided by 2; the X and Y coordinate values in the three-dimensional coordinate system are equal to the X coordinate value on the XZ plane and the Y coordinate value on the YZ plane, respectively.
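The merge step above can be sketched for a single pair of matched endpoints; the tolerance used to decide that two Z values belong to the same endpoint is an assumption, since the text says only "identical or closest":

```python
# Merge one XZ-plane point and one YZ-plane point (sharing the common Z
# axis) into an XYZ coordinate, averaging the two Z values as described.

def merge_to_xyz(xz_point, yz_point, z_tolerance=2.0):
    """Return (x, y, z) if the two points match on Z, else None.

    z_tolerance is an illustrative assumption for "closest" Z values.
    """
    x, z1 = xz_point
    y, z2 = yz_point
    if abs(z1 - z2) > z_tolerance:
        return None  # not the same endpoint
    return (x, y, (z1 + z2) / 2)  # Z is the average of the two Z values

print(merge_to_xyz((15.0, 30.0), (42.0, 31.0)))  # matched pair
print(merge_to_xyz((15.0, 30.0), (42.0, 80.0)))  # Z values too far apart
```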
While the longitudinal view recognition subunit 1021 sends the position of the hand center point in the image to the horizontal-plane two-dimensional coordinate establishing unit 105, it also sends the recognized hand image to the finger number judging unit 103. The finger number judging unit 103 recognizes from the hand image the number of fingers the user is using for operation in the longitudinal view.
Specifically, the finger number judging unit 103 determines the number of fingers by recognizing finger endpoints in the hand image. Methods of recognizing finger endpoints with the ordinary camera include the monochromatic background method and the colored glove assisting method. In the monochromatic background method, the environmental background of the hand operation must be relatively simple and uniform in color, so the hand image can be extracted directly using the color interval of human skin tone; the cut-off position of each strip-shaped extension of the hand is then computed with a figure endpoint algorithm and taken as the endpoint position of each finger, and the number of endpoints is counted. In the colored glove assisting method, the user wears special gloves whose fingertip positions are pure red; because ordinary cameras all sample in RGB (red, green, blue), the positions of the pure-red points can be extracted directly (green or blue may also be used as the glove finger endpoint color), and the number of endpoints is then counted.
Methods of recognizing finger endpoints with the infrared camera include the temperature filtering method and the colored glove assisting method. In the temperature filtering method, the hand image is extracted directly by exploiting the fact that the surface temperature of the human body is higher than the ambient temperature; the cut-off position of each strip-shaped extension of the hand is then computed with the figure endpoint algorithm and taken as the endpoint position of each finger, and the number of endpoints is counted. In the colored glove assisting method, the user wears special gloves whose fingertip positions are heated points, so the hot points in the image can be extracted directly, and the number of endpoints is then counted.
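The final counting step is common to all variants. The text names a "figure endpoint algorithm" without detailing it, so the sketch below substitutes a simple distance-based grouping of segmented fingertip pixels, which is only a stand-in for that algorithm:

```python
# Count fingers by grouping segmented fingertip pixels into clusters.
# The clustering here (pixels within max_gap of an existing cluster
# member join that cluster) is an illustrative stand-in for the
# "figure endpoint algorithm" named but not specified in the text.

def count_fingertips(tip_pixels, max_gap=5):
    """Return the number of fingertip clusters among (row, col) pixels."""
    clusters = []
    for p in sorted(tip_pixels):
        for cluster in clusters:
            if any(abs(p[0] - q[0]) <= max_gap and abs(p[1] - q[1]) <= max_gap
                   for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])  # no nearby cluster: start a new one
    return len(clusters)

# Two well-separated fingertip blobs -> two fingers.
tips = [(10, 10), (10, 11), (11, 10), (10, 40), (11, 41)]
print(count_fingertips(tips))
```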
The operation mode judging unit 104 selects the operation mode according to the number of fingers the user uses, as determined by the finger number judging unit 103. For example, when no finger is extended (the hand forms a fist), the operation mode on the virtual touch screen is changing the position coordinates of the hand on the screen; when only one finger is used (the other fingers form a fist), the operation mode on the virtual touch screen is selecting an icon; when two fingers are used, the operation mode on the virtual touch screen is dragging the selected icon; when three fingers are used, the operation mode on the virtual touch screen is sliding the whole screen. At most six operation modes can be defined, for 0 to 5 fingers, and the operation mode corresponding to each finger count can be redefined.
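The finger-count-to-mode mapping above amounts to a lookup table. In this sketch the entries for four and five fingers are placeholders, since the text leaves them user-definable:

```python
# Mapping from finger count to operation mode, per the examples above.
# The 4- and 5-finger entries are placeholders: the text defines up to
# six modes but specifies only the first four, all of them redefinable.
OPERATION_MODES = {
    0: "move hand position on screen",
    1: "select an icon",
    2: "drag the selected icon",
    3: "slide the whole screen",
    4: "(user-definable)",
    5: "(user-definable)",
}

print(OPERATION_MODES[2])
```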
The action judging unit 108 judges the user's operating position and operation mode according to the hand three-dimensional coordinate value established by the three-dimensional coordinate calculating unit 107, the operation mode determined by the operation mode judging unit 104, and the XZ coordinate range regions of the operable icons fed back by the chart drawing unit 109.
In this embodiment, during the hand position initialization phase, the action judging unit 108 takes the lowest endpoint in the vertical direction of the hand center point's three-dimensional coordinate value (that is, the minimum Y-axis coordinate value) as the click determination plane Y-axis value, maps the three-dimensional coordinate value of the hand center point onto the operable region, and, in combination with the current operation mode, judges the operation result corresponding to the hand currently passing through the click determination plane. For example, under the 0-finger operation mode of changing the hand position coordinates on the screen, a click operation has no meaning; under the 1-finger operation mode of selecting an icon, a click operation means selecting an icon on the screen; under the 2-finger operation mode of dragging the selected icon, a click action means the user starts or ends dragging an icon; under the 3-finger operation mode of sliding the whole screen, a click action means the user starts or ends dragging the whole screen picture, and so on.
Because the action judging unit 108 sets the initial value of the click determination plane from the Y value during the hand position initialization phase, the Y values of the hand coordinates are all greater than or equal to the determination value of the click determination plane. Moreover, when the user moves the hand to perform normal operations under an operation mode, each time the action judging unit 108 receives the three-dimensional coordinate of the hand center point, it no longer resets the click determination plane Y-axis value, but directly uses that value to judge whether an effective click action has occurred on the virtual touch screen.
Mapping the three-dimensional coordinate value of the hand center point onto the operable region of the touch screen is specifically as follows: the operable region range is set to the coordinate value range of the XZ coordinate plane, so the XZ-plane coordinate of the hand center point can be mapped directly to a planar position coordinate within the operable region.
A click action is judged from the three-dimensional coordinate value of the hand center point as follows: once the click determination plane Y-axis value has been selected, whenever the Y value in the hand center point's three-dimensional coordinate is lower than the click determination plane Y-axis value, the endpoint is judged to have passed through the click determination plane, i.e. the finger has performed a click; then, in combination with the region in which the hand center point lies, the device determines which position the user ultimately clicked.
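The click test above can be sketched as a small stateful class; the class and attribute names are illustrative, not from the text:

```python
# Click detection per the text: the minimum Y observed at initialization
# becomes the click determination plane; afterwards any hand Y value
# below that plane counts as a click at the hand's mapped (x, z) position.

class ClickDetector:
    def __init__(self, initial_hand_y):
        # Set once during the hand position initialization phase and
        # never reset during normal operation.
        self.click_plane_y = initial_hand_y

    def update(self, x, y, z):
        """Return the (x, z) click position if the hand crossed the plane."""
        if y < self.click_plane_y:
            return (x, z)  # finger passed through the plane: click
        return None        # hand still above the plane: no click

det = ClickDetector(initial_hand_y=20.0)
print(det.update(15.0, 25.0, 30.0))  # above the plane
print(det.update(15.0, 18.0, 30.0))  # crossed the plane
```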
The chart drawing unit 109 draws the picture of the hand position and effective operations on the virtual touch screen according to the judgment result of the action judging unit 108 and the coordinates of all currently operable icons. The chart drawing unit 109 initializes the XZ-plane coordinate values of all operable icons and feeds the XZ-plane coordinate region of each operable icon back to the action judging unit 108. The chart drawing unit 109 changes the positions of the operable icons according to the operations, makes a specific response according to the position region of the click behavior that occurs, for example highlighting, dragging, or deleting, sends the drawn image to the display control unit 110, and at the same time updates the coordinate values of the operable icons after their positions move and feeds them back to the action judging unit 108.
The display control unit 110 converts the image drawn by the chart drawing unit 109 into a timing sequence that the display device 21 can display, and calls the display unit 111 to show the image of operations on the virtual touch screen on the display device 21 for the user to watch. From this feedback the user can learn the position of the current hand center point on the corresponding virtual touch screen, and can then continue moving the hand according to the displayed content to proceed with virtual touch operations.
Referring to Fig. 4, which is a schematic flowchart of the virtual touch operation method in an embodiment of the present invention, the method includes the following steps.
Step S30: the ambient brightness sensing unit 101 senses the brightness value of the environment, and the view recognition unit 102 determines whether to use the ordinary camera or the infrared camera according to the preset brightness threshold and the ambient brightness value sensed by the ambient brightness sensing unit 101.
In the initial state, the two pairs of cameras (a pair of ordinary cameras and a pair of infrared cameras) are used together, with their shooting directions set perpendicular to each other so that hand actions in both the vertical and horizontal directions can be captured simultaneously. Generally, the two cameras in the smart glasses (one ordinary camera and one infrared camera) face downward, while the two cameras in the smart bracelet or smartphone (one ordinary camera and one infrared camera) are placed horizontally. The rectangular regions photographed by the two pairs of cameras jointly form the image capture region.
Step S31: the user clenches the hand to be used for operation into a fist, holds it suspended in the image capture region, and keeps it still for a certain time; the device 10 recognizes and locates the initial position of the hand, completing the initialization of the user's hand position. The principle by which the device 10 recognizes and locates the hand position is described in detail below.
Step S32: the user holds the operating hand (hereinafter, the single hand) suspended and flat within the image capture region. The longitudinal view recognition subunit 1021 performs hand recognition on the image data acquired by the ordinary camera or infrared camera above the single hand serving as the longitudinal camera device, to determine the position of the hand center point in the image. The transverse view recognition subunit 1022 performs hand recognition on the image data acquired by the ordinary camera or infrared camera in front of the single hand serving as the horizontal camera device, to determine the position of the hand center point in the image.
Specifically, the hand center point position in the image determined by the longitudinal view recognition subunit 1021 is the position in the image of the hand center point pixel on the XZ coordinate plane; for example, the hand center point pixel is located at row a, column b of the XZ-plane image. The hand center point position in the image determined by the transverse view recognition subunit 1022 is the position in the image of the hand center point pixel on the YZ coordinate plane.
Further, methods of determining the hand center point with the ordinary camera include the monochromatic background method and the colored glove assisting method. In the monochromatic background method, the environmental background of the hand operation must be relatively simple and uniform in color, so the hand image can be extracted directly using the color interval of human skin tone; the row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points. In the colored glove assisting method, the user wears special pure-red gloves; because ordinary cameras all sample in RGB (red, green, blue), the pure-red region can be extracted directly (green or blue may also be used as the glove finger endpoint color). The row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points.
Methods of determining the hand center point with the infrared camera include the temperature filtering method and the colored glove assisting method. In the temperature filtering method, the hand image is extracted directly by exploiting the fact that the surface temperature of the human body is higher than the ambient temperature; the row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points. In the colored glove assisting method, the user wears special gloves whose surface generates heat, so the hot region in the image can be extracted directly; the row number of the center point is then obtained by averaging the highest and lowest points of the extracted hand image region, and its column number by averaging the leftmost and rightmost points.
Step S33: the horizontal-plane two-dimensional coordinate establishing unit 105 converts the hand center point pixel position into a two-dimensional coordinate value on the XZ coordinate plane, according to the position in the image of the hand center point identified by the longitudinal view recognition subunit 1021 and the pixel resolution of the camera. The vertical-plane two-dimensional coordinate establishing unit 106 converts the hand center point pixel position into a two-dimensional coordinate value on the YZ coordinate plane, according to the position in the image of the hand center point identified by the transverse view recognition subunit 1022 and the pixel resolution of the camera.
The principle of converting the hand center point pixel position into a two-dimensional coordinate value on the XZ coordinate plane is as follows. The pixel at the lower-left corner of the image is set as the origin 0 of the two-dimensional coordinate system, and the ratio of the coordinate value range to the row and column counts of the image is computed from the image resolution and the coordinate value range after conversion. For example, if the XZ-plane image resolution is 2000 pixels wide by 1000 pixels high and the coordinate value range of the two-dimensional XZ plane coordinate system is 1 to 150 on the X axis and 1 to 100 on the Z axis, then the ratio of the Z-axis coordinate range to the image row count is 100/1000 and the ratio of the X-axis coordinate range to the image column count is 150/2000. Multiplying the pixel position of the hand center point by the computed row and column ratios yields its two-dimensional coordinate value after conversion. For example, if the pixel position of a hand center point is row 300, column 200, the Z-axis coordinate of the hand center point is 300*100/1000=30 and its X-axis coordinate is 200*150/2000=15. The principle of converting the hand center point pixel position into a two-dimensional coordinate value on the YZ coordinate plane is the same and is not repeated here.
Step S34: the three-dimensional coordinate calculating unit 107 establishes the coordinate value of the hand center point in the XYZ three-dimensional coordinate system from the hand center point two-dimensional coordinate values on the XZ and YZ coordinate planes determined respectively by the horizontal-plane two-dimensional coordinate establishing unit 105 and the vertical-plane two-dimensional coordinate establishing unit 106.
The method of establishing the coordinate value of the hand center point in the XYZ three-dimensional coordinate system is as follows. Because the XZ and YZ coordinate planes share a common Z axis, the Z values of the coordinate endpoints on the XZ plane and the Z values of the coordinate endpoints on the YZ plane are all extracted and compared; endpoints whose Z-axis coordinate values are identical or closest can be regarded as the same endpoint, and the coordinate value on the XZ plane and the coordinate value on the YZ plane judged to belong to the same endpoint are merged into one coordinate endpoint, which serves as the coordinate value in the XYZ three-dimensional coordinate system. Because the two Z values may differ, the Z value of the newly produced three-dimensional coordinate is the Z value on the XZ plane plus the Z value on the YZ plane, divided by 2; the X and Y coordinate values in the three-dimensional coordinate system are equal to the X coordinate value on the XZ plane and the Y coordinate value on the YZ plane, respectively.
Step S35: the finger number judging unit 103 recognizes from the hand image the number of fingers the user is using for operation in the longitudinal view.
Specifically, the finger number judging unit 103 determines the number of fingers by recognizing finger endpoints in the hand image. Methods of recognizing finger endpoints with the ordinary camera include the monochromatic background method and the colored glove assisting method. In the monochromatic background method, the environmental background of the hand operation must be relatively simple and uniform in color, so the hand image can be extracted directly using the color interval of human skin tone; the cut-off position of each strip-shaped extension of the hand is then computed with the figure endpoint algorithm and taken as the endpoint position of each finger, and the number of endpoints is counted. In the colored glove assisting method, the user wears special gloves whose fingertip positions are pure red; because ordinary cameras all sample in RGB (red, green, blue), the positions of the pure-red points can be extracted directly (green or blue may also be used as the glove finger endpoint color), and the number of endpoints is then counted.
Methods of recognizing finger endpoints with the infrared camera include the temperature filtering method and the colored glove assisting method. In the temperature filtering method, the hand image is extracted directly by exploiting the fact that the surface temperature of the human body is higher than the ambient temperature; the cut-off position of each strip-shaped extension of the hand is then computed with the figure endpoint algorithm and taken as the endpoint position of each finger, and the number of endpoints is counted. In the colored glove assisting method, the user wears special gloves whose fingertip positions are heated points, so the hot points in the image can be extracted directly, and the number of endpoints is then counted.
Step S36: the operation mode judging unit 104 selects the operation mode according to the number of fingers the user uses, as determined by the finger number judging unit 103.
For example, when no finger is extended (the hand forms a fist), the operation mode on the virtual touch screen is changing the position coordinates of the hand on the screen; when only one finger is used (the other fingers form a fist), the operation mode on the virtual touch screen is selecting an icon; when two fingers are used, the operation mode on the virtual touch screen is dragging the selected icon; when three fingers are used, the operation mode on the virtual touch screen is sliding the whole screen. At most six operation modes can be defined, for 0 to 5 fingers, and the operation mode corresponding to each finger count can be redefined.
In step S37, the action judging unit 108 judges the operating position and operation mode of the user according to the three-dimensional hand coordinate values established by the three-dimensional coordinate computing unit 107, the operation mode determined by the mode of operation judging unit 104, and the XZ coordinate range regions of the operable icons fed back by the chart drawing unit 109.
In the present embodiment, during the hand-position initialization phase, the action judging unit 108 takes the end point with the minimum three-dimensional coordinate value in the vertical direction relative to the hand center point (namely, the minimum Y-axis coordinate value) as the Y-axis value of the click judgment plane, maps the three-dimensional coordinate value of the hand center point onto the operable area, and, in combination with the current operation mode, judges the operating result corresponding to the hand passing through the click judgment plane. For example, under the zero-finger operation mode of changing the hand position coordinates on the screen, a click has no meaning; under the one-finger operation mode of selecting an icon, a click selects the icon at that screen position; under the two-finger operation mode of dragging the selected icon, a click indicates that the user starts or ends dragging the icon; under the three-finger operation mode of sliding the whole screen, a click indicates that the user starts or ends dragging the whole screen picture, and so on.
Because the hand-position initialization phase sets the initial Y-axis value of the click judgment plane, the Y values of the hand coordinates at that moment are all greater than or equal to the judgment value of the click judgment plane. Afterwards, when the user moves the hand to perform normal operations under an operation mode, each time the action judging unit 108 receives the three-dimensional coordinates of the hand center point it no longer resets the Y-axis value of the click judgment plane, but directly uses that value to judge whether an effective click action has occurred on the virtual touch screen.
The method of mapping the three-dimensional coordinate value of the hand center point onto the operable area of the touch screen is as follows: the operable area is defined as a range of coordinate values on the XZ coordinate plane, so the XZ coordinates of the hand center point can be mapped directly to a planar position coordinate in the operable area.
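The direct XZ mapping just described can be expressed as a linear rescaling from the camera's coordinate range to the operable area. The ranges and function names below are hypothetical examples for illustration; the patent does not specify particular values.

```python
# Sketch of mapping the hand center point's XZ coordinates into the operable
# area of the virtual touch screen; ranges here are hypothetical examples.

def map_to_operable_area(x, z, cam_range, screen_size):
    """Linearly map a camera-space (x, z) pair into screen coordinates."""
    x0, x1, z0, z1 = cam_range         # XZ range covered by the cameras
    w, h = screen_size                 # operable area size in pixels
    sx = (x - x0) / (x1 - x0) * w
    sz = (z - z0) / (z1 - z0) * h
    return sx, sz

# A hand centered in a 0..100 camera range maps to the screen center.
print(map_to_operable_area(50, 50, (0, 100, 0, 100), (640, 480)))  # -> (320.0, 240.0)
```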
The method of judging a click action according to the three-dimensional coordinate value of the hand center point is as follows: once the Y-axis value of the click judgment plane has been selected, whenever the Y value in the three-dimensional coordinates of the hand center point is smaller than that Y-axis value, it is judged that the end point has passed through the click judgment plane, that is, that the finger has performed a click action. Combined with the region in which the hand center point lies, it is then determined at which position the user has performed the click operation.
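The initialization and judgment steps above can be sketched as follows. This is an illustrative sketch with hypothetical names, assuming the click plane is fixed to the minimum Y value observed during the initialization phase, as the text describes.

```python
# Sketch of the click judgment plane: during initialization the minimum Y
# value of the hand end points is stored as the plane; afterwards any hand
# center Y below that plane counts as an effective click.

class ClickJudge:
    def __init__(self, initial_hand_ys):
        # Y-axis value of the click judgment plane: the minimum Y-axis
        # coordinate seen during the hand-position initialization phase.
        self.click_plane_y = min(initial_hand_ys)

    def is_click(self, hand_center_y):
        # A click occurs once the hand center passes below the click plane.
        return hand_center_y < self.click_plane_y

judge = ClickJudge(initial_hand_ys=[12.0, 10.5, 11.3])
print(judge.is_click(11.0))  # still above the plane -> False
print(judge.is_click(9.8))   # passed through the plane -> True
```

Note that, consistent with the text, the plane is set once and never reset during normal operation, so every subsequent frame is judged against the same threshold.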
In step S38, the chart drawing unit 109 draws the position of the hand and the picture of the effective operation on the virtual touch screen according to the judgment result of the action judging unit 108 and the coordinates of all currently operable icons.
At the start, the chart drawing unit 109 initializes the XZ-plane coordinate values of all operable icons and feeds the XZ coordinate region of each operable icon back to the action judging unit 108. The chart drawing unit 109 changes the positions of the operable icons according to the different operations, makes a specific response (for example, highlighting, dragging, or deleting) according to the position region in which the click action occurred, and sends the drawn image to the display control unit 110, while at the same time updating the coordinate values of the moved operable icons and feeding them back to the action judging unit 108.
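The icon bookkeeping described above can be sketched as a small hit-test table: each icon holds an XZ region, a click is resolved to the icon containing the click position, and the response (here simply marking the icon as selected) is recorded. All class and method names are hypothetical illustrations, not part of the patent.

```python
# Illustrative sketch of the icon feedback loop: the drawing unit keeps the
# XZ region of each operable icon, resolves a click position to the icon
# containing it, and responds (here by marking the icon selected).

class IconTable:
    def __init__(self):
        # icon name -> (x_min, x_max, z_min, z_max) region on the XZ plane
        self.regions = {}
        self.selected = None

    def add_icon(self, name, region):
        self.regions[name] = region

    def handle_click(self, x, z):
        """Return the icon at (x, z), marking it selected, or None."""
        for name, (x0, x1, z0, z1) in self.regions.items():
            if x0 <= x <= x1 and z0 <= z <= z1:
                self.selected = name   # respond, e.g. highlight the icon
                return name
        return None

icons = IconTable()
icons.add_icon("mail", (0, 10, 0, 10))
print(icons.handle_click(5, 5))    # -> mail
print(icons.handle_click(50, 50))  # no icon at that position -> None
```

In the described system the updated regions would be fed back to the action judging unit 108 after every move, so that hit testing always uses current icon positions.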
In step S39, the display control unit 110 converts the image drawn by the chart drawing unit 109 into a sequence that the display device 21 can show, and calls the display unit 111 to display the image of the operation on the virtual touch screen on the display device 21 for the user to watch. From this feedback the user learns the position in the virtual touch screen corresponding to the current hand center point, and can then continue to move the hand according to the displayed content to proceed with the virtual touch control operation.
In the virtual touch operation device, system and method provided by the present invention, a camera captures images, image recognition of the hand determines the operating position, the operation mode is judged by recognizing the number of fingers, the obtained three-dimensional hand coordinates are mapped directly to operation actions on the virtual touch screen, and the result is shown on a display as feedback to the user. Touch-screen input thus no longer requires a physical device: with the picture pick-up devices in smart glasses and a smart bracelet, or in an intelligent portable mobile device, a virtual touch-screen input environment can be built quickly, allowing the user to perform touch-screen input anywhere and at any time and to carry out free human-machine interaction through the virtual touch screen.
The foregoing are merely embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and accompanying drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the present invention.