CN111857346A - Gesture control method and device - Google Patents


Info

Publication number
CN111857346A
CN111857346A (application CN202010718137.6A; granted as CN111857346B)
Authority
CN
China
Prior art keywords
target
gesture image
gesture
function item
controlled
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010718137.6A
Other languages
Chinese (zh)
Other versions
CN111857346B (en)
Inventor
杨华
龚圆杰
张涛
褚天
洪瑋鸿
覃进武
Current Assignee
Shanghai Chunmi Electronics Technology Co Ltd
Original Assignee
Shanghai Chunmi Electronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Chunmi Electronics Technology Co Ltd filed Critical Shanghai Chunmi Electronics Technology Co Ltd
Priority to CN202010718137.6A priority Critical patent/CN111857346B/en
Publication of CN111857346A publication Critical patent/CN111857346A/en
Application granted granted Critical
Publication of CN111857346B publication Critical patent/CN111857346B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 — Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 — Movements or behaviour, e.g. gesture recognition
    • G06V 40/28 — Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a gesture control method and device. The method includes the following steps: acquiring a first picture that includes a first gesture image of a user; acquiring characteristic parameters of the first gesture image, the characteristic parameters including the finger types and/or finger numbers corresponding to the first gesture image; and determining a target control object to be controlled according to the characteristic parameters of the first gesture image, where the target control object is a target device, a target operation object, or a target function item. A device includes at least one operation object, and one operation object corresponds to at least one function item. With this technical solution, the user can select the target control object to be controlled through gestures during cooking and then adjust it, which improves the friendliness of the smart device and the convenience of user operation, giving a better user experience.

Description

Gesture control method and device
Technical Field
The disclosure relates to the technical field of terminal control, and in particular relates to a gesture control method and device.
Background
With the development of science and technology, intelligent kitchen appliances are being applied more and more widely, and personalized, friendly interaction between such appliances and their users is the trend of their development.
In the related art, a user's hands often become soiled with grease, flour, seasonings, and other stains during cooking. If the kitchen appliance is operated through keys or knobs, the stains on the user's hands may contaminate the appliance's operation panel, which increases the difficulty of cleaning the appliance and degrades the user experience.
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a gesture control method and apparatus. The technical solution is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a gesture control method, including:
acquiring a first picture, wherein the first picture comprises a first gesture image of a user;
acquiring characteristic parameters of the first gesture image, wherein the characteristic parameters comprise finger types and/or finger numbers corresponding to the first gesture image;
determining a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises target equipment, a target operation object or a target function item; an apparatus includes at least one operation object; one operation object corresponds to at least one function item.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects: the user can select the target control object to be controlled through gestures during cooking and then adjust it, which improves the friendliness of the smart device and the convenience of user operation, giving a better user experience.
In one embodiment, the determining a target control object to be controlled according to the characteristic parameter of the first gesture image includes:
and acquiring a control object corresponding to the characteristic parameter of the first gesture image as a target control object according to the characteristic parameter of the first gesture image and the corresponding relation between the pre-stored characteristic parameter and the control object.
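As a concrete illustration, the pre-stored correspondence described above can be sketched as a simple lookup table from recognized finger combinations to control objects. This is a minimal sketch under assumed table contents, not the patented implementation; the names `GESTURE_MAP` and `lookup_control_object` are illustrative.

```python
# Hypothetical correspondence between characteristic parameters (here, finger
# types) and control objects; a real device would pre-store its own table.
GESTURE_MAP = {
    ("index",): "electric cooker",
    ("middle",): "range hood",
    ("ring",): "cooker",
    ("index", "middle"): "left air opening",
}

def lookup_control_object(finger_types):
    """Return the control object mapped to the recognized finger combination,
    or None if the gesture has no pre-stored correspondence."""
    # Sort so the lookup does not depend on recognition order.
    return GESTURE_MAP.get(tuple(sorted(finger_types)))
```

A gesture with no entry in the table simply yields no target control object, so unrecognized gestures are ignored.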
In one embodiment, the determining a target control object to be controlled according to the characteristic parameter of the first gesture image includes:
determining target equipment to be controlled from a plurality of pieces of equipment according to the characteristic parameters of the first gesture image;
or after the target device to be controlled is determined, determining a target operation object to be controlled from at least one operation object included in the target device according to the characteristic parameters of the first gesture image;
or after the target operation object to be controlled is determined, determining a target function item to be controlled from at least one function item corresponding to the target operation object according to the characteristic parameters of the first gesture image.
In one embodiment, the target control object is a target device; the method further comprises the following steps:
acquiring a pointing region of a first gesture image included in the first picture;
and determining a target operation object or a target function item to be controlled from a plurality of operation objects included in the target equipment according to the pointing region of the first gesture image.
In one embodiment, the target control object is a target operation object or a target function item; the method further comprises the following steps:
acquiring a pointing region of a first gesture image included in the first picture;
and determining target equipment to be controlled from a plurality of equipment according to the pointing region of the first gesture image.
In one embodiment, the method comprises:
after the target function item is determined, acquiring a plurality of second pictures, wherein each second picture comprises a second gesture image of the user;
acquiring a gesture motion track according to second gesture images included by the plurality of second pictures;
and adjusting the target function item according to the gesture motion track.
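The trajectory-based adjustment in this embodiment can be sketched as follows, assuming the gesture's position has been extracted from each second picture and that upward motion raises the function item while downward motion lowers it. The direction convention and the name `adjust_function_item` are assumptions for illustration only.

```python
def adjust_function_item(value, trajectory, step=1):
    """Adjust a function item (e.g. a wind power level) from the gesture
    motion track, given as (x, y) image coordinates taken from the
    consecutive second pictures."""
    if len(trajectory) < 2:
        return value  # not enough points to infer a motion direction
    dy = trajectory[-1][1] - trajectory[0][1]
    if dy < 0:   # y decreases upward in image coordinates: raise the value
        return value + step
    if dy > 0:   # downward motion: lower the value
        return value - step
    return value
```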
According to a second aspect of the embodiments of the present disclosure, there is provided a gesture control apparatus including:
the first picture acquisition module is used for acquiring a first picture, and the first picture comprises a first gesture image of a user;
the parameter acquisition module is used for acquiring characteristic parameters of the first gesture image, wherein the characteristic parameters comprise finger types and/or finger numbers corresponding to the first gesture image;
the object determination module is used for determining a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises target equipment, a target operation object or a target function item; an apparatus includes at least one operation object; one operation object corresponds to at least one function item.
In one embodiment, the object determination module comprises:
and the object determining submodule is used for acquiring a control object corresponding to the characteristic parameter of the first gesture image as a target control object according to the characteristic parameter of the first gesture image and the corresponding relation between the pre-stored characteristic parameter and the control object.
In one embodiment, the object determination module comprises:
the device determining submodule is used for determining target devices to be controlled from a plurality of devices according to the characteristic parameters of the first gesture image;
or, the operation object determining submodule is used for determining a target operation object to be controlled from at least one operation object included in the target equipment according to the characteristic parameters of the first gesture image after the target equipment to be controlled is determined;
or the function item determining submodule is used for determining a target function item to be controlled from at least one function item corresponding to the target operation object according to the characteristic parameter of the first gesture image after the target operation object to be controlled is determined.
In one embodiment, the target control object is a target device; the device further comprises:
the first region acquisition module is used for acquiring a pointing region of a first gesture image included in the first picture;
and the operation object determining module is used for determining a target operation object or a target function item to be controlled from a plurality of operation objects included in the target equipment according to the pointing region of the first gesture image.
In one embodiment, the target control object is a target operation object or a target function item; the device further comprises:
the second area acquisition module is used for acquiring a pointing area of a first gesture image included in the first picture;
and the equipment determining module is used for determining target equipment to be controlled from a plurality of pieces of equipment according to the pointing area of the first gesture image.
In one embodiment, the apparatus comprises:
the second picture acquisition module is used for acquiring a plurality of second pictures after the target function item is determined, wherein each second picture comprises a second gesture image of the user;
the track acquisition module is used for acquiring a gesture motion track according to second gesture images included by the second pictures;
and the adjusting module is used for adjusting the target function item according to the gesture motion track.
According to a third aspect of the embodiments of the present disclosure, there is provided a gesture control apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a first picture, wherein the first picture comprises a first gesture image of a user;
acquiring characteristic parameters of the first gesture image, wherein the characteristic parameters comprise finger types and/or finger numbers corresponding to the first gesture image;
determining a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises target equipment, a target operation object or a target function item; an apparatus includes at least one operation object; one operation object corresponds to at least one function item.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to any one of the embodiments of the first aspect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1a is a flowchart illustrating a gesture control method according to an exemplary embodiment.
FIG. 1b is a flowchart illustrating a gesture control method according to an exemplary embodiment.
FIG. 1c is a flowchart illustrating a gesture control method according to an exemplary embodiment.
FIG. 1d is a flowchart illustrating a gesture control method according to an exemplary embodiment.
Fig. 2a is a schematic structural diagram of a gesture control apparatus according to an exemplary embodiment.
Fig. 2b is a schematic structural diagram of a gesture control apparatus according to an exemplary embodiment.
Fig. 2c is a schematic structural diagram of a gesture control apparatus according to an exemplary embodiment.
Fig. 2d is a schematic structural diagram of a gesture control apparatus according to an exemplary embodiment.
Fig. 2e is a schematic structural diagram of a gesture control apparatus according to an exemplary embodiment.
Fig. 2f is a schematic structural diagram of a gesture control apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1a is a flowchart illustrating a gesture control method according to an exemplary embodiment, and as shown in fig. 1a, the gesture control method includes the following steps 101 to 103:
in step 101, a first picture is obtained, the first picture comprising a first gesture image of a user.
In step 102, characteristic parameters of the first gesture image are acquired, and the characteristic parameters include a finger type and/or a finger number corresponding to the first gesture image.
In step 103, a target control object to be controlled is determined according to the characteristic parameters of the first gesture image, wherein the target control object comprises a target device, a target operation object or a target function item.
It should be noted that a device may include at least one operation object, and one operation object may correspond to at least one function item. For example, if the device is a range hood, its operation objects may be the left air opening and the right air opening, and the function items corresponding to each air opening may be wind power and wind direction. Or, if the device is a cooker, its operation objects may be the left cooking range and the right cooking range, and the function items corresponding to each cooking range may be ignition, heat level, and flame-off.
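The device → operation object → function item hierarchy used throughout these examples can be pictured as a nested mapping. The structure below only sketches the examples just given; the names are assumptions.

```python
# Nested mapping: each device exposes operation objects, and each operation
# object exposes its function items.
DEVICES = {
    "range hood": {
        "left air opening":  ["wind power", "wind direction"],
        "right air opening": ["wind power", "wind direction"],
    },
    "cooker": {
        "left cooking range":  ["ignition", "heat level", "flame off"],
        "right cooking range": ["ignition", "heat level", "flame off"],
    },
}
```

Selecting a target control object then amounts to descending one level of this mapping per recognized gesture.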
For example, the gesture control method may be applied to a central control device in a kitchen. The central control device is connected to a camera and can control multiple devices, which in this case may be intelligent kitchen appliances. The central control device can acquire the pictures shot by the camera in real time and determine whether each picture includes a gesture image of the user. If so, that picture is determined as the first picture, and the gesture image it includes is the first gesture image; if not, the picture is discarded and monitoring continues.
After the first picture is acquired, the central control device may determine the characteristic parameters of the first gesture image through image recognition, that is, determine the type and/or number of fingers that the first gesture image indicates the user has extended, and then acquire the target control object according to that type and/or number. Specifically, the central control device may determine the target device to be controlled from the multiple devices it controls according to the characteristic parameters of the first gesture image. Or, after the target device to be controlled is determined, the target operation object to be controlled is determined from the at least one operation object included in the target device according to the characteristic parameters of the first gesture image; at this time, the device currently used by the user may be determined as the target device. Or, after the target operation object to be controlled is determined, the target function item to be controlled is determined from the at least one function item corresponding to the target operation object according to the characteristic parameters of the first gesture image; at this time, an operation object included in the device currently used by the user may be determined as the target operation object.
The finger type may be a single finger, such as the thumb, index finger, middle finger, ring finger, or little finger; it may also be a combination of multiple fingers, such as thumb plus index finger, index plus middle plus ring finger, or index plus middle plus ring plus little finger, which is not limited in this disclosure.
Optionally, the central control device prestores a corresponding relationship between the characteristic parameter and the control object, and after the characteristic parameter of the first gesture image is obtained, the corresponding relationship may be referred to according to the characteristic parameter of the first gesture image, and the control object corresponding to the characteristic parameter of the first gesture image is obtained as the target control object.
Taking the characteristic parameter as a finger type and the target control object as a target device as an example, assume the correspondence between characteristic parameters and control objects records that the index finger corresponds to an electric cooker, the middle finger to a range hood, and the ring finger to a cooker. When the central control device recognizes that the finger type of the first gesture image is the index finger, that is, the user is currently extending the index finger, the user needs to control the electric cooker through the gesture, and the central control device can determine the electric cooker as the target control object. Likewise, when it recognizes the ring finger, the user needs to control the cooker, and the central control device can determine the cooker as the target control object. After the target device is determined, the central control device may send selection information to it; upon receiving the selection information, the target device may display a reminder so that, after seeing it, the user can again use gestures to determine a target operation object from the at least one operation object the target device includes, and then a target function item from the at least one function item corresponding to that operation object.
Taking the characteristic parameter as a finger type and the target control object as a target operation object as an example, assume the correspondence between characteristic parameters and control objects records that the combination of index and middle fingers corresponds to the left air opening of the range hood, the combination of ring and middle fingers to the right air opening of the range hood, the combination of ring and index fingers to the left cooking range of the cooker, and the combination of ring and little fingers to the right cooking range of the cooker. When the central control device recognizes that the finger type of the first gesture image is the combination of index and middle fingers, that is, the user is extending the index and middle fingers at the same time, the user needs to control the left air opening of the range hood through the gesture, and the central control device can determine the left air opening as the target control object. Likewise, when it recognizes the combination of ring and little fingers, the user needs to control the right cooking range of the cooker, and the central control device can determine the right cooking range as the target control object.
After the target operation object is acquired, the central control device may send selection information to the target device corresponding to the target operation object; upon receiving the selection information, the target device may display a reminder so that, after seeing it, the user can again use a gesture to determine the target function item from the at least one function item corresponding to the target operation object.
Taking the characteristic parameter as the number of fingers and the target control object as the target device as an example, assume the correspondence between characteristic parameters and control objects records that one finger corresponds to the electric cooker, two fingers to the range hood, and three fingers to the cooker. When the central control device recognizes that the number of fingers in the first gesture image is two, that is, the user is currently extending two fingers, the user needs to control the range hood through the gesture, and the central control device can determine the range hood as the target control object.
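The finger-count variant can be sketched the same way as the finger-type lookup. The table below mirrors the example just given (one finger → electric cooker, two → range hood, three → cooker); the table and function names are assumptions.

```python
# Hypothetical pre-stored correspondence from finger count to target device.
FINGER_COUNT_TO_DEVICE = {1: "electric cooker", 2: "range hood", 3: "cooker"}

def device_from_finger_count(count):
    """Return the target device mapped to the number of extended fingers,
    or None if no device corresponds to that count."""
    return FINGER_COUNT_TO_DEVICE.get(count)
```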
For example, the gesture control method can also be applied to a target device, which may be an intelligent kitchen appliance that is provided with a camera and includes at least one operation object. The target device can acquire the pictures shot by the camera in real time and determine whether each picture includes a gesture image of the user. If so, that picture is determined as the first picture, and the gesture image it includes is the first gesture image; if not, the picture is discarded and monitoring continues.
After the first picture is acquired, the target device may determine the characteristic parameters of the first gesture image through image recognition, that is, determine the type and/or number of fingers that the first gesture image indicates the user has extended, and then acquire the target control object according to that type and/or number. Specifically, the target device may determine a target operation object to be controlled from the at least one operation object included in the target device, according to the characteristic parameters of the first gesture image. Or, after the target operation object to be controlled is determined, the target function item to be controlled is determined from the at least one function item corresponding to the target operation object according to the characteristic parameters of the first gesture image; at this time, an operation object included in the device currently used by the user may be determined as the target operation object.
The finger type may be a single finger type, or a combination of multiple fingers, which is not limited in this disclosure.
Optionally, the target device prestores the correspondence between characteristic parameters and control objects; after the characteristic parameters of the first gesture image are acquired, the correspondence may be consulted according to those parameters, and the control object corresponding to them is acquired as the target control object.
Taking the characteristic parameter as a finger type, the target device as a range hood, and the target control object as a target operation object as an example, assume the correspondence between characteristic parameters and control objects records that the index finger corresponds to the left air opening and the middle finger to the right air opening. When the range hood recognizes that the finger type of the first gesture image is the index finger, that is, the user is currently extending the index finger, the user needs to control the left air opening of the range hood through the gesture, and the range hood can determine the left air opening as the target control object. Likewise, when it recognizes the middle finger, the user needs to control the right air opening, and the range hood can determine the right air opening as the target control object. After the target operation object is determined, the target device may display a reminder so that, after seeing it, the user can again use a gesture to determine the target function item from the at least one function item corresponding to the target operation object.
Taking the characteristic parameter as a finger type, the target device as a range hood, and the target control object as a target function item as an example, assume the correspondence between characteristic parameters and control objects records that the combination of index and middle fingers corresponds to the wind power of the left air opening, the combination of ring and middle fingers to the wind direction of the left air opening, the combination of ring and index fingers to the wind power of the right air opening, and the combination of ring and little fingers to the wind direction of the right air opening. When the range hood recognizes that the finger type of the first gesture image is the combination of index and middle fingers, that is, the user is extending the index and middle fingers at the same time, the user needs to control the wind power of the left air opening through the gesture, and the range hood can determine the wind power of the left air opening as the target control object. Likewise, when it recognizes the combination of ring and little fingers, the user needs to control the wind direction of the right air opening, and the range hood can determine the wind direction of the right air opening as the target control object.
Taking the characteristic parameter as the number of fingers, the target device as a cooker, and the target control object as a target operation object as an example, assume the correspondence between characteristic parameters and control objects records that one finger corresponds to the left cooking range and two fingers to the right cooking range. When the cooker recognizes that the number of fingers in the first gesture image is two, that is, the user is currently extending two fingers, the user needs to control the right cooking range through the gesture, and the cooker can determine the right cooking range as the target control object.
According to the above technical solution, the user can select the target control object to be controlled through gestures during cooking and then adjust it, which improves the friendliness of the smart device and the convenience of user operation, giving a better user experience.
In one embodiment, the target control object is a target device. As shown in fig. 1b, the method further comprises steps 104 and 105:
in step 104, a pointing region of a first gesture image included in the first picture is obtained.
In step 105, a target operation object to be controlled is determined from a plurality of operation objects included in the target device according to the pointing region of the first gesture image.
For example, the pointing region of the first gesture image may be the location of the first gesture image within the first picture. Specifically, the target device stores in advance a correspondence between pointing regions and operation objects, with different pointing regions corresponding to different operation objects. After the pointing region is acquired, the target operation object can be determined according to this correspondence.
Taking the target device as a range hood as an example, the correspondence between pointing regions and operation objects records that the operation object corresponding to the left region is the left air port, and the operation object corresponding to the right region is the right air port. After acquiring the first picture, the range hood determines whether the first gesture image is located on the left or the right side of the first picture. If it is on the left side, the pointing region of the first gesture image is the left region, and the left air port is determined as the target operation object; if it is on the right side, the pointing region is the right region, and the right air port is determined as the target operation object.
In practical application, a correspondence between pointing regions and function items of operation objects can also be set. For example, the correspondence may record that the upper-left region corresponds to the air volume of the left air port; the lower-left region corresponds to the air direction of the left air port; the upper-right region corresponds to the air volume of the right air port; and the lower-right region corresponds to the air direction of the right air port. After acquiring the first picture, the range hood can determine the function item the user wants to operate according to the region of the first picture occupied by the first gesture image; for the specific process, refer to the above embodiment.
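The quadrant-based pointing-region lookup above can be sketched as follows. The quadrant labels and function-item strings are hypothetical illustrations of the correspondence table; image coordinates are assumed, with the origin at the top-left corner so that a smaller y means "upper".

```python
# Hypothetical correspondence between picture quadrants and function items,
# following the range-hood example in the description.
QUADRANT_TO_FUNCTION = {
    ("left", "upper"):  "air volume of left air port",
    ("left", "lower"):  "air direction of left air port",
    ("right", "upper"): "air volume of right air port",
    ("right", "lower"): "air direction of right air port",
}

def pointing_region(cx, cy, width, height):
    """Classify the gesture's center point (cx, cy) into one of four
    quadrants of a width x height picture (top-left origin)."""
    side = "left" if cx < width / 2 else "right"
    band = "upper" if cy < height / 2 else "lower"
    return side, band

def function_for_gesture(cx, cy, width, height):
    """Look up the function item for the quadrant the gesture occupies."""
    return QUADRANT_TO_FUNCTION[pointing_region(cx, cy, width, height)]
```

For a 640x480 picture, a gesture centered near the top-left maps to the air volume of the left air port under this table.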
In one embodiment, the target control object is a target operation object or a target function item. As shown in fig. 1c, the method further comprises step 106 and step 107:
in step 106, acquiring a pointing region of a first gesture image included in the first picture;
in step 107, a target device to be controlled is determined from the plurality of devices according to the pointing region of the first gesture image.
For example, after acquiring the first picture, the central control device may first acquire the pointing region of the first gesture image included in the first picture, then determine the target device to be controlled from the multiple devices according to the pointing region, and finally acquire the characteristic parameter of the first gesture image and determine the target operation object or target function item according to that parameter. Alternatively, the pointing region of the first gesture image may be defined by the positional relationship between the first gesture image and the image of the target device in the first picture; for example, the pointing region may be the area above or below the gesture in the first picture.
For example, suppose the pointing region of the first gesture image is the area below it in the first picture. The central control device first acquires a first picture, which contains, in addition to the first gesture image, images of the multiple devices controlled by the central control device. It then identifies the object located below the first gesture image in the first picture and determines the device corresponding to that object as the target device.
It should be noted that the user may select the target device and the target operation object in sequence according to the above method until the target function item is reached, and then directly adjust the target function item. For example, the target device may be determined from the plurality of devices through the pointing region of the first gesture image, the target operation object may be acquired from the plurality of operation objects included in the target device through the finger type of the first gesture image, and the target function item may then be determined from the plurality of function items corresponding to the target operation object through the number of fingers of the first gesture image. The user may complete this determination process through one gesture instruction or through multiple gesture instructions, which is not limited in the embodiments of the present disclosure.
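The cascaded selection just described — pointing region to target device, finger type to operation object, finger count to function item — can be sketched as three chained lookups. All table keys and values below are illustrative placeholders, not the actual correspondences of any product.

```python
# Hypothetical three-level correspondence tables for the cascaded
# selection: region -> device, (device, finger type) -> operation
# object, (operation object, finger count) -> function item.
DEVICES = {
    "below hood": "range hood",
    "above left burner": "cooker",
}

OPERATION_OBJECTS = {
    ("range hood", "thumb"): "air ports",
    ("cooker", "index"): "left cooking range",
}

FUNCTION_ITEMS = {
    ("air ports", 1): "air volume",
    ("left cooking range", 5): "fire power",
}

def resolve(region, finger_type, finger_count):
    """Resolve (device, operation object, function item) step by step;
    each step returns None when no correspondence is recorded."""
    device = DEVICES.get(region)
    obj = OPERATION_OBJECTS.get((device, finger_type))
    item = FUNCTION_ITEMS.get((obj, finger_count))
    return device, obj, item
```

A single gesture instruction that fixes all three characteristic parameters resolves the full chain in one call; spreading them over several gestures simply fills in the arguments one at a time.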
In one embodiment, as shown in FIG. 1d, the method includes steps 108-110:
in step 108, after the target function item is determined, a plurality of second pictures are acquired, and each second picture comprises a second gesture image of the user.
In step 109, a gesture motion trajectory is obtained according to the second gesture images included in the plurality of second pictures.
In step 110, the target function item is adjusted according to the gesture motion trajectory.
For example, after the target function item is determined by one gesture operation, the central control device may continuously obtain the pictures acquired by the camera and determine consecutive pictures containing gesture images as the plurality of second pictures. It determines a gesture motion trajectory by analyzing the positions of the second gesture images in the plurality of second pictures, and then adjusts the target function item according to the trajectory. Optionally, the central control device stores a correspondence between gesture motion trajectories and adjustment operations, for example: rotating to the right increases a parameter value; rotating to the left decreases the parameter value; gathering toward the center turns a function off; opening toward the periphery turns it on. After the gesture motion trajectory is acquired, this correspondence can be queried to determine how to adjust the target function item.
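One plausible way to classify the rotation direction from the second gesture images is to sum the signed angle changes of the hand position about the trajectory's centroid across frames. This is only a sketch under that assumption — the disclosure does not specify the trajectory-analysis algorithm — and it assumes image coordinates with y growing downward, so a clockwise hand motion yields a positive angle sum.

```python
import math

def rotation_direction(points):
    """Classify a hand trajectory, given as (x, y) centers of the second
    gesture images per frame, as 'right' (clockwise as seen by the
    camera) or 'left' (counterclockwise), by summing signed angle steps
    about the trajectory centroid."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    angles = [math.atan2(p[1] - cy, p[0] - cx) for p in points]
    total = 0.0
    for a0, a1 in zip(angles, angles[1:]):
        d = a1 - a0
        # Unwrap to the shortest signed step in (-pi, pi].
        while d > math.pi:
            d -= 2 * math.pi
        while d < -math.pi:
            d += 2 * math.pi
        total += d
    # With a top-left image origin, clockwise motion gives total > 0.
    return "right" if total > 0 else "left"

# Illustrative correspondence between trajectory and adjustment.
ADJUSTMENT = {"right": "increase parameter", "left": "decrease parameter"}
```

Querying `ADJUSTMENT[rotation_direction(points)]` then plays the role of looking up the stored correspondence between gesture motion trajectories and adjustment operations.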
The gestures in the first gesture image and the second gesture image may be the same or different. The embodiments of the present disclosure are described taking the case where the two gestures are the same as an example, but the present disclosure is not limited thereto.
For example, when the user needs to adjust the air volume of the range hood (adjusting the air volume of the left and right air ports at the same time), the user can extend a thumb below the range hood. After acquiring the first picture, the central control device identifies the object image in the pointing region (above) of the first gesture image. If the device corresponding to that object image is the range hood, the range hood is determined as the target device. The central control device then identifies the finger type of the first gesture image; if it is the thumb, the air volume of the air ports of the range hood is taken as the target function item. After the target function item is determined, the central control device may instruct the range hood to display a reminder message. After reviewing the reminder, the user can rotate the whole hand with the extended thumb clockwise. During this process the central control device continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the gesture motion trajectory is the thumb rotating to the right. By consulting the correspondence between gesture motion trajectories and adjustment operations, the central control device determines that the user wants to increase the air volume of the range hood, and the air volume is gradually increased as the user rotates. In practical application, the user's rotation and the increase of the air volume can proceed simultaneously; the user can stop rotating when the air volume is sufficient, which ends the gesture operation.
Alternatively, the user may rotate the whole hand with the extended thumb counterclockwise after reviewing the reminder information. During this process the central control device continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the gesture motion trajectory is the thumb rotating to the left. By consulting the correspondence between gesture motion trajectories and adjustment operations, it determines that the user wants to reduce the air volume of the range hood, and the air volume is gradually reduced as the user rotates. In practical application, the user's rotation and the reduction of the air volume can proceed simultaneously; the user can stop rotating when the air volume is appropriate, which ends the gesture operation.
For example, when the user needs to adjust the fire power of the left cooking range, the user can extend five fingers above the left cooking range, as if gripping the air. After acquiring the first picture, the central control device identifies the object image in the pointing region (below) of the first gesture image. If the object image corresponds to the left cooking range of the cooker, the left cooking range is determined as the target control object. The central control device then identifies the finger type of the first gesture image; if it is five fingers, the fire power of the left cooking range is taken as the target function item. After the target function item is determined, the central control device can instruct the cooker to display reminder information. After reviewing it, the user can rotate the extended five fingers clockwise. During this process the central control device continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is five fingers rotating to the right, and by consulting the correspondence between trajectories and adjustment operations it determines that the user wants to increase the fire power of the left cooking range, which is gradually increased as the user rotates. In practical application, the user's rotation and the increase of the fire power can proceed simultaneously; the user can stop rotating when the fire power is sufficient, which ends the gesture operation. Alternatively, the user may rotate the extended five fingers counterclockwise after reviewing the reminder information.
During this process the central control device continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is five fingers rotating to the left, determines by consulting the correspondence between trajectories and adjustment operations that the user wants to reduce the fire power of the left cooking range, and gradually reduces the fire power as the user rotates. Alternatively, after reviewing the reminder information, the user can draw the five extended fingers toward the center to finally form a fist. During this process the central control device continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is five fingers gathering toward the center, determines by consulting the correspondence that the user wants to extinguish the left cooking range, and extinguishes its fire power.
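The "five fingers gather toward the center" trajectory could be detected by watching the spread of the fingertip positions shrink across the second pictures. The sketch below assumes fingertip coordinates are available per frame (the disclosure does not specify how fingertips are located) and uses a hypothetical shrink threshold.

```python
def fingertip_spread(tips):
    """Mean distance of the fingertip points from their centroid."""
    cx = sum(x for x, _ in tips) / len(tips)
    cy = sum(y for _, y in tips) / len(tips)
    return sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in tips) / len(tips)

def is_gather_gesture(frames, shrink_ratio=0.5):
    """frames: one fingertip-position list per second picture.
    Returns True when the final spread falls below shrink_ratio of the
    initial spread, i.e. the open hand has closed toward a fist -- the
    trajectory mapped to the extinguish command in the example above."""
    first, last = fingertip_spread(frames[0]), fingertip_spread(frames[-1])
    return last < shrink_ratio * first
```

When `is_gather_gesture` fires, the device would consult its trajectory-to-operation correspondence and issue the extinguish command for the selected burner.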
For example, after the target device determines the target function item through one gesture operation, it may continuously obtain the pictures acquired by the camera and determine consecutive pictures containing gesture images as the plurality of second pictures. It determines a gesture motion trajectory by analyzing the positions of the second gesture images in the plurality of second pictures, and then adjusts the target function item according to the trajectory. Optionally, the target device stores a correspondence between gesture motion trajectories and adjustment operations, for example: rotating to the right increases a parameter value; rotating to the left decreases the parameter value; gathering toward the center turns a function off; opening toward the periphery turns it on. After the gesture motion trajectory is acquired, this correspondence can be queried to determine how to adjust the target function item.
Taking the target device as the range hood as an example, when the user needs to adjust the air volume of the range hood (adjusting the air volume of the left and right air ports at the same time), the user can extend a thumb below the range hood. After acquiring the first picture, the range hood identifies the position of the first gesture image within it. If the first gesture image is located on the lower side of the first picture, the air volume of the air ports of the range hood is taken as the target function item. After the target function item is determined, the range hood may display a reminder message. After reviewing the reminder, the user can rotate the whole hand with the extended thumb clockwise. During this process the range hood continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is the thumb rotating to the right, determines by consulting the correspondence between trajectories and adjustment operations that the user wants to increase the air volume, and gradually increases the air volume as the user rotates. In practical application, the user's rotation and the increase of the air volume can proceed simultaneously; the user can stop rotating when the air volume is sufficient, which ends the gesture operation. Alternatively, the user may rotate the whole hand with the extended thumb counterclockwise after reviewing the reminder information.
During this process the range hood continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is the thumb rotating to the left, determines by consulting the correspondence between trajectories and adjustment operations that the user wants to reduce the air volume, and gradually reduces the air volume as the user rotates. In practical application, the user's rotation and the reduction of the air volume can proceed simultaneously; the user can stop rotating when the air volume is appropriate, which ends the gesture operation.
For example, when the user needs to adjust the fire power of the left cooking range, the user can extend five fingers above the left cooking range, as if gripping the air. After acquiring the first picture, the cooker identifies the position of the first gesture image within it. If the first gesture image is located on the left side of the first picture, the left cooking range of the cooker is determined as the target control object. The cooker then identifies the finger type of the first gesture image; if it is five fingers, the fire power of the left cooking range is taken as the target function item. After the target function item is determined, the cooker can display reminder information. After reviewing it, the user can rotate the extended five fingers clockwise. During this process the cooker continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is five fingers rotating to the right, determines by consulting the correspondence between trajectories and adjustment operations that the user wants to increase the fire power of the left cooking range, and gradually increases the fire power as the user rotates. In practical application, the user's rotation and the increase of the fire power can proceed simultaneously; the user can stop rotating when the fire power is sufficient, which ends the gesture operation. Alternatively, the user may rotate the extended five fingers counterclockwise after reviewing the reminder information.
During this process the cooker continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is five fingers rotating to the left, determines by consulting the correspondence between trajectories and adjustment operations that the user wants to reduce the fire power of the left cooking range, and gradually reduces the fire power as the user rotates. Alternatively, after reviewing the reminder information, the user can draw the five extended fingers toward the center to finally form a fist. During this process the cooker continuously acquires a plurality of second pictures; by analyzing the second gesture images they contain, it recognizes that the trajectory is five fingers gathering toward the center, determines by consulting the correspondence that the user wants to extinguish the left cooking range, and extinguishes its fire power.
The embodiments of the present disclosure provide a gesture control method. During cooking, the user can select the target control object to be controlled through a gesture and then adjust it, which avoids contaminating the device during operation and the resulting difficulty of cleaning, improves the friendliness of the intelligent device and the convenience of user operation, and provides a better user experience.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 2a is a schematic structural diagram illustrating a gesture control apparatus 20 according to an exemplary embodiment, where the apparatus 20 may be implemented as part or all of an electronic device through software, hardware, or a combination of the two. As shown in fig. 2a, the gesture control apparatus 20 includes a first picture obtaining module 201, a parameter obtaining module 202 and an object determining module 203.
The first picture obtaining module 201 is configured to obtain a first picture, where the first picture includes a first gesture image of a user.
A parameter obtaining module 202, configured to obtain a feature parameter of the first gesture image, where the feature parameter includes a finger type and/or a finger number corresponding to the first gesture image.
An object determining module 203, configured to determine a target control object to be controlled according to the characteristic parameter of the first gesture image, where the target control object includes a target device, a target operation object, or a target function item. An apparatus includes at least one operation object. One operation object corresponds to at least one function item.
In one embodiment, as shown in FIG. 2b, the object determination module 203 includes an object determination submodule 2031.
The object determining sub-module 2031 is configured to obtain, as a target control object, a control object corresponding to the characteristic parameter of the first gesture image according to the characteristic parameter of the first gesture image and a correspondence between a pre-stored characteristic parameter and the control object.
In one embodiment, as shown in fig. 2c, the object determination module 203 comprises:
the device determining sub-module 2032 is configured to determine a target device to be controlled from the multiple devices according to the characteristic parameter of the first gesture image.
Or, the operation object determining sub-module 2033 is configured to, after the target device to be controlled has been determined, determine a target operation object to be controlled from at least one operation object included in the target device according to the characteristic parameter of the first gesture image.
Or, the function item determining sub-module 2034 is configured to, after the target operation object to be controlled is determined, determine, according to the feature parameter of the first gesture image, a target function item to be controlled from at least one function item corresponding to the target operation object.
In one embodiment, as shown in fig. 2d, the target control object is a target device. The apparatus 20 further comprises a first region acquisition module 204 and an operation object determination module 205.
The first region obtaining module 204 is configured to obtain a pointing region of a first gesture image included in the first picture.
An operation object determining module 205, configured to determine, according to the pointing region of the first gesture image, a target operation object or a target function item to be controlled from a plurality of operation objects included in the target device.
In one embodiment, as shown in fig. 2e, the target control object is a target operation object or a target function item. The apparatus 20 further comprises a second region acquisition module 206 and a device determination module 207.
The second region obtaining module 206 is configured to obtain a pointing region of a first gesture image included in the first picture.
And the device determining module 207 is configured to determine a target device to be controlled from the multiple devices according to the pointing region of the first gesture image.
In one embodiment, as shown in fig. 2f, the apparatus 20 comprises a second picture taking module 208, a trajectory taking module 209 and an adjusting module 210.
The second image obtaining module 208 is configured to obtain a plurality of second images after determining the target function item, where each second image includes a second gesture image of the user.
And a track obtaining module 209, configured to obtain a gesture motion track according to the second gesture image included in the plurality of second pictures.
And an adjusting module 210, configured to adjust the target function item according to the gesture motion trajectory.
The embodiments of the present disclosure provide a gesture control apparatus. During cooking, the user can select the target control object to be controlled through a gesture and then adjust it, which avoids contaminating the device during operation and the resulting difficulty of cleaning, improves the friendliness of the intelligent device and the convenience of user operation, and provides a better user experience.
The disclosed embodiment provides a gesture control device, which comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a first picture, wherein the first picture comprises a first gesture image of a user;
acquiring characteristic parameters of the first gesture image, wherein the characteristic parameters comprise finger types and/or finger numbers corresponding to the first gesture image;
determining a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises target equipment, a target operation object or a target function item; an apparatus includes at least one operation object; one operation object corresponds to at least one function item.
In one embodiment, the processor may be further configured to: and acquiring a control object corresponding to the characteristic parameter of the first gesture image as a target control object according to the characteristic parameter of the first gesture image and the corresponding relation between the pre-stored characteristic parameter and the control object.
In one embodiment, the processor may be further configured to: determining target equipment to be controlled from a plurality of pieces of equipment according to the characteristic parameters of the first gesture image; or after the target device to be controlled is determined, determining a target operation object to be controlled from at least one operation object included in the target device according to the characteristic parameters of the first gesture image; or after the target operation object to be controlled is determined, determining a target function item to be controlled from at least one function item corresponding to the target operation object according to the characteristic parameters of the first gesture image.
In one embodiment, the processor may be further configured to: acquiring a pointing region of a first gesture image included in the first picture; and determining a target operation object or a target function item to be controlled from a plurality of operation objects included in the target equipment according to the pointing region of the first gesture image.
In one embodiment, the processor may be further configured to: acquiring a pointing region of a first gesture image included in the first picture; and determining target equipment to be controlled from a plurality of equipment according to the pointing region of the first gesture image.
In one embodiment, the processor may be further configured to: after the target function item is determined, acquiring a plurality of second pictures, wherein each second picture comprises a second gesture image of the user; acquiring a gesture motion track according to second gesture images included by the plurality of second pictures; and adjusting the target function item according to the gesture motion track.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, such as a memory comprising instructions executable by a processor of the apparatus 20 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The disclosed embodiments provide a non-transitory computer-readable storage medium, such as a memory including instructions that, when executed by a processor of apparatus 20, enable apparatus 20 to perform the above-described gesture control method, e.g., ROM, Random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage, and the like. The method comprises the following steps:
acquiring a first picture, wherein the first picture comprises a first gesture image of a user;
acquiring characteristic parameters of the first gesture image, wherein the characteristic parameters comprise finger types and/or finger numbers corresponding to the first gesture image;
determining a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises target equipment, a target operation object or a target function item; an apparatus includes at least one operation object; one operation object corresponds to at least one function item.
In one embodiment, the determining a target control object to be controlled according to the characteristic parameter of the first gesture image includes: and acquiring a control object corresponding to the characteristic parameter of the first gesture image as a target control object according to the characteristic parameter of the first gesture image and the corresponding relation between the pre-stored characteristic parameter and the control object.
In one embodiment, the determining a target control object to be controlled according to the characteristic parameter of the first gesture image includes: determining target equipment to be controlled from a plurality of pieces of equipment according to the characteristic parameters of the first gesture image; or after the target device to be controlled is determined, determining a target operation object to be controlled from at least one operation object included in the target device according to the characteristic parameters of the first gesture image; or after the target operation object to be controlled is determined, determining a target function item to be controlled from at least one function item corresponding to the target operation object according to the characteristic parameters of the first gesture image.
In one embodiment, the target control object is a target device; the method further comprises the following steps: acquiring a pointing region of a first gesture image included in the first picture; and determining a target operation object or a target function item to be controlled from a plurality of operation objects included in the target equipment according to the pointing region of the first gesture image.
In one embodiment, the target control object is a target operation object or a target function item; the method further comprises the following steps: acquiring a pointing region of a first gesture image included in the first picture; and determining target equipment to be controlled from a plurality of equipment according to the pointing region of the first gesture image.
In one embodiment, the method comprises: after the target function item is determined, acquiring a plurality of second pictures, wherein each second picture comprises a second gesture image of the user; acquiring a gesture motion track according to second gesture images included by the plurality of second pictures; and adjusting the target function item according to the gesture motion track.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A gesture control method, comprising:
acquiring a first picture, wherein the first picture comprises a first gesture image of a user;
acquiring characteristic parameters of the first gesture image, wherein the characteristic parameters comprise a finger type and/or a number of fingers corresponding to the first gesture image;
determining a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises a target device, a target operation object, or a target function item; a device includes at least one operation object; and one operation object corresponds to at least one function item.
2. The method according to claim 1, wherein determining the target control object to be controlled according to the characteristic parameter of the first gesture image comprises:
acquiring, as the target control object, the control object corresponding to the characteristic parameter of the first gesture image according to a pre-stored correspondence between characteristic parameters and control objects.
3. The method according to claim 1 or 2, wherein determining the target control object to be controlled according to the characteristic parameter of the first gesture image comprises:
determining a target device to be controlled from a plurality of devices according to the characteristic parameters of the first gesture image;
or after the target device to be controlled is determined, determining a target operation object to be controlled from at least one operation object included in the target device according to the characteristic parameters of the first gesture image;
or after the target operation object to be controlled is determined, determining a target function item to be controlled from at least one function item corresponding to the target operation object according to the characteristic parameters of the first gesture image.
4. The method according to claim 1, wherein the target control object is a target device; the method further comprises:
acquiring a pointing region of the first gesture image included in the first picture;
and determining a target operation object or a target function item to be controlled from a plurality of operation objects included in the target device according to the pointing region of the first gesture image.
5. The method according to claim 1, wherein the target control object is a target operation object or a target function item; the method further comprises:
acquiring a pointing region of the first gesture image included in the first picture;
and determining a target device to be controlled from a plurality of devices according to the pointing region of the first gesture image.
6. The method according to any one of claims 1 to 5, characterized in that the method further comprises:
after the target function item is determined, acquiring a plurality of second pictures, wherein each second picture comprises a second gesture image of the user;
acquiring a gesture motion track according to the second gesture images included in the plurality of second pictures;
and adjusting the target function item according to the gesture motion track.
7. A gesture control apparatus, comprising:
a first picture acquisition module configured to acquire a first picture, wherein the first picture comprises a first gesture image of a user;
a parameter acquisition module configured to acquire characteristic parameters of the first gesture image, wherein the characteristic parameters comprise a finger type and/or a number of fingers corresponding to the first gesture image;
an object determination module configured to determine a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises a target device, a target operation object, or a target function item; a device includes at least one operation object; and one operation object corresponds to at least one function item.
8. The apparatus of claim 7, wherein the object determination module comprises:
an object determination submodule configured to acquire, as the target control object, the control object corresponding to the characteristic parameter of the first gesture image according to a pre-stored correspondence between characteristic parameters and control objects.
9. The apparatus of claim 7 or 8, wherein the object determination module comprises:
a device determination submodule configured to determine a target device to be controlled from a plurality of devices according to the characteristic parameters of the first gesture image;
or an operation object determination submodule configured to determine, after the target device to be controlled is determined, a target operation object to be controlled from at least one operation object included in the target device according to the characteristic parameters of the first gesture image;
or a function item determination submodule configured to determine, after the target operation object to be controlled is determined, a target function item to be controlled from at least one function item corresponding to the target operation object according to the characteristic parameters of the first gesture image.
10. The apparatus according to claim 7, wherein the target control object is a target device; the apparatus further comprises:
a first region acquisition module configured to acquire a pointing region of the first gesture image included in the first picture;
and an operation object determination module configured to determine a target operation object or a target function item to be controlled from a plurality of operation objects included in the target device according to the pointing region of the first gesture image.
11. The apparatus according to claim 7, wherein the target control object is a target operation object or a target function item; the apparatus further comprises:
a second region acquisition module configured to acquire a pointing region of the first gesture image included in the first picture;
and a device determination module configured to determine a target device to be controlled from a plurality of devices according to the pointing region of the first gesture image.
12. The apparatus according to any one of claims 7 to 11, characterized in that the apparatus further comprises:
a second picture acquisition module configured to acquire a plurality of second pictures after the target function item is determined, wherein each second picture comprises a second gesture image of the user;
a track acquisition module configured to acquire a gesture motion track according to the second gesture images included in the plurality of second pictures;
and an adjustment module configured to adjust the target function item according to the gesture motion track.
13. A gesture control apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a first picture, wherein the first picture comprises a first gesture image of a user;
acquiring characteristic parameters of the first gesture image, wherein the characteristic parameters comprise a finger type and/or a number of fingers corresponding to the first gesture image;
determining a target control object to be controlled according to the characteristic parameters of the first gesture image, wherein the target control object comprises a target device, a target operation object, or a target function item; a device includes at least one operation object; and one operation object corresponds to at least one function item.
14. A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 6.
CN202010718137.6A 2020-07-23 2020-07-23 Gesture control method and device Active CN111857346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010718137.6A CN111857346B (en) 2020-07-23 2020-07-23 Gesture control method and device

Publications (2)

Publication Number Publication Date
CN111857346A true CN111857346A (en) 2020-10-30
CN111857346B CN111857346B (en) 2024-07-16

Family

ID=72949829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010718137.6A Active CN111857346B (en) 2020-07-23 2020-07-23 Gesture control method and device

Country Status (1)

Country Link
CN (1) CN111857346B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100916836B1 (en) * 2008-05-29 2009-09-14 고려대학교 산학협력단 Method and apparatus for recognizing pointing gesture
US20170153710A1 (en) * 2014-06-20 2017-06-01 Lg Electronics Inc. Video display device and operating method thereof
CN109991859A (en) * 2017-12-29 2019-07-09 青岛有屋科技有限公司 A kind of gesture instruction control method and intelligent home control system
CN110275611A (en) * 2019-05-27 2019-09-24 联想(上海)信息技术有限公司 A kind of parameter adjusting method, device and electronic equipment
CN111367415A (en) * 2020-03-17 2020-07-03 北京明略软件***有限公司 Equipment control method and device, computer equipment and medium

Also Published As

Publication number Publication date
CN111857346B (en) 2024-07-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 01-04, 1st floor, No.2 Lane 60, Naxian Road, Pudong New Area pilot Free Trade Zone, Shanghai 201203

Applicant after: Chunmi Technology (Shanghai) Co.,Ltd.

Address before: Room 01-04, 1st floor, Lane 60, Naxian Road, Pudong New Area, Shanghai, 201203

Applicant before: SHANGHAI CHUNMI ELECTRONICS TECHNOLOGY Co.,Ltd.

GR01 Patent grant