CN112905004B - Gesture control method and device for vehicle-mounted display screen and storage medium - Google Patents

Gesture control method and device for vehicle-mounted display screen and storage medium Download PDF

Info

Publication number
CN112905004B
CN112905004B (granted publication of application CN202110084512.0A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110084512.0A
Other languages
Chinese (zh)
Other versions
CN112905004A (en)
Inventor
杨小辉
常博
Current Assignee
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Original Assignee
Zhejiang Geely Holding Group Co Ltd
Geely Automobile Research Institute Ningbo Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Geely Holding Group Co Ltd and Geely Automobile Research Institute Ningbo Co Ltd
Priority to CN202110084512.0A
Publication of CN112905004A
Application granted
Publication of CN112905004B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G06V40/113 Recognition of static hand signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2540/00 Input parameters relating to occupants
    • B60W2540/223 Posture, e.g. hand, foot, or seat position, turned or inclined

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Transportation (AREA)
  • Mathematical Physics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to a gesture control method and device for a vehicle-mounted display screen, and a storage medium. The method comprises: obtaining information to be identified containing gesture actions, wherein the information to be identified is obtained by a detection unit whose detection range covers the screen surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information to be identified comprises consecutive and different gesture actions; identifying the information to be identified to obtain an identification result; if it is determined according to the identification result that a transformation action from a first gesture to a second gesture exists in a first screen surface preset space, determining that the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is a picture to be shared; and if it is determined according to the identification result that a transformation action from the second gesture to the first gesture exists in a second screen surface preset space, displaying the picture to be shared in the vehicle-mounted display screen corresponding to the second screen surface preset space. In this way, both driving safety and the human-machine interaction experience can be improved.

Description

Gesture control method and device for vehicle-mounted display screen and storage medium
Technical Field
The application relates to the technical field of automobiles, in particular to a gesture control method, a gesture control device and a storage medium for a vehicle-mounted display screen.
Background
With the development of automobile intelligence, many new and eye-catching technologies have moved out of science fiction and into consumers' view. Among them, the vehicle-mounted display, one of the most important configurations in a car, is receiving increasing attention from consumers.
The vehicle-mounted displays include the center console display screen, the Head-Up Display (HUD), the combination instrument display screen, the streaming media rearview mirror and the like. The center console display screen can display content such as car audio, navigation, vehicle information and reversing images; it is positioned between the main driving position and the front passenger position so that occupants of both positions can use it conveniently. The HUD can display important driving information such as vehicle speed and navigation on the windshield in front of the driver. The combination instrument display screen is positioned directly in front of the main driving position and can display information such as vehicle speed, navigation, weather, humidity and driving mode for the driver to observe. The streaming media rearview mirror captures the scene behind the vehicle in real time through a camera mounted at the rear of the vehicle and displays it on the central rearview mirror display screen.
However, because the installation positions of the vehicle-mounted displays are fixed, a driver using the navigation function must frequently turn to watch the center console display screen to learn the current position and driving route, which disperses the driver's attention. Likewise, a novice driver needs to rely on the streaming media rearview mirror to observe the road conditions behind the vehicle; since the streaming media rearview mirror screen is small, the driver must frequently raise the head to watch the central rearview mirror display screen and cannot acquire information quickly, which easily causes visual fatigue and poor driving safety.
Disclosure of Invention
The embodiment of the application provides a gesture control method, a gesture control device and a storage medium for a vehicle-mounted display screen, which can improve driving safety and human-computer interaction experience.
On one hand, an embodiment of the present application provides a gesture control method for a vehicle-mounted display screen, the method comprising:
acquiring information to be identified containing gesture actions; the information to be identified is obtained based on the detection unit; the detection range of the detection unit comprises screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the information to be identified comprises consecutive and different gesture actions;
identifying the information to be identified to obtain an identification result;
if it is determined according to the identification result that a transformation action from a first gesture to a second gesture exists in a first screen surface preset space, determining that the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is a picture to be shared;
and if it is determined according to the identification result that a transformation action from the second gesture to the first gesture exists in a second screen surface preset space, displaying the picture to be shared in the vehicle-mounted display screen corresponding to the second screen surface preset space.
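As a non-authoritative illustration only, the claimed flow can be sketched in Python; the gesture pair (here "open->fist" and "fist->open"), the event structure and the function names are assumptions for illustration, not the patent's implementation:

```python
# Hypothetical sketch of the claimed gesture-sharing flow. The detection and
# recognition steps are abstracted away into a pre-built list of events.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    screen: str      # id of the screen surface preset space the gesture occurred in
    transition: str  # e.g. "open->fist" (first -> second) or "fist->open" (second -> first)

def share_screen_flow(events, screens):
    """screens: dict mapping screen id -> currently displayed picture."""
    picture_to_share = None
    for ev in events:
        if ev.transition == "open->fist":
            # first -> second gesture: mark this screen's picture as "to be shared"
            picture_to_share = screens[ev.screen]
        elif ev.transition == "fist->open" and picture_to_share is not None:
            # second -> first gesture: display the shared picture on the target screen
            screens[ev.screen] = picture_to_share
    return screens
```

For example, a grab over the center console screen followed by a release over the HUD would move the navigation picture from the former to the latter.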
Optionally, the detection unit includes a camera;
acquiring information to be identified containing gesture actions, including:
acquiring a plurality of frames of continuous images to be identified through a camera;
and detecting a plurality of frames of continuous images to be recognized according to the obtained image detection model to obtain a set of images to be recognized containing gesture actions.
Optionally, identifying the information to be identified to obtain an identification result, including:
and identifying the image set to be identified according to the obtained target gesture identification model to obtain image frame information containing the target gesture and the position of the target gesture in each frame of image.
Optionally, the recognition result includes first image frame information including a first gesture, position information of the first gesture in each frame of image, second image frame information including a second gesture, and position information of the second gesture in each frame of image;
determining that there is a transformation action of transforming from the first gesture to the second gesture in the first screen surface preset space according to the recognition result, including:
if the next frame image of the current frame image in the first image frame information is the first frame image in the second image frame information, and the matching degree value of the position of the first gesture in the current frame image and the position of the second gesture in the first frame image is larger than or equal to a preset value, determining that a transformation action from the first gesture to the second gesture exists in a preset space on the surface of the first screen; the preset space of the first screen surface corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the first frame image.
Optionally, determining that there is a transformation action of transforming from the second gesture to the first gesture in the second screen surface preset space according to the recognition result includes:
if the current frame image exists in the first image frame information and is the next frame image of the tail frame image in the second image frame information, and the matching degree value of the position of the second gesture in the tail frame image and the position of the first gesture in the current frame image is larger than or equal to a preset value, determining that a transformation action from the second gesture to the first gesture exists in a preset space on the surface of the second screen; the preset space of the second screen surface corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the tail frame image.
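Both optional transformation checks above reduce to the same test: the two gestures must occur in adjacent frames, and their positions must match to at least a preset value. A minimal sketch, assuming intersection-over-union of bounding boxes as the unspecified "matching degree" and 0.5 as the preset value:

```python
# Hypothetical sketch of the adjacency-plus-position check used in both
# transformation directions. IoU stands in for the unspecified matching degree.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, used here as a
    stand-in matching-degree value between two gesture positions."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def is_transition(frame_idx_a, frame_idx_b, pos_a, pos_b, threshold=0.5):
    """True if frame B immediately follows frame A and the two gesture
    positions match closely enough (matching degree >= preset value)."""
    return frame_idx_b == frame_idx_a + 1 and iou(pos_a, pos_b) >= threshold
```

The same helper serves both directions; only which gesture is expected first differs between the marking step and the display step.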
Optionally, the detection unit includes a distance sensor; the at least two vehicle-mounted display screens are respectively provided with a distance sensor;
acquiring information to be identified containing gesture actions, including:
and acquiring hand actions through a distance sensor, and generating information to be identified containing gesture actions.
Optionally, the detection unit includes an infrared detector; the at least two vehicle-mounted display screens are respectively provided with infrared detectors;
acquiring information to be identified containing gesture actions, including:
and acquiring hand actions through an infrared detector, and generating information to be identified containing gesture actions.
Optionally, the method further comprises:
acquiring a preset space on the surface of the second screen through a camera to obtain an image sequence; the second screen surface preset space comprises a plurality of subspaces, and the subspaces correspond to the controllable functions of the vehicle-mounted display screen corresponding to the second screen surface preset space one by one;
determining a manipulation gesture based on the sequence of images;
and determining a function to be controlled from the plurality of controllable functions according to the control gesture, and controlling the function to be controlled.
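A minimal sketch of the subspace-to-function mapping described in this optional embodiment; the three equal subspaces and the media-control function names are purely illustrative assumptions:

```python
# Hypothetical one-to-one mapping from subspaces of the second screen surface
# preset space to controllable functions of the corresponding display screen.

SUBSPACE_FUNCTIONS = {
    "left": "previous_track",
    "middle": "play_pause",
    "right": "next_track",
}

def subspace_of(x, image_width):
    """Assign a gesture's x coordinate in the image to one of three equal subspaces."""
    if x < image_width / 3:
        return "left"
    if x < 2 * image_width / 3:
        return "middle"
    return "right"

def function_to_control(gesture_x, image_width):
    """Pick the function to be controlled from the gesture's subspace."""
    return SUBSPACE_FUNCTIONS[subspace_of(gesture_x, image_width)]
```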
On the other hand, the embodiment of the application provides a gesture control device for a vehicle-mounted display screen, which comprises:
the acquisition module is used for acquiring information to be identified containing gesture actions; the information to be identified is obtained based on the detection unit; the detection range of the detection unit comprises screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the information to be identified comprises consecutive and different gesture actions;
the identification module is used for identifying the information to be identified to obtain an identification result;
the first determining module is used for determining, if it is determined according to the identification result that a transformation action from a first gesture to a second gesture exists in a first screen surface preset space, that the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is a picture to be shared;
and the second determining module is used for displaying, if it is determined according to the identification result that a transformation action from the second gesture to the first gesture exists in a second screen surface preset space, the picture to be shared in the vehicle-mounted display screen corresponding to the second screen surface preset space.
In another aspect, an embodiment of the present application provides a computer storage medium, where at least one instruction or at least one program is stored in the storage medium, where the at least one instruction or the at least one program is loaded and executed by a processor to implement the gesture control method for an on-vehicle display screen.
The gesture control method, the gesture control device and the storage medium for the vehicle-mounted display screen have the following beneficial effects:
Information to be identified containing gesture actions is acquired, wherein the information to be identified is obtained based on the detection unit, the detection range of the detection unit comprises screen surface preset spaces corresponding to at least two vehicle-mounted display screens, and the information to be identified comprises consecutive and different gesture actions. The information to be identified is then identified to obtain an identification result. If it is determined according to the identification result that a transformation action from a first gesture to a second gesture exists in a first screen surface preset space, the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is determined to be a picture to be shared; and if it is determined according to the identification result that a transformation action from the second gesture to the first gesture exists in a second screen surface preset space, the picture to be shared is displayed in the vehicle-mounted display screen corresponding to the second screen surface preset space. In this way, driving safety can be improved and the human-machine interaction experience can be enhanced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of an automobile cabin provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a gesture control method for a vehicle-mounted display screen according to an embodiment of the present application;
fig. 3 is a schematic diagram of an identification process of information to be identified according to an embodiment of the present application;
fig. 4 is a schematic diagram of a corresponding area of a screen surface preset space of each vehicle-mounted display screen in an image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a procedure for determining a manipulation function according to a recognition result according to an embodiment of the present application;
fig. 6 is a schematic diagram of a controllable function corresponding to each sub-region in an image according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a gesture control device for a vehicle-mounted display screen according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application described herein may be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of an application scenario of an automobile cabin provided in an embodiment of the present application, including a plurality of vehicle-mounted display screens: a center control display screen 101, a dashboard display screen 102, a streaming media rearview mirror display screen 103 and a HUD display screen 104; the detection unit is used for collecting gesture actions of a user aiming at each vehicle-mounted display screen so as to realize picture sharing among the vehicle-mounted display screens.
Information to be identified containing gesture actions is acquired based on the detection unit, wherein the information to be identified comprises consecutive and different gesture actions; the detection range of the detection unit comprises the screen surface preset spaces corresponding to the central control display screen 101, the instrument panel display screen 102, the streaming media rearview mirror display screen 103 and the HUD display screen 104. The information to be identified is then identified to obtain an identification result. If it is determined according to the identification result that a transformation action from a first gesture to a second gesture exists in a first screen surface preset space, the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is determined to be a picture to be shared; and if it is determined according to the identification result that a transformation action from the second gesture to the first gesture exists in a second screen surface preset space, the picture to be shared is displayed in the vehicle-mounted display screen corresponding to the second screen surface preset space.
Optionally, the position of the detection unit is installed according to actual needs, and a detection device can be installed near each vehicle-mounted display screen for detecting gesture actions of each vehicle-mounted display screen respectively, or only one detection device is installed for detecting gesture actions of all vehicle-mounted display screens.
In the following, a specific embodiment of a gesture control method for a vehicle-mounted display screen of the present application is described. Fig. 2 is a schematic flow chart of a gesture control method for a vehicle-mounted display screen provided in an embodiment of the present application. The present specification provides the method operation steps as in the embodiment or flowchart, but more or fewer operation steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order of execution. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (for example, in a parallel-processor or multithreaded environment). As shown in fig. 2, the method may include:
s201: acquiring information to be identified containing gesture actions; the information to be identified is obtained based on the detection unit; the detection range of the detection unit comprises screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the information to be recognized includes consistent and different gesture actions.
In the embodiment of the application, a detection unit is arranged in the automobile cabin, and the screen surface preset spaces corresponding to the vehicle-mounted display screens are detected by the detection unit to obtain information to be identified containing gesture actions, wherein the information to be identified comprises consecutive and different gesture actions. The vehicle-mounted display screens include the central control display screen, the instrument panel display screen, the streaming media rearview mirror display screen, the HUD display screen and the like; a screen surface preset space is a region within a certain distance above a vehicle-mounted display screen. The detection unit comprises one or more detection devices. When there is only one detection device, its detection range covers the screen surface preset spaces corresponding to all of the central control display screen, the instrument panel display screen, the streaming media rearview mirror display screen and the HUD display screen. When there are a plurality of detection devices, they are respectively arranged near the central control display screen, the instrument panel display screen, the streaming media rearview mirror display screen and the HUD display screen; that is, the detection devices correspond one-to-one to the vehicle-mounted display screens.
In this embodiment of the present application, the acquired information to be identified containing gesture actions may take the form of images, video or point clouds, and the corresponding detection unit may include a camera, an infrared identification device, a distance sensor or other sensors.
In an optional implementation manner, the detection unit comprises a camera arranged in the area above the central control display screen; the camera can detect the screen surface preset spaces corresponding to all the vehicle-mounted display screens, including the central control display screen, the instrument panel display screen, the streaming media rearview mirror display screen and the HUD display screen. The step S201 may specifically include: acquiring a plurality of frames of continuous images to be identified through the camera; and detecting the plurality of frames of continuous images to be recognized with the obtained image detection model to obtain a set of images to be recognized containing gesture actions. The image detection model can be obtained by training a machine learning model on collected training images containing target gesture actions; a target gesture action may be a single gesture, or a plurality of consecutive and different gestures, for controlling the vehicle-mounted display screen.
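The two-stage acquisition in this implementation (continuous capture, then model-based filtering) can be sketched as follows, with `detect_gesture` standing in for the trained image detection model, which the patent does not specify:

```python
# Hypothetical sketch: filter continuous camera frames down to the set of
# to-be-identified images that contain a gesture action. `detect_gesture` is a
# placeholder for the trained image detection model.

def build_image_set(frames, detect_gesture):
    """Keep only frames in which the detection model finds a gesture,
    preserving capture order as (frame index, frame) pairs."""
    return [(i, f) for i, f in enumerate(frames) if detect_gesture(f)]
```

Keeping the frame index alongside each kept frame matters later, when adjacency between gesture frames is checked.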
In an alternative embodiment, the detection unit comprises distance sensors; the at least two vehicle-mounted display screens are respectively provided with a distance sensor, that is, distance sensors are arranged in the areas near the central control display screen, the instrument panel display screen, the streaming media rearview mirror display screen and the HUD display screen. The step S201 may specifically include: acquiring hand actions through the distance sensors, and generating information to be identified containing gesture actions. Specifically, the distance sensors of the vehicle-mounted display screens are connected to the same single-chip microcomputer, and the hand signals acquired by the distance sensors are input into the single-chip microcomputer for processing, so that information to be identified containing gesture actions is obtained; the distance sensor may be, for example, a Microsoft Kinect.
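A hedged sketch of how the single-chip microcomputer might reduce per-screen distance-sensor readings to gesture information; the presence threshold and the data layout are assumptions, not taken from the patent:

```python
# Hypothetical sketch: per-screen distance sensors feed one controller, which
# turns raw hand-distance samples into per-screen "hand present" events that
# can serve as the information to be identified.

PRESENCE_THRESHOLD_MM = 150  # assumed: a hand counts as "over the screen" below this

def presence_events(samples):
    """samples: list of (screen_id, distance_mm) readings in time order.
    Returns the sequence of screens over which a hand appeared, deduplicating
    consecutive readings over the same screen."""
    events = []
    for screen_id, distance in samples:
        if distance < PRESENCE_THRESHOLD_MM:
            if not events or events[-1] != screen_id:
                events.append(screen_id)
    return events
```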
In an alternative embodiment, the detection unit comprises infrared detectors; the at least two vehicle-mounted display screens are respectively provided with an infrared detector, that is, infrared detectors are arranged in the areas near the central control display screen, the instrument panel display screen, the streaming media rearview mirror display screen and the HUD display screen. The step S201 may specifically include: acquiring hand actions through the infrared detectors, and generating information to be identified containing gesture actions. Specifically, the infrared detectors of the vehicle-mounted display screens are connected to the same single-chip microcomputer, and the hand signals acquired by the infrared detectors are input into the single-chip microcomputer for processing, so that information to be identified containing gesture actions is obtained.
S203: and identifying the information to be identified to obtain an identification result.
In the embodiment of the application, the information to be identified is identified to obtain a recognition result, wherein the recognition result comprises all gestures in the information to be identified, or, after screening all the gestures, only the target gestures; the target gesture may be a single gesture, or a plurality of consecutive and different gestures, for controlling the vehicle-mounted display screen.
In an alternative embodiment, the step S203 may include: identifying the image set to be identified according to an obtained target gesture recognition model to obtain image frame information containing the target gesture and the position of the target gesture in each frame of image. In this alternative embodiment, the image set to be identified already consists of images containing the target gesture action; in order to further improve the recognition accuracy, the image set to be identified is recognized again according to the target gesture recognition model, so that the image frame information containing the target gesture and the position of the target gesture in each frame of image are obtained. Each such frame has thus passed a secondary detection, so the probability that it actually contains the target gesture is high.
Specifically, a training image set containing target gestures is collected, in which the target gestures appear in different positions or areas of the image, and a preset machine learning model is trained on the training image set to obtain the target gesture recognition model. As shown in fig. 3, which is a schematic diagram of the recognition process of the information to be recognized provided in the embodiment of the present application, after the image set to be identified passes through the target gesture recognition model, the image frame information t0 to tn containing the target gesture and the positions P{(x0, y0) … (xn, yn)} of the target gesture in each frame of image are output.
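The output structure of fig. 3 can be sketched as follows. The per-frame results here are hypothetical stand-ins for real model inferences; only the grouping into image frame information and a position list follows the text.

```python
# Sketch of the recognition output of fig. 3: for each frame the target
# gesture recognition model yields a gesture label and a position; grouping
# by label gives the image frame information t0..tn and the position list
# P{(x0, y0) ... (xn, yn)} for each gesture.

def group_recognitions(per_frame):
    """per_frame: list of (t, label, (x, y)).
    Returns {label: (frame_indices, positions)}."""
    grouped = {}
    for t, label, pos in per_frame:
        frames, positions = grouped.setdefault(label, ([], []))
        frames.append(t)
        positions.append(pos)
    return grouped

# Hypothetical per-frame inferences: palm open for two frames, then a pinch.
per_frame = [
    (0, "palm_open", (120, 80)),
    (1, "palm_open", (122, 81)),
    (2, "pinch", (121, 82)),
]
result = group_recognitions(per_frame)
print(result["palm_open"][0])  # image frame information for the first gesture
```

The two entries per label correspond directly to the "first image frame information" and "position information" pairs used in steps S205 and S207.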
S205: if it is determined according to the recognition result that a transformation action from the first gesture to the second gesture exists in the first screen surface preset space, determining that the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is the picture to be shared.
S207: if it is determined according to the recognition result that a transformation action from the second gesture to the first gesture exists in the second screen surface preset space, displaying the picture to be shared in the vehicle-mounted display screen corresponding to the second screen surface preset space.
In this embodiment of the present application, the region in the image corresponding to each screen surface preset space is predetermined. For example, as shown in fig. 4, region a corresponds to the central control display screen, region b to the instrument panel display screen, region c to the streaming media rearview mirror display screen, and region d to the HUD display screen. The target gestures comprise a first gesture and a second gesture; transforming from the first gesture to the second gesture within the same region represents a selection, and the corresponding vehicle-mounted display screen is selected, after which the picture in that vehicle-mounted display screen is determined to be the picture to be shared. If the second gesture is then transformed back to the first gesture in another region, the picture to be shared is displayed on the vehicle-mounted display screen corresponding to that region.
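The region-to-screen mapping of fig. 4 can be sketched as below. The rectangular pixel bounds are illustrative assumptions; the patent only specifies that each screen has a predetermined region in the image.

```python
# Sketch of the fig. 4 mapping: each screen surface preset space
# corresponds to a rectangular region (a, b, c, d) in the camera image.
# Bounds are assumed for illustration, e.g. a 640x480 image split in four.

REGIONS = {
    "a": ((0, 0, 320, 240), "central control display screen"),
    "b": ((320, 0, 640, 240), "instrument panel display screen"),
    "c": ((0, 240, 320, 480), "streaming media rearview mirror display screen"),
    "d": ((320, 240, 640, 480), "HUD display screen"),
}

def region_of(pos):
    """Return the region key whose rectangle contains position (x, y)."""
    x, y = pos
    for key, ((x0, y0, x1, y1), _screen) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return key
    return None  # gesture position lies outside all preset spaces

print(region_of((100, 100)))  # this position falls inside region a
```

Looking up `REGIONS[region_of(pos)][1]` then yields the vehicle-mounted display screen selected by a gesture at that position.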
In an optional implementation manner, the recognition result includes first image frame information including a first gesture, position information of the first gesture in each frame of image, second image frame information including a second gesture, and position information of the second gesture in each frame of image, and when it is determined that there is a transformation action of transforming the first gesture to the second gesture in a preset space of the first screen surface according to the recognition result, it is determined that a picture in the vehicle-mounted display screen corresponding to the preset space of the first screen surface is a picture to be shared.
Correspondingly, the determining, according to the recognition result, that there is a transformation action of transforming from the first gesture to the second gesture in the preset space on the first screen surface may include: if the next frame image of the current frame image in the first image frame information is the first frame image in the second image frame information, and the matching degree value of the position of the first gesture in the current frame image and the position of the second gesture in the first frame image is larger than or equal to a preset value, determining that a transformation action from the first gesture to the second gesture exists in a preset space on the surface of the first screen; the preset space of the first screen surface corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the first frame image. The preset value may be 1, and the matching degree value of the position of the first gesture in the current frame image and the position of the second gesture in the first frame image may be determined based on the predetermined corresponding region of each screen surface preset space in the image, where the matching degree value is 1 when the position of the first gesture in the current frame image and the position of the second gesture in the first frame image are in the corresponding region of the same screen surface preset space in the image; when the position of the first gesture in the current frame image and the position of the second gesture in the first frame image are not in the corresponding area of the same screen surface preset space in the image, the matching degree value is 0.
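The matching degree value and the forward transition check described above can be sketched as follows. The region rectangles are illustrative assumptions; the frame-adjacency and matching logic follows the text.

```python
# Sketch of the first-to-second gesture transition check: the matching
# degree value is 1 when the two gesture positions fall in the same screen
# surface preset region, else 0; a transition is detected when the first
# frame of the second gesture immediately follows a first-gesture frame
# and the match value reaches the preset value.

REGIONS = {"a": (0, 0, 320, 240), "b": (320, 0, 640, 240)}  # assumed bounds

def region_of(pos):
    x, y = pos
    for key, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return key
    return None

def matching_degree(pos_first, pos_second):
    """1 if both positions lie in the same preset region, else 0."""
    ra, rb = region_of(pos_first), region_of(pos_second)
    return 1 if ra is not None and ra == rb else 0

def transition_region(first_frames, second_frames, preset=1):
    """first_frames/second_frames: {t: (x, y)} per gesture.
    Returns the region of a first->second transition, or None."""
    t_first = min(second_frames)        # first frame of the second gesture
    t_cur = t_first - 1                 # the frame just before it
    if t_cur in first_frames and matching_degree(
            first_frames[t_cur], second_frames[t_first]) >= preset:
        return region_of(second_frames[t_first])
    return None

print(transition_region({5: (100, 100)}, {6: (105, 102)}))  # region a
```

The reverse check of the next paragraph (second gesture back to first gesture) is symmetric, comparing the tail frame of the second gesture with the frame that follows it.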
Correspondingly, the determining, according to the recognition result, that there is a transformation action of transforming from the second gesture to the first gesture in the preset space on the second screen surface may include: if the current frame image exists in the first image frame information and is the next frame image of the tail frame image in the second image frame information, and the matching degree value of the position of the second gesture in the tail frame image and the position of the first gesture in the current frame image is larger than or equal to a preset value, determining that a transformation action from the second gesture to the first gesture exists in a preset space on the surface of the second screen; the preset space of the second screen surface corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the tail frame image.
Specifically, as shown in fig. 5, the first gesture may be palm opening, and the second gesture may be finger pinching. The recognition result includes first image frame information t0 to t5 and t21 to tn containing the first gesture (palm opening), the position information P{(x0, y0) … (x5, y5), (x21, y21) … (xn, yn)} of the first gesture in each frame of image, second image frame information t6 to t20 containing the second gesture, and the position information P{(x6, y6) … (x20, y20)} of the second gesture in each frame of image. Here, the next frame image (t6) of the current frame image (t5) in the first image frame information is the first frame image in the second image frame information, and the position of the first gesture in the current frame image (t5) and the position of the second gesture in the first frame image (t6) are both located in region a, so the matching degree value is 1; it is therefore determined that a transformation action from the first gesture to the second gesture exists in the screen surface preset space corresponding to region a, and the picture in the central control display screen corresponding to region a is the picture to be shared. Meanwhile, the current frame image (t21) is the next frame image of the tail frame image (t20) in the second image frame information, and the position of the second gesture in the tail frame image (t20) and the position of the first gesture in the current frame image (t21) are both located in region b, so the matching degree value is 1; it is therefore determined that a transformation action from the second gesture to the first gesture exists in the screen surface preset space corresponding to region b, and at this moment the picture to be shared in the central control display screen is displayed on the instrument panel display screen.
The picture to be shared can be a navigation map picture, so that the driver can learn the current position and the driving route without frequently turning to watch the central control display screen, which would distract the driver; driving safety can thus be improved. The method is also suitable for picture interaction between other vehicle-mounted display screens. For example, when the position of the first gesture in the current frame image (t5) and the position of the second gesture in the first frame image (t6) are both located in region c, the picture of the streaming media rearview mirror display screen is selected as the picture to be shared; and when the position of the second gesture in the tail frame image (t20) and the position of the first gesture in the current frame image (t21) are both located in region a, the picture to be shared from the streaming media rearview mirror display screen is displayed in full screen or in a window on the central control display screen. In this way, the driver does not need to frequently raise his or her head to watch the smaller streaming media rearview mirror display screen, and can instead acquire the information directly and quickly from the central control display screen, which reduces visual fatigue and can improve driving safety.
In an alternative embodiment, the method may further comprise: acquiring a preset space on the surface of the second screen through a camera to obtain an image sequence; the second screen surface preset space comprises a plurality of subspaces, and the subspaces correspond to the controllable functions of the vehicle-mounted display screen corresponding to the second screen surface preset space one by one; determining a manipulation gesture based on the sequence of images; and determining a function to be controlled from the plurality of controllable functions according to the control gesture, and controlling the function to be controlled. The controllable functions include adjusting the progress, brightness, volume and the like of the video currently played.
Specifically, the sub-areas in the image corresponding to the plurality of subspaces are predetermined. As shown in fig. 6, which is a schematic diagram of the controllable functions corresponding to each sub-area in the image provided in the embodiment of the present application, sub-area e corresponds to a video progress adjusting function, sub-area f corresponds to a video picture brightness adjusting function, and sub-area g corresponds to a video volume adjusting function; sub-areas can be added or removed according to actual requirements. The screen surface preset space corresponding to the central control display screen is captured by the camera to obtain an image sequence; a manipulation gesture is then analyzed based on the image sequence; a function to be controlled is determined from the plurality of controllable functions according to the manipulation gesture, and the function to be controlled is controlled. The manipulation gesture may include transforming from the first gesture to the second gesture in a first sub-area, then moving in space while holding the second gesture, and finally switching from the second gesture to a third gesture; the third gesture may be the same as the first gesture, and the function corresponding to the first sub-area is the function to be controlled. For example, when the central control display screen (or another display screen in the vehicle) is playing a video in full screen, the open palm (first gesture) is extended into the lower part of the screen surface preset space (sub-area e), then the thumb and index finger pinch (transforming to the second gesture); when the pinch gesture moves rightwards, the video is controlled to fast-forward synchronously; when the pinch gesture moves leftwards, the video is controlled to rewind synchronously; and when the pinch gesture is spread open (switching to the third gesture), the manipulation stops. For another example, when the central control display screen is playing a video in full screen, the open palm is extended into the left part of the screen surface preset space (sub-area f), then the thumb and index finger pinch; when the pinch gesture moves upwards, the brightness of the video picture is controlled to increase synchronously; when it moves downwards, the brightness is controlled to decrease synchronously; and when the pinch gesture is spread open, the manipulation stops. For another example, when the central control display screen is playing a video in full screen, the open palm is extended into the right part of the screen surface preset space (sub-area g), then the thumb and index finger pinch; when the pinch gesture moves upwards, the video volume is controlled to increase synchronously; when it moves downwards, the volume is controlled to decrease synchronously; and when the pinch gesture is spread open, the manipulation stops. For another example, when the central control display screen is playing a video but not in full screen, extending a pinch gesture into a preset area of the display screen and then opening all five fingers controls the video to start playing in full screen; the reverse controls the video to exit full-screen playing. Compared with prior-art control modes, which mainly recognize static gestures or single dynamic gestures, so that the gestures are simple, the corresponding operation commands are limited, and some gestures are uncommon and do not match human operating intuition, the gesture control method provided by the embodiment of the application can greatly expand the control range of gesture operations, thereby improving the interaction experience.
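The sub-region manipulation above can be sketched as follows. The sub-region assignments, movement axes and the raw pixel-delta scaling are illustrative assumptions; only the mapping pattern (sub-area selects the function, pinch-drag direction adjusts it) follows the text.

```python
# Sketch of the fig. 6 manipulation: pinching in sub-area e, f or g selects
# a controllable function, and the pinch-drag movement adjusts it. Axis and
# sign conventions are assumptions for illustration.

SUBREGIONS = {
    "e": ("progress", "x"),    # lower area: drag right/left = forward/back
    "f": ("brightness", "y"),  # left area: drag up/down = brighter/dimmer
    "g": ("volume", "y"),      # right area: drag up/down = louder/quieter
}

def manipulate(subregion, start, end):
    """Map a pinch-drag from start to end in a sub-region to a
    (function, signed adjustment) pair."""
    function, axis = SUBREGIONS[subregion]
    dx, dy = end[0] - start[0], end[1] - start[1]
    # Image y grows downward, so an upward drag is a positive adjustment.
    delta = dx if axis == "x" else -dy
    return function, delta

print(manipulate("g", (500, 300), (500, 260)))  # upward drag raises volume
```

A real implementation would emit these adjustments continuously while the pinch is held, and stop when the pinch spreads into the third gesture.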
The method embodiments provided in the embodiments of the present application may be performed in a computer terminal, a server, or a similar computing device.
The embodiment of the application also provides a gesture control device for a vehicle-mounted display screen, and fig. 7 is a schematic structural diagram of the gesture control device for a vehicle-mounted display screen, as shown in fig. 7, where the gesture control device includes:
an acquiring module 701, configured to acquire information to be identified containing gesture actions; the information to be identified is obtained based on the detection unit; the detection range of the detection unit comprises the screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the information to be identified comprises consecutive and different gesture actions;
the identifying module 702 is configured to identify information to be identified, and obtain an identification result;
the first determining module 703 is configured to determine that the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is the picture to be shared if it is determined according to the recognition result that there is a transformation action of transforming from the first gesture to the second gesture in the first screen surface preset space;
and the second determining module 704 is configured to display a picture to be shared on the vehicle-mounted display screen corresponding to the second screen surface preset space if it is determined that there is a transformation action of transforming the second gesture into the first gesture in the second screen surface preset space according to the recognition result.
In an alternative embodiment, the detection unit comprises a camera; the obtaining module 701 is specifically configured to: acquiring a plurality of frames of continuous images to be identified through a camera; and detecting a plurality of frames of continuous images to be recognized according to the obtained image detection model to obtain a set of images to be recognized containing gesture actions.
In an alternative embodiment, the identification module 702 is specifically configured to: and identifying the image set to be identified according to the obtained target gesture identification model to obtain image frame information containing the target gesture and the position of the target gesture in each frame of image.
In an alternative embodiment, the recognition result includes first image frame information including a first gesture, position information of the first gesture in each frame of image, second image frame information including a second gesture, and position information of the second gesture in each frame of image; the first determining module 703 is specifically configured to: if the next frame image of the current frame image in the first image frame information is the first frame image in the second image frame information, and the matching degree value of the position of the first gesture in the current frame image and the position of the second gesture in the first frame image is larger than or equal to a preset value, determining that a transformation action from the first gesture to the second gesture exists in a preset space on the surface of the first screen; the preset space of the first screen surface corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the first frame image.
In an alternative embodiment, the second determining module 704 is specifically configured to: if the current frame image exists in the first image frame information and is the next frame image of the tail frame image in the second image frame information, and the matching degree value of the position of the second gesture in the tail frame image and the position of the first gesture in the current frame image is larger than or equal to a preset value, determining that a transformation action from the second gesture to the first gesture exists in a preset space on the surface of the second screen; the preset space of the second screen surface corresponds to the position of the first gesture in the current frame image or the position of the second gesture in the tail frame image.
In an alternative embodiment, the detection unit comprises a distance sensor; the at least two vehicle-mounted display screens are respectively provided with a distance sensor; the obtaining module 701 is specifically configured to: and acquiring hand actions through a distance sensor, and generating information to be identified containing gesture actions.
In an alternative embodiment, the detection unit comprises an infrared detector; the at least two vehicle-mounted display screens are respectively provided with infrared detectors; the obtaining module 701 is specifically configured to: and acquiring hand actions through an infrared detector, and generating information to be identified containing gesture actions.
In an optional implementation manner, the device further comprises a third determining module, configured to acquire, through the camera, a preset space on the surface of the second screen, so as to obtain an image sequence; the second screen surface preset space comprises a plurality of subspaces, and the subspaces correspond to the controllable functions of the vehicle-mounted display screen corresponding to the second screen surface preset space one by one; determining a manipulation gesture based on the sequence of images; and determining a function to be controlled from the plurality of controllable functions according to the control gesture, and controlling the function to be controlled.
The apparatus and method embodiments in the embodiments of the present application are based on the same application concept.
The embodiment of the application also provides a storage medium, which can be arranged in a server to store at least one instruction, at least one program, a code set or an instruction set related to the gesture control method for a vehicle-mounted display screen in the method embodiment, and the at least one instruction, the at least one program, the code set or the instruction set is loaded and executed by a processor to implement the gesture control method for a vehicle-mounted display screen.
Alternatively, in this embodiment, the storage medium may be located in at least one network server among a plurality of network servers of a computer network. Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
As can be seen from the embodiments of the gesture control method, device and storage medium for a vehicle-mounted display screen provided by the present application, the method acquires information to be identified containing gesture actions; the information to be identified is obtained based on the detection unit; the detection range of the detection unit comprises the screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the information to be identified comprises consecutive and different gesture actions. The information to be identified is identified to obtain a recognition result. If it is determined according to the recognition result that a transformation action from the first gesture to the second gesture exists in the first screen surface preset space, the picture in the vehicle-mounted display screen corresponding to the first screen surface preset space is determined to be the picture to be shared; and if it is determined according to the recognition result that a transformation action from the second gesture to the first gesture exists in the second screen surface preset space, the picture to be shared is displayed in the vehicle-mounted display screen corresponding to the second screen surface preset space. Therefore, driving safety can be improved and human-machine interaction experience can be enhanced.
It should be noted that: the foregoing sequence of the embodiments of the present application is only for describing, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments in part.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (8)

1. A gesture control method for an in-vehicle display screen, comprising:
acquiring an image set to be identified containing gesture actions; the image set to be identified is obtained based on the detection unit; the detection range of the detection unit comprises screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the image set to be identified comprises consecutive and different gesture actions;
identifying the image set to be identified to obtain an identification result; the recognition result comprises first image frame information containing a first gesture, position information of the first gesture in each frame of image, second image frame information containing a second gesture and position information of the second gesture in each frame of image;
if the next frame image of the current frame image exists in the first image frame information and is the first frame image in the second image frame information, and the matching degree value of the position of the first gesture in the current frame image and the position of the second gesture in the first frame image is larger than or equal to a preset value, determining that a transformation action of transforming the first gesture into the second gesture exists in a preset space of the first screen surface, and determining that a picture in a vehicle-mounted display screen corresponding to the preset space of the first screen surface is a picture to be shared; the first screen surface preset space corresponds to the position of the second gesture in the first frame image;
if the current frame image exists in the first image frame information and is the next frame image of the tail frame image in the second image frame information, and the matching degree value of the position of the second gesture in the tail frame image and the position of the first gesture in the current frame image is larger than or equal to a preset value, determining that a transformation action for transforming the second gesture to the first gesture exists in a second screen surface preset space, and displaying the picture to be shared in a vehicle-mounted display screen corresponding to the second screen surface preset space; and the second screen surface preset space corresponds to the position of a second gesture in the tail frame image.
2. The method of claim 1, wherein the detection unit comprises a camera;
the acquiring the image set to be recognized containing the gesture action comprises the following steps:
acquiring a plurality of frames of continuous images to be identified through the camera;
and detecting the multi-frame continuous images to be recognized according to the obtained image detection model to obtain the image set to be recognized containing the gesture.
3. The method according to claim 1, wherein the identifying the image set to be identified to obtain an identification result includes:
and identifying the image set to be identified according to the obtained target gesture identification model to obtain image frame information containing the target gesture and the position of the target gesture in each frame of image.
4. The method of claim 1, wherein the detection unit comprises a distance sensor; the at least two vehicle-mounted display screens are respectively provided with the distance sensor;
the acquiring the image set to be recognized containing the gesture action comprises the following steps:
collecting hand actions through the distance sensor, and generating information to be identified containing gesture actions;
and obtaining the image set to be identified based on the information to be identified.
5. The method of claim 1, wherein the detection unit comprises an infrared detector; the at least two vehicle-mounted display screens are respectively provided with the infrared detectors;
the acquiring the image set to be recognized containing the gesture action comprises the following steps:
collecting hand actions through the infrared detector, and generating information to be identified containing gesture actions;
and obtaining the image set to be identified based on the information to be identified.
6. The method according to claim 2, wherein the method further comprises:
acquiring a preset space on the surface of the second screen through the camera to obtain an image sequence; the second screen surface preset space comprises a plurality of subspaces, and the subspaces correspond to the controllable functions of the vehicle-mounted display screen corresponding to the second screen surface preset space one by one;
determining a manipulation gesture based on the sequence of images;
and determining a function to be controlled from the plurality of controllable functions according to the control gesture, and controlling the function to be controlled.
7. A gesture control apparatus for an in-vehicle display screen, comprising:
the acquisition module is used for acquiring an image set to be identified containing gesture actions; the image set to be identified is obtained based on the detection unit; the detection range of the detection unit comprises screen surface preset spaces corresponding to at least two vehicle-mounted display screens; the image set to be identified comprises consecutive and different gesture actions;
the identification module is used for identifying the image set to be identified to obtain an identification result; the recognition result comprises first image frame information containing a first gesture, position information of the first gesture in each frame of image, second image frame information containing a second gesture and position information of the second gesture in each frame of image;
the first determining module is used for determining that a transformation action from a first gesture to a second gesture exists in a first screen surface preset space if a next frame image of a current frame image exists in the first image frame information and is a first frame image in the second image frame information, and a matching degree value of a position of a first gesture in the current frame image and a position of a second gesture in the first frame image is larger than or equal to a preset value, and determining that a picture in a vehicle-mounted display screen corresponding to the first screen surface preset space is a picture to be shared; the first screen surface preset space corresponds to the position of the second gesture in the first frame image;
the second determining module is configured to determine that there is a transformation action from the second gesture to the first gesture in a second screen surface preset space if the first image frame information includes a next frame image in which the current frame image is a last frame image in the second image frame information, and a matching degree value of a position of the second gesture in the last frame image and a position of the first gesture in the current frame image is greater than or equal to a preset value, and display the picture to be shared in a vehicle-mounted display screen corresponding to the second screen surface preset space; and the second screen surface preset space corresponds to the position of a second gesture in the tail frame image.
8. A computer storage medium having stored therein at least one instruction or at least one program, the at least one instruction or the at least one program being loaded and executed by a processor to implement the gesture control method for a vehicle-mounted display screen according to any one of claims 1 to 6.
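The transition-detection condition in the first and second determining modules can be sketched as follows. This is an illustrative reading of the claim, not the patented implementation: the `FrameInfo` structure, the distance-based `match_degree` score, and the `detect_transition` function are all assumed names, and the claims do not specify how the matching degree value is computed.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class FrameInfo:
    """One frame in which a given gesture was recognized."""
    index: int                          # position of the frame in the capture sequence
    gesture_pos: Tuple[float, float]    # normalized (x, y) of the detected hand

def match_degree(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Illustrative position-match score: 1.0 when identical, decreasing with distance."""
    dx, dy = a[0] - b[0], a[1] - b[1]
    return max(0.0, 1.0 - (dx * dx + dy * dy) ** 0.5)

def detect_transition(first_frames: List[FrameInfo],
                      second_frames: List[FrameInfo],
                      threshold: float = 0.8) -> Optional[Tuple[float, float]]:
    """Return the position of a first-to-second gesture transformation, if one exists.

    Per the claim: the frame immediately after the last first-gesture frame must be
    the first second-gesture frame, and the gesture positions in those two frames
    must match at or above the preset value (threshold).
    """
    if not first_frames or not second_frames:
        return None
    last_first = first_frames[-1]
    head_second = second_frames[0]
    if (head_second.index == last_first.index + 1
            and match_degree(last_first.gesture_pos, head_second.gesture_pos) >= threshold):
        return head_second.gesture_pos  # locates the screen surface preset space
    return None
```

The release-side check in the second determining module is symmetric: the frame after the last second-gesture frame must be a first-gesture frame whose position matches the second gesture's last known position.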
CN202110084512.0A 2021-01-21 2021-01-21 Gesture control method and device for vehicle-mounted display screen and storage medium Active CN112905004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110084512.0A CN112905004B (en) 2021-01-21 2021-01-21 Gesture control method and device for vehicle-mounted display screen and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110084512.0A CN112905004B (en) 2021-01-21 2021-01-21 Gesture control method and device for vehicle-mounted display screen and storage medium

Publications (2)

Publication Number Publication Date
CN112905004A CN112905004A (en) 2021-06-04
CN112905004B true CN112905004B (en) 2023-05-26

Family

ID=76118230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110084512.0A Active CN112905004B (en) 2021-01-21 2021-01-21 Gesture control method and device for vehicle-mounted display screen and storage medium

Country Status (1)

Country Link
CN (1) CN112905004B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113253849A (en) * 2021-07-01 2021-08-13 湖北亿咖通科技有限公司 Display control method, device and equipment of control bar
CN114527924A (en) * 2022-02-16 2022-05-24 珠海读书郎软件科技有限公司 Control method based on double-screen device, storage medium and device
CN117218716B (en) * 2023-08-10 2024-04-09 中国矿业大学 DVS-based automobile cabin gesture recognition system and method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090707A (en) * 2013-07-11 2014-10-08 腾讯科技(北京)有限公司 Method, device and system for sharing content between intelligent terminals
CN104777981A (en) * 2015-04-24 2015-07-15 无锡天脉聚源传媒科技有限公司 Information fast sharing method and device
CN107678664A (en) * 2017-08-28 2018-02-09 中兴通讯股份有限公司 A kind of terminal interface switching, the method, apparatus and terminal of gesture processing
CN110109639A (en) * 2019-05-09 2019-08-09 北京伏羲车联信息科技有限公司 Multi-screen interaction method and onboard system
CN110231866A (en) * 2019-05-29 2019-09-13 中国第一汽车股份有限公司 Vehicular screen control method, system, vehicle and storage medium
US11554668B2 (en) * 2019-06-25 2023-01-17 Hyundai Mobis Co., Ltd. Control system and method using in-vehicle gesture input
CN111857468A (en) * 2020-07-01 2020-10-30 Oppo广东移动通信有限公司 Content sharing method and device, equipment and storage medium

Also Published As

Publication number Publication date
CN112905004A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112905004B (en) Gesture control method and device for vehicle-mounted display screen and storage medium
CN111931579B (en) Automatic driving assistance system and method using eye tracking and gesture recognition techniques
EP3491493B1 (en) Gesture based control of autonomous vehicles
EP2659336B1 (en) User interface, apparatus and method for gesture recognition
US9858702B2 (en) Device and method for signalling a successful gesture input
KR102182667B1 (en) An operating device comprising an eye tracker unit and a method for calibrating the eye tracker unit of the operating device
CN110045825A (en) Gesture recognition system for vehicle interaction control
MX2011004124A (en) Method and device for displaying information sorted into lists.
US20160132124A1 (en) Gesture determination apparatus and method, gesture operation apparatus, program, and recording medium
JP2005509973A (en) Method and apparatus for gesture-based user interface
EP3691926A1 (en) Display system in a vehicle
CN105027062A (en) Information processing device
CN109835260B (en) Vehicle information display method, device, terminal and storage medium
CN112959945B (en) Vehicle window control method and device, vehicle and storage medium
CN108733283A (en) Context vehicle user interface
GB2545005A (en) Responsive human machine interface
US20230078074A1 (en) Methods and devices for hand-on-wheel gesture interaction for controls
US20230143429A1 (en) Display controlling device and display controlling method
CN112905003A (en) Intelligent cockpit gesture control method and device and storage medium
KR101709129B1 (en) Apparatus and method for multi-modal vehicle control
CN110850975B (en) Electronic system with palm recognition, vehicle and operation method thereof
CN114217716A (en) Menu bar display method and device and electronic equipment
CN110297686B (en) Content display method and device
US11734928B2 (en) Vehicle controls and cabin interior devices augmented reality usage guide
WO2023105843A1 (en) Operation support device, operation support method, and operation support program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant