CN112215198A - Self-adaptive man-machine interaction system and method based on big data - Google Patents

Self-adaptive man-machine interaction system and method based on big data

Info

Publication number
CN112215198A
CN112215198A (application CN202011168303.6A); granted as CN112215198B
Authority
CN
China
Prior art keywords
user
unit
vehicle
coordinates
sight
Prior art date
Legal status
Granted
Application number
CN202011168303.6A
Other languages
Chinese (zh)
Other versions
CN112215198B (en)
Inventor
Zheng Xiaoyun (郑小云)
Current Assignee
Wuhan Chang'e Investment Partnership Enterprise LP
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202011168303.6A
Publication of CN112215198A
Application granted
Publication of CN112215198B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00: Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/48: Biological material, e.g. blood, urine; Haemocytometers
    • G01N33/50: Chemical analysis of biological material, e.g. blood, urine; Testing involving biospecific ligand binding methods; Immunological testing
    • G01N33/98: Chemical analysis of biological material involving alcohol, e.g. ethanol in breath
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107: Static hand or arm
    • G06V40/113: Recognition of static hand signs
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Hematology (AREA)
  • Chemical & Material Sciences (AREA)
  • Urology & Nephrology (AREA)
  • Human Computer Interaction (AREA)
  • Immunology (AREA)
  • Microbiology (AREA)
  • Pathology (AREA)
  • Biotechnology (AREA)
  • Signal Processing (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Food Science & Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Cell Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of big data, and in particular to a self-adaptive man-machine interaction system and method based on big data. The man-machine interaction system comprises a video transmission module, a holographic image module, an alcohol content detection module and a sight line forming module. The video transmission module is used for monitoring the vehicle from all angles and transmitting the video to the system. The holographic image module is used for starting a holographic display screen according to the user's gestures and for judging, from the magnified feature vector of the user's finger, which unit the user is specifically pointing at. The sight line forming module is used for framing and judging the specific range of the user's sight line while the user is driving. The alcohol content detection module is used for detecting and judging the alcohol content of the driver according to the weight detected in the main driving seat of the vehicle.

Description

Self-adaptive man-machine interaction system and method based on big data
Technical Field
The invention relates to the technical field of big data, in particular to a big data-based self-adaptive man-machine interaction system and method.
Background
With the progress of modern science and technology, human-computer interaction systems have developed rapidly. The human-computer interface has become an independent and important research field that penetrates deeply into people's lives and is now inseparable from them, with applications in many fields such as industrial and commercial workspace design, biological design, three-dimensional human body modelling and the automobile industry. Holographic displays are used most in the automobile field. At present, when a man-machine interaction system is used in an automobile, people generally look up specific contents themselves and must press selection buttons while waiting, which is very inconvenient and can even cause traffic accidents, so a holographic display is generally used to solve this problem;
However, when people use a holographic display screen, the system cannot judge which unit a person is specifically pointing at, and in the process of clicking a unit, pointing errors often cause the wrong unit to be clicked. When the screen observes users, brief stays of the sight line often lead to accidents. Furthermore, when a driver is detected to have been drinking, the system cannot prevent that user from driving; according to surveys, 98% of driving accidents are caused by alcohol, so people's safety while driving is of the utmost importance;
therefore, there is a need for an adaptive human-computer interaction system and method based on big data to solve the above problems.
Disclosure of Invention
The present invention provides a self-adaptive human-computer interaction system and method based on big data, so as to solve the problems proposed in the background art.
In order to solve the above technical problems, the invention provides the following technical scheme: a self-adaptive man-machine interaction system based on big data comprises a video transmission module, a holographic image module, an alcohol content detection module and a sight line forming module. The video transmission module monitors the vehicle from all angles and transmits the video to the system, so as to monitor the inside and outside of the vehicle and protect the safety of the user using the vehicle. The holographic image module starts a holographic display screen according to the user's gestures and judges the unit the user is specifically pointing at from the magnified feature vector of the user's finger, thereby ensuring the user's driving safety. The sight line forming module frames and judges the specific range of the user's sight line while driving, so that the system can learn the user's selection range from the sight line and ensure driving safety. The alcohol content detection module detects and judges the alcohol content of the driver according to the weight detected in the main driving seat, so that the user's driving safety is ensured and the user will not violate traffic rules by drunk driving. The video transmission module is connected with the holographic image module, and the video transmission module is connected with the alcohol content detection module.
Preferably, the video transmission module comprises a vehicle collision prediction unit, a GPRS positioning unit, a photographing recording unit and an automatic cleaning unit. The vehicle collision prediction unit performs curve fitting on the direction of, and distance between, the user vehicle and nearby vehicles, ensuring the user's safety in using the vehicle and preventing accidents. The GPRS positioning unit locates the positions of the user vehicle and nearby vehicles and displays the coordinates in a two-dimensional plane model, so that the user's specific position is known and driving safety is ensured. The photographing recording unit takes photographs for evidence when the user's vehicle is close to surrounding vehicles, reducing disputes between vehicle owners and providing proof when necessary. The automatic cleaning unit stores the photographs within a set time limit and provides a restore function, so that photographs can be taken at any time without consuming system memory. The output end of the vehicle collision prediction unit is connected with the input ends of the GPRS positioning unit, the photographing recording unit and the automatic cleaning unit.
Preferably, the holographic image module comprises a gesture memory unit, a feature area selection unit, an error rate calculation unit and a feature area confirmation unit. The gesture memory unit displays different operations on the holographic display screen according to the user's different gestures, so that the user can operate the screen quickly, conveniently and safely. The feature area selection unit magnifies the pointed-at range according to the effective area of the user's finger, so that the unit selected by the user is correct. The error rate calculation unit calculates the error rate of the user's past clicks in that range; the error rate is judged from cases where, after confirming a selected unit, the user selected another unit within a set time, so that the user can avoid the same error when selecting a unit this time. The feature area confirmation unit confirms the unit selected by the user from the angular inclination and distance between the effective area of the user's finger and the holographic display screen. The output end of the gesture memory unit is connected with the input ends of the feature area selection unit, the error rate calculation unit and the feature area confirmation unit.
The user can store gestures for page turning, sliding down, zooming in and zooming out in the gesture memory unit, which makes it convenient to operate the holographic display screen.
Preferably, the sight line forming module comprises a sight stopping unit, a sight acquiring unit and a converting unit. The sight stopping unit calculates how long the user's sight stays on the interface, so as to judge the user's state at that moment. The sight acquiring unit frames and magnifies the specific unit on which the user's sight stays, so that the system knows the user's specific operation range. The converting unit automatically converts the operation page into a voice operation mode when it detects that the user is driving, so that the user can drive safely. The output end of the sight stopping unit is connected with the input ends of the sight acquiring unit and the converting unit.
Preferably, the alcohol content detection module comprises a facial feature recognition unit, a designated-driver periphery recognition unit, an information sending unit and an intelligent electronic lock unit. When weight is detected in the main driving seat, the facial feature recognition unit performs face recognition on the person bearing that weight and judges whether alcohol is detected on the recognized user, so as to ensure the driver's safety. The intelligent electronic lock unit detects in real time whether the driver has taken alcohol; for a driver who has taken alcohol, the vehicle cannot be started, so driving under the influence is prohibited. The designated-driver periphery recognition unit locates the user and nearby designated drivers and contacts the designated driver closest to the user's position, ensuring the safety of the user and others. The information sending unit sends the user's position and the designated-driver information to the user's contact person; after the contact person's permission is obtained, the designated driver can open the user's vehicle door, so the contact person can reach the user and the designated driver at any time, protecting the user's information and personal safety. The output end of the facial feature recognition unit is connected with the input end of the intelligent electronic lock unit, and the output end of the designated-driver periphery recognition unit is connected with the input end of the information sending unit.
A self-adaptive man-machine interaction method based on big data comprises the following steps:
Q1: use the vehicle collision prediction unit to calculate the direction, distance and angle between the user vehicle and other nearby vehicles, and judge from the fitted curves whether the other vehicles affect the safe running of the user vehicle;
Q2: use the feature area confirmation unit to judge the specific unit the user is pointing at, from the angle and distance between the effective area of the user's finger and the holographic display screen;
Q3: use the sight line forming module to calculate how long the user's sight stays, judge the user's current condition, and adopt different responses for different conditions;
Q4: use the alcohol content detection module to judge the alcohol content of the user in the main driving seat, and select a nearby and highly rated designated driver; the user in the main driving seat cannot start the vehicle before the designated driver arrives.
In step Q1, according to the GPRS positioning unit and the two-dimensional plane model, the position coordinates of the user vehicle are detected in real time. Let A denote the set of head position coordinates of the user vehicle, B the set of tail position coordinates of the user vehicle, C the set of head position coordinates of the other vehicle, and D the set of tail position coordinates of the other vehicle, with M the closest distance between the user vehicle and the other vehicle. (The original equation images are not reproduced; the notation here is a reconstruction.)
According to the formula, the closest distance between the user vehicle's head and the other vehicle's head is M = min √((xᵢ − xⱼ)² + (yᵢ − yⱼ)²) over (xᵢ, yᵢ) ∈ A and (xⱼ, yⱼ) ∈ C.
A curve Z is set from the head and tail coordinates of the user vehicle, and a curve Z₁ is set from the head and tail coordinates of the other vehicle. When the curve Z and the curve Z₁ are detected to have an intersection point, the user vehicle needs to adjust the position of its head; when they have no intersection point, the user vehicle can run normally, wherein e and d are the slope and the constant of the curve.
In step Q2, the center coordinates of the effective area of the user's finger are P = {e, f}. When the effective area of the finger lies at the boundary of several selectable areas on the holographic display screen, the distances between the finger's effective-area coordinates and the coordinates of those areas are calculated; the set of area position coordinates at the boundary of the selected area is T = {(x₁, y₁), …, (xₙ, yₙ)}. (The original formula images are not reproduced; the notation is a reconstruction.)
According to the formula, the distance from the finger's effective area to area i is dᵢ = √((e − xᵢ)² + (f − yᵢ)²), and the area at the minimum distance from the finger's effective area is the unit selected by the user. When the area selected by the user is wrong, the position of the area is magnified and the distances are recalculated from the finger coordinates stored in the database.
In step Q3, a BP neural network is used: the angle formed between the user's sight line and the holographic display screen and the stay time of the sight line serve as the input layer of the BP neural network, and the user's current state serves as the output layer, so that different user behaviours are judged according to the user's different conditions.
Compared with the prior art, the invention has the following beneficial effects:
1. Using the video transmission module, curves can be set according to the direction of, and distance between, the user vehicle and other nearby vehicles, the positions of both vehicles are calculated in real time, and whether the two vehicles will collide is judged, ensuring the safety of both parties;
2. Using the holographic image module, the unit specifically selected by the user is confirmed from the content of the unit pointed at by the effective area of the user's finger, the user's past error rate in that area, and the angular inclination and distance between the finger's effective area and the holographic display screen, so that the user can operate quickly and conveniently;
3. Using the sight line forming module, the time the user's sight stays on the interface is calculated to judge the user's current state, and different measures are taken according to that state, ensuring the user's driving safety;
4. Using the alcohol content detection module, whether the driver in the main driving seat has taken alcohol is judged; if so, a nearby and highly rated designated driver is selected, and the driver cannot drive before the designated driver arrives, ensuring the safety of the driver and others.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of the module components of an adaptive human-computer interaction system and method based on big data according to the present invention;
FIG. 2 is a schematic diagram illustrating steps of an adaptive human-computer interaction system and method based on big data according to the present invention;
FIG. 3 is a schematic diagram of a holographic image module of an adaptive human-computer interaction system and method based on big data according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-3, the present invention provides the following technical solutions:
A self-adaptive man-machine interaction system based on big data comprises a video transmission module, a holographic image module, an alcohol content detection module and a sight line forming module. The video transmission module monitors the vehicle from all angles and transmits the video to the system, so as to monitor the inside and outside of the vehicle and protect the safety of the user using the vehicle. The holographic image module starts a holographic display screen according to the user's gestures and judges the unit the user is specifically pointing at from the magnified feature vector of the user's finger, thereby ensuring the user's driving safety. The sight line forming module frames and judges the specific range of the user's sight line while driving, so that the system can learn the user's selection range from the sight line and ensure driving safety. The alcohol content detection module detects and judges the alcohol content of the driver according to the weight detected in the main driving seat, so that the user's driving safety is ensured and the user will not violate traffic rules by drunk driving. The video transmission module is connected with the holographic image module, and the video transmission module is connected with the alcohol content detection module.
Preferably, the video transmission module comprises a vehicle collision prediction unit, a GPRS positioning unit, a photographing recording unit and an automatic cleaning unit. The vehicle collision prediction unit performs curve fitting on the direction of, and distance between, the user vehicle and nearby vehicles, ensuring the user's safety in using the vehicle and preventing accidents. The GPRS positioning unit locates the positions of the user vehicle and nearby vehicles and displays the coordinates in a two-dimensional plane model, so that the user's specific position is known and driving safety is ensured. The photographing recording unit takes photographs for evidence when the user's vehicle is close to surrounding vehicles, reducing disputes between vehicle owners and providing proof when necessary. The automatic cleaning unit stores the photographs within a set time limit and provides a restore function, so that photographs can be taken at any time without consuming system memory. The output end of the vehicle collision prediction unit is connected with the input ends of the GPRS positioning unit, the photographing recording unit and the automatic cleaning unit.
Preferably, the holographic image module comprises a gesture memory unit, a feature area selection unit, an error rate calculation unit and a feature area confirmation unit. The gesture memory unit displays different operations on the holographic display screen according to the user's different gestures, so that the user can operate the screen quickly, conveniently and safely. The feature area selection unit magnifies the pointed-at range according to the effective area of the user's finger, so that the unit selected by the user is correct. The error rate calculation unit calculates the error rate of the user's past clicks in that range; the error rate is judged from cases where, after confirming a selected unit, the user selected another unit within a set time, so that the user can avoid the same error when selecting a unit this time. The feature area confirmation unit confirms the unit selected by the user from the angular inclination and distance between the effective area of the user's finger and the holographic display screen; when the error rate for an area's content is highest, extra attention is paid to the specific content the user points at in that area.
The user can store gestures for page turning, sliding down, zooming in and zooming out in the gesture memory unit, which makes it convenient to operate the holographic display screen; the stored gestures can be called from the database the next time.
Preferably, the sight line forming module comprises a sight stopping unit, a sight acquiring unit and a converting unit. The sight stopping unit calculates how long the user's sight stays on the interface, so as to judge the user's state at that moment. The sight acquiring unit frames and magnifies the specific unit on which the user's sight stays, so that the system knows the user's specific operation range. The converting unit automatically converts the operation page into a voice operation mode when it detects that the user is driving, so that the user can drive safely. The output end of the sight stopping unit is connected with the input ends of the sight acquiring unit and the converting unit.
Preferably, the alcohol content detection module comprises a facial feature recognition unit, a designated-driver periphery recognition unit, an information sending unit and an intelligent electronic lock unit. When weight is detected in the main driving seat, the facial feature recognition unit performs face recognition on the person bearing that weight and judges whether alcohol is detected on the recognized user, so as to ensure the driver's safety. The intelligent electronic lock unit detects in real time whether the driver has taken alcohol; for a driver who has taken alcohol, the vehicle cannot be started, so driving under the influence is prohibited. The designated-driver periphery recognition unit locates the user and nearby designated drivers and contacts the designated driver closest to the user's position, ensuring the safety of the user and others. The information sending unit sends the user's position and the designated-driver information to the user's contact person; after the contact person's permission is obtained, the designated driver can open the user's vehicle door, so the contact person can reach the user and the designated driver at any time, protecting the user's information and personal safety;
the user vehicle is a user using the vehicle, and the other vehicles are vehicles closer to the user vehicle.
A self-adaptive man-machine interaction method based on big data comprises the following steps:
Q1: use the vehicle collision prediction unit to calculate the direction, distance and angle between the user vehicle and other nearby vehicles, and judge from the fitted curves whether the other vehicles affect the safe running of the user vehicle;
Q2: use the feature area confirmation unit to judge the specific unit the user is pointing at, from the angle and distance between the effective area of the user's finger and the holographic display screen;
Q3: use the sight line forming module to calculate how long the user's sight stays, judge the user's current condition, and adopt different responses for different conditions;
Q4: use the alcohol content detection module to judge the alcohol content of the user in the main driving seat, and select a nearby and highly rated designated driver; the user in the main driving seat cannot start the vehicle before the designated driver arrives (a minimal selection sketch follows these steps).
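For illustration, here is a minimal Python sketch of the step Q4 selection rule, choosing the highest-rated designated driver within range and breaking ties by distance; the names, positions and ratings are invented, and this code is not part of the original disclosure:

```python
import math

def choose_designated_driver(user_pos, drivers, radius):
    """drivers: list of (name, (x, y) position, rating). Returns a name or None."""
    nearby = [(name, math.dist(user_pos, pos), rating)
              for name, pos, rating in drivers
              if math.dist(user_pos, pos) <= radius]
    if not nearby:
        return None
    # Highest rating first; among equal ratings, the nearest driver wins.
    nearby.sort(key=lambda t: (-t[2], t[1]))
    return nearby[0][0]

drivers = [("A", (1.0, 2.0), 4.8), ("B", (0.5, 0.5), 4.9), ("C", (9.0, 9.0), 5.0)]
print(choose_designated_driver((0.0, 0.0), drivers, radius=5.0))   # -> "B"
```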
In step Q1, according to the GPRS positioning unit and the two-dimensional plane model, the position coordinates of the user vehicle are detected in real time. Let A denote the set of head position coordinates of the user vehicle, B the set of tail position coordinates of the user vehicle, C the set of head position coordinates of the other vehicle, and D the set of tail position coordinates of the other vehicle, with M the closest distance between the user vehicle and the other vehicle. (The original equation images are not reproduced; the notation here is a reconstruction.)
According to the formula, the closest distance between the user vehicle's head and the other vehicle's head is M = min √((xᵢ − xⱼ)² + (yᵢ − yⱼ)²) over (xᵢ, yᵢ) ∈ A and (xⱼ, yⱼ) ∈ C.
A curve Z is set from the head and tail coordinates of the user vehicle, and a curve Z₁ is set from the head and tail coordinates of the other vehicle. When the curve Z and the curve Z₁ are detected to have an intersection point, the user vehicle needs to adjust the position of its head; when they have no intersection point, the user vehicle can run normally, wherein e and d are the slope and the constant of the curve.
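For illustration only, the following is a minimal Python sketch of the step Q1 intersection test, under the assumption (suggested by Example 1 below) that the fitted curves are straight lines; the function names and demonstration coordinates are invented, and this code is not part of the original disclosure:

```python
# Sketch of step Q1, assuming the fitted curves are straight lines
# y = e*x + d, with e the slope and d the constant term.

def fit_line(head, tail):
    """Fit a line through a vehicle's head and tail coordinates."""
    (x1, y1), (x2, y2) = head, tail
    e = (y2 - y1) / (x2 - x1)   # slope
    d = y1 - e * x1             # constant term
    return e, d

def intersection_x(line_a, line_b):
    """x-coordinate where two lines meet, or None if they are parallel."""
    (e1, d1), (e2, d2) = line_a, line_b
    if e1 == e2:
        return None
    return (d2 - d1) / (e1 - e2)   # solve e1*x + d1 = e2*x + d2

# Invented coordinates: two vehicles travelling on parallel headings.
Z = fit_line((0.0, 0.0), (2.0, 1.0))     # user vehicle
Z1 = fit_line((10.0, 8.0), (12.0, 9.0))  # other vehicle
x = intersection_x(Z, Z1)
print("adjust vehicle head position" if x is not None else "normal driving")
```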
In step Q2, the center coordinates of the effective area of the user's finger are P = {e, f}. When the effective area of the finger lies at the boundary of several selectable areas on the holographic display screen, the distances between the finger's effective-area coordinates and the coordinates of those areas are calculated; the set of area position coordinates at the boundary of the selected area is T = {(x₁, y₁), …, (xₙ, yₙ)}. (The original formula images are not reproduced; the notation is a reconstruction.)
According to the formula, the distance from the finger's effective area to area i is dᵢ = √((e − xᵢ)² + (f − yᵢ)²), and the area at the minimum distance from the finger's effective area is the unit selected by the user. When the area selected by the user is wrong, the position of the area is magnified and the distances are recalculated from the finger coordinates stored in the database.
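As an illustration of this nearest-region rule, here is a short Python sketch; the region names and coordinates are invented for the example and are not taken from the patent:

```python
import math

# Choose the selected unit as the region whose coordinates are nearest
# the finger effective-area center P = (e, f), as in step Q2.
def select_region(p, regions):
    """regions: dict mapping region name -> (x, y) center coordinates."""
    return min(regions, key=lambda name: math.dist(p, regions[name]))

regions = {"navigation": (4.0, 2.5), "music": (6.0, 2.5), "phone": (8.0, 2.5)}
print(select_region((5.2, 2.4), regions))   # -> "music" (closest, at ~0.81)
```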
In step Q3, a BP neural network is used: the angle formed between the user's sight line and the holographic display screen and the stay time of the sight line serve as the input layer of the BP neural network, and the user's current state serves as the output layer, so that different user behaviours are judged according to the user's different conditions.
Example 1: according to the GPRS positioning unit and the two-dimensional plane model, the position coordinates of the user vehicle are detected in real time, with the coordinate sets and the closest distance M defined as in step Q1. (The original equation images are not reproduced.)
The head and tail position coordinates of the user vehicle, (120, 230) and (122, 219), and the head and tail position coordinates of the other vehicle, (130, 250) and (110, 220), are substituted into the set curves Z and Z₁.
Setting a curve from the head and tail coordinates of the user vehicle gives the final curve Z = −5.5e − 430;
setting a curve from the head and tail coordinates of the other vehicle gives the final curve Z₁ = 1.5e + 55.
Additionally, setting −5.5e − 430 = 1.5e + 55 gives a negative intersection coordinate, so the two curves have no intersection point and the user can drive normally;
when the curve Z and the curve Z₁ are detected to have an intersection point, the user vehicle needs to adjust the position of its head, wherein e and d are the slope and the constant of the curve.
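The intersection test in Example 1 can be checked directly from the curves as stated in the text:

```python
# Solve -5.5*e - 430 = 1.5*e + 55 for e, using Example 1's stated curves.
e = (55 - (-430)) / (-5.5 - 1.5)
print(e)   # about -69.29: negative, so the curves have no intersection point
```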
Example 2: using the BP neural network, the angle formed between the user's sight line and the holographic display screen and the stay time of the sight line serve as the input layer of the BP neural network, and the user's current state serves as the output layer, so that different user behaviours are judged according to the user's different conditions. The specific user states estimated by the BP neural network were given as a table image in the original publication and are not reproduced here.
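As a minimal sketch of the step Q3 classifier, the following uses scikit-learn's MLPClassifier as a stand-in for the BP neural network; the training samples and state labels are invented for illustration, since the patent does not disclose its data or code:

```python
from sklearn.neural_network import MLPClassifier

# Inputs: [gaze-screen angle in degrees, gaze dwell time in seconds].
# Labels: invented user states for this toy example.
X = [[5, 0.2], [10, 1.5], [45, 3.0], [60, 0.1], [30, 2.5], [8, 0.3]]
y = ["glancing", "reading", "selecting", "distracted", "selecting", "glancing"]

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[40, 2.8]]))   # expected "selecting" on this toy data
```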
it is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A self-adaptive man-machine interaction system based on big data, characterized in that: the man-machine interaction system comprises a video transmission module, a holographic image module, an alcohol content detection module and a sight line forming module; the video transmission module is used for monitoring the vehicle from all angles and transmitting the video to the system; the holographic image module is used for starting a holographic display screen according to the user's gestures and judging the unit the user is specifically pointing at from the magnified feature vector of the user's finger; the sight line forming module is used for framing and judging the specific range of the user's sight line while the user is driving; the alcohol content detection module is used for detecting and judging the alcohol content of the driver according to the weight detected in the main driving seat; the video transmission module is connected with the holographic image module, and the video transmission module is connected with the alcohol content detection module.
2. The big-data-based adaptive man-machine interaction system according to claim 1, characterized in that: the video transmission module comprises a vehicle collision prediction unit, a GPRS positioning unit, a photographing recording unit and an automatic cleaning unit; the vehicle collision prediction unit is used for performing curve fitting on the direction of, and distance between, the user vehicle and nearby vehicles; the GPRS positioning unit is used for locating the positions of the user vehicle and nearby vehicles and displaying the coordinates in a two-dimensional plane model; the photographing recording unit is used for taking photographs for evidence when the user's vehicle is close to surrounding vehicles; the automatic cleaning unit is used for storing the photographs within a set time limit and provides a restore function; the output end of the vehicle collision prediction unit is connected with the input ends of the GPRS positioning unit, the photographing recording unit and the automatic cleaning unit.
3. The big-data-based adaptive man-machine interaction system according to claim 1, characterized in that: the holographic image module comprises a gesture memory unit, a feature area selection unit, an error rate calculation unit and a feature area confirmation unit; the gesture memory unit is used for displaying different operations on the holographic display screen according to the user's different gestures; the feature area selection unit is used for magnifying the pointed-at range according to the effective area of the user's finger; the error rate calculation unit is used for calculating the error rate of the user's past clicks in that range; the feature area confirmation unit is used for confirming the unit selected by the user from the angular inclination and distance between the effective area of the user's finger and the holographic display screen; the output end of the gesture memory unit is connected with the input ends of the feature area selection unit, the error rate calculation unit and the feature area confirmation unit.
4. The big-data-based adaptive man-machine interaction system according to claim 3, characterized in that: the user can save gestures for page turning, sliding down, zooming in and zooming out in the gesture memory unit.
5. The big-data-based adaptive man-machine interaction system according to claim 1, characterized in that: the sight line forming module comprises a sight stopping unit, a sight acquiring unit and a converting unit; the sight stopping unit is used for calculating the time the user's sight stays on the interface; the sight acquiring unit is used for framing and magnifying the specific unit on which the user's sight stays; the converting unit is used for automatically converting the operation page into a voice operation mode when it is detected that the user is driving; the output end of the sight stopping unit is connected with the input ends of the sight acquiring unit and the converting unit.
6. The big-data-based adaptive man-machine interaction system according to claim 1, characterized in that: the alcohol content detection module comprises a facial feature recognition unit, a designated-driver periphery recognition unit, an information sending unit and an intelligent electronic lock unit; when weight is detected in the main driving seat, the facial feature recognition unit performs face recognition on the person bearing that weight and judges whether alcohol is detected on the recognized user; the intelligent electronic lock unit detects in real time whether the driver has taken alcohol, and for a driver who has taken alcohol the vehicle cannot be started; the designated-driver periphery recognition unit locates the user and nearby designated drivers and contacts the designated driver closest to the user's position; the information sending unit sends the user's position and the designated-driver information to the user's contact person, and after the contact person's permission is obtained, the designated driver can open the user's vehicle door; the output end of the facial feature recognition unit is connected with the input end of the intelligent electronic lock unit, and the output end of the designated-driver periphery recognition unit is connected with the input end of the information sending unit.
7. A self-adaptive man-machine interaction method based on big data, characterized in that: the man-machine interaction method comprises the following steps:
Q1: use the vehicle collision prediction unit to calculate the direction, distance and angle between the user vehicle and other nearby vehicles, and judge from the fitted curves whether the other vehicles affect the safe running of the user vehicle;
Q2: use the feature area confirmation unit to judge the specific unit the user is pointing at, from the angle and distance between the effective area of the user's finger and the holographic display screen;
Q3: use the sight line forming module to calculate how long the user's sight stays, judge the user's current condition, and adopt different responses for different conditions;
Q4: use the alcohol content detection module to judge the alcohol content of the user in the main driving seat, and select a nearby and highly rated designated driver; the user in the main driving seat cannot start the vehicle before the designated driver arrives.
8. The big-data-based adaptive man-machine interaction method according to claim 7, characterized in that: in step Q1, according to the GPRS positioning unit and the two-dimensional plane model, the position coordinates of the user vehicle are detected in real time; let A denote the set of head position coordinates of the user vehicle, B the set of tail position coordinates of the user vehicle, C the set of head position coordinates of the other vehicle, and D the set of tail position coordinates of the other vehicle, with M the closest distance between the user vehicle and the other vehicle (the original equation images are not reproduced); according to the formula, the closest distance between the user vehicle's head and the other vehicle's head is M = min √((xᵢ − xⱼ)² + (yᵢ − yⱼ)²) over (xᵢ, yᵢ) ∈ A and (xⱼ, yⱼ) ∈ C; a curve Z is set from the head and tail coordinates of the user vehicle, and a curve Z₁ is set from the head and tail coordinates of the other vehicle; when the curve Z and the curve Z₁ are detected to have an intersection point, the user vehicle needs to adjust the position of its head, and when they have no intersection point, the user vehicle can run normally, wherein e and d are the slope and the constant of the curve.
9. The big-data-based adaptive man-machine interaction method according to claim 7, characterized in that: in step Q2, the center coordinates of the effective area of the user's finger are P = {e, f}; when the effective area of the finger lies at the boundary of several selectable areas on the holographic display screen, the distances between the finger's effective-area coordinates and the coordinates of those areas are calculated; the set of area position coordinates at the boundary of the selected area is T = {(x₁, y₁), …, (xₙ, yₙ)} (the original formula images are not reproduced); according to the formula dᵢ = √((e − xᵢ)² + (f − yᵢ)²), the area at the minimum distance from the finger's effective area is the unit selected by the user; when the area selected by the user is wrong, the position of the area is magnified and the distances are recalculated from the finger coordinates stored in the database.
10. The big-data-based adaptive man-machine interaction method according to claim 7, characterized in that: in step Q3, a BP neural network is used: the angle formed between the user's sight line and the holographic display screen and the stay time of the sight line serve as the input layer of the BP neural network, and the user's current state serves as the output layer, so that different user behaviours are judged according to the user's different conditions.
CN202011168303.6A 2020-10-28 2020-10-28 Big data-based self-adaptive human-computer interaction system and method Active CN112215198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011168303.6A CN112215198B (en) 2020-10-28 2020-10-28 Big data-based self-adaptive human-computer interaction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011168303.6A CN112215198B (en) 2020-10-28 2020-10-28 Big data-based self-adaptive human-computer interaction system and method

Publications (2)

Publication Number Publication Date
CN112215198A (en) 2021-01-12
CN112215198B (en) 2024-07-12

Family

ID=74057211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011168303.6A Active CN112215198B (en) 2020-10-28 2020-10-28 Big data-based self-adaptive human-computer interaction system and method

Country Status (1)

Country Link
CN (1) CN112215198B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI845168B (en) 2023-02-17 2024-06-11 圓展科技股份有限公司 Method and system for zooming-in/out based on a target object

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101144908B1 (en) * 2011-04-27 2012-05-14 동의대학교 산학협력단 System and method for 4 sided monitoring integrated vehicle black box based on most network
CN103303224A (en) * 2013-06-18 2013-09-18 桂林电子科技大学 Vehicle-mounted equipment gesture control system and usage method thereof
CN105584368A (en) * 2014-11-07 2016-05-18 威斯通全球技术公司 System For Information Transmission In A Motor Vehicle
KR101709129B1 (en) * 2015-10-08 2017-02-22 국민대학교산학협력단 Apparatus and method for multi-modal vehicle control
CN206155363U (en) * 2016-09-28 2017-05-10 马文强 Eye movement behavior analysis and vehicle screen controlling means
CN107199886A (en) * 2017-05-04 2017-09-26 河池学院 One kind is driven when intoxicated limits device
CN107390874A (en) * 2017-07-27 2017-11-24 深圳市泰衡诺科技有限公司 A kind of intelligent terminal control method and control device based on human eye
CN108597251A (en) * 2018-04-02 2018-09-28 昆明理工大学 A kind of traffic intersection distribution vehicle collision prewarning method based on car networking
CN109263637A (en) * 2018-10-12 2019-01-25 北京双髻鲨科技有限公司 A kind of method and device of prediction of collision
CN109406161A (en) * 2018-09-13 2019-03-01 行为科技(北京)有限公司 A kind of preceding defence crash tests system and its test method based on distance test
CN109407845A (en) * 2018-10-30 2019-03-01 盯盯拍(深圳)云技术有限公司 Screen exchange method and screen interactive device
CN109532662A (en) * 2018-11-30 2019-03-29 广州鹰瞰信息科技有限公司 A kind of spacing and Collision time calculation method and device
CN110276988A (en) * 2019-06-26 2019-09-24 重庆邮电大学 A kind of DAS (Driver Assistant System) based on collision warning algorithm
CN110325953A (en) * 2017-02-23 2019-10-11 三星电子株式会社 Screen control method and equipment for virtual reality service
CN110825216A (en) * 2018-08-10 2020-02-21 北京魔门塔科技有限公司 Method and system for man-machine interaction of driver during driving
CN111144258A (en) * 2019-12-18 2020-05-12 上海擎感智能科技有限公司 Vehicle designated driving method, terminal equipment, computer storage medium and system

Also Published As

Publication number Publication date
CN112215198B (en) 2024-07-12

Similar Documents

Publication Publication Date Title
KR102469234B1 (en) Driving condition analysis method and device, driver monitoring system and vehicle
KR102469233B1 (en) Driving state detection method and device, driver monitoring system and vehicle
CN102592143B (en) Method for detecting phone holding violation of driver in driving
CN112389448B (en) Abnormal driving behavior identification method based on vehicle state and driver state
Uma et al. Accident prevention and safety assistance using IOT and machine learning
JP3257310B2 (en) Inattentive driving detection device
JP4633043B2 (en) Image processing device
US20200001892A1 (en) Passenger assisting apparatus, method, and program
US10817751B2 (en) Learning data creation method, learning method, risk prediction method, learning data creation device, learning device, risk prediction device, and recording medium
US20150009010A1 (en) Vehicle vision system with driver detection
CN111461020A (en) Method and device for identifying behaviors of insecure mobile phone and related storage medium
CN114872713A (en) Device and method for monitoring abnormal driving state of driver
CN112238859A (en) Driving support device
CN105117096A (en) Image identification based anti-tracking method and apparatus
Guria et al. Iot-enabled driver drowsiness detection using machine learning
CN107284449A (en) A kind of traffic safety method for early warning and system, automobile, readable storage medium storing program for executing
Rani et al. Development of an Automated Tool for Driver Drowsiness Detection
US20120189161A1 (en) Visual attention apparatus and control method based on mind awareness and display apparatus using the visual attention apparatus
CN106874831A (en) Driving behavior method for detecting and its system
CN112215198B (en) Big data-based self-adaptive human-computer interaction system and method
Amanullah et al. Accident prevention by eye-gaze tracking using imaging Constraints
CN116968765B (en) Lane departure warning method and system with self-adaptive warning time interval
CN111368590A (en) Emotion recognition method and device, electronic equipment and storage medium
Zhai et al. A detection model for driver's unsafe states based on real-time face-vision
Azaiz et al. In-cabin occupant monitoring system based on improved Yolo, deep reinforcement learning, and multi-task CNN for autonomous driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240610

Address after: 430000, Building 1-10, C4 Annex, Biological Innovation Park, No. 666 Gaoxin Avenue, Donghu New Technology Development Zone, Wuhan City, Hubei Province

Applicant after: Wuhan Chang'e Investment Partnership Enterprise (Limited Partnership)

Country or region after: China

Address before: No. 87, Jinzhou North Road, Huangpu District, Guangzhou, Guangdong 510715

Applicant before: Zheng Xiaoyun

Country or region before: China

GR01 Patent grant