Background
With the progress of modern science and technology, human-computer interaction systems have developed rapidly. The human-computer interaction interface has become an independent and important research field that deeply penetrates people's lives, and human-computer interaction systems are applied in many areas, such as industrial and commercial workspace design, biological design, three-dimensional human body modeling, and the automobile industry. Holographic displays are most widely used in the automobile field. At present, when a human-computer interaction system is used in an automobile, people generally look up specific content by themselves and must press selection buttons while waiting, which is very inconvenient and can even cause traffic accidents; holographic displays are therefore generally used to solve this problem;
however, when people use a holographic display screen, the unit a person is specifically pointing at often cannot be judged, so in the process of clicking a unit, pointing errors frequently cause the wrong unit to be clicked. When a holographic display screen is used to observe the user, a brief stay of the sight line often leads to accidents. Moreover, when a driver is detected to have been drinking, the system cannot prevent the user from driving; according to survey data, 98% of driving accidents are caused by alcohol, so the safety of people during driving is of primary importance;
therefore, there is a need for an adaptive human-computer interaction system and method based on big data to solve the above problems.
Disclosure of Invention
The present invention provides a self-adaptive human-computer interaction system and method based on big data, so as to solve the problems proposed in the background art.
In order to solve the above technical problems, the invention provides the following technical scheme: a self-adaptive man-machine interaction system based on big data comprises a video transmission module, a holographic image module, an alcohol content detection module and a sight line forming module. The video transmission module is used for monitoring all angles of the vehicle and transmitting the videos, so as to monitor the inside and outside of the vehicle and protect the safety of the user using the vehicle. The holographic image module is used for starting the holographic display screen according to the gestures of the user and judging the unit specifically pointed at by the user according to a specific characteristic vector amplified from the user's fingers, so as to ensure the driving safety of the user. The sight line forming module is used for performing frame-selection judgment on the specific range of the user's sight line while the user is driving, so that the system can know the user's selection range according to the sight line. The alcohol content detection module is used for detecting and judging the alcohol content of the main driving user when a weight is detected in the main driving seat, so that the user does not violate traffic rules by drunk driving during the driving process. The video transmission module is connected with the holographic image module, and the video transmission module is connected with the alcohol content detection module.
Preferably, the video transmission module comprises a vehicle collision prediction unit, a GPRS positioning unit, a photographing recording unit and an automatic cleaning unit. The vehicle collision prediction unit is used for performing curve fitting on the direction and distance between the user vehicle and nearby vehicles, so that the vehicle-using safety of the user is ensured and no accident is caused. The GPRS positioning unit is used for positioning the user vehicle and nearby vehicles and displaying their coordinates in a two-dimensional plane model, so that the specific position of the user can be known and driving safety is ensured. The photographing recording unit is used for photographing and obtaining evidence when the user vehicle is close to surrounding vehicles, so that disputes between vehicle owners are reduced and proof is available when necessary. The automatic cleaning unit is used for storing the photographed pictures within a set time limit and is provided with a restore function, so that the photos remain retrievable at any time without affecting system memory. The output end of the vehicle collision prediction unit is connected with the input ends of the GPRS positioning unit, the photographing recording unit and the automatic cleaning unit.
Preferably, the holographic image module comprises a gesture memory unit, a feature area selection unit, an error rate calculation unit and a feature area confirmation unit. The gesture memory unit is used for displaying different operations on the holographic display screen according to different gestures of the user, so that the user can operate the holographic display screen quickly, conveniently and safely. The feature area selection unit is used for amplifying the pointed-at range according to the effective area of the user's finger, so that the unit selected by the user is correct. The error rate calculation unit is used for calculating the error rate of the user's past clicks in that range; the error rate is judged from how often the user selected another unit within a set time after confirming a selection, so that the user can avoid repeating the same error when selecting a unit this time. The feature area confirmation unit is used for confirming the unit selected by the user according to the angle inclination and distance between the effective area of the user's finger and the holographic display screen. The output end of the gesture memory unit is connected with the input ends of the feature area selection unit, the error rate calculation unit and the feature area confirmation unit.
The user can store the gestures for page turning, sliding, zooming in and zooming out in the gesture memory unit, so that the user can operate the holographic display screen conveniently.
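The gesture memory unit described above can be sketched as a simple gesture-to-action registry. This is a minimal illustrative sketch, not the patent's implementation; all function and action names are our own assumptions.

```python
# Hypothetical sketch of the gesture memory unit: a registry mapping
# stored gesture names to holographic-display actions.
gesture_actions = {}

def store_gesture(name, action):
    """Store a user-defined gesture so it can be recalled later."""
    gesture_actions[name] = action

def recognize(name):
    """Look up a stored gesture; returns its action, or None if unknown."""
    return gesture_actions.get(name)

# Store the gestures named in the text: page turning, sliding, zooming.
store_gesture("page_turn", "next_page")
store_gesture("slide_down", "scroll_down")
store_gesture("zoom_in", "enlarge_view")
store_gesture("zoom_out", "shrink_view")
```

A dictionary keeps lookup constant-time and lets the user add or overwrite gestures at any time, matching the "store and recall" behavior described above.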
Preferably, the sight forming module comprises a sight stopping unit, a sight acquiring unit and a converting unit. The sight stopping unit is used for calculating the time the user's sight stays on the interface, so as to judge the user's state at that moment. The sight acquiring unit is used for frame-selecting and amplifying the specific unit on which the user's sight stops, so that the system can know the specific operation range of the user. The converting unit is used for automatically converting the operation page into a voice operation mode when the user is detected to be driving, so that the user can drive safely. The output end of the sight stopping unit is connected with the input ends of the sight acquiring unit and the converting unit.
Preferably, the alcohol content detection module comprises a facial feature recognition unit, a designated-driver periphery recognition unit, an information sending unit and an intelligent electronic lock unit. The facial feature recognition unit is used for performing face recognition detection when a weight is detected in the main driving seat, and judging whether the identified main driving user has alcohol in his or her system, so as to ensure the driving safety of the main driver. The intelligent electronic lock unit is used for detecting in real time whether the main driving user has consumed alcohol; if so, the vehicle cannot be started, so that the main driving user is prohibited from driving after drinking. The designated-driver periphery recognition unit is used for positioning according to the user's location and the locations of designated drivers, and contacting the designated driver closest to the user, so as to ensure the safety of the user and others. The information sending unit is used for sending the user's position and the designated-driver information to a contact person of the user; after the contact person's permission is obtained, the designated driver can open the user's vehicle door, so that the contact person can reach the user and the designated driver at any time, and the user's information and personal safety are ensured. The output end of the facial feature recognition unit is connected with the input end of the intelligent electronic lock unit, and the output end of the designated-driver periphery recognition unit is connected with the input end of the information sending unit.
A self-adaptive man-machine interaction method based on big data comprises the following steps:
Q1: calculating the direction, distance and angle between the user vehicle and other nearby vehicles by using the vehicle collision prediction unit, and judging whether the curves fitted for the other vehicles affect the running safety of the user vehicle;
Q2: judging the specific unit pointed at by the user according to the angle and distance between the effective area of the user's finger and the holographic display screen, by using the feature area confirmation unit;
Q3: calculating the stay time of the user's sight by using the sight forming module, judging the user's current condition, and adopting different coping behaviors according to that condition;
Q4: judging the alcohol content of the main driving seat user by using the alcohol content detection module, and selecting a designated driver who is close to the user's position and highly rated, the main driving seat user being unable to start the vehicle before the designated driver arrives.
In the step Q1, according to the GPRS positioning unit and the two-dimensional plane model, the position coordinates of the user vehicle are detected in real time. Let the set of head position coordinates of the user vehicle be A, the set of tail position coordinates of the user vehicle be B, the set of head position coordinates of the other vehicle be C, and the set of tail position coordinates of the other vehicle be D.

According to the formula

M = min{ √((x − x′)² + (y − y′)²) : (x, y) ∈ A, (x′, y′) ∈ C },

M is the closest distance between the user's vehicle head and the other vehicle's head.

A curve Z is set according to the head and tail coordinates of the user vehicle:

Z: y = e₁x + d₁;

a curve Z′ is set according to the head and tail coordinates of the other vehicle:

Z′: y = e₂x + d₂.

When the curves Z and Z′ are detected to have an intersection point ahead of the vehicles, the user vehicle needs to adjust the position of its head; when Z and Z′ are detected to have no intersection point, the user vehicle can run normally, wherein e and d are the slope and the constant term of each fitted curve.
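The step-Q1 computation can be sketched in Python. This is a minimal sketch under our own assumptions: function names are illustrative, each coordinate set is reduced to point lists, and the trajectory lines are assumed non-vertical.

```python
import math

def fit_line(p1, p2):
    """Fit y = e*x + d through two points; returns (e, d).
    Assumes the two x-coordinates differ (non-vertical line)."""
    (x1, y1), (x2, y2) = p1, p2
    e = (y2 - y1) / (x2 - x1)   # slope
    d = y1 - e * x1             # constant term
    return e, d

def closest_head_distance(heads_a, heads_c):
    """Closest distance M between the two vehicles' head coordinates."""
    return min(math.dist(p, q) for p in heads_a for q in heads_c)

def trajectories_intersect(line1, line2):
    """Return the x-coordinate where the two fitted lines meet,
    or None if they are parallel (no intersection point)."""
    e1, d1 = line1
    e2, d2 = line2
    if e1 == e2:
        return None
    return (d2 - d1) / (e1 - e2)
```

A real system would restrict the intersection test to the road segment ahead of both vehicles rather than the infinite lines; that refinement is omitted here.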
In the step Q2, the center coordinate of the effective area of the user's finger is P = (e, f). When the effective area of the user's finger lies at the boundary of several selectable areas on the holographic display screen, the distances between the finger's effective-area coordinate and the coordinates of those areas are calculated. Let the set of area position coordinates at the boundary of the selected area be T = {(x₁, y₁), (x₂, y₂), …, (xₙ, yₙ)}.

According to the formula

dᵢ = √((e − xᵢ)² + (f − yᵢ)²), i = 1, 2, …, n,

the area whose distance dᵢ to the effective area of the user's finger is minimum is the unit selected by the user. When the area selected by the user is wrong, the position of the area is amplified and the distances are recalculated according to the finger coordinates of the user stored in the database.
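The nearest-area rule of step Q2 amounts to an argmin over Euclidean distances, which can be sketched as follows. The coordinates below are illustrative and not taken from the patent.

```python
import math

def select_unit(finger_center, region_centers):
    """Return the index of the region whose center is closest to the
    center P = (e, f) of the finger's effective area."""
    e, f = finger_center
    distances = [math.dist((e, f), center) for center in region_centers]
    return distances.index(min(distances))

# Illustrative boundary-region centers and a finger position:
regions = [(10, 10), (30, 10), (10, 30)]
chosen = select_unit((12, 11), regions)  # nearest region is regions[0]
```

On a tie, `index(min(...))` picks the first region in the list; a real interface might instead fall back to the amplification-and-recalculation step described above.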
In the step Q3, a BP (back-propagation) neural network is used: the degree of the included angle formed between the user's sight line and the holographic display screen and the stay time of the sight line serve as the input layer of the BP neural network, and the current state of the user serves as the output layer, so that different coping behaviors are adopted according to the different states of the user.
Compared with the prior art, the invention has the following beneficial effects:
1. By using the video transmission module, curves can be fitted according to the direction and distance between the user vehicle and other nearby vehicles, the positions of both vehicles are calculated in real time, and whether the two vehicles will collide is judged, so that the driving safety of both parties is guaranteed;
2. By using the holographic image module, the unit specifically selected by the user is confirmed according to the content of the unit pointed at by the effective area of the user's finger, the user's past error rate for that area, and the angle inclination and distance between the finger's effective area and the holographic display screen, so that the user can operate quickly and conveniently;
3. By using the sight forming module, the time the user's sight stays on the interface is calculated to judge the user's current state, and different measures are taken according to different states, so that the driving safety of the user is ensured;
4. By using the alcohol content detection module, whether the main driving user has consumed alcohol is judged; when alcohol is detected, a designated driver close to the user's position and with a high rating is selected, and the main driving user cannot drive before the designated driver arrives, so that the safety of the main driving user and other road users is guaranteed.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-3, the present invention provides the following technical solutions:
A self-adaptive man-machine interaction system based on big data comprises a video transmission module, a holographic image module, an alcohol content detection module and a sight line forming module. The video transmission module is used for monitoring all angles of the vehicle and transmitting the videos, so as to monitor the inside and outside of the vehicle and protect the safety of the user using the vehicle. The holographic image module is used for starting the holographic display screen according to the gestures of the user and judging the unit specifically pointed at by the user according to a specific characteristic vector amplified from the user's fingers, so as to ensure the driving safety of the user. The sight line forming module is used for performing frame-selection judgment on the specific range of the user's sight line while the user is driving, so that the system can know the user's selection range according to the sight line. The alcohol content detection module is used for detecting and judging the alcohol content of the main driving user when a weight is detected in the main driving seat, so that the user does not violate traffic rules by drunk driving during the driving process. The video transmission module is connected with the holographic image module, and the video transmission module is connected with the alcohol content detection module.
Preferably, the video transmission module comprises a vehicle collision prediction unit, a GPRS positioning unit, a photographing recording unit and an automatic cleaning unit. The vehicle collision prediction unit is used for performing curve fitting on the direction and distance between the user vehicle and nearby vehicles, so that the vehicle-using safety of the user is ensured and no accident is caused. The GPRS positioning unit is used for positioning the user vehicle and nearby vehicles and displaying their coordinates in a two-dimensional plane model, so that the specific position of the user can be known and driving safety is ensured. The photographing recording unit is used for photographing and obtaining evidence when the user vehicle is close to surrounding vehicles, so that disputes between vehicle owners are reduced and proof is available when necessary. The automatic cleaning unit is used for storing the photographed pictures within a set time limit and is provided with a restore function, so that the photos remain retrievable at any time without affecting system memory. The output end of the vehicle collision prediction unit is connected with the input ends of the GPRS positioning unit, the photographing recording unit and the automatic cleaning unit.
Preferably, the holographic image module comprises a gesture memory unit, a feature area selection unit, an error rate calculation unit and a feature area confirmation unit. The gesture memory unit is used for displaying different operations on the holographic display screen according to different gestures of the user, so that the user can operate the holographic display screen quickly, conveniently and safely. The feature area selection unit is used for amplifying the pointed-at range according to the effective area of the user's finger, so that the unit selected by the user is correct. The error rate calculation unit is used for calculating the error rate of the user's past clicks in that range; the error rate is judged from how often the user selected another unit within a set time after confirming a selection, so that the user can avoid repeating the same error when selecting a unit this time. The feature area confirmation unit is used for confirming the unit selected by the user according to the angle inclination and distance between the effective area of the user's finger and the holographic display screen; when the error rate for the content of an area is highest, extra attention is paid to the content specifically pointed at by the user in that area.
The user can store the gestures for page turning, sliding, zooming in and zooming out in the gesture memory unit, so that the user can operate the holographic display screen conveniently and call these gestures from the database next time.
Preferably, the sight forming module comprises a sight stopping unit, a sight acquiring unit and a converting unit. The sight stopping unit is used for calculating the time the user's sight stays on the interface, so as to judge the user's state at that moment. The sight acquiring unit is used for frame-selecting and amplifying the specific unit on which the user's sight stops, so that the system can know the specific operation range of the user. The converting unit is used for automatically converting the operation page into a voice operation mode when the user is detected to be driving, so that the user can drive safely. The output end of the sight stopping unit is connected with the input ends of the sight acquiring unit and the converting unit.
Preferably, the alcohol content detection module comprises a facial feature recognition unit, a designated-driver periphery recognition unit, an information sending unit and an intelligent electronic lock unit. The facial feature recognition unit is used for performing face recognition detection when a weight is detected in the main driving seat, and judging whether the identified main driving user has alcohol in his or her system, so as to ensure the driving safety of the main driver. The intelligent electronic lock unit is used for detecting in real time whether the main driving user has consumed alcohol; if so, the vehicle cannot be started, so that the main driving user is prohibited from driving after drinking. The designated-driver periphery recognition unit is used for positioning according to the user's location and the locations of designated drivers, and contacting the designated driver closest to the user, so as to ensure the safety of the user and others. The information sending unit is used for sending the user's position and the designated-driver information to a contact person of the user; after the contact person's permission is obtained, the designated driver can open the user's vehicle door, so that the contact person can reach the user and the designated driver at any time, and the user's information and personal safety are ensured;
the user vehicle is a user using the vehicle, and the other vehicles are vehicles closer to the user vehicle.
A self-adaptive man-machine interaction method based on big data comprises the following steps:
Q1: calculating the direction, distance and angle between the user vehicle and other nearby vehicles by using the vehicle collision prediction unit, and judging whether the curves fitted for the other vehicles affect the running safety of the user vehicle;
Q2: judging the specific unit pointed at by the user according to the angle and distance between the effective area of the user's finger and the holographic display screen, by using the feature area confirmation unit;
Q3: calculating the stay time of the user's sight by using the sight forming module, judging the user's current condition, and adopting different coping behaviors according to that condition;
Q4: judging the alcohol content of the main driving seat user by using the alcohol content detection module, and selecting a designated driver who is close to the user's position and highly rated, the main driving seat user being unable to start the vehicle before the designated driver arrives.
In the step Q1, according to the GPRS positioning unit and the two-dimensional plane model, the position coordinates of the user vehicle are detected in real time. Let the set of head position coordinates of the user vehicle be A, the set of tail position coordinates of the user vehicle be B, the set of head position coordinates of the other vehicle be C, and the set of tail position coordinates of the other vehicle be D.

According to the formula

M = min{ √((x − x′)² + (y − y′)²) : (x, y) ∈ A, (x′, y′) ∈ C },

M is the closest distance between the user's vehicle head and the other vehicle's head.

A curve Z is set according to the head and tail coordinates of the user vehicle:

Z: y = e₁x + d₁;

a curve Z′ is set according to the head and tail coordinates of the other vehicle:

Z′: y = e₂x + d₂.

When the curves Z and Z′ are detected to have an intersection point ahead of the vehicles, the user vehicle needs to adjust the position of its head; when Z and Z′ are detected to have no intersection point, the user vehicle can run normally, wherein e and d are the slope and the constant term of each fitted curve.
In the step Q2, the center coordinate of the effective area of the user's finger is P = (e, f). When the effective area of the user's finger lies at the boundary of several selectable areas on the holographic display screen, the distances between the finger's effective-area coordinate and the coordinates of those areas are calculated. Let the set of area position coordinates at the boundary of the selected area be T = {(x₁, y₁), (x₂, y₂), …, (xₙ, yₙ)}.

According to the formula

dᵢ = √((e − xᵢ)² + (f − yᵢ)²), i = 1, 2, …, n,

the area whose distance dᵢ to the effective area of the user's finger is minimum is the unit selected by the user. When the area selected by the user is wrong, the position of the area is amplified and the distances are recalculated according to the finger coordinates of the user stored in the database.
In the step Q3, a BP (back-propagation) neural network is used: the degree of the included angle formed between the user's sight line and the holographic display screen and the stay time of the sight line serve as the input layer of the BP neural network, and the current state of the user serves as the output layer, so that different coping behaviors are adopted according to the different states of the user.
Example 1: According to the GPRS positioning unit and the two-dimensional plane model, the position coordinates of the user vehicle are detected in real time. The head and tail position coordinates of the user vehicle are (120, 230) and (122, 219), and the head and tail position coordinates of the other vehicle are (130, 250) and (110, 220). The closest distance between the user's vehicle head and the other vehicle's head is therefore M = √((120 − 130)² + (230 − 250)²) = √500 ≈ 22.4.

Setting a curve according to the head and tail coordinates of the user vehicle gives the slope (219 − 230)/(122 − 120) = −5.5 and the constant term 230 − (−5.5 × 120) = 890, so the final curve is Z: y = −5.5x + 890. Setting a curve according to the head and tail coordinates of the other vehicle gives the slope (220 − 250)/(110 − 130) = 1.5 and the constant term 250 − 1.5 × 130 = 55, so the final curve is Z′: y = 1.5x + 55.

Solving −5.5x + 890 = 1.5x + 55 gives x = 835/7 ≈ 119.3, which lies near the two vehicles' current positions, so the curves Z and Z′ have an intersection point and the user vehicle needs to adjust the position of its head. When the fitted curves are detected to have no intersection point, the user vehicle can run normally, wherein the slope and constant term of each curve correspond to e and d in step Q1.
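The line-fitting arithmetic for Example 1's coordinates can be recomputed directly as follows; the variable names are ours, and only the coordinates come from the text.

```python
import math

# Coordinates from Example 1.
uh, ut = (120, 230), (122, 219)   # user vehicle head, tail
oh, ot = (130, 250), (110, 220)   # other vehicle head, tail

# Slope and constant term of each fitted line y = e*x + d.
e1 = (ut[1] - uh[1]) / (ut[0] - uh[0])   # -5.5
d1 = uh[1] - e1 * uh[0]                  # 890.0
e2 = (ot[1] - oh[1]) / (ot[0] - oh[0])   # 1.5
d2 = oh[1] - e2 * oh[0]                  # 55.0

# x-coordinate where the two lines meet, and the head-to-head distance.
x_cross = (d2 - d1) / (e1 - e2)          # 835/7, about 119.29
head_gap = math.dist(uh, oh)             # sqrt(500), about 22.36
```

Since the slopes differ, the two fitted lines are not parallel and do intersect; a deployed system would additionally check whether the intersection lies on the road segment ahead of the vehicles.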
Example 2: by utilizing the BP neural network, the degree of an included angle formed by the sight of a user and the holographic display screen and the stay time of the sight are used as an input layer of the BP neural network, the current state of the user is used as an output layer of the BP neural network, different behaviors of the user are judged according to different conditions of the user, and the specific state of the user is estimated according to the BP neural network as follows:
it is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.