CN108919804B - Intelligent vehicle unmanned system - Google Patents


Info

Publication number
CN108919804B
CN108919804B (application CN201810726257.3A)
Authority
CN
China
Prior art keywords
robot
user
emotion
emotional state
emotional
Prior art date
Legal status
Active
Application number
CN201810726257.3A
Other languages
Chinese (zh)
Other versions
CN108919804A (en)
Inventor
陈志林
Current Assignee
Tangshan Dehui Aviation Equipment Co.,Ltd.
Original Assignee
Tangshan Dehui Aviation Equipment Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Tangshan Dehui Aviation Equipment Co., Ltd.
Priority to CN201810726257.3A
Publication of CN108919804A
Application granted
Publication of CN108919804B
Legal status: Active

Links

Images

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions
    • G05D 1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0257 - Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 - Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

The invention provides an intelligent vehicle unmanned system comprising a radar arranged outside the vehicle, a human-vehicle interaction robot arranged inside the vehicle, and a vehicle control device. The radar acquires information about obstacles in front of the vehicle, the human-vehicle interaction robot handles interaction between the intelligent driving system and the user, and the vehicle control device controls the vehicle according to the obstacle information and the interaction. The invention has the beneficial effect that the system conducts human-vehicle interaction through the human-vehicle interaction robot, improving the user's driving experience.

Description

Intelligent vehicle unmanned system
Technical Field
The invention relates to the technical field of intelligent driving, in particular to an intelligent vehicle unmanned system.
Background
With social development and economic progress, various intelligent driving systems have appeared; however, the interaction between existing intelligent driving systems and users is unsatisfactory, and the user experience is poor.
With the overall development of artificial intelligence, robotics has also flourished, and robots are gradually being applied to various fields through intelligent human-computer interaction and cooperation.
Disclosure of Invention
In view of the above problems, the present invention aims to provide an intelligent vehicle unmanned system.
The object of the invention is achieved by the following technical solution:
An intelligent vehicle unmanned system comprises a radar arranged outside the vehicle, a human-vehicle interaction robot arranged in the vehicle, and a vehicle control device. The radar is used to acquire obstacle information in front of the vehicle, the human-vehicle interaction robot is used for interaction between the intelligent driving system and the user, and the vehicle control device is used to control the vehicle according to the obstacle information and the interaction.
The invention has the beneficial effect that the system conducts human-vehicle interaction through the human-vehicle interaction robot, improving the user's driving experience.
Drawings
The invention is further illustrated by the accompanying drawings. The embodiments shown in the drawings do not limit the invention in any way; for a person skilled in the art, other drawings can be derived from the following drawings without inventive effort.
FIG. 1 is a schematic structural view of the present invention;
reference numerals:
radar 1, human-vehicle interaction robot 2, vehicle control device 3.
Detailed Description
The invention is further described with reference to the following examples.
Referring to fig. 1, the intelligent vehicle unmanned system of the embodiment includes a radar 1 disposed outside a vehicle, a human-vehicle interaction robot 2 disposed in the vehicle, and a vehicle control device 3, where the radar 1 is used to acquire obstacle information in front of the vehicle, the human-vehicle interaction robot 2 is used for interaction between the intelligent driving system and a user, and the vehicle control device 3 is used to control the vehicle according to the obstacle information and interaction conditions.
This embodiment provides an intelligent vehicle unmanned system that conducts human-vehicle interaction through the human-vehicle interaction robot, improving the user's driving experience.
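To make the division of responsibilities among the three components concrete, a minimal structural sketch follows. All class and method names are hypothetical illustrations introduced here, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float   # distance to the obstacle ahead of the vehicle
    bearing_deg: float  # bearing relative to the vehicle's heading

class Radar:
    """Arranged outside the vehicle; acquires obstacle information ahead."""
    def scan(self) -> list[Obstacle]:
        ...  # hardware-specific acquisition, not specified by the patent

class HumanVehicleInteractionRobot:
    """Arranged inside the vehicle; mediates system-user interaction."""
    def interact(self) -> dict:
        ...  # returns the current interaction state (voice and emotion)

class VehicleControlDevice:
    """Controls the vehicle from obstacle information and interaction state."""
    def control(self, obstacles: list[Obstacle], interaction: dict) -> None:
        ...

class IntelligentVehicleUnmannedSystem:
    def __init__(self, radar: Radar, robot: HumanVehicleInteractionRobot,
                 controller: VehicleControlDevice):
        self.radar, self.robot, self.controller = radar, robot, controller

    def step(self) -> None:
        # One control cycle: sense obstacles, gather interaction state, act.
        self.controller.control(self.radar.scan(), self.robot.interact())
```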
Preferably, the human-vehicle interaction robot 2 comprises a first processing subsystem, a second processing subsystem and a third processing subsystem, wherein the first processing subsystem is used for acquiring external environment information, the first processing subsystem comprises a microphone and a high-definition camera, the microphone is used for acquiring voice information of a user, the high-definition camera is used for acquiring face information of the user, the second processing subsystem is used for performing voice interaction with the user according to the voice information, and the third processing subsystem is used for performing emotion interaction with the user according to the face information.
The second processing subsystem comprises an identification module, a synthesis module and a playing module, wherein the identification module is used for extracting voice information of a user and converting the voice information into recognizable binary machine language, the synthesis module is used for converting character information into voice information, and the playing module is used for playing the converted voice information.
The human-vehicle interaction robot of the preferred embodiment realizes good interaction between the robot and the user, and the second processing subsystem realizes accurate voice interaction between the robot and the user.
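The three modules of the second processing subsystem imply a recognize, compose, synthesize, play pipeline. Below is a minimal sketch of that flow, assuming hypothetical recognizer, synthesizer, and player engine objects; the patent does not name concrete implementations.

```python
class SecondProcessingSubsystem:
    """Voice interaction: recognition -> reply text -> synthesis -> playback."""

    def __init__(self, recognizer, synthesizer, player):
        # Hypothetical engine objects standing in for the recognition,
        # synthesis and playing modules described in the patent.
        self.recognizer = recognizer
        self.synthesizer = synthesizer
        self.player = player

    def handle_utterance(self, audio: bytes) -> None:
        text = self.recognizer.transcribe(audio)    # recognition module
        reply = self.compose_reply(text)            # dialogue logic (unspecified)
        speech = self.synthesizer.to_speech(reply)  # synthesis module
        self.player.play(speech)                    # playing module

    def compose_reply(self, text: str) -> str:
        # Placeholder dialogue policy; the patent leaves this open.
        return f"You said: {text}"
```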
Preferably, the third processing subsystem includes a first processing module, a second processing module, a third processing module and a fourth processing module, the first processing module is used for establishing an emotion space model, the second processing module determines emotion energy according to the emotion space model, the third processing module is used for acquiring user emotion according to face information, and the fourth processing module is used for the robot to make corresponding emotion changes according to the user emotion.
The first processing module is used to establish the emotion space model: a two-dimensional emotion space model is established, the dimensions of which are the happiness degree and the activation degree, where the happiness degree expresses how pleasant the emotion is and the activation degree expresses how strongly the emotion is aroused;
the emotional state set of the robot is expressed as: PL ═ PL1,PL2,…,PLn};
In the above formula, PLiThe method comprises the following steps of representing the ith emotional state of the robot, wherein i is 1,2, …, n is n, the number of emotional states of the robot is represented, and the emotional states of the robot are described in a point form in a two-dimensional emotional space: (a)i,bi) Wherein a isiExpressing the Happy degree of the i-th emotional state of the robot, biThe activation degree of the ith emotional state of the robot is represented;
the set of emotional states of the user is represented as: GW ═ GW1,GW2,…,GWm};
In the above formula, GWjThe i-th emotional state of the user is represented, j is 1,2, …, m represents the number of emotional states of the user, and the emotional states of the user are described in a two-dimensional emotional space in the form of points: (a)j,bj) Wherein a isjExpressing the Happy degree of the jth emotional state of the user, biThe activation degree of the j-th emotional state of the user is represented;
according to the preferred embodiment, the two-dimensional emotion space model is established, so that accurate expression of the emotional state is realized, the calculated amount is reduced, the calculation efficiency is improved, and a foundation is laid for subsequent interaction.
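As a concrete reading of this model, each emotional state can be stored as a point (happiness degree, activation degree) in the plane. The sketch below uses illustrative state names and coordinates; the patent does not fix n, m, or any particular values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmotionState:
    name: str
    happiness: float   # a: happiness degree of the state
    activation: float  # b: activation degree of the state

# Illustrative robot state set PL = {PL_1, ..., PL_n}; values are assumed.
PL = [
    EmotionState("calm",    0.3, 0.2),
    EmotionState("happy",   0.8, 0.6),
    EmotionState("excited", 0.7, 0.9),
    EmotionState("annoyed", -0.5, 0.7),
]

# Illustrative user state set GW = {GW_1, ..., GW_m}; values are assumed.
GW = [
    EmotionState("relaxed", 0.4, 0.1),
    EmotionState("tense",  -0.3, 0.8),
]
```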
Preferably, the second processing module determines the emotion energy according to the emotion space model, specifically as follows: the various sources of psychological activity are defined as psychological energy, denoted UA: UA = KW_1 + KW_2;
In the above formula, KW_1 represents the free psychological energy generated spontaneously under appropriate conditions, KW_1 = δ_1 × UA, and KW_2 represents the constrained psychological energy produced under the action of external stimuli, KW_2 = δ_2 × UA, where δ_1 indicates the degree of psychological arousal, δ_2 indicates the degree of psychological suppression, δ_1, δ_2 ∈ [0, 1], and δ_1 + δ_2 = 1;
The psychological energy of an emotion is determined according to the emotion space model: UA = y × D × (a + b);
In the formula, D represents the emotional intensity, y represents the emotional coefficient, and a and b represent the happiness degree and activation degree of the emotional state respectively;
The emotion energy is determined using the following equation: KW_q = KW_1 + μKW_2 = (1 − δ_2 + μδ_2) × y × D × (a + b);
In the above formula, KW_q represents the emotion energy and μ represents the psychological emotion excitation parameter, μ ∈ [0, 1];
The second processing module of the preferred embodiment defines the psychological energy and the emotional energy, which is beneficial to improving the interaction performance of the robot and lays a foundation for subsequent interaction.
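Reading the formulas above literally, with δ_1 + δ_2 = 1 the emotion energy collapses to KW_q = (1 − δ_2 + μδ_2) × y × D × (a + b). A small sketch of that computation follows; all numeric values are illustrative assumptions, not figures from the patent.

```python
def psychological_energy(y: float, D: float, a: float, b: float) -> float:
    # UA = y * D * (a + b): psychological energy from the emotion space model.
    return y * D * (a + b)

def emotion_energy(y: float, D: float, a: float, b: float,
                   delta2: float, mu: float) -> float:
    # KW_q = KW_1 + mu*KW_2 = (1 - delta2 + mu*delta2) * y * D * (a + b),
    # using KW_1 = delta1*UA, KW_2 = delta2*UA and delta1 + delta2 = 1.
    assert 0.0 <= delta2 <= 1.0 and 0.0 <= mu <= 1.0
    return (1.0 - delta2 + mu * delta2) * psychological_energy(y, D, a, b)

# Illustrative call: y=1.0, D=0.5, state (a, b)=(0.8, 0.6), delta2=0.4, mu=0.25.
print(emotion_energy(1.0, 0.5, 0.8, 0.6, delta2=0.4, mu=0.25))  # ~0.49
```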
Preferably, the fourth processing module is used to make the robot produce corresponding emotional changes according to the user's emotion, specifically as follows: when the current emotional state of the robot is the same as the user's emotional state, the emotional state of the robot does not change, but the emotion energy of the robot is doubled;
when the current emotional state of the robot differs from the user's emotional state, the next emotional state of the robot changes; the next emotional state is related both to the robot's current emotional state and to the user's emotional state. Let the current emotional state of the robot be PL_i(a_i, b_i), i ∈ {1, 2, …, n}, the user's emotional state be GW_j(a_j, b_j), j ∈ {1, 2, …, m}, and any possible emotional state of the robot at the next moment be PL_k(a_k, b_k), k ∈ {1, 2, …, n}, i ≠ j ≠ k;
the feature vector for the transfer from the current emotional state to the user's emotional state is PA_1: PA_1 = (a_j − a_i, b_j − b_i); the feature vector for the transfer from the current emotional state to any possible emotional state is PA_2: PA_2 = (a_k − a_i, b_k − b_i); and the feature vector for the transfer from the user's emotional state to any possible emotional state is PA_3: PA_3 = (a_k − a_j, b_k − b_j). The emotion transfer function TZ is determined using the following equation:
[Equation for TZ: present only as an image (BDA0001719856520000031) in the original publication; not recoverable from the text.]
the emotion transfer function is minimized to obtain the emotional state PL_z(a_z, b_z) at which it takes its minimum value, where z is one of the candidate indices k, and this emotional state is taken as the robot's state at the next moment.
The fourth processing module of the preferred embodiment adopts a mathematical method that lets the robot simulate human emotion generation and change and conform to human emotional patterns, meeting the emotional needs of the user during driving. When the current emotional state of the robot is the same as the user's emotional state, the robot's emotion energy is increased; when they differ, the robot's next emotional state changes. The emotion transfer function associates the robot's current emotional state with the user's emotional state, so the next emotional state of the robot can be determined and the interaction capability of the robot improved. A sketch of this selection procedure is given below.
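The sketch builds on the EmotionState points introduced earlier. Since the exact expression for TZ survives only as an image in the publication, the cost used here (the length of PA_2 plus the length of PA_3) is a stand-in chosen purely to illustrate the minimization structure; the patent's actual TZ should be substituted where marked.

```python
import math

def transfer_cost(current: EmotionState, user: EmotionState,
                  candidate: EmotionState) -> float:
    # Feature vectors from the description:
    # PA_2: current -> candidate, PA_3: user -> candidate.
    pa2 = (candidate.happiness - current.happiness,
           candidate.activation - current.activation)
    pa3 = (candidate.happiness - user.happiness,
           candidate.activation - user.activation)
    # STAND-IN for TZ (the real formula is only available as an image):
    # penalize large moves and moves that land far from the user's state.
    return math.hypot(*pa2) + math.hypot(*pa3)

def next_state(current: EmotionState, user: EmotionState,
               states: list[EmotionState]) -> EmotionState:
    if (current.happiness, current.activation) == (user.happiness, user.activation):
        return current  # same state: emotion unchanged (energy doubling handled elsewhere)
    # Candidates PL_k with i != j != k: exclude the current and user states.
    candidates = [s for s in states if s != current and
                  (s.happiness, s.activation) != (user.happiness, user.activation)]
    return min(candidates, key=lambda s: transfer_cost(current, user, s))
```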
The intelligent vehicle unmanned system was used for driving. Five users (user 1 through user 5) were selected for experiments, and driving safety and user satisfaction were recorded. Compared with the existing intelligent driving system, the results were as follows:
User      Driving safety improvement    User satisfaction enhancement
User 1    29%                           27%
User 2    27%                           26%
User 3    26%                           26%
User 4    25%                           24%
User 5    24%                           22%
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit its protection scope. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the invention without departing from their spirit and scope.

Claims (3)

1. An intelligent vehicle unmanned system, characterized by comprising a radar arranged outside the vehicle, and a human-vehicle interaction robot and a vehicle control device arranged in the vehicle;
the human-vehicle interaction robot comprises a first processing subsystem, a second processing subsystem and a third processing subsystem, wherein the first processing subsystem is used for acquiring external environment information, the first processing subsystem comprises a microphone and a high-definition camera, the microphone is used for acquiring voice information of a user, the high-definition camera is used for acquiring face information of the user, the second processing subsystem is used for carrying out voice interaction with the user according to the voice information, and the third processing subsystem is used for carrying out emotion interaction with the user according to the face information;
the second processing subsystem comprises a recognition module, a synthesis module and a playing module, wherein the recognition module is used for extracting voice information of a user and converting the voice information into recognizable binary machine language, the synthesis module is used for converting character information into voice information, and the playing module is used for playing the converted voice information;
the third processing subsystem comprises a first processing module, a second processing module, a third processing module and a fourth processing module, wherein the first processing module is used for establishing an emotion space model, the second processing module determines emotion energy according to the emotion space model, the third processing module is used for acquiring user emotion according to face information, and the fourth processing module is used for the robot to make corresponding emotion changes according to the user emotion;
the first processing module is used to establish the emotion space model: a two-dimensional emotion space model is established, the dimensions of which are the happiness degree and the activation degree, where the happiness degree expresses how pleasant the emotion is and the activation degree expresses how strongly the emotion is aroused;
the emotional state set of the robot is expressed as: PL = {PL_1, PL_2, …, PL_n};
in the above formula, PL_i denotes the i-th emotional state of the robot, i = 1, 2, …, n, where n is the number of emotional states of the robot; each emotional state of the robot is described as a point in the two-dimensional emotion space: (a_i, b_i), where a_i is the happiness degree of the i-th emotional state of the robot and b_i its activation degree;
the emotional state set of the user is expressed as: GW = {GW_1, GW_2, …, GW_m};
in the above formula, GW_j denotes the j-th emotional state of the user, j = 1, 2, …, m, where m is the number of emotional states of the user; each emotional state of the user is described as a point in the two-dimensional emotion space: (a_j, b_j), where a_j is the happiness degree of the j-th emotional state of the user and b_j its activation degree.
2. The intelligent vehicle unmanned system of claim 1, wherein the second processing module determines the emotion energy according to the emotion space model, specifically: the various sources of psychological activity are defined as psychological energy, denoted UA: UA = KW_1 + KW_2;
In the above formula, KW_1 represents the free psychological energy generated spontaneously under appropriate conditions, KW_1 = δ_1 × UA, and KW_2 represents the constrained psychological energy produced under the action of external stimuli, KW_2 = δ_2 × UA, where δ_1 indicates the degree of psychological arousal, δ_2 indicates the degree of psychological suppression, δ_1, δ_2 ∈ [0, 1], and δ_1 + δ_2 = 1;
The psychological energy of an emotion is determined according to the emotion space model: UA = y × D × (a + b);
In the formula, D represents the emotional intensity, y represents the emotional coefficient, and a and b represent the happiness degree and activation degree of the emotional state respectively;
The emotion energy is determined using the following equation:
KW_q = KW_1 + μKW_2 = (1 − δ_2 + μδ_2) × y × D × (a + b)
In the above formula, KW_q represents the emotion energy and μ represents the psychological emotion excitation parameter, μ ∈ [0, 1].
3. The intelligent vehicle unmanned system of claim 1, wherein the fourth processing module is configured to make the robot produce corresponding emotional changes according to the user's emotion, specifically: when the current emotional state of the robot is the same as the user's emotional state, the emotional state of the robot does not change, but the emotion energy of the robot is doubled;
when the current emotional state of the robot differs from the user's emotional state, the next emotional state of the robot changes; the next emotional state is related both to the robot's current emotional state and to the user's emotional state. Let the current emotional state of the robot be PL_i(a_i, b_i), i ∈ {1, 2, …, n}, the user's emotional state be GW_j(a_j, b_j), j ∈ {1, 2, …, m}, and any possible emotional state of the robot at the next moment be PL_k(a_k, b_k), k ∈ {1, 2, …, n}, i ≠ j ≠ k;
the feature vector for the transfer from the current emotional state to the user's emotional state is PA_1: PA_1 = (a_j − a_i, b_j − b_i); the feature vector for the transfer from the current emotional state to any possible emotional state is PA_2: PA_2 = (a_k − a_i, b_k − b_i); and the feature vector for the transfer from the user's emotional state to any possible emotional state is PA_3: PA_3 = (a_k − a_j, b_k − b_j); the emotion transfer function TZ is determined using the following equation:
[Equation for TZ: present only as an image (FDA0003062668760000031) in the original publication; not recoverable from the text.]
the emotion transfer function is minimized to obtain the emotional state PL_z(a_z, b_z) at which it takes its minimum value, where z is one of the candidate indices k, and this emotional state is taken as the robot's state at the next moment.
Application CN201810726257.3A, filed 2018-07-04 (priority date 2018-07-04): Intelligent vehicle unmanned system; granted as CN108919804B (Active).

Priority Applications (1)

Application Number    Priority Date    Filing Date    Title
CN201810726257.3A     2018-07-04       2018-07-04     Intelligent vehicle unmanned system

Applications Claiming Priority (1)

Application Number    Priority Date    Filing Date    Title
CN201810726257.3A     2018-07-04       2018-07-04     Intelligent vehicle unmanned system

Publications (2)

Publication Number    Publication Date
CN108919804A          2018-11-30
CN108919804B          2022-02-25

Family

Family ID: 64425077

Family Applications (1)

Application Number    Priority Date    Filing Date    Title                                  Status
CN201810726257.3A     2018-07-04       2018-07-04     Intelligent vehicle unmanned system    Active

Country Status (1)

Country Link
CN (1) CN108919804B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111538335A (en) * 2020-05-15 2020-08-14 深圳国信泰富科技有限公司 Anti-collision method of driving robot
CN113433874B (en) * 2021-07-21 2023-03-31 广东工业大学 Unmanned ship integrated control management system based on 5G

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060080317A (en) * 2005-01-05 2006-07-10 현대자동차주식회사 An emotion-based software robot for automobile
CN101571930A (en) * 2008-04-30 2009-11-04 悠进机器人股份公司 Robot capable of interacting with human
CN103324100B (en) * 2013-05-02 2016-08-31 郭海锋 A kind of emotion on-vehicle machines people of information-driven
CN104199321A (en) * 2014-08-07 2014-12-10 刘松珍 Emotion interacting type vehicle-mounted robot
CN108009573B (en) * 2017-11-24 2020-08-14 北京物灵智能科技有限公司 Robot emotion model generation method, emotion model and interaction method

Also Published As

Publication Number    Publication Date
CN108919804A          2018-11-30

Similar Documents

Publication Publication Date Title
CN108010514B (en) Voice classification method based on deep neural network
Hans et al. A CNN-LSTM based deep neural networks for facial emotion detection in videos
CN108919804B (en) Intelligent vehicle unmanned system
CN108009573A (en) A kind of robot emotion model generating method, mood model and exchange method
CN112101219B (en) Intention understanding method and system for elderly accompanying robot
Lee The generalization effect for multilingual speech emotion recognition across heterogeneous languages
CN105975932A (en) Gait recognition and classification method based on time sequence shapelet
Pandey et al. Emotion recognition from raw speech using wavenet
Song et al. Dynamic facial models for video-based dimensional affect estimation
CN106897706B (en) A kind of Emotion identification device
SatyanarayanaMurty et al. Facial expression recognition based on features derived from the distinct LBP and GLCM
CN106326873B (en) The manipulation Intention Anticipation method of CACC driver's limbs electromyography signal characterization
CN114611527B (en) Task-oriented dialogue strategy learning method for user personality perception
Liu et al. A novel facial expression recognition method based on extreme learning machine
CN109948569B (en) Three-dimensional mixed expression recognition method using particle filter framework
CN109961152B (en) Personalized interaction method and system of virtual idol, terminal equipment and storage medium
Rasoulzadeh Facial expression recognition using fuzzy inference system
Botzheim et al. Gestural and facial communication with smart phone based robot partner using emotional model
Amit et al. Recognition of real-time hand gestures using mediapipe holistic model and LSTM with MLP architecture
CN106489114A (en) A kind of generation method of robot interactive content, system and robot
CN116795971A (en) Man-machine dialogue scene construction system based on generated language model
Wang et al. Human posture recognition based on convolutional neural network
CN108733962B (en) Method and system for establishing anthropomorphic driver control model of unmanned vehicle
Garcíia Bueno et al. Facial gesture recognition using active appearance models based on neural evolution
Li et al. Multimodal information-based broad and deep learning model for emotion understanding

Legal Events

Code    Title / Description
PB01    Publication
SE01    Entry into force of request for substantive examination
TA01    Transfer of patent application right (effective date of registration: 2022-01-30)
        Address after: 063100 No. 7, Hongbei Road, economic development zone, Guye District, Tangshan City, Hebei Province
        Applicant after: Tangshan Dehui Aviation Equipment Co.,Ltd.
        Address before: Room 9213-9215, building 9, No. 200, Yuangang Road, Tianhe District, Guangzhou, Guangdong 510000
        Applicant before: GUANGDONG ZHUJIANQIANG INTERNET TECHNOLOGY Co.,Ltd.
GR01    Patent grant