WO2017219319A1 - Vehicle automatic driving method and vehicle automatic driving *** - Google Patents

Vehicle automatic driving method and vehicle automatic driving ***

Info

Publication number
WO2017219319A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
person
automatic driving
mode
car
Prior art date
Application number
PCT/CN2016/086914
Other languages
English (en)
French (fr)
Inventor
张丹
彭进展
姜岩
周小成
罗赛
周鑫
Original Assignee
驭势科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 驭势科技(北京)有限公司 filed Critical 驭势科技(北京)有限公司
Priority to CN201680001428.5A priority Critical patent/CN107223101A/zh
Priority to PCT/CN2016/086914 priority patent/WO2017219319A1/zh
Publication of WO2017219319A1 publication Critical patent/WO2017219319A1/zh

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0098Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions

Definitions

  • the present invention generally relates to a vehicle automatic driving technique, and more particularly to an in-vehicle sensing based vehicle automatic driving method and a vehicle automatic driving system.
  • Planning mainly simulates the various possible automatic driving schemes for the next time period based on the results of environment perception, including speed adjustment, steering adjustment, and so on, and then passes the best scheme on for execution by the vehicle.
  • Environment perception mainly covers the perception and cognition of the external physical road, as well as of moving targets around the vehicle.
  • In existing systems, the sensing results of the external environment and the vehicle's own speed sensors are mainly used for motion planning and control; perception of the interior environment is limited, and existing perception of the vehicle interior is generally restricted to the driver's state, such as fatigue driving.
  • a method for automatically driving a vehicle includes: sensing the persons inside the vehicle and obtaining their profile information; classifying the in-vehicle persons; and, according to the classification, determining the corresponding automatic driving mode and adaptively controlling the vehicle.
  • the vehicle automatic driving method may further include: continuously receiving feedback of the in-vehicle personnel during driving; and adaptively adjusting parameters of the automatic driving according to feedback of the in-vehicle personnel.
  • the profile information of the in-vehicle persons may include the driving mode preferred by each person in the vehicle.
  • each person's preferred driving mode may be represented by a corresponding driving parameter.
  • classifying the in-vehicle persons may include: determining whether each in-vehicle person is a patient; determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and determining whether each person has set a preferred driving mode.
  • the automatic driving mode may include: a comfort mode, a normal mode, and a sport mode.
  • determining the corresponding automatic driving mode according to the classification may include: determining whether there is a patient among the in-vehicle persons, and if there is a patient, determining that the automatic driving mode is the comfort mode; if there is no patient, determining whether each person in the vehicle has a preferred driving mode, and if every person has one, using the most comfortable of those preferred driving modes; and if not everyone in the car has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those who do not have one, and if so, using the comfort mode.
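The decision logic above can be sketched as follows. This is only a minimal illustration: the person-record fields, the comfort ordering of the modes, and the default fall-through to the normal mode are assumptions, not stated in the patent.

```python
# Sketch of the mode-selection logic described above.
# Person fields and the final default branch are illustrative assumptions.

COMFORT, NORMAL, SPORT = "comfort", "normal", "sport"
# Comfort decreases from comfort mode to sport mode.
COMFORT_RANK = {COMFORT: 0, NORMAL: 1, SPORT: 2}

def select_mode(persons):
    """persons: list of dicts with keys 'is_patient',
    'is_elderly_child_or_disabled', 'preferred_mode' (mode name or None)."""
    if any(p["is_patient"] for p in persons):
        return COMFORT
    if all(p["preferred_mode"] for p in persons):
        # Everyone has a preference: use the most comfortable one.
        return min((p["preferred_mode"] for p in persons),
                   key=lambda m: COMFORT_RANK[m])
    no_pref = [p for p in persons if not p["preferred_mode"]]
    if any(p["is_elderly_child_or_disabled"] for p in no_pref):
        return COMFORT
    return NORMAL  # assumed default; the text leaves this branch open
```

For example, a lone patient forces the comfort mode regardless of anyone's stated preference.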
  • sensing the in-vehicle persons may include using at least one of the following: a camera, a pressure sensor, a microphone, a fingerprint reader, and an infrared sensor.
  • obtaining the profile information of the in-vehicle persons may include identifying their identity, which in turn includes: identifying the identity of each person in the vehicle, based on the identity information database, by one or a combination of the following identification technologies: face recognition, voiceprint recognition, fingerprint recognition, iris recognition, and infrared recognition; and, based on the identified identity, retrieving the person's profile information from the profile information database, in which the profile information is stored in association with the person's identity.
  • profile information entered by the in-vehicle persons can be received and stored in the database.
  • the profile information may include one or more of age, gender, mood, health status, and preferred driving mode.
  • different modes may correspond to associated planning and control parameters, which may be selected from one or more of the following: normal acceleration, emergency acceleration, normal deceleration, emergency deceleration, the maximum ratio of front-wheel deflection angle to vehicle speed, safe following distance, minimum lane-change frequency, and safe lane-change distance.
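The mode-to-parameter association might be represented as a simple table. All numeric values below are made-up placeholders for the sketch; the patent does not give concrete numbers.

```python
# Illustrative mapping from driving mode to planning/control parameters.
# Every numeric value here is an invented placeholder, not from the patent.
MODE_PARAMS = {
    "comfort": dict(normal_accel=1.0, emergency_accel=2.0,
                    normal_decel=1.0, emergency_decel=3.0,
                    max_steer_speed_ratio=0.05, safe_distance_m=40.0,
                    min_lane_change_interval_s=60, lane_change_gap_m=50.0),
    "normal":  dict(normal_accel=1.5, emergency_accel=3.0,
                    normal_decel=1.5, emergency_decel=4.5,
                    max_steer_speed_ratio=0.08, safe_distance_m=30.0,
                    min_lane_change_interval_s=30, lane_change_gap_m=40.0),
    "sport":   dict(normal_accel=2.5, emergency_accel=4.0,
                    normal_decel=2.5, emergency_decel=6.0,
                    max_steer_speed_ratio=0.12, safe_distance_m=20.0,
                    min_lane_change_interval_s=10, lane_change_gap_m=30.0),
}

def params_for(mode):
    """Return the planning/control parameter set for a driving mode."""
    return MODE_PARAMS[mode]
```

The design point is simply that the planner reads one consistent parameter set per mode, so switching mode is a single table lookup rather than many independent tweaks.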
  • adaptively adjusting the parameters of the automatic driving according to feedback from the in-vehicle persons may include: in response to a feedback command from an in-vehicle person on the current driving state, adjusting using the preset parameters and adjustment amounts corresponding to that feedback, where the feedback command indicates a definite adjustment to the driving state.
  • the current parameters may then be recorded as the user's preferred driving mode parameters.
  • adaptively adjusting the parameters of the automatic driving according to feedback from the in-vehicle persons may include: adjusting with a reinforcement learning method in response to a feedback command from the in-vehicle persons on the current driving state.
  • one or more of the identification technologies may be run in a local in-vehicle computer system, in a cloud server, or in a manner in which the local in-vehicle computer system and the cloud server cooperate with each other.
  • one or both of the identity information database and the profile information database of the in-vehicle persons may be stored in cloud storage.
  • the vehicle automatic driving method may further include: sensing whether a specific animal and/or a specific item is present in the vehicle; and determining the corresponding automatic driving mode based on sensing that a specific animal and/or a specific item is present in the vehicle.
  • a vehicle automatic driving system may include: a sensor for sensing the persons inside the vehicle; a profile information obtaining unit for obtaining the profile information of the in-vehicle persons; a classification unit for classifying the in-vehicle persons; and an adaptive control unit that determines the corresponding automatic driving mode according to the classification and adaptively controls the vehicle.
  • the vehicle automatic driving system may further include: a feedback receiving unit that continuously receives feedback of the in-vehicle personnel during running; and the adaptive control unit adaptively adjusts parameters of the automatic driving according to feedback of the in-vehicle personnel.
  • the profile information of the in-vehicle persons may include the preferred driving mode of each person in the vehicle.
  • classifying the in-vehicle persons by the classification unit may include: determining whether each in-vehicle person is a patient; determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and determining whether each person has set a preferred driving mode.
  • the automatic driving mode may include: a comfort mode, a normal mode, and a sport mode.
  • determining the corresponding automatic driving mode according to the classification includes: determining whether there is a patient among the in-vehicle persons, and if there is a patient, determining that the automatic driving mode is the comfort mode; if there is no patient, determining whether each person in the vehicle has a preferred driving mode, and if every person has one, using the most comfortable of those preferred driving modes; and if not everyone in the car has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those who do not have one, and if so, using the comfort mode.
  • adaptively adjusting the parameters of the automatic driving based on feedback from the in-vehicle persons may include adjusting with a reinforcement learning method in response to a feedback command from the in-vehicle persons on the current driving state.
  • the sensor may also sense whether a specific animal and/or a specific item is present within the vehicle; and the adaptive control unit determines the corresponding automatic driving mode based on sensing that a specific animal and/or a specific item is present within the vehicle.
  • a vehicle automatic driving method may include: sensing the persons in a vehicle and obtaining their profile information; and adaptively controlling the vehicle based on the obtained profile information of the in-vehicle persons.
  • the vehicle automatic driving system and method of the embodiments of the present invention pay attention to perception of the vehicle interior, determine the corresponding automatic driving mode, continuously sense the in-vehicle situation, and adaptively control the vehicle, thereby providing safer and more efficient automatic driving methods and systems.
  • FIG. 1 shows a block diagram showing the structure of a vehicle automatic driving system 100 according to an embodiment of the present invention.
  • FIG. 2 shows a functional block diagram of the profile information obtaining unit 120 in accordance with one embodiment of the present invention.
  • FIG. 3 is a block diagram showing the structure of a vehicle automatic driving system 200 according to another embodiment of the present invention.
  • FIG. 4 shows a schematic structural view of a vehicle automatic driving system 300 according to another embodiment of the present invention.
  • FIG. 5 illustrates a general flow diagram of a vehicle automatic driving method 400 in accordance with one embodiment of the present invention.
  • FIG. 6 depicts a more detailed process of an exemplary vehicle automatic driving method 500 in accordance with one embodiment of the present invention.
  • FIG. 7 shows a flow diagram of an exemplary adaptive control method 600 in accordance with an embodiment of the present invention.
  • FIG. 8 illustrates a flow diagram of an exemplary adaptive control method 700 using a reinforcement learning method in accordance with another embodiment of the present invention.
  • The profile information, that is, the information about a person in the car, can be registered by that person; it preferably includes the person's preferred driving mode, and may also include name, age, gender, health condition, mood, and the like.
  • Adaptive control and/or adjustment refers to automatically adjusting the parameters of the automatic driving according to the feedback of the personnel in the vehicle until the requirements of the personnel in the vehicle are met.
  • The invention pays attention to the perception of the persons in the vehicle, classifies and analyzes them, determines the corresponding automatic driving mode, continuously senses the in-vehicle situation, and adaptively controls the vehicle, thereby providing safer and more efficient automatic driving methods and systems.
  • FIG. 1 shows a block diagram showing the structure of a vehicle automatic driving system 100 according to an embodiment of the present invention.
  • the vehicle automatic driving system 100 includes a sensor 110, a profile information obtaining unit 120, a classification unit 130, and an adaptive control unit 140.
  • the sensor 110 senses the in-vehicle persons.
  • the profile information obtaining unit 120 is configured to obtain the profile information of the in-vehicle persons.
  • the classification unit 130 classifies the in-vehicle persons.
  • the adaptive control unit 140 determines the corresponding automatic driving mode according to the classification and adaptively controls the vehicle.
  • the profile information obtaining unit 120, the classification unit 130, and the adaptive control unit 140 here may be implemented in software, in hardware, or in a combination of both on an onboard computer, or may be implemented by the onboard computer in conjunction with cloud storage and/or application services.
  • the in-vehicle computer system includes components such as a CPU, memory, and permanent storage media, as well as optional computational acceleration components such as FPGAs, ASICs, DSPs, and GPUs.
  • the in-vehicle computer system can be a stand-alone system or a distributed system. In the case of a distributed system, some kind of network connection (such as Ethernet) is used between the multiple computing nodes. The nodes of a distributed system can be homogeneous or heterogeneous, for example using different architectures and operating systems.
  • the in-vehicle computer system can operate independently or be connected to the cloud 20.
  • Cloud-based systems can store a large amount of driver and passenger information uploaded by individual vehicles and can provide identification services such as face recognition and voiceprint recognition.
  • the sensor here is a generalized concept; it does not refer to one specific sensor device but may be a collection of sensor devices.
  • Sensing the in-vehicle persons includes sensing their image, sound, mass (weight), and so on.
  • sensing the persons in the vehicle can use at least one of the following devices: a camera, a pressure sensor, a microphone, a fingerprint reader, and an infrared sensor.
  • each camera can be a monocular, binocular, or depth camera.
  • For the mounting method, for example, one of the following two can be considered.
  • Installation method 1: install a camera facing the face in front of each seat, so that the front of the face can be clearly captured.
  • Installation method 2: choose another camera arrangement, such as using one camera to cover multiple seats, provided that the front of every driver's and passenger's face can be captured.
  • A pressure (mass) sensor can be mounted on each seat.
  • The number of passengers in the car can thus be provided in real time, and the mass information can be used to infer rough age information, for example whether a child is among the passengers.
  • The in-car microphone can capture the passengers' voice commands and feedback, which are understood through natural language processing, and can identify different persons according to their voiceprints.
  • Microphones can be installed, for example, one in the middle of the car, or one in the middle of the front row and one in the middle of the rear row.
  • These sensor devices are merely examples; sensor devices may be added or removed as desired, and any existing or future sensor device may be used in the present invention.
  • the profile information here may include, for example and without limitation, identity, age, gender, facial features, voiceprint features, health status, emotional state, and the like.
  • the profile information of the in-vehicle persons includes their preferred driving mode.
  • the driving mode is divided into a comfort mode, a normal mode, and a sport mode, in decreasing order of comfort.
  • Different driving modes correspond to different planning and control parameters, such as: normal acceleration, emergency acceleration, normal deceleration, emergency deceleration, the maximum ratio of front-wheel deflection angle to vehicle speed, safe following distance, minimum lane-change frequency, safe lane-change distance, and so on.
  • Regarding the comfort mode: elderly persons generally want the vehicle to drive as smoothly as possible, so lower acceleration and a lower lane-change frequency are desirable, while young people have no such strict requirement on smoothness. Therefore, preferably, a preferred automatic driving mode can be customized for each person.
  • FIG. 2 shows a functional block diagram of the profile information obtaining unit 120 in accordance with one embodiment of the present invention.
  • the profile information obtaining unit 120 may include an identity recognition module 121, a profile information collection module 124, a module 123 for detecting other attributes and the status of the in-vehicle persons, and a profile information integration module 122.
  • the profile information can be obtained as follows: first, the identity recognition module 121 identifies the identity of each person in the vehicle, for example based on the identity information database and on the facial features, voiceprint features, iris features, infrared features, etc. sensed by the in-vehicle sensors, using one or a combination of the following identification technologies: face recognition, voiceprint recognition, fingerprint recognition, iris recognition, and infrared recognition; then, based on the identified identity, the person's profile information is retrieved from the profile information database, in which the profile information is stored in association with the person's identity.
  • The profile information database and the identity information database here may be separate or combined. The identity information is, for example, the ID of a person in the vehicle (mobile phone number, ID card information) together with features that can uniquely identify the person, such as facial features, voiceprint features, fingerprint features, iris features, infrared features, and vascular features. Based on the sensory information (images, sounds, fingerprints, etc.) obtained through the sensors and on the identity information database, identification technology is used to identify the identity of the persons in the car.
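The two-step flow (identify, then retrieve the associated profile) could be sketched like this. The matcher is a stub, since the actual recognition technology is left open in the text, and the database records are plain dicts invented for the illustration.

```python
# Sketch: identify a person from sensed features, then fetch the profile
# stored in association with that identity. Both databases are plain
# dicts here; identifiers and feature signatures are made up.

identity_db = {"person_001": {"face": "face-sig-a", "voiceprint": "vp-a"}}
profile_db = {"person_001": {"age": 70, "preferred_mode": "comfort"}}

def identify(sensed_features):
    """Stub matcher: in practice face/voiceprint/iris/fingerprint/infrared
    recognition would run here instead of exact feature comparison."""
    for person_id, feats in identity_db.items():
        if any(sensed_features.get(k) == v for k, v in feats.items()):
            return person_id
    return None

def get_profile(sensed_features):
    """Return the stored profile for a recognized person, else None."""
    person_id = identify(sensed_features)
    if person_id is None:
        return None  # unknown person: collect a new profile instead
    return profile_db[person_id]
```

An unrecognized person falls through to the enrollment path described below (the profile information collection module).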
  • profile information such as name, age, gender, health status, and preferred driving mode is stored in association with the identity of the in-vehicle person.
  • For a new person, the profile information collection module 124 collects his or her profile information: basic data such as name, age, gender, and preferred driving mode can be entered by, for example, voice interaction; the in-car camera can take multiple photos, and voiceprints, fingerprints, etc. can be recorded; and this information is stored as identity information records and/or profile information records in the identity information database and/or the profile information database.
  • Current information about the in-vehicle persons can also be obtained in real time by the other-attributes-and-status detection module 123, for example by performing image processing on images taken by the camera to measure the blood pressure, heartbeat, etc. of the persons in the car, or by interacting with them by voice to obtain their current emotional state and current health status.
  • the profile information integration module 122 integrates the information from the profile information database, the information collected by the profile information collection module 124, and the information from the other-attributes-and-status detection module 123 to obtain the profile information of each in-vehicle person for use by the subsequent classification unit and adaptive control unit.
  • the identity recognition module 121, the profile information database, and the identity information database can run and be stored on the onboard computer and/or in the cloud.
  • the classification unit 130 classifies the persons in the vehicle, for example by determining, based on the profile information obtained by the profile information obtaining unit 120, the category to which each in-vehicle person belongs. For example, classifying the in-vehicle persons includes: determining whether each in-vehicle person is a patient; determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and determining whether each person has set a preferred driving mode.
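The classification step can be sketched as tagging each person's profile with the three category questions named above. The profile field names and the age thresholds are illustrative assumptions; the patent does not define them.

```python
# Sketch of the classification unit: map one person's profile to the
# three category flags named in the text. Field names and the age
# cut-offs (65 and 12) are assumptions for the illustration.

def classify(profile):
    age = profile.get("age", 30)  # assume adult when age is unknown
    return {
        "is_patient": profile.get("health_status") == "ill",
        "is_elderly_child_or_disabled": (
            age >= 65 or age <= 12 or profile.get("disabled", False)
        ),
        "has_preferred_mode": profile.get("preferred_mode") is not None,
    }
```

The resulting flags are exactly the inputs the adaptive control unit needs for its mode decision.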
  • the adaptive control unit 140 determines the corresponding automatic driving mode according to the classification and adaptively controls the vehicle.
  • the adaptive control unit 140 determining the corresponding automatic driving mode according to the classification includes, for example: determining whether each person in the car has a preferred driving mode, and if each person in the car has one, using the driving mode with the highest comfort among those preferred driving modes.
  • FIG. 3 is a block diagram showing the structure of a vehicle automatic driving system 200 according to another embodiment of the present invention.
  • the vehicle automatic driving system 200 further includes a feedback receiving unit 250.
  • the structure and working process of the sensor 210, the profile information obtaining unit 220, and the classification unit 230 in FIG. 3 may be similar to the corresponding parts shown in FIG. 1 and are not described again here.
  • the feedback receiving unit 250 continuously receives the feedback of the in-vehicle personnel during the running, and the adaptive control unit 240 adaptively adjusts the parameters of the automatic driving according to the feedback of the in-vehicle personnel.
  • Feedback from the people in the car can be received by voice: a speech recognition module recognizes feedback such as "drive faster", "drive slower", or "don't drive so sluggishly".
  • Other means include providing predetermined physical control buttons on the vehicle for pressing; giving feedback through a vehicle driving control application on an in-vehicle person's mobile phone; or using gestures to give feedback, which is particularly suitable for deaf persons.
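A minimal keyword-based interpretation of such voice feedback, standing in for the full natural-language processing, might look like the following. The phrase list paraphrases the examples above and the intent encoding is an assumption.

```python
# Toy mapping from recognized feedback utterances to adjustment intents.
# A real system would use NLP; these phrases paraphrase the examples in
# the text, and the (parameter, direction) encoding is an assumption.

FEEDBACK_INTENTS = {
    "drive faster": ("speed", +1),
    "drive slower": ("speed", -1),
    "don't drive so sluggishly": ("speed", +1),
}

def parse_feedback(utterance):
    """Return an (intent, direction) pair, or None if not understood."""
    return FEEDBACK_INTENTS.get(utterance.strip().lower())
```

Unrecognized utterances return None, so the controller can simply ignore them or ask the passenger to repeat.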
  • Adaptively adjusting the parameters of the automatic driving according to feedback from the persons in the vehicle may be done in two ways: one is to preset, based on prior knowledge, the parameters and adjustment amounts that each command should adjust; the other is to use reinforcement learning to learn these parameter values over a large number of drives.
  • FIG. 4 shows a schematic structural view of a vehicle automatic driving system 300 according to another embodiment of the present invention.
  • the vehicle automatic driving system 300 of FIG. 4 further includes a driving mode command receiving module 360.
  • the other modules 310, 320, 330, and 350 may be the same as the corresponding modules shown in FIG. 3, and details are not described herein again.
  • the driving mode command receiving module 360 is configured to receive a vehicle driving mode explicitly specified by a person in the vehicle; the system provides a number of predefined driving modes, such as a normal mode, a comfort mode, and a sport mode.
  • the manner of receiving the driving mode command of the person in the vehicle may be a voice mode, and the driving mode command is recognized by the voice recognition module, for example, “comfort mode”, “sport mode”, etc.;
  • Predetermined physical control buttons can be provided in the car for pressing, for example two buttons, one for "better" and one for "worse"; of course, more buttons or other forms of feedback hardware can be designed. Gestures can also be used to give feedback, which is especially useful for deaf persons, or feedback can be given through a vehicle driving control application on the phone of a person in the car.
  • the sensor also senses whether a specific animal and/or a specific item is present in the vehicle, and the adaptive control unit determines the corresponding automatic driving mode based on sensing that a specific animal and/or a specific item is present within the vehicle. For example, it detects whether there are fragile items in the car, such as porcelain or glassware, or small animals such as kittens or puppies, and adjusts the automatic driving mode accordingly; for instance, when the presence of porcelain is sensed, the automatic driving mode is adjusted to the comfort mode so as to drive more smoothly.
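The cargo/animal override described here is a simple precedence rule over the mode already chosen. A sketch, with assumed detection labels from an in-cabin detector:

```python
# Sketch: override the chosen driving mode when fragile items or small
# animals are detected in the cabin. The label set is an assumption.

FRAGILE_OR_ANIMAL = {"porcelain", "glassware", "cat", "dog"}

def apply_cargo_override(mode, detected_items):
    """Force the comfort mode (drive more smoothly) when any fragile
    item or small animal is detected; otherwise keep the given mode."""
    if FRAGILE_OR_ANIMAL & set(detected_items):
        return "comfort"
    return mode
```

Placing this check after the person-based decision keeps the two concerns separate: passenger preferences pick a mode, cargo sensing can only make it gentler.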
  • a vehicle automatic driving method according to an embodiment of the present invention will be described below with reference to the accompanying drawings.
  • the vehicle automatic driving method is implemented in combination with the aforementioned vehicle automatic driving system.
  • FIG. 5 illustrates an overall flow of a vehicle automatic driving method 400 in accordance with one embodiment of the present invention.
  • In step S410, the in-vehicle persons are sensed by the sensors to obtain their profile information.
  • In step S420, the persons in the vehicle are classified.
  • In step S430, the corresponding automatic driving mode is determined according to the classification, and the vehicle is adaptively controlled.
  • the vehicle automatic driving method 400 further includes: continuously receiving feedback from the in-vehicle persons during driving; and adaptively adjusting the parameters of the automatic driving based on that feedback.
  • Adaptively adjusting the parameters of the automatic driving according to feedback from the in-vehicle persons may include: in response to a feedback command from an in-vehicle person on the current driving state, adjusting using the preset parameters and adjustment amounts corresponding to that feedback, where the feedback command indicates a definite adjustment to the driving state.
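The preset-table approach can be sketched as a lookup from feedback command to (parameter, delta) pairs. The command names and the adjustment amounts below are illustrative placeholders.

```python
# Sketch of feedback-driven parameter adjustment using preset deltas.
# Command names and adjustment amounts are invented for the illustration.

PRESET_ADJUSTMENTS = {
    "faster": {"normal_accel": +0.2, "safe_distance_m": -2.0},
    "slower": {"normal_accel": -0.2, "safe_distance_m": +2.0},
}

def adjust(params, command):
    """Return a new parameter dict with the preset deltas applied.
    Unknown commands leave the parameters unchanged."""
    deltas = PRESET_ADJUSTMENTS.get(command, {})
    return {k: v + deltas.get(k, 0.0) for k, v in params.items()}
```

After a passenger confirms the result, the adjusted dict is exactly what would be recorded back as the preferred driving mode parameters mentioned below.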
  • the current parameters can be recorded as the user's preferred driving mode parameters, and the user's profile information is updated in the profile information database.
  • adaptively adjusting the parameters of the automatic driving based on feedback from the in-vehicle persons includes adjusting with a reinforcement learning method in response to feedback commands from the in-vehicle persons on the current driving state.
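The reinforcement-learning alternative can be illustrated with a minimal bandit-style update, where passenger feedback acts as the reward signal. This is only a sketch of the idea under that assumption; the patent does not specify the actual learning algorithm.

```python
import random

# Minimal epsilon-greedy bandit sketch: candidate parameter settings are
# the "arms", and passenger feedback (+1 comfortable / -1 uncomfortable)
# is the reward. This stands in for the unspecified reinforcement
# learning method; arm values and epsilon are illustrative.

class FeedbackBandit:
    def __init__(self, arms, epsilon=0.1):
        self.q = {a: 0.0 for a in arms}  # running value estimate per arm
        self.n = {a: 0 for a in arms}    # pull count per arm
        self.epsilon = epsilon

    def choose(self):
        """Explore a random arm with probability epsilon, else exploit."""
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, arm, reward):
        """Incremental-mean update of the chosen arm's value estimate."""
        self.n[arm] += 1
        self.q[arm] += (reward - self.q[arm]) / self.n[arm]
```

Over many drives, arms (parameter settings) that draw positive feedback accumulate higher value estimates and are chosen more often, which matches the "learn these parameter values over a large number of drives" description above.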
  • the profile information of the in-vehicle persons may include, for example and without limitation, identity, age, gender, facial features, voiceprint features, health status, emotional state, and the like.
  • the preferred driving mode of each person can be represented by corresponding driving parameters; that is, a preferred driving mode can be customized for each person.
  • classifying the in-vehicle persons includes determining whether each in-vehicle person is a patient, determining whether each in-vehicle person is an elderly person, a child, or a disabled person, and determining whether each person has set a preferred driving mode.
  • the automatic driving mode may include a comfort mode, a normal mode, and a sport mode, and determining the corresponding automatic driving mode according to the classification includes: determining whether there is a patient among the persons in the vehicle, and if there is a patient, determining that the automatic driving mode is the comfort mode; if there is no patient, determining whether each person in the vehicle has a preferred driving mode, and if every person has one, using the most comfortable of those preferred driving modes; and if not everyone in the car has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those who do not have one, and if so, using the comfort mode.
  • Different driving modes correspond to different planning and control parameters, such as: normal acceleration, emergency acceleration, normal deceleration, emergency deceleration, the maximum ratio of front-wheel deflection angle to vehicle speed, safe following distance, minimum lane-change frequency, safe lane-change distance, and so on.
  • sensing the in-vehicle persons includes using at least one of the following: a camera, a pressure sensor, a microphone, a fingerprint reader, and an infrared sensor.
  • obtaining the profile information of the in-vehicle persons includes identifying their identity, which includes: identifying the identity of each person in the vehicle, based on the identity information database, by one or a combination of the following identification technologies: face recognition, voiceprint recognition, fingerprint recognition, iris recognition, and infrared recognition; and, based on the identified identity, retrieving the person's profile information from the profile information database, in which the profile information is stored in association with the person's identity.
  • where the profile database stores no profile information for an in-vehicle person, profile information input from that person is received and stored into the database.
  • one or more of the identification technologies may run in a cloud server, or run with the local onboard computer system and the cloud server cooperating with each other.
  • other application services can likewise run on the local onboard computer system, in the cloud, or with the local onboard computer system and the cloud server cooperating.
  • one or both of the identity information database and the profile information database may be stored in the local computer system, in cloud storage, or cooperatively in both.
  • the vehicle automatic driving method may also sense whether a particular animal and/or particular item is present in the vehicle, and determine the corresponding automatic driving mode based on sensing that presence.
  • the onboard automatic driving computer system reads the sensor input, processes it with each perception module (such as face recognition and voiceprint recognition), passes the results through the recognition modules (such as identity recognition), and finally performs control according to an adaptive algorithm until the end of the trip. The adaptive algorithm can take driver and passenger feedback into account when making adjustments.
  • each recognition module performs processing based on the input of each sensor device such as a camera, a microphone, and a weight sensor; specifically, face recognition is performed by the face recognition module in step S510, voiceprint recognition is performed by the voiceprint recognition module in step S520, and other attributes and states of the in-vehicle personnel are identified by the corresponding recognition module in step S530.
  • specifically, face images in the car are obtained in real time by the onboard camera; the driver and the number of passengers are recognized by image processing according to facial features, and information such as the identity, age, gender, and mood of the driver and passengers is recognized.
  • the face recognition module can run on the onboard local computer system, on a server in the cloud, or collaboratively on both systems for better results.
  • information such as the identity, age, gender, and facial features of the in-vehicle personnel is stored in the permanent storage of the onboard local computer system, and optionally in cloud storage, for example in an identity information database and/or a profile information database; as noted above, the identity information database and the profile information database may be combined into one or stored and organized separately.
  • specifically, voiceprint recognition through the microphone is achieved by the onboard voiceprint recognition module initiating voice interaction with the passenger.
  • the voiceprint recognition module can run on the onboard local computer system, on a server in the cloud, or collaboratively on both systems for better results.
  • information such as the identity, age, gender, and voiceprint characteristics of the in-vehicle personnel can be stored in the permanent storage of the onboard local computer system, and optionally in cloud storage, for example in an identity information database and/or a profile information database; as noted above, the two databases may be combined into one or stored and organized separately.
  • by processing the information sensed by sensor devices such as cameras, microphones, and pressure sensors, the module for detecting other attributes and states of the in-vehicle personnel can detect further information, such as the number of persons, the health state of each person (healthy, fatigued, ill, disabled, etc.), emotional state (happy, sad, angry, nervous, surprised, etc.), and age group (child, youth, middle-aged, elderly).
  • information such as the other attributes and states of the in-vehicle personnel can be stored in the permanent storage of the onboard local computer system, and optionally in cloud storage, or handled collaboratively by both systems for better results, for example in the identity information database and/or the profile information database; the two databases may be combined into one or stored and organized separately.
  • in step S540, based on the results of face recognition S510, voiceprint recognition S520, and recognition of other attributes and states of the in-vehicle personnel S530, and with reference to user attributes stored locally and/or in the cloud (for example in the identity information database and profile information database), the identity information of the in-vehicle personnel is recognized and their profile information is obtained; the profile information here may be retrieved from the profile information database, input later via path ① (where the profile information database has no record of the person), collected by the recognition of other attributes and states of the in-vehicle personnel, or obtained from a combination of the above.
  • in step S570, a voice command is recognized. It should be noted that the monitoring, recognition, and processing of voice commands exist throughout the whole process of automatic driving of the vehicle.
  • in step S580, it is determined whether the voice command manually designates a driving mode. If the result of the determination is "Yes", the flow proceeds to the adaptive control step S550; if the result is "No", the flow proceeds to step S590.
  • in step S590, it is determined whether the voice command is feedback on the driving situation. If the result of the determination is "Yes", the flow proceeds to the adaptive control step S550; if the result is "No", the flow proceeds to step S591.
  • in step S591, it is determined whether the voice command is for personnel information entry. If the result is "Yes (Y)", the flow proceeds to step S540 (shown as ① in the figure) to perform in-vehicle personnel information entry, in which some basic data, such as age, gender, and preferred driving mode, can be entered through voice interaction; the in-car camera can also take multiple photos, and voiceprints, fingerprints, and so on can be recorded, and this information is stored as identification information and/or profile information on the onboard computer and/or in cloud storage. If the result is "No (N)", the command is determined to be invalid and is not processed.
  • the speech recognition module recognizes the above three kinds of commands (manually designated driving mode, feedback on driving conditions, personnel information entry) and ignores other unrecognizable speech. Other commands can also be designed, recognized, and processed as needed.
  • the speech recognition module can run on the local onboard computer system, on a server in the cloud, or collaboratively on both systems for better results.
  • in step S550, adaptive control is performed. A detailed description of an example adaptive control method is given later with reference to FIG. 7.
  • in step S560, it is determined whether the automatic driving trip has ended; if the result is "Y", the process ends, otherwise the flow returns to the beginning and continues to monitor the sensor input.
  • FIG. 7 shows a flow diagram of an exemplary adaptive control method 600 that may be applied to step S550 shown in FIG. 6, in accordance with an embodiment of the present invention.
  • in step S610, based on the profile information of the in-vehicle personnel, it is determined whether any in-vehicle person is classified as a patient. When some in-vehicle person is classified as a patient, the flow proceeds to step S611, in which the system uses the comfort mode by default. If it is determined from the profile information that there is no patient among the persons in the vehicle, the flow proceeds to step S620.
  • in step S620, it is determined whether every person in the vehicle has a preferred mode; specifically, the profile information of each in-vehicle person is checked to see whether all of them have set a preferred driving mode. If step S620 determines that every in-vehicle person has a preferred driving mode, the flow goes to step S621, in which the driving mode of the vehicle is set to the most comfortable among the preferred driving modes of all in-vehicle personnel.
  • if it is determined in step S620 that not every in-vehicle person has a preferred driving mode, the flow proceeds to step S630.
  • in step S630, it is determined whether there is an elderly person, a child, or a disabled person among the persons who have not set a preferred driving mode. If the determination result is YES, the flow goes to step S631, in which the comfort mode is used by default.
  • in step S640, the adaptive control unit collects sensor data and receives voice feedback from the persons in the vehicle.
  • in step S650, it is determined whether there is voice feedback. If the result of the determination is YES, the flow proceeds to step S660; otherwise the flow returns to step S640.
  • in step S660, the control is adaptively adjusted according to each person's voice feedback, and these characteristics are used to update the corresponding individual's driving mode.
  • for example, the automatic driving system provides default control parameters for the comfort mode, and a passenger wants the control during driving to be smoother and safer; he then issues the command "steadier, please", and on receiving the feedback command the automatic driving system fine-tunes these parameters. The passenger may repeatedly issue commands to adjust until no further adjustment is requested, at which point the system records the passenger's personalized comfort mode parameters as the passenger's preferred driving mode and updates the passenger's profile information to the identity information database and/or profile information database.
  • in step S670, it is determined whether the travel of the vehicle has ended, for example the vehicle has arrived at the destination or a parking order from a passenger has been received. If it is judged at step S670 that the travel has ended, the process ends; otherwise the flow returns to step S640.
  • FIG. 8 illustrates a flowchart of an exemplary adaptive control method 700 using a reinforcement learning method, which may be applied to step S550 shown in FIG. 6, in accordance with another embodiment of the present invention.
  • compared with FIG. 7, FIG. 8 differs in that step S660 is replaced by step S750, which adjusts the parameter control quantities using a reinforcement learning method, and there is no step corresponding to step S650 of FIG. 7, which determines whether there is voice feedback.
  • the remaining steps in FIG. 8 are similar to the corresponding steps in FIG. 7 and will not be described again.
  • the voice feedback judging step is removed in FIG. 8 because, in the reinforcement learning method, voice feedback serves as a reward function value, and the absence of speech in the vehicle is also treated as a certain reward function value, so there is no need to specifically judge whether there is voice feedback as a flow-jump condition as in FIG. 7.
  • reinforcement learning algorithms come in different classes, such as Monte Carlo methods and temporal-difference methods, but what these methods have in common at their core is a state set, an action set, and a reward function.
  • state set, action set, and reward function are defined as follows.
  • a1: throttle 0, brake 0, front wheel yaw angle 0
  • a2: throttle 0, brake 0, front wheel yaw angle +0.5 degrees
  • a3: throttle 0, brake 0, front wheel yaw angle +1 degree
  • a51: throttle 0, brake 0, front wheel yaw angle +25 degrees
  • a52: throttle 0, brake 0, front wheel yaw angle -0.5 degrees
  • a53: throttle 0, brake 0, front wheel yaw angle -1.0 degrees
  • a100: throttle 0, brake 0, front wheel yaw angle -25 degrees
  • a101: throttle 1, brake 0, front wheel yaw angle 0
  • a102: throttle 2, brake 0, front wheel yaw angle 0
  • a103: throttle 3, brake 0, front wheel yaw angle 0
  • a104: throttle 4, brake 0, front wheel yaw angle 0
  • a105: throttle 0, brake 1, front wheel yaw angle 0
  • a106: throttle 0, brake 2, front wheel yaw angle 0
  • a107: throttle 0, brake 3, front wheel yaw angle 0
  • a108: throttle 0, brake 4, front wheel yaw angle 0
  • the reward function is the feedback from the people in the car; the feedback is measured by the satisfaction of the people in the car and divided into five discrete values as shown in the table below. As shown in the fourth row of the table, for the case without feedback the reward function value r is set to 0, which is treated the same as the case where a person in the vehicle responds (for example, by voice) with "okay" or "just right". The correspondence between the feedback information in this table and the reward function value r is merely an example, and those skilled in the art can design it according to the situation.
  • the exemplary process is as follows:
  • α is a coefficient that controls the learning rate.
  • the adaptive planning and control module operates on a vehicle-mounted local computer system.
  • the automatic driving modes are described as including a comfort mode, a normal mode, and a sport mode. This is just an example; a more fine-grained or more coarse-grained classification may be used as needed.
  • voice feedback is taken as an example of the feedback method, but this is only an example; other feedback methods may be used depending on the situation, such as physical buttons installed in the car, an app on a passenger's mobile phone, or passenger gestures.
  • the term "vehicle" herein should be interpreted broadly: in addition to various large, medium, and small vehicles traveling on land, it can also include ships traveling on water, and even aircraft flying in the air.
  • the embodiments may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof.
  • the program code or code segments needed to perform the tasks can be stored in a computer readable medium, such as a storage medium, and the processor can perform the required tasks.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

Provided are a vehicle automatic driving method and a vehicle automatic driving system. The vehicle automatic driving system includes: a sensor (110) that senses the personnel in the vehicle; a profile information obtaining unit (120) for obtaining profile information of the in-vehicle personnel; a categorization unit (130) that categorizes the in-vehicle personnel; and an adaptive control unit (140) that determines, according to the categorization, a corresponding automatic driving mode and adaptively controls the vehicle. A method and system for adaptive control through in-vehicle perception are provided, enabling a safer and more efficient automatic driving method and system.

Description

Vehicle automatic driving method and vehicle automatic driving system
Technical field
The present invention relates generally to vehicle automatic driving technology, and more particularly to a vehicle automatic driving method and a vehicle automatic driving system based on in-vehicle perception.
Background
In an automatic driving system, planning mainly looks ahead, based on the results of environment perception, to simulate the various possible automatic driving schemes in the next time period, including speed adjustment and steering adjustment, and then passes the best scheme to the decision layer to realize the physical control of the vehicle. Environment perception mainly concerns the perception and cognition of the external physical road and of moving objects around the vehicle.
Existing automatic driving systems rely mainly on the results of external environment perception and on sensors of the vehicle itself, such as a speed sensor, for motion planning and control; perception of the in-vehicle environment is limited, and is generally restricted to sensing the driver's state, such as whether the driver is fatigued.
Summary of the invention
According to one aspect of the present invention, a vehicle automatic driving method is provided, including: sensing the personnel in the vehicle and obtaining profile information of the in-vehicle personnel; categorizing the in-vehicle personnel; and, according to the categorization, determining a corresponding automatic driving mode and adaptively controlling the vehicle.
Further, the vehicle automatic driving method may also include: continuously receiving feedback from the in-vehicle personnel during travel; and adaptively adjusting the parameters of automatic driving according to that feedback.
Further, in the vehicle automatic driving method, the profile information of the in-vehicle personnel may include the driving mode preferred by each person in the vehicle.
Further, in the vehicle automatic driving method, each person's preferred driving mode may be represented by corresponding driving parameters.
Further, in the vehicle automatic driving method, categorizing the in-vehicle personnel may include: determining whether each in-vehicle person is a patient; determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and determining whether every person has set a preferred driving mode.
Further, in the vehicle automatic driving method, the automatic driving modes may include a comfort mode, a normal mode, and a sport mode, and determining the corresponding automatic driving mode according to the categorization may include: determining whether any person in the vehicle is a patient, and if so, setting the automatic driving mode to the comfort mode; if no one in the vehicle is a patient, determining whether every person in the vehicle has a preferred driving mode, and if so, using the most comfortable of those preferred driving modes; and if not everyone in the vehicle has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those without a preferred driving mode, and if so, using the comfort mode.
Further, in the vehicle automatic driving method, sensing the in-vehicle personnel may include using at least one of the following: a camera, a pressure sensor, a microphone, a fingerprint reader, an infrared sensor.
Further, in the vehicle automatic driving method, obtaining the profile information of the in-vehicle personnel may include: identifying the identity of each in-vehicle person, based on an identity information database, by one or a combination of the following identification technologies: face recognition, voiceprint recognition, fingerprint recognition, iris recognition, infrared recognition; and, based on the identified identity, retrieving the person's profile information from a profile information database in which profile information is stored in association with the person's identity.
Further, in the vehicle automatic driving method, where the profile database stores no profile information for the in-vehicle person, profile information input from the person may be received and stored into the database.
Further, in the vehicle automatic driving method, the profile information may include one or more of: age, gender, mood, health state information, and preferred driving mode.
Further, in the vehicle automatic driving method, different modes may correspond to associated planning and control parameters, which may be selected from one or more of the following: normal acceleration, extraordinary acceleration, normal deceleration, extraordinary deceleration, the maximum ratio of front-wheel deflection angle to vehicle speed, safety distance, minimum lane-change frequency, lane-change safety distance.
Further, in the vehicle automatic driving method, adaptively adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel may include: in response to a feedback command from an in-vehicle person on the current driving state, adjusting with preset parameters and adjustment amounts corresponding to the feedback, the feedback command indicating a definite adjustment to the driving state.
Further, in the vehicle automatic driving method, when a user no longer gives feedback commands, the current parameters are recorded as the user's preferred driving mode parameters.
Further, in the vehicle automatic driving method, adaptively adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel may include: in response to a feedback command from an in-vehicle person on the current driving state, adjusting by means of a reinforcement learning method.
Further, in the vehicle automatic driving method, one or more of the identification technologies may run in the local onboard computer system, in a cloud server, or with the local onboard computer system and the cloud server cooperating with each other.
Further, in the vehicle automatic driving method, one or both of the identity information database and the profile information database of the in-vehicle personnel are stored in cloud storage.
Further, the vehicle automatic driving method may also include: sensing whether a particular animal and/or particular item is present in the vehicle; and, based on sensing the presence of a particular animal and/or particular item in the vehicle, determining the corresponding automatic driving mode.
According to another aspect of the present invention, a vehicle automatic driving system is provided, which may include: a sensor that senses the personnel in the vehicle; a profile information obtaining unit for obtaining profile information of the in-vehicle personnel; a categorization unit that categorizes the in-vehicle personnel; and an adaptive control unit that, according to the categorization, determines a corresponding automatic driving mode and adaptively controls the vehicle.
Further, the vehicle automatic driving system may also include a feedback receiving unit that continuously receives feedback from the in-vehicle personnel during travel; and the adaptive control unit adaptively adjusts the parameters of automatic driving according to that feedback.
In one example, the profile information of the in-vehicle personnel may include the driving mode preferred by each person in the vehicle.
In one example, in the vehicle automatic driving system, the categorization unit categorizing the in-vehicle personnel may include: determining whether each in-vehicle person is a patient; determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and determining whether every person has set a preferred driving mode.
In one example, in the vehicle automatic driving system, the automatic driving modes may include a comfort mode, a normal mode, and a sport mode, and determining the corresponding automatic driving mode according to the categorization includes: determining whether any person in the vehicle is a patient, and if so, setting the automatic driving mode to the comfort mode; if no one in the vehicle is a patient, determining whether every person in the vehicle has a preferred driving mode, and if so, using the most comfortable of those preferred driving modes; and if not everyone in the vehicle has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those without a preferred driving mode; and if so, using the comfort mode.
In one example, adaptively adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel may include: in response to a feedback command from an in-vehicle person on the current driving state, adjusting by means of a reinforcement learning method.
In one example, in the vehicle automatic driving system, the sensor may also sense whether a particular animal and/or particular item is present in the vehicle; and the adaptive control unit determines the corresponding automatic driving mode based on sensing the presence of a particular animal and/or particular item in the vehicle.
According to another aspect of the present invention, a vehicle automatic driving method is provided, which may include: sensing the personnel in the vehicle and obtaining their profile information; and adaptively controlling the vehicle based on the obtained profile information of the in-vehicle personnel.
The vehicle automatic driving system and method of the embodiments of the present invention focus on perception of the in-vehicle personnel, determine a corresponding automatic driving mode, continuously perceive the in-vehicle situation, and adaptively control the vehicle, and can thus provide a safer and more efficient automatic driving method and system.
Brief description of the drawings
These and/or other aspects and advantages of the present invention will become clearer and easier to understand from the following detailed description of the embodiments of the present invention in conjunction with the accompanying drawings, in which:
FIG. 1 shows a schematic structural block diagram of a vehicle automatic driving system 100 according to an embodiment of the present invention.
FIG. 2 shows a functional structure diagram of the profile information obtaining unit 120 according to an embodiment of the present invention.
FIG. 3 shows a structural block diagram of a vehicle automatic driving system 200 according to another embodiment of the present invention.
FIG. 4 shows a schematic structural diagram of a vehicle automatic driving system 300 according to another embodiment of the present invention.
FIG. 5 shows an overall flowchart of a vehicle automatic driving method 400 according to an embodiment of the present invention.
FIG. 6 describes in more detail an exemplary vehicle automatic driving method 500 according to an embodiment of the present invention.
FIG. 7 shows a flowchart of an exemplary adaptive control method 600 according to an embodiment of the present invention.
FIG. 8 shows a flowchart of an exemplary adaptive control method 700 using a reinforcement learning method according to another embodiment of the present invention.
Detailed description
To make the present invention better understood by those skilled in the art, the present invention is further described in detail below in conjunction with the drawings and specific embodiments.
Profile information (profile) is the data of an in-vehicle person, which can be obtained through registration by the person; it preferably includes the person's preferred driving mode, and may also include name, age, gender, health condition, mood, and so on.
Adaptive control and/or adjustment means automatically adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel until their requirements are met.
The present invention focuses on perception of the in-vehicle personnel, categorizes and analyzes them, determines a corresponding automatic driving mode, continuously perceives the in-vehicle situation, and adaptively controls the vehicle, and can thus provide a safer and more efficient automatic driving method and system.
FIG. 1 shows a schematic structural block diagram of a vehicle automatic driving system 100 according to an embodiment of the present invention.
The vehicle automatic driving system 100 includes a sensor 110, a profile information obtaining unit 120, a categorization unit 130, and an adaptive control unit 140. The sensor 110 senses the personnel in the vehicle; the profile information obtaining unit 120 obtains the profile information of the in-vehicle personnel; the categorization unit 130 categorizes the in-vehicle personnel; and the adaptive control unit 140 determines, according to the categorization, a corresponding automatic driving mode and adaptively controls the vehicle.
The profile information obtaining unit 120, categorization unit 130, and adaptive control unit 140 here can be implemented by the software or hardware of the onboard computer, or a combination of the two, or by the onboard computer cooperating with cloud storage and/or application services.
A high-performance embedded computer system is installed in the automatic driving vehicle to process sensor data and perform tasks such as perception, analysis, planning, decision, and control. The in-vehicle computer system contains components such as a CPU, memory, and permanent storage media, and optional computation acceleration components such as FPGA, ASIC, DSP, and GPU. The in-vehicle computer system may be a standalone system or a distributed system; if distributed, its multiple computing nodes are connected by some network (such as Ethernet). The multiple nodes of a distributed system may be homogeneous or heterogeneous, for example adopting different architectures and operating systems.
The in-vehicle computer system can run independently or be connected to the cloud 20. The cloud system can store a large amount of driver and passenger information uploaded from individual vehicles, and can provide identification services based on face recognition, voiceprint recognition, and the like.
The sensor here is a broad concept; it does not refer to a particular sensor device but may be a collection of sensor devices.
Sensing the in-vehicle personnel includes sensing their images, sounds, mass, and so on.
For example, sensing the in-vehicle personnel may use at least one of the following: a camera, a pressure sensor, a microphone, a fingerprint reader, an infrared sensor.
Regarding the in-vehicle sensor devices, the following arrangements may be considered.
(1) In-vehicle cameras. One or more cameras may be installed; each camera may be a monocular, binocular, or depth camera. For the installation, for example, one of the following two arrangements may be considered.
Arrangement 1: a camera facing the face is installed in front of each seat, so that the front of the face can be clearly photographed.
Arrangement 2: other camera arrangements may be chosen, such as using one camera to cover multiple seats, provided that the front of the faces of all drivers and passengers can be photographed.
(2) A pressure (mass) sensor on each seat. It can provide the number of passengers in the vehicle in real time, and rough age information, such as whether there is a child among the passengers, can also be identified from the mass information.
(3) Microphones. In-vehicle microphones can capture passengers' voice commands and feedback, which are understood through natural language processing, and can distinguish different persons by voiceprint.
Microphone arrangement: one microphone may be installed in the middle of the vehicle, or one each in the middle of the front row and the middle of the back row.
The above sensor devices are merely examples; sensor devices may be added or removed as needed, and any existing or future sensor device can be used in the present invention.
Regarding the profile information obtaining unit 120, the profile information here may include, for example but not limited to, identity, age, gender, facial features, voiceprint features, health state, emotional state, age group, and so on.
In a preferred example, the profile information of an in-vehicle person includes the person's preferred driving mode. In one example, the driving modes are divided by comfort level into a comfort mode, a normal mode, and a sport mode, with comfort decreasing in that order; different driving modes correspond to different planning and control parameters, such as normal acceleration, extraordinary acceleration, normal deceleration, extraordinary deceleration, maximum lateral control quantity, safety distance, minimum lane-change frequency, and lane-change safety distance. It should be noted that even for the same comfort mode, the corresponding parameters may differ from person to person. For example, an elderly person generally wants the vehicle to drive as smoothly as possible, and thus wants lower acceleration and a lower lane-change frequency, whereas a young person generally has no such high requirement for smoothness. Therefore, preferably, the preferred automatic driving mode can be customized for each person.
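As an illustration of per-mode parameter sets and per-person customization, a minimal sketch; the parameter names and numeric values are assumptions for illustration, not values from this disclosure:

```python
# Hypothetical planning/control parameter sets per driving mode.
DRIVING_MODES = {
    "comfort": dict(max_accel=1.5, max_decel=2.0, min_gap_m=40, lane_change_min_interval_s=60),
    "normal":  dict(max_accel=2.5, max_decel=3.5, min_gap_m=30, lane_change_min_interval_s=30),
    "sport":   dict(max_accel=3.5, max_decel=5.0, min_gap_m=20, lane_change_min_interval_s=10),
}

def personalized(base_mode, overrides):
    """A per-person preferred mode: a base mode plus individual parameter tweaks."""
    params = dict(DRIVING_MODES[base_mode])
    params.update(overrides)
    return params

# e.g. an elderly passenger's comfort mode with a gentler acceleration cap
elderly_comfort = personalized("comfort", {"max_accel": 1.0})
```

The same base mode can thus carry different parameters for different people, as the text notes for elderly versus young passengers.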
FIG. 2 shows a functional structure diagram of the profile information obtaining unit 120 according to an embodiment of the present invention.
The profile information obtaining unit 120 may include an identity recognition module 121, a profile information collection module 124, a module 123 for detecting other attributes and states of the in-vehicle personnel, and a profile information synthesis module 122.
In one example, the profile information may be obtained as follows. First, the identity recognition module 121 identifies the identity of the in-vehicle person: for example, based on an identity information database and on the facial features, voiceprint features, iris features, infrared features, and so on sensed by the in-vehicle sensors, the identity is recognized by one or a combination of the following identification technologies: face recognition, voiceprint recognition, fingerprint recognition, iris recognition, infrared recognition. Then, based on the identified identity, the person's profile information is retrieved from a profile information database, in which profile information is stored in association with the person's identity. The profile information database and the identity information database here may be separate or unified and shared. The identity information here is, for example, the person's ID (mobile phone number, identity card information) together with information that can uniquely identify a person, such as facial features, voiceprint features, palm print features, iris features, infrared features, and blood vessel features; based on the sensed information obtained through the sensors (images, sounds, fingerprints, etc.) and the identity information database, the identity of the in-vehicle person is recognized through recognition technology. In the profile information database, profile information such as name, age, gender, health condition, and preferred driving mode is stored in association with the person's identity.
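The identify-then-retrieve flow just described can be sketched as follows; the recognizer interface and the dict-backed databases are assumptions for illustration:

```python
# Try each available recognition technique until one yields an identity,
# then fetch the profile stored under that identity.
def identify_and_fetch_profile(sensor_data, identity_db, profile_db, recognizers):
    for recognize in recognizers:        # e.g. face, voiceprint, fingerprint
        person_id = recognize(sensor_data, identity_db)
        if person_id is not None:
            return person_id, profile_db.get(person_id)  # profile may be absent
    return None, None
```

A profile lookup returning `None` corresponds to the case where the person has no record yet and profile collection is triggered instead.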
Where the identity information database and/or the profile information database lack the information of some in-vehicle person, the profile information collection module 124 collects the profile information of the new person: some basic data, such as name, age, gender, and preferred driving mode, can be entered for example through voice interaction; the in-vehicle camera can take multiple photos, and voiceprints, fingerprints, and so on can be recorded; this information is formed into identity information records and/or profile information records and stored in the identity information database and/or the profile information database.
In addition to the profile information already in the identity information database and/or profile information database, the module 123 for detecting other attributes and states of the in-vehicle personnel can obtain current in-vehicle personnel information in real time, for example obtaining a person's current emotional state and current health condition by image processing of camera images, by measuring blood pressure, heartbeat, and so on, or through voice interaction with the person.
The profile information synthesis module 122 synthesizes the information from the profile information database, the information collected by the profile information collection module 124, and the information from the module 123 for other attributes and states of the in-vehicle personnel, to obtain the profile information of each in-vehicle person for use by the subsequent categorization unit and adaptive control unit.
Modules such as the identity recognition module 121 shown in FIG. 2 (including the face recognition module, voiceprint recognition module, iris recognition module, fingerprint recognition module, etc.), together with the profile information database and identity information database, can run and be stored on the onboard computer and/or in the cloud.
Returning to FIG. 1, the categorization unit 130 categorizes the in-vehicle personnel. For example, based on the profile information obtained by the profile information obtaining unit 120, the category of each in-vehicle person is judged; for example, categorizing the in-vehicle personnel includes: determining whether each in-vehicle person is a patient; determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and determining whether every person has set a preferred driving mode.
The adaptive control unit 140 determines, according to the categorization, a corresponding automatic driving mode and adaptively controls the vehicle.
The adaptive control unit 140 determining the corresponding automatic driving mode according to the categorization includes:
determining whether any person in the vehicle is a patient, and if so, determining the automatic driving mode to be the comfort mode;
if no one in the vehicle is a patient, determining whether every person in the vehicle has a preferred driving mode, and if so, using the most comfortable of those preferred driving modes; and
if not everyone in the vehicle has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those without a preferred driving mode; and if so, using the comfort mode.
The above categorization of in-vehicle personnel is only a preferred example; different categorizations can be made as needed.
The adaptive control process will be described in detail later in conjunction with the method flowcharts.
FIG. 3 shows a structural block diagram of a vehicle automatic driving system 200 according to another embodiment of the present invention.
Compared with FIG. 1, the vehicle automatic driving system 200 further includes a feedback receiving unit 250. The structures and operations of the sensor 210, profile information obtaining unit 220, and categorization unit 230 in FIG. 3 can be similar to the corresponding parts shown in FIG. 1 and are not repeated here.
The feedback receiving unit 250 continuously receives feedback from the in-vehicle personnel during travel, and the adaptive control unit 240 adaptively adjusts the parameters of automatic driving according to that feedback.
The feedback from the in-vehicle personnel may be received by voice, with a speech recognition module recognizing feedback such as "drive faster", "drive slower", "drive more steadily", "don't be so sluggish", and so on; or by other means, for example predefined physical control buttons provided in the car for pressing, or feedback through a vehicle driving control application on a person's mobile phone, or gestures, which is particularly suitable for deaf-mute persons.
Adaptively adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel may be done in two ways: one is to preset, by algorithm and based on prior knowledge, the parameters and adjustment amounts for each command; the other is to use a reinforcement learning method to learn these parameter values over a large amount of driving.
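The first approach, presetting adjustment amounts per command from prior knowledge, might look like the following sketch; the command names, parameter names, and step sizes are illustrative assumptions:

```python
# Each recognized feedback command maps to preset parameter deltas.
COMMAND_DELTAS = {
    "steadier": {"max_accel": -0.3, "lane_change_min_interval_s": +10},
    "faster":   {"target_speed_kmh": +5},
    "slower":   {"target_speed_kmh": -5},
}

def apply_feedback(params, command):
    """Apply the preset adjustment amounts for one recognized command."""
    for key, delta in COMMAND_DELTAS.get(command, {}).items():
        params[key] = round(params.get(key, 0) + delta, 6)
    return params
```

Unrecognized commands leave the parameters unchanged, matching the behavior of ignoring unrecognizable speech described later.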
FIG. 4 shows a schematic structural diagram of a vehicle automatic driving system 300 according to another embodiment of the present invention.
Compared with FIG. 3, the vehicle automatic driving system 300 of FIG. 4 further includes a driving mode instruction receiving module 360; the other modules 310, 320, 330, and 350 may be the same as the corresponding modules shown in FIG. 3 and are not repeated here.
The driving mode instruction receiving module 360 is used to receive an explicit designation of the vehicle driving mode by an in-vehicle person; the system provides several predefined driving modes, such as the normal mode, comfort mode, and sport mode. The driving mode instruction may be received by voice, with a speech recognition module recognizing instructions such as "comfort mode" or "sport mode"; or by other means, for example predefined physical control buttons in the car, such as two buttons, one meaning "better" and one meaning "worse" (of course more buttons, or other forms of feedback hardware, can also be designed); or gestures, which is particularly suitable for deaf-mute persons; or feedback through a vehicle driving control application on a person's mobile phone.
In one example, the sensor also senses whether a particular animal and/or particular item is present in the vehicle, and the adaptive control unit determines the corresponding automatic driving mode based on sensing that presence. For example, it is sensed whether fragile items, such as porcelain or glassware, or certain small animals, such as a kitten or puppy, are present in the vehicle, and the automatic driving mode is adjusted accordingly: for example, when porcelain is sensed, the automatic driving mode is set to the comfort mode for smoother driving.
A vehicle automatic driving method according to an embodiment of the present invention is described below with reference to the drawings. The vehicle automatic driving method is implemented in conjunction with the vehicle automatic driving system described above.
FIG. 5 shows an overall flowchart of a vehicle automatic driving method 400 according to an embodiment of the present invention.
As shown in FIG. 5, in step S410, the personnel in the vehicle are sensed by sensors, and their profile information is obtained.
In step S420, the in-vehicle personnel are categorized.
In step S430, according to the categorization, a corresponding automatic driving mode is determined and the vehicle is adaptively controlled.
In one example, the vehicle automatic driving method 400 also includes: continuously receiving feedback from the in-vehicle personnel during travel; and adaptively adjusting the parameters of automatic driving according to that feedback.
Adaptively adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel may include: in response to a feedback command from an in-vehicle person on the current driving state, adjusting with preset parameters and adjustment amounts corresponding to the feedback, the feedback command indicating a definite adjustment to the driving state.
In one example, when a user no longer gives feedback commands, the current parameters may be recorded as the user's preferred driving mode parameters, and the user's profile information is updated to the profile information database.
In one example, adaptively adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel includes: in response to a feedback command from an in-vehicle person on the current driving state, adjusting by means of a reinforcement learning method.
In one example, the profile information of the in-vehicle personnel may include, for example but not limited to, identity, age, gender, facial features, voiceprint features, health state, emotional state, age group, and so on.
Each person's preferred driving mode may be represented by corresponding driving parameters; that is, a preferred driving mode can be customized for each person.
In one example, categorizing the in-vehicle personnel includes: determining whether each in-vehicle person is a patient; determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and determining whether every person has set a preferred driving mode.
In one example, the automatic driving modes may include a comfort mode, a normal mode, and a sport mode, and determining the corresponding automatic driving mode according to the categorization includes: determining whether any person in the vehicle is a patient, and if so, determining the automatic driving mode to be the comfort mode; if no one in the vehicle is a patient, determining whether every person in the vehicle has a preferred driving mode, and if so, using the most comfortable of those preferred driving modes; and if not everyone in the vehicle has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those without a preferred driving mode; and if so, using the comfort mode.
Different driving modes correspond to different planning and control parameters, for example: normal acceleration, extraordinary acceleration, normal deceleration, extraordinary deceleration, the maximum ratio of front-wheel deflection angle to vehicle speed, safety distance, minimum lane-change frequency, lane-change safety distance, and so on.
In one example, sensing the in-vehicle personnel includes using at least one of the following: a camera, a pressure sensor, a microphone, a fingerprint reader, an infrared sensor.
In one example, obtaining the profile information of the in-vehicle personnel includes: identifying the identity of each in-vehicle person, based on an identity information database, by one or a combination of the following identification technologies: face recognition, voiceprint recognition, fingerprint recognition, iris recognition, infrared recognition; and, based on the identified identity, retrieving the person's profile information from a profile information database in which profile information is stored in association with the person's identity.
In one example, where the profile database stores no profile information for the in-vehicle person, profile information input from the person is received and stored into the database.
In one example, one or more of the identification technologies run in a cloud server, or run with the local onboard computer system and the cloud server cooperating with each other. Similarly, other application services may also run on the local onboard computer system, in the cloud, or with the local onboard computer system and the cloud server cooperating.
Similarly, one or both of the identity information database and the profile information database of the in-vehicle personnel may be stored in the local computer system and/or in cloud storage, or stored cooperatively in both.
In one example, the vehicle automatic driving method may also sense whether a particular animal and/or particular item is present in the vehicle, and determine the corresponding automatic driving mode based on sensing that presence.
As a specific example, a more detailed process of an exemplary vehicle automatic driving method 500 according to an embodiment of the present invention is described below with reference to FIG. 6.
As shown in FIG. 6, in each cycle after the trip starts, the onboard automatic driving computer system reads the sensor input, which is processed by each perception module (such as face recognition and voiceprint recognition) and then by the recognition modules (such as identity recognition); control is finally performed according to an adaptive algorithm until the end of the trip. The adaptive algorithm can take driver and passenger feedback into account when making adjustments.
As shown in FIG. 6, based on the input of sensor devices such as the camera, microphone, and weight sensor, the recognition modules perform processing: specifically, face recognition is performed by the face recognition module in step S510, voiceprint recognition is performed by the voiceprint recognition module in step S520, and other attributes and states of the in-vehicle personnel are identified by the corresponding recognition module in step S530.
These are detailed separately below.
(1) Face recognition
Specifically, face images in the car are obtained in real time by the onboard camera; the driver and the number of passengers in the car are recognized by image processing according to facial features, and information such as the identity, age, gender, and mood of the driver and passengers can be recognized.
The face recognition module can run on the onboard local computer system, on a server in the cloud, or collaboratively on both systems for better results.
Information such as the identity, age, gender, and facial features of the in-vehicle personnel is stored in the permanent storage of the onboard local computer system, and optionally in cloud storage, for example in an identity information database and/or a profile information database; as noted above, the two databases may be combined into one or stored and organized separately.
(2) Voiceprint recognition
Specifically, voiceprint recognition through the microphone is achieved by the onboard voiceprint recognition module initiating voice interaction with the passenger.
The voiceprint recognition module can run on the onboard local computer system, on a server in the cloud, or collaboratively on both systems for better results.
Information such as the identity, age, gender, and voiceprint features of the in-vehicle personnel can be stored in the permanent storage of the onboard local computer system, and optionally in cloud storage, for example in an identity information database and/or a profile information database; as noted above, the two databases may be combined into one or stored and organized separately.
(3) Detection of other attributes and states of the in-vehicle personnel
By processing the information sensed by sensor devices such as cameras, microphones, and pressure sensors, the module for detecting other attributes and states of the in-vehicle personnel can detect further information, including the number of persons, each person's health state (healthy, fatigued, ill, disabled, etc.), emotional state (happy, sad, angry, nervous, surprised, etc.), and age group (child, youth, middle-aged, elderly).
Information such as the other attributes and states of the in-vehicle personnel can be stored in the permanent storage of the onboard local computer system, and optionally in cloud storage, or handled collaboratively by both systems for better results, for example in the identity information database and/or the profile information database; as noted above, the two databases may be combined into one or stored and organized separately.
Returning to FIG. 6, in step S540, based on the results of face recognition S510, voiceprint recognition S520, and recognition of other attributes and states of the in-vehicle personnel S530, and with reference to the user attributes stored locally and/or in the cloud (for example in the identity information database and profile information database), the identity information of the in-vehicle personnel is recognized and their profile information is obtained. The profile information here may be retrieved from the profile information database, input later via path ① (where the profile information database has no record of the person), collected by the recognition of other attributes and states of the in-vehicle personnel, or obtained from a combination of the above.
In addition, the vehicle automatic driving method 500 contains a voice command recognition and processing path.
In step S570, a voice command is recognized. It should be noted that the monitoring, recognition, and processing of voice commands exist throughout the whole process of automatic driving of the vehicle.
After a voice command is recognized, in step S580 it is determined whether the voice command manually designates a driving mode. If the result is "Yes (Y)", the flow goes to the adaptive control step S550; if the result is "No (N)", the flow proceeds to step S590.
In step S590, it is determined whether the voice command is feedback on the driving situation. If the result is "Yes (Y)", the flow goes to the adaptive control step S550; if the result is "No (N)", the flow proceeds to step S591.
In step S591, it is determined whether the voice command is for personnel information entry. If the result is "Yes (Y)", the flow goes to step S540 (shown as ① in the figure) to perform in-vehicle personnel information entry, in which some basic data, such as age, gender, and preferred driving mode, can be entered through voice interaction; the in-car camera can also take multiple photos, and voiceprints, fingerprints, and so on can be recorded, and this information is stored as identification information and/or profile information on the onboard computer and/or in cloud storage. If the result is "No (N)", the command is determined to be invalid and is not processed.
In this example, the speech recognition module recognizes the above three kinds of commands (manually designated driving mode, feedback on the driving situation, personnel information entry) and ignores other unrecognizable speech. Other commands can also be designed, recognized, and processed as needed.
The speech recognition module can run on the onboard local computer system, on a server in the cloud, or collaboratively on both systems for better results.
In step S550, adaptive control is performed. A detailed description of an example adaptive control method is given later with reference to FIG. 7.
In step S560, it is determined whether the automatic driving trip has ended; if the result is "Y", the process ends, otherwise the flow returns to the beginning and continues to monitor the sensor input.
There are two possible adaptive control adjustment methods: one is to preset, by algorithm and based on prior knowledge, the parameters and adjustment amounts for each command; the other is to use a reinforcement learning method to learn these parameter values over a large amount of driving. Examples of these two adaptive control adjustment methods are described below with reference to FIG. 7 and FIG. 8, respectively.
A detailed description of an example adaptive control method based on prior knowledge is given below with reference to FIG. 7.
FIG. 7 shows a flowchart of an exemplary adaptive control method 600 according to an embodiment of the present invention; the adaptive control method 600 can be applied to step S550 shown in FIG. 6.
As shown in FIG. 7, in step S610, based on the profile information of the in-vehicle personnel, it is determined whether any in-vehicle person is classified as a patient; when some in-vehicle person is classified as a patient, the flow goes to step S611, and the system uses the comfort mode by default. If, based on the profile information, it is determined that there is no patient among the in-vehicle personnel, the flow proceeds to step S620.
In step S620, it is determined whether every person in the vehicle has a preferred mode; specifically, the profile information of each in-vehicle person is checked to see whether all of them have set a preferred driving mode. If step S620 determines that every in-vehicle person has a preferred driving mode, the flow goes to step S621, in which the driving mode of the vehicle is set to the most comfortable among the preferred driving modes of all in-vehicle personnel.
If step S620 determines that not every in-vehicle person has a preferred driving mode, the flow proceeds to step S630.
In step S630, it is determined whether there is an elderly person, a child, or a disabled person among the persons who have not set a preferred driving mode; if the result is yes, the flow goes to step S631, in which the comfort mode is used by default.
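The decision flow of steps S610 through S631 can be sketched as a small function; the field names and the fallback to the normal mode (a branch the flowchart leaves unspecified) are assumptions for illustration:

```python
COMFORT, NORMAL, SPORT = "comfort", "normal", "sport"
RANKING = {COMFORT: 2, NORMAL: 1, SPORT: 0}  # higher = more comfortable

def choose_mode(people):
    """people: dicts with 'is_patient', 'is_vulnerable' (elderly/child/disabled),
    and 'preferred_mode' (None when not set)."""
    if any(p["is_patient"] for p in people):          # S610 -> S611
        return COMFORT
    if all(p["preferred_mode"] for p in people):      # S620 -> S621
        return max((p["preferred_mode"] for p in people),
                   key=lambda m: RANKING[m])
    no_pref = [p for p in people if not p["preferred_mode"]]
    if any(p["is_vulnerable"] for p in no_pref):      # S630 -> S631
        return COMFORT
    return NORMAL  # fallback; assumed, not specified by the flowchart
```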
In step S640, the adaptive control unit collects sensor data and receives voice feedback from the in-vehicle personnel.
In step S650, it is determined whether there is voice feedback. If the result is yes, the flow proceeds to step S660; otherwise it jumps back to step S640.
In step S660, the control is adaptively adjusted according to each person's voice feedback, and these characteristics are used to update the corresponding individual's driving mode. For example, the automatic driving system provides default control parameters for the comfort mode, and a passenger wants the control during driving to be smoother and safer; he then issues the command "steadier, please", and on receiving the feedback command the automatic driving system fine-tunes these parameters. The passenger may repeatedly issue commands to adjust until no further adjustment is requested, at which point the system records the passenger's personalized comfort mode parameters as the passenger's preferred driving mode and updates the passenger's profile information to the identity information database and/or profile information database.
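A minimal sketch of the fine-tune-then-record behavior of step S660, assuming hypothetical parameter names and fixed step sizes per command:

```python
# Apply each "steadier" command as a small preset adjustment; once the
# passenger stops issuing commands, persist the result as their preference.
def fine_tune(params, commands, profile):
    for cmd in commands:
        if cmd == "steadier":
            params["max_accel"] = round(params["max_accel"] - 0.2, 3)
            params["min_gap_m"] += 5
    # no further commands: record the personalized parameters
    profile["preferred_mode_params"] = dict(params)
    return profile

profile = fine_tune({"max_accel": 2.0, "min_gap_m": 30}, ["steadier", "steadier"], {})
```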
In step S670, it is determined whether the travel of the vehicle has ended, for example the vehicle has arrived at the destination or a parking order from a passenger has been received. If it is judged at step S670 that the travel has ended, the process ends; otherwise the flow returns to step S640.
FIG. 8 shows a flowchart of an exemplary adaptive control method 700 using a reinforcement learning method according to another embodiment of the present invention; the adaptive control method 700 can be applied to step S550 shown in FIG. 6.
Compared with FIG. 7, FIG. 8 differs in that step S660 is replaced by step S750, which adjusts the parameter control quantities using a reinforcement learning method, and there is no step corresponding to step S650 of FIG. 7, which determines whether there is voice feedback. The remaining steps in FIG. 8 are similar to the correspondingly numbered steps in FIG. 7 and are not repeated here.
The voice feedback judging step is removed in FIG. 8 because, in the reinforcement learning method, voice feedback serves as a reward function value; when there is no speech in the vehicle, this is also considered to yield a certain reward function value, so there is no need, as in FIG. 7, to specifically judge whether there is voice feedback as a flow-jump condition.
A description of an exemplary implementation of adjusting the automatic driving planning and control parameter quantities using a reinforcement learning method is given below; this exemplary implementation can be used in step S750 of FIG. 8.
Reinforcement learning algorithms come in different classes, such as Monte Carlo methods and temporal-difference methods, but what these methods have in common at their core is a state set, an action set, and a reward function. In one example, the state set, action set, and reward function are defined as follows.
State set S:
s1: longitudinal vehicle speed
s2: longitudinal acceleration
s3: lateral speed
s4: lateral acceleration
s5: distance to the vehicle ahead in the same lane
s6: distance to the vehicle behind in the same lane
s7: distance to the vehicle ahead in the left lane
s8: distance to the vehicle behind in the left lane
s9: distance to the vehicle ahead in the right lane
s10: distance to the vehicle behind in the right lane
s11: desired longitudinal vehicle speed
s12: desired lateral vehicle speed
s13: desired lane-change direction
All of the above states are discretized interval values.
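Since all states are discretized interval values, each continuous quantity can be mapped to a bin index; a sketch with illustrative speed boundaries (the bin edges are assumptions, not values from this disclosure):

```python
import bisect

SPEED_BINS = [0, 20, 40, 60, 80, 100]   # km/h boundaries (illustrative)

def discretize(value, bins=SPEED_BINS):
    """Return the index of the interval that contains value."""
    return bisect.bisect_right(bins, value) - 1

# e.g. 45 km/h falls in the interval [40, 60), i.e. bin index 2
```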
Action set A:
a1: throttle 0, brake 0, front wheel yaw angle 0
a2: throttle 0, brake 0, front wheel yaw angle +0.5 degrees
a3: throttle 0, brake 0, front wheel yaw angle +1 degree
a51: throttle 0, brake 0, front wheel yaw angle +25 degrees
a52: throttle 0, brake 0, front wheel yaw angle -0.5 degrees
a53: throttle 0, brake 0, front wheel yaw angle -1.0 degrees
a100: throttle 0, brake 0, front wheel yaw angle -25 degrees
a101: throttle 1, brake 0, front wheel yaw angle 0
a102: throttle 2, brake 0, front wheel yaw angle 0
a103: throttle 3, brake 0, front wheel yaw angle 0
a104: throttle 4, brake 0, front wheel yaw angle 0
a105: throttle 0, brake 1, front wheel yaw angle 0
a106: throttle 0, brake 2, front wheel yaw angle 0
a107: throttle 0, brake 3, front wheel yaw angle 0
a108: throttle 0, brake 4, front wheel yaw angle 0
All of the above actions (throttle, brake, and front wheel yaw angle) are discretized values.
Reward function R:
The feedback from the in-vehicle personnel. The feedback here is measured by the satisfaction of the in-vehicle personnel and divided into five discrete values as shown in the table below. As shown in the fourth row of the table, for the case without feedback the reward function value r is set to 0, which is treated the same as the case where a person in the vehicle responds (for example, by voice) with "okay" or "just right". The correspondence between the feedback information in this table and the reward function value r is merely an example, and those skilled in the art can design it according to the situation.
Feedback information    Reward r
Very bad    -2
Worse    -1
Okay, just right, or no feedback    0
Better    +1
Very good    +2
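The table can be expressed as a simple mapping in which silence yields r = 0; the English phrases here are translations of the table entries, and the function name is illustrative:

```python
from typing import Optional

# Five-level feedback table; silence counts the same as "okay"/"just right".
FEEDBACK_REWARD = {
    "very bad": -2,
    "worse": -1,
    "okay": 0,
    "just right": 0,
    "better": +1,
    "very good": +2,
}

def reward(feedback: Optional[str]) -> int:
    """Map an in-vehicle feedback utterance (or None for silence) to r."""
    if feedback is None:
        return 0
    return FEEDBACK_REWARD.get(feedback, 0)
```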
As an example of the temporal-difference type of reinforcement learning method, taking the Q-Learning algorithm as an example, an exemplary flow is as follows:
1. Initialize Q(s, a) to arbitrary values
2. Repeat the following loop until the change in the policy is smaller than a preset threshold
· Initialize s to an arbitrary value
· Repeat the following steps until s is a terminal state
· Using the policy π(s, a) in Q (for example the ε-greedy method), choose an action a for the current state s
· Execute action a, and observe the immediate reward r and the next state s'
· Q(s, a) += α[r + γ max_a' Q(s', a') − Q(s, a)]
· s = s'
In the above algorithm, α is a coefficient that controls the learning rate.
The ε-greedy method is as follows: with probability 1 − ε, choose the action a that maximizes Q(s, a); with probability ε, choose an action uniformly at random from the action set.
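A minimal tabular sketch of the Q-Learning update and ε-greedy selection above; the toy states and actions are illustrative, not the full state and action sets defined earlier:

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, actions, eps):
    """With probability eps explore a random action; otherwise exploit."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    """One tabular update: Q(s,a) += alpha*[r + gamma*max_a' Q(s',a') - Q(s,a)]."""
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

Q = defaultdict(float)
actions = ["a1", "a2"]
# one update in state s0 after action a1 drew reward +1 (passenger said "better")
q_update(Q, "s0", "a1", +1.0, "s1", actions)
```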
Here, preferably, the adaptive planning and control module runs on the onboard local computer system.
In the foregoing exemplary description, the automatic driving modes are described as including a comfort mode, a normal mode, and a sport mode. This is merely an example; a more fine-grained or more coarse-grained classification may be used as needed.
It should be noted that in FIG. 7 and FIG. 8, voice feedback is taken as an example of the feedback method, but this is only an example; other feedback methods may be used depending on the situation, such as physical buttons installed in the car, an app on a passenger's mobile phone, or passenger gestures.
The term "vehicle" herein should be interpreted broadly: in addition to various large, medium, and small vehicles traveling on land, it can also include ships traveling on water, and even aircraft flying in the air.
It should be noted that the methods, systems, and devices discussed above are intended only as examples. It must be emphasized that the various embodiments may omit, substitute, or add various processes or components as appropriate. For example, it should be understood that in alternative embodiments the methods may be performed in an order different from that described, and individual steps may be added, omitted, or combined. Moreover, features described with respect to certain embodiments may be combined in various other embodiments, and different aspects and elements of the embodiments may be combined in a similar manner. Furthermore, it should be emphasized that technology evolves, and many of the elements exemplified herein are therefore only examples and should not be interpreted as limiting the scope of the present invention.
In addition, the embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When tasks are implemented in software, firmware, middleware, or microcode, the program code or code segments needed to perform the tasks can be stored in a computer-readable medium such as a storage medium, and a processor can perform the required tasks.
The embodiments of the present invention have been described above. The above description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (25)

  1. A vehicle automatic driving method, comprising:
    sensing the personnel in the vehicle and obtaining profile information of the in-vehicle personnel;
    categorizing the in-vehicle personnel; and
    according to the categorization, determining a corresponding automatic driving mode and adaptively controlling the vehicle.
  2. The vehicle automatic driving method according to claim 1, further comprising:
    continuously receiving feedback from the in-vehicle personnel during travel; and
    adaptively adjusting the parameters of automatic driving according to the feedback of the in-vehicle personnel.
  3. The vehicle automatic driving method according to claim 1, wherein the profile information of the in-vehicle personnel includes the driving mode preferred by each person in the vehicle.
  4. The vehicle automatic driving method according to claim 3, wherein each person's preferred driving mode is represented by corresponding driving parameters.
  5. The vehicle automatic driving method according to claim 1, wherein categorizing the in-vehicle personnel comprises:
    determining whether each in-vehicle person is a patient;
    determining whether each in-vehicle person is an elderly person, a child, or a disabled person; and
    determining whether every person has set a preferred driving mode.
  6. The vehicle automatic driving method according to claim 1, wherein the automatic driving modes include a comfort mode, a normal mode, and a sport mode, and determining the corresponding automatic driving mode according to the categorization comprises:
    determining whether any person in the vehicle is a patient, and if so, determining the automatic driving mode to be the comfort mode;
    if no one in the vehicle is a patient, determining whether every person in the vehicle has a preferred driving mode, and if so, using the most comfortable of those preferred driving modes; and
    if not everyone in the vehicle has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those without a preferred driving mode; and if so, using the comfort mode.
  7. The vehicle automatic driving method according to claim 1, wherein sensing the in-vehicle personnel comprises using at least one of the following:
    a camera, a pressure sensor, a microphone, a fingerprint reader, an infrared sensor.
  8. The vehicle automatic driving method according to claim 1, wherein obtaining the profile information of the in-vehicle personnel comprises:
    identifying the identity of each in-vehicle person, based on an identity information database, by one or a combination of the following identification technologies: face recognition, voiceprint recognition, fingerprint recognition, iris recognition, infrared recognition; and
    based on the identified identity, retrieving the person's profile information from a profile information database in which profile information is stored in association with the person's identity.
  9. The automatic driving method according to claim 8, wherein, where the profile database stores no profile information for the in-vehicle person, profile information input from the person is received and stored into the database.
  10. The vehicle automatic driving method according to claim 8, wherein the profile information includes:
    one or more of age, gender, mood, health state information, and preferred driving mode.
  11. 根据权利要求1的车辆自动驾驶方法,不同的模式对应于相关联的规划和控制参数,所述参数选自下列项目中的一个或多个:
    正常加速度、超常加速度、正常减速度、超常减速度、前轮偏角和车速比的最大值、安全距离、换道最小频率、换道安全车距。
  12. 根据权利要求2的车辆自动驾驶方法,所述根据车内人员的反馈自适应调整自动驾驶的参数包括:
    响应于车内人员对当前驾驶状态的反馈式命令,利用预设的对应于所述反馈的参数和调整量来进行调整,所述反馈式命令指示对驾驶状态进行确定调整。
  13. 根据权利要求12的车辆自动驾驶方法,当用户不再给出反馈式命令时,记录当前的参数作为该用户的偏好的驾驶模式参数。
  14. The vehicle automatic driving method according to claim 2, wherein adaptively adjusting the parameters of automatic driving according to the feedback from the persons in the vehicle comprises:
    in response to a feedback command from a person in the vehicle regarding the current driving state, performing the adjustment using a reinforcement learning method.
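Claim 14 (and the corresponding system claim 23) leaves the reinforcement learning method unspecified. As a toy illustration only, one could treat each feedback event as a reward signal and nudge a parameter in the direction that has earned positive feedback; the reward encoding and update rule below are invented here, not taken from the application:

```python
# Toy reinforcement-style parameter update (claim 14), illustrative only.
def rl_adjust(value, feedback_history, step=0.1):
    """value: current parameter value.
    feedback_history: list of +1/-1 rewards observed after recent
    increases of the parameter (an assumed encoding of passenger feedback).
    Moves the parameter in proportion to the average observed reward."""
    if not feedback_history:
        return value
    avg_reward = sum(feedback_history) / len(feedback_history)
    return value + step * avg_reward
```

A production system would more plausibly use a full policy-learning loop over the planner's parameter vector; this fragment only shows the reward-driven direction of adjustment.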
  15. The vehicle automatic driving method according to claim 8, wherein one or more of the identification technologies run in a local on-board computer system, in a cloud server, or in a manner in which the local on-board computer system and the cloud server cooperate with each other.
  16. The vehicle automatic driving method according to claim 8, wherein one or both of the identity information database and the profile information database of the persons in the vehicle are stored in a local on-board computer system or in cloud storage.
  17. The vehicle automatic driving method according to any one of claims 1 to 16, further comprising:
    sensing whether a specific animal and/or a specific article is present in the vehicle; and
    determining a corresponding automatic driving mode based on sensing that a specific animal and/or a specific article is present in the vehicle.
  18. A vehicle automatic driving system, comprising:
    a sensor for sensing persons in the vehicle;
    a profile information obtaining unit for obtaining profile information of the persons in the vehicle;
    a classification unit for classifying the persons in the vehicle; and
    an adaptive control unit for determining a corresponding automatic driving mode according to the classification and adaptively controlling the vehicle.
  19. The vehicle automatic driving system according to claim 18, further comprising:
    a feedback receiving unit for continuously receiving feedback from the persons in the vehicle during driving,
    wherein the adaptive control unit adaptively adjusts the parameters of automatic driving according to the feedback from the persons in the vehicle.
  20. The vehicle automatic driving system according to claim 18, wherein the profile information of the persons in the vehicle includes a driving mode preferred by each person in the vehicle.
  21. The vehicle automatic driving system according to claim 18, wherein classifying the persons in the vehicle comprises:
    determining whether each person in the vehicle is a patient;
    determining whether each person in the vehicle is an elderly person, a child, or a disabled person; and
    determining whether every person has set a preferred driving mode.
  22. The vehicle automatic driving system according to any one of claims 18-21, wherein the automatic driving modes include a comfort mode, a normal mode, and a sport mode, and determining the corresponding automatic driving mode according to the classification comprises:
    determining whether there is a patient among the persons in the vehicle, and if so, determining the automatic driving mode to be the comfort mode;
    if there is no patient among the persons in the vehicle, determining whether every person in the vehicle has a preferred driving mode, and if so, using the most comfortable of these preferred driving modes; and
    if not every person in the vehicle has a preferred driving mode, determining whether there is an elderly person, a child, or a disabled person among those without a preferred driving mode, and if so, using the comfort mode.
  23. The vehicle automatic driving system according to claim 19, wherein adaptively adjusting the parameters of automatic driving according to the feedback from the persons in the vehicle comprises:
    in response to a feedback command from a person in the vehicle regarding the current driving state, performing the adjustment using a reinforcement learning method.
  24. The vehicle automatic driving system according to any one of claims 18 to 21, wherein
    the sensor further senses whether a specific animal and/or a specific article is present in the vehicle; and
    the adaptive control unit determines a corresponding automatic driving mode based on sensing that a specific animal and/or a specific article is present in the vehicle.
  25. A vehicle automatic driving method, comprising:
    sensing persons in the vehicle and obtaining profile information of the persons in the vehicle; and
    adaptively controlling the vehicle based on the obtained profile information of the persons in the vehicle.
PCT/CN2016/086914 2016-06-23 2016-06-23 Vehicle automatic driving method and vehicle automatic driving system WO2017219319A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201680001428.5A CN107223101A (zh) 2016-06-23 2016-06-23 Vehicle automatic driving method and vehicle automatic driving system
PCT/CN2016/086914 WO2017219319A1 (zh) 2016-06-23 2016-06-23 Vehicle automatic driving method and vehicle automatic driving system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/086914 WO2017219319A1 (zh) 2016-06-23 2016-06-23 Vehicle automatic driving method and vehicle automatic driving system

Publications (1)

Publication Number Publication Date
WO2017219319A1 true WO2017219319A1 (zh) 2017-12-28

Family

ID=59928253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/086914 WO2017219319A1 (zh) 2016-06-23 2016-06-23 Vehicle automatic driving method and vehicle automatic driving system

Country Status (2)

Country Link
CN (1) CN107223101A (zh)
WO (1) WO2017219319A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110570127A * 2019-09-12 2019-12-13 启迪数华科技有限公司 Intelligent bus system, and vehicle operation scheduling method and device
CN112418162A * 2020-12-07 2021-02-26 安徽江淮汽车集团股份有限公司 Vehicle control method, device, storage medium, and apparatus
CN112906304A * 2021-03-10 2021-06-04 北京航空航天大学 Brake control method and device
CN112947759A * 2021-03-08 2021-06-11 上汽大众汽车有限公司 In-vehicle emotional interaction platform and interaction method
CN113255347A * 2020-02-10 2021-08-13 阿里巴巴集团控股有限公司 Method and device for data fusion, and recognition method for unmanned driving devices
CN114407907A * 2022-01-18 2022-04-29 上汽通用五菱汽车股份有限公司 Adaptive parameter adjustment method for intelligent driving systems, device, and storage medium
US11643086B2 2017-12-18 2023-05-09 Plusai, Inc. Method and system for human-like vehicle control prediction in autonomous driving vehicles
US11650586B2 2017-12-18 2023-05-16 Plusai, Inc. Method and system for adaptive motion planning based on passenger reaction to vehicle motion in autonomous driving vehicles
CN117334066A * 2023-09-20 2024-01-02 广州亿胜鑫网络科技有限公司 Risk analysis method, system, terminal, and storage medium based on vehicle data

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017219585A1 * 2017-11-03 2019-05-09 Zf Friedrichshafen Ag Method for adapting the comfort of a vehicle, control device, and vehicle
CN109760603A * 2017-11-09 2019-05-17 英属开曼群岛商麦迪创科技股份有限公司 Vehicle body equipment control system and vehicle body equipment control method
TWM563382U * 2017-11-09 2018-07-11 英屬開曼群島商麥迪創科技股份有限公司 Vehicle body equipment control system
CN108177611A * 2017-12-14 2018-06-19 蔚来汽车有限公司 Monitoring system and method for objects inside a vehicle, and vehicle
CN108146360A * 2017-12-25 2018-06-12 出门问问信息科技有限公司 Vehicle control method and apparatus, on-board device, and readable storage medium
JP6743072B2 * 2018-01-12 2020-08-19 本田技研工業株式会社 Control device, operation method of control device, and program
CN110162027A * 2018-02-11 2019-08-23 上海捷谷新能源科技有限公司 Vehicle automatic driving control system
JP2019158646A 2018-03-14 2019-09-19 本田技研工業株式会社 Vehicle control device, vehicle control method, and program
CN108466619B * 2018-03-26 2020-03-27 北京小马慧行科技有限公司 Method and device for automatically setting the operating mode of an automatic driving system
CN110555346A * 2018-06-01 2019-12-10 杭州海康威视数字技术股份有限公司 Driver emotion detection method and apparatus, electronic device, and storage medium
CN110654394A * 2018-06-29 2020-01-07 比亚迪股份有限公司 Driving control system and method, and vehicle
CN108891422B * 2018-07-09 2020-04-24 深圳市易成自动驾驶技术有限公司 Control method and apparatus for an intelligent vehicle, and computer-readable storage medium
CN110789469A * 2018-08-02 2020-02-14 罗伯特·博世有限公司 Method for driving a vehicle, corresponding controller, and corresponding vehicle
CN109131167A * 2018-08-03 2019-01-04 百度在线网络技术(北京)有限公司 Method and apparatus for controlling a vehicle
CN110871810A * 2018-08-21 2020-03-10 上海博泰悦臻网络技术服务有限公司 Vehicle, in-vehicle device, and driving-mode-based driving information prompt method therefor
CN109177971B * 2018-08-28 2020-12-29 武汉联城一家科技发展有限公司 Automobile and server, and safety control method, control device, and storage medium therefor
CN110370267B * 2018-09-10 2021-08-20 北京京东尚科信息技术有限公司 Method and apparatus for generating a model
US10729378B2 (en) * 2018-11-30 2020-08-04 Toyota Motor North America, Inc. Systems and methods of detecting problematic health situations
CN109858359A * 2018-12-28 2019-06-07 青岛科技大学 Emotion-aware method for recognizing the driving intention of automobile drivers
JP7216893B2 * 2019-03-06 2023-02-02 トヨタ自動車株式会社 Mobile body and mobile system
CN109895777A * 2019-03-11 2019-06-18 汉腾汽车有限公司 Shared autonomous vehicle system
CN110103989A * 2019-05-17 2019-08-09 爱驰汽车有限公司 Active-interaction on-board system for automatic driving, method, device, and storage medium
CN110263664A * 2019-05-29 2019-09-20 深圳市元征科技股份有限公司 Multi-occupant lane violation recognition method and device
KR20210018627A * 2019-08-07 2021-02-18 현대자동차주식회사 Apparatus for controlling the behavior of an autonomous vehicle and method therefor
CN110576864A * 2019-08-15 2019-12-17 中国第一汽车股份有限公司 Driving mode control method and apparatus, vehicle, and storage medium
CN110509983B * 2019-09-24 2021-07-16 吉林大学 Steer-by-wire road-feel feedback device adaptable to different driving needs
CN111204348A * 2020-01-21 2020-05-29 腾讯云计算(北京)有限责任公司 Method and apparatus for adjusting vehicle driving parameters, vehicle, and storage medium
WO2021226767A1 * 2020-05-09 2021-11-18 华为技术有限公司 Method and apparatus for adaptively optimizing an automatic driving system
CN111599202A * 2020-05-27 2020-08-28 四川邮电职业技术学院 On-board communication terminal and on-board communication system
CN113799717A * 2020-06-12 2021-12-17 广州汽车集团股份有限公司 Fatigue driving mitigation method and system, and computer-readable storage medium
CN111885547B * 2020-07-10 2024-06-04 吉利汽车研究院(宁波)有限公司 In-vehicle human-machine interaction system
CN112158151A * 2020-10-08 2021-01-01 南昌智能新能源汽车研究院 Gesture control system and method for autonomous vehicles based on 5G networks
CN114633757A * 2020-11-30 2022-06-17 荷兰移动驱动器公司 Driving assistance method and on-board device
CN113581215B * 2021-09-01 2022-08-05 国汽智控(北京)科技有限公司 Vehicle control method and apparatus, and vehicle
CN114194122B * 2021-11-24 2024-03-05 重庆长安汽车股份有限公司 Safety prompt system and automobile
CN114973727B * 2022-08-02 2022-09-30 成都工业职业技术学院 Intelligent driving method based on passenger characteristics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102036855A * 2008-05-19 2011-04-27 通用汽车环球科技运作公司 Driver identification system based on vehicle settings
CN104002807A * 2014-05-28 2014-08-27 长城汽车股份有限公司 Automobile safe driving control method and system
CN104260725A * 2014-09-23 2015-01-07 北京理工大学 Intelligent driving system incorporating a driver model
CN104590274A * 2014-11-26 2015-05-06 浙江吉利汽车研究院有限公司 Driving behavior adaptation system and driving behavior adaptation method
CN104648383A * 2013-11-22 2015-05-27 福特全球技术公司 Improved autonomous vehicle settings
CN104765598A * 2014-01-06 2015-07-08 哈曼国际工业有限公司 Automatic driver identification

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2902864B1 (en) * 2014-01-30 2017-05-31 Volvo Car Corporation Control arrangement for autonomously driven vehicle
US9097549B1 (en) * 2014-03-17 2015-08-04 Ford Global Technologies, Llc Learning automated vehicle
US20150302718A1 (en) * 2014-04-22 2015-10-22 GM Global Technology Operations LLC Systems and methods for interpreting driver physiological data based on vehicle events


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11643086B2 (en) 2017-12-18 2023-05-09 Plusai, Inc. Method and system for human-like vehicle control prediction in autonomous driving vehicles
US11650586B2 (en) 2017-12-18 2023-05-16 Plusai, Inc. Method and system for adaptive motion planning based on passenger reaction to vehicle motion in autonomous driving vehicles
CN110570127A * 2019-09-12 2019-12-13 启迪数华科技有限公司 Intelligent bus system, and vehicle operation scheduling method and device
CN110570127B * 2019-09-12 2022-10-04 启迪数华科技有限公司 Intelligent bus system, and vehicle operation scheduling method and device
CN113255347A * 2020-02-10 2021-08-13 阿里巴巴集团控股有限公司 Method and device for data fusion, and recognition method for unmanned driving devices
CN112418162A * 2020-12-07 2021-02-26 安徽江淮汽车集团股份有限公司 Vehicle control method, device, storage medium, and apparatus
CN112418162B * 2020-12-07 2024-01-12 安徽江淮汽车集团股份有限公司 Vehicle control method, device, storage medium, and apparatus
CN112947759A * 2021-03-08 2021-06-11 上汽大众汽车有限公司 In-vehicle emotional interaction platform and interaction method
CN112906304A * 2021-03-10 2021-06-04 北京航空航天大学 Brake control method and device
CN112906304B * 2021-03-10 2023-04-07 北京航空航天大学 Brake control method and device
CN114407907A * 2022-01-18 2022-04-29 上汽通用五菱汽车股份有限公司 Adaptive parameter adjustment method for intelligent driving systems, device, and storage medium
CN117334066A * 2023-09-20 2024-01-02 广州亿胜鑫网络科技有限公司 Risk analysis method, system, terminal, and storage medium based on vehicle data

Also Published As

Publication number Publication date
CN107223101A (zh) 2017-09-29

Similar Documents

Publication Publication Date Title
WO2017219319A1 (zh) Vehicle automatic driving method and vehicle automatic driving system
US11919531B2 (en) Method for customizing motion characteristics of an autonomous vehicle for a user
US11345234B2 (en) Method and apparatus for detecting status of vehicle occupant
CN108688677B (zh) Vehicle driving support system and vehicle driving support method
CN108688676B (zh) Vehicle driving support system and vehicle driving support method
WO2019161766A1 (en) Method for distress and road rage detection
JP7324716B2 (ja) Information processing device, mobile device, method, and program
CN107554528B (zh) Method and apparatus for detecting the fatigue level of drivers and passengers, storage medium, and terminal
JP5782726B2 (ja) Arousal-reduction detection device
JPWO2019202881A1 (ja) Information processing device, mobile device, information processing system, method, and program
US20080238694A1 (en) Drowsiness alarm apparatus and program
CN108688675B (zh) Vehicle driving support system
CN113491519A (zh) Digital assistant based on emotional-cognitive load
JP2007122362A (ja) State estimation method using a neural network and state estimation device using a neural network
CN113905938B (zh) System and method for improving interaction between multiple autonomous vehicles and the driving environment in which they operate
CN110472512A (zh) Deep-learning-based facial state recognition method and device
KR102125756B1 (ko) Intelligent vehicle convenience control apparatus and method
CN108875617A (zh) Driving assistance method and apparatus, and vehicle
US20220330848A1 (en) Method, Computer Program, and Device for Determining Vehicle Occupant Respiration
KR102679466B1 (ko) Vehicle and control method therefor
JP2019101472A (ja) Emotion estimation device
JP7204283B2 (ja) Atmosphere estimation device and content presentation method
WO2020039994A1 (ja) Car sharing system, driving control adjustment device, and vehicle preference adaptation method
JP6689470B1 (ja) Information processing device, program, and information processing method
JP2004280673A (ja) Information providing device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16905861

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16905861

Country of ref document: EP

Kind code of ref document: A1