WO2022179396A1 - Vehicle gear control method and apparatus, computer device, and storage medium - Google Patents


Info

Publication number
WO2022179396A1
Authority
WO
WIPO (PCT)
Prior art keywords
label
driving
vehicle
road
target
Prior art date
Application number
PCT/CN2022/075722
Other languages
English (en)
French (fr)
Inventor
孙中阳
安昊
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to EP22758761.5 (published as EP4266210A1)
Publication of WO2022179396A1
Priority to US17/992,513 (published as US20230089742A1)

Classifications

    • F16H61/0213: Electric gearshift control characterised by the method for generating shift signals
    • F16H59/14: Inputs being a function of torque or torque demand
    • F16H59/18: Inputs dependent on the position of the accelerator pedal
    • F16H59/24: Inputs dependent on the throttle opening
    • F16H59/44: Inputs being a function of the speed of the machine, e.g. the vehicle
    • F16H59/48: Inputs being a function of acceleration
    • F16H59/60: Inputs being a function of ambient conditions
    • F16H59/66: Road conditions, e.g. slope, slippery
    • F16H59/68: Inputs being a function of gearing status
    • G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06V10/40: Extraction of image or video features
    • G06V10/70: Image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Recognition using classification, e.g. of video objects
    • G06V20/56: Context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle
    • G06V20/582: Recognition of traffic signs
    • G06V20/584: Recognition of vehicle lights or traffic lights
    • G06V20/588: Recognition of the road, e.g. of lane markings
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V20/70: Labelling scene content, e.g. deriving syntactic or semantic representations
    • F16H2059/605: Traffic stagnation information, e.g. traffic jams
    • F16H2059/663: Road slope
    • F16H2061/0087: Adaptive control, e.g. control parameters adapted by learning
    • F16H2061/0218: Calculation or estimation of the available ratio range
    • F16H2061/022: Calculation or estimation of the optimal gear ratio
    • F16H2061/0234: Adapting the ratios to special vehicle conditions

Definitions

  • the present application relates to the technical field of vehicle control, and in particular, to a vehicle gear control method, device, computer equipment and storage medium.
  • the transmission is one of the core components of the vehicle.
  • the transmission can provide the necessary traction by switching gears according to the driving needs of the vehicle to maintain the normal driving of the vehicle.
  • slow shifting and shift delays greatly affect the drivability and dynamic performance of the vehicle; automatic transmissions emerged to improve both.
  • an automatic transmission can shift the vehicle automatically, reduce driver fatigue, and improve driving safety.
  • an automatic transmission may determine a corresponding shift mode based on the vehicle speed, and perform automatic shifting according to the corresponding shift mode.
  • however, the prior art does not take into account the influence of the driving environment on shifting during driving, so the resulting shift strategy is inaccurate.
  • a vehicle gear control method, executed by a computer device, the method comprising: acquiring a driving scene image of the vehicle during driving;
  • performing image recognition on the acquired driving scene image to obtain a driving scene label; the driving scene label includes at least one of a road attribute label, a traffic attribute label and an environment attribute label;
  • acquiring driving state data and driving behavior data corresponding to the vehicle; and determining, based on the driving state data, the driving behavior data and the driving scene label, a target shift pattern matching the vehicle; the target shift pattern is used to control the vehicle, during driving, to drive in the target gear at the target shift time.
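As a rough illustration of the claimed steps, the mapping from driving state data, driving behavior data and driving scene labels to a target shift pattern can be sketched as a rule-based function. Every name, threshold and label string below is hypothetical, not taken from the embodiments:

```python
from dataclasses import dataclass

@dataclass
class ShiftPattern:
    target_gear: int          # gear to engage
    shift_delay_s: float      # hypothetical stand-in for the target shift time

def determine_shift_pattern(speed_kmh, throttle, scene_labels):
    # Baseline gear from the driving state data (vehicle speed).
    gear = min(6, max(1, int(speed_kmh // 20) + 1))
    # Driving behavior data: a wide-open throttle requests extra torque,
    # so hold a lower gear.
    if throttle > 0.8 and gear > 1:
        gear -= 1
    # Driving scene labels adjust the result, e.g. downshift on a big uphill.
    if "big uphill" in scene_labels and gear > 2:
        gear -= 1
    # Shift sooner in congestion so engine speed stays low.
    delay = 0.2 if "serious congestion" in scene_labels else 0.5
    return ShiftPattern(target_gear=gear, shift_delay_s=delay)
```

A real controller would replace these hand-written rules with the calibrated or learned mapping the patent describes; the sketch only shows how the three kinds of input jointly select a gear and a shift time.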
  • a vehicle gear control device comprising:
  • the image acquisition module is configured to acquire a driving scene image of the vehicle during driving;
  • the label recognition module is configured to perform image recognition on the acquired driving scene image to obtain a driving scene label;
  • the driving scene label includes at least one of a road attribute label, a traffic attribute label and an environment attribute label;
  • a gear shift mode determination module, configured to acquire driving state data and driving behavior data corresponding to the vehicle, and to determine, based on the driving state data, the driving behavior data and the driving scene label, a target shift pattern matching the vehicle;
  • the target shift pattern is used to control the vehicle, during driving, to drive in the target gear at the target shift time.
  • a computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the vehicle gear control method described in the above aspects.
  • one or more non-volatile readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the vehicle gear control method described in the above aspects.
  • a computer program product or computer program, comprising computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the vehicle gear control method described in the above aspects.
  • FIG. 1 is an application environment diagram of a vehicle gear control method in one embodiment;
  • FIG. 2 is a schematic flowchart of a vehicle gear control method in one embodiment
  • FIG. 3 is a schematic diagram of training a scene classification model in one embodiment;
  • FIG. 4 is a schematic diagram of a driving scene label in one embodiment
  • FIG. 5 is a schematic diagram of determining a target shifting manner based on a target driving mode in one embodiment
  • FIG. 6 is a schematic flowchart of a vehicle gear control method in a specific embodiment
  • FIG. 7 is a schematic flowchart of a vehicle gear control method in another embodiment
  • FIG. 8 is a structural block diagram of a vehicle gear control device in one embodiment
  • FIG. 9 is a structural block diagram of a vehicle gear control device in one embodiment
  • FIG. 10 is a diagram of the internal structure of a computer device in one embodiment.
  • FIG. 1 is an application environment diagram of a vehicle gear control method in one embodiment.
  • the vehicle gear control method is applied to a vehicle gear control system 100 .
  • the vehicle gear control system 100 includes a vehicle 102 and a server 104, wherein the vehicle 102 communicates with the server 104 through a network.
  • the vehicle 102 alone can execute the vehicle gear control method provided in the embodiments of the present application.
  • the vehicle 102 and the server 104 may also be used in cooperation to execute the vehicle gear control method provided in the embodiments of the present application.
  • the vehicle 102 can collect the driving scene image through the on-board camera and send it to the server 104, and the server 104 performs image recognition on the driving scene image to obtain a driving scene label.
  • the server 104 generates a corresponding target shift pattern based on the driving scene label, the driving state of the vehicle 102 and the driving behavior data of the vehicle 102, and sends the target shift pattern to the vehicle 102, so that the vehicle 102 drives in the target gear during driving.
  • the server 104 may be implemented by an independent server or by a server cluster composed of multiple servers.
  • the vehicle 102 can collect the driving scene image through the vehicle-mounted camera, and perform image recognition on the driving scene image to obtain the driving scene label.
  • the vehicle 102 obtains its own driving state and driving behavior data, generates a target shift pattern including the target shift time and the target gear according to the driving scene label, the driving state and the driving behavior data, and, based on the target shift pattern, shifts at the target shift time and drives in the target gear.
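Executing a shift pattern on the vehicle side, i.e. shifting once the target shift time arrives, could be organized around a small scheduler. This is an illustrative sketch, not the patented implementation; the time values are hypothetical seconds:

```python
import heapq

class ShiftScheduler:
    # Queues (time, gear) pairs and releases each gear once its target
    # shift time has been reached.
    def __init__(self):
        self._queue = []

    def schedule(self, now_s, delay_s, gear):
        # Record that `gear` should be engaged `delay_s` seconds from now.
        heapq.heappush(self._queue, (now_s + delay_s, gear))

    def due(self, now_s):
        # Pop every scheduled shift whose target time has passed.
        executed = []
        while self._queue and self._queue[0][0] <= now_s:
            executed.append(heapq.heappop(self._queue)[1])
        return executed
```

A control loop would call `due()` each cycle and actuate the transmission for any gear it returns.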
  • the method can be applied to vehicles, servers and electronic devices.
  • the electronic device includes but is not limited to a mobile phone, a computer, an intelligent voice interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, and the like.
  • a vehicle gear control method is provided. The method can be executed by a computer device, which may specifically be the above electronic device, vehicle or server. For ease of description, the following takes the method applied to the vehicle in FIG. 1 as an example; the method includes the following steps:
  • Step S202: Acquire a driving scene image of the vehicle during driving.
  • the driving scene image refers to an image obtained by collecting scene information of the driving area where the running vehicle is currently located.
  • a running vehicle refers to a vehicle that is in motion.
  • images of surrounding driving scenes can be collected in real time through the image collection device.
  • the image collection device may collect images of the driving scene according to a preset collection frequency, and in each collection process, at least one image of the driving scene is collected for the current driving area.
  • the scene of the driving area where the vehicle is located may be photographed by the vehicle-mounted camera to obtain the driving scene image; the scene of the driving area may also be photographed by the camera external to the vehicle to obtain the driving scene image;
  • the driving scene image can also be obtained by photographing the surroundings of the vehicle with a monitoring camera installed in the driving area; this is not limited in this embodiment.
  • acquiring a driving scene image of the vehicle during driving includes: when the vehicle is driving, collecting scene information around the vehicle in real time through an on-board camera deployed on the vehicle to obtain at least one driving scene image.
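Per-cycle collection at the preset collection frequency mentioned above can be mimicked with a polling loop. `read_frame` and `clock` are injected stand-ins for the on-board camera and the vehicle clock, so this sketch stays independent of any real camera API:

```python
def collect_driving_scene_images(read_frame, frequency_hz, duration_s, clock):
    # Collect at least one driving scene image per collection cycle, where
    # the cycle length is fixed by the preset collection frequency.
    period_s = 1.0 / frequency_hz
    images, next_capture_s = [], 0.0
    t = 0.0
    while t < duration_s:
        t = clock()
        # Small tolerance guards against floating-point drift in timestamps.
        if t + 1e-9 >= next_capture_s:
            images.append(read_frame())
            next_capture_s = t + period_s
    return images
```

Driving the loop with a fake clock makes the behavior deterministic, which is convenient for testing the collection cadence without hardware.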
  • the image acquisition device may specifically be a vehicle-mounted camera.
  • the driving scene image collected by the running vehicle can be collected by an on-board camera set on the vehicle.
  • the image collection direction of the vehicle-mounted camera that collects the road surface environment is different from that of the vehicle-mounted camera that collects the weather environment, and the image collection direction of the vehicle-mounted camera that collects the weather environment is different from that of the vehicle-mounted camera that collects traffic signs.
  • the image collection of the driving scene may also be performed by a rotating camera device with a controllable collection direction.
  • the target shifting mode of the vehicle can be determined in time through the driving scene image collected in real time, thereby improving the timeliness of determining the target shifting mode.
  • a first vehicle-mounted camera for capturing road images, a second vehicle-mounted camera for capturing images of traffic participants, and a third vehicle-mounted camera for capturing weather images may be mounted on the vehicle.
  • traffic participants refer to pedestrians, motor vehicles and non-motor vehicles in the driving area.
  • Step S204: Perform image recognition on the acquired driving scene image to obtain a driving scene label; the driving scene label includes at least one of a road attribute label, a traffic attribute label and an environment attribute label.
  • the driving scene label refers to information indicating the scene condition of the driving scene in the driving area.
  • Driving scene labels include road attribute labels, traffic attribute labels and environment attribute labels.
  • the road attribute label refers to the information indicating the road condition of the running vehicle in the current driving area.
  • the road attribute label can be "big uphill”, “small uphill”, “straight road”, “flat road” and so on.
  • the traffic attribute label refers to information indicating the traffic condition of the running vehicle in the current driving area.
  • the traffic attribute label may be "serious congestion", "high risk” and so on.
  • the environmental attribute label refers to information indicating the environmental conditions of the current driving area, for example, the environmental attribute label may be "low visibility of the road", “strong light” and so on.
  • the vehicle can perform image recognition on the driving scene image to identify the road conditions, traffic conditions and environmental conditions in it, and output the corresponding road attribute label, traffic attribute label and environment attribute label based on those conditions.
  • the vehicle can recognize the road in the driving scene image with a road recognition algorithm, the traffic condition with a traffic recognition algorithm, and the environmental condition with an environment recognition algorithm.
  • the road recognition algorithm, traffic recognition algorithm and environment recognition algorithm can be customized as needed, for example an OpenCV-based road detection algorithm or a MATLAB-based traffic recognition algorithm.
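Since the three recognition algorithms are meant to be swappable, one way to keep them pluggable is a simple registry. The recognizers below are trivial placeholders; the label strings and the `cars` field are invented for illustration only:

```python
RECOGNIZERS = {}

def register(kind):
    # Decorator that installs a recognition algorithm for one scene aspect,
    # so the road / traffic / environment recognizers can be customized.
    def wrap(fn):
        RECOGNIZERS[kind] = fn
        return fn
    return wrap

@register("road")
def recognize_road(image):
    return "straight road"  # placeholder result

@register("traffic")
def recognize_traffic(image):
    # Toy rule: many detected vehicles means congestion.
    return "serious congestion" if image.get("cars", 0) > 20 else "smooth traffic"

def recognize_scene(image):
    # Run every registered recognizer and collect the driving scene labels.
    return {kind: fn(image) for kind, fn in RECOGNIZERS.items()}
```

Replacing an algorithm then amounts to re-registering under the same `kind`, without touching the rest of the pipeline.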
  • performing image recognition on the acquired driving scene image to obtain the driving scene label includes: acquiring a scene classification model; extracting image features in the driving scene image through the scene classification model, the image features including road features, traffic features and environmental features; and, through the scene classification model, determining the road attribute label, traffic attribute label and environmental attribute label corresponding to the driving scene image based on the road features, traffic features and environmental features.
  • the scene classification model refers to a machine learning model used to output driving scene labels.
  • Machine learning models can include linear models and nonlinear models.
  • machine learning models may include regression models, support vector machines, decision tree-based models, Bayesian models, and/or neural networks (eg, deep neural networks).
  • neural networks may include feedforward neural networks, recurrent neural networks (eg, long short-term memory recurrent neural networks), convolutional neural networks, or other forms of neural networks. It should be noted that the scene classification model is not necessarily limited to a neural network, and may also include other forms of machine learning models.
  • Image features are data that can reflect the characteristics of the scene around the vehicle.
  • Image features include road features, traffic features, and environmental features.
  • the road features are data reflecting characteristics of the road in the driving scene; they can reflect the color value distribution, brightness value distribution and depth distribution of the pixel points of the road surface and road facilities.
  • the traffic feature is data used to reflect the characteristics of traffic participants in the driving scene, and the traffic feature can reflect one or more feature information such as the distance between pedestrians, motor vehicles and non-motor vehicles.
  • the environmental feature is data for reflecting the environmental feature in the driving scene. Environmental features can reflect the brightness distribution and depth distribution of pixels such as the sky and trees.
  • when the driving scene image is obtained, the vehicle can input it into the scene classification model, encode the driving scene image through the scene classification model to obtain an encoding result, and extract the road features, traffic features and environmental features of the driving scene image from the encoding result.
  • further, the scene classification model decodes the road features, traffic features and environmental features to obtain the road attribute label, traffic attribute label and environmental attribute label corresponding to the driving scene image.
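The encode/decode flow above can be sketched in a few lines. This is a hypothetical stand-in, not the patent's model: the "encoder" pools each image channel into one feature, and three "decoder" heads map the features to road, traffic and environment labels; all label sets and the pooling rule are illustrative assumptions.

```python
# Hypothetical sketch of the scene classification flow: encode the image into
# a shared feature vector, then decode each feature into an attribute label.
# A real system would use a trained CNN; names and rules here are illustrative.

ROAD_LABELS = ["flat road", "small uphill", "big uphill"]
TRAFFIC_LABELS = ["low congestion", "medium congestion", "high congestion"]
ENV_LABELS = ["high visibility", "medium visibility", "low visibility"]

def encode(image):
    """Stand-in encoder: mean intensity per channel as the feature vector."""
    return [sum(channel) / len(channel) for channel in image]

def decode(features):
    """Stand-in decoder heads: map each feature in [0, 1] to a label index."""
    def to_idx(f, n):
        return min(int(f * n), n - 1)
    return {
        "road": ROAD_LABELS[to_idx(features[0], 3)],
        "traffic": TRAFFIC_LABELS[to_idx(features[1], 3)],
        "environment": ENV_LABELS[to_idx(features[2], 3)],
    }

# dummy 3-channel "driving scene image": each channel flattened to pixel values
image = [[0.5] * 16, [0.1] * 16, [0.9] * 16]
labels = decode(encode(image))
print(labels)
```

In a real deployment the decode step would be a learned classification head per attribute rather than a fixed threshold.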
  • a vehicle-mounted camera may be set up on the vehicle in advance, and a plurality of driving scene images may be collected in different driving scenes through the vehicle-mounted camera.
  • the on-board camera can be used to collect driving scene images of uphill road sections in rainy days; to collect driving scene images of curved road sections in sunny days; to collect driving scene images of congested road sections in snowy days.
  • the model training personnel can perform label labeling processing on the collected driving scene images, obtain corresponding training samples through label labeling processing, and perform model training on the scene classification model through the training samples until the training stop condition is reached.
  • one embodiment provides a schematic diagram of training the scene classification model in this way.
  • the trained scene classification model can comprehensively extract image features, so that multi-dimensional driving scene labels can be output based on the extracted image features; the target shifting mode of the vehicle can then be accurately obtained based on those multi-dimensional driving scene labels.
  • Step S206: Acquire the driving state data and driving behavior data corresponding to the vehicle.
  • the driving state data refers to the vehicle condition data during the running process of the vehicle.
  • the driving state data includes one or a combination of vehicle type, vehicle speed and vehicle acceleration.
  • vehicle type is a parameter used to describe the size of the vehicle, and may specifically be the model of the vehicle.
  • the vehicle speed refers to the speed data of the vehicle when it is running.
  • acceleration of the vehicle refers to the speed change rate of the vehicle when it is running.
  • the driving behavior data refers to the control data with which the driving object controls the current vehicle.
  • the driving behavior data includes one or a combination of parameters of the degree of brake pedal depression, the degree of accelerator pedal depression, and the degree of throttle opening.
  • the driving object refers to the driver who controls the driving of the vehicle. It can be understood that the driver may be a real person or a robot that is automatically controlled by the system.
  • the opening degree of the throttle valve refers to the opening angle of the throttle valve of the engine. The opening degree of the throttle valve of the vehicle can be manipulated by the driving object through the accelerator pedal to change the intake air volume of the engine, thereby controlling the operation of the engine. Different throttle openings indicate different operating conditions of the engine.
  • the vehicle can determine the vehicle speed and vehicle acceleration through its internal vehicle speed sensor and acceleration sensor, and can determine the brake pedal depression degree, accelerator pedal depression degree and throttle opening degree through corresponding sensors.
  • the vehicle may determine the corresponding vehicle speed directly by reading instrument panel data in the vehicle instrument panel.
  • the vehicle may acquire the driving state data and the driving behavior data in real time, or may acquire the driving state data and the driving behavior data at preset time intervals; this embodiment is not limited herein.
  • Step S208: Determine a target shifting mode that matches the vehicle based on the driving state data, the driving behavior data and the driving scene label; the target shifting mode is used to control the vehicle to drive in the target gear at the target shift time during the driving process.
  • the target shift mode includes a target shift time and a target gear.
  • the target shift time refers to the gear change time for changing the vehicle from the original gear to the target gear.
  • the target gear refers to a gear that matches the current vehicle's driving state data, the current driving behavior data of the driving object, and the driving scene label of the current driving environment.
  • the vehicle generates a corresponding target shift mode according to the driving state data, driving behavior data and driving scene labels, adjusts the gear to the target gear at the target shift time, and drives according to the target gear.
  • when the vehicle determines, based on the driving scene label, that the driving area ahead is dangerous, for example, that the road ahead is an accident-prone place or a sharp curve, the vehicle sets the target shift time to "before the driver depresses the brake pedal", so that it can prepare to downshift and slow down before the driver depresses the brake pedal, thereby reducing clutch switching and synchronization time.
  • the vehicle may determine the target shift time based on the reaction time of the driving object.
  • the reaction time of the driving object refers to the time the driving object takes to respond to a sudden situation.
  • the reaction time of the driving object is related to the analysis speed of the processor of the vehicle.
  • the reaction time of the driving object can be obtained by identifying the driver's identity, acquiring the driver's historical driving data, and analyzing the reaction data in the historical driving data.
  • the vehicle determines that the driving area ahead is dangerous based on the driving scene label
  • the vehicle obtains the determined time point of the driving scene label, adds the reaction time to the determined time point to obtain a shift time period, and sets the target shift time to any time point within the shift time period, so that it can prepare to shift to a low gear in advance before the driving object steps on the brake pedal.
  • the determined time point is the time point at which the driving scene label is generated.
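The shift-window computation described above can be sketched briefly. This is a minimal illustration under assumed units (seconds); the label's determined time point plus the driver's reaction time bounds the period in which the target shift time may be placed.

```python
# Minimal sketch of the shift-window computation: the determined time point of
# the driving scene label plus the reaction time yields a shift time period;
# the target shift time may be any point inside it. Values are illustrative.

def shift_window(label_time_s, reaction_time_s):
    """Return the (start, end) of the shift time period in seconds."""
    return (label_time_s, label_time_s + reaction_time_s)

def pick_shift_time(window):
    """Choose any point in the window; here, its midpoint."""
    start, end = window
    return (start + end) / 2.0

window = shift_window(label_time_s=10.0, reaction_time_s=0.8)
target_shift_time = pick_shift_time(window)
print(window, target_shift_time)
```

Any point inside the window precedes the driver's expected brake input, which is what lets the transmission pre-select the lower gear.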
  • when the vehicle determines, based on the driving scene label, that the driving area ahead is a slow-moving area, for example, that the road ahead is congested, a big uphill or a sharp turn, frequent shifting should be avoided, as should upshifting of the transmission caused by the driving object stepping on the accelerator pedal. If the driving object steps on the brake pedal or the accelerator pedal at the current moment, the vehicle judges, based on the principle of "reducing the number of shifts", whether the driving object has stepped on the brake pedal or the accelerator pedal within a preset time period before the current moment.
  • if the driving object has not stepped on the brake pedal or the accelerator pedal within the preset time period, the operation at the current moment is considered not to be a misoperation; the vehicle then determines the target gear and target shift time, and drives in the target gear at the target shift time.
  • if the driving object has stepped on the brake pedal or the accelerator pedal within the preset time period, the driving object is considered to have stepped on the pedals multiple times in a short period, and the operation of stepping on the brake pedal or the accelerator pedal at the current moment is considered a misoperation.
  • in that case, the vehicle ignores the pedal operation of the driving object and keeps the current gear unchanged. For example, when the driving area is a big uphill, the vehicle should ideally keep 1st or 2nd gear and move forward at a constant speed; therefore, to avoid the transmission upshifting because the driver steps on the accelerator pedal, the vehicle can, based on the principle of "reducing the number of shifts", treat repeated depression of the accelerator pedal or brake pedal within a short period as a misoperation, and avoid frequent shifting by not responding to the misoperation, thereby improving the fuel economy of the vehicle.
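The misoperation judgment above amounts to a debounce on pedal presses. Here is a hedged sketch: timestamps and the preset period are illustrative assumptions, not values from the patent.

```python
# Sketch of the misoperation check: if another brake/accelerator press fell
# within a preset period before the current press, the current press is
# treated as a misoperation and ignored, keeping the current gear.

PRESET_PERIOD_S = 2.0  # assumed debounce window in seconds

def is_misoperation(press_times, current_time):
    """A press is a misoperation if an earlier press is within the period."""
    return any(0 < current_time - t <= PRESET_PERIOD_S for t in press_times)

history = [100.0, 100.9]                      # earlier pedal presses (seconds)
print(is_misoperation(history, 101.5))        # pressed 0.6 s ago -> ignore
print(is_misoperation([95.0], 101.5))         # last press 6.5 s ago -> shift
```

A non-misoperation press would proceed to the normal target-gear decision; a misoperation press leaves the gear unchanged.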
  • a gear decision system may be pre-installed in the vehicle.
  • the vehicle can input the driving state data, the driving behavior data and the driving scene label into the gear decision system, and determine the target shifting mode that matches the vehicle through the membership function of the gear decision system.
  • the input parameters of the membership function can be the driving state data, the driving behavior data and the driving scene label, and the output data can be the target shifting mode that matches the vehicle.
  • the vehicle may obtain the data of multiple driving tests of the expert driving object under different preset driving scenarios, and perform data analysis on the data of the multiple driving tests, thereby obtaining the membership function of the gear decision system.
  • the membership function of the gear decision system can be used to determine the vehicle's target gear shifting mode according to the vehicle's driving state data, driving behavior data, and driving scene labels.
  • the expert driving object refers to the driver who can drive the vehicle proficiently.
  • the expert driving object can drive the vehicle in a preset driving scene, the driving mode used in that scene is recorded, the preset driving scene and the corresponding driving mode are taken as driving test data, and the corresponding membership function is determined based on the test data.
  • for example, an expert driving object can drive the vehicle to climb a slope multiple times, and the gear and shift time used for each climb are determined and taken as the driving test data.
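A membership-function-based decision of the kind described above can be sketched with simple fuzzy-style memberships. The breakpoints and the rule (climbing scenes favour low gears at low speed) are assumptions for illustration, not the patent's calibrated values derived from expert driving data.

```python
# Illustrative fuzzy-style sketch of a gear decision membership function.
# Triangular memberships grade "low speed" / "high speed"; a scene-dependent
# rule picks the gear. All breakpoints and rules are assumed, not calibrated.

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def decide_gear(speed_kmh, scene_label):
    low = tri(speed_kmh, -1, 0, 40)      # membership of "low speed"
    high = tri(speed_kmh, 20, 60, 120)   # membership of "high speed"
    if scene_label == "big uphill":      # expert data: hold 1st/2nd gear
        return 1 if low >= high else 2
    return 2 if low >= high else 4       # crude default rule

print(decide_gear(10, "big uphill"))     # low-speed climb -> 1st gear
print(decide_gear(50, "flat road"))      # cruising -> 4th gear
```

In practice the memberships and rules would be fitted to the expert driving test data rather than hand-chosen.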
  • alternatively, the vehicle can determine the target shifting mode that matches the vehicle through a shift mode determination model, according to the driving state data, the driving behavior data and the driving scene label.
  • the shift mode determination model refers to a machine learning model trained based on driving state data, driving behavior data, driving scene labels and corresponding target shifting modes.
  • for example, when the test data are the gear and shift time used when climbing a hill, developers can obtain the climbing driving scene images collected by the vehicle, and label those images with the gear and shift time used when climbing to obtain training data. Further, the shift mode determination model can be trained on the training data until the training stop condition is reached.
  • the vehicle-mounted camera may collect driving scene images at a preset collection frequency. When the difference between the current driving scene label of the image collected at the current collection time point and the driving scene label of the image collected at the next collection time point is less than a difference threshold, the vehicle keeps the target shifting mode corresponding to the current driving scene label unchanged.
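The label-stability check above can be sketched as follows. The numeric encoding of "difference" (a count of changed attributes) and the threshold are illustrative assumptions.

```python
# Sketch of the label-stability check: when consecutive collected images
# yield driving scene labels whose difference is below a threshold, the
# current target shifting mode is kept. Encoding and threshold are assumed.

DIFF_THRESHOLD = 1  # assumed: no changed attribute -> keep current mode

def label_difference(prev_labels, next_labels):
    """Count how many attribute labels changed between collection points."""
    return sum(prev_labels[k] != next_labels[k] for k in prev_labels)

def next_shift_mode(current_mode, prev_labels, next_labels):
    if label_difference(prev_labels, next_labels) < DIFF_THRESHOLD:
        return current_mode                  # keep target shifting mode
    return ("recompute", next_labels)        # placeholder for re-decision

prev = {"road": "big uphill", "traffic": "low congestion"}
same = {"road": "big uphill", "traffic": "low congestion"}
print(next_shift_mode(("gear 2", "now"), prev, same))
```

This acts as hysteresis: the shifting mode is only recomputed when the scene genuinely changes, which complements the "reduce the number of shifts" principle.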
  • the vehicle may be adjusted from a source gear to a target gear via a dual-clutch automatic transmission.
  • the dual-clutch automatic transmission has two clutches: the first clutch controls the odd-numbered gears and the reverse gear, and the second clutch controls the even-numbered gears. When the shift lever is in an odd-numbered gear, the first clutch is connected to the first input shaft and the first input shaft works, while the second clutch disengages and the second input shaft does not work; similarly, when the shift lever is in an even-numbered gear, the second clutch is connected to the second input shaft, and the first clutch is disengaged from the first input shaft. In this way, two gears are always engaged throughout operation: one is working and the other is ready for the next shift at any time.
  • the working principle of the dual-clutch automatic transmission requires it to stay in the half-clutch state for a long time, which makes it overheat and can even cause it to stop working.
  • if the dual-clutch automatic transmission is always in a slipping state, the engine speed and torque output will be increased to obtain the power required for starting, which also leads to a rapid temperature rise of the dual-clutch automatic transmission and sudden acceleration of the vehicle.
  • if the dual-clutch automatic transmission is completely disengaged, power is interrupted, which gives the dual-clutch automatic transmission greater jitter, stronger shift shock and poor smoothness.
  • in this embodiment, more reliable environmental information can be provided for the vehicle based on the accurately identified driving scene, so that the vehicle can generate a more reasonable target shifting mode based on that information. This ensures that, when driving on congested roads, curves, steep slopes and slippery roads, the dual-clutch automatic transmission can reasonably choose whether to shift, how many gears to shift and when to shift, which alleviates the power interruption and strong shift shock caused by the design of the dual-clutch automatic transmission.
  • in the above vehicle gear control method, by acquiring a driving scene image during the driving process, image recognition can be performed on the driving scene image to obtain a corresponding driving scene label.
  • the target shifting mode that matches the vehicle can be determined based on the driving state data, driving behavior data, and driving scene labels.
  • the vehicle is then driven in the target gear at the target shift time. Because the driving state data, the driving behavior data and the driving scene label are combined to determine the target shifting mode, the accuracy of the determined target shifting mode is improved.
  • the scene classification model includes a first road model related to a road, a first traffic model related to traffic, and a first environment model related to the environment;
  • the first road model includes at least one of a road gradient model, a curve curvature model, a road adhesion model, a road smoothness model, a signal light model and a traffic sign model;
  • the first traffic model includes at least one of a danger level model and a congestion status model;
  • the first environment model includes at least one of a road visibility model, a weather condition model and a light intensity model;
  • the road attribute label includes at least one of the road gradient label output by the road gradient model, the curve curvature label output by the curve curvature model, the road adhesion label output by the road adhesion model, the road smoothness label output by the road smoothness model, the signal light label output by the signal light model and the traffic sign label output by the traffic sign model;
  • the traffic attribute label includes at least one of the danger level label output by the danger level model and the congestion condition label output by the congestion condition model;
  • the environmental attribute label includes at least one of the road visibility label output by the road visibility model, the weather condition label output by the weather condition model and the light intensity label output by the light intensity model.
  • the scene classification model may include a first road model related to a road, a first traffic model related to traffic, and a first environment model related to the environment.
  • through the first road model, the vehicle can output road attribute labels; through the first traffic model, it can output traffic attribute labels; and through the first environment model, it can output environmental attribute labels.
  • the first road model may further include a road gradient model, a curve curvature model, a road surface adhesion model, a road surface smoothness model, a signal light model and a traffic sign model.
  • the vehicle can output the road gradient label through the road gradient model; the curve curvature label through the curve curvature model; the road adhesion label through the road adhesion model; the road smoothness label through the road smoothness model; the signal light label through the signal light model; and the traffic sign label through the traffic sign model.
  • the road gradient label refers to information indicating the road gradient condition of the driving area, and the road gradient label can specifically be "big uphill", "small uphill", "flat road", "small downhill", "big downhill" and so on.
  • the curve curvature label refers to information indicating the curve curvature condition of the driving area, and the curve curvature label may specifically be "straight", "curve", "sharp curve" and so on.
  • the road adhesion label refers to information indicating the road adhesion condition of the driving area; for example, the road adhesion label may specifically be "low adhesion", "medium adhesion", "high adhesion" and so on.
  • the road smoothness label refers to information indicating the road surface smoothness condition of the driving area, and the road smoothness label may specifically be "smooth road surface", "bumpy road surface" and so on.
  • the signal light label refers to information indicating the traffic light of the driving area, and the signal light label can specifically be "red light", "yellow light", "green light" and so on.
  • the traffic sign label refers to information indicating an existing traffic sign in the driving area, and the traffic sign label may specifically be "school ahead", "zebra crossing ahead", "accident-prone area ahead" and so on.
  • the first traffic model includes a danger level model and a congestion condition model.
  • the vehicle can output the danger level label through the danger level model, and can output the congestion status label through the congestion status model.
  • the danger level label refers to information indicating the degree of traffic danger in the driving area, and the danger level label may specifically be “high danger”, “medium danger” or “low danger”.
  • the congestion status label refers to information indicating the degree of traffic congestion in the driving area, and the congestion status label may specifically be "high congestion”, “medium congestion” or "low congestion”.
  • the first environment model includes a road visibility model, a weather condition model and a light intensity model.
  • the vehicle may output road visibility labels through the road visibility model, weather condition labels may be output through the weather condition model, and light intensity labels may be output through the light intensity model.
  • the road visibility label refers to information indicating the visibility of the road in the driving area, and the road visibility label may specifically be “high visibility”, “medium visibility” and “low visibility”.
  • the weather condition label refers to a label indicating the current weather condition, and the weather condition label may specifically be “sunny", “rain”, “snow”, “fog”, and so on.
  • the light intensity label refers to information indicating the light intensity of the driving area, and the light intensity label may specifically be "strong light", "medium light", "weak light" and so on.
  • FIG. 4 shows a schematic diagram of driving scene tags in one embodiment.
  • the danger level model may combine the danger level of the pedestrian dimension, the danger level of the motor vehicle dimension, and the danger level of the non-motor vehicle dimension, and output a danger level label corresponding to the driving scene image.
  • specifically, the danger level model can identify pedestrians, motor vehicles and non-motor vehicles in the driving scene image to obtain recognition results, determine the danger degree of the pedestrians, motor vehicles and non-motor vehicles in the driving area according to the recognition results, and combine these danger degrees to obtain the danger level label corresponding to the driving scene image.
  • the congestion condition model may combine the congestion degree of pedestrian dimension, the congestion degree of motor vehicle dimension and the congestion degree of non-motor vehicle dimension, and output the congestion condition label corresponding to the driving scene image.
  • specifically, the congestion condition model can identify pedestrians, motor vehicles and non-motor vehicles in the driving scene image to obtain recognition results, determine the density of the pedestrians, motor vehicles and non-motor vehicles in the driving area according to the recognition results, and combine these densities to obtain the congestion status label corresponding to the driving scene image.
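The combination step above (per-dimension degrees merged into one label) can be sketched simply. The worst-dimension rule and the cut-offs are assumptions for illustration; the patent does not specify how the dimensions are combined.

```python
# Sketch of combining pedestrian, motor vehicle and non-motor vehicle
# congestion degrees into one congestion status label. The "worst dimension
# dominates" rule and cut-offs are illustrative assumptions.

def congestion_label(pedestrian, motor, non_motor):
    """Each input is a congestion degree in [0, 1]; output is a label."""
    combined = max(pedestrian, motor, non_motor)   # worst dimension dominates
    if combined >= 0.7:
        return "high congestion"
    if combined >= 0.3:
        return "medium congestion"
    return "low congestion"

print(congestion_label(0.2, 0.8, 0.1))   # dense motor traffic -> high
print(congestion_label(0.1, 0.2, 0.1))   # sparse everywhere -> low
```

The same pattern applies to the danger level label, with danger degrees per dimension in place of congestion degrees.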
  • each model in the scene classification model may be trained separately to obtain a trained model.
  • the output labels can be made more accurate.
  • the scene classification model includes a second road model, a second traffic model and a second environment model;
  • the road attribute label includes at least one of a road condition label and a road facility label; the road condition label includes at least one of a road gradient label, a curve curvature label, a road adhesion label and a road smoothness label;
  • the road facility label includes at least one of a signal light label and a traffic sign label;
  • the traffic attribute label includes at least one of a pedestrian label, a motor vehicle label and a non-motor vehicle label; the pedestrian label includes at least one of a pedestrian danger level label and a pedestrian congestion status label, and the motor vehicle label includes at least one of a motor vehicle danger level label and a motor vehicle congestion status label;
  • the non-motor vehicle label includes at least one of a non-motor vehicle danger level label and a non-motor vehicle congestion status label;
  • the environmental attribute label includes at least one of a weather label and a light label; the weather label includes at least one of a road visibility label and a weather condition label;
  • the light label includes at least a light intensity label. Determining, through the scene classification model, the road attribute label, the traffic attribute label and the environmental attribute label corresponding to the driving scene image based on the road features, traffic features and environmental features includes: through the second road model, based on the road features, outputting at least one of a road gradient label, a curve curvature label, a road adhesion label, a road smoothness label, a signal light label and a traffic sign label; through the second traffic model, based on the traffic features, outputting at least one of a pedestrian danger level label, a pedestrian congestion status label, a motor vehicle danger level label, a motor vehicle congestion status label, a non-motor vehicle danger level label and a non-motor vehicle congestion status label; and through the second environment model, based on the environmental features, outputting at least one of a road visibility label, a weather condition label and a light intensity label.
  • the scene classification model may be a multi-task model, that is, a machine learning model that can handle different tasks.
  • the multi-task model can improve the learning efficiency and quality of each task by learning the connections and differences between different tasks.
  • the scene classification model may include a second road model, a second traffic model, and a second environment model.
  • the vehicle can input the driving scene image into the second road model, extract the road features in the driving scene image through the second road model, identify the roads and road facilities in the driving area according to the road features, and output road condition labels and road facility labels according to the recognition result. The road condition labels include road gradient labels, curve curvature labels, road adhesion labels and road smoothness labels; the road facility labels include signal light labels and traffic sign labels.
  • the vehicle can input the driving scene image into the second traffic model, extract the traffic features through the second traffic model, identify the pedestrians, motor vehicles and non-motor vehicles in the driving area according to the traffic features, and output pedestrian labels, motor vehicle labels and non-motor vehicle labels based on the recognition results. The pedestrian label includes the pedestrian danger level label and the pedestrian congestion status label;
  • the motor vehicle label includes the motor vehicle danger level label and the motor vehicle congestion status label;
  • the non-motor vehicle label includes the non-motor vehicle danger level label and the non-motor vehicle congestion status label.
  • the vehicle can input the driving scene image into the second environment model, extract the environmental features through the second environment model, identify the weather and illumination in the driving area according to the environmental features, and output weather labels and light labels based on the identification results. The weather labels include road visibility labels and weather condition labels; the light labels include light intensity labels.
  • determining a target shifting mode that matches the vehicle based on the driving state data, the driving behavior data and the driving scene label includes: determining a corresponding candidate shifting mode based on the driving state data and the driving behavior data; and correcting the candidate shifting mode based on the driving scene label to obtain the target shifting mode that matches the vehicle.
  • the vehicle can call a preset shift regularity table and search it for the candidate shifting mode corresponding to the acquired driving state data and driving behavior data.
  • the shift regularity table refers to a table that stores the correspondence between driving state data, driving behavior data and shifting modes; the shift regularity table may specifically be a shift MAP.
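A shift MAP lookup of the kind described can be sketched as a table keyed by discretized state and behavior data. The band boundaries and gear values here are illustrative assumptions, not the patent's calibration.

```python
# Minimal sketch of looking up a candidate shifting mode in a shift
# regularity table (shift MAP), keyed by vehicle-speed and throttle-opening
# bands. All boundaries and gears are illustrative.

SHIFT_MAP = {
    # (speed band, throttle band) -> candidate gear
    ("low", "small"): 2,
    ("low", "large"): 1,
    ("high", "small"): 5,
    ("high", "large"): 4,
}

def candidate_gear(speed_kmh, throttle_pct):
    speed_band = "low" if speed_kmh < 40 else "high"
    throttle_band = "small" if throttle_pct < 50 else "large"
    return SHIFT_MAP[(speed_band, throttle_band)]

print(candidate_gear(30, 20))   # slow, light throttle -> 2
print(candidate_gear(80, 70))   # fast, heavy throttle -> 4
```

A production shift MAP would be a dense calibration over continuous speed/throttle axes rather than two coarse bands.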
  • the vehicle can correct the candidate shifting mode based on the driving scene label, adjust the candidate shifting mode to the target shifting mode, and trigger the vehicle to drive according to the target shifting mode.
  • the vehicle may modify the candidate shift patterns according to the gear dynamic modification logic in Table 1.
  • for example, the corresponding correction logic may be "preset a lower gear in advance to avoid frequent shifting".
  • accordingly, the vehicle lowers the candidate gear in the candidate shifting mode to the target gear, and adjusts the shift time in the candidate shifting mode to "before the driver depresses the brake pedal".
  • the candidate shifting mode is corrected by the correction logic, so that the vehicle can avoid frequent shifting when climbing a slope, which not only reduces the energy consumed by frequent shifting and the wear of the shift actuator components, but also avoids losing the economy and power of the vehicle through an excessive power cut-off time ratio.
  • the target shift time and the specific target gear to be adjusted to may be determined by the above-mentioned membership function, or may be determined by the above-mentioned shift mode determination model.
  • specifically, a gear range and a shift time range corresponding to the driving scene label can be determined through the above membership function, and the candidate shifting mode can be adjusted to fall within the gear range and the shift time range to obtain the target gear and the target shift time.
  • the candidate shifting mode can be corrected based on the minimum-correction principle. For example, when the gear range is between 1st gear and 2nd gear and the candidate gear in the candidate shifting mode is 3rd gear, the candidate gear is adjusted to 2nd gear instead of 1st gear, which can improve the safety of vehicle driving.
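The minimum-correction principle above is a clamp: move the candidate gear only as far as needed to fall inside the allowed range. A minimal sketch, with illustrative ranges:

```python
# Sketch of the minimum-correction principle: clamp the candidate gear into
# the gear range allowed by the driving scene label, so 3rd gear with an
# allowed range of 1-2 becomes 2nd, not 1st. Ranges are illustrative.

def correct_gear(candidate, allowed_range):
    low, high = allowed_range
    return min(max(candidate, low), high)   # smallest change into range

print(correct_gear(3, (1, 2)))   # -> 2 (not 1)
print(correct_gear(1, (2, 4)))   # -> 2
print(correct_gear(3, (1, 4)))   # already allowed -> 3
```

The same clamp can be applied to the shift time against the allowed shift time range.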
  • when the driving scene label is "low road adhesion", the vehicle upshifts to the target gear according to the corresponding correction logic "prepare a higher gear when shifting on a low-adhesion road to avoid excess driving force".
  • the correction logic corresponding to the road surface adhesion it can not only prevent the vehicle from driving on a low adhesion road surface, and the tires and the road surface do not have enough adhesion to cause rapid slippage, which may cause dangerous accidents.
  • Using the correction logic under the low-adhesion road surface can also reduce the switching of low gears, meet the vehicle's demand for the adhesion rate, and improve the vehicle's fuel economy.
• the vehicle uses "shift on curves to ensure driving safety, lower the gear, and avoid frequent shifting" as the correction logic to lower the candidate gear to the target gear. Based on the driving object's operation of the brake pedal or the accelerator pedal, the vehicle can identify erroneous operations of the driving object while driving on a curve, and avoid frequent shifting by not responding to those erroneous operations.
• the vehicle uses "lower the gear; if the speed basically matches, keep the current gear and avoid frequent shifting" as the correction logic to lower the candidate gear to the target gear and avoid frequent shifting.
• the probability of the dual-clutch automatic transmission staying in the half-clutch state for a long time can be reduced, thereby prolonging the life of the dual-clutch automatic transmission and reducing shift shock during driving.
  • the candidate shifting mode is modified through the driving scene tag, so that the revised target shifting mode can integrate the surrounding environment information, driving state information and driving behavior information, so as to obtain a more reasonable target shifting mode.
• determining the corresponding candidate shift pattern based on the driving state data and the driving behavior data includes: determining a target driving mode of the vehicle; acquiring a target shift schedule table corresponding to the target driving mode; and looking up the corresponding candidate shift pattern in the target shift schedule table according to the driving state data and the driving behavior data.
• the driving mode refers to the mode in which the vehicle is driven; the driving mode may specifically be a sport driving mode, a comfort driving mode, an energy-saving driving mode, an off-road driving mode, and the like.
  • the vehicle will adjust the response of steering, gearbox, engine, suspension, etc., as well as the intervention time and strength of the electronic stability program, according to predetermined parameters.
• the vehicle may determine the current target driving mode, obtain a target shift schedule table corresponding to the current target driving mode, and look up, in the target shift schedule table, the candidate shift pattern corresponding to both the driving state data and the driving behavior data.
  • the target driving mode refers to the driving mode the vehicle is in at the current moment.
• a shift schedule table corresponding to each driving mode may be pre-stored in the vehicle, and when the target driving mode of the vehicle is determined, the vehicle may find the target shift schedule table among the stored shift schedule tables according to the mode identifier of the target driving mode.
• the candidate shift pattern can thus be determined more accurately based on the target shift schedule table.
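The per-mode lookup described above can be sketched as a nested table keyed first by driving mode and then by discretized speed and throttle bands. The mode names, band boundaries, and gear values below are illustrative assumptions, not values from the embodiment:

```python
# Hypothetical per-mode shift schedule tables, keyed by
# (vehicle speed band, throttle opening band) -> candidate gear.
SHIFT_SCHEDULES = {
    "sport": {("low", "high"): 2, ("high", "high"): 4},
    "eco":   {("low", "high"): 3, ("high", "high"): 5},
}

def speed_band(speed_kmh):
    """Discretize vehicle speed; the 60 km/h boundary is an assumption."""
    return "low" if speed_kmh < 60 else "high"

def throttle_band(opening):
    """Discretize throttle opening in [0, 1]; the 0.5 boundary is assumed."""
    return "low" if opening < 0.5 else "high"

def candidate_gear(mode, speed_kmh, throttle_opening):
    table = SHIFT_SCHEDULES[mode]  # select the mode's own schedule table
    return table[(speed_band(speed_kmh), throttle_band(throttle_opening))]
```

The design point this illustrates is that the same (speed, throttle) input yields a different candidate gear per mode, so the mode identifier must be resolved before the table lookup.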
• the target shift pattern includes a target gear and a target shift time; correcting the candidate shift pattern based on the driving scene label to obtain a target shift pattern matching the vehicle includes: determining the gear range and shift time range corresponding to the driving scene label, the driving state data, and the driving behavior data; when the candidate gear in the candidate shift pattern exceeds the gear range, adjusting the candidate gear according to the gear range to obtain the target gear matching the vehicle; and when the candidate shift time in the candidate shift pattern exceeds the shift time range, adjusting the candidate shift time according to the shift time range to obtain the target shift time matching the vehicle.
• the vehicle can determine, through the membership functions of the gear decision system, the gear range and shift time range corresponding to the driving scene label, the driving state data, and the driving behavior data. Further, the vehicle judges whether the candidate gear in the candidate shift pattern exceeds the gear range; if it does, the candidate gear is adjusted to fall within the gear range to obtain the target gear. For example, when the gear range is from 1st gear to 2nd gear and the candidate gear is 3rd gear, the vehicle can correct the gear from 3rd gear to 2nd gear or 1st gear. Further, the vehicle judges whether the candidate shift time in the candidate shift pattern exceeds the shift time range; if it does, the candidate shift time is adjusted to fall within the shift time range to obtain the target shift time.
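The range-correction step above can be sketched as clamping each candidate value to the nearest boundary of its permitted range, which also realizes the minimum-correction principle (move to the nearest legal value, not the far end). The numeric ranges used below are illustrative assumptions:

```python
def clamp(value, low, high):
    """Move value to the nearest boundary if it falls outside [low, high]."""
    return min(max(value, low), high)

def correct_candidate(candidate_gear, candidate_time_s, gear_range, time_range):
    """Clamp both the candidate gear and the candidate shift time (seconds)
    into the ranges derived from the driving scene label."""
    target_gear = clamp(candidate_gear, *gear_range)
    target_time = clamp(candidate_time_s, *time_range)
    return target_gear, target_time
```

For instance, with a permitted gear range of (1, 2), a 3rd-gear candidate is corrected to 2nd gear rather than 1st, matching the smallest change that satisfies the constraint.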
• the vehicle may determine the current target driving mode and determine the corresponding target shift schedule table based on the target driving mode. Further, the vehicle can look up the candidate shift pattern corresponding to the vehicle speed and the throttle opening in the target shift schedule table, correct the candidate shift pattern through the driving scene label to obtain the target gear and the target shift time, and control the dual-clutch automatic transmission to adjust the gear from the original gear to the target gear at the target shift time.
  • FIG. 5 shows a schematic diagram of determining a target shifting manner based on a target driving mode in one embodiment.
  • the final determined target shifting manner can be more accurate and reasonable.
• the number of driving scene images is multiple, and the collection times of the multiple driving scene images are consecutive; the vehicle gear control method further includes: determining the label difference between the driving scene labels corresponding to the driving scene images in the multiple driving scene images; determining the number of driving scene images whose label difference is less than or equal to a preset difference threshold; and, when the number is greater than or equal to a preset number threshold, controlling, through the target shift pattern, the vehicle in the driving process to drive in the target gear at the target shift time.
  • the in-vehicle camera may collect images of the driving scene according to a preset collection frequency, and collect multiple images of the driving scene that are consecutive in time each time the image of the driving scene is collected. For each driving scene image in the multiple time-continuous driving scene images, the vehicle performs image recognition on it through the scene classification model, and obtains the corresponding driving scene label. Further, the vehicle determines the label difference between the labels of each driving scene, and determines whether the label difference is less than or equal to a preset difference threshold.
• the vehicle responds to the target shift pattern, and controls the vehicle in the driving process, through the target shift pattern, to drive in the target gear at the target shift time.
  • the vehicle may record driving scene images with a label difference less than or equal to a preset difference threshold, so as to determine, according to the recording result, the driving scene images whose label difference is less than or equal to the preset difference threshold. quantity.
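The consistency check above can be sketched as counting how many of the consecutively captured frames produced a scene label close to a reference label, and only acting on the shift decision when enough frames agree. Encoding labels as integers, and the two thresholds below, are illustrative assumptions:

```python
def labels_agree(labels, diff_threshold=0, count_threshold=3):
    """labels: scene labels of time-consecutive frames, encoded as integers
    (an assumed encoding). Returns True when at least count_threshold frames
    differ from the first frame's label by no more than diff_threshold."""
    reference = labels[0]
    agreeing = sum(1 for lab in labels if abs(lab - reference) <= diff_threshold)
    return agreeing >= count_threshold
```

Only when this gate passes would the vehicle be controlled to shift to the target gear at the target shift time; otherwise the candidate decision is discarded as possibly based on a misclassified frame.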
• the vehicle can determine the corresponding target shift pattern by using the first driving scene image among the multiple time-consecutive driving scene images, and use the remaining driving scene images other than that driving scene image to determine the accuracy of the scene classification model's recognition result for it, that is, to determine the confidence of the driving scene label. When the confidence of the driving scene label is higher than a preset confidence threshold, the vehicle in the driving process is controlled, through the target shift pattern, to drive in the target gear at the target shift time.
  • the vehicle may perform image recognition on a plurality of time-sequential driving scene images through a scene classification model, and obtain a driving scene label corresponding to each driving scene image.
• the vehicle determines the label differences between the driving scene labels, and when there is a preset number of driving scene labels whose differences are less than or equal to the preset difference threshold, screens the target driving scene label out of the driving scene labels.
  • the target shifting mode is determined according to the target driving scene label, the driving state data and the driving behavior data.
• when the differences between the preset number of driving scene labels are less than or equal to the preset difference threshold, the vehicle is controlled based on the target shift pattern, which can improve the safety of vehicle gear control and reduce the probability that a wrong driving scene label puts the vehicle in a dangerous state.
• the vehicle gear control method further includes: determining the label confidences respectively corresponding to the road attribute label, the traffic attribute label, and the environment attribute label; and, when the label confidences respectively corresponding to the road attribute label, the traffic attribute label, and the environment attribute label are all greater than or equal to a preset confidence threshold, controlling, through the target shift pattern, the vehicle in the driving process to drive in the target gear at the target shift time.
• when the scene classification model outputs the road attribute label, the traffic attribute label, and the environment attribute label, it can also output the label confidences corresponding to these labels.
• the vehicle judges whether the label confidences corresponding to the road attribute label, the traffic attribute label, and the environment attribute label are all greater than or equal to the preset confidence threshold; when they all reach the preset confidence threshold, it is considered that the driving scene label obtained from the driving scene image has high accuracy.
• if there is a driving scene label whose label confidence is less than the preset confidence threshold, the target shift pattern may be considered wrong, and the vehicle does not shift to the target gear at the target shift time.
• when the label confidences corresponding to the road attribute label, the traffic attribute label, and the environment attribute label are all greater than or equal to the preset confidence threshold, the vehicle is controlled based on the target shift pattern, which can improve the safety of vehicle gear control and reduce the probability of the vehicle being in a dangerous state due to a wrong driving scene label.
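The confidence gate just described can be sketched as a single all-of check over the three label types. The dictionary keys and the 0.8 threshold are illustrative assumptions:

```python
def shift_allowed(confidences, threshold=0.8):
    """confidences: dict mapping a label type (e.g. road / traffic /
    environment) to the model's confidence in [0, 1]. The target shift is
    executed only when every label meets the preset threshold."""
    return all(c >= threshold for c in confidences.values())
```

A single low-confidence label (say, a poorly lit environment label) is thus enough to suppress the shift, which is the conservative behavior the text calls for.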
• determining a target shift pattern matching the vehicle based on the driving state data, the driving behavior data, and the driving scene labels includes: determining the label priority corresponding to each driving scene label of the driving scene image; determining the label confidence corresponding to each driving scene label of the driving scene image; screening the target driving scene label out of the driving scene labels corresponding to the driving scene image based on the label priority and the label confidence; and determining the target shift pattern matching the vehicle according to the target driving scene label, the driving state data, and the driving behavior data.
  • the label priority corresponding to each driving scene label can be preset.
  • the priority of "big uphill” can be set as the first priority
  • "low adhesion” can be set as the second priority
  • “high” can be set as the second priority
  • Lighting” is the third priority, etc.
  • the first priority is higher than the second priority
  • the second priority is higher than the third priority.
• suppose the driving scene labels corresponding to the driving scene image are "big uphill", "red light", and "low adhesion", where the priority of "big uphill" is the first priority with a confidence of 90%, the priority of "red light" is the first priority with a confidence of 70%, and the priority of "low adhesion" is the second priority with a confidence of 95%. The first-priority labels "big uphill" and "red light" are first filtered out of the labels, and then the target driving scene label "big uphill" can be selected from "big uphill" and "red light" based on confidence.
• the vehicle may determine the label priority corresponding to each driving scene label based on Table 1. For example, when the priority corresponding to the correction logic "prepare to lower gears in advance when going uphill and avoid frequent shifting; take full advantage of engine drag to control vehicle speed when going downhill" is the first priority, the labels "big uphill", "uphill", "small downhill", and "large downhill" that share this correction logic can all have the first label priority.
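The selection rule in the example above, highest priority first and then highest confidence as the tie-breaker, can be sketched as follows. Labels are represented as (name, priority, confidence) tuples, an encoding assumed here for illustration (a smaller priority number means higher priority):

```python
def select_target_label(labels):
    """labels: list of (name, priority, confidence) tuples.
    Keep the labels at the highest priority (numerically lowest),
    then return the name of the most confident one among them."""
    best_priority = min(priority for _, priority, _ in labels)
    top = [lab for lab in labels if lab[1] == best_priority]
    return max(top, key=lambda lab: lab[2])[0]
```

Applied to the worked example, "big uphill" (priority 1, 90%) beats "red light" (priority 1, 70%) on confidence, and "low adhesion" (priority 2, 95%) is eliminated by priority despite its higher confidence.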
• the target driving scene label is screened out of the driving scene labels according to the label priority and the label confidence, and the corresponding target shift pattern is determined according to the target driving scene label, so that the vehicle can drive with a more reasonable target shift pattern.
• the above vehicle gear control method includes: determining the current gear of the vehicle; and, when the target gear in the target shift pattern is inconsistent with the current gear, adjusting the vehicle from the current gear to the target gear, so as to control the vehicle to drive in the target gear at the target shift time.
• the vehicle determines the current gear and judges whether the current gear is consistent with the target gear in the target shift pattern. If they are consistent, the current gear is kept unchanged; if they are inconsistent, the vehicle is adjusted from the current gear to the target gear at the target shift time, so as to control the vehicle to drive in the target gear at the target shift time.
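This final step amounts to a no-op when the gears already match, and a scheduled shift command otherwise. The command representation below is an assumption made purely for illustration:

```python
def plan_shift(current_gear, target_gear, target_time_s):
    """Return None when no shift is needed; otherwise a hypothetical shift
    command scheduling the gear change at the target shift time (seconds)."""
    if current_gear == target_gear:
        return None  # keep the current gear, no actuator command issued
    return {"at": target_time_s, "from": current_gear, "to": target_gear}
```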
  • a vehicle gear control method including the following steps:
  • S602 when the vehicle is in the driving process, collect scene information around the vehicle in real time through the vehicle-mounted camera deployed on the vehicle to obtain at least one image of the driving scene.
• S604: through the second road model, output at least one of a road gradient label, a curve curvature label, a road surface adhesion label, a road surface smoothness label, a signal light label, and a traffic sign label based on the road features extracted from the driving scene image.
• S606: through the second traffic model, output at least one of a pedestrian danger level label, a pedestrian congestion status label, a motor vehicle danger level label, a motor vehicle congestion status label, a non-motor vehicle danger level label, and a non-motor vehicle congestion status label based on the traffic features extracted from the driving scene image.
  • S608 output at least one of a road visibility label, a weather condition label, and a light intensity label based on the environmental features extracted from the driving scene image through the second environment model.
  • S610 Acquire driving state data and driving behavior data corresponding to the vehicle.
  • S614 Determine the label priority corresponding to each driving scene label corresponding to the driving scene image; determine the label confidence level corresponding to each driving scene label corresponding to the driving scene image.
  • S618 Determine a gear range and a shift time range corresponding to the driving scene label, the driving state data, and the driving behavior data.
  • S626 Determine the label confidence levels corresponding to the road attribute label, the traffic attribute label, and the environmental attribute label respectively; when the label confidence degrees corresponding to the road attribute label, the traffic attribute label, and the environmental attribute label are all greater than or equal to the preset confidence threshold, Through the target shift mode, the vehicle in the driving process is controlled to drive according to the target gear during the target shift time.
• the above vehicle gear control method acquires a driving scene image during the driving process and performs image recognition on the driving scene image to obtain the corresponding driving scene label.
  • the target shifting mode that matches the vehicle can be determined based on the driving state data, driving behavior data, and driving scene labels.
• at the target shift time, the vehicle drives in the target gear. Because the driving state data, the driving behavior data, and the driving scene label are combined to determine the target shift pattern, the accuracy of the determined target shift pattern is improved.
  • the present application also provides an application scenario, where the above-mentioned vehicle gear control method is applied to the application scenario.
  • the application of the vehicle gear control method in this application scenario is as follows:
  • step S702 when the driving object drives the vehicle in the driving area, the vehicle may acquire at least one image of the driving scene that is continuous in time.
• in step S704, the vehicle may perform image recognition on the collected driving scene images through the scene classification model to obtain the corresponding driving scene labels.
• in step S706, the vehicle may determine the target driving mode selected by the driving object.
• the vehicle may then determine a target shift schedule table (also referred to as a shift schedule map table) corresponding to the target driving mode, and determine, based on the target shift schedule table, the candidate shift pattern corresponding to the current driving state data and the current driving behavior data.
• in step S710, the vehicle dynamically corrects the candidate shift pattern based on the driving scene label to obtain the target shift pattern.
  • the vehicle monitors the rationality and safety of the target gear shifting mode.
  • the vehicle determines whether the target shift mode is reasonable and safe according to the monitoring result.
• in step S716, when it is determined that the target shift pattern is reasonable and safe, the vehicle drives in the target gear at the target shift time.
  • FIG. 7 shows a schematic flowchart of a vehicle gear control method in one or more embodiments.
  • the present application further provides an application scenario, where the above-mentioned vehicle gear control method is applied to the application scenario.
  • the application of the vehicle gear control method in this application scenario is as follows:
• before the vehicle is driven in the driving area, an on-board camera can be installed on the vehicle, so that driving scene images of the vehicle during the driving process can be collected through the on-board camera. Further, the vehicle sends the driving scene image to the server; the server performs image recognition on the driving scene image through the scene classification model to obtain the corresponding driving scene label, determines, based on the driving state data, the driving behavior data, and the driving scene label, the target shift pattern matching the vehicle, and sends it to the vehicle, so that the vehicle drives in the target gear at the target shift time.
• although the steps in the flowcharts of FIGS. 2 and 6 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that sequence. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIGS. 2 and 6 may include multiple sub-steps or stages, which are not necessarily executed and completed at the same moment but may be executed at different moments; their execution order is also not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • a vehicle gear control device 800 is provided.
• the device may be implemented as part of a computer device by using software modules, hardware modules, or a combination of the two. Specifically, it includes: an image acquisition module 802, a label identification module 804, and a shift pattern determination module 806, wherein:
  • the image acquisition module 802 is configured to acquire a driving scene image of the vehicle during the driving process.
  • the label recognition module 804 is configured to perform image recognition on the obtained driving scene image to obtain a driving scene label; the driving scene label includes at least one of a road attribute label, a traffic attribute label and an environment attribute label.
• the shift pattern determination module 806 is configured to acquire the driving state data and driving behavior data corresponding to the vehicle, and determine, based on the driving state data, the driving behavior data, and the driving scene label, a target shift pattern matching the vehicle; the target shift pattern is used to control the vehicle in the driving process to drive in the target gear at the target shift time.
• the image acquisition module is further configured to, when the vehicle is driving, collect scene information around the vehicle in real time through an on-board camera deployed on the vehicle to obtain at least one driving scene image.
  • the label identification module 804 further includes a feature extraction module 8041 for acquiring a scene classification model; extracting image features in the driving scene image through the scene classification model; the image features include road features, traffic features and environmental features; through the scene classification model, based on the road features, the traffic features and the environmental features, determine the road attribute labels, traffic attribute labels and the environment corresponding to the driving scene image attribute label.
  • the label identification module 804 is further configured to output driving scene labels through a scene classification model;
• the scene classification model includes a first road model related to roads, a first traffic model related to traffic, and a first environment model related to the environment.
  • the first road model includes at least one of a road gradient model, a curve curvature model, a road adhesion model, a road surface smoothness model, a signal light model and a traffic sign model;
• the first traffic model includes at least one of a danger level model and a congestion status model;
  • the first environment model includes at least one of a road visibility model, a weather condition model, and a light intensity model;
• the road attribute label includes at least one of the road gradient label output by the road gradient model, the curve curvature label output by the curve curvature model, the road adhesion label output by the road adhesion model, the road surface smoothness label output by the road surface smoothness model, the signal light label output by the signal light model, and the traffic sign label output by the traffic sign model
  • the road attribute label includes at least one of a road condition label and a road facility label
  • the road condition label includes at least one of a road gradient label, a curve curvature label, a road adhesion label, and a road surface smoothness label
  • a road facility label includes at least one of a signal light label and a traffic sign label
  • a traffic attribute label includes at least one of a pedestrian label, a motor vehicle label, and a non-motor vehicle label
• the pedestrian label includes at least one of the pedestrian danger level label and the pedestrian congestion status label
  • the motor vehicle label includes at least one of the motor vehicle danger level label and the motor vehicle congestion status label
• the non-motor vehicle label includes at least one of the non-motor vehicle danger level label and the non-motor vehicle congestion status label
  • the environmental attribute label includes at least one of a weather label and a light label
  • the weather label includes at least one of a road visibility label and a weather condition label
  • the light label includes at least a light intensity label
• the label identification module 804 is further configured to output, through the second road model and based on the road features, at least one of the road gradient label, the curve curvature label, the road surface adhesion label, the road surface smoothness label, the signal light label, and the traffic sign label; output, through the second traffic model and based on the traffic features, at least one of the pedestrian danger level label, the pedestrian congestion status label, the motor vehicle danger level label, the motor vehicle congestion status label, the non-motor vehicle danger level label, and the non-motor vehicle congestion status label; and output, through the second environment model and based on the environmental features, at least one of the road visibility label, the weather condition label, and the light intensity label.
  • the gear shift mode determination module 806 further includes a correction processing module 8061, configured to determine a corresponding candidate gear shift mode based on the driving state data and the driving behavior data; based on the The driving scene tag performs correction processing on the candidate shift patterns to obtain a target shift pattern matching the vehicle.
• the correction processing module 8061 is further configured to determine a target driving mode of the vehicle; obtain a target shift schedule table corresponding to the target driving mode; and look up the corresponding candidate shift pattern in the target shift schedule table according to the driving state data and the driving behavior data.
  • the correction processing module 8061 is further configured to determine a gear range and a shift time range corresponding to the driving scene label, the driving state data and the driving behavior data; when When the candidate gear in the candidate shift mode exceeds the gear range, adjust the candidate gear according to the gear range to obtain a target gear matching the vehicle; when the candidate gear shifts When the candidate shift time in the method exceeds the shift time range, the candidate shift time is adjusted according to the shift time range to obtain a target shift time matching the vehicle.
  • the number of the driving scene images is multiple, and the collection times of the multiple driving scene images are consecutive
• the vehicle gear control device 800 further includes a monitoring module 808, configured to determine the label difference between the driving scene labels corresponding to each of the driving scene images in the plurality of driving scene images; determine the number of driving scene images whose label difference is less than or equal to a preset difference threshold; and, when the number is greater than or equal to a preset number threshold, control, through the target shift pattern, the vehicle in the driving process to drive in the target gear at the target shift time.
• the monitoring module 808 is further configured to determine the label confidences respectively corresponding to the road attribute label, the traffic attribute label, and the environment attribute label; and, when the label confidences respectively corresponding to the road attribute label, the traffic attribute label, and the environment attribute label are all greater than or equal to the preset confidence threshold, control, through the target shift pattern, the vehicle in the driving process to drive in the target gear at the target shift time.
• the shift pattern determination module 806 further includes a priority determination module 8062, configured to determine the label priority corresponding to each driving scene label of the driving scene image; determine the label confidence corresponding to each driving scene label of the driving scene image; screen the target driving scene label out of the driving scene labels corresponding to the driving scene image based on the label priority and the label confidence; and determine, according to the target driving scene label, the driving state data, and the driving behavior data, a target shift pattern matching the vehicle.
• the vehicle gear control device 800 is further configured to determine the current gear of the vehicle; and, when the target gear in the target shift pattern is inconsistent with the current gear, adjust the vehicle from the current gear to the target gear at the target shift time, so as to control the vehicle to drive in the target gear at the target shift time.
  • Each module in the above-mentioned vehicle gear control device may be implemented in whole or in part by software, hardware and combinations thereof.
  • the above modules can be embedded in or independent of the processor in the computer device in the form of hardware, or stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device may be an electronic device, a vehicle, or a server; its internal structure diagram may be as shown in FIG. 10.
  • the computer device includes a processor, a memory, and a network interface connected via a system bus. The processor of the computer device provides computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium.
  • the database of the computer device is used to store vehicle gear control data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer program implements a vehicle gear control method when executed by the processor.
  • FIG. 10 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, where a computer program is stored in the memory, and when the processor executes the computer program, the steps in the foregoing method embodiments are implemented.
  • a computer-readable storage medium stores a computer program that, when executed by a processor, implements the steps in the foregoing method embodiments.
  • a computer program product or computer program comprising computer instructions stored in a computer readable storage medium.
  • the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the steps in the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical memory, and the like.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • the RAM may be in various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).


Abstract

This application relates to the field of artificial intelligence, and in particular to a vehicle gear control method and apparatus, a computer device, and a storage medium. The method includes: acquiring driving scene images of a vehicle while it is driving; performing image recognition on the acquired driving scene images to obtain driving scene labels, the driving scene labels including at least one of a road attribute label, a traffic attribute label, and an environment attribute label; acquiring driving state data and driving behavior data corresponding to the vehicle; and determining, based on the driving state data, the driving behavior data, and the driving scene labels, a target shift mode matching the vehicle, the target shift mode being used to control the moving vehicle to drive in a target gear at a target shift time. Applicable fields include, but are not limited to, the Internet of Vehicles, autonomous driving, and vehicle-road collaboration.

Description

Vehicle gear control method and apparatus, computer device, and storage medium

This application claims priority to Chinese Patent Application No. 2021102005391, entitled "Vehicle gear control method and apparatus, computer device and storage medium", filed with the China National Intellectual Property Administration on February 23, 2021, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the technical field of vehicle control, and in particular to a vehicle gear control method and apparatus, a computer device, and a storage medium.

Background

The transmission is one of the core components of a vehicle. By switching gears, it provides the traction required to keep the vehicle running normally according to its driving needs. Because slow or delayed gear shifts greatly affect a vehicle's drivability and power performance, automatic transmissions emerged to improve both: an automatic transmission shifts gears automatically, reducing driver fatigue and improving driving safety.

In the prior art, an automatic transmission determines a shift mode based on vehicle speed and shifts automatically accordingly. However, the prior art does not take into account the influence of the driving environment on gear shifting, so the resulting shift mode is not accurate.
Summary

A vehicle gear control method, executed by a computer device, the method including:

acquiring a driving scene image of a vehicle during driving;

performing image recognition on the acquired driving scene image to obtain a driving scene label, the driving scene label including at least one of a road attribute label, a traffic attribute label, and an environment attribute label;

acquiring driving state data and driving behavior data corresponding to the vehicle; and

determining, based on the driving state data, the driving behavior data, and the driving scene label, a target shift mode matching the vehicle, the target shift mode being used to control the vehicle, while driving, to drive in a target gear at a target shift time.

A vehicle gear control apparatus, the apparatus including:

an image acquisition module, configured to acquire a driving scene image of a vehicle during driving;

a label recognition module, configured to perform image recognition on the acquired driving scene image to obtain a driving scene label, the driving scene label including at least one of a road attribute label, a traffic attribute label, and an environment attribute label; and

a shift mode determination module, configured to acquire driving state data and driving behavior data corresponding to the vehicle, and determine, based on the driving state data, the driving behavior data, and the driving scene label, a target shift mode matching the vehicle, the target shift mode being used to control the vehicle, while driving, to drive in a target gear at a target shift time.

A computer device, including a memory and one or more processors, the memory storing computer-readable instructions that, when executed by the one or more processors, cause the one or more processors to perform the vehicle gear control method of the above aspects.

One or more non-volatile readable storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the vehicle gear control method of the above aspects.

A computer program product or computer program, including computer instructions stored in a computer-readable storage medium; a processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the vehicle gear control method of the above aspects.
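The claimed pipeline (scene labels combined with driving state and driver behavior to yield a target gear and shift time) can be sketched roughly as follows. This is a minimal illustration: all class names, label strings, thresholds, and gear rules are assumptions for demonstration, not values from the patent.

```python
from dataclasses import dataclass

# Hypothetical label/data containers; names are illustrative, not from the patent text.
@dataclass
class SceneLabels:
    road: str      # e.g. "steep_uphill"
    traffic: str   # e.g. "heavy_congestion"
    env: str       # e.g. "low_visibility"

@dataclass
class ShiftDecision:
    target_gear: int
    shift_time_s: float  # delay before executing the shift

def decide_shift(labels: SceneLabels, speed_kmh: float, throttle: float) -> ShiftDecision:
    """Toy decision combining scene labels, driving state, and driver behavior."""
    # Baseline gear from speed/throttle (stand-in for the shift-schedule lookup).
    gear = min(6, max(1, int(speed_kmh // 20) + 1))
    if throttle > 0.8 and gear < 6:
        gear += 1
    # Scene-label correction: hold a low gear on a steep uphill or in heavy congestion.
    if labels.road == "steep_uphill" or labels.traffic == "heavy_congestion":
        gear = min(gear, 2)
    # Shift earlier when the environment is risky (illustrative timing values).
    shift_time = 0.2 if labels.env == "low_visibility" else 0.5
    return ShiftDecision(gear, shift_time)
```

For example, at 90 km/h on a steep uphill the baseline gear of 5 is corrected down to 2, mirroring the patent's "hold a low gear while climbing" logic.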
Brief Description of the Drawings

FIG. 1 is a diagram of an application environment of a vehicle gear control method in one embodiment;

FIG. 2 is a schematic flowchart of a vehicle gear control method in one embodiment;

FIG. 3 is a schematic diagram of training a scene classification model in one embodiment;

FIG. 4 is a schematic diagram of driving scene labels in one embodiment;

FIG. 5 is a schematic diagram of determining a target shift mode based on a target driving mode in one embodiment;

FIG. 6 is a schematic flowchart of a vehicle gear control method in a specific embodiment;

FIG. 7 is a schematic flowchart of a vehicle gear control method in another embodiment;

FIG. 8 is a structural block diagram of a vehicle gear control apparatus in one embodiment;

FIG. 9 is a structural block diagram of a vehicle gear control apparatus in one embodiment;

FIG. 10 is an internal structure diagram of a computer device in one embodiment.
具体实施方式
图1为一个实施例中车辆挡位控制方法的应用环境图。参照图1,该车辆挡位控制方法应用于车辆挡位控制***100。车辆挡位控制***100包括车辆102和服务器104,其中,车辆102通过网络与服务器104进行通信。需要说明的是,车辆102侧可单独用于执行本申请实施例中提供的车辆挡位控制方法。车辆102和服务器104也可协同用于执行本申请实施例中提供的车辆挡位控制方法。
当车辆102和服务器104协同执行车辆挡位控制方法时,车辆102可通过车载摄像头采集行驶场景图像,并将行驶场景图像发送至服务器104,由服务器104对行驶场景图像进行图像识别,得到行驶场景标签。服务器104基于行驶场景标签、车辆102的行驶状态以及车辆102的驾驶行为数据,生成对应的目标换挡方式,并将目标换挡方式发送至车辆102,以使车辆102在行驶过程中按照目标换挡方式进行行驶。其中,服务器104可以用独立的服务器或者是多个服务器组成的服务器集群来实现。
当车辆102侧单独执行车辆挡位控制方法时,车辆102可通过车载摄像头采集行驶场景图像,并对行驶场景图像进行图像识别,得到行驶场景标签。车辆102获取自身的行驶状态和驾驶行为数据,根据行驶场景标签、行驶状态和驾驶行为数据,生成包含有目标换挡时间和目标档位的目标换挡方式,并基于目标换档方式在目标换档时间,按照目标档位进行行驶。
需要说明的是该方法可应用于车辆、服务器和电子设备。其中,该电子设备包括但不限于是手机、电脑、智能语音交互设备、智能家电、车载终端、飞行器等。
应该理解的是,本公开中使用的“第一”、“第二”以及类似的词语并不表示任何顺序、数量或者重要性,而只是用来区分不同的组成部分。除非上下文另外清楚地指出,否则单数形式“一个”、“一”或者“该”等类似词语也不表示数量限制,而是表示存在至少一个。
在一个或多个实施例中,如图2所示,提供了一种车辆换挡方法,容易理解的,上述车辆换挡方法可由计算机设备执行,该计算机设备具体可以为上述的电子设备、车辆或者服务器。为了更好地进行描述,下述以该方法应用于图1中的车辆为例进行说明,包括以下步骤:
步骤S202,获取车辆在行驶过程中的行驶场景图像。
其中,行驶场景图像指的是对运行车辆当前所处行驶区域的场景信息进行采集所得到的图像。运 行车辆指的是处于运动状态的车辆。
具体地,当车辆处于运行状态时,可通过图像采集装置实时采集周围的行驶场景图像。在一个或多个实施例中,图像采集装置可按照预设采集频率采集行驶场景图像,并且在每次采集过程中,均针对当前所处行驶区域采集至少一张行驶场景图像。
在一个或多个实施例中,可通过车载摄像头对车辆所处行驶区域的场景进行拍摄,得到行驶场景图像;还可通过车辆外接的摄像头对行驶区域的场景进行拍摄,得到行驶场景图像;亦可通过架设于行驶区域中的监控摄像头,对车辆的周围环境进行拍摄,得到行驶场景图像。本实施例在此不作限定。
在一个或多个实施例中,获取车辆在行驶过程中的行驶场景图像,包括:当车辆处于行驶过程中时,通过部署在车辆上的车载摄像头,实时对车辆周围的场景信息进行采集,得到至少一张行驶场景图像。
具体地,图像采集设备具体可以为车载摄像头。运行车辆采集的行驶场景图像可以通过设置于车辆上的车载摄像头采集得到。其中,车载摄像头具体可以是多个,可以根据图像获取需要设置于车辆的不同位置。例如,采集车道路面环境的车载摄像头的图像采集方向与采集天气环境的车载摄像头的图像采集方向不同,采集天气环境的车载摄像头的图像采集方向与采集交通指示牌的车载摄像头的图像采集方向不同。在其他实施例中,还可以通过采集方向可控的转动摄像装置进行行驶场景图像采集。
通过实时对车辆周围的场景进行采集,使得后续可通过实时采集的行驶场景图像,及时确定车辆的目标换挡方式,进而提升了确定目标换挡方式的及时性。
在一个或多个实施例中,可在车辆上架设用于采集道路图像的第一车载摄像头、用于采集交通参与者图像的第二车载摄像头以及用于采集天气图像的第三车载摄像头。其中,交通参与者指的是行驶区域中的行人、机动车和非机动车。
步骤S204,对获取的行驶场景图像进行图像识别,得到行驶场景标签;行驶场景标签至少包括道路属性标签、交通属性标签和环境属性标签中的一种。
其中,行驶场景标签指的是标明行驶区域中的行驶场景的场景状况的信息。行驶场景标签包括道路属性标签、交通属性标签和环境属性标签。道路属性标签指的是标明运行车辆在当前所处行车区域的道路状况的信息,例如,道路属性标签可以为“大上坡”、“小上坡”、“直道”、“平整路面”等。交通属性标签指的是标明运行车辆在当前所处行车区域的交通状况的信息,例如,交通属性标签可以为“拥堵严重”、“危险程度高”等。环境属性标签指的是标明当前所处行驶区域的环境状况的信息,例如,环境属性标签可为“道路可见度低”、“强光照”等。
具体地,当获取得到行驶场景图像时,车辆可对行驶场景图像进行图像识别,识别行驶场景图像中的道路状况、交通状况和环境状况,并基于道路状况、交通状况和环境状况输出对应的道路属性标签、交通属性标签和环境属性标签。
在一个或多个实施例中,车辆可通过道路识别算法识别行驶场景图像中的道路,可通过交通识别算法识别行驶场景图像中的交通状况,可通过环境识别算法识别行驶场景图像中的环境状况。其中,道路识别算法、交通识别算法和环境识别算法均可根据需要自定义,如可为OpenCV道路检测算法、Matlab的交通识别算法等。
在一个或多个实施例中,对获取的行驶场景图像进行图像识别,得到行驶场景标签,包括:获取场景分类模型;通过场景分类模型,提取行驶场景图像中的图像特征;图像特征包括道路特征、交通特征和环境特征;通过场景分类模型,基于道路特征、交通特征和环境特征,确定与行驶场景图像对应的道路属性标签、交通属性标签和环境属性标签。
其中,场景分类模型指的是用以输出行驶场景标签的机器学习模型。机器学习模型可以包括线性 模型和非线性模型。例如,机器学习模型可以包括回归模型、支持向量机、基于决策树的模型、贝叶斯模型和/或神经网络(例如,深度神经网络)。例如,神经网络可以包括前馈神经网络、递归神经网络(例如,长短期记忆递归神经网络)、卷积神经网络或其他形式的神经网络。需要说明,场景分类模型不一定限于是神经网络,还可以包括其他形式的机器学习模型。
图像特征是可以反映出车辆周围的场景特征的数据。图像特征包括道路特征、交通特征和环境特征。道路特征是用于反映行驶场景中的道路特征的数据,道路特征可以反映出道路路面和道路设施的像素点的颜色值分布、亮度值分布和深度分布。交通特征是用于反映行驶场景中的交通参与者特征的数据,交通特征可以反映出行人、机动车和非机动车之间的距离等其中的一种或多种特征信息。环境特征是用于反映行驶场景中的环境特征的数据。环境特征可以反映出天空、树木等像素点的亮度分布和深度分布。
具体地,当获取得到行驶场景图像时,车辆可将行驶场景图像输入至场景分类模型中,通过场景分类模型对行驶场景图像进行编码处理,得到编码结果,并根据编码结果提取出行驶场景图像中的道路特征、交通特征和环境特征。进一步地,场景分类模型对道路特征、交通特征和环境特征进行解码处理,得到与行驶场景图像对应的道路属性标签、交通属性标签和环境属性标签。
在一个或多个实施例中,参考图3,可预先在车辆上架设车载摄像头,并通过车载摄像头在不同的行驶场景中采集多张行驶场景图像。例如,可通过车载设摄像头在雨天采集上坡路段的行驶场景图像;在晴天采集弯道路段的行驶场景图像;在雪天采集拥堵路段的行驶场景图像等。进一步的,模型训练人员可对采集得到的行驶场景图像进行标签标注处理,通过标签标注处理得到对应的训练样本,并通过训练样本对场景分类模型进行模型训练,直至达到训练停止条件时停止,得到训练好的场景分类模型。图3为一个实施例中场景分类模型的训练示意图。
上述实施例中,通过预先对场景分类模型进行训练,使得训练后的场景分类模型可以全方位地提取出图像特征,从而可以基于提取出的图像特征输出多维度的行驶场景标签,进而后续可基于多维度的行驶场景标签准确得到车辆的目标换挡方式。
步骤S206,获取与车辆对应的行驶状态数据和驾驶行为数据。
其中,行使状态数据指的是车辆在运行过程中的车辆状况数据。在一个或多个实施例中,行使状态数据包括车辆类型、车辆速度以及车辆加速度中的一个参数或者多个参数的组合。其中,车辆类型是用于描述车的大小的参数,具体可以是车辆的型号等。车辆速度指的是车辆在运行时的速度数据,车辆的加速度指的是车辆在运行时的速度变化率。
驾驶行为数据指的是车辆驶对象对当前车辆进行控制的控制数据。在一个或多个实施例中,驾驶行为数据包括制动踏板踩踏程度、加速踏板踩踏程度以及节气门开度中的一个参数或者多个参数的组合。其中,驾驶对象是指对车辆进行驾驶控制的驾驶者,可以理解,驾驶者具体可以是真实的人,也可以时***自动控制代替的机器人。节气门开度指的是发动机节气门的开启角度,车辆节气门开度可由驾驶对象通过加速踏板来操纵,以改变发动机的进气量,从而控制发送机的运转。不同的节气门开度标志着发送机的不同运转工况。
具体地,车辆可通过架设于内部的车速传感器、加速度传感器确定车辆速度以及车辆加速度,可通过架设于内部的加速踏板传感器、制动踏板传感器以及发动机节气门传感器,确定加速踏板踩踏程度、制动踏板踩踏程度以及节气门开度。
在一个或多个实施例中,车辆可直接通过读取车辆仪表盘中的仪表盘数据,来确定对应的车辆速度。
在一个或多个实施例中,车辆可实时获取行驶状态数据和驾驶行为数据,也可每隔预设时间获取 行驶状态数据和驾驶行为数据,还可在确定驾驶对象踩踏加速踏板或者制动踏板时,获取行驶状态数据和驾驶行为数据。本实施例在此不作限定。
步骤S208,基于行驶状态数据、驾驶行为数据、及行驶场景标签,确定与车辆相匹配的目标换挡方式;目标换挡方式用于控制处于行驶过程中的车辆在目标换挡时间按照目标挡位进行行驶。
其中,目标换挡方式包括目标换挡时间和目标挡位。目标换挡时间指的是将车辆从原始挡位更换至目标挡位的挡位更换时间。目标挡位指的是与当前车辆的行驶状态数据、当前驾驶对象的驾驶行为数据以及当前行驶环境的行驶场景标签相匹配的挡位。
具体地,车辆根据行驶状态数据、驾驶行为数据和行驶场景标签,生成相应的目标换挡方式,并在目标换挡时间将挡位调整至目标挡位,按照目标挡位进行行驶。
在一个或多个实施例中,当车辆基于行驶场景标签,确定前方行驶区域具有危险时,例如,当基于行驶场景标签,确定前方路面为事故多发地,或者确定前方为急转弯道时,车辆将目标换挡时间设置为“在驾驶对象踩踏制动踏板之前”,如此,便能在驾驶员踩踏制动踏板之前,就提前准备换低挡位,准备减速慢行,从而减少了切换和同步离合器的时间。
在一个或多个实施例中,车辆可根据驾驶对象的反应时间,确定目标换挡时间。其中,驾驶对象的反应时间是指驾驶对象对突发的状况做出反应的时常,一般来说,无人驾驶的情况,驾驶对象的反应时间与车辆的处理器分析速度有关,人工驾驶情况下,可通过识别驾驶者的身份,通过获取该驾驶者的历史驾驶数据,通过对历史驾驶数据中的反应数据进行分析,以得到该驾驶对象的反应时间。当车辆基于行驶场景标签,确定前方行驶区域具有危险时,车辆获取行驶场景标签的确定时间点,并将确定时间点加上反应时间,得到换挡时间期限,将目标换挡时间设置为确定时间点至换挡时间期限中的任意一个时间点,从而实现在驾驶对象踩踏制动踏板之前,就提前准备换低挡位。其中,确定时间点为生成行驶场景标签的时间点。
在一个或多个实施例中,当车辆基于行驶场景标签,确定前方行驶区域为慢行区域时,例如,当基于行驶场景标签,确定前方拥堵、为大上坡或者为急转弯时,则需要尽量避免换挡频繁,避免由于驾驶对象踩踏加速踏板就导致变速器升挡的情况。若当前时刻驾驶对象踩踏了制动踏板或者加速踏板,车辆基于“减少换挡次数”原则,判断自当前时刻之前的前一时刻起的预设时长内,驾驶对象是否踩踏制动踏板或者加速踏板。若预设时长内驾驶对象未踩踏制动踏板或者加速踏板时,可认为此时驾驶对象踩踏制动踏板或者加速踏板的操作不为误操作,从而,车辆将当前时刻加上预设时长,得到目标换挡时间,并在目标换挡时间按照目标档位进行行驶。
若预设时长内驾驶对象踩踏制动踏板或者加速踏板时,则可认为驾驶对象在短时间内多次踩踏了制动踏板或者加速踏板,进而可认为驾驶对象在当前时刻踩踏制动踏板或者加速踏板的操作为误操作,此时,车辆忽略此次驾驶对象的踩踏制动踏板操作,保持当前档位不变。例如,当行驶区域为大上坡时,理想状态下车辆应保持1挡或者2挡匀速前行,因此,为了避免由于驾驶对象踩踏加速踏板就导致变速器升挡的情况,车辆可基于“减少换挡次数”原则,将短时间内的驾驶对象反复踩踏加速踏板操作或踩踏制动踏板操作判定为误操作动作,并通过不响应驾驶对象误操作动作来避免挡位频繁更换,从而提升车辆燃油经济性。
在一个或多个实施例中,驾驶对象可预先在车辆中安装挡位决策***。当获取得到行驶状态数据、驾驶行为数据、及行驶场景标签时,车辆可将行驶状态数据、驾驶行为数据、及行驶场景标签输入至挡位决策***,通过挡位决策***的隶属度函数,确定与车辆相匹配的目标换挡方式。其中,隶属度函数的输入参数可为行驶状态数据、驾驶行为数据、行驶场景标签,输出数据可为与车辆相匹配的目标换挡方式。
在一个或多个实施例中,车辆可获取专家驾驶对象在不同预设行驶场景下的多次驾驶试验数据,并对多次驾驶试验数据进行数据分析,从而得到挡位决策***的隶属度函数,其中,挡位决策***的隶属度函数可用于根据车辆的行驶状态数据、驾驶行为数据、及行驶场景标签,确定车辆的目标换挡方式。其中,专家驾驶对象指的是能够熟练驾驶车辆的驾驶员。专家驾驶对象可驾驶车辆行驶于预设行驶场景中,并记录行驶于预设行驶场景时所采用的驾驶方式,将预设行驶场景以及对应的驾驶方式作为驾驶试验数据,基于试验数据确定对应的隶属度函数。例如,专家驾驶对象可驾驶车辆进行多次爬坡,并确定每次爬坡时所采用的挡位和换挡时间,将每次爬坡时所采用的挡位和换挡时间作为驾驶试验数据。
在一个或多个实施例中,车辆可通过换挡方式确定模型,根据行驶状态数据、驾驶行为数据、及行驶场景标签,确定与车辆相匹配的目标换挡方式。其中,换挡方式确定模型指的是基于行驶状态数据、驾驶行为数据、行驶场景标签以及对应的目标换挡方式训练而得的机器学习模型。在通过换挡方式确定模型,输出目标换挡方式之前,研发人员可获得大量的驾驶试验数据,并通过驾驶试验数据对相应的行驶场景图像添加换挡方式标签,得到训练数据。例如,当试验数据为爬坡时所采用的挡位和换挡时间时,研发人员可获取车辆在爬坡时所采集的爬坡行驶场景图像,并基于爬坡时所采用的挡位和换挡时间,对爬坡行驶场景图像进行标签标注处理,从而得到训练数据。进一步地,换挡方式确定模型可通过训练数据进行训练,直至达到训练停止条件时停止。
在一个或多个实施例中,车载摄像头可按照预设采集频率采集行驶场景图像,若当前采集时间点采集的行驶场景图像的当前行驶场景标签,与下一顺序采集时间点采集的行驶场景图像的后序行驶场景标签之间的差异小于差异阈值时,车辆保持与当前行驶场景标签相对应的目标换挡方式不变。
在一个或多个实施例中,车辆可通过双离合自动变速器,从原始档位调整至目标档位。其中,双离合自动变速器有2个离合器,第一离合器控制奇数挡和倒挡,第二离合器控制偶数挡;当拨挡杆在奇数挡时,第一离合器与第一输入轴连接,第一输入轴工作,第二离合器分离,第二输入轴不工作;同样,当拨挡杆在偶数挡时,第二离合器与第二输入轴连接,第一离合器与第一输入轴分离。这样整个工作过程总有两个挡位相联接,一个正在工作时,另一个则在随时准备着下一次换挡。
双离合自动变速器的工作原理决定了双离合自动变速器要长时间处于半离合状态,从而使得双离合自动变速器过热,甚至导致双离合自动变速器停止工作。尤其在拥堵路况,车辆处于起步阶段或低于某一车速时,双离合自动变速器始终处于滑磨状态,且为获取起步所需动力还会增大发动机转速与扭矩输出,这也导致了双离合自动变速器迅速温升和车辆的急加速。现有控制策略中,为保护双离合自动变速器,会使双离合自动变速器完全分离,但这又导致动力中断,使得双离合自动变速器抖动、顿挫感比较严重,平顺性较差。
在一个或多个实施例中,通过准确地识别行驶过程中的行驶场景,能够基于准确识别的行驶场景为车辆提供更为可靠的环境信息,从而车辆可基于更为可靠的环境信息,生成更为合理的目标换挡方式,以此保证双离合自动变速器能在拥堵路况起步、弯道、陡坡、湿滑路面上行驶时合理选择是否要进行换挡,换几档以及何时换挡,从而解决双离合自动变速器设计问题而导致的车辆动力中断、顿挫感强的问题。
上述车辆挡位控制方法中,通过获取在行驶过程中的行驶场景图像,可对行驶场景图像进行图像识别,得到对应的行驶场景标签。通过获取行驶状态数据和驾驶行为数据,可基于行驶状态数据、驾驶行为数据、及行驶场景标签,确定与车辆相匹配的目标换挡方式,如此,便能控制处于行驶过程中的车辆在目标换挡时间按照目标挡位进行行驶。由于是综合行驶状态数据、驾驶行为数据、及行驶场景标签确定目标换挡方式的,因此提升了所确定的目标换挡方式的准确性。
在一个或多个实施例中,场景分类模型包括与道路相关的第一道路模型、与交通相关的第一交通模型以及与环境相关的第一环境模型;第一道路模型至少包括道路坡度模型、弯道曲率模型、路面附着度模型、路面平整度模型、信号灯模型和交通指示牌模型中的一种;第一交通模型至少包括危险程度模型和拥堵状况模型中的一种;第一环境模型至少包括道路可见度模型、天气状况模型和光照强度模型中的一种;道路属性标签至少包括通过道路坡度模型输出的道路坡度标签、通过弯道曲率模型输出的弯道曲率标签、通过路面附着度模型输出的路面附着度标签、通过路面平整度模型输出的路面平整度标签、通过信号灯模型输出的信号灯标签和通过交通指示牌模型输出的交通指示牌标签中的一种;交通属性标签至少包括通过危险程度模型输出的危险程度标签、及通过拥堵状况模型输出的拥堵状况标签中的一种;环境属性标签至少包括通过道路可见度模型输出的道路可见度标签、通过天气状况模型输出的天气状况标签、及通过光照强度模型输出的光照强度标签中的一种。
具体地,场景分类模型可包括与道路相关的第一道路模型、与交通相关的第一交通模型、以及与环境相关的第一环境模型。车辆通过第一道路模型,可输出道路属性标签;通过第一交通模型,可输出交通属性标签;通过第一环境模型,可输出环境属性标签。
进一步地,参考图4,第一道路模型又可包括道路坡度模型、弯道曲率模型、路面附着度模型、路面平整度模型、信号灯模型和交通指示牌模型。车辆可通过道路坡度模型输出道路坡度标签;可通过弯道曲率模型输出弯道曲率标签;可通过路面附着度模型输出路面附着度标签;可通过路面平整度模型输出路面平整度标签;可通过信号灯模型输出信号灯标签;以及可通过交通指示牌模型输出交通指示牌标签。
其中,道路坡度标签指的是标明行车区域的道路坡度状况的信息,道路坡度标签具体可以为“大上坡”、“小上坡”、“平路”、“小下坡”、“大下坡”等。弯道曲率标签指的是标明行车区域的弯道曲率状况的信息,弯道曲率标签具体可以为“直道”、“弯道”、“急转弯道”等。路面附着度标签指的是标明行车区域的路面附着度状况的信息,例如,路面附着度标签具体可以为“低附着”、“中附着”、“高附着”等。路面平整度标签指的是标明行车区域的路面平整度状况的信息,路面平整度标签具体可以为“路面平整”、“颠簸路面”等。信号灯标签指的是标明行车区域的交通信号灯的信息,信号灯标签具体可以为“红灯”、“黄灯”、“路灯”等。交通指示牌标签指的是标明行车区域的所存在的交通指示牌的信息,交通指示牌标签具体可以为“前方学校”、“前方斑马线”、“前方事故多发地”等。
进一步地,第一交通模型包括危险程度模型和拥堵状况模型。车辆可通过危险程度识别结果输出危险程度标签,可通过拥堵状况模型输出拥堵状况标签。
其中,危险程度标签指的是标明行车区域的交通危险程度的信息,危险程度标签具体可以为“高危险”、“中危险”或者“低危险”等。拥堵状况标签指的是标明行车区域的交通拥堵程度的信息,拥堵状况标签具体可以为“高拥堵”、“中拥堵”或者“低拥堵”等。
进一步地,第一环境模型包括道路可见度模型、天气状况模型和光照强度模型。车辆可通过道路可见度模型输出道路可见度标签,可通过天气状况模型输出天气状况标签,以及可通过光照强度模型输出的光照强度标签。
其中,道路可见度标签指的是标明行车区域中的道路的可见程度的信息,道路可见度标签具体可以为“高可见度”、“中可见度”和“低可见度”。天气状况标签指的是标明当前天气情况的标签,天气状况标签具体可以为“晴”、“雨”、“雪”、“雾”等。光照强度标签指的是标明行车区域的光照强度的信息,光照强度标签具体可以为“光照强”、“光照中等”、“光照弱”等。图4示出了一个实施例中行驶场景标签的示意图。
在一个或多个实施例中,危险程度模型可综合行人维度的危险程度、机动车维度的危险程度和非 机动车维度的危险程度,输出与行驶场景图像对应的危险程度标签。具体地,危险程度模型可对行驶场景图像中的行人、机动车以及非机动车进行识别,得到识别结果,并根据识别结果确定行人、机动车和非机动车在行车区域中的危险程度,综合行人、机动车和非机动车在行车区域中的危险程度,得到与行驶场景图像对应的危险状况标签。
在一个或多个实施例中,拥堵状况模型可综合行人维度的拥堵程度、机动车维度的拥堵程度和非机动车维度的拥堵程度,输出与行驶场景图像对应的拥堵状况标签。具体地,拥堵状况模型可对行驶场景图像中的行人、机动车以及非机动车进行识别,得到识别结果,并根据识别结果确定行人、机动车和非机动车在行车区域中的密集程度,综合行人、机动车和非机动车在行车区域中的密集程度,得到与行驶场景图像对应的拥堵状况标签。
在一个或多个实施例中,在使用场景分类模型对行驶场景图像进行图像识别之前,可分别对场景分类模型中的各个模型进行训练,得到训练好的模型。
上述实施例中,由于是基于不同的模型输出对应的标签,可使得所输出的标签更为准确。
在一个或多个实施例中,场景分类模型包括第二道路模型、第二交通模型和第二环境模型;道路属性标签至少包括道路状况标签和道路设施标签中的一种,道路状况标签至少包括道路坡度标签、弯道曲率标签、路面附着度标签和路面平整度标签中的一种,道路设施标签至少包括信号灯标签和交通指示牌标签中的一种;交通属性标签至少包括行人标签、机动车标签和非机动车标签中的一种,行人标签至少包括行人危险程度标签和行人拥堵状况标签中的一种,机动车标签至少包括机动车危险程度标签和机动车拥堵状况标签中的一种,非机动车标签至少包括非机动车危险程度标签和非机动车拥堵状况标签中的一种;环境属性标签至少包括天气标签和光照标签中的一种,天气标签至少包括道路可见度标签和天气状况标签中的一种,光照标签至少包括光照强度标签;通过场景分类模型,基于道路特征、交通特征和环境特征,确定与行驶场景图像对应的道路属性标签、交通属性标签和环境属性标签,包括:通过第二道路模型,基于道路特征,输出道路坡度标签、弯道曲率标签、路面附着度标签、路面平整度标签、信号灯标签和交通指示牌标签中的至少一种;通过第二交通模型,基于交通特征,输出行人危险程度标签、行人拥堵状况标签、机动车危险程度标签、机动车拥堵状况标签、非机动车危险程度标签和非机动车拥堵状况标签中的至少一种;通过第二环境模型,基于环境特征,输出道路可见度标签、天气状况标签和光照强度标签中的至少一种。
其中,多任务模型是可以处理不同任务的机器学习模型,多任务模型通过学习不同任务的之间联系和差异,可以提高每个任务的学习效率和质量。
具体地,场景分类模型可包括第二道路模型、第二交通模型和第二环境模型。车辆可将行驶场景图像输入至第二道路模型中,通过第二道路模型提取行驶场景图像中的道路特征,并根据道路特征分别对行驶区域中的道路和道路设施进行识别,根据识别结果输出道路状况标签和道路设施标签。其中,道路状况标签包括道路坡度标签、弯道曲率标签、路面附着度标签和路面平整度标签;道路设施标签包括信号灯标签和交通指示牌标签。
进一步地,车辆可将行驶场景图像输入至第二交通模型,通过第二交通模型提取交通特征,并根据交通特征对行驶区域中的行人、机动车和非机动车进行识别,基于识别结果输出行人标签、机动车标签和非机动车标签。其中,行人标签包括行人危险程度标签和行人拥堵状况标签;机动车标签包括机动车危险程度标签和机动车拥堵状况标签;非机动车标签包括非机动车危险程度标签和非机动车拥堵状况标签。
进一步地,车辆可将行驶场景图像输入至第二环境模型,通过交第二环境模型提取环境特征,并根据第二环境模型对行驶区域中的天气、光照进行识别,基于识别结果输出天气标签和光照标签。其 中,天气标签包括道路可见度标签、天气状况标签;光照标签包括光照强度标签。
由于可通过多任务模型一次性输出多个行驶场景标签,从而提升了行驶场景标签的输出效率。
In one or more embodiments, determining the target shift mode matching the vehicle based on the driving state data, the driving behavior data, and the driving scene label includes: determining a corresponding candidate shift mode based on the driving state data and the driving behavior data; and correcting the candidate shift mode based on the driving scene label to obtain the target shift mode matching the vehicle.

Specifically, after obtaining the driving state data and driving behavior data of the running vehicle, the vehicle may call a preset shift schedule table and look up, in the shift schedule table, the candidate shift mode corresponding to both the acquired driving state data and driving behavior data. The shift schedule table is a table storing the correspondence among driving state data, driving behavior data, and shift modes; it may specifically be a shift MAP.

Further, after determining the driving scene label corresponding to the driving scene image, the vehicle may correct the candidate shift mode based on the driving scene label, adjust the candidate shift mode to the target shift mode, and trigger the vehicle to drive according to the target shift mode.

In one or more embodiments, referring to Table 1, the vehicle may correct the candidate shift mode according to the dynamic gear correction logic in Table 1.

For example, when the driving scene label is "steep uphill", the corresponding correction logic is "prepare to downshift in advance on an uphill to avoid frequent shifting". The vehicle then lowers the candidate gear in the candidate shift mode to the target gear and adjusts the shift time in the candidate shift mode to "before the driver presses the brake pedal". Correcting the candidate shift mode with this logic lets the vehicle avoid frequent shifting while climbing, which not only reduces the energy consumed by frequent shifting and the wear on shift actuator components, but also avoids losing fuel economy and power performance due to an excessive proportion of power-cutoff time. The target shift time and the specific target gear may be determined by the membership function described above, or by the shift mode determination model described above. For example, a gear range and a shift time range corresponding to the driving scene label may be determined by the membership function, and the candidate shift mode is adjusted to fall within those ranges to obtain the target gear and target shift time. The candidate shift mode may be corrected based on a minimal-correction principle: for example, when the gear range is 1st to 2nd gear and the candidate gear is 3rd, the candidate gear is adjusted to 2nd rather than 1st, which improves driving safety.

For another example, when the driving scene label is "low road adhesion", the vehicle raises the gear to the target gear according to the corresponding correction logic "prepare to upshift when shifting on a low-adhesion road surface to avoid excess driving force". Correcting the candidate gear with adhesion-related logic not only reduces the probability of dangerous accidents caused by sudden skidding when the tires lack sufficient adhesion on a low-adhesion surface, but also reduces low-gear switching, meets the vehicle's adhesion requirements, and improves fuel economy.

As yet another example, when the driving scene label is "sharp curve", the vehicle uses the correction logic "when shifting on a curve, lower the gear and avoid frequent shifting while ensuring driving safety" to lower the candidate gear to the target gear, and, based on the driver's brake-pedal or accelerator-pedal operations, identifies erroneous operations during cornering and avoids frequent gear changes by not responding to them.

For example, when the driving scene label is "heavy congestion", the vehicle uses the correction logic "lower the gear; if the speed basically matches, keep the current gear and avoid frequent shifting" to lower the candidate gear to the target gear and avoid frequent shifting. Avoiding frequent shifting reduces the probability of the dual-clutch transmission staying half-engaged for a long time, extending its service life and reducing jerkiness while driving.
Table 1: Dynamic gear correction logic table
In the above embodiment, correcting the candidate shift mode with the driving scene label allows the corrected target shift mode to combine surrounding-environment information, driving state information, and driving behavior information, yielding a more reasonable target shift mode.

In one or more embodiments, determining the corresponding candidate shift mode based on the driving state data and the driving behavior data includes: determining a target driving mode of the vehicle; acquiring a target shift schedule table corresponding to the target driving mode; and looking up the corresponding candidate shift mode in the target shift schedule table according to the driving state data and the driving behavior data.

The driving mode refers to the manner of driving; it may specifically be a sport mode, a comfort mode, an eco mode, an off-road mode, or the like. In different driving modes, the vehicle adjusts the response of the steering, gearbox, engine, and suspension, as well as the intervention timing and strength of the electronic stability program, according to predetermined parameters. Specifically, the vehicle may determine the current target driving mode, acquire the target shift schedule table corresponding to it, and look up, in the target shift schedule table, the candidate shift mode corresponding to both the driving state data and the driving behavior data. The target driving mode is the driving mode the vehicle is in at the current moment.

In one or more embodiments, the vehicle may pre-store a shift schedule table for each driving mode. When the target driving mode of the vehicle is determined, the vehicle may look up the target shift schedule table among the stored shift schedule tables according to the mode identifier of the target driving mode.

In the above embodiment, by acquiring the target shift schedule table corresponding to the target driving mode, the candidate shift mode can be determined more accurately based on that table.
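The per-mode shift schedule lookup described above can be sketched as a table keyed by driving mode. The mode names, speed thresholds, and gear counts below are invented for illustration only; real schedules are calibrated MAP tables over speed and throttle opening.

```python
import bisect

# Illustrative per-mode shift schedules: upshift speed thresholds (km/h) per mode.
SHIFT_TABLES = {
    "eco":   [15, 30, 45, 60, 80],   # upshift early to save fuel
    "sport": [25, 45, 70, 95, 120],  # hold gears longer for power
}

def candidate_gear(mode: str, speed_kmh: float) -> int:
    """Look up the candidate gear for the active driving mode."""
    thresholds = SHIFT_TABLES[mode]
    # Gears are 1-based: speed below the first threshold means 1st gear.
    return bisect.bisect_right(thresholds, speed_kmh) + 1
```

At the same 50 km/h, the eco schedule yields a higher gear than the sport schedule, reflecting how the chosen driving mode selects a different shift schedule table.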
In one or more embodiments, the target shift mode includes a target gear and a target shift time; correcting the candidate shift mode based on the driving scene label to obtain the target shift mode matching the vehicle includes: determining a gear range and a shift time range corresponding to the driving scene label, the driving state data, and the driving behavior data; when the candidate gear in the candidate shift mode is outside the gear range, adjusting the candidate gear according to the gear range to obtain the target gear matching the vehicle; and when the candidate shift time in the candidate shift mode is outside the shift time range, adjusting the candidate shift time according to the shift time range to obtain the target shift time matching the vehicle.

Specifically, after determining the driving scene label, the driving state data, and the driving behavior data, the vehicle may determine the corresponding gear range and shift time range through the membership function of the gear decision system. Further, the vehicle judges whether the candidate gear in the candidate shift mode is outside the gear range; if so, the candidate gear is adjusted into the gear range to obtain the target gear. For example, when the gear range is 1st to 2nd gear and the candidate gear is 3rd, the vehicle may correct the gear from 3rd to 2nd or 1st. Further, the vehicle judges whether the candidate shift time in the candidate shift mode is outside the shift time range; if so, the candidate shift time is adjusted into the shift time range to obtain the target shift time.

In one or more embodiments, referring to FIG. 5, after obtaining the vehicle speed, throttle opening, and driving scene label, the vehicle may determine the current target driving mode and, based on it, the corresponding target shift schedule table. Further, the vehicle may look up the candidate shift mode corresponding to the vehicle speed and throttle opening in the target shift schedule table, correct the candidate shift mode with the driving scene label to obtain the target gear and target shift time, and control the dual-clutch transmission to adjust the gear from the original gear to the target gear at the target shift time. FIG. 5 shows a schematic diagram of determining a target shift mode based on a target driving mode in one embodiment.

In the above embodiment, correcting the candidate shift mode makes the finally determined target shift mode more accurate and reasonable.
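The range-clamping correction described above, including the minimal-correction choice of the nearest bound (a candidate gear of 3 with an allowed range of 1st-2nd becomes 2nd, not 1st), can be sketched as follows; the function and parameter names are assumptions for illustration.

```python
def clamp_shift(candidate_gear: int, candidate_time: float,
                gear_range: tuple, time_range: tuple) -> tuple:
    """Clamp a candidate (gear, shift time) into scene-dependent ranges.

    Clamping to the nearest range bound realizes the minimal-correction
    principle: the candidate is moved as little as possible.
    """
    lo_g, hi_g = gear_range
    lo_t, hi_t = time_range
    gear = min(max(candidate_gear, lo_g), hi_g)
    time = min(max(candidate_time, lo_t), hi_t)
    return gear, time
```

A candidate inside both ranges passes through unchanged, so the correction only intervenes when the scene label actually constrains the shift.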
In one or more embodiments, there are multiple driving scene images whose capture times are consecutive, and the vehicle gear control method further includes: determining the label differences between the driving scene labels corresponding to each of the multiple driving scene images; determining the number of driving scene images whose label differences are less than or equal to a preset difference threshold; and when the number is greater than or equal to a preset count threshold, controlling, through the target shift mode, the vehicle in motion to drive in the target gear at the target shift time.

Specifically, the vehicle-mounted camera may capture driving scene images at a preset capture frequency, capturing multiple temporally consecutive driving scene images each time. For each of the multiple temporally consecutive driving scene images, the vehicle performs image recognition through the scene classification model to obtain the corresponding driving scene label. Further, the vehicle determines the label differences between the driving scene labels and judges whether each label difference is less than or equal to the preset difference threshold. If the differences between a preset number of driving scene labels are less than or equal to the preset difference threshold, for example if the differences between the driving scene labels of ten driving scene images are below the preset difference threshold, the recognition results of the scene classification model may be considered relatively accurate and the confidence of the driving scene label high; the vehicle then responds to the target shift mode and controls the moving vehicle, through the target shift mode, to drive in the target gear at the target shift time. In one or more embodiments, the vehicle may record the driving scene images whose label differences are less than or equal to the preset difference threshold, and determine, from the record, the number of such driving scene images.

In one or more embodiments, the vehicle may determine the corresponding target shift mode from the first of the multiple temporally consecutive driving scene images, and use the remaining images other than the first to determine the accuracy of the scene classification model's recognition results, that is, the confidence of the driving scene label; when the confidence of the driving scene label exceeds a preset confidence threshold, the vehicle controls the moving vehicle, through the target shift mode, to drive in the target gear at the target shift time.

In one or more embodiments, the vehicle may perform image recognition on the temporally consecutive driving scene images through the scene classification model to obtain the driving scene label of each image. The vehicle determines the label differences between the driving scene labels and, when the differences between a preset number of driving scene labels are less than or equal to the preset difference threshold, filters out of the driving scene labels the target driving scene labels whose mutual differences are within the preset difference threshold, and determines the target shift mode according to the target driving scene labels, the driving state data, and the driving behavior data.

In the above embodiment, the vehicle is controlled based on the target shift mode only when the differences between a preset number of driving scene labels are less than or equal to the preset difference threshold, which improves the safety of vehicle gear control and reduces the probability of the vehicle being endangered by erroneous driving scene labels.
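The consecutive-frame consistency check described above can be sketched with label sets. The symmetric-difference metric below is an assumption standing in for whatever label distance the deployed system actually uses, and the thresholds are illustrative.

```python
def labels_stable(frame_labels: list, diff_threshold: int, count_threshold: int) -> bool:
    """Check whether enough consecutive frames agree with the first frame's labels.

    frame_labels: one frozenset of scene labels per captured frame.
    A frame 'agrees' when its symmetric difference with the reference
    frame's labels has at most diff_threshold elements.
    """
    reference = frame_labels[0]
    agreeing = sum(1 for labels in frame_labels
                   if len(labels ^ reference) <= diff_threshold)
    return agreeing >= count_threshold
```

Only when the agreement count reaches the preset count threshold would the shift decision be acted on; otherwise the labels are treated as unreliable.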
In one or more embodiments, the vehicle gear control method further includes: determining the label confidences corresponding to the road attribute label, the traffic attribute label, and the environment attribute label; and when the label confidences of the road attribute label, the traffic attribute label, and the environment attribute label are all greater than or equal to a preset confidence threshold, controlling, through the target shift mode, the vehicle in motion to drive in the target gear at the target shift time.

Specifically, when the scene classification model outputs the road attribute label, the traffic attribute label, and the environment attribute label, it may also output the label confidence of each. The vehicle judges whether the label confidences of the road, traffic, and environment attribute labels are all greater than or equal to the preset confidence threshold; if so, the driving scene labels obtained from the driving scene image are considered highly accurate, and the vehicle controls itself, through the target shift mode, to drive in the target gear at the target shift time. If any driving scene label's confidence is below the preset confidence threshold, the target shift mode may be considered erroneous, and the vehicle refrains from driving in the target gear at the target shift time.

In the above embodiment, the vehicle is controlled based on the target shift mode only when the label confidences of the road, traffic, and environment attribute labels are all greater than or equal to the preset confidence threshold, which improves the safety of vehicle gear control and reduces the probability of the vehicle being endangered by erroneous driving scene labels.
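The confidence gate described above reduces to an all-of check over the three label families. The dictionary keys and the threshold value are illustrative assumptions.

```python
def gate_on_confidence(confidences: dict, threshold: float) -> bool:
    """Apply the target shift mode only if every label family clears the threshold.

    confidences: per-family confidence, e.g. {"road": 0.9, "traffic": 0.85, "env": 0.95}.
    """
    return all(c >= threshold for c in confidences.values())
```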
In one or more embodiments, determining the target shift mode matching the vehicle based on the driving state data, the driving behavior data, and the driving scene label includes: determining the label priority of each driving scene label corresponding to the driving scene image; determining the label confidence of each driving scene label corresponding to the driving scene image; filtering out a target driving scene label from the driving scene labels corresponding to the driving scene image based on the label priorities and the label confidences; and determining the target shift mode matching the vehicle according to the target driving scene label, the driving state data, and the driving behavior data.

Specifically, a label priority may be preset for each driving scene label; for example, "steep uphill" may be set to the first priority, "low adhesion" to the second priority, and "strong light" to the third priority, where the first priority is higher than the second and the second higher than the third. After obtaining at least one driving scene label corresponding to the driving scene image, the vehicle may determine each driving scene label's priority and confidence, filter out of the driving scene labels the candidate driving scene labels with the highest priority, filter out of the candidate driving scene labels the target driving scene label with the highest confidence, and determine the target shift mode matching the vehicle based on the target driving scene label, the driving state data, and the driving behavior data.

For example, when the driving scene labels corresponding to the driving scene image are "steep uphill", "red light", and "low adhesion", with "steep uphill" at the first priority with 90% confidence, "red light" at the first priority with 70% confidence, and "low adhesion" at the second priority with 95% confidence, the vehicle may filter out the first-priority labels "steep uphill" and "red light" based on label priority, and then filter out the target driving scene label "steep uphill" from "steep uphill" and "red light" based on confidence.

In one or more embodiments, after obtaining the driving scene labels, the vehicle may determine each label's priority based on Table 1. For example, when the priority corresponding to the correction logic "prepare to downshift in advance on an uphill to avoid frequent shifting; on a downhill, make full use of engine braking to control speed" is the first priority, the label priorities corresponding to "steep uphill", "uphill", "gentle downhill", and "steep downhill" may be the first priority.

In the above embodiment, filtering the target driving scene label out of the driving scene labels by label priority and label confidence, and determining the corresponding target shift mode from the target driving scene label, lets the vehicle drive with a more reasonable target shift mode.
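The priority-then-confidence filtering described above can be sketched as follows, using the patent's own example values: "steep uphill" at priority 1 with 90% confidence wins over "red light" at priority 1 with 70% and "low adhesion" at priority 2 with 95%. The tuple layout is an assumption for illustration.

```python
def pick_target_label(labels: list) -> str:
    """Pick the target scene label: highest priority first, then highest confidence.

    Each entry is (label, priority, confidence); priority 1 outranks priority 2.
    """
    best_priority = min(p for _, p, _ in labels)
    candidates = [(name, conf) for name, p, conf in labels if p == best_priority]
    return max(candidates, key=lambda nc: nc[1])[0]
```

Note that "low adhesion" loses despite the highest confidence, because priority is filtered first and confidence only breaks ties within the top priority tier.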
In one or more embodiments, the vehicle gear control method includes: determining the current gear of the vehicle; and when the target gear in the target shift mode is inconsistent with the current gear, adjusting the vehicle from the current gear to the target gear at the target shift time, so as to control the vehicle to drive in the target gear at the target shift time.

Specifically, the vehicle determines its current gear and judges whether the current gear is consistent with the target gear in the target shift mode; if consistent, the current gear is kept unchanged; if inconsistent, the vehicle is adjusted from the current gear to the target gear at the target shift time, so as to control the vehicle to drive in the target gear at the target shift time.

Since the gear is changed only when the target gear in the target shift mode is inconsistent with the current gear, unnecessary gear changes can be reduced.
In one or more embodiments, referring to FIG. 6, a vehicle gear control method is provided, including the following steps:

S602: When the vehicle is driving, collect scene information around the vehicle in real time through a vehicle-mounted camera deployed on the vehicle, obtaining at least one driving scene image.

S604: Through the second road model, based on road features extracted from the driving scene image, output at least one of a road gradient label, a curve curvature label, a road adhesion label, a road evenness label, a signal light label, and a traffic sign label.

S606: Through the second traffic model, based on traffic features extracted from the driving scene image, output at least one of a pedestrian danger label, a pedestrian congestion label, a motor vehicle danger label, a motor vehicle congestion label, a non-motor vehicle danger label, and a non-motor vehicle congestion label.

S608: Through the second environment model, based on environment features extracted from the driving scene image, output at least one of a road visibility label, a weather condition label, and an illumination intensity label.

S610: Acquire driving state data and driving behavior data corresponding to the vehicle.

S612: Determine the target driving mode of the vehicle; acquire the target shift schedule table corresponding to the target driving mode; and look up the corresponding candidate shift mode in the target shift schedule table according to the driving state data and the driving behavior data.

S614: Determine the label priority of each driving scene label corresponding to the driving scene image; determine the label confidence of each driving scene label corresponding to the driving scene image.

S616: Based on the label priorities and label confidences, filter out a target driving scene label from the driving scene labels corresponding to the driving scene image, and determine the target shift mode matching the vehicle according to the target driving scene label, the driving state data, and the driving behavior data.

S618: Determine the gear range and shift time range corresponding to the driving scene label, the driving state data, and the driving behavior data.

S620: When the candidate gear in the candidate shift mode is outside the gear range, adjust the candidate gear according to the gear range to obtain the target gear matching the vehicle.

S622: When the candidate shift time in the candidate shift mode is outside the shift time range, adjust the candidate shift time according to the shift time range to obtain the target shift time matching the vehicle.

S624: When, among the driving scene labels corresponding to the multiple driving scene images, the differences between a preset number of driving scene labels are less than or equal to the preset difference threshold, control, through the target shift mode, the vehicle in motion to drive in the target gear at the target shift time.

S626: Determine the label confidences corresponding to the road attribute label, the traffic attribute label, and the environment attribute label; when the label confidences of all three are greater than or equal to the preset confidence threshold, control, through the target shift mode, the vehicle in motion to drive in the target gear at the target shift time.

In the above vehicle gear control method, by acquiring driving scene images during driving, image recognition can be performed on the driving scene images to obtain the corresponding driving scene labels. By acquiring the driving state data and driving behavior data, the target shift mode matching the vehicle can be determined based on the driving state data, the driving behavior data, and the driving scene labels, so that the vehicle in motion can be controlled to drive in the target gear at the target shift time. Since the target shift mode is determined by combining the driving state data, the driving behavior data, and the driving scene labels, the accuracy of the determined target shift mode is improved.
本申请还提供一种应用场景,该应用场景应用上述的车辆挡位控制方法。具体地,该车辆挡位控制方法在该应用场景的应用如下:
参考图7,步骤S702,当驾驶对象驾驶车辆行驶于行驶区域时,车辆可获取时间连续的至少一张行驶场景图像。步骤S704,车辆可通过场景分类模型对采集得到的行驶场景图像进行图像识别,得到对应的行驶场景标签。步骤S706,
车辆可确定驾驶对象选择的目标驾驶模式。步骤S708,车辆可确定与目标驾驶模式对应的目标换挡规律表(也可称作换挡规律map表),并基于目标换挡规律表确定与当前行驶状态数据和当前驾驶行为数据相对应的候选换挡方式。步骤S710,车辆基于行驶场景标签对候选换挡方式进行动态修正,得到目标换挡方式。步骤S712,车辆对目标换挡方式的合理性以及安全性进行监测。步骤S714,车辆根据监测结果,确定目标换挡方式是否合理、安全。S716,当确定目标换挡方式合理且安全时,车辆在目标换挡时间按照目标挡位进行行驶。图7示出了一个或多个实施例中车辆挡位控制方法的流程示意图。
本申请还另外提供一种应用场景,该应用场景应用上述的车辆挡位控制方法。具体地,该车辆挡位控制方法在该应用场景的应用如下:
在驾驶车辆行驶于行驶区域之前,可在车辆上架设车载摄像头,从而可通过车载摄像头采集车辆在行驶过程中的行驶场景图像。进一步地,车辆将行驶场景图像发送至服务器,由服务器通过场景分类模型对行驶场景图像进行图像识别,得到对应的行驶场景标签,并基于行驶状态数据、驾驶行为数据、及行驶场景标签,确定与车辆相匹配的目标换挡方式,将目标换挡方式发送至车辆,以使车辆在目标换挡时间按照目标挡位进行行驶。
应该理解的是,虽然图2和6的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,图2和6中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
在一个或多个实施例中,如图8所示,提供了一种车辆挡位控制装置800,该装置可以采用软件模块或硬件模块,或者是二者的结合成为计算机设备的一部分,该装置具体包括:图像获取模块802、标签识别模块804和换挡方式确定模块806,其中:
图像获取模块802,用于获取车辆在行驶过程中的行驶场景图像。
标签识别模块804,用于对获取的所述行驶场景图像进行图像识别,得到行驶场景标签;所述行驶场景标签至少包括道路属性标签、交通属性标签和环境属性标签中的一种。
换挡方式确定模块806,用于获取与所述车辆对应的行驶状态数据和驾驶行为数据;基于所述行驶状态数据、所述驾驶行为数据、及所述行驶场景标签,确定与所述车辆相匹配的目标换挡方式;所述目标换挡方式用于控制处于行驶过程中的所述车辆在目标换挡时间按照目标挡位进行行驶。
在一个或多个实施例中,参考图9,所述图像获取模块还用于当所述车辆处于行驶过程中时,通 过部署在所述车辆上的车载摄像头,实时对所述车辆周围的场景信息进行采集,得到至少一张行驶场景图像。
在一个或多个实施例中,所述标签识别模块804还包括特征提取模块8041,用于获取场景分类模型;通过场景分类模型,提取所述行驶场景图像中的图像特征;所述图像特征包括道路特征、交通特征和环境特征;通过所述场景分类模型,基于所述道路特征、所述交通特征和所述环境特征,确定与所述行驶场景图像对应的道路属性标签、交通属性标签和环境属性标签。
在一个或多个实施例中,所述标签识别模块804还用于通过场景分类模型输出行驶场景标签;场景分类模型包括与道路相关的第一道路模型、与交通相关的第一交通模型以及与环境相关的第一环境模型;第一道路模型至少包括道路坡度模型、弯道曲率模型、路面附着度模型、路面平整度模型、信号灯模型和交通指示牌模型中的一种;第一交通模型至少包括危险程度模型和拥堵状况模型中的一种;第一环境模型至少包括道路可见度模型、天气状况模型和光照强度模型中的一种;道路属性标签至少包括通过道路坡度模型输出的道路坡度标签、通过弯道曲率模型输出的弯道曲率标签、通过路面附着度模型输出的路面附着度标签、通过路面平整度模型输出的路面平整度标签、通过信号灯模型输出的信号灯标签和通过交通指示牌模型输出的交通指示牌标签中的一种;交通属性标签至少包括通过危险程度模型输出的危险程度标签、及通过拥堵状况模型输出的拥堵状况标签中的一种;环境属性标签至少包括通过道路可见度模型输出的道路可见度标签、通过天气状况模型输出的天气状况标签、及通过光照强度模型输出的光照强度标签中的一种。
在一个或多个实施例中,道路属性标签至少包括道路状况标签和道路设施标签中的一种,道路状况标签至少包括道路坡度标签、弯道曲率标签、路面附着度标签和路面平整度标签中的一种,道路设施标签至少包括信号灯标签和交通指示牌标签中的一种;交通属性标签至少包括行人标签、机动车标签和非机动车标签中的一种,行人标签至少包括行人危险程度标签和行人拥堵状况标签中的一种,机动车标签至少包括机动车危险程度标签和机动车拥堵状况标签中的一种,非机动车标签至少包括非机动车危险程度标签和非机动车拥堵状况标签中的一种;环境属性标签至少包括天气标签和光照标签中的一种,天气标签至少包括道路可见度标签和天气状况标签中的一种,光照标签至少包括光照强度标签;所述标签识别模块804还用于通过所述第二道路模型,基于所述道路特征,输出所述道路坡度标签、所述弯道曲率标签、所述路面附着度标签、所述路面平整度标签、所述信号灯标签和所述交通指示牌标签中的至少一种;通过所述第二交通模型,基于所述交通特征,输出所述行人危险程度标签、所述行人拥堵状况标签、所述机动车危险程度标签、所述机动车拥堵状况标签、所述非机动车危险程度标签和所述非机动车拥堵状况标签中的至少一种;通过所述第二环境模型,基于所述环境特征,输出所述道路可见度标签、所述天气状况标签和所述光照强度标签中的至少一种。
在一个或多个实施例中,所述换挡方式确定模块806还包括修正处理模块8061,用于基于所述行驶状态数据和所述驾驶行为数据,确定对应的候选换挡方式;基于所述行驶场景标签对所述候选换挡方式进行修正处理,得到与所述车辆相匹配的目标换挡方式。
在一个或多个实施例中,所述修正处理模块8061还用于确定所述车辆的目标驾驶模式;获取与所述目标驾驶模式相对应的目标换挡规律表;根据所述行驶状态数据和所述驾驶行为数据,在所述目标换挡规律表中查找对应的候选换挡方式。
在一个或多个实施例中,所述修正处理模块8061还用于确定与所述行驶场景标签、所述行驶状态数据和所述驾驶行为数据相对应的挡位范围和换挡时间范围;当所述候选换挡方式中的候选挡位超出所述挡位范围时,按照所述挡位范围调整所述候选挡位,得到与所述车辆相匹配的目标挡位;当所述候选换挡方式中的候选换挡时间超出所述换挡时间范围时,按照所述换挡时间范围调整所述候选换 挡时间,得到与所述车辆匹配的目标换挡时间。
在一个或多个实施例中,所述行驶场景图像的数量为多张,且多张所述行驶场景图像的采集时间相连续,所述车辆挡位控制装置800还包括监控模块808,用于确定多张所述行驶场景图像中,各所述行驶场景图像各自对应的行驶场景标签之间的标签差异;确定标签差异小于或等于预设差异阈值的行驶场景图像的数量;当所述数量大于或等于预设数量阈值时,通过所述目标换挡方式,控制处于行驶过程中的所述车辆在目标换挡时间按照目标挡位进行行驶。
在一个或多个实施例中,所述监控模块808还用于确定所述道路属性标签、所述交通属性标签和所述环境属性标签分别对应的标签置信度;当所述道路属性标签、所述交通属性标签和所述环境属性标签分别对应的标签置信度均小于或等于预设置信度阈值时,通过所述目标换挡方式,控制处于行驶过程中的所述车辆在目标换挡时间按照目标挡位进行行驶。
在一个或多个实施例中,所述换挡方式确定模块806还包括优先级确定模块8062,用于确定与所述行驶场景图像对应的各行驶场景标签各自对应的标签优先级;确定与所述行驶场景图像对应的各行驶场景标签各自对应的标签置信度;基于所述标签优先级和所述标签置信度,从与所述行驶场景图像对应的行驶场景标签中筛选出目标行驶场景标签;根据所述目标行驶场景标签、所述行驶状态数据和所述驾驶行为数据,确定与所述车辆相匹配的目标换挡方式。
在一个或多个实施例中,所述车辆挡位控制装置800还用于确定所述车辆的当前所处挡位;当所述目标换挡方式中的目标挡位与所述当前所处挡位不一致时,在所述目标换挡时间,将所述车辆从当前所处挡位调整至所述目标挡位,以控制所述车辆在所述目标换挡时间按照所述目标挡位进行行驶。
关于车辆挡位控制装置的具体限定可以参见上文中对于车辆挡位控制方法的限定,在此不再赘述。上述车辆挡位控制装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个或多个实施例中,提供了一种计算机设备,该计算机设备可以是电子设备、车辆或者服务器,其内部结构图可以如图10所示。该计算机设备包括通过***总线连接的处理器、存储器和网络接口。其中,该计算机设备的处理器用于提供计算和控制能力。该计算机设备的存储器包括非易失性存储介质、内存储器。该非易失性存储介质存储有操作***、计算机程序和数据库。该内存储器为非易失性存储介质中的操作***和计算机程序的运行提供环境。该计算机设备的数据库用于存储车辆挡位控制数据。该计算机设备的网络接口用于与外部的终端通过网络连接通信。该计算机程序被处理器执行时以实现一种车辆挡位控制方法。
本领域技术人员可以理解,图10中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个或多个实施例中,还提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述各方法实施例中的步骤。
在一个或多个实施例中,提供了一种计算机可读存储介质,存储有计算机程序,该计算机程序被处理器执行时实现上述各方法实施例中的步骤。
在一个或多个实施例中,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述各方法实施例中的步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和易失性存储器中的至少一种。非易失性存储器可包括只读存储器(Read-Only Memory,ROM)、磁带、软盘、闪存或光存储器等。易失性存储器可包括随机存取存储器(Random Access Memory,RAM)或外部高速缓冲存储器。作为说明而非局限,RAM可以是多种形式,比如静态随机存取存储器(Static Random Access Memory,SRAM)或动态随机存取存储器(Dynamic Random Access Memory,DRAM)等。
以上实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上所述实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对发明专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种车辆挡位控制方法,由计算机设备执行,所述方法包括:
    获取车辆在行驶过程中的行驶场景图像;
    对获取的所述行驶场景图像进行图像识别,得到行驶场景标签;所述行驶场景标签至少包括道路属性标签、交通属性标签和环境属性标签中的一种;获取与所述车辆对应的行驶状态数据和驾驶行为数据;及
    基于所述行驶状态数据、所述驾驶行为数据、及所述行驶场景标签,确定与所述车辆相匹配的目标换挡方式;所述目标换挡方式用于控制处于行驶过程中的所述车辆在目标换挡时间按照目标挡位进行行驶。
  2. 根据权利要求1所述的方法,其特征在于,所述获取车辆在行驶过程中的行驶场景图像,包括:
    当所述车辆处于行驶过程中时,通过部署在所述车辆上的车载摄像头,实时对所述车辆周围的场景信息进行采集,得到至少一张行驶场景图像。
  3. 根据权利要求1所述的方法,其特征在于,所述对获取的所述行驶场景图像进行图像识别,得到行驶场景标签,包括:
    获取场景分类模型;
    通过所述场景分类模型,提取所述行驶场景图像中的图像特征;所述图像特征包括道路特征、交通特征和环境特征;及
    通过所述场景分类模型,基于所述道路特征、所述交通特征和所述环境特征,确定与所述行驶场景图像对应的道路属性标签、交通属性标签和环境属性标签。
  4. 根据权利要求3所述的方法,其特征在于,所述场景分类模型包括与道路相关的第一道路模型、与交通相关的第一交通模型以及与环境相关的第一环境模型;
    所述第一道路模型至少包括道路坡度模型、弯道曲率模型、路面附着度模型、路面平整度模型、信号灯模型和交通指示牌模型中的一种;
    所述第一交通模型至少包括危险程度模型和拥堵状况模型中的一种;
    所述第一环境模型至少包括道路可见度模型、天气状况模型和光照强度模型中的一种;
    所述道路属性标签至少包括通过所述道路坡度模型输出的道路坡度标签、通过所述弯道曲率模型输出的弯道曲率标签、通过所述路面附着度模型输出的路面附着度标签、通过所述路面平整度模型输出的路面平整度标签、通过所述信号灯模型输出的信号灯标签和通过所述交通指示牌模型输出的交通指示牌标签中的一种;
    所述交通属性标签至少包括通过所述危险程度模型输出的危险程度标签、及通过所述拥堵状况模型输出的拥堵状况标签中的一种;及
    所述环境属性标签至少包括通过所述道路可见度模型输出的道路可见度标签、通过所述天气状况模型输出的天气状况标签、及通过所述光照强度模型输出的光照强度标签中的一种。
  5. 根据权利要求3所述的方法,其特征在于,所述场景分类模型包括第二道路模型、第二交通模型和第二环境模型;
    所述道路属性标签至少包括道路状况标签和道路设施标签中的一种,所述道路状况标签至少包括道路坡度标签、弯道曲率标签、路面附着度标签和路面平整度标签中的一种,所述道路设施标签至少包括信号灯标签和交通指示牌标签中的一种;
    所述交通属性标签至少包括行人标签、机动车标签和非机动车标签中的一种,所述行人标签至少包括行人危险程度标签和行人拥堵状况标签中的一种,所述机动车标签至少包括机动车危险程度标签和机动车拥堵状况标签中的一种,所述非机动车标签至少包括非机动车危险程度标签和非机动车拥堵状况标签中的一种;
    所述环境属性标签至少包括天气标签和光照标签中的一种,所述天气标签至少包括道路可见度标签和天气状况标签中的一种,所述光照标签至少包括光照强度标签;
    所述通过所述场景分类模型,基于所述道路特征、所述交通特征和所述环境特征,确定与所述行驶场景图像对应的道路属性标签、交通属性标签和环境属性标签,包括:
    通过所述第二道路模型,基于所述道路特征,输出所述道路坡度标签、所述弯道曲率标签、所述路面附着度标签、所述路面平整度标签、所述信号灯标签和所述交通指示牌标签中的至少一种;
    通过所述第二交通模型,基于所述交通特征,输出所述行人危险程度标签、所述行人拥堵状况标签、所述机动车危险程度标签、所述机动车拥堵状况标签、所述非机动车危险程度标签和所述非机动车拥堵状况标签中的至少一种;及
    通过所述第二环境模型,基于所述环境特征,输出所述道路可见度标签、所述天气状况标签和所述光照强度标签中的至少一种。
  6. 根据权利要求1所述的方法,其特征在于,所述基于所述行驶状态数据、所述驾驶行为数据、及所述行驶场景标签,确定与所述车辆相匹配的目标换挡方式,包括:
    基于所述行驶状态数据和所述驾驶行为数据,确定对应的候选换挡方式;及
    基于所述行驶场景标签对所述候选换挡方式进行修正处理,得到与所述车辆相匹配的目标换挡方式。
  7. 根据权利要求6所述的方法,其特征在于,所述基于所述行驶状态数据和所述驾驶行为数据,确定对应的候选换挡方式,包括:
    确定所述车辆的目标驾驶模式;
    获取与所述目标驾驶模式相对应的目标换挡规律表;及
    根据所述行驶状态数据和所述驾驶行为数据,在所述目标换挡规律表中查找对应的候选换挡方式。
  8. 根据权利要求6所述的方法,其特征在于,所述目标换挡方式包括目标挡位和目标换挡时间;所述基于所述行驶场景标签对所述候选换挡方式进行修正处理,得到与所述车辆相匹配的目标换挡方式,包括:
    确定与所述行驶场景标签、所述行驶状态数据和所述驾驶行为数据相对应的挡位范围和换挡时间范围;
    当所述候选换挡方式中的候选挡位超出所述挡位范围时,按照所述挡位范围调整所述候选挡位,得到与所述车辆相匹配的目标挡位;及
    当所述候选换挡方式中的候选换挡时间超出所述换挡时间范围时,按照所述换挡时间范围调整所述候选换挡时间,得到与所述车辆匹配的目标换挡时间。
  9. 根据权利要求6所述的方法,其特征在于,所述行驶场景图像的数量为多张,且多张所述行驶场景图像的采集时间相连续,所述方法还包括:
    确定多张所述行驶场景图像中,各所述行驶场景图像各自对应的行驶场景标签之间的标签差异;
    确定标签差异小于或等于预设差异阈值的行驶场景图像的数量;及
    当所述数量大于或等于预设数量阈值时,通过所述目标换挡方式,控制处于行驶过程中的所述车 辆在目标换挡时间按照目标挡位进行行驶。
  10. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    确定所述道路属性标签、所述交通属性标签和所述环境属性标签分别对应的标签置信度;及
    当所述道路属性标签、所述交通属性标签和所述环境属性标签分别对应的标签置信度均大于或等于预设置信度阈值时,通过所述目标换挡方式,控制处于行驶过程中的所述车辆在目标换挡时间按照目标挡位进行行驶。
  11. 根据权利要求1所述的方法,其特征在于,所述基于所述行驶状态数据、所述驾驶行为数据、及所述行驶场景标签,确定与所述车辆相匹配的目标换挡方式,包括:
    确定与所述行驶场景图像对应的各行驶场景标签各自对应的标签优先级;
    确定与所述行驶场景图像对应的各行驶场景标签各自对应的标签置信度;
    基于所述标签优先级和所述标签置信度,从与所述行驶场景图像对应的行驶场景标签中筛选出目标行驶场景标签;及
    根据所述目标行驶场景标签、所述行驶状态数据和所述驾驶行为数据,确定与所述车辆相匹配的目标换挡方式。
  12. 根据权利要求1至11任意一项所述的方法,其特征在于,所述方法还包括:
    确定所述车辆的当前所处挡位;及
    当所述目标换挡方式中的目标挡位与所述当前所处挡位不一致时,在所述目标换挡时间,将所述车辆从当前所处挡位调整至所述目标挡位,以控制所述车辆在所述目标换挡时间按照所述目标挡位进行行驶。
  13. 一种车辆挡位控制装置,所述装置包括:
    图像获取模块,用于获取车辆在行驶过程中的行驶场景图像;
    标签识别模块,用于对获取的所述行驶场景图像进行图像识别,得到行驶场景标签;所述行驶场景标签至少包括道路属性标签、交通属性标签和环境属性标签中的一种;及
    换挡方式确定模块,用于获取与所述车辆对应的行驶状态数据和驾驶行为数据;基于所述行驶状态数据、所述驾驶行为数据、及所述行驶场景标签,确定与所述车辆相匹配的目标换挡方式;所述目标换挡方式用于控制处于行驶过程中的所述车辆在目标换挡时间按照目标挡位进行行驶。
  14. 根据权利要求13所述的装置,其特征在于,所述图像获取模块还用于当所述车辆处于行驶过程中时,通过部署在所述车辆上的车载摄像头,实时对所述车辆周围的场景信息进行采集,得到至少一张行驶场景图像。
  15. 根据权利要求13所述的装置,其特征在于,所述标签识别模块还包括特征提取模块,用于获取场景分类模型;通过所述场景分类模型,提取所述行驶场景图像中的图像特征;所述图像特征包括道路特征、交通特征和环境特征;及通过所述场景分类模型,基于所述道路特征、所述交通特征和所述环境特征,确定与所述行驶场景图像对应的道路属性标签、交通属性标签和环境属性标签。
  16. 根据权利要求13所述的装置,其特征在于,所述换挡方式确定模块还包括修正处理模块,用于基于所述行驶状态数据和所述驾驶行为数据,确定对应的候选换挡方式;及基于所述行驶场景标签对所述候选换挡方式进行修正处理,得到与所述车辆相匹配的目标换挡方式。
  17. 根据权利要求16所述的装置,其特征在于,所述修正处理模块还用于确定所述车辆的目标驾驶模式;获取与所述目标驾驶模式相对应的目标换挡规律表;及根据所述行驶状态数据和所述驾驶 行为数据,在所述目标换挡规律表中查找对应的候选换挡方式。
  18. 一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,所述处理器执行所述计算机程序时实现权利要求1至12中任一项所述的方法的步骤。
  19. 一种计算机可读存储介质,存储有计算机程序,所述计算机程序被处理器执行时实现权利要求1至12中任一项所述的方法的步骤。
  20. 一种计算机程序产品,包括计算机程序,所述计算机程序被处理器执行时实现权利要求1至12中任一项所述的方法的步骤。
PCT/CN2022/075722 2021-02-23 2022-02-09 Vehicle gear control method and apparatus, computer device and storage medium WO2022179396A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22758761.5A EP4266210A1 (en) 2021-02-23 2022-02-09 Vehicle gear control method and apparatus, computer device, and storage medium
US17/992,513 US20230089742A1 (en) 2021-02-23 2022-11-22 Vehicle gear control method and apparatus, computer device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110200539.1 2021-02-23
CN202110200539.1A CN112818910B (zh) 2021-02-23 2021-02-23 Vehicle gear control method and apparatus, computer device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/992,513 Continuation US20230089742A1 (en) 2021-02-23 2022-11-22 Vehicle gear control method and apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022179396A1 true WO2022179396A1 (zh) 2022-09-01

Family

ID=75864938

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075722 WO2022179396A1 (zh) 2021-02-23 2022-02-09 Vehicle gear control method and apparatus, computer device, and storage medium

Country Status (4)

Country Link
US (1) US20230089742A1 (zh)
EP (1) EP4266210A1 (zh)
CN (1) CN112818910B (zh)
WO (1) WO2022179396A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4358039A1 (en) * 2022-10-19 2024-04-24 Zenseact AB Lane-assignment for traffic objects on a road

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN112092797B * 2020-09-23 2021-09-21 中国第一汽车股份有限公司 Multi-dimension-based gear shift control method and apparatus, vehicle, and medium
CN112818910B * 2021-02-23 2022-03-18 腾讯科技(深圳)有限公司 Vehicle gear control method and apparatus, computer device, and storage medium
CN113232674A * 2021-05-28 2021-08-10 南京航空航天大学 Vehicle control method and device based on decision tree and Bayesian network
CN113392804B * 2021-07-02 2022-08-16 昆明理工大学 Multi-angle-based scene construction method and system for a traffic police target dataset
US11938960B1 * 2021-07-12 2024-03-26 Amazon Technologies, Inc. Systems and methods for controlling vehicles with navigation markers
CN114941710B * 2022-05-12 2024-03-01 上海伯镭智能科技有限公司 Gear switching control method for an unmanned mining truck
CN115440028B * 2022-07-22 2024-01-30 中智行(苏州)科技有限公司 Label-based traffic road scene classification method
KR20240015792A * 2022-07-27 2024-02-06 재단법인 지능형자동차부품진흥원 Scenario labeling system and method using sensor data measurement of an autonomous vehicle
CN115476857B * 2022-08-26 2023-08-04 桂林电子科技大学 Method and device for controlling gear shifting based on natural driving data
CN117948419A * 2022-10-18 2024-04-30 广州汽车集团股份有限公司 Scene-based predictive shift schedule adaptation method and related device
CN116044990B * 2023-03-29 2023-06-27 北京集度科技有限公司 Vehicle gear switching method and apparatus, electronic device, medium, and program product

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108644374A * 2018-05-02 2018-10-12 天津职业技术师范大学 Gearbox control method and device
CN110206878A * 2019-04-29 2019-09-06 东风商用车有限公司 Gear shift control method for an automatic transmission of a heavy vehicle
CN111457083A * 2020-03-18 2020-07-28 宁波上中下自动变速器有限公司 Automatic gear shift control method and system, and automobile
CN112818910A * 2021-02-23 2021-05-18 腾讯科技(深圳)有限公司 Vehicle gear control method and apparatus, computer device, and storage medium

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
US8099220B2 (en) * 2008-07-29 2012-01-17 GM Global Technology Operations LLC Method for adapting shift schedule of an automatic transmission based on GPS/map information
KR20160034769A * 2014-09-22 2016-03-30 현대자동차주식회사 Shift control apparatus and method for automatic transmission
DE102015111938A1 * 2015-07-22 2017-01-26 Getrag Getriebe- Und Zahnradfabrik Hermann Hagenmeyer Gmbh & Cie Kg Shift arrangement and method for actuating a drivetrain
US9645577B1 (en) * 2016-03-23 2017-05-09 nuTonomy Inc. Facilitating vehicle driving and self-driving
CN108510771A * 2017-02-27 2018-09-07 奥迪股份公司 Driver assistance system and vehicle including the driver assistance system
US11763410B1 (en) * 2018-02-25 2023-09-19 Matthew Roy Mobile payment system for traffic prioritization in self-driving vehicles
CN108995655B * 2018-07-06 2020-04-10 北京理工大学 Driver driving intention recognition method and system
CN110725944B * 2018-07-17 2020-12-18 长城汽车股份有限公司 Gear shift compensation control method and system for a vehicle, and vehicle
CN109237011A * 2018-09-11 2019-01-18 江苏大学 Automatic transmission gear shift control method incorporating a driving behavior prediction model
CN109866765A * 2019-01-11 2019-06-11 苏州工业园区职业技术学院 Safe driving system for an unmanned electric vehicle
CN109857002B * 2019-01-15 2021-07-20 北京百度网讯科技有限公司 Data collection method, apparatus, device, and computer-readable storage medium
CN109829395B * 2019-01-15 2022-03-08 北京百度网讯科技有限公司 Unmanned-vehicle-based data processing method, apparatus, device, and storage medium
CN111666921B * 2020-06-30 2022-05-20 腾讯科技(深圳)有限公司 Vehicle control method and apparatus, computer device, and computer-readable storage medium
CN112092797B * 2020-09-23 2021-09-21 中国第一汽车股份有限公司 Multi-dimension-based gear shift control method and apparatus, vehicle, and medium
US11655893B1 (en) * 2021-10-25 2023-05-23 Ambarella International Lp Efficient automatic gear shift using computer vision


Also Published As

Publication number Publication date
EP4266210A1 (en) 2023-10-25
CN112818910B (zh) 2022-03-18
CN112818910A (zh) 2021-05-18
US20230089742A1 (en) 2023-03-23

Similar Documents

Publication Publication Date Title
WO2022179396A1 (zh) Vehicle gear control method and apparatus, computer device, and storage medium
US11341667B2 (en) Camera systems using filters and exposure times to detect flickering illuminated objects
US20190332875A1 (en) Traffic Signal State Classification for Autonomous Vehicles
US20190130220A1 (en) Domain adaptation via class-balanced self-training with spatial priors
EP2731844B1 (en) Vehicle control system and method
US11008005B2 (en) Vehicle travel control apparatus
JP2015089801A (ja) Driving control device
CN109754626B (zh) Autonomous lane-change strategy for driverless vehicles
US11499516B2 (en) Methods and systems for an adaptive stop-start inhibitor
CN111275249A (zh) Driving behavior optimization method based on a DQN neural network and high-precision positioning
CN110335484B (zh) Method and device for controlling vehicle driving
CN109800773B (zh) Off-road road surface extraction method based on three-dimensional lidar
CN110765224A (zh) Electronic map processing method, vehicle vision relocation method, and vehicle-mounted device
AU2022218458A1 (en) Scene security level determination method, device and storage medium
CN105620486B (zh) Driving mode determination device and method applied to vehicle energy management
CN115179928A (zh) Method and warning device for warning a vehicle user of a dangerous situation
CN114495060A (zh) Road traffic marking recognition method and device
Balcerek et al. Automatic detection of traffic lights changes from red to green and car turn signals in order to improve urban traffic
CN113815616B (zh) Vehicle control method and device
CN112861266B (zh) Control method and device for a device driving mode, medium, and electronic device
CN114895682A (zh) Cloud-data-based travel parameter correction method and system for an unmanned mining truck
Tian et al. Reliable identification of road surface condition considering shadow interference
RU2777850C1 (ru) System for generating a recuperative energy-efficient track for an operated vehicle equipped with a braking energy recuperation system, when the vehicle travels along a route section including a point of possible deceleration
WO2023282798A1 (en) System for generating a recuperation energy-efficient track for the vehicle
WO2023121516A9 (en) Method for generating a modified energy-efficient track for a vehicle

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22758761

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022758761

Country of ref document: EP

Effective date: 20230719

NENP Non-entry into the national phase

Ref country code: DE