CN116567895A - Vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle - Google Patents



Publication number
CN116567895A
Authority
CN
China
Prior art keywords: special effect, vehicle, atmosphere lamp, target, emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310389153.9A
Other languages
Chinese (zh)
Inventor
鲍迪
张署光
金龙一
董悦
范伟大
刘亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Box Zhixing Technology Co ltd
Original Assignee
Beijing Box Zhixing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Box Zhixing Technology Co ltd filed Critical Beijing Box Zhixing Technology Co ltd
Priority to CN202310389153.9A
Publication of CN116567895A
Legal status: Pending


Classifications

    • H - ELECTRICITY
    • H05 - ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B - ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 - Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 - Controlling the light source
    • H05B47/165 - Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q3/00 - Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
    • B60Q3/70 - Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors characterised by the purpose
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60Q - ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q3/00 - Arrangement of lighting devices for vehicle interiors; Lighting devices specially adapted for vehicle interiors
    • B60Q3/80 - Circuits; Control arrangements
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21 - LIGHTING
    • F21S - NON-PORTABLE LIGHTING DEVICES; SYSTEMS THEREOF; VEHICLE LIGHTING DEVICES SPECIALLY ADAPTED FOR VEHICLE EXTERIORS
    • F21S10/00 - Lighting devices or systems producing a varying lighting effect
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21 - LIGHTING
    • F21V - FUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
    • F21V23/00 - Arrangement of electric circuit elements in or on lighting devices
    • F21V23/003 - Arrangement of electric circuit elements in or on lighting devices the elements being electronics drivers or controllers for operating the light source, e.g. for a LED array
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • H - ELECTRICITY
    • H05 - ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05B - ELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00 - Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10 - Controlling the light source
    • H05B47/105 - Controlling the light source in response to determined parameters
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21 - LIGHTING
    • F21W - INDEXING SCHEME ASSOCIATED WITH SUBCLASSES F21K, F21L, F21S and F21V, RELATING TO USES OR APPLICATIONS OF LIGHTING DEVICES OR SYSTEMS
    • F21W2106/00 - Interior vehicle lighting devices
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 - Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 - Control techniques providing energy savings, e.g. smart controller or presence detection


Abstract

The application relates to a vehicle-mounted atmosphere lamp control method and device, electronic equipment and a vehicle, in the technical field of automobile atmosphere lamps. The method comprises the following steps: determining passenger emotion features and vehicle driving features; inputting the passenger emotion features and/or the vehicle driving features into a special effect prediction model to obtain a target special effect for the atmosphere lamp; and controlling the atmosphere lamp to display the target special effect. By combining passenger emotion features with vehicle driving features, the technical scheme automatically predicts a lighting special effect for the vehicle-mounted atmosphere lamp that matches the current passenger emotion features and/or vehicle driving features, improving the user's riding experience.

Description

Vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle
Technical Field
The application relates to the technical field of automobile atmosphere lamps, in particular to a vehicle-mounted atmosphere lamp control method, a vehicle-mounted atmosphere lamp control device, electronic equipment and a vehicle.
Background
The automobile atmosphere lamp is used to create atmosphere inside the vehicle; by emitting different colors it can create different in-car atmospheres and improve people's driving and riding experience. As people's quality of life improves, it has gradually become the first choice for enhancing the in-car atmosphere.
In the prior art, an automobile atmosphere lamp mainly performs rhythmic changes of its light; its functionality is too limited, so the user experience is poor.
Disclosure of Invention
In view of the above, the application provides a vehicle-mounted atmosphere lamp control method, a device, electronic equipment and a vehicle, which can intelligently regulate and control the light special effect of the vehicle-mounted atmosphere lamp and improve the riding experience of a user.
According to a first aspect of the present application, there is provided a vehicle-mounted atmosphere lamp control method, including:
determining passenger emotional characteristics and vehicle driving characteristics;
inputting the passenger emotion characteristics and/or the vehicle driving characteristics into a special effect prediction model to obtain a target special effect of the atmosphere lamp;
and controlling the atmosphere lamp to display the target special effect.
In some embodiments of the present disclosure, the determining the passenger emotional characteristics and the vehicle driving characteristics includes:
acquiring biological identification information of a passenger and vehicle driving state information, wherein the biological identification information comprises at least one of passenger image information and passenger audio information, and the vehicle driving state information comprises at least one of in-vehicle environment information, vehicle audio information, vehicle driving information and vehicle driving time information;
and respectively inputting the biological identification information and the vehicle driving state information into a feature identification model after training is completed, and acquiring the emotion features of the passengers and the driving features of the vehicles.
In some embodiments of the present disclosure, the target special effects include any one of a first target special effect, a second target special effect, and a third target special effect, and the inputting the passenger emotion feature and/or the vehicle driving feature into a special effect prediction model, to obtain the target special effect of the atmosphere lamp includes:
inputting the passenger emotion characteristics into the special effect prediction model to obtain a first target special effect of the atmosphere lamp; or
inputting the vehicle driving characteristics into the special effect prediction model to obtain a second target special effect of the atmosphere lamp; or
inputting the passenger emotion characteristics and the vehicle driving characteristics into the special effect prediction model to obtain a third target special effect of the atmosphere lamp.
In some embodiments of the present disclosure, the inputting the passenger emotion feature and the vehicle driving feature into the special effect prediction model, to obtain a third target special effect of the atmosphere lamp, includes:
determining fusion characteristics corresponding to the passenger emotion characteristics and the vehicle driving characteristics according to preset characteristic weights respectively corresponding to the passenger emotion characteristics and the vehicle driving characteristics;
inputting the fusion features into the pre-trained special effect prediction model to obtain a third target special effect corresponding to the fusion features.
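The weighted fusion step can be sketched as follows. This is a minimal illustration assuming fixed preset weights and simple numeric feature vectors; the function name, weights, and dimensions are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def fuse_features(emotion_feat, driving_feat, w_emotion=0.6, w_driving=0.4):
    """Scale each feature vector by its preset weight and concatenate them
    into a single fused feature vector for the prediction model."""
    emotion_feat = np.asarray(emotion_feat, dtype=float)
    driving_feat = np.asarray(driving_feat, dtype=float)
    return np.concatenate([w_emotion * emotion_feat, w_driving * driving_feat])

# Example: a 3-dim emotion feature and a 2-dim driving feature
fused = fuse_features([0.8, 0.1, 0.1], [0.2, 0.5])
```

The fused vector would then be passed to the pre-trained special effect prediction model as a single input.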
In some embodiments of the present disclosure, the controlling the atmosphere lamp to display the target special effect includes:
determining target display information of the target special effect, wherein the target display information comprises at least one of display tone, display azimuth, lighting mode and functional linkage;
and controlling the atmosphere lamp to display the target special effects according to the target display information, wherein the target special effects comprise at least one of lamplight special effects, dynamic special effects, flickering special effects and projection special effects.
In some embodiments of the present disclosure, the method further comprises a training method of the special effect prediction model:
extracting historical emotion characteristics of passengers and/or historical driving characteristics of vehicles in a historical driving time period, and historical special effect prediction results under the historical emotion characteristics and/or the historical driving characteristics;
and taking the historical emotion characteristics and/or the historical driving characteristics as input characteristics of the special effect prediction model, taking the historical special effect prediction result as a training label of the special effect prediction model, and iteratively training the special effect prediction model until the loss function of the special effect prediction model is smaller than a preset loss function threshold value, at which point the special effect prediction model is judged to have completed training.
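The threshold-stopped training loop described above can be sketched as follows. This uses a logistic-regression stand-in for the special effect prediction model; the toy historical data, learning rate, and loss threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def train_until_threshold(X, y, loss_threshold=0.3, lr=0.5, max_iters=5000):
    """Run gradient descent, stopping once the cross-entropy loss
    drops below the preset threshold (training judged complete)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    loss = float("inf")
    for _ in range(max_iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted probabilities
        loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        if loss < loss_threshold:                      # stop condition from the text
            break
        grad = p - y                                   # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b, loss

# Toy "historical" features (emotion dim, driving dim) and effect labels
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b, final_loss = train_until_threshold(X, y)
```

In practice the model would be a neural network rather than logistic regression, but the loss-threshold stopping criterion works the same way.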
According to a second aspect of the present application, there is provided an in-vehicle atmosphere lamp control device comprising:
the determining module is used for determining the emotion characteristics of the passengers and the driving characteristics of the vehicles;
the input module is used for inputting the emotion characteristics of the passengers and/or the driving characteristics of the vehicles into a special effect prediction model to obtain target special effects of the atmosphere lamp;
and the control module is used for controlling the atmosphere lamp to display the target special effect.
According to a third aspect of the present application, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the vehicle-mounted atmosphere lamp control method according to the first aspect.
According to a fourth aspect of the present application, there is provided an electronic device, comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, the processor implementing the vehicle-mounted atmosphere lamp control method according to the first aspect when executing the computer program.
According to a fifth aspect of the present application, there is provided a vehicle comprising: the electronic device of the fourth aspect.
By means of the above technical scheme, compared with existing vehicle-mounted atmosphere lamp control methods, the vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle provided herein first determine the passenger emotion characteristics and the vehicle driving characteristics; input the passenger emotion characteristics and/or the vehicle driving characteristics into a special effect prediction model to obtain a target special effect of the atmosphere lamp; and control the atmosphere lamp to display the target special effect. By combining passenger emotion characteristics with vehicle driving characteristics, the scheme automatically predicts the lighting special effect of the vehicle-mounted atmosphere lamp, obtaining a lighting special effect matched with the current passenger emotion characteristics and/or vehicle driving characteristics and improving the user's riding experience.
The foregoing description is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the content of the specification, and to make the above and other objects, features and advantages of the present application more comprehensible, the detailed description of the present application is given below.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a vehicle-mounted atmosphere lamp control method according to an embodiment of the disclosure;
fig. 2 is a schematic flow chart of a vehicle-mounted atmosphere lamp control according to an embodiment of the disclosure;
fig. 3 is a schematic flow chart of a vehicle-mounted atmosphere lamp control method according to another embodiment of the disclosure;
fig. 4 is a schematic structural diagram of a vehicle-mounted atmosphere lamp control device according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of a vehicle-mounted atmosphere lamp control device according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following describes a vehicle-mounted atmosphere lamp control method, a device, an electronic device and a vehicle according to embodiments of the present disclosure with reference to the accompanying drawings.
In the prior art, an automobile atmosphere lamp mainly performs rhythmic changes of its light; its functionality is too limited, so the user experience is poor.
In order to solve the technical problems, the present disclosure provides a vehicle-mounted atmosphere lamp control method, a device, an electronic device and a vehicle, which can intelligently regulate and control the light special effect of the vehicle-mounted atmosphere lamp, and improve the riding experience of a user.
As shown in fig. 1, an embodiment of the present disclosure provides a vehicle-mounted atmosphere lamp control method, including:
step 101, determining the emotion characteristics of the passengers and the driving characteristics of the vehicle.
Here, the emotional characteristics may be used to reflect the passenger's current emotional state, such as tiredness, pleasure, anger, tension, fear, sadness, neutrality, and the like. In a specific application scenario, passenger information data can be collected through sensors, biometric recognition and similar technologies to automatically recognize the passenger's emotional state and extract the passenger's emotional characteristics. The vehicle driving characteristics are used to reflect the driving state and the driving environment: the driving state may include driving speed, driving acceleration, vehicle driving time, vehicle audio, etc., and the driving environment may include in-vehicle ambient temperature, in-vehicle ambient humidity, in-vehicle ambient light, out-of-vehicle ambient temperature, out-of-vehicle ambient humidity, out-of-vehicle ambient light, out-of-vehicle air quality, vehicle emergency braking/collision, etc. In a specific application scenario, the vehicle driving characteristics can be further extracted by dynamically acquiring the interior and surrounding environment information of the vehicle and/or vehicle state information such as vehicle driving time and vehicle audio through environment sensing systems such as a driving recorder, cameras and sensors.
Regarding the execution body of the present disclosure, as shown in fig. 2, the vehicle control module may obtain, in real time, vehicle internal environment information, vehicle surrounding environment information, vehicle driving state information and the like, and detect the emotional state of a passenger by using a passenger emotional state detection module (such as a body temperature sensor, a heart rate detector, a camera, a microphone and the like), so that the vehicle control module may predict the special effect display result of the atmosphere lamp in the vehicle according to the vehicle driving state, the driving environment and the emotional state of the passenger, where the vehicle driving state information may include vehicle audio information, vehicle driving time information and the like.
And 102, inputting the emotion characteristics of the passengers and/or the driving characteristics of the vehicles into a special effect prediction model to obtain the target special effect of the atmosphere lamp.
The special effect prediction model is a model capable of determining, from the passenger's emotion characteristics and the vehicle driving characteristics, the special effect display corresponding to those characteristics. It will be appreciated that the special effect prediction model may take a variety of forms, which is not limited in this disclosure. The target special effect is the special effect obtained by inputting the emotion characteristics and/or the vehicle driving characteristics into the special effect prediction model.
For the embodiment of the disclosure, when predicting the special effect display of the atmosphere lamp, as a possible implementation manner, the special effect display result of the atmosphere lamp can be directly predicted according to the emotion characteristics of the passengers; as another possible implementation manner, the special effect display result of the atmosphere lamp can be predicted according to the driving characteristics of the vehicle; as another possible implementation manner, the prediction of the special effect display of the atmosphere lamp can be comprehensively realized by combining the emotion characteristics, the driving characteristics and other dimension characteristics which can influence the driving of the passengers. In order to ensure that the prediction result of the special effect display of the atmosphere lamp meets the requirements of users, in the embodiment of the disclosure, a mode of comprehensively predicting the special effect display result of the atmosphere lamp by combining multi-dimensional features is preferred. In the following example steps, the technical solutions in the present disclosure will be described by taking multidimensional features including emotional features and driving features as examples, but the technical solutions in the present disclosure are not limited in particular.
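The three prediction paths just described can be sketched as a simple dispatch. The model interface and the effect names below are hypothetical placeholders for illustration, not an implementation specified by the patent.

```python
def predict_target_effect(model, emotion_feat=None, driving_feat=None):
    """Route to the first, second, or third target effect depending on
    which feature sets are available, mirroring the three cases above."""
    if emotion_feat is not None and driving_feat is not None:
        return model.predict(list(emotion_feat) + list(driving_feat))  # third target effect
    if emotion_feat is not None:
        return model.predict(list(emotion_feat))                       # first target effect
    if driving_feat is not None:
        return model.predict(list(driving_feat))                       # second target effect
    raise ValueError("at least one feature set is required")

class DummyEffectModel:
    """Stand-in for the trained special effect prediction model."""
    def predict(self, feats):
        # Trivial placeholder rule mapping features to a named effect
        return "dynamic" if sum(feats) > 1.0 else "light"

model = DummyEffectModel()
effect = predict_target_effect(model, emotion_feat=[0.7], driving_feat=[0.6])
```

A real system would replace `DummyEffectModel` with the trained prediction model, but the branching over available feature sets would be the same.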
And 103, controlling the atmosphere lamp to display the target special effect.
For the disclosed embodiments, the target effect may include at least one of a lighting effect, a dynamic effect, a blinking effect, and a projection effect. In order to ensure that the prediction result of the special effect display of the atmosphere lamp meets the requirements of users, the riding experience of the users is improved, and the special effect prediction model controls the atmosphere lamp to display corresponding special effects according to the recognized emotion characteristics of passengers and/or driving characteristics of vehicles.
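As an illustrative sketch of controlling the display, each target special effect could be mapped to display information (tone, lighting mode, and so on). The effect names, tones, and modes below are assumptions for illustration and are not specified by the patent.

```python
# Hypothetical display information per target special effect
EFFECT_DISPLAY = {
    "light":      {"tone": "warm white", "mode": "steady"},
    "dynamic":    {"tone": "gradient",   "mode": "flowing"},
    "blinking":   {"tone": "amber",      "mode": "pulse"},
    "projection": {"tone": "cool blue",  "mode": "projected"},
}

def display_effect(effect):
    """Look up the display settings for a target special effect,
    falling back to a steady light when the effect is unknown."""
    return EFFECT_DISPLAY.get(effect, EFFECT_DISPLAY["light"])

settings = display_effect("blinking")
```

The lamp controller would then apply the looked-up tone and mode to the atmosphere lamp hardware.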
In summary, compared with current vehicle atmosphere lamp control modes, the vehicle-mounted atmosphere lamp control method provided by the present disclosure first determines the passenger's emotion characteristics and the vehicle driving characteristics; inputs the emotion characteristics and/or the vehicle driving characteristics into a special effect prediction model to obtain a target special effect of the atmosphere lamp; and controls the atmosphere lamp to display the target special effect. This technical scheme combines passenger emotion characteristics with vehicle driving characteristics to automatically predict the lighting special effect of the vehicle-mounted atmosphere lamp, obtaining a lighting special effect matched with the current passenger's emotion characteristics and/or the vehicle's driving characteristics and improving the user's riding experience.
Further, as a refinement and extension of the foregoing embodiment, in order to fully describe a specific implementation procedure of the method of the present embodiment, the present embodiment provides a specific method as shown in fig. 3, where the method includes:
step 201, acquiring biological identification information and vehicle driving state information of a passenger, and respectively inputting the biological identification information and the vehicle driving state information into a feature identification model after training to acquire emotional features and vehicle driving features of the passenger.
Wherein the biometric information may include at least one of passenger image information and passenger audio information: passenger image information may include passenger facial expressions, morphological features, hand features, heart rate, body temperature, etc.; the passenger audio information may include passenger intonation, mood, and the like. The vehicle running state information includes at least one of in-vehicle environment information, vehicle audio information, vehicle running information, and vehicle driving time information: the in-vehicle environmental information may include in-vehicle environmental temperature, in-vehicle environmental humidity, in-vehicle environmental light, in-vehicle air quality, and the like; the outside environment information may include outside environment temperature, outside environment humidity, outside ambient light, outside air quality, etc.; the vehicle travel information may include vehicle travel acceleration, vehicle scram/crash, etc.; the vehicle audio information may include music, video, etc. played by the vehicle in real time; the vehicle driving time information may include the current date and its corresponding key holidays (e.g., occupant birthday, spring festival, national celebration, etc.), and the driving time of the vehicle from start to the moment, etc. The passenger image information, the passenger audio information, the vehicle exterior environment information, the vehicle audio information, the vehicle travel information, and the vehicle driving time information may also include other information capable of reflecting such information, and are not particularly limited herein.
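The input categories listed above could be organized as simple containers. All field names here are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BiometricInfo:
    """Passenger image/audio inputs used for emotion recognition."""
    facial_expression: Optional[str] = None
    heart_rate_bpm: Optional[float] = None
    body_temp_c: Optional[float] = None
    voice_tone: Optional[str] = None

@dataclass
class DrivingStateInfo:
    """In-vehicle environment, audio, travel, and driving-time inputs."""
    cabin_temp_c: Optional[float] = None
    cabin_humidity_pct: Optional[float] = None
    acceleration_ms2: Optional[float] = None
    driving_minutes: Optional[float] = None
    audio_track: Optional[str] = None

info = BiometricInfo(facial_expression="happy", heart_rate_bpm=72.0)
```

Each container would be populated from the corresponding sensors before being fed to the feature recognition model.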
In a specific application scenario, the facial expressions of passengers can be acquired through sensors, biometric recognition and similar technologies; body temperature data can be acquired through a body temperature sensor; heart rate information can be monitored in real time through a heart rate detector; video data of passengers can be acquired through an in-vehicle camera, and facial expressions such as panic, happiness, pain, anger, fear and sadness can be recognized from the video data; morphological features of passengers can be extracted from action frames in the video data, such as the degree of deviation of the passenger's upper body along the driving direction per unit time, or a continuous eye-closing time exceeding a preset time threshold; and voice data of passengers can be acquired through a microphone.
For the embodiments of the present disclosure, the feature recognition model may be any computational model capable of implementing the emotion recognition and vehicle driving state recognition functions, such as a neural network model or a deep learning model. By way of example, the feature recognition model in the present disclosure may be a hidden Markov model, a conditional random field, or a neural network model, without limitation. In a specific application scenario, before executing the steps of this embodiment, the feature recognition model can be trained for the emotion feature recognition and driving feature recognition tasks through corresponding training modes in supervised learning, unsupervised learning, semi-supervised learning and reinforcement learning, based on a certain number of emotion recognition samples and driving state recognition samples, until the feature recognition model is judged to have reached a convergence state and is determined to have completed training. The trained feature recognition model can then be directly applied to feature recognition: the biometric information and the vehicle driving state information are respectively input into the trained feature recognition model, which directly outputs the passenger's emotion features and the vehicle driving features.
Taking a feature recognition model as a neural network model as an example, when the feature recognition model is trained, a training data set can be firstly constructed, wherein the training data set comprises various types of biological recognition information and various types of driving state information, and a preset emotion feature tag matched with the biological recognition information and a preset vehicle driving feature tag matched with the driving state information; respectively inputting training data sets with configured emotion feature tags and vehicle driving feature tags into feature recognition models, respectively taking biological identification information and vehicle driving state information in the training data sets as input features, respectively taking preset emotion feature tags corresponding to the biological identification information and preset vehicle driving feature tags corresponding to the vehicle driving state information as tag data, and respectively training the feature recognition models; the method comprises the steps of obtaining predicted emotion characteristics and predicted vehicle driving characteristics output by a characteristic recognition model, and respectively calculating a loss function of the characteristic recognition model according to preset emotion characteristic labels and the predicted emotion characteristics and preset vehicle driving characteristic labels and the predicted vehicle driving characteristics; if the loss function is smaller than the preset threshold value, the feature recognition model training is judged to be completed; if the loss function is determined to be greater than or equal to the preset threshold, the model parameters of the feature recognition model are iteratively updated by using the training data set until the loss function of the feature recognition model is less than the preset threshold, and the feature recognition model is determined to complete training.
Step 202, inputting the emotion characteristics of the passengers and/or the driving characteristics of the vehicles into a special effect prediction model to obtain target special effects of the atmosphere lamp, wherein the target special effects comprise any one of a first target special effect, a second target special effect and a third target special effect.
The first target special effect may include a lighting special effect, a dynamic special effect, and a blinking special effect; the second target special effect may include a blinking special effect and a projection special effect; and the third target special effect may include a dynamic special effect, a blinking special effect, and a projection special effect. It can be understood that the special effects included in the first, second, and third target special effects in the embodiments of the present disclosure may change according to different application scenarios, and are not particularly limited.
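The grouping just described can be expressed as plain sets. The groupings follow the example in the text; as the text notes, they may differ per application scenario, and the category names used as dictionary keys are assumptions for illustration.

```python
# Hedged sketch: effect composition of the three target-special-effect categories
# as described in the text. Key names ("first"/"second"/"third") are hypothetical.

LIGHTING, DYNAMIC, BLINKING, PROJECTION = "lighting", "dynamic", "blinking", "projection"

TARGET_EFFECTS = {
    "first":  {LIGHTING, DYNAMIC, BLINKING},    # predicted from passenger emotion only
    "second": {BLINKING, PROJECTION},           # predicted from vehicle driving features only
    "third":  {DYNAMIC, BLINKING, PROJECTION},  # predicted from both, via fused features
}
```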
For the embodiment of the present disclosure, the atmosphere lamp serves as an auxiliary light source; its light distribution and its functional linkage with the main light source can be designed accordingly. As shown in Table 1, this embodiment mainly describes the atmosphere light source (i.e., the atmosphere lamp light source).
Table 1:
In a specific application scenario, since the arrangement positions of the atmosphere lamps differ, the types of atmosphere lamps at different positions also differ so that the light they emit meets user requirements: atmosphere lamp (1), installed in the front-row instrument panel, is a hidden light source; atmosphere lamp (2), in the front-row door panel, is a surface light source; atmosphere lamp (3), in the rear-row door panel, is a surface light source; atmosphere lamp (4), on the ceiling, is a surface light source; and atmosphere lamp (5), below the front-row instrument panel and below the rear-row seats, is a point/line light source. The atmosphere lamps in the vehicle support special effect display, i.e., they support displaying the corresponding target special effects.
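The lamp layout above can be captured as a small lookup table. The lamp numbers and light-source types come from the paragraph; the field names and the helper function are hypothetical conveniences, not part of the patent.

```python
# Sketch of the lamp placement described in the text as a lookup table.

ATMOSPHERE_LAMPS = {
    1: {"position": "front-row instrument panel", "source": "hidden"},
    2: {"position": "front-row door panel",       "source": "surface"},
    3: {"position": "rear-row door panel",        "source": "surface"},
    4: {"position": "ceiling",                    "source": "surface"},
    5: {"position": "below front instrument panel / rear seats", "source": "point-line"},
}

def lamps_with_source(source):
    """Return the numbers of lamps whose light-source type matches `source`."""
    return sorted(n for n, cfg in ATMOSPHERE_LAMPS.items() if cfg["source"] == source)
```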
For the embodiment of the present disclosure, the target special effects corresponding to atmosphere lamps at different positions differ, as illustrated in Table 2.
Table 2:
The linkage functions may include an advanced driver assistance system (ADAS), vehicle audio recognition, biological recognition, in-vehicle environment recognition, and vehicle driving time recognition. The corresponding operators may include vehicle driving state information, user driving intention, vehicle collision or sudden braking, music, video, voice, expression, mood, intonation, morphological features, temperature, humidity, air quality, light data, road view data, key holidays, driving time, and the like, where the user driving intention operator is estimated from the current vehicle driving state. The corresponding operation model may be a model issued by the vehicle end or an end-cloud combined model. The light sources involved in different linkage functions also differ, as do the corresponding linkage methods. The linkage methods may include a blinking special effect, a projection special effect, a lighting special effect, a dynamic special effect, and the like. The blinking special effect may be a variable-frequency alternation of bright and dark; the projection special effect may be the projection of music and video content or of a voice avatar; the lighting special effect may be a light color, such as white, orange, gradient, cool tone, or warm tone; and the dynamic special effect may be a dynamic behavior of the light, such as dot-blinking light, star-blinking light, a dominant light source, or a hidden light source. It is understood that the specific linked special effects may vary according to the identified features and are not limited in this disclosure.
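A rule table pairing each linkage function with the lamps it drives and the special effect applied can be sketched as below. The two rules shown follow the worked examples in the text (audio recognition drives lamps (1)–(4) with the projection special effect; emotion recognition via biological recognition drives lamps (2)–(4) with lighting and dynamic special effects); the exact pairings are scenario-dependent assumptions.

```python
# Illustrative linkage rules: linkage function -> (lamps, special effects).
# Rule contents are assumptions drawn from the examples in the text.

LINKAGE_RULES = {
    "vehicle_audio_recognition": {"lamps": [1, 2, 3, 4], "effects": ["projection"]},
    "biological_recognition":    {"lamps": [2, 3, 4],    "effects": ["lighting", "dynamic"]},
}

def plan_linkage(function_name):
    """Expand a linkage function into concrete (lamp, effect) commands."""
    rule = LINKAGE_RULES.get(function_name)
    if rule is None:
        return []  # unknown linkage function: no lamps actuated
    return [(lamp, effect) for lamp in rule["lamps"] for effect in rule["effects"]]
```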
As an example, the audio recognition model issued by the vehicle end performs vehicle audio recognition on the vehicle's music, video, and the like to obtain audio features; the audio features are input into the special effect prediction model, which controls atmosphere lamps (1), (2), (3), and (4) to perform projection special effect linkage. The emotion recognition model can be trained in an end-cloud combined manner as follows. First, a training data set is constructed, which may include various types of biometric information sent by the vehicle end and preset emotion feature labels matched with the biometric information. The labeled biometric information is input into the emotion recognition model, with the biometric information in the training data set as input features and the matched preset emotion feature labels as label data, and the emotion recognition model is trained. The predicted emotion features output by the emotion recognition model are obtained, and the loss function of the emotion recognition model is calculated from the predicted emotion features and the preset emotion feature labels. If the loss function is smaller than a preset threshold, training of the emotion recognition model is judged to be complete; if the loss function is greater than or equal to the preset threshold, the model parameters of the emotion recognition model are iteratively updated using the training data set until the loss function falls below the preset threshold, at which point training is determined to be complete. The vehicle end then receives the trained emotion recognition model sent by the cloud, obtains the biometric information of the passengers in the vehicle through sensors, biometric technology, and the like, and the emotion recognition model receives the biometric information and computes the passenger emotion features. The special effect prediction model acquires the passenger emotion features sent by the emotion recognition model and controls atmosphere lamps (2), (3), and (4) to output the lighting special effect and dynamic special effect matched with the passenger emotion features.
For the embodiments of the present disclosure, atmosphere lamps at different arrangement positions have different light sources, and atmosphere lamps at the same arrangement position show different special effects depending on the feature information received. Correspondingly, inputting the passenger emotion features and/or the vehicle driving features into the special effect prediction model to obtain the target special effect of the atmosphere lamp includes: inputting the passenger emotion features into the special effect prediction model to obtain the first target special effect of the atmosphere lamp; or inputting the vehicle driving features into the special effect prediction model to obtain the second target special effect of the atmosphere lamp; or inputting both the passenger emotion features and the vehicle driving features into the special effect prediction model to obtain the third target special effect of the atmosphere lamp. As one possible implementation, obtaining the third target special effect includes: determining fusion features corresponding to the passenger emotion features and the vehicle driving features according to preset feature weights respectively corresponding to them; and inputting the fusion features into the pre-trained special effect prediction model to obtain the third target special effect corresponding to the fusion features.
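The fusion step for the third target special effect can be sketched as a weighted combination of the two feature vectors. The 0.6/0.4 weights are hypothetical defaults; the patent only specifies "preset feature weights", not their values or the fusion operator.

```python
# Minimal sketch, assuming a weighted sum as the fusion operator: combine the
# passenger emotion feature vector and the vehicle driving feature vector with
# preset weights before feeding the special effect prediction model.

def fuse_features(emotion_vec, driving_vec, w_emotion=0.6, w_driving=0.4):
    if len(emotion_vec) != len(driving_vec):
        raise ValueError("feature vectors must share a dimension before fusion")
    return [w_emotion * e + w_driving * d for e, d in zip(emotion_vec, driving_vec)]

fused = fuse_features([1.0, 0.0], [0.0, 1.0])
```

Other fusion operators (concatenation, learned attention) would fit the same claim language; the weighted sum is just the simplest instance.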
It should be noted that in this embodiment there are various training methods for the special effect prediction model. As one possible implementation, the steps may specifically include: extracting historical passenger emotion features and/or historical vehicle driving features within a historical driving time period, together with the historical special effect prediction results under those historical emotion features and/or historical driving features; and taking the historical emotion features and/or historical driving features as input features of the special effect prediction model, taking the historical special effect prediction results as training labels, and iteratively training the special effect prediction model until its loss function is smaller than a preset loss function threshold value, at which point training of the special effect prediction model is judged to be complete.
Step 203, determining target display information of the target special effect, and controlling the atmosphere lamp to display the target special effect according to the target display information.
The target display information may include at least one of a display hue, a display orientation, a lighting pattern, and a functional linkage; the target special effect may include at least one of a lighting special effect, a dynamic special effect, a blinking special effect, and a projection special effect. In a specific application scenario: the display hue of the target display information is matched with the lighting special effect of the target special effect; the display orientation is matched with the dynamic special effect; the lighting pattern is matched with the blinking special effect; and the functional linkage is matched with the projection special effect. The display hue may include a cool tone, a warm tone, and the like; the display orientation may include a dominant light source, a hidden light source, and the like; the lighting pattern may include breathing, rhythm, long-on lighting, and the like; and the functional linkage may include audio display and adjustment, projecting avatars, and the like, none of which are exhaustive here.
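The one-to-one pairing stated above can be written down directly. The dictionary mirrors the text; the snake_case field names are illustrative, and the comments carry the non-exhaustive option examples from the paragraph.

```python
# Sketch of the stated pairing: each display-information field matches one
# target-effect type. Field names are hypothetical identifiers.

DISPLAY_TO_EFFECT = {
    "display_hue":         "lighting",    # e.g. cool tone, warm tone
    "display_orientation": "dynamic",     # e.g. dominant vs. hidden light source
    "lighting_pattern":    "blinking",    # e.g. breathing, rhythm, long-on
    "functional_linkage":  "projection",  # e.g. audio display, projected avatar
}

def effect_for(display_field):
    """Look up which target-effect type a display-information field drives."""
    return DISPLAY_TO_EFFECT[display_field]
```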
For the embodiments of the present disclosure, the special effect prediction model can determine the target display information of the target special effect according to the identified passenger emotion features and/or vehicle driving features, and control the atmosphere lamp to display the target special effect according to the target display information. For example: the vehicle end receives the trained emotion recognition model sent by the cloud, obtains the biometric information of the passengers in the vehicle through sensors, biometric technology, and the like, and the emotion recognition model receives the biometric information and computes the passenger emotion features; the special effect prediction model acquires the passenger emotion features sent by the emotion recognition model and, according to the identified features, determines the target display information as a display hue and a display orientation; the special effect prediction model then controls atmosphere lamps (2), (3), and (4) to output the matched lighting special effect and dynamic special effect according to the display hue and display orientation.
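The example pipeline above — biometric input, then emotion recognition, then special effect prediction, then lamp commands — can be sketched end to end. All three stages are hypothetical stubs standing in for the trained models; the heart-rate heuristic, hue palette, and lamp list are invented for illustration only.

```python
# End-to-end control-flow sketch of the example pipeline. Every threshold and
# mapping below is a placeholder, not the patent's actual model behavior.

def recognize_emotion(biometric_info):
    # stub for the trained emotion recognition model
    return "calm" if biometric_info.get("heart_rate", 70) < 90 else "agitated"

def predict_effect(emotion):
    # stub for the special effect prediction model: emotion -> display info
    palette = {"calm": ("warm", "hidden"), "agitated": ("cool", "dominant")}
    hue, orientation = palette[emotion]
    return {"hue": hue, "orientation": orientation, "lamps": [2, 3, 4]}

def control_lamps(plan):
    # stub lamp controller: render one command string per targeted lamp
    return [f"lamp {n}: {plan['hue']} / {plan['orientation']}" for n in plan["lamps"]]

commands = control_lamps(predict_effect(recognize_emotion({"heart_rate": 72})))
```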
In summary, compared with existing vehicle-mounted atmosphere lamp control methods, the vehicle-mounted atmosphere lamp control method and device, electronic equipment, and vehicle provided by the present application can determine the passenger emotion features and the vehicle driving features; input the passenger emotion features and/or the vehicle driving features into a special effect prediction model to obtain the target special effect of the atmosphere lamp; and control the atmosphere lamp to display the target special effect. According to this technical scheme, the passenger emotion features and the vehicle driving features can be combined to automatically predict the light special effects of the vehicle-mounted atmosphere lamp, so that light special effects matched with the current passenger emotion features and/or vehicle driving features are obtained, improving the riding experience of users.
Based on the specific implementation of the methods shown in fig. 1 and fig. 3, this embodiment provides a vehicle-mounted atmosphere lamp control device, as shown in fig. 4, including: a determination module 31, an input module 32, a control module 33;
a determination module 31 operable to determine a passenger emotional characteristic and a vehicle driving characteristic;
the input module 32 may be configured to input the emotional characteristics of the passenger and/or the driving characteristics of the vehicle into a special effect prediction model to obtain a target special effect of the atmosphere lamp;
the control module 33 may be used to control the atmosphere lamp to display the target special effect.
In a specific application scenario, the determining module 31 is specifically configured to obtain biometric information of a passenger and vehicle driving state information, where the biometric information includes at least one of passenger image information and passenger audio information, and the vehicle driving state information includes at least one of in-vehicle environment information, vehicle audio information, vehicle driving information, and vehicle driving time information; and to input the biometric information and the vehicle driving state information respectively into the trained feature recognition model to obtain the passenger emotion features and the vehicle driving features.
In a specific application scenario, the target special effects include any one of a first target special effect, a second target special effect and a third target special effect, and the input module 32 is specifically configured to input the passenger emotion feature into the special effect prediction model to obtain the first target special effect of the atmosphere lamp; or inputting the driving characteristics of the vehicle into a special effect prediction model to obtain a second target special effect of the atmosphere lamp; or inputting the emotion characteristics of the passengers and the driving characteristics of the vehicles into a special effect prediction model to obtain a third target special effect of the atmosphere lamp.
In a specific application scenario, the input module 32 may be specifically configured to determine a fusion feature corresponding to the passenger emotion feature and the vehicle driving feature according to preset feature weights corresponding to the passenger emotion feature and the vehicle driving feature, respectively; and inputting the fusion characteristics into a pre-trained special effect prediction model to obtain a third target special effect corresponding to the fusion characteristics.
In a specific application scenario, the control module 33 is specifically configured to determine target display information of a target special effect, where the target display information includes at least one of a display tone, a display azimuth, a lighting mode, and a functional linkage; and controlling the atmosphere lamp to display target special effects according to the target display information, wherein the target special effects comprise at least one of lamplight special effects, dynamic special effects, flickering special effects and projection special effects.
In a specific application scenario, as shown in fig. 5, the apparatus may further include: an extraction module 34 and a training module 35;
the extracting module 34 may be configured to extract historical emotion characteristics of the passenger and/or historical driving characteristics of the vehicle during the historical driving time period, and historical special effect prediction results under the historical emotion characteristics and/or the historical driving characteristics;
the training module 35 may be configured to iteratively train the special effect prediction model using the historical emotion feature and/or the historical driving feature as input features of the special effect prediction model and the historical special effect prediction result as a training label of the special effect prediction model until a loss function of the special effect prediction model is less than a preset loss function threshold value, and determine that training of the special effect prediction model is completed.
It should be noted that, for other corresponding descriptions of each functional unit of the vehicle-mounted atmosphere lamp control device provided in this embodiment, reference may be made to the corresponding descriptions in fig. 1 and fig. 3, and no further description is given here.
Based on the above-described methods shown in fig. 1 and 3, correspondingly, the present embodiment further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above-described methods shown in fig. 1 and 3.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, a server, or a network device, etc.) to perform the method of each implementation scenario of the present application.
Based on the methods shown in fig. 1 and 3 and the virtual device embodiments shown in fig. 4 and 5, in order to achieve the above objects, the embodiments of the present application further provide an electronic device that may be configured on an end side of a vehicle (such as an electric automobile), where the device includes a storage medium and a processor; a storage medium storing a computer program; a processor for executing a computer program to implement the method as described above and shown in fig. 1 and 3.
Optionally, the entity device may further include a user interface, a network interface, a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WI-FI module, and so on. The user interface may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), etc.
It will be appreciated by those skilled in the art that the physical device structure described above is not limiting: the device may include more or fewer components, combine certain components, or employ a different arrangement of components.
The storage medium may also include an operating system, a network communication module. The operating system is a program that manages the physical device hardware and software resources described above, supporting the execution of information handling programs and other software and/or programs. The network communication module is used for realizing communication among all components in the storage medium and communication with other hardware and software in the information processing entity equipment.
Based on the above electronic device, the embodiment of the application further provides a vehicle, which may specifically include: the electronic equipment. The vehicle may be an electric automobile or the like.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus a necessary general hardware platform, or by hardware. According to the technical scheme, the passenger emotion features and the vehicle driving features can be combined to automatically predict the light special effects of the vehicle-mounted atmosphere lamp, so that light special effects matched with the current passenger emotion features and/or vehicle driving features are obtained and displayed, thereby improving the riding experience of users without requiring manual adjustment of the atmosphere lamp by the driver or passengers.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises an element.
The foregoing is merely a specific embodiment of the application to enable one skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A vehicle-mounted atmosphere lamp control method, characterized by comprising:
determining passenger emotional characteristics and vehicle driving characteristics;
inputting the passenger emotion characteristics and/or the vehicle driving characteristics into a special effect prediction model to obtain a target special effect of the atmosphere lamp;
and controlling the atmosphere lamp to display the target special effect.
2. The method of claim 1, wherein the determining the passenger emotional characteristics and the vehicle driving characteristics comprises:
acquiring biological identification information of a passenger and vehicle driving state information, wherein the biological identification information comprises at least one of passenger image information and passenger audio information, and the vehicle driving state information comprises at least one of in-vehicle environment information, vehicle audio information, vehicle driving information and vehicle driving time information;
and respectively inputting the biological identification information and the vehicle driving state information into a feature identification model after training is completed, and acquiring the emotion features of the passengers and the driving features of the vehicles.
3. The method according to claim 2, wherein the target special effects include any one of a first target special effect, a second target special effect and a third target special effect, and the inputting the passenger emotion feature and/or the vehicle driving feature into a special effect prediction model to obtain the target special effect of the atmosphere lamp includes:
inputting the emotion characteristics of the passengers into the special effect prediction model to obtain a first target special effect of the atmosphere lamp; or alternatively,
inputting the driving characteristics of the vehicle into the special effect prediction model to obtain a second target special effect of the atmosphere lamp; or alternatively,
and inputting the passenger emotion characteristics and the vehicle driving characteristics into the special effect prediction model to obtain a third target special effect of the atmosphere lamp.
4. A method according to claim 3, wherein said inputting the passenger emotional characteristics and the vehicle driving characteristics into the special effect prediction model to obtain a third target special effect of the mood light comprises:
determining fusion characteristics corresponding to the passenger emotion characteristics and the vehicle driving characteristics according to preset characteristic weights respectively corresponding to the passenger emotion characteristics and the vehicle driving characteristics;
inputting the fusion features into the pre-trained special effect prediction model to obtain a third target special effect corresponding to the fusion features.
5. The method of claim 1, wherein the controlling the atmosphere lamp to display the target effect comprises:
determining target display information of the target special effect, wherein the target display information comprises at least one of display tone, display azimuth, lighting mode and functional linkage;
and controlling the atmosphere lamp to display the target special effects according to the target display information, wherein the target special effects comprise at least one of lamplight special effects, dynamic special effects, flickering special effects and projection special effects.
6. The method of claim 3, further comprising a training method of the special effect prediction model:
extracting historical emotion characteristics of passengers and/or historical driving characteristics of vehicles in a historical driving time period, and historical special effect prediction results under the historical emotion characteristics and/or the historical driving characteristics;
and taking the historical emotion characteristics and/or the historical driving characteristics as input characteristics of the special effect prediction model, taking the historical special effect prediction result as a training label of the special effect prediction model, and iteratively training the special effect prediction model until the loss function of the special effect prediction model is smaller than a preset loss function threshold value, and judging that the training of the special effect prediction model is completed.
7. A vehicle-mounted atmosphere lamp control device, characterized by comprising:
the determining module is used for determining the emotion characteristics of the passengers and the driving characteristics of the vehicles;
the input module is used for inputting the emotion characteristics of the passengers and/or the driving characteristics of the vehicles into a special effect prediction model to obtain target special effects of the atmosphere lamp;
and the control module is used for controlling the atmosphere lamp to display the target special effect.
8. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any one of claims 1 to 6.
9. An electronic device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 6 when executing the computer program.
10. A vehicle, characterized by comprising: the electronic device of claim 9.
CN202310389153.9A 2023-04-12 2023-04-12 Vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle Pending CN116567895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310389153.9A CN116567895A (en) 2023-04-12 2023-04-12 Vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310389153.9A CN116567895A (en) 2023-04-12 2023-04-12 Vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle

Publications (1)

Publication Number Publication Date
CN116567895A true CN116567895A (en) 2023-08-08

Family

ID=87490708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310389153.9A Pending CN116567895A (en) 2023-04-12 2023-04-12 Vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle

Country Status (1)

Country Link
CN (1) CN116567895A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117979518A (en) * 2024-03-28 2024-05-03 深圳市易联科电子有限公司 Control method, device, equipment and storage medium for vehicle atmosphere lamp
CN117979518B (en) * 2024-03-28 2024-06-07 深圳市易联科电子有限公司 Control method, device, equipment and storage medium for vehicle atmosphere lamp

Similar Documents

Publication Publication Date Title
KR102645589B1 (en) Acoustic control system, apparatus and method
JP4305289B2 (en) VEHICLE CONTROL DEVICE AND VEHICLE CONTROL SYSTEM HAVING THE DEVICE
CN108725357A (en) Parameter control method, system based on recognition of face and cloud server
CN113459943B (en) Vehicle control method, device, equipment and storage medium
US11460309B2 (en) Control apparatus, control method, and storage medium storing program
CN110379443A (en) Voice recognition device and sound identification method
CN110876047A (en) Vehicle exterior projection method, device, equipment and storage medium
CN116567895A (en) Vehicle-mounted atmosphere lamp control method and device, electronic equipment and vehicle
CN109102801A (en) Audio recognition method and speech recognition equipment
CN112061059B (en) Screen adjusting method and device for vehicle, vehicle and readable storage medium
CN107458381A (en) A kind of motor vehicle driving approval apparatus based on artificial intelligence
Wan et al. Driving anger states detection based on incremental association markov blanket and least square support vector machine
KR20200020313A (en) Vehicle and control method for the same
CN115268334A (en) Vehicle window control method, device, equipment and storage medium
CN111681651A (en) Agent device, agent system, server device, agent device control method, and storage medium
CN114013445A (en) Vehicle user assistance system, vehicle user assistance device, and vehicle user assistance server
CN111724798B (en) Vehicle-mounted device control system, vehicle-mounted device control apparatus, vehicle-mounted device control method, and storage medium
JP2020144285A (en) Agent system, information processing device, control method for mobile body mounted apparatus, and program
CN111902864A (en) Method for operating a sound output device of a motor vehicle, speech analysis and control device, motor vehicle and server device outside the motor vehicle
CN114734912A (en) Method and device for reminding in cabin through atmosphere lamp, electronic equipment and storage medium
KR102036606B1 (en) System and method for provision of head up display information according to driver's condition and driving condition based on speech recognition
KR20230142243A (en) Method for processing dialogue, user terminal and dialogue system
JP7392827B2 (en) Speech recognition device and speech recognition method
JP7280066B2 (en) AGENT DEVICE, CONTROL METHOD OF AGENT DEVICE, AND PROGRAM
JP7176383B2 (en) Information processing device and information processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination