CN117891337A - Vehicle-mounted-based cloud travel scene construction method, device, system, medium and equipment - Google Patents


Info

Publication number
CN117891337A
CN117891337A
Authority
CN
China
Prior art keywords
interaction
scene
intention
man
parameters
Prior art date
Legal status
Pending
Application number
CN202311769705.5A
Other languages
Chinese (zh)
Inventor
李子伦
姜珊珊
高弈
普玮攀
周俊锋
杨静
黄晓攀
苏建业
卢万成
Current Assignee
United Automotive Electronic Systems Co Ltd
Original Assignee
United Automotive Electronic Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by United Automotive Electronic Systems Co Ltd filed Critical United Automotive Electronic Systems Co Ltd
Priority to CN202311769705.5A
Publication of CN117891337A


Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a vehicle-mounted cloud travel scene construction method, which comprises the following steps: determining a human-machine interaction mode in response to a first trigger operation of a user, wherein the human-machine interaction mode comprises human-machine interaction and non-human-machine interaction; if the mode is human-machine interaction, identifying the interaction intention of the user according to state parameters of the user collected by vehicle-mounted sensing equipment, and matching a cloud travel scene corresponding to the interaction intention; if the mode is non-human-machine interaction, determining a cloud travel scene in response to a second trigger operation of the user; and determining one or more target interaction objects based on the cloud travel scene, and controlling the one or more target interaction objects to execute interaction actions so as to complete construction of the cloud travel scene. The invention acquires passenger intention through vehicle-mounted sensing and recognition, compares it with information stored in the control system, and constructs a scene matched with the user's intention through the actions of the target interaction objects (intelligent cabin equipment), realizing a differentiated interaction experience.

Description

Vehicle-mounted-based cloud travel scene construction method, device, system, medium and equipment
Technical Field
The invention relates to the technical field of vehicle-mounted cabin entertainment systems, in particular to a vehicle-mounted-based cloud travel scene construction method, device, system, medium and equipment.
Background
With the development of intelligent connected vehicles, the intelligent cabin is gradually becoming people's third living space. The intelligent cabin based on human-machine co-driving aims to provide an intelligent, connected experience for drivers and passengers through intelligent human-machine interaction, meeting people's ever-growing needs in daily life. Within human-machine co-driving technology, immersive scene-based entertainment is an important factor in raising the level of intelligence. As the pace of life accelerates while the time needed for quality travel grows, cloud travel and video travel are becoming popular modern modes of tourism.
However, mainstream cabin entertainment today is centered on human-machine interaction through the central control screen, and both the intelligent experience and its extensibility leave room for improvement. For today's growing travel demand, cloud travel is mostly offered on conventional consumer electronics such as mobile phones: people reach the virtual world through on-screen interaction, but cannot obtain a more intelligent, immersive experience. Meanwhile, the intelligent cabin already contains a multi-channel sound system and an AR (augmented reality) system, and its air conditioning and lighting systems are increasingly intelligent, providing good conditions for an immersive experience; yet existing use cases concentrate on home-theater playback and rich APPs (applications), with insufficient attention paid to tourism-type use scenarios.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention aims to provide a vehicle-mounted cloud travel scene construction method, device, system, medium and equipment, which are used to solve the problems existing in the prior art.
To achieve the above and other related objects, the present invention provides a vehicle-mounted cloud travel scene construction method, which comprises:
determining a human-machine interaction mode in response to a first trigger operation of a user, wherein the human-machine interaction mode comprises human-machine interaction and non-human-machine interaction;
if the human-machine interaction mode is human-machine interaction, identifying the interaction intention of the user according to state parameters of the user collected by vehicle-mounted sensing equipment, and matching a cloud travel scene corresponding to the interaction intention; if the human-machine interaction mode is non-human-machine interaction, selecting a cloud travel scene in response to a second trigger operation of the user;
determining one or more target interaction objects based on the cloud travel scene, and retrieving scene parameters corresponding to the cloud travel scene from a pre-constructed scene database;
and controlling the one or more target interaction objects to execute interaction actions based on the scene parameters, so as to complete construction of the cloud travel scene.
In an embodiment of the present invention, the status parameters include: gesture parameters, facial expression parameters, and voice parameters.
In an embodiment of the present invention, the interactive object includes at least one of the following: sound equipment, display equipment, lighting equipment, air conditioning equipment, seat equipment and fragrance equipment.
In an embodiment of the present invention, before determining the man-machine interaction mode, the construction method further includes:
acquiring state information of a vehicle;
judging, based on the state information, whether the vehicle meets a preset condition; when the vehicle meets the preset condition, generating first prompt information indicating entry into a human-machine interaction mode selection mode, in which the human-machine interaction mode is determined through the first trigger operation; and when the vehicle does not meet the preset condition, generating second prompt information indicating that the human-machine interaction mode selection mode cannot be entered.
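The precondition check described above can be sketched as follows. This is an illustrative sketch, not code from the patent: the `VehicleState` fields, the parked-gear convention, and the prompt strings are all assumptions made for the example.

```python
# Hypothetical sketch: check a preset condition (here, that the vehicle
# is parked) before allowing entry into the human-machine interaction
# mode selection mode. Field names and prompt texts are assumptions.
from dataclasses import dataclass

@dataclass
class VehicleState:
    gear: str          # e.g. "P", "D", "R"
    speed_kph: float

def check_entry(state: VehicleState) -> tuple[bool, str]:
    """Return (allowed, prompt) for a mode-selection entry request."""
    if state.gear == "P" and state.speed_kph == 0:
        # First prompt: entry into the mode selection mode succeeds.
        return True, "Entering human-machine interaction mode selection"
    # Second prompt: entry is refused while the vehicle is not parked.
    return False, "Cannot enter mode selection: vehicle is not parked"
```

A real implementation would read these fields from the vehicle bus rather than a dataclass, but the gating logic would have this shape.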
In an embodiment of the present invention, the step of identifying the interaction intention of the user according to the state parameters comprises:
extracting intention from the gesture parameters to obtain a gesture intention;
extracting intention from the facial expression parameters to obtain a facial intention;
extracting intention from the voice parameters to obtain a voice intention;
and performing multi-modal intention fusion on the gesture intention, the facial intention and the voice intention to obtain the interaction intention.
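The patent does not specify a fusion algorithm, so the sketch below shows one plausible reading of the multi-modal fusion step: each modality yields candidate intents with confidence scores, and the fused interaction intent is the candidate with the highest weighted combined score. The weights and intent labels are illustrative assumptions.

```python
# Hypothetical multi-modal intention fusion: a weighted score
# combination over per-modality candidates. Weights are assumptions.
from collections import defaultdict

def fuse_intents(gesture: dict, facial: dict, voice: dict,
                 weights=(0.3, 0.2, 0.5)) -> str:
    """Each argument maps an intent label to a confidence in [0, 1]."""
    combined: dict = defaultdict(float)
    for modality, w in zip((gesture, facial, voice), weights):
        for intent, conf in modality.items():
            combined[intent] += w * conf
    # The fused interaction intent is the highest-scoring candidate.
    return max(combined, key=combined.get)
```

For example, with gesture `{"beach": 0.4}`, facial `{"beach": 0.5, "forest": 0.3}` and voice `{"beach": 0.9}`, the fused intent is `"beach"`.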
In an embodiment of the present invention, the step of constructing the scene database includes:
acquiring scene parameters;
and classifying the scene parameters based on the categories of the interaction scenes to obtain a plurality of scene parameter categories, wherein the scene parameters in the same scene parameter category are related to the same interaction scene.
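The scene-database construction step above amounts to grouping parameter records by the interaction scene they relate to. A minimal sketch, in which the record format (a dict with a `"scene"` key) is an assumption:

```python
# Hypothetical sketch of scene-database construction: classify scene
# parameters by interaction-scene category so that all parameters in
# one category relate to the same interaction scene.
from collections import defaultdict

def build_scene_database(parameters: list) -> dict:
    """Group parameter records (dicts with a 'scene' key) by scene."""
    database = defaultdict(list)
    for param in parameters:
        database[param["scene"]].append(param)
    return dict(database)
```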
To achieve the above and other related objects, the present invention further provides a vehicle-mounted cloud travel scene construction device, where the construction device comprises:
an interaction mode determining module, configured to determine a human-machine interaction mode in response to a first trigger operation of a user, wherein the human-machine interaction mode comprises human-machine interaction and non-human-machine interaction;
an interaction scene determining module, configured to, when the human-machine interaction mode is human-machine interaction, identify the interaction intention of the user according to state parameters of the user collected by vehicle-mounted sensing equipment, and match the cloud travel scene corresponding to the interaction intention; and, when the human-machine interaction mode is non-human-machine interaction, select a cloud travel scene in response to a second trigger operation of the user;
a scene parameter retrieving module, configured to determine one or more target interaction objects based on the cloud travel scene, and retrieve scene parameters corresponding to the cloud travel scene from a pre-constructed scene database;
and an interaction scene construction module, configured to control the one or more target interaction objects to execute interaction actions based on the scene parameters, so as to complete construction of the cloud travel scene.
To achieve the above and other related objects, the present invention further provides a vehicle-mounted cloud travel scene construction system, the system comprising: a human-machine interaction module, an intelligent perception module, a control module, a storage module and an executor module;
the human-machine interaction module is configured to receive a first trigger operation and a second trigger operation of a user, where the first trigger operation is used to determine a human-machine interaction mode, the human-machine interaction mode comprising human-machine interaction and non-human-machine interaction, and the second trigger operation is used to select an interaction scene under non-human-machine interaction;
the intelligent perception module is configured to collect state parameters of the user when the human-machine interaction mode is human-machine interaction, identify the interaction intention of the user according to the state parameters, and send the interaction intention to the control module;
the control module is configured to match the cloud travel scene corresponding to the interaction intention from the scene database in the storage module, determine one or more target interaction objects included in the executor module according to the interaction scene, retrieve scene parameters corresponding to the cloud travel scene from the pre-constructed scene database, and control the one or more target interaction objects to execute interaction actions based on the scene parameters;
and the executor module is configured to execute the interaction actions in response to control instructions of the control module, so as to complete construction of the cloud travel scene.
To achieve the above and other related objects, the present invention also provides a vehicle-mounted-based cloud travel scene construction apparatus, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the processors to perform one or more of the aforementioned construction methods.
To achieve the above and other related objects, the present invention also provides one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform one or more of the aforementioned construction methods.
As described above, the vehicle-mounted cloud travel scene construction method, device, system, medium and equipment provided by the invention have the following beneficial effects:
the invention discloses a construction method of a cloud travel scene, which comprises the following steps: responding to a first triggering operation of a user, and determining a human-computer interaction mode, wherein the human-computer interaction mode comprises human-computer interaction and non-human-computer interaction; if the man-machine interaction mode is man-machine interaction, identifying the interaction intention of the user according to the state parameters of the user acquired by the vehicle-mounted sensing equipment, and matching a cloud travel scene corresponding to the interaction intention based on the interaction intention; if the man-machine interaction mode is non-man-machine interaction, responding to a second triggering operation of the user, and selecting a cloud travel scene; determining one or more target interaction objects based on the cloud travel scene, and calling scene parameters corresponding to the cloud travel scene from a scene database constructed in advance; controlling the one or more target interactive objects to execute interaction based on scene parameters so as to complete the construction of a cloud travel scene; according to the invention, passenger intentions are acquired through vehicle-mounted sensing and recognition, compared with stored information in a control system, a scene matched with the user intentions is constructed, and differential interaction experience is realized.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an exemplary vehicle-based cloud travel scenario construction system of the present application;
FIG. 2 is a flow chart of an exemplary vehicle-based cloud travel scenario construction method of the present application;
FIG. 3 is a flow chart of an exemplary method for identifying a user's interaction intention;
FIG. 4 is a block diagram of an exemplary vehicle-based cloud travel scenario construction apparatus of the present application;
FIG. 5 illustrates a schematic diagram of a computer system suitable for use in implementing the equipment of embodiments of the present application.
Detailed Description
Further advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, with reference to the accompanying drawings and the preferred embodiments. The invention may also be practiced or applied through other, different embodiments, and the details of this description may be modified or varied for different viewpoints and applications without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention by way of illustration, and only the components related to the present invention are shown in the drawings and are not drawn according to the number, shape and size of the components in actual implementation, and the form, number and proportion of the components in actual implementation may be arbitrarily changed, and the layout of the components may be more complicated.
In the following description, numerous details are set forth in order to provide a more thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the embodiments of the present invention.
However, mainstream cabin entertainment today is centered on human-machine interaction through the central control screen, and both the intelligent experience and its extensibility leave room for improvement. For today's growing travel demand, cloud travel is mostly offered on conventional consumer electronics such as mobile phones: people reach the virtual world through on-screen interaction, but cannot obtain a more intelligent, immersive experience. Meanwhile, the intelligent cabin already contains a multi-channel sound system and an AR augmented reality system, and its air conditioning and lighting systems are increasingly intelligent, providing good conditions for an immersive experience; yet existing use cases concentrate on home-theater playback and rich APPs, with insufficient attention paid to tourism-type use scenarios. In addition, with the rapid development of electronic control systems and domain controllers, the digital information of cloud travel can be stored in a high-capacity controller, which can effectively integrate whole-vehicle resources, build an intelligent immersive experience, and interact more intelligently with a travel scene. Combining each module system of the intelligent cabin with cloud travel can effectively raise the intelligence level of the whole vehicle and bring a rich intelligent experience to vehicle occupants.
Embodiments of the present application respectively provide a vehicle-mounted cloud travel scene construction method, device, system, equipment, and storage medium.
Referring to fig. 1, fig. 1 is a schematic block diagram of a vehicle-based cloud travel scene construction system according to an exemplary embodiment of the present application. In fig. 1, the construction system includes a human-machine interaction module 110, an intelligent perception module 120, a control module 130, a storage module 140, and an executor module 150, where the executor module 150 includes: a multi-channel stereo sub-module 160, an AR augmented reality sub-module 170, an intelligent light actuator sub-module 180, and other intelligent actuator sub-modules 190.
The human-machine interaction module 110 is configured to receive a first trigger operation and a second trigger operation of a user, where the first trigger operation is used to determine a human-machine interaction mode, the human-machine interaction mode including human-machine interaction and non-human-machine interaction, and the second trigger operation is used to determine an interaction scene under non-human-machine interaction.
specifically, the man-machine interaction module may interact with the control module 130 for starting a man-machine interaction mode selection mode, selecting a scene, selecting a man-machine interaction mode, and the like. The starting of the man-machine interaction mode selection mode is judged through the state of the vehicle, if the state of the vehicle is a parking state, the man-machine interaction mode selection mode is started, and if the state of the vehicle is a non-parking state, the man-machine interaction mode selection mode is failed to be started; the man-machine interaction mode of the man-machine interaction module 110 mainly comprises central control screen control, voice recognition control, gesture control and the like, and is used for acquiring user instructions and executing operations such as selection, confirmation, cancellation and the like; the selection of the man-machine interaction mode is achieved through a first triggering operation of a user. Scene selection mainly involves retrieving, selecting, and storing installed scene information in the controller. The man-machine interaction mode comprises an interaction mode and a non-interaction mode; in the non-interactive mode, the control module 110 selects an interactive scene (determined through a second triggering operation) through a user, and sequentially completes the construction of an immersive cloud travel scene through the intelligent executors (the multi-channel stereo module 150, the AR augmented reality sub-module 160, the intelligent light executor sub-module 170 and the other intelligent executor sub-modules 180) according to the data stored in the storage module 140, so that normal play of the scene is realized, and the playing process is not influenced by gestures, voices and the like of the user in the cabin. 
In an interaction mode, the control module acquires state parameters (gestures, voices and the like) of a user in the vehicle through the intelligent sensing module, recognizes interaction intention of the user according to the state parameters of the user acquired by the vehicle-mounted sensing equipment, and matches a cloud travel scene corresponding to the interaction intention based on the interaction intention; after the interaction mode is determined, one or more target interaction objects are determined based on the cloud travel scene, and scene parameters corresponding to the cloud travel scene are called from a scene database constructed in advance; and controlling the one or more target interaction objects to execute interaction actions based on the scene parameters so as to complete the construction of the cloud travel scene.
The control module 130 is configured to match a cloud travel scenario corresponding to the interaction intention from the scenario database in the storage module; determining one or more target interaction objects included by an executor module according to the interaction scene, calling scene parameters corresponding to the cloud travel scene from a pre-constructed scene database, and controlling the one or more target interaction objects to execute interaction actions based on the scene parameters;
specifically, the control module is used for receiving information, comprehensively judging and executing operations, including interaction with the man-machine interface. The control module can be started under the parking condition, and whether the control module is in a parking state or not is judged during starting, if the control module is in the parking state, the control module can enter the parking state, and if the control module is in the non-parking state, the control module refuses to enter the parking state and sends out an alarm prompt. And after entering the control module, judging the mode, and selecting an entered scene mode comprising an interactive mode and a non-interactive mode. And interacting with the storage module, acquiring scene data selected by a user, distinguishing modes for processing, and performing input sensing and output control. In the non-interactive mode, the control module 110 selects an interactive scene through a user, and sequentially completes the construction of an immersive cloud travel scene through the intelligent executors (the multi-channel stereo module 150, the AR augmented reality module 160, the intelligent light executor module 170 and other intelligent executor modules 180) according to the data stored in the storage module 140, so that the normal play of the scene is realized, and the playing process is not influenced by gestures, voices and the like of the user in the conference cabin. 
In an interaction mode, the control module acquires state parameters (gestures, voices and the like) of a user in the vehicle through the intelligent sensing module, recognizes interaction intention of the user according to the state parameters of the user acquired by the vehicle-mounted sensing equipment, and matches a cloud travel scene corresponding to the interaction intention based on the interaction intention; after the interaction mode is determined, one or more target interaction objects are determined based on the cloud travel scene, and scene parameters corresponding to the cloud travel scene are called from a scene database constructed in advance; and controlling the one or more target interaction objects to execute interaction actions based on the scene parameters so as to complete the construction of the cloud travel scene.
The storage module 140 is configured to store preset cloud travel scene parameters, including data such as multi-channel sound, stereoscopic video, and user-interaction data. The cloud travel scene data are stored per scene and can be updated via OTA. The module's data can be read by the control module and executed by the intelligent executors.
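A storage module of this kind can be sketched as a scene store with a read path for the control module and an update path for OTA delivery. The class and the update payload format below are assumptions for illustration; the patent only states that scene data are stored per scene and OTA-updatable.

```python
# Hypothetical sketch of the storage module: per-scene data readable
# by the control module and replaceable by an OTA update payload.
class SceneStorage:
    def __init__(self, scenes: dict):
        self._scenes = dict(scenes)

    def read(self, scene: str) -> dict:
        # Read path used by the control module.
        return self._scenes[scene]

    def apply_ota_update(self, payload: dict) -> None:
        # Merge updated or newly installed scene data delivered OTA.
        self._scenes.update(payload)
```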
The intelligent perception module 120 is configured to collect a state parameter of a user when the man-machine interaction mode is man-machine interaction, identify an interaction intention of the user according to the state parameter, and send the interaction intention to the control module;
specifically, the intelligent perception module is used for acquiring user state parameters, identifying interaction intention of a user and transmitting the interaction intention to the control module under the condition of selecting a man-machine interaction mode, and the control module compares the interaction intention with scenes stored in the scene database in the storage module to obtain cloud travel scenes and acquire scene parameters and interaction objects. The intelligent perception module comprises camera gesture recognition, expression recognition, voice recognition and the like.
The executor module 150 is configured to execute one or more interaction actions in response to control instructions of the control module, so as to complete construction of the cloud travel scene.
The multi-channel stereo sub-module 160 is used to play the sound of the scene and create a stereophonic effect. It comprises the multi-channel stereo speakers in the cabin, supports independent adjustment of direction and volume so that the sound is more three-dimensional, can change in real time according to the scene data and the control module, and can simulate dynamic sounds of nature, animals, moving objects and the like. The AR augmented reality sub-module 170 is used to play the images of the scene. It comprises a front-windshield projection module, a sunroof projection module and the like, can play virtual stereoscopic images, and can achieve more accurate visual augmentation by setting and correcting viewing-angle deviation. The intelligent light actuator sub-module 180 is used to simulate illumination and similar information with light; it comprises ambient lamps and the various lamp modules in the cabin, and can combine scene information to simulate the sun, lighting and similar scenes. The other intelligent actuator sub-modules 190 include actuators such as air conditioning, seats and fragrance, used to enrich the scene; they can simulate wind, motion and similar effects: for example, seat vibration can simulate an earthquake or a rocking chair.
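The control module's job of driving these sub-modules can be sketched as a dispatch loop that routes each group of scene parameters to the matching actuator. The actuator names and the parameter schema below are assumptions; the patent only names the sub-module types.

```python
# Hypothetical dispatch of scene parameters to target interaction
# objects (actuator sub-modules). Unknown targets are skipped.
def dispatch(scene_params: dict, actuators: dict) -> list:
    """Send each parameter group to its actuator; return targets run."""
    executed = []
    for target, params in scene_params.items():
        actuator = actuators.get(target)
        if actuator is not None:
            actuator(params)
            executed.append(target)
    return executed
```

In practice each actuator callable would wrap a bus command to the sound, AR, light, or other sub-module; here they are stand-ins.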
The invention discloses a vehicle-mounted cloud travel scene construction system, an immersive interactive cloud travel system based on the intelligent cabin. Inside the vehicle cabin, a preset immersive scene is created through the various intelligent actuators of the intelligent cabin, and intelligent interaction with passengers or the driver is possible. The system makes full use of the intelligent actuators in the cabin, such as the multi-channel sound equipment, intelligent lights and AR display, to complete construction of an immersive cloud travel scene, and acquires the passengers' interaction intention through intelligent input sensors such as DMS (Driver Monitoring System) cameras and voice recognition to complete interaction with the cloud travel system. This gives full play to the intelligence of the cabin, raises the intelligence level of immersive cloud travel in the automotive intelligent cabin, brings people a more comfortable riding and human-machine interaction experience, further satisfies people's demand for immersive travel, and improves their sense of well-being.
Referring to fig. 2, fig. 2 is a flowchart of an exemplary vehicle-based cloud travel scenario construction method, where the construction method at least includes steps S210 to S240, and the following details are described:
step S210, a human-computer interaction mode is determined in response to a first triggering operation of a user, wherein the human-computer interaction mode comprises human-computer interaction and non-human-computer interaction;
step S220, if the man-machine interaction mode is man-machine interaction, identifying the interaction intention of the user according to the state parameters of the user acquired by the vehicle-mounted sensing equipment, and allocating a cloud travel scene corresponding to the interaction intention based on the interaction intention; if the man-machine interaction mode is non-man-machine interaction, responding to a second triggering operation of the user, and selecting a cloud travel scene;
step S230, determining one or more target interaction objects based on the cloud travel scene, and calling scene parameters corresponding to the cloud travel scene from a pre-constructed scene database;
step S240, controlling the one or more target interaction objects to execute interaction based on the scene parameters to complete the construction of the cloud travel scene.
According to the invention, the passenger's intention is acquired through vehicle-mounted sensing and recognition, compared with information stored in the control system, and a scene matched with the user's intention is constructed, realizing a differentiated interaction experience.
Each step in the above embodiment will be described in detail below.
In step S210, a human-computer interaction mode is determined in response to a first trigger operation of a user, wherein the human-computer interaction mode includes human-computer interaction and non-human-computer interaction;
Before the cloud travel scene is built, a user can start the immersive cloud travel system through the human-computer interaction module. After entering the system, the user needs to select a human-computer interaction mode, which comprises human-computer interaction and non-human-computer interaction.
Specifically, a mode selection interface is provided on the human-computer interaction module, and the user selects a human-computer interaction mode on the selection interface through a first trigger operation. The trigger operation may be a touch selection on the interface, or a selection by voice control, gesture control, or a specific action; the specific triggering manner is not limited in this embodiment.
In step S220, if the man-machine interaction mode is man-machine interaction, identifying an interaction intention of the user according to the state parameter of the user acquired by the vehicle-mounted sensing device, and matching a cloud travel scene corresponding to the interaction intention based on the interaction intention; if the man-machine interaction mode is non-man-machine interaction, responding to a second triggering operation of the user, and determining a cloud travel scene;
When the user selects human-computer interaction as the interaction mode, the state parameters of the user are collected through the vehicle-mounted sensing devices arranged on the vehicle, wherein the state parameters comprise gesture parameters, facial expression parameters and voice parameters. The interaction intention of the user is then identified based on the state parameters, and the cloud travel scene corresponding to that intention is matched. It should be noted that each cloud travel scene corresponds to one or more scene parameters, which may be dynamic parameters, static parameters, or a combination of the two. The output of these scene parameters is achieved by one or more interaction objects performing interaction actions. It should further be noted that the scene parameters, the associations between scene parameters and cloud travel scenes, and the associations between scene parameters and interaction objects are stored in the scene database; different cloud travel scenes correspond to different scene parameters and to different interaction objects. The scene parameters include: changes in sound, display of images, changes in lighting, changes in temperature, vibration of the seat, and the like.
Specifically, the adjustment of sound includes independent adjustment of direction and volume, which makes the sound more stereophonic and able to simulate dynamic sounds of nature, animals, moving objects and the like; this adjustment is realized through a multichannel stereo execution module. The adjustment of images can be realized through a front windshield projection module, a sunroof projection module and the like, enabling playback of virtual stereoscopic images. The adjustment of lighting can be realized through ambient lamps and the various lamp modules in the cabin, and can simulate scenes such as sunlight and artificial lighting.
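The associations described above, between scene parameters, cloud travel scenes, and the interaction objects that output them, can be represented with a simple record layout. The field names and sample values below are illustrative assumptions, not the schema of the patented scene database:

```python
# Hypothetical record layout for the scene database: each scene parameter is
# linked to both a cloud travel scene and the interaction object that outputs
# it; parameters may be static or dynamic.
from dataclasses import dataclass

@dataclass
class SceneParameter:
    name: str                 # e.g. "sound", "image", "light", "temperature"
    value: object
    dynamic: bool             # dynamic parameters change during playback
    scene: str                # associated cloud travel scene
    interaction_object: str   # object that outputs this parameter

PARAMETERS = [
    SceneParameter("sound", "waves.wav", True, "beach", "audio"),
    SceneParameter("light", "sunset", True, "beach", "ambient_lamp"),
    SceneParameter("temperature", 26, False, "beach", "air_conditioner"),
]

def parameters_for(scene):
    # Look up every parameter associated with one cloud travel scene.
    return [p for p in PARAMETERS if p.scene == scene]

beach = parameters_for("beach")
print([p.interaction_object for p in beach])
```

Storing the parameter-to-scene and parameter-to-object links explicitly is what lets step S230 later resolve both the scene parameters and the target interaction objects from a single scene identifier.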
Referring to fig. 3, fig. 3 is a flowchart illustrating an exemplary method for identifying an interaction intention of a user. In fig. 3, the step of identifying the user's interaction intention according to the state parameter includes:
step S310, extracting intention of the gesture parameters to obtain gesture intention;
step S320, extracting intention of the facial expression parameters to obtain facial intention;
step S330, extracting intention of the voice parameter to obtain voice intention;
step S340, performing multi-modal intent fusion on the gesture intent, the facial intent, and the voice intent to obtain an interaction intent.
Specifically, after the gesture parameters, voice parameters and facial expression parameters of the user are obtained, intention extraction is performed on each of them through a preset neural-network-based intention extraction model to obtain the gesture intention, facial intention and voice intention; intention fusion is then carried out using a pre-trained intention fusion model to obtain the interaction intention. It should be noted that the extraction of gesture intention, facial intention and voice intention may be implemented with different intention extraction models; that is, the gesture parameters are processed by a gesture intention extraction model, the facial expression parameters by a facial intention extraction model, and the voice parameters by a voice intention extraction model.
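The per-modality extraction plus fusion pipeline of fig. 3 (steps S310 to S340) can be sketched as below. A real system would use trained neural models; here each "model" is a stub returning an (intent, confidence) pair, and fusion is a simple confidence-weighted vote, which is an assumption for illustration, not the patented fusion model:

```python
# Toy multimodal intent pipeline; all models are stubs, fusion is a
# confidence-weighted vote (an illustrative assumption).

def gesture_intent_model(gesture_params):
    # S310: extract intent from gesture parameters.
    return gesture_params.get("intent", "none"), gesture_params.get("conf", 0.0)

def facial_intent_model(face_params):
    # S320: extract intent from facial expression parameters.
    return face_params.get("intent", "none"), face_params.get("conf", 0.0)

def voice_intent_model(voice_params):
    # S330: extract intent from voice parameters.
    return voice_params.get("intent", "none"), voice_params.get("conf", 0.0)

def fuse_intents(*scored_intents):
    # S340: accumulate confidence per candidate intent and pick the best.
    scores = {}
    for intent, conf in scored_intents:
        scores[intent] = scores.get(intent, 0.0) + conf
    return max(scores, key=scores.get)

interaction_intent = fuse_intents(
    gesture_intent_model({"intent": "beach", "conf": 0.4}),
    facial_intent_model({"intent": "relax", "conf": 0.3}),
    voice_intent_model({"intent": "beach", "conf": 0.5}),
)
print(interaction_intent)  # beach
```

Because two modalities agree on "beach" with combined confidence 0.9 against 0.3 for "relax", the fused interaction intent is "beach"; this is the sense in which fusion can resolve ambiguity that any single modality leaves open.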
When the user selects non-human-computer interaction as the interaction mode, a cloud travel scene is determined in response to a second trigger operation of the user.
It should be noted that a mode selection interface is provided on the human-computer interaction module, and the user selects the human-computer interaction mode on it through the first trigger operation. In the non-human-computer interaction mode, different cloud travel scenes appear on the selection interface for the user to choose from, and the user selects the desired cloud travel scene through a trigger operation according to actual requirements. The second trigger operation may be a touch selection on the interface, or a selection by voice control or gesture control; the specific triggering manner is not limited in this embodiment.
In step S230, determining one or more target interaction objects based on the cloud travel scene, and calling scene parameters corresponding to the cloud travel scene from a pre-constructed scene database;
After the cloud travel scene is determined, one or more target interaction objects can be determined from it, since each cloud travel scene is associated with corresponding interaction objects; at the same time, the scene parameters corresponding to the cloud travel scene are called from the pre-built scene database. The interaction objects include at least one of: sound equipment, display equipment, lighting equipment, air conditioning equipment, seat equipment and fragrance equipment.
In one embodiment, the step of constructing the scene database includes: acquiring scene parameters; and classifying the scene parameters based on the categories of the interaction scenes to obtain a plurality of scene parameter categories, wherein the scene parameters in the same scene parameter category are related to the same interaction scene.
Step S240, controlling the one or more target interaction objects to execute interaction based on the scene parameters to complete the construction of the cloud travel scene.
After the target interaction objects are determined, they are controlled to execute the corresponding interaction actions according to the scene parameters. Specifically: the sound equipment (multichannel stereo execution module), comprising the multichannel stereo speakers in the cabin, plays the sound of the scene to create a stereophonic effect; independent adjustment of direction and volume makes the sound more stereophonic and able to simulate dynamic sounds of nature, animals, moving objects and the like. The display equipment (AR enhanced display module), comprising a front windshield projection module, a sunroof projection module and the like, plays the images of the scene, including virtual stereoscopic images; by setting and correcting the viewing-angle deviation, more accurate visually enhanced display can be realized. The lighting equipment (intelligent lighting display module), comprising ambient lamps and the various lamp modules in the cabin, simulates illumination and, combined with the scene information, can simulate scenes such as sunlight and artificial lighting. The air conditioning equipment regulates the cabin temperature, and the seat equipment simulates scenarios such as an earthquake or a rocking chair through vibration.
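The dispatch in step S240, where each target interaction object receives the scene parameters and performs its action, can be sketched as a simple handler table. The object names, handler functions and string results are assumptions standing in for real actuator commands:

```python
# Sketch of step S240's dispatch to target interaction objects; the handler
# return values stand in for real actuator commands (illustrative only).

def play_audio(params):      return f"audio:{params['sound']}"
def show_image(params):      return f"display:{params['image']}"
def set_lighting(params):    return f"light:{params['light']}"
def set_temperature(params): return f"hvac:{params['temperature']}C"

HANDLERS = {
    "audio": play_audio,
    "display": show_image,
    "lighting": set_lighting,
    "hvac": set_temperature,
}

def execute_scene(targets, scene_params):
    # Each target interaction object executes its interaction action based
    # on the scene parameters called from the scene database.
    return [HANDLERS[t](scene_params) for t in targets]

actions = execute_scene(
    ["audio", "lighting", "hvac"],
    {"sound": "waves.wav", "light": "sunset", "temperature": 26},
)
print(actions)  # ['audio:waves.wav', 'light:sunset', 'hvac:26C']
```

Keeping the object-to-handler mapping in a table means new interaction objects (e.g. seat or fragrance equipment) can be added without changing the dispatch loop.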
In an embodiment, before determining the man-machine interaction mode, the construction method further includes: acquiring state information of the vehicle; judging whether the vehicle meets a preset condition based on the state information; generating first prompt information when the vehicle meets the preset condition, the first prompt information being used for indicating entry into a man-machine interaction mode selection mode, in which the man-machine interaction mode is determined through the first trigger operation; and generating second prompt information when the vehicle does not meet the preset condition, the second prompt information being used for indicating failure to enter the man-machine interaction mode selection mode.
Specifically, the state information of the vehicle may be its parking state. If the vehicle is parked, the preset condition is satisfied; the immersive cloud travel system is started and the first prompt information is generated, indicating entry into the man-machine interaction mode selection mode. After entering this mode, a selection interface can be displayed on the vehicle's screen, and the user completes the first trigger operation to select the interaction mode. If the vehicle is not parked, the preset condition is not met; the immersive cloud travel system cannot be entered, and prompt information is displayed on the vehicle's display interface indicating that entry into the man-machine interaction mode selection mode has failed.
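The precondition check described above, allowing entry only while the vehicle is parked, is a small gate function. The state field names and prompt messages below are illustrative assumptions:

```python
# Sketch of the entry precondition: the cloud travel system may only be
# entered while the vehicle is parked. Field names and messages are
# illustrative assumptions.

def check_entry(vehicle_state):
    if vehicle_state.get("parked", False):
        # First prompt: enter the interaction-mode selection mode.
        return True, "Entering interaction mode selection"
    # Second prompt: entry failed because the precondition is not met.
    return False, "Cannot enter mode selection: vehicle is not parked"

ok, msg = check_entry({"parked": True})
print(ok)   # True
ok2, msg2 = check_entry({"parked": False, "speed_kph": 42})
print(ok2)  # False
```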
In summary, under specific conditions the invention creates a preset immersive scene in the vehicle cabin through the various intelligent actuators of the intelligent cabin, and can perform intelligent interaction with a user (passenger or driver). The system makes full use of the intelligent actuators in the intelligent cabin, such as multichannel sound equipment, intelligent lighting and AR display, to complete construction of an immersive cloud travel scene, and acquires the user's interaction intention through intelligent input sensors, such as DMS (Driver Monitoring System) cameras and voice recognition, to complete interaction with the cloud travel system. This gives full play to the intelligence of the intelligent cabin, raises the level of intelligence of immersive cloud travel in the automobile's intelligent cabin, brings users a more comfortable riding and human-computer interaction experience, better satisfies their demand for immersive travel, and improves their quality of life.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present invention in any way.
FIG. 4 is a block diagram of a cloud travel scene construction apparatus illustrated in an exemplary embodiment of the present application, which may be applied to the implementation environment shown in fig. 1.
As shown in fig. 4, the vehicle-mounted cloud travel scene construction device includes:
an interaction mode determining module 410, configured to determine a human-computer interaction mode in response to a first trigger operation of a user, where the human-computer interaction mode includes human-computer interaction and non-human-computer interaction;
the interaction scene determining module 420, configured to identify an interaction intention of a user according to state parameters of the user acquired by the vehicle-mounted sensing device when the human-computer interaction mode is human-computer interaction, and match a cloud travel scene corresponding to the interaction intention based on that intention; and, when the human-computer interaction mode is non-human-computer interaction, to select a cloud travel scene in response to a second trigger operation of the user;
a scene parameter calling module 430, configured to determine one or more target interaction objects based on the cloud travel scene, and call scene parameters corresponding to the cloud travel scene from a pre-constructed scene database;
the interactive scene construction module 440, configured to control the one or more target interaction objects to perform interaction actions based on the scene parameters, so as to complete construction of the cloud travel scene.
It should be noted that the vehicle-based cloud travel scene construction device provided by the foregoing embodiment and the vehicle-based cloud travel scene construction method provided above belong to the same concept; the specific manner in which each module and unit performs its operations has been described in detail in the method embodiment and is not repeated here. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to perform all or part of the functions described above, which is not limited herein.
The embodiment of the application also provides a vehicle-mounted cloud travel scene construction device, which comprises: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the construction methods provided in the respective embodiments described above.
FIG. 5 illustrates a schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application. It should be noted that the computer system shown in fig. 5 is only an example and should not impose any limitation on the functions and application scope of the embodiments of the present application.
As shown in fig. 5, the computer system includes a central processing unit (Central Processing Unit, CPU) that can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) or a program loaded from a storage section into a random access Memory (Random Access Memory, RAM). In the RAM, various programs and data required for the system operation are also stored. The CPU, ROM and RAM are connected to each other by a bus. An Input/Output (I/O) interface is also connected to the bus.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse, etc.; an output section including a Cathode Ray Tube (CRT), a liquid crystal display (Liquid Crystal Display, LCD), and the like, and a speaker, and the like; a storage section including a hard disk or the like; and a communication section including a network interface card such as a LAN (Local Area Network ) card, a modem, or the like. The communication section performs communication processing via a network such as the internet. The drives are also connected to the I/O interfaces as needed. Removable media such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, and the like are mounted on the drive as needed so that a computer program read therefrom is mounted into the storage section as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the construction method shown in the flowchart of fig. 2. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion, and/or installed from a removable medium. When executed by the Central Processing Unit (CPU), the computer program performs the various functions defined in the system of the present application.
It should be noted that, the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may be, for example, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-Only Memory (ROM), an erasable programmable read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash Memory, an optical fiber, a portable compact disc read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with a computer-readable computer program embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Where each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware, and the described units may also be provided in a processor. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
Another aspect of the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of a computer, causes the computer to perform the method of constructing a cloud travel scene as described above. The computer-readable storage medium may be contained in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device.
Another aspect of the present application also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method for constructing the cloud travel scenario provided in the above embodiments.
The above embodiments merely illustrate the principles of the present invention and its effects, and are not intended to limit the invention. Those skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the appended claims.

Claims (10)

1. The vehicle-mounted-based cloud travel scene construction method is characterized by comprising the following steps of:
responding to a first triggering operation of a user, and determining a human-computer interaction mode, wherein the human-computer interaction mode comprises human-computer interaction and non-human-computer interaction;
if the man-machine interaction mode is man-machine interaction, identifying the interaction intention of the user according to the state parameters of the user acquired by the vehicle-mounted sensing equipment, and matching a cloud travel scene corresponding to the interaction intention based on the interaction intention; if the man-machine interaction mode is non-man-machine interaction, responding to a second triggering operation of the user, and selecting a cloud travel scene; different cloud travel scenes correspond to different scene parameters;
determining one or more target interaction objects based on the cloud travel scene, and calling scene parameters corresponding to the cloud travel scene from a scene database constructed in advance;
and controlling the one or more target interaction objects to execute interaction actions based on the scene parameters so as to complete the construction of the cloud travel scene.
2. The vehicle-based cloud travel scenario construction method of claim 1, wherein the state parameters include: gesture parameters, facial expression parameters, and voice parameters.
3. The vehicle-based cloud travel scenario construction method of claim 1, wherein the interactive object comprises at least one of: sound equipment, display equipment, lighting equipment, air conditioning equipment, seat equipment and fragrance equipment.
4. The vehicle-based cloud travel scenario construction method of claim 1, wherein before determining the human-machine interaction mode, the construction method further comprises:
acquiring state information of a vehicle;
judging whether a vehicle meets preset conditions or not based on the state information, and generating first prompt information when the vehicle meets the preset conditions, wherein the first prompt information is used for indicating to enter a man-machine interaction mode selection mode, and determining a man-machine interaction mode through the first triggering operation in the man-machine interaction mode selection mode; and generating second prompt information when the vehicle does not meet the preset condition, wherein the second prompt information is used for indicating failure to enter a man-machine interaction mode selection mode.
5. The vehicle-based cloud travel scenario construction method of claim 2, wherein said step of identifying a user's interaction intent based on said state parameter comprises:
extracting the intention of the gesture parameters to obtain the gesture intention;
extracting the intention of the facial expression parameters to obtain facial intention;
extracting intention of the voice parameters to obtain voice intention;
and carrying out multi-modal intention fusion on the gesture intention, the facial intention and the voice intention to obtain an interaction intention.
6. The vehicle-based cloud travel scenario construction method according to claim 1, wherein the step of constructing the scenario database includes:
acquiring scene parameters;
and classifying the scene parameters based on the categories of the interaction scenes to obtain a plurality of scene parameter categories, wherein the scene parameters in the same scene parameter category are related to the same interaction scene.
7. Vehicle-mounted-based cloud travel scene construction device is characterized in that the construction device comprises:
the interaction mode determining module is used for responding to a first triggering operation of a user and determining a human-computer interaction mode, wherein the human-computer interaction mode comprises human-computer interaction and non-human-computer interaction;
the interaction scene determining module is used for identifying the interaction intention of the user according to the state parameters of the user acquired by the vehicle-mounted sensing equipment when the man-machine interaction mode is man-machine interaction, and matching the cloud travel scene corresponding to the intention based on the interaction intention; when the man-machine interaction mode is non-man-machine interaction, responding to a second triggering operation of the user, and selecting a cloud travel scene;
the scene parameter calling module is used for determining one or more target interaction objects based on the cloud travel scene and calling scene parameters corresponding to the cloud travel scene from a scene database constructed in advance;
and the interactive scene construction module is used for controlling the one or more target interactive objects to execute interactive actions based on scene parameters so as to complete the construction of the cloud travel scene.
8. A vehicle-based cloud travel scenario construction system, the construction system comprising: the intelligent sensing system comprises a man-machine interaction module, an intelligent sensing module, a control module, a storage module and an actuator module;
wherein the human-computer interaction module is used for receiving a first trigger operation and a second trigger operation of a user; the first trigger operation is used for determining a human-computer interaction mode, which comprises human-computer interaction and non-human-computer interaction; the second trigger operation is used for selecting an interaction scene under non-human-computer interaction;
the intelligent perception module is used for collecting state parameters of a user when the man-machine interaction mode is man-machine interaction, identifying interaction intention of the user according to the state parameters and sending the interaction intention to the control module;
the control module is used for matching cloud travel scenes corresponding to the interaction intention from the scene database in the storage module; determining one or more target interaction objects included by an executor module according to the interaction scene, calling scene parameters corresponding to the cloud travel scene from a pre-constructed scene database, and controlling the one or more target interaction objects to execute interaction actions based on the scene parameters;
and the executor module is used for responding to the control instruction of the control module to perform one or more interaction actions, so as to complete the construction of the cloud travel scene.
9. Vehicle-mounted-based cloud travel scene construction equipment is characterized by comprising the following components:
one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the processors to perform the method of construction as recited in one or more of claims 1-6.
10. One or more machine readable media having instructions stored thereon that, when executed by one or more processors, cause the processors to perform the construction method of one or more of claims 1-6.
CN202311769705.5A 2023-12-20 2023-12-20 Vehicle-mounted-based cloud travel scene construction method, device, system, medium and equipment Pending CN117891337A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311769705.5A CN117891337A (en) 2023-12-20 2023-12-20 Vehicle-mounted-based cloud travel scene construction method, device, system, medium and equipment


Publications (1)

Publication Number Publication Date
CN117891337A 2024-04-16

Family

ID=90638782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311769705.5A Pending CN117891337A (en) 2023-12-20 2023-12-20 Vehicle-mounted-based cloud travel scene construction method, device, system, medium and equipment

Country Status (1)

Country Link
CN (1) CN117891337A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination