CN112061049B - Scene triggering method, device, equipment and storage medium - Google Patents

Scene triggering method, device, equipment and storage medium

Info

Publication number
CN112061049B
Authority
CN
China
Prior art keywords
module
triggering
scene
vehicle
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010930443.6A
Other languages
Chinese (zh)
Other versions
CN112061049A (en)
Inventor
丁磊
郑洲
王昶旭
赵叶霖
卢加浚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Internet Technology Co Ltd
Original Assignee
Human Horizons Shanghai Internet Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Human Horizons Shanghai Internet Technology Co Ltd filed Critical Human Horizons Shanghai Internet Technology Co Ltd
Priority to CN202010930443.6A priority Critical patent/CN112061049B/en
Publication of CN112061049A publication Critical patent/CN112061049A/en
Application granted granted Critical
Publication of CN112061049B publication Critical patent/CN112061049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/037 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a scene triggering method, apparatus, device, and storage medium. The method includes: detecting the positional relationship between a target object and a vehicle according to an execution request for a target scene; triggering a corresponding first scene execution module to execute a function corresponding to the positional relationship; triggering a second scene execution module to operate when the target object is detected in the target seat; and triggering a first display module to present a first AIC (Artificial Intelligence Creation) work when the target object is detected to have left the vehicle. By triggering different scene execution modules to execute corresponding functions under different detection conditions, the method can create a romantic surprise for the target object on an important day, provides intelligent vehicle services, and improves the user experience.

Description

Scene triggering method, device, equipment and storage medium
Technical Field
The present application relates to the field of intelligent vehicle technologies, and in particular, to a method, an apparatus, a device, and a storage medium for scene triggering.
Background
Various output devices are provided on a vehicle, such as a display screen, ambient lighting, seats, an audio system, and an air conditioner. These output devices typically perform their functions independently and cannot cooperate with each other to realize a coordinated scene. As vehicle intelligence develops, users expect vehicles to provide a greater variety of services.
Disclosure of Invention
Embodiments of the present application provide a scene triggering method, apparatus, device, and storage medium to solve the problems in the related art. The technical solution is as follows:
in a first aspect, an embodiment of the present application provides a scene triggering method, where the method includes:
detecting the position relation between a target object and a vehicle according to an execution request of a target scene;
triggering a corresponding first scene execution module to execute a function corresponding to the position relation according to the position relation;
triggering a second scene execution module to operate when it is detected that a target object is present in the target seat;
and triggering a first display module to present a first AIC work when it is detected that the target object has left the vehicle.
In a second aspect, an embodiment of the present application provides a scene triggering apparatus, including:
the position detection module is used for detecting the position relation between the target object and the vehicle according to the execution request of the target scene;
the first triggering module is used for triggering the corresponding first scene execution module to execute the function corresponding to the position relation according to the position relation;
the second trigger module is used for triggering the second scene execution module to work under the condition that the target seat is detected to have the target object;
and the third triggering module triggers the first display module to display the first AIC work under the condition that the target object is detected to leave the vehicle.
In a third aspect, an embodiment of the present application provides a scene trigger device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the scenario trigger method of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium storing computer instructions, and when the computer instructions are executed by a processor, the scene triggering method of the embodiments of the present application is implemented.
The advantages or beneficial effects of the above technical solution include at least the following: by triggering different scene execution modules to execute corresponding functions under different detection conditions once an execution request is received, a romantic surprise can be created for the target object on an important day, intelligent vehicle services are realized, and the user experience is improved.
The foregoing summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present application will be readily apparent by reference to the drawings and following detailed description.
Drawings
In the drawings, like reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily to scale. It is appreciated that these drawings depict only some embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope.
Fig. 1 is an application architecture diagram of a scene triggering method according to an embodiment of the present application;
fig. 2 is a flowchart of a scenario triggering method according to an embodiment of the present application;
FIG. 3 is a schematic view of a projection lamp display effect of an outer lamp module according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an ISD screen display effect of an exterior light module according to the present application;
FIG. 5 is a schematic view of an outer lamp module implemented in accordance with the present application;
fig. 6 is a timing diagram of an application example of a scene trigger method according to an embodiment of the present application;
fig. 7 is a flowchart of an application example of a scenario trigger method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a scene trigger apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of a scene triggering device according to an embodiment of the present application.
Detailed Description
In the following, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
The application provides a scene triggering method for triggering a corresponding scene execution module at a vehicle end to execute a corresponding function so as to realize a target scene.
In one example, as shown in fig. 1, each user (including the first user) may edit a scene at their mobile terminal or at the vehicle end (A1), upload the edited scene to the cloud scene server (A2), and save it to the scene database (A3). The mobile terminal includes mobile smart devices such as mobile phones and tablet computers. The management terminal may send a scene query request to the cloud scene server to request scenes (B1 and B2). The cloud scene server queries the scene database for one or more scenes to be pushed and sends them to the management terminal as the scene query result (B3). The management terminal receives the scene query result from the cloud scene server and thereby obtains the scenes to be pushed (B3). The management terminal screens the scenes to be pushed to obtain curated scenes; it may also configure some initial scenes as curated scenes. The management terminal then sends a scene push request to the message management center to request that the curated scenes be pushed (B5), and the message management center pushes the curated scenes to the corresponding vehicle ends (B6).
After the vehicle end receives the pushed scenes, the scenes can be synchronized to the scene service module at the vehicle end (C1). The condition management module at the vehicle end monitors the current scene conditions (C2), and when a scene's trigger condition is met, the scene engine module at the vehicle end executes that scene (C3).
After the vehicle end executes the scene, the execution result may be uploaded to the cloud scene server (C4). The cloud scene server is provided with a big data center, in which data tracking points (data burying points) are preconfigured (C5).
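For illustration only, the following Python sketch outlines how a vehicle-end scene service of this kind might synchronize pushed scenes, monitor trigger conditions, and report which scenes were executed (steps C1 to C4 above). The class and method names are hypothetical and are not taken from the patent.

```python
# Hypothetical sketch of the vehicle-end flow (C1-C4); all names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Scene:
    scene_id: str
    trigger_condition: Callable[[dict], bool]   # evaluated against the current vehicle state
    actions: List[Callable[[], None]] = field(default_factory=list)


class VehicleSceneService:
    def __init__(self):
        self.scenes: Dict[str, Scene] = {}

    def synchronize(self, pushed_scenes: List[Scene]) -> None:
        """C1: store the scenes pushed by the message management center."""
        for scene in pushed_scenes:
            self.scenes[scene.scene_id] = scene

    def monitor_and_execute(self, vehicle_state: dict) -> List[str]:
        """C2/C3: check each scene's trigger condition and execute matching scenes."""
        executed = []
        for scene in self.scenes.values():
            if scene.trigger_condition(vehicle_state):
                for action in scene.actions:
                    action()
                executed.append(scene.scene_id)
        return executed  # C4: the caller would upload these results to the cloud


# Usage sketch: a scene that fires when the co-driver seat is occupied.
service = VehicleSceneService()
service.synchronize([
    Scene(
        scene_id="romantic_surprise",
        trigger_condition=lambda state: state.get("co_driver_occupied", False),
        actions=[lambda: print("trigger in-vehicle scene execution modules")],
    )
])
print(service.monitor_and_execute({"co_driver_occupied": True}))
```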
Fig. 2 shows a flowchart of a scenario triggering method according to an embodiment of the present application. As shown in fig. 2, the method may include:
step S201 detects a positional relationship between the target object and the vehicle according to the execution request of the target scene.
The target scene may be a scene selected by a user. For example, the user can select a target scene from a plurality of scenes prestored in the scene service module through an in-vehicle scene application (APP) at the vehicle end, thereby triggering an execution request for the target scene. Ways of selecting the target scene through the in-vehicle scene APP include, but are not limited to, selection on the screen interface, voice triggering, and the like.
In addition, the user can also select a target scene through the scene APP on the mobile terminal, which then sends an execution request for the target scene to the vehicle end over the network connection between the mobile terminal and the vehicle end. In the embodiments of the present application, the target scene may be a romantic surprise mode scene; in this scene, a romantic and surprising atmosphere can be created for passengers on important days.
The vehicle end starts the target scene according to the execution request and then detects the positional relationship between the target object and the vehicle. In one example, the detection of the positional relationship may be triggered immediately after the user selects the target scene, or it may be repeated at certain time intervals.
The target object may be the person for whom the vehicle owner, or the user who selects the target scene, wishes to create a romantic surprise.
Step S202: according to the positional relationship, a corresponding first scene execution module is triggered to execute a function corresponding to the positional relationship.
The first scene execution module may include one or more of an exterior light module, a door module corresponding to a target seat, a light blanket module, a suspension module, a seat adjustment module of the target seat, and a vehicle-mounted sound module. The target seat may be the co-driver (front passenger) seat.
Specifically, different first scene execution modules can be triggered to operate according to the distance between the target object and the vehicle. For example, when the distance between the target object and the vehicle reaches a first preset distance, the exterior light module is triggered to display preset welcome content and form a romantic light show, realizing a romantic welcome mode that intelligently follows the target object as they approach or leave the vehicle.
The detection that the distance between the target object and the vehicle has reached the first preset distance may be based on Bluetooth detection. Alternatively, the user may select and start the target scene after learning that the target object is within the first preset distance, so that the target scene is triggered manually.
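As a rough illustration of the Bluetooth-based detection mentioned above, the sketch below estimates the distance between the target object's device and the vehicle from the received signal strength (RSSI) using the common log-distance path-loss model. The reference power, path-loss exponent, and 5 m threshold are assumed values, not figures given in the patent.

```python
# Hypothetical Bluetooth distance estimate using the log-distance path-loss model.
# tx_power (RSSI at 1 m) and the path-loss exponent n are illustrative values only.

def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Convert an RSSI reading into an approximate distance in metres."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))


def reaches_first_preset_distance(rssi_dbm: float, first_preset_m: float = 5.0) -> bool:
    """True when the target object has come within the first preset distance."""
    return estimate_distance_m(rssi_dbm) <= first_preset_m


# Example: an RSSI of -70 dBm corresponds to roughly 3.5 m with these constants.
print(round(estimate_distance_m(-70.0), 1), reaches_first_preset_distance(-70.0))
```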
As shown in fig. 3 to 5, the exterior light module includes at least one of an Interactive Signal Display (ISD) lamp 52, a projection lamp 51, and a through lamp.
Specifically, the projection lamp 51 may be a Digital Light Processing (DLP) projection lamp. The DLP projection lamp 51 may serve as a conventional high-beam and low-beam lamp, and may also be used to project data such as videos or pictures. Fig. 3 shows an example of the projection effect of the DLP projection lamp 51. The ISD lamp 52 may include conventional lamps 521 (e.g., daytime running lights, position lights, turn signals, stop lights, backup lights, logo lights, and front and rear through lamps) and an ISD screen 522. Corresponding light effects are realized by the dynamic display of the conventional lamps 521. The ISD screen 522 may be a matrix screen formed of a plurality of Light Emitting Diode (LED) lamps and may be used to display pictures, animations, and the like. Fig. 4 illustrates an example of the display effect of the ISD screen 522.
In one example, as shown in fig. 5, the exterior light module may include the left front lamps (DLP projection lamp 51 and ISD lamp 52), right front lamps (DLP projection lamp 51 and ISD lamp 52), left rear lamp (ISD lamp 52), and right rear lamp (ISD lamp 52) of the vehicle. There are four groups of ISD lamps 52 at the front and rear in total, and each group includes a conventional lamp 521 and an ISD screen 522 below it.
The preset welcome content can be preset pictures, videos, patterns, animations, and the like. Specifically, in the target scene of the embodiments of the present application, the projection lamp 51 may project a preset romantic surprise picture or video, and the ISD lamp may dynamically display a preset pattern or animation, such as a heart pattern.
Further, when the distance between the target object and the vehicle reaches a second preset distance, triggering the door module corresponding to the target seat to open automatically so that the target object can enter the vehicle more easily; and/or triggering the light blanket module to enter a preset welcome mode, for example projecting a light carpet in front of the vehicle door to create a grand and romantic atmosphere; and/or triggering the suspension module to lower automatically to ease entry into the vehicle; and/or triggering the seat adjustment module of the target seat to adjust to a position convenient for sitting down, for example the target seat automatically moves backwards and the seatback automatically reclines; and/or triggering the vehicle-mounted sound module to play preset music, such as music from a romantic playlist.
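The following minimal sketch shows how the first scene execution module could be dispatched by distance band, assuming illustrative 5 m and 2 m preset distances and placeholder module interfaces; none of these names come from the patent.

```python
# Hypothetical dispatch of first-scene-execution functions by distance threshold.
FIRST_PRESET_M = 5.0    # assumed value for illustration
SECOND_PRESET_M = 2.0   # assumed value for illustration


def on_distance_update(distance_m: float, modules: dict) -> None:
    """Trigger the exterior modules configured for the current distance band."""
    if distance_m <= FIRST_PRESET_M:
        modules["exterior_light"].show_welcome_content()   # DLP/ISD light show
    if distance_m <= SECOND_PRESET_M:
        modules["door"].open_target_seat_door()
        modules["light_blanket"].enter_welcome_mode()
        modules["suspension"].lower()
        modules["seat"].adjust_for_easy_entry()
        modules["audio"].play_playlist("romantic")


class _PrintModule:
    """Stand-in module that just logs the call it receives."""
    def __getattr__(self, name):
        return lambda *args: print(f"{name}{args}")


modules = {k: _PrintModule() for k in
           ("exterior_light", "door", "light_blanket", "suspension", "seat", "audio")}
on_distance_update(1.5, modules)   # triggers both distance bands
```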
Further, as shown in fig. 2, in the case that it is detected that the target object exists in the target seat, step S203 triggers the second scene execution module to operate.
In one example, whether a target object is present in the target seat may be detected by an occupancy detection module disposed on the target seat. The occupancy detection module may be a pressure sensor, and whether a target object is present in the target seat is determined from the magnitude of the pressure on the target seat detected by the pressure sensor.
In another example, whether the target object is present in the target seat may also be detected by an on-board monitoring system. The on-board monitoring system may be a Driver Monitoring System (DMS) including at least one monitoring camera aimed at the target seat to monitor in real time whether a target object is present on the target seat.
Further, the target object in the embodiments of the present application is a specific, designated person. For example, the user may set the target object of the target scene by entering a face image of the target object; after the vehicle end acquires a face image of the detected person through the DMS, the face features of the entered image and of the acquired image are compared to determine whether the detected person is the target object.
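The sketch below combines the two checks described above: a pressure threshold on the target seat and a face-feature comparison against the enrolled face image of the target object. The pressure threshold, the toy embeddings, and the cosine-similarity cut-off are assumptions for illustration only.

```python
# Hypothetical occupancy + identity check; thresholds and helpers are illustrative.
import math
from typing import Sequence

PRESSURE_THRESHOLD_N = 200.0      # assumed minimum seat load indicating an occupant
FACE_SIMILARITY_THRESHOLD = 0.8   # assumed cosine-similarity cut-off


def seat_occupied(pressure_newtons: float) -> bool:
    return pressure_newtons >= PRESSURE_THRESHOLD_N


def cosine_similarity(a: Sequence[float], b: Sequence[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def is_target_object(pressure_newtons: float,
                     dms_face_embedding: Sequence[float],
                     enrolled_face_embedding: Sequence[float]) -> bool:
    """True when the seat is occupied and the DMS face matches the enrolled target."""
    if not seat_occupied(pressure_newtons):
        return False
    return cosine_similarity(dms_face_embedding,
                             enrolled_face_embedding) >= FACE_SIMILARITY_THRESHOLD


# Example with toy 3-dimensional embeddings standing in for real face features.
print(is_target_object(350.0, [0.9, 0.1, 0.4], [0.88, 0.12, 0.41]))
```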
For example, the first scene execution module may be controlled to be turned off when the target object is detected to exist in the target seat, that is, when the target object enters the vehicle and is seated in the target seat.
The second scene execution module may be an in-vehicle scene execution module, such as an in-vehicle camera, a display screen, or an exterior camera capable of capturing video outside the vehicle. Further, the second scene execution module may include a first-stage scene execution module, a second-stage scene execution module, and a third-stage scene execution module, which are triggered to operate in sequence according to the execution order given in the scene configuration information.
In one embodiment, triggering the first-stage scene execution module to operate may include: when the target object has just sat down, triggering the camera corresponding to the target seat to capture a first video, for example triggering an in-cabin camera to start recording so as to obtain an in-vehicle video; and/or triggering the AI voice module to broadcast a preset voice, such as "Happy second anniversary"; and/or triggering the seat adjustment module of the target seat to adjust to a comfort mode, such as massage at the lowest level, the leg rest angle adjusted to its maximum, and seat ventilation or heating turned on; and/or triggering the ambience light module to adjust to a predetermined light effect, such as a romantic purple; and/or triggering the fragrance module to release a predetermined scent, such as a romantic cherry-blossom fragrance.
In one embodiment, triggering the second-stage scene execution module to operate may include: triggering the display screen corresponding to the target seat to play a preset video or a preset picture, such as triggering the co-driver screen to play a commemorative video or picture.
In one embodiment, triggering the third-stage scene execution module to operate includes: when the vehicle is detected to be running, triggering the navigation display screen to display a navigation destination, such as a Michelin-starred restaurant; and/or triggering the AI voice module to play the navigation destination, that is, to announce it by voice; and/or triggering the exterior camera to capture a second video, for example triggering the driving recorder to start recording so as to obtain video outside the vehicle; and/or triggering a second display module to present a second Artificial Intelligence Creation (AIC) work. The vehicle may be determined to be running when it is detected that the vehicle has been shifted into the forward (D) gear.
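A minimal sketch of triggering the three in-vehicle stages in the order given by the scene configuration information, with the third stage gated on a drive-gear check; the stage names, configuration keys, and handlers are placeholders, not the patent's actual interfaces.

```python
# Hypothetical staged execution driven by the scene configuration order.
from typing import Callable, Dict


def run_in_vehicle_stages(config: Dict, vehicle_state: Dict,
                          stage_handlers: Dict[str, Callable[[], None]]) -> None:
    """Run the first/second/third stage handlers in the configured order."""
    for stage_name in config["execution_order"]:
        if stage_name == "third_stage" and vehicle_state.get("gear") != "D":
            continue   # third stage only runs once the vehicle is in Drive
        stage_handlers[stage_name]()


config = {"execution_order": ["first_stage", "second_stage", "third_stage"]}
handlers = {
    "first_stage": lambda: print("camera + AI voice + seat comfort + ambience + fragrance"),
    "second_stage": lambda: print("co-driver screen plays commemorative video"),
    "third_stage": lambda: print("navigation destination + exterior camera + second AIC work"),
}
run_in_vehicle_stages(config, {"gear": "D"}, handlers)
```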
The AIC work can be created by a work creation module at the vehicle end, or created by a work creation module in the cloud and returned to the vehicle end. In addition, the AIC work may be created according to the scene configuration information of the target scene (e.g., the authoring parameters in the scene configuration information), or created from the first video.
The work creation module may include an AIC creation model, which can be obtained by training a deep learning neural network on a large amount of sample data. There may be multiple AIC creation models, such as an AIC poetry creation model, an AIC painting creation model, an AIC music creation model, and an AIC video editing model, which create different kinds of AIC works.
For example, according to the authoring parameters, a romantic AIC music work, AIC painting work, AIC poem, or AIC video clip work may be created.
The authoring parameters may be set in the scene configuration information. When the user selects the target scene, one set of authoring parameters may be chosen from a plurality of preset options, and the vehicle end or the cloud then creates the work according to the parameters selected by the user.
After the second AIC work is created, the second display module may present it. The second display module may include in-vehicle scene display modules such as the vehicle-mounted sound module and a display screen. There may be one or more AIC works. For example, a corresponding AIC music work may be created using the AIC music creation model, and a corresponding AIC painting work may be created using the AIC painting creation model; the vehicle-mounted sound module can then be triggered to play the AIC music work, and one or more display screens can be triggered to show the AIC painting work.
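For illustration, the following sketch selects an AIC creation model according to the authoring parameters chosen by the user and returns a work for the display modules to present. The model registry and its interface are hypothetical stand-ins for the trained models described above.

```python
# Hypothetical selection of an AIC creation model from authoring parameters.
from typing import Callable, Dict

# Registry of AIC creation models; the real models would be trained neural networks.
AIC_MODELS: Dict[str, Callable[[Dict], str]] = {
    "music":    lambda params: f"AIC music work in a {params.get('mood', 'romantic')} style",
    "painting": lambda params: f"AIC painting work themed '{params.get('theme', 'love')}'",
    "poetry":   lambda params: f"AIC poem about {params.get('theme', 'an anniversary')}",
    "video":    lambda params: f"AIC video clip edited from {params.get('source', 'the first video')}",
}


def create_aic_work(authoring_params: Dict) -> str:
    """Create a work using the model named in the scene configuration's authoring parameters."""
    model = AIC_MODELS[authoring_params["work_type"]]
    return model(authoring_params)


# Usage sketch: the display modules would then present the returned works.
print(create_aic_work({"work_type": "painting", "theme": "second anniversary"}))
print(create_aic_work({"work_type": "music", "mood": "romantic"}))
```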
Further, as shown in fig. 2, in the case that it is detected that the target object leaves the vehicle, step S204 triggers the first presentation module to present the first AIC work.
Whether the target object has left the vehicle can be detected via the occupancy detection module or the DMS, or can be determined when the vehicle is detected to be in park (P) gear and powered off. The first AIC work may be identical to the second AIC work. The first display module may be an out-of-vehicle scene execution module, such as the exterior light module including the ISD lamp and the projection lamp. The ISD lamp and the projection lamp may display the AIC painting work after the target object leaves the vehicle.
In one implementation, the method of the embodiment of the present application may further include: under the condition that the target object is detected to leave the vehicle, triggering the camera outside the vehicle to stop acquiring the second video; pushing a third AIC work authored in accordance with the second video.
That is, when the target object leaves the vehicle, capture of the second video is completed. A third AIC work, such as an AIC video clip (a short documentary) or an AIC poem, can then be created from the captured second video, recording a fond memory of the trip. Once the third AIC work has been created, it may be pushed to the user or to the target object as a keepsake.
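A sketch of this end-of-scene flow, assuming a hypothetical departure check (empty seat, or P gear plus power-off) and stub objects in place of the real exterior camera, work creation module, and push service:

```python
# Hypothetical end-of-scene flow: detect departure, stop recording, push the third AIC work.
def target_left_vehicle(seat_pressure_n: float, gear: str, powered_on: bool,
                        pressure_threshold_n: float = 200.0) -> bool:
    """Departure is inferred from an empty seat, or from P gear plus power-off."""
    return seat_pressure_n < pressure_threshold_n or (gear == "P" and not powered_on)


def finish_scene(state: dict, exterior_camera, work_creator, push_service) -> None:
    if not target_left_vehicle(state["seat_pressure_n"], state["gear"], state["powered_on"]):
        return
    second_video = exterior_camera.stop_recording()             # stop acquiring the second video
    third_work = work_creator.create_from_video(second_video)   # e.g. an AIC video clip or poem
    push_service.push_to_user(third_work)                       # push the keepsake to the user


class _Stub:
    """Stand-in for the real vehicle/cloud modules; just logs calls."""
    def __getattr__(self, name):
        def call(*args):
            print(name, args)
            return f"{name}-result"
        return call


finish_scene({"seat_pressure_n": 0.0, "gear": "P", "powered_on": False},
             _Stub(), _Stub(), _Stub())
```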
Further, each scene has scene configuration information, which includes the scene execution modules to be triggered and the functions they execute under different trigger conditions. For the target scene of this embodiment, the configuration information specifies which scene execution modules need to be triggered, and which functions they need to execute, under different trigger conditions, so as to create a grand and pleasant atmosphere for the detected person. The trigger condition of the out-of-vehicle scene execution modules corresponding to the exterior part of the target scene may be: receiving an execution request for the target scene. The trigger condition of the in-vehicle scene execution modules corresponding to the in-vehicle part of the target scene may be: detecting that the detected person is present in the co-driver seat, that is, that the person has entered the vehicle and sat down in the co-driver seat.
For the target scene of this embodiment, the trigger conditions may include detection results, such as detecting that the positional relationship between the target object and the vehicle satisfies a preset distance (the first preset distance or the second preset distance), detecting that the target object is present in the target seat, detecting that the vehicle is running, or detecting that the target object has left the vehicle. The trigger conditions may also include timeline conditions, such as the time sequence in which the first-stage, second-stage, and third-stage scene execution modules are triggered.
In one embodiment, the scene configuration information may be preset according to actual needs; for example, it may already be set in the scene pushed to the vehicle end by the message management center, that is, configured by the management terminal or edited by the user in advance. The scene configuration information can also be edited and updated by the user after the target scene has been downloaded to the vehicle end. For example, one or more items of the scene configuration information of the target scene may be hot-updated according to the user's edits.
In one example, the user may edit the trigger condition, and may also edit a function or a parameter to be executed by a certain scene execution module, such as editing a time parameter (including a time duration of the trigger) of the function executed by each scene execution module, or editing a display effect of the ambience light or exterior light module.
Through hot update, the scene configuration information at the vehicle end can be conveniently updated with the content edited by the user, realizing personalized scenes and further improving the user experience. In addition, a target scene edited by one user can be shared with other users, enabling scene sharing and interaction among users.
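To illustrate the hot update described above, the sketch below merges user-edited fields into a stored scene configuration without replacing the whole scene. The configuration keys are examples only, not the patent's actual schema.

```python
# Hypothetical hot update of scene configuration with user-edited fields.
import copy
from typing import Any, Dict


def hot_update_scene_config(current: Dict[str, Any], user_edits: Dict[str, Any]) -> Dict[str, Any]:
    """Return a new configuration with user edits merged in, leaving untouched keys as-is."""
    updated = copy.deepcopy(current)
    for key, value in user_edits.items():
        if isinstance(value, dict) and isinstance(updated.get(key), dict):
            updated[key].update(value)   # merge nested sections such as per-module parameters
        else:
            updated[key] = value
    return updated


scene_config = {
    "trigger_conditions": {"first_preset_distance_m": 5.0, "second_preset_distance_m": 2.0},
    "ambience_light": {"effect": "romantic_purple", "duration_s": 120},
}
edits = {"ambience_light": {"effect": "warm_amber"}}   # example user edit
print(hot_update_scene_config(scene_config, edits))
```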
An application example of the scenario trigger method according to the embodiment of the present application is described below with reference to fig. 6 and 7. As shown in fig. 6 and 7, the scene trigger method of this example includes:
(1) Selecting a target scene: the user can select a target scene through a scene card on the screen interface at the vehicle end, trigger it by voice, or select it through the APP on the mobile terminal. In this application example, the target scene is the romantic surprise mode.
(2) Starting the target scene: the positional relationship between the target object and the vehicle is detected, and when the distance between the target object and the vehicle reaches the first preset distance (2 to 5 m from the vehicle), the exterior light module (DLP projection lamp and ISD lamp) is triggered to display preset welcome content and form a romantic light show, realizing a romantic welcome mode that intelligently follows the target object. The DLP projection lamp and the ISD lamp may also display AIC works, such as AIC paintings.
(3) Romantic welcome: when Bluetooth detection indicates that the distance between the target object and the vehicle has reached the second preset distance, the door module corresponding to the target seat, the light blanket module, the suspension module, the seat adjustment module of the target seat, the vehicle-mounted sound module, and the like are triggered to operate.
(4) Atmosphere creation: the occupancy detection module detects that the target object is present in the target seat, and the first-stage scene execution module is triggered to operate.
(5) Surprise climax: the second-stage scene execution module is triggered to operate.
(6) Starting navigation: and triggering the third-stage scene execution module to work under the condition that the vehicle is detected to run.
(7) Perfect finale: when the target object is detected to have left the vehicle, the first display module is triggered to present the first AIC work, and the exterior camera is triggered to stop capturing the second video.
(8) Lasting memento: the third AIC work is pushed to the user or the target object as a souvenir.
According to the method, when the execution request is received, different scene execution modules are triggered to execute corresponding functions under different detection conditions, so that romantic surprises can be made for the target object on important days, intelligent service of the vehicle is achieved, and user experience is improved.
An embodiment of the present application further provides a scene triggering apparatus, as shown in fig. 8, the apparatus may include:
a position detection module 801, configured to detect a position relationship between a target object and a vehicle according to an execution request of a target scene;
a first triggering module 802, configured to trigger a corresponding first scene execution module to execute a function corresponding to the position relationship according to the position relationship;
the second triggering module 803 is configured to trigger the second scene executing module to operate when the target seat is detected to have the target object;
and the third triggering module 804 triggers the first display module to display the first AIC work under the condition that the target object is detected to leave the vehicle.
In one embodiment, the first scene execution module includes an exterior light module, and the position detection module 801 is further configured to:
under the condition that the distance between the target object and the vehicle reaches a first preset distance, triggering an outer lamp module to display preset welcome content, wherein the outer lamp module comprises at least one of an ISD lamp, a projection lamp and a through lamp.
In one embodiment, the position detection module 801 is further configured to:
under the condition that the distance between the target object and the vehicle reaches a second preset distance, triggering the vehicle door module corresponding to the target seat to automatically open; and/or triggering the optical blanket module to enter a preset welcome mode; and/or triggering the suspension module to automatically lower; and/or triggering a seat adjustment module of the target seat to adjust to a position convenient for seating; and/or triggering the vehicle-mounted sound module to play preset music.
In one embodiment, the second triggering module 803 is further configured to:
and sequentially triggering the first-stage scene execution module, the second-stage scene execution module and the third-stage scene execution module to work.
In one embodiment, the second triggering module 803 is further configured to:
triggering a camera corresponding to the target seat to acquire a first video; and/or
Triggering an AI voice module to broadcast a preset voice; and/or
Triggering a seat adjustment module of the target seat to adjust to a comfort mode; and/or
Triggering the atmosphere lamp module to adjust to a preset lamp effect; and/or
Triggering the fragrance module to release the preset flavor.
In one embodiment, the second triggering module 803 is further configured to:
and triggering a display screen corresponding to the target seat to play a preset video or a preset picture.
In one embodiment, the second triggering module 803 is further configured to:
under the condition that the vehicle is detected to run, triggering a navigation display screen to display a navigation destination; and/or triggering an AI voice module to play the navigation destination; and/or triggering a camera outside the vehicle to acquire a second video; and/or trigger a second presentation module to present a second AIC work.
In one embodiment, the apparatus further comprises:
the fourth triggering module is used for triggering the camera outside the vehicle to stop acquiring the second video when the target object leaves the vehicle;
and the pushing module is used for pushing a third AIC work created according to the second video.
In one embodiment, the apparatus further comprises:
and the hot updating module is used for hot updating one or more items of contents in the scene configuration information of the target scene according to the user editing information, and the scene configuration information comprises the scene execution module which needs to be triggered and the function which needs to be executed under different triggering conditions.
The functions of each module in each apparatus in the embodiment of the present application may refer to corresponding descriptions in the above method, and are not described herein again.
Fig. 9 shows a block diagram of a scene trigger device according to an embodiment of the present application. As shown in fig. 9, the apparatus includes: a memory 901 and a processor 902, the memory 901 having stored therein instructions executable on the processor 902. The processor 902, when executing the instructions, implements any of the methods in the embodiments described above. The number of the memory 901 and the processor 902 may be one or more. The terminal or server is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The terminal or server may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
The device may further include a communication interface 903, which is used for communicating with an external device for interactive data transmission. The various devices are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor 902 may process instructions for execution within the terminal or server, including instructions stored in or on a memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple terminals or servers may be connected, with each device providing portions of the necessary operations (e.g., as an array of servers, a group of blade servers, or a multi-processor system). The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 901, the processor 902, and the communication interface 903 are integrated on a chip, the memory 901, the processor 902, and the communication interface 903 may complete mutual communication through an internal interface.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the Advanced RISC Machine (ARM) architecture.
Embodiments of the present application provide a computer-readable storage medium (such as the above-mentioned memory 901) that stores computer instructions; when the instructions are executed by a processor, they implement the method provided in the embodiments of the present application.
Alternatively, the memory 901 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function; the storage data area may store data created according to the use of a terminal or a server, and the like. Further, the memory 901 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 901 may optionally include memory located remotely from the processor 902, which may be connected to a terminal or server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more (two or more) executable instructions for implementing specific logical functions or steps in the process. And the scope of the preferred embodiments of the present application includes other implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. All or part of the steps of the method of the above embodiments may be implemented by hardware that is configured to be instructed to perform the relevant steps by a program, which may be stored in a computer-readable storage medium, and which, when executed, includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module may also be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various changes or substitutions within the technical scope of the present application, and these should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A method for triggering a scene, comprising:
detecting the position relation between a target object and a vehicle according to an execution request of a target scene;
triggering a corresponding first scene execution module to execute a function corresponding to the position relation according to the position relation, wherein the first scene execution module comprises one or more of an exterior light module, a vehicle door module corresponding to a target seat, a light blanket module, a suspension module, a seat adjusting module of the target seat and a vehicle-mounted sound module;
under the condition that the target seat is detected to have the target object, triggering a second scene execution module to work, wherein the second scene execution module comprises an in-vehicle scene execution module, and triggering the second scene execution module to work comprises: according to the execution sequence in the scene configuration information, a first-stage scene execution module, a second-stage scene execution module and a third-stage scene execution module are triggered to work in sequence; the first-stage scene execution module comprises at least one of a camera, an AI voice module, a seat adjusting module of the target seat, an atmosphere lamp module and a fragrance module corresponding to the target seat; the second stage scene module comprises a display screen corresponding to the target seat; the third-stage scene module comprises at least one of a navigation display screen, an external camera and a second display module;
under the condition that the target object is detected to leave the vehicle, triggering a first display module to display a first AIC work; wherein the first display module comprises the exterior light module.
2. The method according to claim 1, wherein triggering the corresponding first scene execution module to execute the function corresponding to the position relationship according to the position relationship comprises:
and under the condition that the distance between the target object and the vehicle reaches a first preset distance, triggering the outer lamp module to display preset welcome content, wherein the outer lamp module comprises at least one of an ISD lamp, a projection lamp and a through lamp.
3. The method according to claim 2, wherein triggering the corresponding first scene execution module to execute the function corresponding to the position relationship according to the position relationship comprises:
under the condition that the distance between the target object and the vehicle reaches a second preset distance, triggering the vehicle door module corresponding to the target seat to automatically open; and/or triggering the optical blanket module to enter a preset welcome mode; and/or triggering the suspension module to automatically lower; and/or triggering a seat adjustment module of the target seat to adjust to a position convenient for seating; and/or triggering the vehicle-mounted sound module to play preset music.
4. The method of claim 1, wherein triggering the first stage scenario execution module to operate comprises:
triggering a camera corresponding to the target seat to acquire a first video; and/or
Triggering the AI voice module to broadcast a preset voice; and/or
Triggering a seat adjustment module of the target seat to adjust to a comfort mode; and/or
Triggering the atmosphere lamp module to adjust to a preset lamp effect; and/or
Triggering the fragrance module to release a preset taste.
5. The method of claim 1, wherein triggering the second stage scenario execution module to operate comprises:
and triggering a display screen corresponding to the target seat to play a preset video or a preset picture.
6. The method of claim 1, wherein triggering the third-stage scene execution module to operate comprises:
under the condition that the vehicle is detected to run, triggering the navigation display screen to display a navigation destination; and/or triggering the AI voice module to play the navigation destination; and/or triggering the camera outside the vehicle to acquire a second video; and/or triggering the second display module to display a second AIC work.
7. The method of claim 6, further comprising:
under the condition that the target object leaves the vehicle, triggering the camera outside the vehicle to stop acquiring the second video;
pushing a third AIC work composed according to the second video.
8. The method of any one of claims 1 to 7, further comprising:
and thermally updating one or more items of contents in the scene configuration information of the target scene according to user editing information, wherein the scene configuration information comprises scene execution modules needing to be triggered and functions needing to be executed thereof under different triggering conditions.
9. A scene trigger apparatus, comprising:
the position detection module is used for detecting the position relation between the target object and the vehicle according to the execution request of the target scene;
the first triggering module is used for triggering a corresponding first scene execution module to execute a function corresponding to the position relation according to the position relation, wherein the first scene execution module comprises one or more of an exterior light module, a vehicle door module corresponding to a target seat, a light blanket module, a suspension module, a seat adjusting module of the target seat and a vehicle-mounted sound module;
the second triggering module is configured to trigger a second scene execution module to operate when the target seat is detected to have the target object, where the second scene execution module includes an in-vehicle scene execution module, and the triggering of the second scene execution module includes: according to the execution sequence in the scene configuration information, a first-stage scene execution module, a second-stage scene execution module and a third-stage scene execution module are triggered to work in sequence; the first-stage scene execution module comprises at least one of a camera, an AI voice module, a seat adjusting module of the target seat, an atmosphere lamp module and a fragrance module corresponding to the target seat; the second stage scene module comprises a display screen corresponding to the target seat; the third-stage scene module comprises at least one of a navigation display screen, an external camera and a second display module;
the third triggering module triggers the first display module to display the first AIC work under the condition that the target object is detected to leave the vehicle; wherein the first display module comprises the exterior light module.
10. The apparatus of claim 9, wherein the position detection module is further configured to:
and under the condition that the distance between the target object and the vehicle reaches a first preset distance, triggering the outer lamp module to display preset welcome content, wherein the outer lamp module comprises at least one of an ISD lamp, a projection lamp and a through lamp.
11. The apparatus of claim 10, wherein the position detection module is further configured to:
under the condition that the distance between the target object and the vehicle reaches a second preset distance, triggering the vehicle door module corresponding to the target seat to automatically open; and/or triggering the optical blanket module to enter a preset welcome mode; and/or triggering the suspension module to automatically lower; and/or triggering a seat adjustment module of the target seat to adjust to a position convenient for seating; and/or triggering the vehicle-mounted sound module to play preset music.
12. The apparatus of claim 9, wherein the second triggering module is further configured to:
triggering a camera corresponding to the target seat to acquire a first video; and/or
Triggering the AI voice module to broadcast a preset voice; and/or
Triggering a seat adjustment module of the target seat to adjust to a comfort mode; and/or
Triggering the atmosphere lamp module to adjust to a preset lamp effect; and/or
Triggering the fragrance module to release a preset taste.
13. The apparatus of claim 9, wherein the second triggering module is further configured to:
and triggering a display screen corresponding to the target seat to play a preset video or a preset picture.
14. The apparatus of claim 9, wherein the second triggering module is further configured to:
under the condition that the vehicle is detected to run, triggering the navigation display screen to display a navigation destination; and/or triggering the AI voice module to play the navigation destination; and/or triggering the camera outside the vehicle to acquire a second video; and/or triggering the second display module to display a second AIC work.
15. The apparatus of claim 14, further comprising:
the fourth triggering module is used for triggering the camera outside the vehicle to stop acquiring the second video when the target object leaves the vehicle;
and the pushing module is used for pushing a third AIC work created according to the second video.
16. The apparatus of any one of claims 9 to 15, further comprising:
and the hot updating module is used for hot updating one or more items of contents in the scene configuration information of the target scene according to the user editing information, wherein the scene configuration information comprises the scene execution module which needs to be triggered and the functions which need to be executed under different triggering conditions.
17. A scene trigger apparatus comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
18. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202010930443.6A 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium Active CN112061049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010930443.6A CN112061049B (en) 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112061049A CN112061049A (en) 2020-12-11
CN112061049B true CN112061049B (en) 2022-05-13

Family

ID=73663918

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010930443.6A Active CN112061049B (en) 2020-09-07 2020-09-07 Scene triggering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112061049B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112959961A (en) * 2021-03-10 2021-06-15 中国第一汽车股份有限公司 Method and device for controlling vehicle in specific mode, electronic equipment and storage medium
WO2022217500A1 (en) * 2021-04-14 2022-10-20 浙江吉利控股集团有限公司 In-vehicle theater mode control method and apparatus, device, and storage medium
CN113628618A (en) * 2021-07-29 2021-11-09 中汽创智科技有限公司 Multimedia file generation method and device based on intelligent cabin and terminal
CN113602090A (en) * 2021-08-03 2021-11-05 岚图汽车科技有限公司 Vehicle control method, device and system
CN115107672A (en) * 2021-12-03 2022-09-27 长城汽车股份有限公司 Vehicle control method and vehicle
CN114330778A (en) * 2021-12-31 2022-04-12 阿维塔科技(重庆)有限公司 Intelligent function management method and device for vehicle, vehicle and computer storage medium
CN115167161A (en) * 2022-06-27 2022-10-11 青岛海尔科技有限公司 Method and device for determining association relation of lamp, storage medium and electronic device
CN115366787A (en) * 2022-08-24 2022-11-22 长城汽车股份有限公司 Linkage control method and device for light-emitting backdrop, vehicle and storage medium
CN117698618A (en) * 2022-09-05 2024-03-15 华为技术有限公司 Scene display method and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106740679A (en) * 2015-11-23 2017-05-31 上海汽车集团股份有限公司 Vehicle key-free enters startup control method and system
CN110149752A (en) * 2019-05-23 2019-08-20 华人运通(江苏)技术有限公司 Control method, the device and system of vehicle door lamp
CN110696755A (en) * 2018-07-09 2020-01-17 上海擎感智能科技有限公司 Vehicle intelligent service experience method, system, vehicle machine and storage medium
CN111601161A (en) * 2020-06-04 2020-08-28 华人运通(上海)云计算科技有限公司 Video work generation method, device, terminal, server and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9092309B2 (en) * 2013-02-14 2015-07-28 Ford Global Technologies, Llc Method and system for selecting driver preferences


Also Published As

Publication number Publication date
CN112061049A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
CN112061049B (en) Scene triggering method, device, equipment and storage medium
US20210168449A1 (en) System and method for adaptive content rendition
CN112061050B (en) Scene triggering method, device, equipment and storage medium
CN106412495B (en) Vehicle and social network using wide range mobile delayed photographic video
WO2018094254A1 (en) Server-provided visual output at a voice interface device
JP2022535375A (en) In-vehicle image processing
CN105976843A (en) In-vehicle music control method, device, and automobile
US10555399B2 (en) Illumination control
US9524698B2 (en) System and method for collectively displaying image by using a plurality of mobile devices
CN110896578A (en) Music rhythm-based in-vehicle atmosphere lamp adjusting method and system and electronic equipment
CN113296730A (en) Interaction method and device based on vehicle cabin
US20160094302A1 (en) Media playback device and method for preparing a playback of various media
CA2834217A1 (en) Sensing and adjusting features of an environment
CN111601161A (en) Video work generation method, device, terminal, server and system
US20140052278A1 (en) Sensing and adjusting features of an environment
FR3056506A1 (en) MEDIA CONTENT SHARING SYSTEM FOR MOTOR VEHICLE
EP3325920B1 (en) Method and device for giving a theme to a route taken by a vehicle
CN111752538A (en) Vehicle end scene generation method and device, cloud end, vehicle end and storage medium
CN109600427A (en) Carwash effect method for pushing, device, computer installation, storage medium and system
CN112061058B (en) Scene triggering method, device, equipment and storage medium
CN115243107B (en) Method, device, system, electronic equipment and medium for playing short video
US20220207081A1 (en) In-vehicle music system and method
US20210179139A1 (en) Vehicle and control method thereof
CN112339658A (en) Method, device and system for controlling atmosphere of vehicle and vehicle
US20240025416A1 (en) In-vehicle soundscape and melody generation system and method using continuously interpreted spatial contextualized information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant