CN116684499B - Intelligent sound console based on multi-network cooperation - Google Patents

Intelligent sound console based on multi-network cooperation

Info

Publication number
CN116684499B
CN116684499B CN202310779422.2A
Authority
CN
China
Prior art keywords
instruction
tuning
execution
network
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310779422.2A
Other languages
Chinese (zh)
Other versions
CN116684499A (en)
Inventor
冯治平
钟晓敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Enping Shengyi Professional Audio Technology Co ltd
Original Assignee
Enping Shengyi Professional Audio Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Enping Shengyi Professional Audio Technology Co ltd filed Critical Enping Shengyi Professional Audio Technology Co ltd
Priority to CN202310779422.2A priority Critical patent/CN116684499B/en
Publication of CN116684499A publication Critical patent/CN116684499A/en
Application granted granted Critical
Publication of CN116684499B publication Critical patent/CN116684499B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/02Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • H04H60/04Studio equipment; Interconnection of studios
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/12Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks involving control of end-device applications over a network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an intelligent sound console based on multi-network coordination, which comprises a data receiving module, an instruction priority calculating module, an instruction orchestration module, an instruction execution module and a display module, and which can calculate priorities for tuning instructions sent over different networks, execute those instructions accordingly and display the execution results. By receiving instructions and calculating their priorities, the invention schedules and executes tuning instructions sent by different devices over different networks, thereby realising multi-network cooperative tuning control more intelligently, improving the accuracy and rationality of tuning and reducing tuning errors.

Description

Intelligent sound console based on multi-network cooperation
Technical Field
The invention relates to the technical field of audio data control, in particular to an intelligent sound console based on multi-network cooperation.
Background
With the development of audio control technology for large-scale performance equipment, more and more equipment manufacturers equip their tuning devices with multi-network connectivity and more advanced processing capability in order to achieve more convenient and intelligent tuning. However, the prior art does not consider how the large number of instructions sent over multiple networks should be handled intelligently; it generally just receives and executes them, so the processing effect is poor when a large number of instructions arrive concurrently. The prior art therefore has defects that need to be addressed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide an intelligent sound console based on multi-network coordination, which can realise multi-network cooperative tuning control more intelligently, improve the accuracy and rationality of tuning and reduce tuning errors.
In order to solve the technical problems, the invention discloses an intelligent sound console based on multi-network cooperation, which comprises:
the data receiving module is connected to a plurality of control devices of a plurality of different users through a plurality of different types of networks and is used for receiving a plurality of tuning instructions sent by the control devices;
the instruction priority calculating module is used for obtaining instruction parameters corresponding to each tuning instruction and calculating the priority corresponding to each tuning instruction according to the instruction parameters;
the instruction orchestration module is used for determining execution strategies corresponding to the tuning instructions according to a preset algorithm model and priorities corresponding to the tuning instructions;
the instruction execution module is used for executing the tuning instructions according to the execution strategy;
and the display module is used for determining the display parameters of the tuning instructions according to the execution strategy and the execution result of the instruction execution module and displaying according to the display parameters.
In an alternative embodiment, the instruction parameters include at least one of instruction transmission network parameters, instruction content, instruction adjustment amplitude, instruction source device parameters, instruction source user parameters; the instruction adjusting amplitude is the changing amplitude of the audio parameters to be adjusted by the tuning instruction; the instruction transmission network parameters comprise at least one of network type, number of devices in the network and network historical data transmission record; the network type comprises at least one of a wired network, a WIFI network, a ZigBee network and a Bluetooth network; the instruction source device parameters comprise at least one of a device type, a device model number and a device history instruction transmission record of the control device.
In an alternative embodiment, the priorities include an execution priority, a network priority, and a source priority.
In an optional embodiment, the specific manner of calculating the priority corresponding to each tuning instruction by the instruction priority calculating module according to the instruction parameters includes:
for each tuning instruction, calculating a network quality parameter and a network transmission delay parameter corresponding to the tuning instruction according to the instruction transmission network parameter in the instruction parameters corresponding to the tuning instruction;
Calculating the ratio of the network quality parameter to the network transmission delay parameter to obtain a network priority parameter corresponding to the tuning instruction;
according to the network priority parameters corresponding to all the tuning instructions, determining the network priority of each tuning instruction in all the tuning instructions;
calculating the execution priority corresponding to the tuning instruction according to the instruction content and/or the instruction adjusting amplitude in the instruction parameters corresponding to the tuning instruction;
and calculating the source priority corresponding to the tuning instruction according to the instruction source equipment parameter and/or the instruction source user parameter in the instruction parameters corresponding to the tuning instruction.
In an optional embodiment, the specific manner in which the instruction priority calculating module calculates the network quality parameter and the network transmission delay parameter corresponding to the tuning instruction according to the instruction transmission network parameter in the instruction parameters corresponding to the tuning instruction includes:
inputting the instruction transmission network parameters in the instruction parameters corresponding to the tuning instruction into the trained first neural network model to obtain the output network quality parameters and the network transmission delay parameters; the first neural network model is obtained through training a training data set comprising a plurality of training instruction transmission network parameters, corresponding network quality parameter labels and corresponding network transmission delay labels;
And the specific manner in which the instruction priority calculating module calculates the execution priority corresponding to the tuning instruction according to the instruction content and/or the instruction adjustment amplitude in the instruction parameters corresponding to the tuning instruction includes the following steps:
inputting instruction content and/or instruction adjusting amplitude in the instruction parameters corresponding to the tuning instruction into a trained second neural network model to obtain output execution priority parameters; the second neural network model is obtained through training a training data set comprising a plurality of instruction contents and/or instruction adjustment amplitudes for training and corresponding instruction priority labels;
determining the execution priority of each tuning instruction in all the tuning instructions according to the execution priority parameters corresponding to all the tuning instructions;
and the specific manner in which the instruction priority calculating module calculates the source priority corresponding to the tuning instruction according to the instruction source equipment parameter and/or the instruction source user parameter in the instruction parameters corresponding to the tuning instruction includes the following steps:
inputting the instruction source equipment parameters and/or the instruction source user parameters in the instruction parameters corresponding to the tuning instruction into a trained third neural network model to obtain the output source priority; the third neural network model is obtained through training a training data set comprising a plurality of instruction source equipment parameters and/or instruction source user parameters for training and corresponding priority labels;
And determining the source priority of each tuning instruction in all the tuning instructions according to the source priority parameters corresponding to all the tuning instructions.
In an optional implementation manner, the specific manner in which the instruction orchestration module determines the execution strategy corresponding to the tuning instructions according to a preset algorithm model and the priorities corresponding to the tuning instructions includes:
acquiring a tuning demand and a tuning scene corresponding to the current tuning moment;
according to the tuning requirements and tuning scenes, determining tuning objective functions and tuning limiting conditions;
and inputting priorities corresponding to the tuning instructions into a preset dynamic programming algorithm model, and performing iterative computation according to the tuning objective function and tuning limiting conditions to determine an optimal execution strategy corresponding to the tuning instructions.
In an alternative embodiment, the tuning objective function is configured to define that the execution efficiency parameter of the calculated execution strategy is greater than a preset efficiency parameter threshold; the tuning limiting condition is used for limiting the priority of any tuning instruction in the execution strategy to be not lower than the corresponding execution priority, network priority or source priority, and the execution time of each tuning instruction corresponding to the execution strategy to be not later than a preset execution time limit;
And the specific manner in which the instruction orchestration module inputs the priorities corresponding to the tuning instructions into the preset dynamic programming algorithm model and performs iterative computation according to the tuning objective function and the tuning limiting conditions to determine the optimal execution strategy corresponding to the tuning instructions includes the following steps:
inputting the priorities corresponding to the tuning instructions into a preset particle swarm algorithm model for iterative computation, and calculating, in each iteration, the execution efficiency parameter of the execution strategy corresponding to the generated particle state; the execution efficiency parameter is the product of the execution order parameter, the execution equipment parameter and the execution network parameter of each tuning instruction in the execution strategy; the execution order parameter is the product of the execution order of each tuning instruction in the execution strategy and the corresponding execution priority parameter; the execution equipment parameter is the product of the execution order of each tuning instruction in the execution strategy and the corresponding source priority parameter; the execution network parameter is the product of the execution order of each tuning instruction in the execution strategy and the corresponding network priority parameter; the execution order is proportional to how early the tuning instruction is executed;
And judging whether the current execution strategy meets the conditions according to the execution efficiency parameters, the tuning objective function and the tuning limiting conditions, and calculating and determining the optimal execution strategy corresponding to the tuning instructions.
In an optional embodiment, the specific manner in which the instruction orchestration module obtains the tuning requirement and the tuning scene corresponding to the current tuning moment includes:
acquiring the current tuning moment and the current tuning place of the intelligent tuning console, and acquiring the current tuning control instruction input by a user through a human-machine interaction device;
acquiring a plurality of historical tuning scenes of the current tuning place at the same moment as the current tuning moment in a historical time period;
determining a tuning scene with the largest occurrence number in the plurality of historical tuning scenes as a tuning scene corresponding to the current tuning moment;
and determining the tuning demand corresponding to the current tuning moment according to the current tuning control instruction.
In an optional embodiment, the execution result includes the execution time and the execution success or failure information corresponding to each tuning instruction; and the specific manner in which the display module determines the display parameters of the tuning instructions according to the execution strategy and the execution result of the instruction execution module includes:
screening out, from the tuning instructions according to the execution result of the instruction execution module, a plurality of failure instructions that failed to execute, and screening out a plurality of successful instructions that executed successfully together with their corresponding execution times;
acquiring execution data records of the plurality of failure instructions;
determining the execution failure reason of each failure instruction according to the execution data record; the execution failure reasons include one or more of equipment defects, execution time problems, and network problems;
determining the occurrence times of the execution failure reasons of each failure instruction in all the execution failure reasons of the failure instructions;
calculating display parameters corresponding to each tuning instruction based on a dynamic programming algorithm, the occurrence times and the execution time; the display parameters include display position and display brightness.
In an optional implementation manner, the specific manner in which the display module calculates the display parameters corresponding to each tuning instruction based on the dynamic programming algorithm, the occurrence times and the execution time includes:
determining the objective function as minimizing the sum of the display brightness and the sum of the display distance corresponding to all tuning instructions in the display strategy; the display distance is the distance between the display position and the center point of the display area;
Determining the constraint includes:
the display distance of any successful instruction is smaller than the display distance of any failed instruction;
the display brightness of the success instruction is smaller than that of the failure instruction;
among all the successful instructions, the later the execution time of a successful instruction, the higher its display brightness and the smaller its display distance;
among all the failure instructions, the more frequently the execution failure reason of a failure instruction occurs, the higher its display brightness and the smaller its display distance;
inputting the occurrence times and execution time corresponding to all tuning instructions into a preset particle swarm algorithm model, and carrying out iterative calculation according to the objective function and the limiting condition until an optimal display strategy is calculated; the display strategy comprises display parameters corresponding to each tuning instruction.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the dispatching and execution of tuning instructions sent by different devices through different networks can be realized through the receiving of the instructions and the priority calculation, so that the tuning control of multi-network coordination can be realized more intelligently, the accuracy and rationality of tuning are improved, and tuning errors are reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an intelligent sound console based on multi-network collaboration according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The terms first, second and the like in the description, the claims and the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus or article that comprises a list of steps or modules is not limited to the listed steps or modules but may, in the alternative, include steps or modules not listed or inherent to such process, method, apparatus or article.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Specifically, referring to fig. 1, fig. 1 is a schematic structural diagram of an intelligent sound console based on multi-network collaboration according to an embodiment of the present invention. As shown in fig. 1, the intelligent sound console based on multi-network collaboration at least comprises a data receiving module 101, an instruction priority calculating module 102, an instruction overall module 103, an instruction executing module 104 and a display module 105.
Specifically, the data receiving module 101 is connected to a plurality of control devices of a plurality of different users through a plurality of different types of networks, and is configured to receive a plurality of tuning instructions sent by the plurality of control devices.
Specifically, the instruction priority calculating module 102 is configured to obtain an instruction parameter corresponding to each tuning instruction, and calculate a priority corresponding to each tuning instruction according to the instruction parameter.
In an alternative embodiment, the instruction parameters include at least one of instruction transport network parameters, instruction content, instruction adjustment amplitude, instruction source device parameters, instruction source user parameters.
In an alternative embodiment, the command adjustment amplitude is the amplitude of change of the audio parameter to be adjusted by the tuning command.
In an alternative embodiment, the instruction transmission network parameters include at least one of a network type, a number of devices in the network, and a network history data transmission record.
In an alternative embodiment, the network type includes at least one of a wired network, a WIFI network, a ZigBee network, and a bluetooth network.
In an alternative embodiment, the command source device parameters include at least one of a device type, a device model number, and a device history command transfer record of the control device.
In an alternative embodiment, the priorities include an execution priority, a network priority, and a source priority.
Specifically, the instruction orchestration module 103 is configured to determine an execution policy corresponding to the plurality of tuning instructions according to a preset algorithm model and priorities corresponding to the plurality of tuning instructions.
Specifically, the instruction execution module 104 is configured to execute a plurality of tuning instructions according to an execution policy.
Specifically, the display module 105 is configured to determine display parameters of the tuning instructions according to the execution policy and the execution result of the instruction execution module 104, and display the tuning instructions according to the display parameters.
Through the tuning console, the scheduling and execution of tuning instructions sent by different devices through different networks can be realized through the receiving of the instructions and the priority calculation, so that the tuning control of multi-network coordination can be realized more intelligently, the accuracy and rationality of tuning are improved, and tuning errors are reduced.
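As a non-limiting illustration of the module structure described above, the following Python sketch models the five modules as plain functions operating on a simple instruction record. All names (TuningInstruction, the field names, the stubbed module functions) are hypothetical and merely mirror the structure of this embodiment; they are not an actual implementation of the console, and the orchestration step is reduced here to a priority-weighted sort, with the model-based orchestration of the later embodiments sketched separately below.

from dataclasses import dataclass

@dataclass
class TuningInstruction:
    """One tuning instruction received from a control device (hypothetical fields)."""
    content: str                     # e.g. "channel_3_gain"
    adjustment: float                # amplitude of the requested change
    network_type: str                # "wired", "wifi", "zigbee" or "bluetooth"
    device_type: str                 # type of the originating control device
    user_id: str                     # originating user
    execution_priority: float = 0.0  # filled in by the priority module
    network_priority: float = 0.0
    source_priority: float = 0.0

def receive_instructions():
    """Data receiving module 101: collect instructions from all connected networks (stubbed)."""
    return []

def compute_priorities(instructions):
    """Instruction priority module 102: annotate each instruction with its three priorities."""
    return instructions

def plan_execution(instructions):
    """Instruction orchestration module 103: simplified stand-in that orders by combined priority."""
    return sorted(instructions,
                  key=lambda i: -(i.execution_priority + i.network_priority + i.source_priority))

def execute(strategy):
    """Instruction execution module 104: apply each instruction and return per-instruction results."""
    return [{"instruction": ins, "success": True, "time": idx, "reason": None}
            for idx, ins in enumerate(strategy)]

def display(results):
    """Display module 105: derive and show display information from the execution results."""
    for r in results:
        print(r["instruction"].content, "->", "ok" if r["success"] else "failed")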
In an alternative embodiment, the specific manner of calculating the priority corresponding to each tuning instruction by the instruction priority calculating module 102 according to the instruction parameters includes:
for each tuning instruction, calculating a network quality parameter and a network transmission delay parameter corresponding to the tuning instruction according to the instruction transmission network parameter in the instruction parameters corresponding to the tuning instruction;
calculating the ratio of the network quality parameter to the network transmission delay parameter to obtain a network priority parameter corresponding to the tuning instruction;
according to the network priority parameters corresponding to all the tuning instructions, determining the network priority of each tuning instruction in all the tuning instructions;
Calculating the execution priority corresponding to the tuning instruction according to the instruction content and/or the instruction adjusting amplitude in the instruction parameters corresponding to the tuning instruction;
and calculating the source priority corresponding to the tuning instruction according to the instruction source equipment parameter and/or the instruction source user parameter in the instruction parameters corresponding to the tuning instruction.
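As a minimal sketch of the ratio-based network priority described above (assuming the instruction record from the earlier sketch), the quality_of and delay_of callables stand in for whatever estimators are derived from the instruction transmission network parameters.

def network_priority_parameter(quality, delay):
    """Network priority parameter = network quality / network transmission delay."""
    return quality / max(delay, 1e-6)   # guard against a zero delay estimate

def rank_by_network_priority(instructions, quality_of, delay_of):
    """Return (instruction, rank) pairs, rank 1 being the highest network priority."""
    scored = sorted(instructions,
                    key=lambda ins: network_priority_parameter(quality_of(ins), delay_of(ins)),
                    reverse=True)
    return [(ins, rank) for rank, ins in enumerate(scored, start=1)]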
In an alternative embodiment, the specific manner in which the instruction priority calculating module 102 calculates the network quality parameter and the network transmission delay parameter corresponding to the tuning instruction according to the instruction transmission network parameter in the instruction parameters corresponding to the tuning instruction includes:
inputting the instruction transmission network parameters in the instruction parameters corresponding to the tuning instruction into the trained first neural network model to obtain the output network quality parameters and the network transmission delay parameters; the first neural network model is trained by a training data set comprising a plurality of training instruction transmission network parameters and corresponding network quality parameter labels and network transmission delay labels.
In an alternative embodiment, the specific manner in which the instruction priority calculating module 102 calculates the execution priority corresponding to the tuning instruction according to the instruction content and/or the instruction adjustment amplitude in the instruction parameters corresponding to the tuning instruction includes:
Inputting instruction content and/or instruction adjusting amplitude in instruction parameters corresponding to the tuning instruction into a trained second neural network model to obtain output execution priority parameters; the second neural network model is obtained through training a training data set comprising a plurality of instruction contents and/or instruction adjustment amplitudes for training and corresponding instruction priority labels;
and determining the execution priority of each tuning instruction in all the tuning instructions according to the execution priority parameters corresponding to all the tuning instructions.
In an alternative embodiment, the instruction priority calculating module 102 calculates the source priority corresponding to the tuning instruction according to the instruction source device parameter and/or the instruction source user parameter in the instruction parameters corresponding to the tuning instruction, including:
inputting the instruction source equipment parameters and/or the instruction source user parameters in the instruction parameters corresponding to the tuning instruction into a trained third neural network model to obtain the output source priority; the third neural network model is obtained through training a training data set comprising a plurality of instruction source equipment parameters and/or instruction source user parameters for training and corresponding priority labels;
And determining the source priority of each tuning instruction in all the tuning instructions according to the source priority parameters corresponding to all the tuning instructions.
Through the above module, the different priorities of each tuning instruction can be calculated accurately by three different neural network models, and the subsequent orchestration can make full use of these priorities to achieve more reasonable and intelligent scheduling and execution of instructions.
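The following sketch shows, under stated assumptions, how the three trained models could be applied to fill in the three priorities. The models are represented simply as callables (net_model, exec_model and source_model are placeholder names), since the actual network architectures and training procedure are not specified beyond the training data described above.

def compute_all_priorities(instructions, net_model, exec_model, source_model):
    """Fill in the three priorities of every instruction.

    The three *_model arguments stand in for the trained first/second/third
    neural network models described above; here they are any callables that
    map the relevant instruction parameters to a score.
    """
    for ins in instructions:
        quality, delay = net_model(ins.network_type)            # first model: quality and delay
        ins.network_priority = quality / max(delay, 1e-6)       # ratio -> network priority parameter
        ins.execution_priority = exec_model(ins.content, ins.adjustment)  # second model
        ins.source_priority = source_model(ins.device_type, ins.user_id)  # third model
    return instructions

# Usage with trivial placeholder "models":
# compute_all_priorities(batch,
#                        net_model=lambda net: (1.0 if net == "wired" else 0.7, 0.02),
#                        exec_model=lambda content, amp: abs(amp),
#                        source_model=lambda dev, user: 1.0)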
In an alternative embodiment, the specific manner in which the instruction orchestration module 103 determines the execution strategy corresponding to the plurality of tuning instructions according to a preset algorithm model and the priorities corresponding to the plurality of tuning instructions includes:
acquiring a tuning demand and a tuning scene corresponding to the current tuning moment;
according to the tuning requirements and tuning scenes, determining tuning objective functions and tuning limiting conditions;
and inputting priorities corresponding to the plurality of tuning instructions into a preset dynamic programming algorithm model, and performing iterative computation according to the tuning objective function and the tuning limiting conditions to determine an optimal execution strategy corresponding to the plurality of tuning instructions.
In an alternative embodiment, the tuning objective function is configured to define that the execution efficiency parameter of the calculated execution strategy is greater than a preset efficiency parameter threshold; the tuning limiting condition is used for limiting the priority of any tuning instruction in the execution strategy to be not lower than the corresponding execution priority, network priority or source priority, and the execution time of each tuning instruction corresponding to the execution strategy is not later than the preset execution time limit.
Optionally, the specific conditions among the tuning limiting conditions can be determined according to the tuning scene or the tuning requirement. For example, in a scene with a large number of concurrent instructions, the condition that the priority of any tuning instruction in the execution strategy is not lower than its corresponding execution priority can be selected, so that the tuning strategy is executed in time; in a tuning scene where the user source is more important, for example when a primary tuning console and a secondary tuning console exist, the condition that the priority of any tuning instruction in the execution strategy is not lower than its corresponding source priority can be selected, so that instructions are differentiated by the importance of their source.
In an alternative embodiment, the specific manner in which the instruction orchestration module 103 inputs the priorities corresponding to the plurality of tuning instructions into the preset dynamic programming algorithm model and performs iterative computation according to the tuning objective function and the tuning limiting conditions to determine the optimal execution strategy corresponding to the plurality of tuning instructions includes:
inputting the priorities corresponding to the plurality of tuning instructions into a preset particle swarm algorithm model for iterative computation, and calculating, in each iteration, the execution efficiency parameter of the execution strategy corresponding to the generated particle state; the execution efficiency parameter is the product of the execution order parameter, the execution equipment parameter and the execution network parameter of each tuning instruction in the execution strategy; the execution order parameter is the product of the execution order of each tuning instruction in the execution strategy and the corresponding execution priority parameter; the execution equipment parameter is the product of the execution order of each tuning instruction in the execution strategy and the corresponding source priority parameter; the execution network parameter is the product of the execution order of each tuning instruction in the execution strategy and the corresponding network priority parameter; the execution order is proportional to how early the tuning instruction is executed;
And judging whether the current execution strategy meets the conditions according to the execution efficiency parameters, the tuning objective function and the tuning limiting conditions so as to calculate and determine the optimal execution strategy corresponding to the tuning instructions.
Through the module, the execution strategy of the plurality of instructions can be determined by means of the particle swarm algorithm model, and the limit conditions of the model can be flexibly changed through tuning scenes and tuning demands, so that the optimal execution strategy corresponding to the plurality of tuning instructions can be calculated and determined.
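As a simplified, non-authoritative sketch of this efficiency-driven search: the function below scores a candidate execution order with one reading of the product-of-order-weighted-priorities definition above, and a random-sampling loop stands in for the particle swarm iterations (the embodiment itself uses a particle swarm model and also checks the per-instruction deadline constraint, which is omitted here).

import random

def efficiency_parameter(order):
    """Execution efficiency parameter of one candidate order (first element runs first).

    The per-instruction order weight is larger the earlier the instruction runs, and the
    efficiency parameter multiplies the three order-weighted priority terms of every instruction.
    """
    n = len(order)
    eff = 1.0
    for pos, ins in enumerate(order):
        w = n - pos   # earlier execution -> larger order weight
        eff *= (w * ins.execution_priority) * (w * ins.source_priority) * (w * ins.network_priority)
    return eff

def best_execution_strategy(instructions, efficiency_threshold, samples=500, seed=0):
    """Random-sampling stand-in for the particle swarm search over execution orders."""
    rng = random.Random(seed)
    best, best_eff = None, float("-inf")
    for _ in range(samples):
        candidate = list(instructions)
        rng.shuffle(candidate)
        eff = efficiency_parameter(candidate)
        if eff > best_eff:
            best, best_eff = candidate, eff
    # only accept a strategy whose efficiency exceeds the preset threshold
    return (best, best_eff) if best_eff > efficiency_threshold else (None, best_eff)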
In an alternative embodiment, the specific manner in which the instruction orchestration module 103 obtains the tuning requirement and the tuning scene corresponding to the current tuning moment includes:
acquiring the current tuning moment and the current tuning place of the intelligent tuning console, and acquiring the current tuning control instruction input by a user through a human-machine interaction device;
acquiring a plurality of historical tuning scenes of the current tuning place at the same moment as the current tuning moment in a historical time period;
determining a tuning scene with the largest occurrence number in a plurality of historical tuning scenes as a tuning scene corresponding to the current tuning moment;
and determining the tuning demand corresponding to the current tuning moment according to the current tuning control instruction.
Optionally, the tuning requirement corresponding to the current tuning moment may be determined according to an instruction-requirement correspondence, and this correspondence may be established by an operator from experimental results, for example by statistics over multiple experiments.
Through the module, the tuning requirements and tuning scenes corresponding to the current tuning moment can be determined through data analysis, so that the objective function and the limiting conditions can be determined conveniently, and the execution strategy of a plurality of instructions can be determined by means of a particle swarm algorithm model.
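A minimal sketch of this scene and requirement determination, assuming the historical scenes are available as a list of labels and the instruction-requirement correspondence as a plain dictionary (both data layouts are illustrative assumptions, not specified by the embodiment):

from collections import Counter

def current_tuning_scene(historical_scenes):
    """The most frequent scene recorded at the same venue and clock time becomes the current scene."""
    return Counter(historical_scenes).most_common(1)[0][0] if historical_scenes else None

def current_tuning_requirement(control_instruction, instruction_to_requirement):
    """Look the requirement up in an operator-provided instruction -> requirement table."""
    return instruction_to_requirement.get(control_instruction, "default")

# e.g. current_tuning_scene(["concert", "speech", "concert"]) -> "concert"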
In an alternative embodiment, the execution result includes the execution time and the execution success or failure information corresponding to each tuning instruction; the specific manner in which the display module 105 determines the display parameters of the plurality of tuning instructions according to the execution strategy and the execution result of the instruction execution module 104 includes:
screening out, from the plurality of tuning instructions according to the execution result of the instruction execution module 104, a plurality of failure instructions that failed to execute, and screening out a plurality of successful instructions that executed successfully together with their corresponding execution times;
acquiring execution data records of a plurality of failure instructions;
determining the execution failure reason of each failure instruction according to the execution data record; the cause of the execution failure includes one or more of a device defect, an execution time problem, and a network problem;
Determining the occurrence times of the execution failure reasons of each failure instruction in the execution failure reasons of all the failure instructions;
calculating display parameters corresponding to each tuning instruction based on a dynamic programming algorithm, the occurrence times and the execution time; the display parameters include display position and display brightness.
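A small sketch of this screening and counting step, assuming the per-instruction result records from the earlier pipeline sketch and a placeholder reason_of diagnostic callable (the actual diagnostic logic is not specified here):

from collections import Counter

def split_results(results):
    """Separate successfully executed instructions from failed ones."""
    successes = [r for r in results if r["success"]]
    failures = [r for r in results if not r["success"]]
    return successes, failures

def failure_reason_counts(failures, reason_of):
    """Count how often each execution failure reason occurs across all failed instructions.

    reason_of maps an execution data record to one of the reasons named above
    (equipment defect, execution time problem, network problem).
    """
    return Counter(reason_of(f) for f in failures)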
In an alternative embodiment, the display module 105 calculates the display parameter corresponding to each tuning instruction based on the dynamic programming algorithm, the number of occurrences and the execution time, including:
determining the objective function as minimizing the sum of the display brightness and the sum of the display distance corresponding to all tuning instructions in the display strategy; the display distance is the distance between the display position and the center point of the display area;
determining the constraint includes:
the display distance of any successful instruction is smaller than the display distance of any failed instruction;
the display brightness of the success instruction is smaller than that of the failure instruction;
among all the successful instructions, the later the execution time of a successful instruction, the higher its display brightness and the smaller its display distance;
among all the failure instructions, the more frequently the execution failure reason of a failure instruction occurs, the higher its display brightness and the smaller its display distance;
inputting the occurrence times and execution time corresponding to all tuning instructions into a preset particle swarm algorithm model, and carrying out iterative calculation according to an objective function and a limiting condition until an optimal display strategy is calculated; the display strategy comprises display parameters corresponding to each tuning instruction.
By the module, the display strategy of the execution results of the plurality of instructions can be realized by means of the particle swarm algorithm model, so that the optimal display strategy can be calculated.
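As a hedged illustration only: rather than the particle swarm search, the sketch below assigns display position ranks and brightness greedily so that the listed constraints hold (successes nearer the center and dimmer than failures; later-executed successes and more frequently failing reasons more prominent). It reuses the record fields and reason counts from the earlier sketches, all of which are assumptions, and it does not claim to reach the optimal display strategy.

def assign_display_parameters(successes, failures, reason_counts):
    """Greedy, constraint-respecting assignment of (distance rank, brightness) per instruction.

    A smaller distance rank means closer to the center of the display area;
    brightness values are on an illustrative 0..1 scale.
    """
    # Successes: the later the execution time, the nearer and brighter.
    ordered_successes = sorted(successes, key=lambda r: r["time"], reverse=True)
    # Failures: the more frequent the failure reason, the nearer and brighter.
    ordered_failures = sorted(failures, key=lambda r: reason_counts[r["reason"]], reverse=True)

    layout, rank = {}, 0
    # Successes occupy the inner ranks with lower brightness than any failure.
    for i, r in enumerate(ordered_successes):
        brightness = 0.5 - 0.3 * i / max(len(ordered_successes), 1)
        layout[r["instruction"].content] = (rank, brightness)
        rank += 1
    # Failures sit further out but brighter, most frequent reasons first.
    for i, r in enumerate(ordered_failures):
        brightness = 1.0 - 0.3 * i / max(len(ordered_failures), 1)
        layout[r["instruction"].content] = (rank, brightness)
        rank += 1
    return layout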
The foregoing describes certain embodiments of the present disclosure, other embodiments being within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. Furthermore, the processes depicted in the accompanying drawings do not necessarily have to be in the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for apparatus, devices, non-transitory computer readable storage medium embodiments, the description is relatively simple, as it is substantially similar to method embodiments, with reference to portions of the description of method embodiments being relevant.
The apparatus, the device, the nonvolatile computer readable storage medium and the method provided in the embodiments of the present disclosure correspond to each other, and therefore, the apparatus, the device, and the nonvolatile computer storage medium also have similar advantageous technical effects as those of the corresponding method, and since the advantageous technical effects of the method have been described in detail above, the advantageous technical effects of the corresponding apparatus, device, and nonvolatile computer storage medium are not described herein again.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit, so it cannot be said that an improvement of a method flow cannot be realised by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital intelligent tuning console onto a single PLD, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually manufacturing integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must likewise be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art will also appreciate that a hardware circuit implementing a logic method flow can easily be obtained simply by slightly logically programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller can therefore be regarded as a hardware component, and the means included in it for realising various functions can also be regarded as structures within the hardware component. Or even the means for realising various functions can be regarded both as software modules implementing the method and as structures within the hardware component.
The intelligent tuning console, device, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that the present description embodiments may be provided as a method, smart mixing console, or computer program product. Accordingly, the present specification embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description embodiments may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (intelligent tuning consoles), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ……" does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the smart mixing console embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference is made to the section of the method embodiment for relevant points.
Finally, it should be noted that the embodiment of the invention disclosed above, an intelligent sound console based on multi-network cooperation, is only a preferred embodiment of the invention and is only used to illustrate the technical solution of the invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (8)

1. An intelligent sound mixing console based on multi-network cooperation, characterized in that the intelligent sound mixing console comprises:
The data receiving module is connected to a plurality of control devices of a plurality of different users through a plurality of different types of networks and is used for receiving a plurality of tuning instructions sent by the control devices;
the instruction priority calculating module is used for obtaining instruction parameters corresponding to each tuning instruction and calculating the priority corresponding to each tuning instruction according to the instruction parameters; the priorities include an execution priority, a network priority, and a source priority; the specific mode of calculating the priority corresponding to each tuning instruction by the instruction priority calculating module according to the instruction parameters comprises the following steps:
for each tuning instruction, calculating a network quality parameter and a network transmission delay parameter corresponding to the tuning instruction according to the instruction transmission network parameter in the instruction parameters corresponding to the tuning instruction;
calculating the ratio of the network quality parameter to the network transmission delay parameter to obtain a network priority parameter corresponding to the tuning instruction;
according to the network priority parameters corresponding to all the tuning instructions, determining the network priority of each tuning instruction in all the tuning instructions;
calculating the execution priority corresponding to the tuning instruction according to the instruction content and/or the instruction adjusting amplitude in the instruction parameters corresponding to the tuning instruction;
Calculating the source priority corresponding to the tuning instruction according to the instruction source equipment parameter and/or the instruction source user parameter in the instruction parameters corresponding to the tuning instruction;
the instruction orchestration module is used for determining execution strategies corresponding to the tuning instructions according to a preset algorithm model and priorities corresponding to the tuning instructions;
the instruction execution module is used for executing the tuning instructions according to the execution strategy;
and the display module is used for determining the display parameters of the tuning instructions according to the execution strategy and the execution result of the instruction execution module and displaying according to the display parameters.
2. The intelligent sound mixing console based on multi-network collaboration of claim 1, wherein the instruction parameters include at least one of instruction transmission network parameters, instruction content, instruction adjustment amplitude, instruction source equipment parameters, instruction source user parameters; the instruction adjusting amplitude is the changing amplitude of the audio parameters to be adjusted by the tuning instruction; the instruction transmission network parameters comprise at least one of network type, number of devices in the network and network historical data transmission record; the network type comprises at least one of a wired network, a WIFI network, a ZigBee network and a Bluetooth network; the instruction source device parameters comprise at least one of a device type, a device model number and a device history instruction transmission record of the control device.
3. The intelligent sound mixing console based on multi-network collaboration according to claim 1, wherein the specific manner in which the instruction priority calculating module calculates the network quality parameter and the network transmission delay parameter corresponding to the tuning instruction according to the instruction transmission network parameter in the instruction parameters corresponding to the tuning instruction comprises:
inputting the instruction transmission network parameters in the instruction parameters corresponding to the tuning instruction into a trained first neural network model to obtain the output network quality parameter and network transmission delay parameter; the first neural network model is obtained by training on a training data set comprising a plurality of instruction transmission network parameters for training, corresponding network quality parameter labels and corresponding network transmission delay labels (an illustrative sketch of this first model follows this claim);
and the specific manner in which the instruction priority calculating module calculates the execution priority corresponding to the tuning instruction according to the instruction content and/or the instruction adjustment amplitude in the instruction parameters corresponding to the tuning instruction comprises:
inputting the instruction content and/or the instruction adjustment amplitude in the instruction parameters corresponding to the tuning instruction into a trained second neural network model to obtain the output execution priority parameter; the second neural network model is obtained by training on a training data set comprising a plurality of instruction contents and/or instruction adjustment amplitudes for training and corresponding execution priority labels;
determining the execution priority of each tuning instruction among all the tuning instructions according to the execution priority parameters corresponding to all the tuning instructions;
and the specific manner in which the instruction priority calculating module calculates the source priority corresponding to the tuning instruction according to the instruction source equipment parameter and/or the instruction source user parameter in the instruction parameters corresponding to the tuning instruction comprises:
inputting the instruction source equipment parameters and/or the instruction source user parameters in the instruction parameters corresponding to the tuning instruction into a trained third neural network model to obtain the output source priority parameter; the third neural network model is obtained by training on a training data set comprising a plurality of instruction source equipment parameters and/or instruction source user parameters for training and corresponding source priority labels;
and determining the source priority of each tuning instruction in all the tuning instructions according to the source priority parameters corresponding to all the tuning instructions.
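A hedged sketch of the first model of claim 3 is given below. The claim only requires "a trained neural network model"; the scikit-learn MLPRegressor, the feature encoding and the toy training data are all assumptions chosen to keep the example short, not the claimed implementation.

```python
# Sketch of the claim-3 first model: regress (network quality, transmission delay)
# from encoded instruction-transmission-network parameters, then reuse the
# claim-1 quality/delay ratio. Features, data and model choice are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

# toy features: [network_type_code, device_count, mean_historic_throughput_mbps]
X_train = np.array([[0, 4, 95.0], [1, 12, 60.0], [2, 30, 2.0], [3, 2, 1.5]])
# labels: [network_quality, transmission_delay_ms]
y_train = np.array([[0.95, 5.0], [0.80, 20.0], [0.55, 60.0], [0.40, 45.0]])

first_model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
first_model.fit(X_train, y_train)

quality, delay = first_model.predict(np.array([[1, 8, 70.0]]))[0]
network_priority_parameter = quality / max(delay, 1e-9)  # ratio defined in claim 1
```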
4. The intelligent sound mixing console based on multi-network collaboration according to claim 3, wherein the specific manner in which the instruction orchestration module determines the execution strategies corresponding to the plurality of tuning instructions according to a preset algorithm model and the priorities corresponding to the plurality of tuning instructions comprises:
acquiring a tuning demand and a tuning scene corresponding to the current tuning moment;
determining a tuning objective function and tuning limiting conditions according to the tuning demand and the tuning scene;
and inputting priorities corresponding to the tuning instructions into a preset dynamic programming algorithm model, and performing iterative computation according to the tuning objective function and tuning limiting conditions to determine execution strategies corresponding to the tuning instructions.
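The claim leaves the "preset dynamic programming algorithm model" unspecified, so the sketch below is only a simplified stand-in: it combines the three priorities with assumed weights and enforces a deadline-style limiting condition. The function name, field names and weights are hypothetical.

```python
# Simplified stand-in for the claim-4 orchestration step: order instructions by
# a weighted combination of their execution, network and source priorities and
# drop anything that would exceed a preset execution time limit.
def plan_execution(instructions, deadline_ms, weights=(0.5, 0.3, 0.2)):
    """instructions: list of dicts with priority ranks (1 = highest) and an
    estimated execution time; returns the ordered list of instruction ids."""
    w_exec, w_net, w_src = weights

    def score(ins):
        # lower rank numbers mean higher priority, so invert them into a score
        return (w_exec / ins["execution_priority"]
                + w_net / ins["network_priority"]
                + w_src / ins["source_priority"])

    strategy, elapsed = [], 0.0
    for ins in sorted(instructions, key=score, reverse=True):
        if elapsed + ins["estimated_time_ms"] > deadline_ms:
            continue  # would violate the preset execution time limit
        elapsed += ins["estimated_time_ms"]
        strategy.append(ins["instruction_id"])
    return strategy
```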
5. The intelligent sound mixing console based on multi-network collaboration of claim 4, wherein the tuning objective function is configured to require that the execution efficiency parameter of the calculated execution strategy is greater than a preset efficiency parameter threshold; the tuning limiting conditions are used for limiting the priority of any tuning instruction in the execution strategy to be not lower than its corresponding execution priority, network priority or source priority, and the execution time of each tuning instruction under the execution strategy to be not later than a preset execution time limit;
and the specific manner in which the instruction orchestration module inputs the priorities corresponding to the tuning instructions into the preset dynamic programming algorithm model and performs iterative computation according to the tuning objective function and the tuning limiting conditions to determine the execution strategies corresponding to the tuning instructions comprises:
inputting the priorities corresponding to the tuning instructions into a preset particle swarm algorithm model for iterative computation, and calculating, in each iteration, the execution efficiency parameter corresponding to the execution strategy represented by the particle state generated in that iteration; the execution efficiency parameter is the product of the execution order parameter, the execution equipment parameter and the execution network parameter of the tuning instructions in the execution strategy; the execution order parameter is the product of the execution order of each tuning instruction in the execution strategy and its corresponding execution priority parameter; the execution equipment parameter is the product of the execution order of each tuning instruction in the execution strategy and its corresponding source priority parameter; the execution network parameter is the product of the execution order of each tuning instruction in the execution strategy and its corresponding network priority parameter; the execution order value of a tuning instruction is proportional to how early it is executed (a sketch of this efficiency computation follows this claim);
and judging whether the current execution strategy meets the conditions according to the execution efficiency parameters, the tuning objective function and the tuning limiting conditions so as to calculate and determine the execution strategy corresponding to the tuning instructions.
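Read literally, claim 5 evaluates each candidate particle state through an execution efficiency parameter built from per-instruction products. The sketch below shows only that bookkeeping and the condition check, with hypothetical field names; the particle swarm update itself is left out.

```python
# Sketch of the claim-5 fitness check: compute the execution efficiency
# parameter of a candidate execution strategy and test it against the tuning
# objective function and limiting conditions. Field names are assumptions.
from math import prod

def execution_efficiency(strategy):
    """strategy: list of dicts with an execution order (1 = executed first) and
    numeric execution / source / network priority parameters per instruction."""
    order_param = prod(s["order"] * s["execution_priority_param"] for s in strategy)
    equipment_param = prod(s["order"] * s["source_priority_param"] for s in strategy)
    network_param = prod(s["order"] * s["network_priority_param"] for s in strategy)
    return order_param * equipment_param * network_param

def meets_conditions(strategy, efficiency_threshold, time_limit_ms):
    """Objective: efficiency above the preset threshold; limiting condition:
    every instruction finishes within the preset execution time limit."""
    on_time = all(s["execution_time_ms"] <= time_limit_ms for s in strategy)
    return execution_efficiency(strategy) > efficiency_threshold and on_time
```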
6. The intelligent sound mixing console based on multi-network collaboration according to claim 4, wherein the specific manner in which the instruction orchestration module obtains the tuning demand and the tuning scene corresponding to the current tuning moment comprises:
acquiring the current tuning moment and the current tuning place of the intelligent sound mixing console, and acquiring a current tuning control instruction input by a user through a human-computer interaction device;
acquiring a plurality of historical tuning scenes of the current tuning place at the same moment as the current tuning moment in a historical time period;
determining a tuning scene with the largest occurrence number in the plurality of historical tuning scenes as a tuning scene corresponding to the current tuning moment;
and determining the tuning demand corresponding to the current tuning moment according to the current tuning control instruction.
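A minimal sketch of this claim follows, assuming the historical scenes are available as plain labels and the tuning demand is carried in the control instruction; both of those assumptions go beyond what the claim states.

```python
# Sketch of claim 6: the tuning scene is the most frequent historical scene
# recorded at this place at the same time of day, and the tuning demand comes
# from the current control instruction. Names and labels are illustrative.
from collections import Counter

def scene_and_demand(historical_scenes, control_instruction):
    scene = Counter(historical_scenes).most_common(1)[0][0] if historical_scenes else None
    return scene, control_instruction.get("tuning_demand")

scene, demand = scene_and_demand(
    ["live_vocal", "conference", "live_vocal"],
    {"tuning_demand": "prioritise_vocal_clarity"},
)
```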
7. The intelligent sound mixing console based on multi-network collaboration according to claim 1, wherein the execution result comprises the execution time and the execution success or failure information corresponding to each tuning instruction; and the specific manner in which the display module determines the display parameters of the tuning instructions according to the execution strategy and the execution result of the instruction execution module comprises:
screening out, from the tuning instructions and according to the execution result of the instruction execution module, a plurality of failed instructions whose execution was unsuccessful, and a plurality of successful instructions whose execution succeeded together with their corresponding execution times;
acquiring execution data records of the plurality of failed instructions;
determining the execution failure reason of each failed instruction according to the execution data records; the execution failure reasons include one or more of equipment defects, execution time problems and network problems;
determining, for each failed instruction, the number of times its execution failure reason occurs among the execution failure reasons of all the failed instructions;
calculating display parameters corresponding to each tuning instruction based on a dynamic programming algorithm, the occurrence times and the execution time; the display parameters include display position and display brightness.
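The following sketch mirrors the screening and counting steps of this claim under the assumption that each execution record already carries a failure reason; how the reason is actually derived from the execution data record is not specified by the claim, so the record structure here is hypothetical.

```python
# Sketch of claim 7: split tuning instructions into successes and failures,
# then count how often each failure reason appears among all failed
# instructions. Record structure and reason labels are assumptions.
from collections import Counter

def screen_results(execution_results):
    """execution_results: dicts with 'instruction_id', 'success',
    'execution_time_ms' and, for failures, a 'failure_reason'."""
    successes = [r for r in execution_results if r["success"]]
    failures = [r for r in execution_results if not r["success"]]
    reason_counts = Counter(f["failure_reason"] for f in failures)
    # occurrence count per failed instruction, as used by the display module
    occurrences = {f["instruction_id"]: reason_counts[f["failure_reason"]] for f in failures}
    return successes, failures, occurrences
```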
8. The intelligent sound mixing console based on multi-network collaboration of claim 7, wherein the specific manner in which the display module calculates the display parameters corresponding to each tuning instruction based on the dynamic programming algorithm, the occurrence times and the execution time comprises:
determining an objective function that minimizes the sum of the display brightness values and the sum of the display distances corresponding to all the tuning instructions in a display strategy; the display distance is the distance between the display position and the center point of the display area;
determining limiting conditions, which include:
the display distance of any successful instruction is smaller than the display distance of any failed instruction;
the display brightness of any successful instruction is lower than that of any failed instruction;
among all the successful instructions, the longer the execution time of a successful instruction, the higher its display brightness and the smaller its display distance;
among all the failed instructions, the higher the occurrence count of a failed instruction's failure reason, the higher its display brightness and the smaller its display distance;
inputting the occurrence times and execution time corresponding to all tuning instructions into a preset particle swarm algorithm model, and performing iterative computation according to the objective function and the limiting conditions until a display strategy is calculated; the display strategy comprises display parameters corresponding to each tuning instruction.
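In place of the preset particle swarm model, the sketch below assigns brightness and distance ranks directly so that the four limiting conditions of claim 8 hold; the numeric scales and field names are assumptions and the actual optimization is omitted.

```python
# Simplified stand-in for the claim-8 display strategy: successes are dimmer
# and closer to the display centre than failures; longer-running successes and
# more frequently failing instructions rank first within their groups.
def display_strategy(successes, failures, occurrences):
    """successes: (instruction_id, execution_time_ms) pairs; failures: ids;
    occurrences: failed instruction id -> failure-reason occurrence count."""
    succ_order = sorted(successes, key=lambda s: s[1], reverse=True)
    fail_order = sorted(failures, key=lambda f: occurrences[f], reverse=True)
    strategy = {}
    for i, (iid, _) in enumerate(succ_order):
        strategy[iid] = {"display_distance": 10 + i, "display_brightness": 30 - i}
    offset = 10 + len(succ_order)  # keeps every failure farther out than any success
    for i, iid in enumerate(fail_order):
        strategy[iid] = {"display_distance": offset + i, "display_brightness": 90 - i}
    return strategy
```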
CN202310779422.2A 2023-06-28 2023-06-28 Intelligent sound console based on multi-network cooperation Active CN116684499B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310779422.2A CN116684499B (en) 2023-06-28 2023-06-28 Intelligent sound console based on multi-network cooperation

Publications (2)

Publication Number Publication Date
CN116684499A CN116684499A (en) 2023-09-01
CN116684499B (en) 2023-12-19

Family

ID=87783715

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1965526A1 (en) * 2002-07-30 2008-09-03 Yamaha Corporation Digital mixing system with dual consoles and cascade engines

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1412681A (en) * 2001-10-16 2003-04-23 惠普公司 System for defining priority level of several mobile calculating equipment control devices and its method
CN104023305A (en) * 2014-05-30 2014-09-03 爱培科科技开发(深圳)有限公司 Audio mixing control method and apparatus for Wince vehicle-mounted multimedia
CN109637512A (en) * 2018-11-28 2019-04-16 北京昆羽科技有限公司 A kind of tuning system and tuning method
CN109991842A (en) * 2019-03-14 2019-07-09 合肥工业大学 Piano tone tuning method and system neural network based

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A brief analysis of a retrofit design scheme for an audio broadcast control ***; Liu Nana; Peng Yongxian; Video Engineering (Issue 06); pp. 76-79 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant