WO2020189496A1 - Conversion System, Method, and Program - Google Patents
Conversion System, Method, and Program
- Publication number
- WO2020189496A1 (PCT/JP2020/010806)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- output
- server
- data
- stage
- input
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/04—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
- H04L63/0428—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
- H04L63/0442—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply asymmetric encryption, i.e. different keys for encryption and decryption
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Definitions
- the present invention relates to a conversion system, method, program, etc. using machine learning technology.
- In recent years, machine learning technology, so-called AI (Artificial Intelligence), has been attracting attention, and its application to various uses and problems has been attempted.
- For example, manufacturers and production-line automation companies are trying to apply machine learning technology to industrial robots installed in factories and the like in order to perform more appropriate control (for example, Patent Document 1).
- However, expertise in machine learning technology is not yet widespread. Therefore, when machine learning technology is applied to a specific target, it is often provided in a format in which a business operator with specialized knowledge of machine learning technology supplies it to users who have a specific problem but lack such knowledge.
- The present invention has been made against the above technical background, and an object of the present invention is to provide a secure system capable of satisfying the demands of both users and providers of machine learning technology.
- The conversion system according to the present invention is a conversion system in which a client device is connected to a server via a network, performs conversion processing on input data based on a trained model obtained by machine learning, and generates output data. The client device includes: an input-side conversion processing unit that holds the part of the trained model from its input stage to a first intermediate stage and generates, by performing conversion processing based on the input data, a first intermediate output at the first intermediate stage of the trained model; a client-side transmission unit that transmits the first intermediate output to the server; a client-side receiving unit that receives from the server a second intermediate output, generated in the server from the first intermediate output, which is the conversion output at a second intermediate stage closer to the output side than the first intermediate stage of the trained model; and an output-side conversion processing unit that holds the part of the trained model from the second intermediate stage to its output stage and generates the output data by performing conversion processing based on the second intermediate output.
- Here, the terms first intermediate output and second intermediate output include not only the raw output values of the respective stages of the trained model but also values obtained by applying a predetermined conversion to those output values, such as encryption.
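As a concrete illustration of the three-way split described above, the following is a minimal sketch in which a small multilayer perceptron stands in for the trained model. The layer shapes, weights, and split points are all assumptions for illustration, not taken from the specification.

```python
import numpy as np

# Hypothetical 4-layer network standing in for the trained model.
rng = np.random.default_rng(0)
W = [rng.standard_normal((8, 8)) for _ in range(4)]
relu = lambda x: np.maximum(x, 0.0)

def input_side(x):        # input stage -> first intermediate stage (on the client)
    return relu(W[0] @ x)

def server_side(x1):      # first -> second intermediate stage (on the server)
    return relu(W[2] @ relu(W[1] @ x1))

def output_side(z1):      # second intermediate stage -> output stage (on the client)
    return W[3] @ z1

x = rng.standard_normal(8)
x1 = input_side(x)        # first intermediate output
z1 = server_side(x1)      # second intermediate output
o = output_side(z1)       # final output data O
assert o.shape == (8,)
```

Because the raw input data and the final output never leave the client, and the server never holds the full model, both sides of the arrangement keep their respective secrets.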
- The client device may further include: a cache table storage unit that stores a cache table representing the correspondence between first intermediate outputs and second intermediate outputs; a determination unit that determines whether or not a second intermediate output corresponding to the first intermediate output exists in the cache table; and a selective acquisition unit that, when the determination unit determines that the corresponding second intermediate output exists in the cache table, acquires it from the cache table instead of operating the client-side transmitting unit and the client-side receiving unit, and that otherwise operates the client-side transmitting unit and the client-side receiving unit and acquires the second intermediate output received by the client-side receiving unit.
- The client device may further include a cache table storage unit that stores the second intermediate output received by the client-side receiving unit in the cache table in association with the corresponding first intermediate output.
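The cache table logic above can be sketched as follows; the key derivation, the stand-in server function, and all names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the cache table: the key is derived from the first
# intermediate output and the value is the second intermediate output.
cache_table = {}
server_calls = 0

def query_server(x1):                 # stand-in for the network round trip
    global server_calls
    server_calls += 1
    return x1 * 2.0                   # pretend middle-stage conversion

def get_second_intermediate(x1):
    key = x1.tobytes()                # exact-match key for the lookup
    if key in cache_table:            # determination unit: cache hit
        return cache_table[key]
    z1 = query_server(x1)             # client-side transmit / receive
    cache_table[key] = z1             # cache table storage unit
    return z1

x1 = np.array([0.5, -1.0])
get_second_intermediate(x1)
get_second_intermediate(x1)           # second call is served from the cache
assert server_calls == 1
```

On a cache hit no communication with the server occurs at all, which is what allows the near-autonomous operation described later.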
- The client device may further include an encryption unit that encrypts the first intermediate output to generate a first encrypted intermediate output, and a decryption unit that decrypts a second encrypted intermediate output, which is the second intermediate output encrypted by the server. In that case, the client-side transmission unit transmits the first encrypted intermediate output to the server; the server decrypts the received first encrypted intermediate output to restore the first intermediate output, encrypts the second intermediate output to generate the second encrypted intermediate output, and transmits it to the client device; and the client-side receiving unit receives the second encrypted intermediate output.
- The client device may further include a hashing processing unit that hashes the first encrypted intermediate output to generate a first hash value. In the cache table, the first intermediate output may be represented by the first hash value, and the determination unit may determine, based on the first hash value, whether or not the corresponding second intermediate output exists.
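A minimal sketch of the hashing processing unit follows; the choice of SHA-256 as the hash function is an assumption, the specification only requires a fixed-length value without regularity.

```python
import hashlib
import numpy as np

# Key the cache table by a hash of the encrypted first intermediate output.
def hash_key(x1_encrypted: bytes) -> str:
    return hashlib.sha256(x1_encrypted).hexdigest()

x1_enc = np.array([0.25, -1.5]).tobytes()   # stand-in encrypted intermediate output
y1 = hash_key(x1_enc)
assert len(y1) == 64            # fixed-length key regardless of input size
assert y1 == hash_key(x1_enc)   # deterministic, so usable as a lookup key
```

Using the fixed-length hash rather than the intermediate output itself keeps the table keys small and makes the existence check a constant-time lookup.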
- The client device may further include a value rounding processing unit that rounds the first intermediate output to generate a first rounded intermediate output.
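Rounding serves the cache: nearby intermediate outputs collapse onto the same cache key, raising the hit rate. A minimal sketch, where the rounding precision (one decimal place) is an assumed parameter:

```python
import numpy as np

def round_intermediate(x1, decimals=1):
    # Collapse nearby first intermediate outputs onto a common representative.
    return np.round(x1, decimals)

a = np.array([0.1401, 0.0998])
b = np.array([0.1399, 0.1002])
# Distinct raw outputs now produce the same rounded value, i.e. the same key.
assert np.array_equal(round_intermediate(a), round_intermediate(b))
```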
- The client device may further include an approximate function generation unit that generates an approximation function based on the cache table, and an approximate conversion processing unit that generates the second intermediate output based on the approximation function, with the first intermediate output as input.
- the approximation function may be a function to which the backpropagation method can be applied.
- the approximation function may include a bypass function.
- the approximation function may be composed of a weighted sum of a plurality of different approximation functions.
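The items above can be sketched together: an approximation of the server-side stage is fitted from cached (first, second) intermediate-output pairs as a weighted sum of a linear sub-approximation function and an identity "bypass" term. The sub-functions, the fitting method, and the mixing weight are all assumptions; the specification only requires that backpropagation be applicable.

```python
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.standard_normal((200, 4))               # cached first intermediate outputs
Z1 = np.tanh(X1 @ rng.standard_normal((4, 4)))   # cached second intermediate outputs

# Fit a linear sub-approximation Z1 ~= X1 @ A by least squares.
A, *_ = np.linalg.lstsq(X1, Z1, rcond=None)
alpha = 0.9                                      # mixing weight (assumed)

def approx(x1):
    # Weighted sum of the linear sub-approximation and a bypass (identity) term.
    return alpha * (x1 @ A) + (1 - alpha) * x1

err = np.mean((approx(X1) - Z1) ** 2)            # fit quality on the sample pairs
```

Because the approximation is differentiable end to end, gradients can flow through it, which is what makes the backpropagation-based learning of the later embodiments possible even though the server-side layers themselves are not visible to the client.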
- the client device may be composed of a plurality of devices, and the cache table may be shared by the plurality of client devices.
- The server may include an intermediate conversion processing unit that holds the part of the trained model from the first intermediate stage to the second intermediate stage and generates the second intermediate output at the second intermediate stage by performing conversion processing based on the first intermediate output.
- The server may be composed of multiple stages of servers connected via a network, each holding a partial model obtained by dividing the trained model between the first intermediate stage and the second intermediate stage, and the second intermediate output may be generated by sequentially performing conversion processing with each server's partial model.
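The multi-stage arrangement amounts to chaining partial models in order; a minimal sketch, where the per-stage functions are purely illustrative:

```python
# Each server in the chain holds one partial model between the first and
# second intermediate stages; outputs are passed hop by hop.
def make_stage(scale):
    return lambda xs: [scale * v for v in xs]

partial_models = [make_stage(2.0), make_stage(0.5), make_stage(3.0)]

def run_servers(x1):
    h = x1
    for stage in partial_models:   # each iteration is one server hop
        h = stage(h)
    return h                        # second intermediate output

assert run_servers([1.0, 2.0]) == [3.0, 6.0]
```

No single server in the chain holds the whole middle section of the model, which further limits what any one operator can reconstruct.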
- the client device may further include an input / output data table storage unit that stores an input / output data table representing the relationship between the input data and the output data corresponding to the input data.
- The present invention can also be conceived of as a client device. That is, the client device according to the present invention is a client device that is connected to a server via a network, performs conversion processing on input data based on a trained model obtained by machine learning, and generates output data, and includes: an input-side conversion processing unit that holds the part of the trained model from its input stage to a first intermediate stage and generates a first intermediate output at the first intermediate stage by performing conversion processing based on the input data; a client-side transmission unit that transmits the first intermediate output to the server; a client-side receiving unit that receives from the server a second intermediate output, generated in the server based on the first intermediate output, which is the conversion output at a second intermediate stage closer to the output side than the first intermediate stage; and an output-side conversion processing unit that holds the part of the trained model from the second intermediate stage to its output stage and generates the output data by performing conversion processing based on the second intermediate output.
- The present invention can also be conceived of as a conversion method. That is, the conversion method according to the present invention is a conversion method executed on a device connected to a server via a network, which performs conversion processing on input data based on a trained model obtained by machine learning and generates output data, and includes: an input-side conversion processing step of generating a first intermediate output at a first intermediate stage of the trained model by performing conversion processing based on the input data using the part of the trained model from its input stage to the first intermediate stage; a client-side transmission step of transmitting the first intermediate output to the server; a client-side receiving step of receiving from the server a second intermediate output, generated in the server based on the first intermediate output, which is the conversion output at a second intermediate stage closer to the output side than the first intermediate stage; and an output-side conversion processing step of generating the output data by performing conversion processing based on the second intermediate output using the part of the trained model from the second intermediate stage to its output stage.
- The control program according to the present invention is a control program for a client device that is connected to a server via a network, performs conversion processing on input data based on a trained model obtained by machine learning, and generates output data, and includes: an input-side conversion processing step of generating a first intermediate output at a first intermediate stage of the trained model by performing conversion processing based on the input data using the part of the trained model from its input stage to the first intermediate stage; a client-side transmission step of transmitting the first intermediate output to the server; a client-side receiving step of receiving from the server a second intermediate output, generated in the server based on the first intermediate output, which is the conversion output at a second intermediate stage closer to the output side than the first intermediate stage; and an output-side conversion processing step of generating the output data by performing conversion processing based on the second intermediate output using the part of the trained model from the second intermediate stage to its output stage.
- The present invention can also be conceived of as a server. That is, the server according to the present invention is a server connected via a network to a client device, in a system that performs conversion processing on input data based on a trained model obtained by machine learning and generates output data. The client device includes: an input-side conversion processing unit that holds the part of the trained model from its input stage to a first intermediate stage and generates a first intermediate output at the first intermediate stage by performing conversion processing based on the input data; a client-side transmitting unit that transmits the first intermediate output to the server; a client-side receiving unit that receives from the server a second intermediate output, generated in the server based on the first intermediate output, which is the conversion output at a second intermediate stage closer to the output side than the first intermediate stage; and an output-side conversion processing unit that holds the part of the trained model from the second intermediate stage to its output stage and generates the output data by performing conversion processing based on the second intermediate output.
- The conversion system according to the present invention, viewed from another aspect, is a conversion system that generates output data by performing conversion processing on input data based on a machine learning model, and includes: an input-side conversion processing unit that holds the part of the machine learning model from its input stage to a first intermediate stage and generates a first intermediate output at the first intermediate stage by performing conversion processing based on the input data; an output-side conversion processing unit that holds the part of the machine learning model from a second intermediate stage, closer to the output side than the first intermediate stage, to its output stage and generates the output data by performing conversion processing based on the input to the second intermediate stage; and an intermediate conversion processing unit that performs conversion processing based on an approximation function, generated from sample information representing the correspondence between first intermediate outputs and second intermediate outputs in the machine learning model, and generates the second intermediate output based on the first intermediate output. The output data is generated by operating the input-side conversion processing unit, the intermediate conversion processing unit, and the output-side conversion processing unit with the input data as the input of the input-side conversion processing unit.
- FIG. 1 is an overall configuration diagram (first embodiment) of the system.
- FIG. 2 is a hardware configuration diagram of the server.
- FIG. 3 is a hardware configuration diagram of the robot.
- FIG. 4 is a functional block diagram (first embodiment) of the robot.
- FIG. 5 is a functional block diagram (first embodiment) of the server.
- FIG. 6 shows the prediction process (first embodiment) in the robot (part 1).
- FIG. 7 shows the prediction process (first embodiment) in the robot (part 2).
- FIG. 8 shows the prediction process (first embodiment) in the server.
- FIG. 9 is a conceptual diagram (first embodiment) relating to the prediction process.
- FIG. 10 is an overall configuration diagram (second embodiment) of the system.
- FIG. 11 is a hardware configuration diagram of the intermediate server.
- FIG. 12 is a functional block diagram (second embodiment) relating to the intermediate server.
- FIG. 13 shows the prediction process (second embodiment) in the intermediate server (part 1).
- FIG. 14 shows the prediction process (second embodiment) in the intermediate server (part 2).
- FIG. 15 shows the prediction process (second embodiment) in the final server.
- FIG. 16 is a conceptual diagram (second embodiment) relating to the prediction process.
- FIG. 17 is a functional block diagram (third embodiment) of the robot.
- FIG. 18 is a functional block diagram (third embodiment) of the intermediate server.
- FIG. 19 is a functional block diagram (third embodiment) regarding the final server.
- FIG. 20 shows the learning process (third embodiment) in the robot.
- FIG. 21 is a conceptual diagram regarding approximate data.
- FIG. 22 shows the storage process (third embodiment) in the intermediate server.
- FIG. 23 shows the learning process (third embodiment) in the intermediate server.
- FIG. 24 shows the storage process (third embodiment) in the final server.
- FIG. 25 shows the learning process (third embodiment) in the final server.
- FIG. 26 is a conceptual diagram (third embodiment) regarding the learning process.
- FIG. 27 is an overall configuration diagram (modification example) of the system.
- FIG. 28 is a conceptual diagram regarding an example of using the bypass function.
- FIG. 29 is a conceptual diagram of the bypass function.
- FIG. 30 is a conceptual diagram of approximation using a sub-approximation function.
- In this specification, prediction processing means the forward arithmetic processing of the trained model, and can therefore be read as, for example, simply conversion processing, inference processing, or the like.
- FIG. 1 is an overall configuration diagram of the system 10 according to the present embodiment.
- In the system 10, a server 1 having a communication function and a plurality of (N) robots 3 having a communication function form a server/client system, and are connected to each other via a WAN (Wide Area Network) and a LAN (Local Area Network).
- the WAN is, for example, the Internet, and the LAN is installed, for example, in a factory.
- FIG. 2 is a diagram showing a hardware configuration of the server 1.
- the server 1 includes a control unit 11, a storage unit 12, an I / O unit 13, a communication unit 14, a display unit 15, and an input unit 16, which are connected to each other via a system bus or the like.
- The control unit 11 is composed of a processor such as a CPU or GPU, and executes various programs.
- The storage unit 12 is a storage device such as a ROM, RAM, hard disk, or flash memory, and stores various data, operation programs, and the like.
- the I / O unit 13 performs input / output and the like with an external device.
- the communication unit 14 is, for example, a communication unit that communicates based on a predetermined communication standard, and communicates with the robot 3 that is the client device in the present embodiment.
- the display unit 15 is connected to a display or the like to perform a predetermined display.
- the input unit 16 receives input from the administrator using, for example, a keyboard or a mouse.
- FIG. 3 is a diagram showing a hardware configuration of the robot 3.
- the robot 3 is, for example, an industrial robot arranged in a factory or the like.
- The robot 3 includes a control unit 31, a storage unit 32, an I/O unit 33, a communication unit 34, a display unit 35, a detection unit 36, and a drive unit 37, which are connected to each other via a system bus or the like.
- The control unit 31 is composed of a processor such as a CPU or GPU, and executes various programs.
- The storage unit 32 is a storage device such as a ROM, RAM, hard disk, or flash memory, and stores various data, operation programs, and the like.
- the I / O unit 33 performs input / output and the like with an external device.
- the communication unit 34 is, for example, a communication unit that communicates based on a predetermined communication standard, and in the present embodiment, communicates with the server 1.
- the display unit 35 is connected to a display or the like to perform a predetermined display.
- the detection unit 36 is connected to the sensor and detects the sensor information as digital data.
- The drive unit 37 drives a connected motor or the like (not shown) in response to a command from the control unit 31.
- FIG. 4 is a functional block diagram of the control unit 31 of the robot 3.
- The control unit 31 includes a sensor information acquisition unit 311, a prediction processing unit 312, an encryption processing unit 319, a hashing processing unit 313, an information acquisition necessity determination unit 314, a cache information acquisition processing unit 315, a server information acquisition processing unit 316, a decryption unit 317, and a drive command unit 318.
- the sensor information acquisition unit 311 acquires the sensor information acquired by the detection unit 36.
- The prediction processing unit 312 reads basic information on the configuration of the prediction model (trained model) generated by supervised learning of a neural network, its weight information, and the like, and generates a predetermined prediction output based on the input data.
- the encryption processing unit 319 performs a process of encrypting the input data with a public key or the like.
- the hashing processing unit 313 hashes the input information and generates a corresponding hash value, that is, a fixed-length value without regularity.
- the information acquisition necessity determination unit 314 determines whether or not the data corresponding to the predetermined data is already stored in the predetermined table.
- When the information acquisition necessity determination unit 314 determines that data corresponding to the predetermined data exists, the cache information acquisition processing unit 315 acquires the corresponding data.
- The server information acquisition processing unit 316 transmits predetermined data to the server 1 and receives the data corresponding to that data.
- The decryption unit 317 decrypts the data encrypted with the public key or the like, using the private key.
- the drive command unit 318 drives the motor or the like based on the output data.
- FIG. 5 is a functional block diagram of the control unit 11 of the server 1.
- the control unit 11 includes a data input reception unit 111, a decryption processing unit 112, a prediction processing unit 113, an encryption processing unit 114, and a data transmission unit 115.
- the data input receiving unit 111 receives the data input transmitted from the robot 3.
- The decryption processing unit 112 decrypts the data encrypted with the public key or the like, using the private key or the like.
- The prediction processing unit 113 reads basic information on the configuration of the prediction model (trained model) generated by supervised learning of a neural network, its weight information, and the like, and generates a predetermined prediction output based on the input data.
- the encryption processing unit 114 encrypts the input data with a public key or the like.
- The data transmission unit 115 performs a process of transmitting the data to be transmitted to the robot 3.
- the robot 3 performs a predetermined prediction process based on the acquired sensor information to drive an operating unit such as a motor.
- the process of acquiring the sensor information (I) via the sensor information acquisition unit 311 is performed (S1). Subsequently, the sensor information (I) is input to the prediction processing unit 312 to perform prediction processing from the input stage to the first intermediate layer to generate input-side intermediate layer data (X1) (S3).
- the generated input side intermediate layer data (X1) is encrypted by the public key in the encryption processing unit 319, and the encrypted input side intermediate layer data (X1') is generated (S5).
- the encrypted input side intermediate layer data (X1') is then hashed by the hashing processing unit 313 to generate a hash value (Y1) (S7).
- The information acquisition necessity determination unit 314 reads the hash table and determines whether or not encrypted output-side intermediate layer data (Z1') corresponding to the generated hash value (Y1) exists in the hash table (S9).
- Here, the output-side intermediate layer data (Z1) represents the output of the second intermediate layer, which is closer to the output layer than the first intermediate layer, and the encrypted output-side intermediate layer data (Z1') represents the output of the second intermediate layer encrypted with the public key in the server 1.
- If the corresponding data exists, the cache information acquisition processing unit 315 performs a process of acquiring the encrypted output-side intermediate layer data (Z1') as cache information (S13).
- Otherwise, the server information acquisition processing unit 316 transmits the encrypted input-side intermediate layer data (X1') to the server 1 (S15), and then shifts to a predetermined standby state (S17 NO).
- When a response is received from the server 1, the standby state is released (S17 YES), and a process of saving the received encrypted output-side intermediate layer data (Z1') in association with the hash value (Y1) is performed (S19). The operation of the server 1 during this period will be described in detail with reference to FIG. 8.
- the decryption unit 317 generates the output side intermediate layer data (Z1) by decrypting the acquired encrypted output side intermediate layer data (Z1') with the private key (S21).
- the prediction processing unit 312 performs prediction processing from the second intermediate layer to the output layer based on the generated output-side intermediate layer data (Z1) to generate the final output (O) (S23).
- the drive command unit 318 issues a drive command to a drive unit such as a motor based on the final output (O) (S25).
- The server 1 shifts to a predetermined standby state at the data input receiving unit 111 (S31 NO).
- When the encrypted input-side intermediate layer data (X1') is received, the standby state is released (S31 YES), and the decryption processing unit 112 decrypts the encrypted input-side intermediate layer data (X1') to restore the input-side intermediate layer data (X1) (S33).
- the prediction processing unit 113 performs prediction processing from the first intermediate layer to the second intermediate layer using the input side intermediate layer data (X1) as an input, and generates output side intermediate layer data (Z1) ( S35).
- After that, the encryption processing unit 114 encrypts the output-side intermediate layer data (Z1) with the public key to generate the encrypted output-side intermediate layer data (Z1') (S37). The data transmission unit 115 then transmits the encrypted output-side intermediate layer data (Z1') to the robot 3 (S39). When this transmission process is completed, the server 1 returns to the reception standby state (S31), and thereafter the series of processes (S31 to S39) is repeated.
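The server-side loop above (S31 to S39) can be sketched as a single handler. The encrypt/decrypt helpers below are arithmetic placeholders standing in for the public-key scheme, not a real cipher, and the middle-stage function is likewise illustrative.

```python
def encrypt(data):  # placeholder for public-key encryption
    return [v + 1000.0 for v in data]

def decrypt(data):  # placeholder for the matching decryption
    return [v - 1000.0 for v in data]

def server_handle(x1_enc, middle_stage):
    x1 = decrypt(x1_enc)       # S33: restore X1 from X1'
    z1 = middle_stage(x1)      # S35: first -> second intermediate layer
    return encrypt(z1)         # S37/S39: return Z1' to the client

double = lambda xs: [2.0 * v for v in xs]   # stand-in middle stage
z1 = decrypt(server_handle(encrypt([1.0, 2.0]), double))
assert z1 == [2.0, 4.0]
```

Note that only encrypted intermediate data crosses the network in either direction; the plaintext X1 and Z1 exist only transiently inside each party.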
- FIG. 9 is a conceptual diagram of the prediction process realized by the system 10 according to the present embodiment.
- the upper part is a conceptual diagram of the prediction process performed by the robot 3, and the lower part is a conceptual diagram of the prediction process performed by the server 1.
- the left side of the figure shows the input side, and the right side shows the output side.
- In the robot 3, the prediction processing unit 312 performs prediction processing from the input stage to the first intermediate layer to generate the input-side intermediate layer data (X1). After that, the input-side intermediate layer data (X1) is encrypted, transmitted to the server 1, and decrypted in the server 1.
- In the server 1, the prediction processing unit 113 receives the input-side intermediate layer data (X1) as input, performs prediction processing from the first intermediate layer to the second intermediate layer, and generates the output-side intermediate layer data (Z1). After that, the output-side intermediate layer data (Z1) is encrypted, transmitted to the robot 3, and decrypted in the robot 3.
- the prediction processing unit 312 performs prediction processing between the second intermediate layer and the output layer to generate the final output (O).
- According to this configuration, cached results are reused, so the server usage cost can be reduced and the prediction processing can be sped up.
- When a cache hit occurs, the client device can operate almost autonomously without communicating with the server.
- Furthermore, the intermediate output communicated between the client device and the server is encrypted, so the data is further secured.
- In addition, because hashing processing is performed, the security of the data is improved, and the determination process can be sped up by accelerating the search process in the hash table.
- <Second embodiment> In this embodiment, the servers in the system 20 are arranged in multiple stages.
- FIG. 10 is an overall configuration diagram of the system 20 according to the present embodiment.
- In the system 20, the final server 5 and a plurality of robots 7 (7-1 to 7-N) as client devices are connected by communication via a network, which is the same as in the first embodiment.
- However, this embodiment differs from the first embodiment in that an intermediate server 6 is interposed between the robots 7 and the final server 5.
- the intermediate server 6 is operated by, for example, a machine learning technology vendor (AI vendor).
- FIG. 11 is a diagram showing a hardware configuration of an intermediate server 6 interposed between the robot 7 and the final server 5.
- the intermediate server 6 includes a control unit 61, a storage unit 62, an I / O unit 63, a communication unit 64, a display unit 65, and an input unit 66, which are connected to each other via a system bus or the like.
- the control unit 61 is composed of a processor such as a CPU or GPU and executes various programs.
- the storage unit 62 is a storage device such as a ROM, RAM, hard disk, or flash memory, and stores various data, operation programs, and the like.
- the I / O unit 63 performs input / output and the like with an external device.
- the communication unit 64 is, for example, a communication unit that communicates based on a predetermined communication standard, and communicates with the final server 5 and the robot 7 as a client device.
- the display unit 65 is connected to a display or the like to perform a predetermined display.
- the input unit 66 receives input from the administrator using, for example, a keyboard or a mouse.
- FIG. 12 is a functional block diagram of the control unit 61 of the intermediate server 6.
- the control unit 61 includes a data input reception unit 611, a decryption processing unit 612, a prediction processing unit 613, an encryption processing unit 614, a hashing processing unit 615, an information acquisition necessity determination unit 616, a cache information acquisition processing unit 617, a server information acquisition processing unit 618, and a data transmission unit 619.
- the data input reception unit 611 accepts data input transmitted from the robot 7 or the final server 5.
- the decryption processing unit 612 decrypts the data encrypted by the public key or the like with the encryption key or the like.
- the prediction processing unit 613 reads basic information such as the configuration of a prediction model (trained model) and weight information generated by supervised learning of a neural network, and generates a predetermined predicted output based on the input data.
- the encryption processing unit 614 encrypts the input data with a public key or the like.
- the hashing processing unit 615 hashes the input information and generates a corresponding hash value, that is, a fixed-length value without regularity.
- the information acquisition necessity determination unit 616 determines whether or not data corresponding to predetermined data is already stored in a predetermined table. When the information acquisition necessity determination unit 616 determines that data corresponding to the predetermined data exists, the cache information acquisition processing unit 617 acquires the corresponding data.
- the server information acquisition processing unit 618 transmits predetermined data to the final server 5 and receives the data corresponding to the data.
- the data transmission unit 619 performs a process of transmitting target data to the robot 7 or the final server 5.
- the operation of the robot 7 is substantially the same as that in the first embodiment. That is, as shown in FIGS. 6 and 7, when, as a result of the determination (S9) in the information acquisition necessity determination processing unit 314, the encrypted output side intermediate layer data (Z1') corresponding to the hash value (Y1) does not exist in the hash table (S11NO), the server information acquisition processing unit 316 transmits the first encrypted input side intermediate layer data (X1') to the intermediate server 6 (S15), and then shifts to a predetermined standby state (S17NO).
- FIGS. 13 and 14 are flowcharts relating to the prediction processing operation of the intermediate server 6.
- the intermediate server 6 shifts to a predetermined standby state by the data input receiving unit 611 (S51NO). When the first encrypted input side intermediate layer data (X1') is received from the robot 7 (S51YES), the standby state is released. After that, the decryption processing unit 612 decrypts the received first encrypted input side intermediate layer data (X1') with the private key, and generates the first input side intermediate layer data (X1) (S53).
- the prediction processing unit 613 performs prediction processing from the first intermediate layer to the third intermediate layer based on the decrypted first input side intermediate layer data (X1), and generates the second input side intermediate layer data (X2) (S55).
- the encryption processing unit 614 encrypts the second input side intermediate layer data (X2) with the public key to generate the second encrypted input side intermediate layer data (X2') (S57). Further, the hashing processing unit 615 hashes the second encrypted input side intermediate layer data (X2') to generate a second hash value (Y2) (S59).
- the information acquisition necessity determination unit 616 reads the second hash table stored in the intermediate server 6, and determines whether or not the second encrypted output side intermediate layer data (Z2') corresponding to the generated second hash value (Y2) exists in the second hash table (S61). When, as a result of this determination (S61), the second encrypted output side intermediate layer data (Z2') corresponding to the second hash value (Y2) exists in the hash table (S63YES), the cache information acquisition processing unit 617 performs a process of acquiring the second encrypted output side intermediate layer data (Z2') as cache information (S65).
- on the other hand, if it does not exist (S63NO), the server information acquisition processing unit 618 transmits the second encrypted input side intermediate layer data (X2') to the final server 5 (S67), and then shifts to a predetermined standby state (S69NO).
- when the second encrypted output side intermediate layer data (Z2') is received from the final server 5, the standby state is released (S69YES), and the received second encrypted output side intermediate layer data (Z2') is stored in association with the second hash value (Y2) (S71). The operation of the final server 5 during this period will be described later with reference to FIG. 15.
- the decryption processing unit 612 generates the second output side intermediate layer data (Z2) by decrypting the acquired second encrypted output side intermediate layer data (Z2') with the private key (S73). After that, the prediction processing unit 613 performs prediction processing from the fourth intermediate layer to the second intermediate layer based on the generated second output side intermediate layer data (Z2), and generates the first output side intermediate layer data (Z1) (S75).
- the encryption processing unit 614 performs encryption processing on the first output side intermediate layer data (Z1) to generate the first encrypted output side intermediate layer data (Z1') (S77). After that, the data transmission unit 619 transmits the first encrypted output side intermediate layer data (Z1') to the robot 7 (S79). When this transmission process is completed, the intermediate server 6 returns to the reception standby state (S51NO), and thereafter the series of processes (S51 to S79) is repeated.
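The intermediate server's loop (S51 to S79) can be condensed into a sketch like the following. Everything here is a toy stand-in under stated assumptions: `decrypt`/`encrypt` are trivial placeholders for the real public-key cryptography, and the prediction and final-server functions simply tag the data they receive so the flow of X1 → X2 → Z2 → Z1 is visible.

```python
import hashlib

second_hash_table = {}  # Y2 -> Z2' (encrypted output-side data)

# Toy stand-ins for the cryptographic steps; real keys and a real
# cipher are assumed in the system.
def decrypt(blob): return blob.removeprefix(b"enc:")
def encrypt(blob): return b"enc:" + blob

def predict_first_to_third(x1):    # S55: X1 -> X2
    return b"X2 from " + x1

def predict_fourth_to_second(z2):  # S75: Z2 -> Z1
    return b"Z1 from " + z2

def final_server(x2_enc):          # S81-S89 on the final server, condensed
    return encrypt(b"Z2 from " + decrypt(x2_enc))

def handle_request(x1_enc):
    x1 = decrypt(x1_enc)                            # S53
    x2_enc = encrypt(predict_first_to_third(x1))    # S55, S57
    y2 = hashlib.sha256(x2_enc).hexdigest()         # S59
    if y2 in second_hash_table:                     # S61-S65: cache hit
        z2_enc = second_hash_table[y2]
    else:                                           # S67-S71: ask final server
        z2_enc = final_server(x2_enc)
        second_hash_table[y2] = z2_enc
    z1 = predict_fourth_to_second(decrypt(z2_enc))  # S73, S75
    return encrypt(z1)                              # S77, S79
```

On a repeated request, the hash lookup short-circuits the final-server round trip, which is the speed benefit the text describes.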
- FIG. 15 shows a flowchart relating to the prediction processing operation of the final server 5.
- the final server 5 shifts to a predetermined standby state by the data input receiving unit 111 (S81NO). When the second encrypted input side intermediate layer data (X2') is received, the standby state is released (S81YES). The decryption processing unit 112 then decrypts the received second encrypted input side intermediate layer data (X2') with the private key, and generates the second input side intermediate layer data (X2) (S83).
- the prediction processing unit 113 performs prediction processing from the third intermediate layer to the fourth intermediate layer using the second input side intermediate layer data (X2) as an input, and generates the second output side intermediate layer data (Z2) (S85).
- the encryption processing unit 114 encrypts the second output side intermediate layer data (Z2) using the public key to generate the second encrypted output side intermediate layer data (Z2') (S87). After that, the data transmission unit 115 transmits the second encrypted output side intermediate layer data (Z2') to the intermediate server 6 (S89). When this transmission process is completed, the final server 5 returns to the reception standby state (S81) again, and thereafter, a series of processes (S81 to S89) are repeated.
- FIG. 16 is a conceptual diagram of the prediction process realized by the system 20 according to the present embodiment.
- the upper part is a conceptual diagram of the prediction processing performed by the robot 7, the middle part is a conceptual diagram of the prediction processing performed by the intermediate server 6, and the lower part is a conceptual diagram of the prediction processing performed by the final server 5.
- the left side shows the input side, and the right side shows the output side.
- the prediction processing unit 312 performs the prediction processing from the input stage to the first intermediate layer, and generates the first input side intermediate layer data (X1). After that, the first input-side intermediate layer data (X1) is encrypted, transmitted to the intermediate server 6, and decrypted by the intermediate server 6.
- the prediction processing unit 613 performs prediction processing between the first intermediate layer and the third intermediate layer, and generates the second input side intermediate layer data (X2). After that, the second input-side intermediate layer data (X2) is encrypted, transmitted to the final server 5, and decrypted by the final server 5.
- the prediction processing unit 113 receives the second input side intermediate layer data (X2) as an input, performs prediction processing from the third intermediate layer to the fourth intermediate layer, and generates the second output side intermediate layer data (Z2). After that, the second output-side intermediate layer data (Z2) is encrypted, transmitted to the intermediate server 6, and decrypted by the intermediate server 6.
- the prediction processing unit 613 performs prediction processing between the fourth intermediate layer and the second intermediate layer, and generates the first output side intermediate layer data (Z1). After that, the first output-side intermediate layer data (Z1) is encrypted, transmitted to the robot 7, and decrypted by the robot 7.
- the prediction processing unit 312 performs prediction processing between the second intermediate layer and the output layer, and generates the final output (O).
- since the servers are provided in multiple stages, the processing load on the client device and on each server can be reduced, and at the same time, an improvement in prediction performance on the client device can be expected owing to the economies of scale of the multi-stage arrangement. Moreover, even if the number of stages is increased in this way, a decrease in processing speed is unlikely because each server also performs the prediction processing based on cache information. Since the prediction model is distributed, the safety of the system is expected to be further improved, and a plurality of administrators can share the management of the respective servers.
- <Third embodiment> In this embodiment, the system 30 performs learning processing in addition to prediction processing.
- FIG. 17 is a functional block diagram of the control unit 710 of the robot 7.
- the content of the prediction processing unit 7101 is substantially the same as the configuration shown in FIG. 4, so detailed description thereof will be omitted.
- the prediction processing unit 7101 is different in that it further includes a cache table addition processing unit 7109.
- after the decryption process (S21) in FIG. 7 generates the output side intermediate layer data (Z1), the cache table addition processing unit 7109 performs a process of additionally storing the output side intermediate layer data (Z1) in the cache table together with the corresponding input side intermediate layer data (X1). This cache table is used for the learning process described later.
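The cache table addition described above amounts to recording each observed input/output pair of the delegated section. A minimal sketch, assuming the intermediate layer data are plain numeric vectors (keys are stored as tuples so they are hashable):

```python
# First cache table: input-side intermediate data X1 -> output-side Z1.
first_cache_table = {}

def add_to_cache_table(x1, z1):
    """Store an (X1, Z1) pair obtained during prediction (after S21)."""
    first_cache_table[tuple(x1)] = tuple(z1)

# Hypothetical pairs accumulated over two prediction runs.
add_to_cache_table([0.1, 0.2], [0.5])
add_to_cache_table([0.3, 0.4], [0.7])
```

The accumulated pairs later serve as the sample points from which the approximate function is fitted during learning.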
- the control unit 710 further has a learning processing unit 7102.
- the learning processing unit 7102 includes a data reading unit 7115, an approximate function generation processing unit 7116, a prediction processing unit 7117, an error back propagation processing unit 7118, a parameter update processing unit 7119, an encryption processing unit 7120, and a data transmission processing unit 7121.
- the data reading unit 7115 performs reading processing of various data stored in the robot 7.
- the approximate function generation processing unit 7116 generates an approximate function by a method described later based on a cache table relating to a predetermined input / output correspondence.
- the prediction processing unit 7117 reads basic information such as the configuration of a prediction model (trained model) and weight information generated by supervised learning of a neural network, and generates a predetermined predicted output based on the input data.
- the error back propagation processing unit 7118 performs a process (Backpropagation) of propagating the error obtained by comparing the output of the prediction model with the teacher data from the output side of the model to the input side.
- the parameter update processing unit 7119 performs processing for updating model parameters such as weights so as to reduce the error between the output of the prediction model and the teacher data.
- the encryption processing unit 7120 performs a process of encrypting predetermined target data with a public key or the like.
- the data transmission processing unit 7121 performs a process of transmitting predetermined target data to the intermediate server 6.
- FIG. 18 is a functional block diagram of the control unit 610 of the intermediate server 6.
- the content of the prediction processing unit 6101 is substantially the same as the configuration shown in FIG. 12, so detailed description thereof will be omitted.
- the prediction processing unit 6101 is different in that it further includes a cache table addition processing unit 6112.
- after the decryption process (S75) in FIG. 14 generates the second output-side intermediate layer data (Z2), the cache table addition processing unit 6112 performs a process of additionally storing the second output-side intermediate layer data (Z2) in the cache table together with the corresponding second input-side intermediate layer data (X2). This cache table is used for the learning process described later.
- the control unit 610 further has a learning processing unit 6102.
- the learning processing unit 6102 includes a data input receiving unit 6123, a data reading unit 6115, a sampling processing unit 6116, an approximate function generation processing unit 6117, a prediction processing unit 6118, an error back propagation processing unit 6119, a parameter update processing unit 6120, an encryption processing unit 6121, and a data transmission processing unit 6122.
- the data input receiving unit 6123 performs a process of receiving, decrypting, and storing various data such as the first cache table received from the robot 7.
- the data reading unit 6115 performs reading processing of various data stored in the intermediate server 6.
- the sampling processing unit 6116 performs a process of selecting a data set to be learned from the cache table.
- the approximate function generation processing unit 6117 generates an approximate function by a method described later based on a cache table relating to a predetermined input / output correspondence.
- the prediction processing unit 6118 reads basic information such as the configuration of a prediction model (trained model) and weight information generated by supervised learning of a neural network, and generates a predetermined predicted output based on the input data.
- the error back propagation processing unit 6119 performs a process (Backpropagation) of propagating the error obtained by comparing the output of the prediction model with the teacher data from the output side of the model to the input side.
- the parameter update processing unit 6120 performs processing for updating model parameters such as weights so as to reduce the error between the output of the prediction model and the teacher data.
- the encryption processing unit 6121 performs a process of encrypting predetermined target data with a public key or the like.
- the data transmission processing unit 6122 performs a process of transmitting predetermined target data to the robot 7 or the final server 5.
- FIG. 19 is a functional block diagram of the control unit 510 of the final server 5.
- the content of the prediction processing unit 5101 is substantially the same as the configuration shown in FIG. 5, so detailed description thereof will be omitted.
- the control unit 510 further has a learning processing unit 5102.
- the learning processing unit 5102 includes a data input receiving unit 5115, a data reading unit 5110, a sampling processing unit 5111, a prediction processing unit 5112, an error back propagation processing unit 5113, and a parameter update processing unit 5114.
- the data input receiving unit 5115 performs a process of receiving, decrypting, and storing various data such as the second cache table received from the intermediate server 6.
- the data reading unit 5110 performs reading processing of various data stored in the final server 5.
- the sampling processing unit 5111 performs a process of selecting a data set to be learned from the second cache table.
- the prediction processing unit 5112 reads basic information such as the configuration of a prediction model (trained model) and weight information generated by supervised learning of a neural network, and generates a predetermined predicted output based on the input data.
- the error back propagation processing unit 5113 performs a process (Backpropagation) of propagating the error obtained by comparing the output of the prediction model with the teacher data from the output side of the model to the input side.
- the parameter update processing unit 5114 performs a process of updating model parameters such as weights so as to reduce the error between the output of the prediction model and the teacher data.
- FIG. 20 is a flowchart of the learning processing operation in the robot 7.
- the data reading unit 7115 reads out an input/output pair (X0, Z0) corresponding to the teacher data from the input/output data table stored in the robot 7 (S101).
- the prediction processing unit 7117 performs prediction processing in the section from the input layer of the prediction model to the first intermediate layer based on the input data X0, and generates the first input side intermediate layer data (X1-s1) (S103).
- the data reading unit 7115 performs a process of reading the first cache table, which contains the correspondence between the first input side intermediate layer data (X1) and the first output side intermediate layer data (Z1) accumulated in the robot 7 during the prediction process (S105).
- the approximate function generation processing unit 7116 then performs a process of generating an approximate function based on the first cache table (S107).
- the data conversion (cache conversion) that takes the data (X1) of the first input-side intermediate layer (referred to as the X layer for convenience of explanation) as input and generates the data (Z1) of the first output-side intermediate layer (referred to as the Z layer for convenience of explanation) can be expressed as follows.
- the vector representing the data of the X layer composed of n neurons can be expressed as follows.
- the vector representing the data of the Z layer consisting of N neurons can be expressed as follows.
- since the k-th value zk of the Z layer can be calculated from the mathematical formula (1) independently of the other N-1 values, it can be expressed as follows.
- due to the nature of the cache conversion, the conversion function Sk cannot convert a combination of the component values of the X-layer data vector into the corresponding k-th value of the Z layer if that combination does not exist in the first cache table. Therefore, it is approximated by the following linear equation (5).
- the solution vk of the mathematical formula (10) can be obtained by calculating the mathematical formula (9) with a computer according to an algorithm such as Gaussian elimination.
- the mathematical formula (5) can be expressed as follows.
- this formula (11) is an approximate formula.
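One concrete way to obtain the coefficient vector vk from the cached pairs is sketched below. The exact forms of formulas (5) and (9) are not reproduced here; this sketch assumes a least-squares fit, in which the normal equations are solved by Gaussian elimination as the text suggests. The sample pairs are hypothetical.

```python
def gaussian_elimination(A, b):
    """Solve A v = b (A given as nested lists) by Gaussian elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    v = [0.0] * n
    for r in range(n - 1, -1, -1):
        v[r] = (M[r][n] - sum(M[r][c] * v[c] for c in range(r + 1, n))) / M[r][r]
    return v

def fit_linear_approximation(xs, zks):
    """Fit z_k ~ v . x over cached (X, z_k) pairs via the normal
    equations (X^T X) v = X^T z, solved by Gaussian elimination."""
    n = len(xs[0])
    A = [[sum(x[i] * x[j] for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(x[i] * z for x, z in zip(xs, zks)) for i in range(n)]
    return gaussian_elimination(A, b)

# Hypothetical cached pairs generated from the exact map z = 2*x0 - x1.
xs  = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
zks = [2.0, -1.0, 1.0]
v = fit_linear_approximation(xs, zks)   # recovers [2.0, -1.0]
```

Repeating the fit for each of the N components of the Z layer yields the full approximate conversion of formula (11).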
- since partial differentiation with respect to each component of the X-layer data vector is approximately possible, backpropagation of errors from the Z layer to the X layer, for example, can easily be performed. That is, even if the machine learning models on the input side and the output side of the learning model part corresponding to the cache table are multi-layer neural network models, the learning process can be performed at high speed by using the error back propagation method.
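Because the approximation is linear, its partial derivatives are simply its coefficients, so back-propagating an error through the cached section is a matrix-vector product. A minimal sketch, with hypothetical fitted rows V:

```python
# With the linear approximation Z ~ V X (one coefficient row per Z-layer
# neuron), dz_k/dx_j is simply V[k][j], so an error vector at the Z
# layer back-propagates to the X layer as delta_x = V^T delta_z.
V = [[2.0, -1.0],   # hypothetical fitted row for z_1
     [0.5,  1.5]]   # hypothetical fitted row for z_2

def backprop_through_approximation(delta_z):
    n_inputs = len(V[0])
    return [sum(V[k][j] * delta_z[k] for k in range(len(V)))
            for j in range(n_inputs)]

delta_x = backprop_through_approximation([1.0, 2.0])  # -> [3.0, 2.0]
```

This is what allows the error from the output stage to reach the input-stage layers even though the true middle section resides on another machine.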
- when the generation process (S103) of the first input side intermediate layer data (X1-s1) and the generation process (S107) of the approximate function are completed, the prediction processing unit 7117 performs the prediction process of the section from the first intermediate layer to the second intermediate layer based on the first input side intermediate layer data (X1-s1) and the approximate function, and generates the first output side intermediate layer data (Z1-s1) (S109). After that, the prediction processing unit 7117 uses the first output side intermediate layer data (Z1-s1) as an input to perform prediction processing for the section from the second intermediate layer to the output layer, and generates the final output (Z0-s1) (S111).
- the error back propagation processing unit 7118 generates an error between the teacher output (Z0) related to the teacher data and the final output (Z0-s1), and propagates the error or a predetermined value based on the error (for example, a root mean square error) from the output side to the input side by a method such as the steepest descent method (S113).
- the parameter update processing unit 7119 performs a process of updating parameters such as the weights of the section from the input layer to the first intermediate layer and of the section from the second intermediate layer to the output layer of the training model, excluding the approximate function part, based on the back-propagated error and the like (S115).
- the robot 7 confirms from the predetermined setting information whether or not it is permitted to transmit the first cache table (S117). As a result, if there is no transmission permission, the learning end determination (S121) is performed, and if it is not completed (S121NO), all the processes (S101 to S121) are repeated again. On the other hand, when it ends (S121YES), the learning process ends.
- when transmission is permitted, the data transmission processing unit 7121 performs a process of transmitting the first cache table encrypted by the encryption processing unit 7120 to the intermediate server 6 (S119). After that, the learning end determination (S121) is performed.
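The learning loop on the robot (S101 to S121) can be illustrated with a deliberately tiny one-dimensional model. This is a hypothetical sketch, not the patent's network: the input stage is x1 = a*x0, the frozen approximate function stands in for the server-side middle section as z1 = v*x1, and the output stage is o = c*z1. Only a and c are updated (S115); the gradient flows through the approximation via its constant derivative v.

```python
v = 0.5               # fitted approximation coefficient (frozen)
a, c = 1.0, 1.0       # trainable robot-side parameters
lr = 0.1
x0, z0 = 2.0, 3.0     # hypothetical teacher input/output pair (S101)

for _ in range(200):              # S101-S121 loop
    x1 = a * x0                   # S103: input stage
    z1 = v * x1                   # S109: approximation replaces the server
    o = c * z1                    # S111: output stage, final output
    err = o - z0                  # S113: error to back-propagate
    grad_c = err * z1             # d(err^2/2)/dc
    grad_a = err * c * v * x0     # chain rule through z1 = v*a*x0
    a -= lr * grad_a              # S115: update only robot-side sections
    c -= lr * grad_c

final_output = c * v * a * x0     # should now be close to the teacher z0
```

Note that the approximation's coefficient v is never updated; the robot trains only the sections it actually holds, which is exactly the exclusion stated in S115.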
- FIG. 22 is a flowchart relating to reception and storage processing of the first cache table transmitted from the robot 7.
- the data input receiving unit 6123 shifts to the data reception standby state (S131NO). When the first cache table is received from the robot 7 (S131YES), the standby state is released, and the received first cache data is decrypted with the private key or the like (S133).
- FIG. 23 is a flowchart relating to the learning processing operation in the intermediate server 6 executed in parallel with the reception processing of the first cache table shown in FIG. 22.
- the data reading unit 6115 reads the input/output pairs (X1-s1, Z1-s1) corresponding to the teacher data from the input/output data table stored in the intermediate server 6 (S141).
- the sampling processing unit 6116 extracts the input / output pair used for learning (S143).
- the prediction processing unit 6118 performs prediction processing in the section from the first intermediate layer to the third intermediate layer of the prediction model based on the input data (X1-s1), and generates the second input side intermediate layer data (X2-s2) (S145).
- the data reading unit 6115 performs a process of reading the second cache table (X2, Z2), which contains the correspondence between the second input side intermediate layer data (X2) and the second output side intermediate layer data (Z2) accumulated in the intermediate server 6 at the time of prediction processing (S147).
- an approximate function that generates the second output side intermediate layer data (Z2) from the second input side intermediate layer data (X2) is generated based on the second cache table (S149). The approximate function generation process is the same as the approximate function generation in the robot 7.
- when the generation process (S145) of the second input side intermediate layer data (X2-s2) and the generation process (S149) of the approximate function are completed, the prediction processing unit 6118 performs the prediction process of the section from the third intermediate layer to the fourth intermediate layer based on the second input side intermediate layer data (X2-s2) and the approximate function, and generates the second output side intermediate layer data (Z2-s2) (S151). After that, the prediction processing unit 6118 receives the second output side intermediate layer data (Z2-s2) as an input, performs prediction processing of the section from the fourth intermediate layer to the second intermediate layer, and generates the second output side predicted output (Z1-s2) (S153).
- the error back propagation processing unit 6119 generates an error between the teacher data (Z1-s1) and the second output side predicted output (Z1-s2), and propagates the error or a predetermined value based on the error (for example, a root mean square error) from the output side to the input side by a method such as the steepest descent method (S155).
- the parameter update processing unit 6120 performs a process of updating parameters such as the weights of the section from the first intermediate layer to the third intermediate layer and of the section from the fourth intermediate layer to the second intermediate layer, excluding the approximate function part, based on the back-propagated error and the like (S157).
- the intermediate server 6 confirms from the predetermined setting information whether or not it is permitted to transmit the second cache table (X2-s2, Z2-s2) (S159). As a result, if there is no transmission permission, the learning end determination (S163) is performed, and if it is not completed (S163NO), all the processes (S141 to S163) are repeated again. On the other hand, when it ends (S163YES), the learning process ends.
- when transmission is permitted, the data transmission processing unit 6122 performs a process of transmitting the second cache table encrypted by the encryption processing unit 6121 to the final server 5 (S161). After that, the learning end determination (S163) is performed.
- FIG. 24 is a flowchart relating to reception and storage processing of the second cache table (X2-s2, Z2-s2) transmitted from the intermediate server 6.
- the data input receiving unit 5115 shifts to the data reception standby state (S171NO). When the second cache table is received from the intermediate server 6 (S171YES), the standby state is released, and the received second cache data is decrypted with the private key or the like (S173). A process of storing it in the storage unit is then performed (S175), and the final server 5 shifts to the reception standby state (S171NO) again.
- FIG. 25 is a flowchart relating to the learning processing operation in the final server 5, which is executed in parallel with the reception processing of the second cache table shown in FIG. 24.
- the data reading unit 5110 performs a process of reading the cache table (S181).
- the sampling processing unit 5111 extracts the input/output pairs to be learned from the cache table (S183).
- the prediction processing unit 5112 performs prediction processing from the third intermediate layer to the fourth intermediate layer based on the read second input side intermediate layer data (X2-s2), and generates the second output side intermediate layer data (Z2-s3) (S185).
- the error back propagation processing unit 5113 generates an error between the second output side intermediate layer data (Z2-s3) and the teacher data (Z2-s2), and propagates the error or a predetermined value based on the error (for example, a root mean square error) from the output side to the input side by a method such as the steepest descent method (S187).
- the parameter update processing unit 5114 performs a process of updating parameters such as weights of the learning model based on the back-propagated error and the like (S189).
- the learning end determination is then performed; if the predetermined end condition is not satisfied (S191NO), the series of processes (S181 to S189) is performed again, and when the predetermined end condition is satisfied (S191YES), the learning process ends.
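The final server's learning loop (S181 to S191) fits its own middle section to the pairs in the received second cache table. A one-dimensional sketch, assuming the section is a single linear map z2 = w * x2 and hypothetical cache-table pairs drawn from z2 = 2 * x2:

```python
# (X2-s2, Z2-s2) pairs received from the intermediate server (hypothetical).
second_cache_table = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, lr = 0.0, 0.05   # trainable weight of the middle section, learning rate

for _ in range(100):                           # S181-S191 loop
    for x2, z2_teacher in second_cache_table:  # S183: sampled pairs
        z2_pred = w * x2                       # S185: prediction Z2-s3
        err = z2_pred - z2_teacher             # S187: error
        w -= lr * err * x2                     # S189: parameter update
```

Because the teacher values come from a cache table accumulated downstream, the final server can keep improving its section without ever seeing the robot's raw inputs or final outputs.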
- FIG. 26 is a conceptual diagram of the learning process realized by the system 30 according to the present embodiment.
- the upper part is a conceptual diagram of the learning process performed by the robot 7, the middle part is a conceptual diagram of the learning process performed by the intermediate server 6, and the lower part is a conceptual diagram of the learning process performed by the final server 5.
- the left side shows the input side, and the right side shows the output side.
- the prediction processing unit 7117 performs the prediction processing from the input stage to the first intermediate layer, and generates the first input side intermediate layer data (X1-s1).
- the approximate function generation processing unit 7116 generates an approximate function (F (x)) based on the first cache table (X1, Z1).
- the prediction processing unit 7117 generates the first output side intermediate layer data (Z1-s1) based on the first input side intermediate layer data (X1-s1) and the approximation function (F (x)). Further, the final output data (Z0-s1) is generated based on the first output-side intermediate layer data (Z1-s1).
- the error back propagation processing unit 7118 back-propagates the error between the final output data (Z0-s1) and the teacher data (Z0) from the final output stage to the input stage via an approximate function.
- the parameter update processing unit 7119 updates the parameters including the weights from the final output stage to the second intermediate layer and from the first intermediate layer to the input stage. Further, the first cache table (X1-s1, Z1-s1) generated at this time is provided to the intermediate server 6 under predetermined conditions.
- the prediction processing unit 6118 performs the prediction process of the section from the first intermediate layer to the third intermediate layer, and generates the second input side intermediate layer data (X2-s2).
- the approximate function generation processing unit 6117 generates an approximate function (G(x)) based on the first cache table (X1-s1, Z1-s1).
- the prediction processing unit 6118 generates the second output-side intermediate layer data (Z2-s2) based on the second input-side intermediate layer data (X2-s2) and the approximate function (G(x)).
- the first output-side intermediate layer data (Z1-s2) is generated based on the second output-side intermediate layer data (Z2-s2).
- the error back propagation processing unit 6119 back-propagates the error between its output data (Z1-s2) and the teacher data (Z1-s1) from the second intermediate layer to the first intermediate layer via the approximate function.
- the parameter update processing unit 6120 updates the parameters, including the weights, in the sections from the second intermediate layer to the fourth intermediate layer and from the third intermediate layer to the first intermediate layer.
- the second cache table (X2-s2, Z2-s2) generated at this time is provided to the final server 5 under predetermined conditions.
- the prediction processing unit 5112 performs the prediction processing for the section from the third intermediate layer to the fourth intermediate layer, generating the second output-side intermediate layer data (Z2-s3).
- the error back propagation processing unit 5113 back-propagates the error between the second output-side intermediate layer data (Z2-s3) and the teacher data (Z2-s2) from the fourth intermediate layer to the third intermediate layer. After that, the parameter update processing unit 5114 updates the parameters, including the weights, between the fourth intermediate layer and the third intermediate layer.
- the approximation function generated from the cache table is described as being used only in the learning process.
- the present invention is not limited to such a configuration.
- an approximate function may be generated based on the cache table obtained so far, and the first output-side intermediate layer data (Z1) may be generated based on the first input-side intermediate layer data (X1) and the approximate function, by performing the prediction processing for the section from the first intermediate layer to the second intermediate layer. According to such a configuration, for example, after data has accumulated in the hash table to a certain extent, it is possible to significantly reduce the frequency of inquiries to the server side, or to perform the prediction processing without making inquiries at all.
- the input-side intermediate layer data (X) (for example, X1 or X2) is encrypted and hashed, and a hash table search process is performed using the hash value as a key (for example, S11 in FIG. 6, S55 in FIG. 13, and the like).
- the present invention is not limited to such a configuration. For example, the input-side intermediate layer data (X) may first be subjected to rounding processing and then encrypted and/or hashed before the hash table search is performed.
- the rounding process is a process in which, when the set to which the input-side intermediate layer data (X) belongs is denoted U, every piece of input-side intermediate layer data belonging to the set U is treated as having the same representative value (X_u). For example, some node values (neuron firing values) of the input-side intermediate layer data (X) may be discretized into integer values by rounding up, rounding down, or the like, forming a set of a plurality of integer values. According to such a configuration, consistency with hash values obtained in the past can be improved, and speeding up of processing and the like can be realized.
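The rounding-before-hashing idea can be sketched as follows. This is illustrative only: the rounding granularity (0.1) and SHA-256 are assumed choices, and the encryption step described in the embodiment is omitted for brevity; the text requires only that nearby intermediate-layer vectors map to one representative value (X_u) before hashing.

```python
import hashlib
import numpy as np

def round_intermediate(x, step=0.1):
    # map every member of a set U of nearby firing values to one representative
    return np.round(np.asarray(x) / step) * step

def hash_key(x):
    # hash the rounded representative; identical representatives yield one key
    return hashlib.sha256(round_intermediate(x).tobytes()).hexdigest()

cache = {}
x_a = [0.123, 0.456]          # original intermediate-layer node values
x_b = [0.121, 0.458]          # slightly perturbed firing values
cache[hash_key(x_a)] = "Z1-cached"
# after rounding, x_b produces the same key, so the cached entry is reused
```

Without the rounding step, even a tiny perturbation of the firing values would change every bit of the hash, and the cache would never hit.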
- FIG. 27 is an overall configuration diagram of the system 40 according to the modified example.
- the system 40 is composed of a server 2 that performs prediction processing, an intermediary server 8 that is connected to the server 2 via a WAN and is also connected to a LAN, and a robot 9 as a client device connected to the LAN.
- the exchange of information between the server 2 and the client device 9 is performed via the intermediary server 8.
- supervised learning using a neural network is illustrated as a machine learning algorithm.
- the present invention is not limited to such a configuration. Therefore, for example, other machine learning algorithms that are divisible and can handle intermediate values in a similar format may be used.
- unsupervised learning such as a GAN (Generative Adversarial Network), a VAE (Variational Autoencoder), or an SOM (Self-Organizing Map) may be used
- reinforcement learning may be used.
- in reinforcement learning, for example, prediction processing on a simulator may be used.
- the approximation function was generated by approximating with the linear equation shown in Equation 5.
- the approximation method is not limited to such an example, and the approximation may be performed by another method.
- FIG. 28 is a conceptual diagram regarding an example of using the bypass function.
- H(x) represents an approximate function based on the linear equation represented by Equation 5 and the like, and
- J(x) represents a bypass function; together they form the approximate function as a whole.
- the bypass function J(x) is arranged in parallel so as to bypass the approximate function H(x) based on the linear equation.
- the backpropagation method can be applied to both functions.
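The parallel composition of Fig. 28 can be sketched as below. This is a minimal illustration under assumptions: J(x) is reduced here to an identity detour (the text's J(x) uses a pooling layer, shown separately), and the weights are toy values; what matters is that the combined function is differentiable, so gradients flow through both branches.

```python
import numpy as np

def H(x, W):
    # linear-equation approximation (Equation 5 style)
    return x @ W

def J(x):
    # simplest differentiable bypass: pass the data through unchanged
    return x

def approx(x, W):
    # overall approximate function: linear part plus parallel bypass
    return H(x, W) + J(x)

W = np.eye(2) * 0.5
x = np.array([1.0, 2.0])
y = approx(x, W)
# gradient of approx w.r.t. x is W.T + I: both branches contribute,
# so error back propagation reaches the input through either path
grad_x = W.T + np.eye(2)
```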
- FIG. 29 is a conceptual diagram of the bypass function J(x).
- the bypass function J(x) first compresses the data with a pooling layer having a smaller number of nodes (for example, about half the number of nodes in the input-side intermediate layer). The node outputs of the pooling layer are then provided to the output-side intermediate layer. At this time, zero (0) is provided to each node for which there is no connection from the pooling layer to the output-side intermediate layer (zero padding).
- in this example, the number of nodes in the pooling layer is 16, which is half the number of nodes n_x in the input-side intermediate layer.
- as the pooling method, average pooling or the like, which takes the average of adjacent node values, can be used.
- the 16 outputs from the pooling layer are provided to the output-side intermediate layer.
- zero (0) is provided to the four output-side intermediate layer nodes that do not correspond to pooling layer nodes.
- although the pooling layer is used in this modification, it is not always necessary to use a pooling layer; for example, a detour that passes the data through as-is may be formed instead.
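The Fig. 29 bypass can be sketched as follows. The node counts are assumptions that follow the worked example in the text: n_x = 32 input-side nodes (so that the 16-node pooling layer is half of n_x), average pooling over adjacent pairs, and a 20-node output-side layer whose last 4 nodes receive zero padding.

```python
import numpy as np

def bypass_J(x, n_out=20):
    """Bypass function: 32 -> 16 average pooling, then zero-padded to n_out."""
    pooled = x.reshape(-1, 2).mean(axis=1)   # average each adjacent pair
    out = np.zeros(n_out)
    out[:pooled.size] = pooled               # unconnected nodes stay zero (padding)
    return out

x = np.arange(32, dtype=float)               # toy input-side intermediate layer
y = bypass_J(x)
```

Average pooling and zero padding are both linear operations, so this detour stays compatible with error back propagation, as the text requires.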
- FIG. 30 is a conceptual diagram of approximation using the sum of sub-approximation functions.
- each sub-approximate function is weighted by a contribution coefficient. This contribution coefficient may be a fixed value, or may be varied by assigning a different value each time forward calculation or error back propagation is performed.
- each sub-approximate function is an approximate function generated based on a cache table, and may be, for example, a neural network or an approximate function based on the linear equation used in the above-described embodiment. All sub-approximate functions are configured so that the backpropagation method can be applied.
- according to such a configuration, an improvement in approximation accuracy can be expected from the ensemble effect with the layers before and after the approximate function; as a result, the approximation accuracy can be expected to be maintained or improved even when data accumulation in the cache table is insufficient.
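The weighted sum of Fig. 30 can be sketched as follows. This is illustrative only: the sub-approximate functions are taken to be simple linear maps and the contribution coefficients are fixed toy values, whereas the text allows neural networks as sub-functions and coefficients that are re-drawn on every forward or backward pass.

```python
import numpy as np

def make_linear(W):
    # one sub-approximate function; passing W as an argument fixes the closure
    return lambda x: x @ W

subs = [make_linear(np.eye(2) * s) for s in (1.0, 2.0, 3.0)]
coeffs = np.array([0.5, 0.3, 0.2])           # contribution coefficients

def approx_sum(x):
    # overall approximation: contribution-weighted sum of sub-functions
    return sum(c * f(x) for c, f in zip(coeffs, subs))

y = approx_sum(np.array([1.0, 1.0]))
# effective scale is 0.5*1 + 0.3*2 + 0.2*3 = 1.7 for this toy ensemble
```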
- the robot, the intermediate server, the final server, and the like are each exemplified as a single device.
- the present invention is not limited to such a configuration. Therefore, for example, a part of the device configuration may be separately provided as an external device.
- an external large-capacity storage device may be installed and connected to a device such as a server.
- distributed processing or the like may be performed using a plurality of devices instead of a single device. Further, virtualization technology or the like may be used.
- one client device holds one hash table, but the present invention is not limited to such a configuration. For example, the hash table may be shared among a plurality of client devices. As a result, the caches of the prediction processing performed in each client device accumulate as a shared resource, so that the server usage cost can be reduced, processing can be sped up, and more rapid autonomous operation of the client devices can be realized.
- the hash table may be shared, for example, by using the intermediary server 8 in the system of FIG. 27, or by querying information directly between the client devices, without going through a server or the like, using a technique such as a distributed hash table.
- the parameters may be updated in a batch after accumulating errors corresponding to a plurality of input/output pairs.
- so-called online learning, in which learning processing is performed in parallel with prediction processing, may also be performed.
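The batch-update variant described above can be sketched as follows. This is a deliberately minimal illustration under assumptions: a 1-D linear model with a squared-error loss stands in for the patent's split network, and the learning rate and data are toy values; the point shown is only that gradients for several input/output pairs are accumulated and applied in a single update.

```python
w = 0.0                                       # single illustrative weight
lr = 0.1                                      # assumed learning rate
pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # input/output pairs (y = 2x)

grad_sum = 0.0
for x, y in pairs:
    grad_sum += 2 * (w * x - y) * x           # d/dw of (w*x - y)^2
w -= lr * grad_sum / len(pairs)               # one batched parameter update
```

Online learning would instead apply an update of this form after every single prediction, interleaved with the prediction processing itself.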
- a robot is exemplified as a client device.
- the client device should be construed as including any device, with or without physical operation.
- the client device includes all information processing devices such as smartphones, tablet terminals, personal computers, smart speakers, and wearable terminals.
- robot motion information (sensor signals or motor signals) is used as the learning target, but the present invention is not limited to such a configuration. For example, the learning target data may include all kinds of information such as imaging signals, voice signals, image signals, moving-image signals, language information, and character information, and desired processing such as voice recognition processing, image signal processing, and natural language processing may be performed.
- the client device is configured such that the server side performs the operations between the input-side intermediate layer (X) and the output-side intermediate layer (Z), but the present invention is not limited to such a configuration.
- for example, the client device may also hold some of the predetermined divided intermediate layers and perform the prediction process by transmitting and receiving partial prediction results to and from the server a plurality of times.
- parameters such as weights are updated for the portion of the learning model excluding the approximation function based on the error back-propagated by the error back-propagation method (for example, S115, S157, etc.).
- the present invention is not limited to such a configuration. Therefore, for example, the parameter of the approximate function part may also be updated.
- the present invention can be used in all industries that utilize machine learning technology.
Description
<1.1 System Configuration>
First, the configuration of the system 10 according to the present embodiment will be described with reference to FIGS. 1 to 5.
Next, the operation of the system 10 will be described with reference to FIGS. 6 to 9.
In the present embodiment, in the system 20, servers are arranged in multiple stages.
The configuration of the system 20 according to the present embodiment will be described with reference to FIGS. 10 to 12. In the present embodiment, the servers 5 and 6 are configured in multiple stages.
Next, the operation of the system 20 according to the present embodiment will be described with reference to FIGS. 13 to 16.
In the present embodiment, the system 30 performs learning processing in addition to prediction processing.
The configuration of the system 30 according to the present embodiment is substantially the same as that shown in the second embodiment, except that the control units of the robot 7, the intermediate server 6, and the final server 5 each have functional blocks for learning processing in addition to prediction processing.
Next, the operation of the system 30 will be described with reference to FIGS. 20 to 26. Since the prediction processing operation is substantially the same as in the second embodiment, its description is omitted here.
The present invention is not limited to the configurations and operations of the above-described embodiments, and can be modified in various ways.
3 Robot
5 Final server
6 Intermediate server
7 Robot
8 Intermediary server
10 System
Claims (19)
- A conversion system comprising a client device and a server connected to the client device via a network, the conversion system performing conversion processing on input data based on a trained model obtained by machine learning to generate output data, wherein
the client device comprises:
an input-side conversion processing unit that is a part of the trained model extending from an input stage of the trained model to a first intermediate stage, and that generates a first intermediate output at the first intermediate stage of the trained model by performing conversion processing based on the input data;
a client-side transmission unit that transmits the first intermediate output to the server;
a client-side reception unit that receives, from the server, a second intermediate output that is generated in the server from the first intermediate output and is a conversion output at a second intermediate stage of the trained model closer to the output side than the first intermediate stage; and
an output-side conversion processing unit that is a part of the trained model extending from the second intermediate stage of the trained model to an output stage, and that generates the output data by performing conversion processing based on the second intermediate output. - The client device further comprises:
a cache table storage unit that stores a cache table representing a correspondence relationship between the first intermediate output and the second intermediate output;
a determination unit that determines whether the second intermediate output corresponding to the first intermediate output exists in the cache table; and
a selective acquisition unit that, when the determination unit determines that the second intermediate output corresponding to the first intermediate output exists in the cache table, acquires the corresponding second intermediate output from the cache table instead of operating the client-side transmission unit and the client-side reception unit, and that, when the determination unit determines that the second intermediate output corresponding to the first intermediate output does not exist in the cache table, acquires the second intermediate output received by the client-side reception unit by operating the client-side transmission unit and the client-side reception unit; the conversion system according to claim 1. - The client device further comprises:
a cache table storage unit that stores the second intermediate output received by the client-side reception unit in the cache table in association with the corresponding first intermediate output; the conversion system according to claim 2. - The client device further comprises:
an encryption unit that encrypts the first intermediate output to generate a first encrypted intermediate output; and
a decryption unit that decrypts a second encrypted intermediate output, which is the second intermediate output encrypted by the server, wherein
the client-side transmission unit transmits the first encrypted intermediate output to the server,
the server decrypts the received first encrypted intermediate output to restore the first intermediate output, encrypts the second intermediate output to generate the second encrypted intermediate output, and transmits the second encrypted intermediate output to the client device, and
the client-side reception unit receives the second encrypted intermediate output; the conversion system according to claim 2. - The client device further comprises:
a hashing processing unit that hashes the first encrypted intermediate output to generate a first hash value, wherein
the first intermediate output in the cache table is the first hash value, and
the determination unit determines, based on the first hash value, whether the corresponding second intermediate output exists; the conversion system according to claim 4. - The client device further comprises:
a value rounding processing unit that rounds the first intermediate output to generate a first rounded intermediate output; the conversion system according to claim 5. - The client device further comprises:
an approximate function generation unit that generates an approximate function based on the cache table; and
an approximate conversion processing unit that takes the first intermediate output as input and generates the second intermediate output based on the approximate function; the conversion system according to claim 2. - The conversion system according to claim 7, wherein the approximate function is a function to which the error back propagation method can be applied.
- The conversion system according to claim 7, wherein the approximate function includes a bypass function.
- The conversion system according to claim 7, wherein the approximate function is constituted by a weighted sum of a plurality of different approximate functions.
- A plurality of the client devices are provided, and
the cache table is shared by the plurality of client devices; the conversion system according to claim 2. - The server further comprises:
an intermediate conversion processing unit that is a part of the trained model extending from the first intermediate stage to the second intermediate stage, and that generates the second intermediate output at the second intermediate stage by performing conversion processing based on the first intermediate output; the conversion system according to claim 1. - The server is composed of multi-stage servers connected via a network, and
each server holds a respective partial model obtained by dividing the trained model between the first intermediate stage and the second intermediate stage, and the second intermediate output is generated by performing conversion processing in order based on each partial model of each server; the conversion system according to claim 1. - The client device further comprises:
an input/output data table storage unit that stores an input/output data table representing the relationship between the input data and the output data corresponding to the input data; the conversion system according to claim 1. - A client device that is connected to a server via a network and that performs conversion processing on input data based on a trained model obtained by machine learning to generate output data, the client device comprising:
an input-side conversion processing unit that is a part of the trained model extending from an input stage of the trained model to a first intermediate stage, and that generates a first intermediate output at the first intermediate stage of the trained model by performing conversion processing based on the input data;
a client-side transmission unit that transmits the first intermediate output to the server;
a client-side reception unit that receives, from the server, a second intermediate output that is generated in the server based on the first intermediate output and is a conversion output at a second intermediate stage of the trained model closer to the output side than the first intermediate stage; and
an output-side conversion processing unit that is a part of the trained model extending from the second intermediate stage of the trained model to an output stage, and that generates the output data by performing conversion processing based on the second intermediate output. - A conversion method, performed in connection with a server via a network, of performing conversion processing on input data based on a trained model obtained by machine learning to generate output data, the conversion method comprising:
an input-side conversion processing step of generating a first intermediate output at the first intermediate stage of the trained model by performing conversion processing based on the input data, using a part of the trained model extending from an input stage of the trained model to a first intermediate stage;
a client-side transmission step of transmitting the first intermediate output to the server;
a client-side reception step of receiving, from the server, a second intermediate output that is generated in the server based on the first intermediate output and is a conversion output at a second intermediate stage of the trained model closer to the output side than the first intermediate stage; and
an output-side conversion processing step of generating the output data by performing conversion processing based on the second intermediate output, using a part of the trained model extending from the second intermediate stage of the trained model to an output stage. - A control program for a client device that is connected to a server via a network and that performs conversion processing on input data based on a trained model obtained by machine learning to generate output data, the control program comprising:
an input-side conversion processing step of generating a first intermediate output at the first intermediate stage of the trained model by performing conversion processing based on the input data, using a part of the trained model extending from an input stage of the trained model to a first intermediate stage;
a client-side transmission step of transmitting the first intermediate output to the server;
a client-side reception step of receiving, from the server, a second intermediate output that is generated in the server based on the first intermediate output and is a conversion output at a second intermediate stage of the trained model closer to the output side than the first intermediate stage; and
an output-side conversion processing step of generating the output data by performing conversion processing based on the second intermediate output, using a part of the trained model extending from the second intermediate stage of the trained model to an output stage. - A server connected to a client device via a network, the server performing conversion processing on input data based on a trained model obtained by machine learning to generate output data, wherein
the client device comprises:
an input-side conversion processing unit that is a part of the trained model extending from an input stage of the trained model to a first intermediate stage, and that generates a first intermediate output at the first intermediate stage of the trained model by performing conversion processing based on the input data;
a client-side transmission unit that transmits the first intermediate output to the server;
a client-side reception unit that receives, from the server, a second intermediate output that is generated in the server based on the first intermediate output and is a conversion output at a second intermediate stage of the trained model closer to the output side than the first intermediate stage; and
an output-side conversion processing unit that is a part of the trained model extending from the second intermediate stage of the trained model to an output stage, and that generates the output data by performing conversion processing based on the second intermediate output. - A conversion system that performs conversion processing on input data based on a trained model obtained by machine learning to generate output data, the conversion system comprising:
an input-side conversion processing unit that is a part of the machine learning model extending from an input stage of the machine learning model to a first intermediate stage, and that generates a first intermediate output at the first intermediate stage of the machine learning model by performing conversion processing based on the input data to the machine learning model;
an output-side conversion processing unit that is a part of the machine learning model extending from a second intermediate stage, closer to the output side than the first intermediate stage, to an output stage, and that generates the output data of the machine learning model by performing conversion processing based on an input to the second intermediate stage; and
an intermediate conversion processing unit that performs conversion processing based on an approximate function generated based on sample information representing a correspondence relationship between the first intermediate output and the second intermediate output in the machine learning model, thereby generating the second intermediate output based on the first intermediate output, wherein
the output data is generated by operating the input-side conversion processing unit, the intermediate conversion processing unit, and the output-side conversion processing unit, with the input data as the input of the input-side conversion processing unit.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020240239A AU2020240239A1 (en) | 2019-03-15 | 2020-03-12 | Conversion system, method and program |
EP20774533.2A EP3940567A4 (en) | 2019-03-15 | 2020-03-12 | TRANSFORMATION SYSTEM, PROCESS AND PROGRAM |
JP2021507273A JPWO2020189496A1 (ja) | 2019-03-15 | 2020-03-12 | |
US17/261,346 US11943277B2 (en) | 2019-03-15 | 2020-03-12 | Conversion system, method and program |
CA3106843A CA3106843A1 (en) | 2019-03-15 | 2020-03-12 | Conversion system, method, and program |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-049138 | 2019-03-15 | ||
JP2019-049137 | 2019-03-15 | ||
JP2019049137 | 2019-03-15 | ||
JP2019049138 | 2019-03-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020189496A1 true WO2020189496A1 (ja) | 2020-09-24 |
Family
ID=72519836
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/010806 WO2020189496A1 (ja) | 2019-03-15 | 2020-03-12 | 変換システム、方法及びプログラム |
PCT/JP2020/010809 WO2020189498A1 (ja) | 2019-03-15 | 2020-03-12 | 学習装置、方法及びプログラム |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/010809 WO2020189498A1 (ja) | 2019-03-15 | 2020-03-12 | 学習装置、方法及びプログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US11943277B2 (ja) |
EP (1) | EP3940567A4 (ja) |
JP (1) | JPWO2020189496A1 (ja) |
AU (1) | AU2020240239A1 (ja) |
CA (1) | CA3106843A1 (ja) |
WO (2) | WO2020189496A1 (ja) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11699097B2 (en) * | 2019-05-21 | 2023-07-11 | Apple Inc. | Machine learning model with conditional execution of multiple processing tasks |
JP6942900B1 (ja) * | 2021-04-12 | 2021-09-29 | 望 窪田 | 情報処理装置、情報処理方法及びプログラム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5816771B2 (ja) * | 1977-06-06 | 1983-04-02 | 三菱電機株式会社 | 回線切換方式 |
JP2017030014A (ja) | 2015-07-31 | 2017-02-09 | ファナック株式会社 | 機械学習装置、アーク溶接制御装置、アーク溶接ロボットシステムおよび溶接システム |
JP2018097612A (ja) * | 2016-12-13 | 2018-06-21 | 富士通株式会社 | 情報処理装置、プログラム及び情報処理方法 |
JP2018163623A (ja) * | 2017-03-28 | 2018-10-18 | 株式会社カブク | 多重型学習システムおよび多重型学習プログラム |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10225365B1 (en) * | 2014-12-19 | 2019-03-05 | Amazon Technologies, Inc. | Machine learning based content delivery |
JP5816771B1 (ja) | 2015-06-08 | 2015-11-18 | 株式会社Preferred Networks | 学習装置ユニット |
JP2017182319A (ja) * | 2016-03-29 | 2017-10-05 | 株式会社メガチップス | 機械学習装置 |
US10331588B2 (en) | 2016-09-07 | 2019-06-25 | Pure Storage, Inc. | Ensuring the appropriate utilization of system resources using weighted workload based, time-independent scheduling |
US20180144244A1 (en) * | 2016-11-23 | 2018-05-24 | Vital Images, Inc. | Distributed clinical workflow training of deep learning neural networks |
US20180336463A1 (en) | 2017-05-18 | 2018-11-22 | General Electric Company | Systems and methods for domain-specific obscured data transport |
US10360214B2 (en) * | 2017-10-19 | 2019-07-23 | Pure Storage, Inc. | Ensuring reproducibility in an artificial intelligence infrastructure |
US11556730B2 (en) | 2018-03-30 | 2023-01-17 | Intel Corporation | Methods and apparatus for distributed use of a machine learning model |
US11455541B2 (en) * | 2018-05-10 | 2022-09-27 | Fmr Llc | AI-based neighbor discovery search engine apparatuses, methods and systems |
-
2020
- 2020-03-12 EP EP20774533.2A patent/EP3940567A4/en active Pending
- 2020-03-12 CA CA3106843A patent/CA3106843A1/en active Pending
- 2020-03-12 WO PCT/JP2020/010806 patent/WO2020189496A1/ja active Application Filing
- 2020-03-12 US US17/261,346 patent/US11943277B2/en active Active
- 2020-03-12 AU AU2020240239A patent/AU2020240239A1/en active Pending
- 2020-03-12 JP JP2021507273A patent/JPWO2020189496A1/ja active Pending
- 2020-03-12 WO PCT/JP2020/010809 patent/WO2020189498A1/ja active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5816771B2 (ja) * | 1977-06-06 | 1983-04-02 | 三菱電機株式会社 | 回線切換方式 |
JP2017030014A (ja) | 2015-07-31 | 2017-02-09 | ファナック株式会社 | 機械学習装置、アーク溶接制御装置、アーク溶接ロボットシステムおよび溶接システム |
JP2018097612A (ja) * | 2016-12-13 | 2018-06-21 | 富士通株式会社 | 情報処理装置、プログラム及び情報処理方法 |
JP2018163623A (ja) * | 2017-03-28 | 2018-10-18 | 株式会社カブク | 多重型学習システムおよび多重型学習プログラム |
Non-Patent Citations (1)
Title |
---|
See also references of EP3940567A4 |
Also Published As
Publication number | Publication date |
---|---|
EP3940567A1 (en) | 2022-01-19 |
WO2020189498A1 (ja) | 2020-09-24 |
US11943277B2 (en) | 2024-03-26 |
JPWO2020189496A1 (ja) | 2020-09-24 |
CA3106843A1 (en) | 2020-09-24 |
EP3940567A4 (en) | 2023-03-22 |
US20210266383A1 (en) | 2021-08-26 |
AU2020240239A1 (en) | 2021-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11151479B2 (en) | Automated computer-based model development, deployment, and management | |
US20230039182A1 (en) | Method, apparatus, computer device, storage medium, and program product for processing data | |
Yu et al. | Toward resource-efficient federated learning in mobile edge computing | |
US10832087B1 (en) | Advanced training of machine-learning models usable in control systems and other systems | |
CN111245950A (zh) | 基于深度学习的工业物联网边缘资源智能调度***及方法 | |
Rajawat et al. | Fusion deep learning based on back propagation neural network for personalization | |
WO2020189496A1 (ja) | 変換システム、方法及びプログラム | |
EP4350572A1 (en) | Method, apparatus and system for generating neural network model, devices, medium and program product | |
US10776721B1 (en) | Accelerating configuration of machine-learning models | |
CN113505882A (zh) | 基于联邦神经网络模型的数据处理方法、相关设备及介质 | |
WO2022057433A1 (zh) | 一种机器学习模型的训练的方法以及相关设备 | |
Candan et al. | A dynamic island model for adaptive operator selection | |
Ito et al. | An on-device federated learning approach for cooperative model update between edge devices | |
WO2019162568A1 (en) | Artificial neural networks | |
Mertens et al. | i-WSN League: Clustered Distributed Learning in Wireless Sensor Networks | |
Jiang et al. | Joint model pruning and topology construction for accelerating decentralized machine learning | |
US11366699B1 (en) | Handling bulk requests for resources | |
US20230032249A1 (en) | Graphics processing unit optimization | |
EP3876158A1 (en) | Method and system for adjusting a machine learning output | |
Zhao et al. | PPCNN: An efficient privacy‐preserving CNN training and inference framework | |
Huang et al. | Quantum correlation generation capability of experimental processes | |
Guendouzi et al. | Aggregation using genetic algorithms for federated learning in industrial cyber-physical systems | |
CN116663064B (zh) | 一种隐私保护神经网络预测方法及*** | |
US11501041B1 (en) | Flexible program functions usable for customizing execution of a sequential Monte Carlo process in relation to a state space model | |
He et al. | C-RSA: Byzantine-robust and communication-efficient distributed learning in the non-convex and non-IID regime |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20774533 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021507273 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3106843 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2020240239 Country of ref document: AU Date of ref document: 20200312 Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2020774533 Country of ref document: EP |