EP3485491A1 - Factory automation system and programmable logic controller - Google Patents

Factory automation system and programmable logic controller

Info

Publication number
EP3485491A1
Authority
EP
European Patent Office
Prior art keywords
programmable logic
logic controller
remote server
device command
command
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP16767386.2A
Other languages
German (de)
English (en)
French (fr)
Inventor
Senol PANKADUZ
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corp
Publication of EP3485491A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/28 Constructional details of speech recognition systems
    • G10L15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G05B19/4185 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM] characterised by the network communication
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/35 Nc in input of data, input till input file format
    • G05B2219/35453 Voice announcement, oral, speech input
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • The present invention relates to a factory automation system and a programmable logic controller.
  • Programmable logic controllers are commonly used in factory automation systems for controlling industrial processes, such as manufacturing processes, by controlling motors, sensors, the supply of energy, etc.
  • The programmable logic controllers are often housed in environments where access by users is difficult because, for example, the available space between electronic racks is narrow.
  • Programmable logic controllers are optimized for low cost and for their main purpose, which is the real-time control of devices. A programmable logic controller therefore typically does not include a processor and memory as powerful as those found in today's general-purpose computers. Since the computing resources of the programmable logic controller and the available space around it are limited, the controller typically does not offer a comfortable user interface including, for example, a large screen, a keyboard and a computer mouse. It is therefore desirable to extend the possibilities for a user to interact with a programmable logic controller.
  • US 7,249,023 discloses an industrial human-machine interface in which navigation through a displayed menu can be controlled by voice commands analyzed by a speech processor of the human-machine interface.
  • However, speech recognition involves intensive computations and a large amount of vocabulary data and language information which is continuously changing. This requires a powerful processor and memory that are not available on the typical programmable logic controller, for the reasons described above.
  • The present invention provides a factory automation system and a programmable logic controller having a user interface that accepts voice commands from a user.
  • A factory automation system comprises: a plurality of programmable logic controllers; a microphone; and a remote server connected to the plurality of programmable logic controllers via a network; wherein the microphone is connected to a first programmable logic controller of the plurality of programmable logic controllers, and one or more devices are connected to and controlled by the first programmable logic controller; wherein the first programmable logic controller is configured to encode spoken words of a user recorded by the microphone into an electronic speech representation and to transmit the electronic speech representation to the remote server via the network; wherein the remote server is configured to determine at least one device command based on the electronic speech representation transmitted by the first programmable logic controller, and to transmit the determined at least one device command to the first programmable logic controller; and wherein the first programmable logic controller is further configured to control the one or more devices based on the at least one device command transmitted by the remote server.
  • Such a system provides a user interface to the first programmable logic controller which allows a user to speak words that are recognized by the system and translated into device commands suitable for controlling the devices connected to the first programmable logic controller.
  • The first programmable logic controller is configured to encode spoken words of the user into the electronic speech representation.
  • The hardware required for performing this function includes the microphone and a suitable encoder, neither of which is particularly expensive.
  • The main task of determining the device commands based on the electronic speech representations, which is a computationally intensive speech recognition task, is performed on the remote server.
  • Speech recognition software runs on the remote server, which has the computational power and memory necessary for performing the speech recognition.
  • Since many programmable logic controllers can be connected to the remote server, they can share the speech recognition function provided by the remote server. Accordingly, it is possible to provide programmable logic controllers offering speech recognition in a user interface without substantially increasing the hardware requirements of the programmable logic controller. Moreover, frequent updates of the speech recognition software running on the remote server are immediately available to all programmable logic controllers connected to it.
  • The remote server includes a first translation module providing a first mapping from electronic speech representations to a plurality of predefined first device commands, and a second translation module providing a second mapping from electronic speech representations to a plurality of predefined second device commands, wherein the plurality of predefined second device commands are editable by the user. The remote server is configured to apply the second mapping to the electronic speech representation in order to determine the second device command having the best correspondence with the electronic speech representation, and to determine a confidence value representing a quality of the correspondence between the electronic speech representation and the determined second device command. The remote server is configured to transmit the determined second device command to the first programmable logic controller if the determined confidence value meets a predefined requirement, and, if the determined confidence value does not meet the predefined requirement, to apply the first mapping to the electronic speech representation in order to determine the first device command having the best correspondence with the electronic speech representation, and to transmit the determined first device command to the first programmable logic controller.
  • The second translation module can be configured by the user, who may in particular define the device commands which he wishes to be recognized by the system. Compared to the total number of different types of programmable logic controllers and controlled devices, a particular user operates a relatively small number of different programmable logic controllers and devices. Therefore, the number of device commands the particular user wishes to have recognized is much lower than the total number of device commands required by all possible users operating many different types of programmable logic controllers and devices. Since the remote server first tries to recognize a command spoken by the user using the user-specific second translation module, the recognition quality is high: the relatively small number of device commands to be recognized allows for a relatively small search space in the speech recognition process.
  • The first translation module, which is not user-specific and is shared by all possible users of the system, may then include definitions of the larger number of possible device commands for all devices and programmable logic controllers which are to be used with the system.
  • The speech recognition process translates the spoken words into a device command using the first or second translation module. For each translation, the process also generates a confidence value representing the quality of the correspondence between the spoken words and the determined device command. This confidence value may also be understood as an estimated probability that the words spoken by the user are correctly recognized as the determined device command. Typically, the confidence value will be higher if the user speaks loudly and clearly in a calm environment, and it will be lower if the user speaks softly or mumbles in a noisy environment.
  • The remote server first tries to translate the spoken words into a recognized device command using the user-specific second translation module, and, if no device command defined in the second translation module can be identified with a sufficiently high confidence value, the remote server tries to translate the same spoken words into a device command using the first translation module. If a device command defined in the first translation module can be identified with a sufficiently high confidence value, this device command is sent by the remote server to the first programmable logic controller. If it is not possible to determine a device command with a sufficiently high confidence value using either translation module, the remote server transmits electronic data to the first programmable logic controller indicating that the translation of the spoken words into a device command has failed.
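  • This two-stage lookup can be illustrated with a short sketch. The following Python code is a minimal illustration only, not taken from the patent: the electronic speech representation is simplified to a text transcript, difflib supplies a toy confidence score in place of a real speech recognition engine, and the threshold value and all names are assumptions.

```python
# Illustrative sketch of the two-stage translation: try the user-specific
# module first, fall back to the shared module, otherwise report failure.
# A real system would score audio features, not text, against its models.
from difflib import SequenceMatcher

CONFIDENCE_THRESHOLD = 0.8  # invented stand-in for the "predefined requirement"

def best_match(transcript, commands):
    """Return the command with the best correspondence and its confidence."""
    scored = [(SequenceMatcher(None, transcript.lower(), c.lower()).ratio(), c)
              for c in commands]
    confidence, command = max(scored)
    return command, confidence

def translate(transcript, user_commands, shared_commands):
    # The second (user-specific) translation module is tried first.
    command, confidence = best_match(transcript, user_commands)
    if confidence >= CONFIDENCE_THRESHOLD:
        return command
    # Fall back to the first (shared) translation module.
    command, confidence = best_match(transcript, shared_commands)
    if confidence >= CONFIDENCE_THRESHOLD:
        return command
    return None  # translation failed; failure information is sent instead
```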
  • One or more devices are connected to and controlled by a second programmable logic controller of the plurality of programmable logic controllers, wherein the at least one device command includes at least one command for controlling the one or more devices connected to the second programmable logic controller.
  • A display is connected to the first programmable logic controller, and the first programmable logic controller is configured to display a visual representation of the at least one device command.
  • The visual representation can be, for example, a menu in which plural commands are listed in different lines or boxes, wherein a selected or recognized command is highlighted. This provides visual feedback to the user so that he knows which commands are available and that his spoken words were correctly recognized.
  • The at least one device command comprises a command for setting an operational parameter of a device to a user-determined value.
  • The operational parameter may include, for example, settings of a network module, such as baud rate and network addresses, settings of a motor driver, such as maximum frequency, and others.
  • The at least one device command may then include a command which triggers storing of the settings of the operational parameters in the remote server.
  • The settings stored in the remote server can then be reused at later times for initializing new programmable logic controllers or devices connected to the system.
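  • For illustration only, such stored settings could be serialized as a simple structured document; the field names and values below are invented, since the patent does not specify a storage format.

```python
# Hypothetical example of operational parameter settings stored on the
# remote server for later reuse; every field name and value is invented.
import json

settings = {
    "plc_no": 1,
    "network_module": {"baud_rate": 19200, "network_address": "10.0.0.12"},
    "motor_driver": {"max_frequency_hz": 60},
}

payload = json.dumps(settings)   # serialized for transmission to the server

restored = json.loads(payload)   # later reused to initialize a new controller
assert restored["network_module"]["baud_rate"] == 19200
```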
  • A programmable logic controller is provided which is configured to be connected to a microphone, a remote server and one or more devices to be controlled by the programmable logic controller; to encode spoken words of a user detected by the microphone into an electronic speech representation; to transmit the electronic speech representation to the remote server; to receive at least one device command from the remote server in response to the transmission of the electronic speech representation; and to control the one or more devices based on the received at least one device command.
  • Figure 1 is a schematic diagram illustrating a factory automation system.
  • Figure 2 is a flow chart illustrating a process performed by a programmable logic controller of the factory automation system shown in Figure 1.
  • Figure 3 is a flow chart illustrating a process performed by a remote server of the factory automation system shown in Figure 1.
  • Figure 4 is a schematic illustration of a menu shown on a display of the factory automation system shown in Figure 1.
  • Figure 5 shows tables representing configuration data of the factory automation system shown in Figure 1.
  • Figure 6 shows tables representing configuration data of the factory automation system shown in Figure 1.
  • Figure 7 shows menus displayed on a display of the factory automation system shown in Figure 1 during operations performed by a user.
  • Figure 8 shows menus displayed on a display of the factory automation system shown in Figure 1 during operations performed by a user.
  • Figure 9 shows menus displayed on a display of the factory automation system shown in Figure 1 during operations performed by a user.
  • Figure 10 shows menus displayed on a display of the factory automation system shown in Figure 1 during operations performed by a user.
  • Figure 11 shows menus displayed on a display of the factory automation system shown in Figure 1 during operations performed by a user.
  • FIG. 1 is a schematic diagram illustrating a factory automation system 1 according to an embodiment.
  • The factory automation system 1 comprises a plurality of programmable logic controllers 3, 5 and 7 connected to a network 9, such as the Internet.
  • A remote server 10 is also connected to the network 9 such that the programmable logic controller 3 can communicate with the remote server 10 and the other programmable logic controllers 5, 7 via the network 9.
  • The programmable logic controller 3 includes plural network modules 11, such as a PROFIBUS module, wherein plural devices 13 can be connected to each of the network modules 11.
  • The devices 13 may include machines, robots, motors, sensors and so on, and are controlled by the programmable logic controller to which they are connected.
  • For this purpose, the programmable logic controller executes a program including device commands.
  • When executing a device command, the programmable logic controller 3 sends electronic signals to the respective devices 13 such that they perform the functions associated with the device commands.
  • Devices 13 are also connected to the programmable logic controllers 5 and 7; the network modules of the programmable logic controllers 5 and 7 are not shown in Figure 1.
  • The factory automation system 1 further includes a microphone 15 and a display 17 which are connected to the programmable logic controller 3 to provide a user interface of the programmable logic controller 3 for a user 19.
  • This user interface is in particular capable of performing speech recognition, translating spoken words of the user 19 into device commands which are then executed by the programmable logic controller 3.
  • For this purpose, the programmable logic controller 3 cooperates with the remote server 10, which includes processors and memory not available at the programmable logic controller 3 but necessary for performing the speech recognition task.
  • The speech recognition task includes a process performed by the programmable logic controller 3 and a process performed by the remote server 10.
  • Figure 2 shows a flowchart illustrating the process performed by the programmable logic controller 3.
  • Figure 3 shows a flowchart illustrating the process performed by the remote server 10.
  • The microphone 15 is connected to the programmable logic controller 3 via a cable attached to a separate input. However, the microphone can also be connected to one of the network modules 11 of the programmable logic controller 3. Similarly, the display 17 is connected to the programmable logic controller 3.
  • The programmable logic controller 3 further includes an encoder 21 which is configured to encode spoken words of the user 19 detected by the microphone 15 into an electronic speech representation (steps 31 and 33 of the flowchart shown in Figure 2).
  • The programmable logic controller 3 then transmits the electronic speech representation to the remote server 10 via the network 9 (step 35 of the flowchart shown in Figure 2).
  • The remote server 10 receives the electronic speech representation transmitted by the programmable logic controller 3 (step 51 in the flowchart of Figure 3). After receipt of the electronic speech representation, the remote server 10 analyzes the electronic speech representation in order to determine one or more device commands corresponding to the spoken words of the user which were encoded into the electronic speech representation by the programmable logic controller 3.
  • The remote server 10 includes a first translation module 23 and a second translation module 25, wherein each translation module 23, 25 provides a mapping from electronic speech representations to a plurality of predefined device commands.
  • A plurality of device command candidates can be determined, wherein each of the device command candidates might correspond to the spoken words of the user encoded as the analyzed electronic speech representation.
  • A confidence value is determined for each device command candidate, wherein the confidence value represents a quality of the correspondence between the electronic speech representation and the respective device command candidate.
  • The remote server 10 will select the device command candidate having the highest confidence value as the device command determined based on the electronic speech representation.
  • In principle, one translation module can be sufficient to translate spoken words into a number of predefined device commands.
  • However, the remote server 10 includes two translation modules 23 and 25, wherein the translation module 25 is specific to the particular user 19, whereas the translation module 23 is not specific to the user 19.
  • The remote server 10 may include plural translation modules 25, wherein each of the translation modules 25 is specific to one user out of a plurality of users registered for interacting with the factory automation system. If one of these users interacts with the factory automation system, the remote server 10 uses the translation module 25 which is specific to that user and the translation module 23 which is not specific to any of the registered users.
  • The translation module 25 specific to the user 19 can be edited by the user 19.
  • The user may in particular edit the set of predefined commands which are recognized when the translation module 25 is used.
  • The user may edit the predefined commands by editing a text file stored in the remote server 10, wherein each line in the text file includes a written representation of a device command and may also include an identifier of the device 13, or of a group of devices 13, which is to execute the command.
  • The user 19 may further register unique commands which are only used by the user 19.
  • The user may use optical character recognition to generate the written representation from an operating manual of a device.
  • Such editing can be performed using a computer, such as a laptop computer 27 connected to the remote server 10 via the network 9.
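  • As a sketch of how such a file could be read, the following assumes one entry per line in the form "<written command>;<device identifier>", the device part being optional. This layout is an assumption; the patent only states that each line holds a written command and may hold a device identifier.

```python
# Hypothetical parser for the user-editable command file of translation
# module 25; the "command;device" line layout is an illustrative assumption.
def load_user_commands(path):
    commands = {}  # written command -> device (or device group) id, or None
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue  # skip blank lines
            command, _, device = line.partition(";")
            commands[command.strip()] = device.strip() or None
    return commands
```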
  • After receipt of the electronic speech representation, the remote server 10 determines the at least one device command using the user-specific translation module 25, and it also determines the confidence value associated with the translation from the electronic speech representation to the at least one device command (step 53 in Figure 3). The remote server 10 then decides whether the confidence value meets a predefined requirement. If the confidence value is a numeric value, this determination can be made by comparing the confidence value with a predefined threshold (step 55 in Figure 3). If the confidence value meets the predefined requirement, the remote server 10 transmits the determined at least one device command to the programmable logic controller 3 (step 57 in Figure 3).
  • Otherwise, the remote server 10 uses the translation module 23 for determining the at least one device command corresponding to the analyzed electronic speech representation (step 59 in Figure 3). If the confidence value indicating the quality of the correspondence between the electronic speech representation and the at least one device command determined using the translation module 23 meets the predefined requirement (step 61 in Figure 3), the at least one device command determined using the translation module 23 is transmitted to the programmable logic controller 3 (step 63 in Figure 3).
  • After receipt of the at least one device command corresponding to the recorded spoken words of the user 19 (step 37 in Figure 2), the programmable logic controller 3 executes the received at least one device command by sending corresponding electronic signals to the connected one or more devices 13 (step 39 in Figure 2).
  • If the confidence value does not meet the predefined requirement for either translation module, the remote server 10 generates failure information indicating that the translation of the electronic speech representation into a device command was not possible with sufficient accuracy, and transmits this failure information to the programmable logic controller 3 (step 65 in Figure 3).
  • The programmable logic controller 3 may then signal the failure to the user so that he may repeat the command, i.e. speak the words again.
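  • The controller-side sequence of Figure 2, together with this failure handling, can be summarized in a short sketch; `microphone`, `encoder`, `server`, `devices` and `display` stand in for hardware and network interfaces that the patent does not specify, so all names here are illustrative.

```python
# Illustrative sketch of the PLC-side loop: record (step 31), encode
# (step 33), transmit (step 35), receive (step 37), execute (step 39);
# failure information from the server prompts the user to repeat.
def voice_control_loop(microphone, encoder, server, devices, display):
    while True:
        audio = microphone.record()                   # step 31
        representation = encoder.encode(audio)        # step 33
        server.send(representation)                   # step 35
        reply = server.receive()                      # step 37
        if reply.is_failure:
            display.show("Command not recognized, please repeat")
            continue
        for command in reply.device_commands:         # step 39
            devices[command.device_id].execute(command)
```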
  • The programmable logic controller 3 may use the display 17 for providing the user interface to the user 19.
  • The display 17 may display a menu 31 (see Figure 4), which shows a visual representation of a set of device commands which can be executed by the programmable logic controller 3 in a given situation.
  • These commands are “Control/Command” 32, “Programming” 33, “Status” 34, “Configuration” 35, “Other” 36, and “Additional” 37.
  • The speech recognition system can be used to control the devices 13 to perform immediate actions, such as instructing a motor to move an object or a sensor to provide a measurement value. Moreover, the speech recognition system can be used for other purposes. For example, if the user 19 speaks the word “Configuration” 35, the programmable logic controller 3 may enter a mode in which spoken words are recognized as device commands for configuring the programmable logic controllers 3, 5, 7 and the devices 13. The device commands will then include commands for setting or changing operational parameters of the programmable logic controllers 3, 5, 7 and the devices 13. For example, the operational parameters may include settings of the network modules 11 of the programmable logic controller 3, such as network addresses, baud rates, etc.
  • The device commands may further include a command instructing the programmable logic controller 3 to transmit the operational parameters to the remote server 10 such that they can be reused at a later time, for example for initializing a new programmable logic controller or a new device connected to the factory automation system 1.
  • The device commands may further include a command instructing the programmable logic controller 3 to obtain, from the remote server 10, a file containing setting information for a device 13 already connected to the programmable logic controller 3 or for a newly connected device.
  • The programmable logic controller 3 may apply the settings defined in such a file to the devices 13 after receipt from the remote server 10.
  • The device commands may further include a command instructing the programmable logic controller 3 to control an actuating device, such as a motor, such that it moves an object by a certain distance. Since this movement may cause a collision with another object if performed at the wrong time, and since the speech command given by the user might be misunderstood by the factory automation system, this command is potentially harmful.
  • The programmable logic controller can therefore be configured such that it outputs a predefined signal to the user before executing the potentially harmful command. The signal may be a sound from a buzzer or an artificial voice generated by a speaker, for example. The programmable logic controller then waits until the user issues a further confirming command and executes the potentially harmful command only after receipt of the confirming command from the user.
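  • A minimal sketch of this confirm-before-execute behavior, assuming that potentially harmful commands are flagged in advance and that the confirmation is itself a recognized spoken command; the flag set, the warning hook and the confirmation word are all invented for illustration.

```python
# Illustrative guard: warn before a potentially harmful command and execute
# it only after a confirming command; all names and the word "confirm" are
# assumptions made for this sketch.
HARMFUL_COMMANDS = {"move object"}  # invented example entry

def execute_with_confirmation(command, execute, warn, next_spoken_command):
    if command in HARMFUL_COMMANDS:
        warn()  # e.g. buzzer sound or synthesized voice from a speaker
        if next_spoken_command() != "confirm":
            return False  # no confirmation received; command discarded
    execute(command)
    return True
```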
  • Fig. 5 shows an example of a table of speech commands which can be used with programmable logic controllers of the factory automation system 1.
  • The table has five columns: “No.”, “PLC No.”, “Auto/Manual”, “Voice Command” and “Link”.
  • Column “No.” indicates a line or record number of the table.
  • Column “PLC No.” indicates an identifier of the programmable logic controller to which the line of the table relates. For example, identifier "1" may stand for the programmable logic controller 3, identifier "2" may stand for the programmable logic controller 5, and identifier "3" may stand for the programmable logic controller 7.
  • A programmable logic controller executes a series of steps according to a sequence program stored in a storage area of the programmable logic controller.
  • The user sometimes needs to run the sequence step by step in order to find out which steps of the sequence program are performed correctly and which are not.
  • The category “Auto” means that a voice command can execute all steps of the sequence automatically.
  • The category “Manual” means that a voice command can execute only one step in a sequence.
  • As can be seen in the column “Voice Command”, different programmable logic controllers are sometimes operated by voice commands which have the same meaning and represent the same device command but have different phonetic representations. For example, the spoken words “Process 1” (line No. 1), “Process” (line No. 51), and “Auto Process” (line No. 101) sound different but indicate the same device command. For this reason, the column “Link” indicates a list of numbers of lines or records of the table containing the same device command with possibly different phonetic representations. The “Link” column can also be used to identify voice commands which have the same phonetic representation but relate to different device commands. For example, “Positioning” (line No. 1 and line No. 102) and “Positioning” (line No. 52) have the same phonetic representation but relate to different device commands. A table of this kind can be helpful for preventing malfunction of devices through unintended voice input.
  • Fig. 6 shows a further illustration of the table shown as Fig. 5 in a situation when a new programmable logic controller having “PLC No.” 4 is connected to the factory automation system 1.
  • The user inputs “Process” as the written phonetic representation of the command to be used with the new programmable logic controller in the column “Voice Command” in line No. 151, and the user enters the number “1” into the “Link” column of the same line.
  • The numbers “51”, “101” and “151” which are already connected to the number “1” are automatically added by the system and displayed in the cell of the “Link” column in line No. 151, and the number “151” is added in the lines which relate to the same device command, i.e. lines No. 1, No. 51 and No. 101.
  • Such a procedure prevents the occurrence of discrepancies in the link attributes.
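  • The bookkeeping behind this example can be sketched as keeping link sets mutually consistent: linking a new line to one member of a group links it to every member, and links every member back. The dictionary layout below is an assumption; the line numbers mirror the example above.

```python
# Minimal sketch of consistent "Link" attributes: linking new line 151 to
# line 1 also links it to 51 and 101, and adds 151 to lines 1, 51 and 101.
links = {1: {51, 101}, 51: {1, 101}, 101: {1, 51}}  # existing link cells

def add_link(links, new_line, existing_line):
    group = {existing_line} | links.get(existing_line, set())
    links[new_line] = set(group)  # new line is linked to the whole group
    for member in group:
        links.setdefault(member, set()).add(new_line)  # back-link each member

add_link(links, 151, 1)
assert links[151] == {1, 51, 101}
assert all(151 in links[n] for n in (1, 51, 101))
```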
  • Fig. 7 shows an example of “PLC No. 1” in local control during operation.
  • The user 19 says “Control and Command” to the microphone 15 connected to the programmable logic controller 3, which presents the “Speech Menu” 31 (Fig. 4) on the display 17.
  • The display 17 then shows a menu 40.
  • The user 19 selects “Local”, “Auto” and “Process 1” by saying the corresponding words.
  • When selecting “Local”, the “PLC No.” is fixed to “1”.
  • Fig. 8 shows an example of a menu 41 on the display 17 of “PLC No. 1” in global control. Similar to the example illustrated with reference to Fig. 7 above, when selecting “Global” and “Process 1” the user 19 can designate other remote programmable logic controllers which are linked to the voice command “Process 1” in the table (e.g. “PLC No. 2”, “PLC No. 3” and “PLC No. 4”).
  • Fig. 9 shows an example of a menu 42 on the display 17 when the user has selected the entry of “Status” 34 from the “Speech Menu” 31 (Fig. 4) on the display 17.
  • The display 17 displays a menu indicating that the programmable logic controllers having numbers 1 to 8 and 10 to 20 are operated by the voice command “Process 1”, that the programmable logic controllers having numbers 7 and 9 are operated by another voice command, and that the programmable logic controller having number 8 has stopped due to an abnormality.
  • Fig. 10 shows an example when the user has selected the entry of “Configuration” 35 from the “Speech Menu” 31 (Fig. 4) on the display 17.
  • The display 17 then displays a menu 43 indicating that the user 19 has selected local control and has requested downloading of a configuration file having file name “******” to the programmable logic controller having number 1.
  • Fig. 11 shows an example when the user has selected the entry of “Configuration” 35 from the “Speech Menu” 31 (Fig. 4) on the display 17.
  • The display 17 then displays a menu 44 indicating that the user 19 has selected “Global” control and has requested downloading of the configuration file having file name “******” to the programmable logic controllers having numbers 1 to 4.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Quality & Reliability (AREA)
  • Manufacturing & Machinery (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Programmable Controllers (AREA)
EP16767386.2A 2016-08-26 2016-08-26 Factory automation system and programmable logic controller Withdrawn EP3485491A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/003888 WO2018037435A1 (en) 2016-08-26 2016-08-26 Factory automation system and programmable logic controller

Publications (1)

Publication Number Publication Date
EP3485491A1 true EP3485491A1 (en) 2019-05-22

Family

ID=56958983

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16767386.2A Withdrawn EP3485491A1 (en) 2016-08-26 2016-08-26 Factory automation system and programmable logic controller

Country Status (3)

Country Link
EP (1) EP3485491A1 (ja)
JP (1) JP6452826B2 (ja)
WO (1) WO2018037435A1 (ja)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4293440A1 (en) * 2022-06-15 2023-12-20 Abb Schweiz Ag Programmable logic controller with voice control

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6386107B1 (en) * 1999-08-02 2002-05-14 Heidelberger Druckmaschinen Ag Voice based interface for a printing press
JP2003295893A (ja) * 2002-04-01 2003-10-15 Omron Corp Speech recognition system, apparatus, speech recognition method, speech recognition program, and computer-readable recording medium storing the speech recognition program
US7249023B2 (en) 2003-03-11 2007-07-24 Square D Company Navigated menuing for industrial human machine interface via speech recognition
US20040260538A1 (en) * 2003-06-19 2004-12-23 Schneider Automation Inc. System and method for voice input to an automation system
JP2005181439A (ja) * 2003-12-16 2005-07-07 Nissan Motor Co Ltd Speech recognition device
JP2010197607A (ja) * 2009-02-24 2010-09-09 Toshiba Corp Speech recognition device, speech recognition method, and program
US9478216B2 (en) * 2009-12-08 2016-10-25 Nuance Communications, Inc. Guest speaker robust adapted speech recognition
JP5658641B2 (ja) * 2011-09-15 2015-01-28 NTT Docomo Inc. Terminal device, speech recognition program, speech recognition method, and speech recognition system
JP2014062944A (ja) * 2012-09-20 2014-04-10 Sharp Corp Information processing device

Also Published As

Publication number Publication date
WO2018037435A1 (en) 2018-03-01
JP2018537734A (ja) 2018-12-20
JP6452826B2 (ja) 2019-01-16

Similar Documents

Publication Publication Date Title
Norberto Pires et al. Programming‐by‐demonstration in the coworker scenario for SMEs
WO2006062620A2 (en) Method and system for generating input grammars for multi-modal dialog systems
WO2015147702A1 (ru) Method and system for a voice interface
JP5951200B2 (ja) 加工関連データ処理システム
JP6111675B2 (ja) 安全コントローラのユーザプログラムの設計を支援する方法、装置およびプログラム
US11615788B2 (en) Method for executing function based on voice and electronic device supporting the same
US20160125037A1 (en) Information processing apparatus, information processing method, information processing program, and storage medium
JP6675078B2 (ja) 誤認識訂正方法、誤認識訂正装置及び誤認識訂正プログラム
US11322147B2 (en) Voice control system for operating machinery
US20190012168A1 (en) Program generating apparatus
US20030110040A1 (en) System and method for dynamically changing software programs by voice commands
WO2023003913A1 (en) Virtual assistant architecture with enhanced queries and context-specific results for semiconductor-manufacturing equipment
WO2018037435A1 (en) Factory automation system and programmable logic controller
US11314221B2 (en) Machine tool and management system
EP3995909A1 (en) Configuring modular industrial plants
WO2020261218A9 (en) Interactive field device interface for monitoring and controlling an industrial process by industrial automation system
WO2020018544A1 (en) Method, system, and computer program product for harmonizing industrial machines with an intelligent industrial assistant having a set of predefined commands
JP4443436B2 (ja) 制御システムおよび制御方法
Fedosov et al. Concept of implementing computer voice control for CNC machines using natural language processing
WO2022269760A1 (ja) Speech recognition device
CN111867789A (zh) Robot teaching device
WO2023152803A9 (ja) Speech recognition device and computer-readable recording medium
US20230373098A1 (en) Method and System for Robotic Programming
JP2014016402A (ja) 音声入力装置
Pai et al. Implementation of a Voice-Control System for Issuing Commands in a Virtual Manufacturing Simulation Process

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20190215

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

17Q First examination report despatched

Effective date: 20190702

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
INTG Intention to grant announced

Effective date: 20191108

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20200603