WO2022215120A1 - Information processing device, information processing method, and information processing program - Google Patents


Info

Publication number
WO2022215120A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
input
unit
target person
presenting
Application number
PCT/JP2021/014513
Other languages
French (fr)
Japanese (ja)
Inventor
公之 茶谷 (Kimiyuki Chatani)
直樹 千葉 (Naoki Chiba)
Original Assignee
株式会社KPMG Ignition Tokyo (KPMG Ignition Tokyo Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 株式会社KPMG Ignition Tokyo
Priority to PCT/JP2021/014513
Publication of WO2022215120A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 40/00 Handling natural language data
    • G06F 40/10 Text processing
    • G06F 40/12 Use of codes for handling textual entities
    • G06F 40/151 Transformation
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G10L 13/10 Prosody rules derived from text; Stress or intonation

Definitions

  • The present invention relates to information processing technology.
  • The development of chatbots and virtual assistants (hereinafter collectively referred to as AI assistants) is underway with the aim of supporting users by means of artificial intelligence (AI).
  • AI assistants are widely used for online customer service: they automatically select and return the appropriate response, from a group of pre-prepared responses, to inquiries entered as text by customers visiting a website.
  • Virtual assistants are implemented in smart devices such as smart speakers, smartphones, and smartwatches, as well as in mobile applications. They automatically respond to user inquiries, by voice or other means, drawing on the vast amount of information available on the Internet.
  • The present invention has been made in view of this situation, and its purpose is to provide an information processing apparatus that presents information appropriately according to the user.
  • An information processing apparatus includes an information input unit for inputting information, a reading designation unit for designating how to read the input information by referring to background information about a person to whom the information is to be presented, and an information presentation unit for presenting the input information to the information presentation target person in accordance with the designated reading.
  • An information processing apparatus includes an information input unit for inputting information, a presentation mode designation unit for designating a presentation mode of the input information by referring to background information about an information presentation target person, and an information presentation unit that presents the input information to the information presentation target person in accordance with the designated presentation mode.
  • An information processing apparatus includes an information input unit into which information on minutes is input, a processing unit that processes the input minutes information by referring to background information about an information presentation target person, and an information presenting unit for presenting the processed minutes to the information presentation target person.
  • An information processing apparatus includes an information input unit for inputting information, a privacy determination unit for determining, with reference to background information about a person to whom the information is to be presented, whether or not privacy protection is necessary, an encryption processing unit that performs encryption processing on at least part of the input information when it is determined that privacy protection is necessary, and an information presentation unit that presents the encrypted information to the information presentation target person.
  • Information can be presented appropriately according to the user.
  • FIG. 1 is a diagram schematically showing a configuration example of an information processing device according to a first embodiment.
  • FIG. 2 is a functional block diagram of the information processing device according to the first embodiment.
  • FIG. 3 is a diagram showing an example of registration of phrases in a phrase database.
  • FIG. 4 is a flowchart showing processing of the information processing device according to the first embodiment.
  • FIG. 5 is a functional block diagram of an information processing device according to a second embodiment.
  • FIG. 6 is a diagram showing an example of registration of presentation modes in a presentation mode database.
  • FIG. 7 is a flowchart showing processing of the information processing device according to the second embodiment.
  • FIG. 8 is a functional block diagram of an information processing device according to a third embodiment.
  • FIG. 9 is a flowchart showing processing of the information processing device according to the third embodiment.
  • FIG. 10 is a functional block diagram of an information processing device according to a fourth embodiment.
  • FIG. 11 is a flowchart showing processing of the information processing device according to the fourth embodiment.
  • FIG. 1 schematically shows an example in which the information processing apparatus 1 of this embodiment is configured in a client-server model.
  • The information processing device 1 comprises a server 11 that centrally handles information processing, a user device group 12 of which at least one device functions as a client for the user 10, and a network 13 that interconnects the server 11 and the user device group 12.
  • In this example, the information input unit 110 is provided on the server 11 side, but it may instead be provided on the client side.
  • In that case, the functions of the information input unit 110 may be implemented in one or more user devices belonging to the user device group 12.
  • The user device functioning as the information input unit 110 generates various types of information to be presented to the user 10, based on the user 10's operations and on situation detection described later.
  • Information generated by the user device is sent to the server 11 via the network 13 and subjected to information processing, which will be described later.
  • The information processed by the server 11 is sent back to the user device group 12 via the network 13 and presented to the user 10.
  • Alternatively, the function of the information input unit 110 may be implemented in the server 11.
  • The server 11 functioning as the information input unit 110 generates information to be presented to the user 10, based on various information held internally or acquired via the network 13.
  • The information generated by the server 11 and to be presented to the user 10 includes, but is not limited to, information generated by another user 10′ operating the server 11 itself or another user device 12′ connected to the server 11 via the network 13, and intended by that other user 10′ to be communicated to the user 10.
  • The server 11 performs information processing on the information input from the information input unit 110 according to background information such as the attributes and situation of the user 10. Attributes of the user 10 include age, gender, hometown, nationality, professional qualifications, occupation, and affiliated organization. This attribute information may be stored in advance in the background information database 120 accessible by the server 11, or may be read out in real time from the user device group 12 via the network 13 during information processing by the server 11. As for the situation of the user 10, the server 11 interprets it by referring to situation detection information detected from the user 10 in real time by the user device group 12 as primary information, and to situation history information held in the background information database 120 as secondary information.
  • The user device group 12 is one or more arbitrary devices that perform at least one of inputting and outputting information regarding the user 10.
  • A user device is not limited to a device owned and used by the user 10; it may be a device installed in a place where the user 10 is temporarily or permanently present, or a device used by a third party around the user 10.
  • User devices are broadly classified into those that both input and output information about the user 10, those that only input information, and those that only output information.
  • Examples of the first, input/output type of user device are a personal computer 12A, a smartphone 12B, a tablet, a smart speaker, and wearable devices such as a smartwatch.
  • Examples of the second, input-only type of user device are a camera 12C, a watch 12D having a function of measuring biological signals and the like, a microphone for acquiring the speech of the user 10 and the sounds around the user 10, and various sensors capable of measuring, with or without contact with the user 10, the status of the user 10 or the situation or environment around the user 10 (hereinafter also referred to as the situation of the user 10 or the situation in which the user 10 is placed).
  • Here, "input information about the user" broadly means both information input by the user by operating a user device (mainly the first type) and information measured from the user or the user's surroundings (mainly the second type).
  • A display 12E as a display unit and a speaker 12F as an audio output unit are examples of the third, output-only type of user device.
  • The various user devices described above may be connected by wire or wirelessly so as to be able to communicate with each other.
  • The client-server model described above is merely one configuration example of the information processing device 1; a standalone configuration, in which each function of the information processing device 1 is implemented in the user device group 12 used locally by the user 10, may also be employed.
  • FIG. 2 is a functional block diagram of the information processing device 1 according to the first embodiment.
  • The information processing device 1 has an information input unit 110, a phrase extraction unit 131, a phrase database 132, a reading designation unit 141, a background information database 120, a background information acquisition unit 150, an information presentation unit 161, an explanation acquisition unit 162, and a reading confirmation unit 170.
  • These functional blocks are realized through cooperation between hardware resources, such as the computer's central processing unit, memory, input devices, output devices, and peripheral devices connected to the computer, and the software executed using them. Regardless of the type of computer or its location, each functional block may be implemented using the hardware resources of a single computer or by combining hardware resources distributed among multiple computers.
  • Information to be presented to the information presentation target person 10 is input to the information input unit 110.
  • The type of information to be input is not particularly limited as long as it includes linguistic information from which phrases can be extracted by the phrase extraction unit 131 in the subsequent stage; examples include text information, image information, audio information, and video information.
  • When non-text information such as image information, audio information, or video information is input to the information input unit 110, the linguistic information included in it is converted into text information for phrase extraction using image recognition technology, speech recognition technology, or the like.
  • The phrase extraction unit 131 extracts phrases from the linguistic information input to the information input unit 110. Specifically, the phrase extraction unit 131 searches the phrase database 132 for each phrase included in the input linguistic information. Phrases having multiple readings are registered in advance in the phrase database 132, and the phrase extraction unit 131 extracts the phrases hit by the search and provides them to the reading designation unit 141 in the subsequent stage. Specific examples of the phrases registered in the phrase database 132 will be described later.
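The extraction step described above can be sketched as a simple database lookup. This is only an illustrative reading of the description, not code from the patent; the names `PHRASE_DB` and `extract_phrases`, and the whitespace tokenization, are assumptions for the sketch.

```python
# Hypothetical sketch of the phrase-extraction step: each token of the
# input text is looked up in a phrase database, and only tokens with a
# registered entry (i.e. phrases that have multiple readings) are passed
# on to the reading-designation step.
PHRASE_DB = {
    "FTO": ["freedom to operate", "ef-tee-oh"],
    "KYC": ["know your customer", "kay-why-see"],
}

def extract_phrases(text: str) -> list[str]:
    """Return the phrases in `text` that are registered in the database."""
    return [token for token in text.split() if token in PHRASE_DB]

print(extract_phrases("We reviewed the FTO report before the KYC check"))
# → ['FTO', 'KYC']
```

A real implementation would use morphological analysis rather than whitespace splitting, particularly for Japanese text, but the hit/no-hit filtering is the same.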
  • Words of parts of speech other than nouns, such as verbs, adjectives, adverbs, pronouns, auxiliary verbs, conjunctions, articles, and interjections, may also be registered in the phrase database 132.
  • The reading designation unit 141 refers to the background information about the information presentation target person 10 and designates the reading of each phrase extracted by the phrase extraction unit 131.
  • A specific processing example of the reading designation unit 141 will be described later with reference to FIG. 3.
  • The background information about the information presentation target person 10 referred to by the reading designation unit 141 is acquired by the background information acquisition unit 150.
  • The background information acquisition unit 150 includes an attribute acquisition unit 151 and a situation detection unit 152.
  • The attribute acquisition unit 151 acquires attribute information of the information presentation target person 10, such as age, gender, hometown, nationality, professional qualifications, occupation, and affiliated organization, as background information. This attribute information may be stored in advance in the attribute information database 121 included in the background information database 120 accessible by the background information acquisition unit 150, or may be stored in advance in the user device group 12 (FIG. 1). If there is a discrepancy between the attribute information held by the attribute information database 121 and the attribute information held by the user device group 12, the attribute information with the most recent storage date, or the attribute information held by the majority of devices, is provided to the reading designation unit 141 as the most reliable attribute information.
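The discrepancy rule above (prefer the most recently stored value, with majority among devices as a tiebreaker) can be sketched as follows. The function name `resolve_attribute`, the record layout, and the example data are all hypothetical; the patent only states the preference criteria.

```python
from collections import Counter
from datetime import date

# Illustrative conflict resolution for attribute information: each
# candidate value is paired with its storage date; the most recently
# stored value wins, and ties on date are broken by majority vote.
def resolve_attribute(records: list[tuple[str, date]]) -> str:
    latest = max(d for _, d in records)                    # most recent storage date
    newest = [v for v, d in records if d == latest]        # candidates stored on that date
    return Counter(newest).most_common(1)[0][0]            # majority among the newest

records = [
    ("lawyer", date(2020, 1, 5)),   # older entry in the attribute database
    ("judge", date(2021, 3, 2)),    # newer entry held by a user device
]
print(resolve_attribute(records))  # → judge
```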
  • The situation detection unit 152 detects, as background information, the situation in which the information presentation target person 10 is placed. Specifically, the situation detection unit 152 obtains, from the user device group 12 (FIG. 1) used by or located around the information presentation target person 10, various types of situation information that directly or indirectly indicate that situation.
  • For example, the situation detection unit 152 analyzes the conversation of the information presentation target person 10 and detects the situation.
  • A smart speaker, smartphone, or the like on which a virtual assistant having a voice recognition function is implemented functions as a user device that listens to conversation as situation information about the information presentation target person 10.
  • The conversation heard by the user device is analyzed by the user device itself or by an analysis engine in the situation detection unit 152, and the situation in which the information presentation target person 10 is placed is detected.
  • For example, whether the information presentation target person 10 is in a formal or casual setting can be detected from the content, speed, tone, and wording of the speech of the information presentation target person 10 or the conversation partner.
  • From the conversation, including the speech of the information presentation target person 10, it is possible to infer with high accuracy the attributes of the participants and the situation in which the conversation is taking place.
  • The situation in which the information presentation target person 10 is placed may also be detected based on other situation information that can be acquired from the user device group 12.
  • Information input by the information presentation target person 10 by operating the personal computer 12A, the smartphone 12B, a tablet, or the like is highly valuable as situation information, similar to what the information presentation target person 10 says in conversation.
  • An image captured by the camera 12C can likewise provide useful situation information, such as the facial expression, posture, gestures, and clothing of the information presentation target person 10, as well as the surrounding scenery, furniture, and degree of congestion.
  • The location of the information presentation target person 10 can also be detected by a positioning sensor, such as a GPS (Global Positioning System) sensor, built into a smartphone or the like.
  • When the information presentation target person 10 wears a wearable device such as a smartwatch capable of measuring biosignals such as heartbeat, body temperature, blood pressure, respiration, and perspiration, the biometric information obtained from it can also be utilized as situation information suggesting the situation in which the information presentation target person 10 is placed.
  • If the user device group 12 includes a sensor that measures information related to the environment of the information presentation target person 10, such as temperature, humidity, and luminance, such environmental measurement information can also be used as situation information. Date and time information obtainable from a clock can also be regarded as background information in a broad sense.
  • Each of these pieces of biometric information, environmental measurement information, and date and time information merely suggests, indirectly, the situation in which the information presentation target person 10 is placed; by combining them, however, the multifaceted situation of the information presentation target person 10, including physical and mental conditions such as physical condition and mood, can be detected.
  • The situation information that can be obtained from the user device group 12, as illustrated above, can be roughly classified into four types: "conversation", "operation input", "image", and "measurement information". In the following, these type names are used as appropriate to simplify the explanation.
  • The situation detection unit 152 may be configured with artificial intelligence so as to make accurate situation judgments by synthesizing these various types of situation information.
  • A situation history database 122 is provided in the background information database 120 to support situation determination by the situation detection unit 152.
  • In the situation history database 122, a large amount of reference data is held as past history data of the information presentation target person 10 or of third parties, in which situation information acquired by various user devices is associated with a description of the situation the user was in at that time.
  • For example, situation information detected by a camera, such as "a few people in suits are having a conversation in a private room", and a description of the user's situation, such as "the user is consulting a lawyer about inheritance", are associated with each other and held in the situation history database 122 as reference data.
  • Likewise, if the information presentation target person 10 works at a specific place B, such as a coffee shop, during time zone A on weekdays, then "time zone A" measured by a clock built into a smartphone or the like and "place B" measured by a GPS sensor are associated with the description "at work" and stored in the situation history database 122 as reference data.
  • The situation detection unit 152 searches the situation history database 122 based on the group of situation information acquired from the user device group 12 and finds reference data containing a similar group of situation information. Since the description of the user's situation included in the found reference data is highly likely to represent the current situation of the information presentation target person 10, the situation detection unit 152 can detect that situation with high accuracy by referring to the situation history database 122. Note that the situation detection unit 152 may be composed of artificial intelligence capable of machine learning, with the reference data held in the situation history database 122 machine-learned in advance as training data. In this case, the machine-learned situation detection unit 152 can determine the situation quickly without referring to the situation history database 122.
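The history lookup described above amounts to a similarity search over reference records. The sketch below uses set overlap as a stand-in for similarity; the record layout, signal names, and the function `detect_situation` are assumptions for illustration, not part of the patent, which leaves the matching method open (including machine-learned matching).

```python
# Minimal sketch of the situation-history lookup: each reference record
# pairs a set of situation signals with a description of the situation,
# and the detector returns the description whose signals overlap most
# with the currently observed signals.
REFERENCE_DATA = [
    ({"weekday", "time_zone_A", "place_B"}, "at work"),
    ({"suits", "private_room", "conversation"}, "consulting a lawyer"),
]

def detect_situation(observed: set[str]) -> str:
    """Return the reference description with the largest signal overlap."""
    best = max(REFERENCE_DATA, key=lambda rec: len(rec[0] & observed))
    return best[1]

print(detect_situation({"weekday", "time_zone_A", "place_B", "coffee"}))
# → at work
```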
  • The information presentation unit 161 presents the information input to the information input unit 110 to the information presentation target person 10 in accordance with the reading specified by the reading designation unit 141. Specifically, when presenting the information, the information presentation unit 161 assigns the reading specified by the reading designation unit 141 to each phrase extracted by the phrase extraction unit 131. When the information presentation unit 161 presents information to the information presentation target person 10 through the speaker 12F (which may be a speaker built into the smartphone 12B or the like) as an audio output unit, it causes the speaker 12F to output speech in which the phrases extracted by the phrase extraction unit 131 are read aloud according to the readings specified by the reading designation unit 141.
  • When the information presentation unit 161 presents information to the information presentation target person 10 on the display 12E (which may be a display built into the smartphone 12B or the like) as a display unit, the phrases extracted by the phrase extraction unit 131 are displayed on the display 12E together with the readings specified by the reading designation unit 141. For example, if the reading "igon" is specified for the word "will" (遺言), the reading may be displayed in parentheses after the kanji, as in "遺言 (igon)", or only the hiragana reading "igon" may be displayed instead of the kanji.
  • The explanation acquisition unit 162 acquires an explanation of the information input to the information input unit 110.
  • Specifically, an explanation of each phrase extracted by the phrase extraction unit 131 is acquired from the phrase database 132.
  • For example, for the phrase "FTO", the explanation acquisition unit 162 acquires the explanation that business can be carried out freely without infringing the intellectual property rights of third parties.
  • The information presentation unit 161 presents the explanation acquired by the explanation acquisition unit 162 to the information presentation target person 10 together with the information input to the information input unit 110.
  • For audio presentation, the information presentation unit 161 causes the speaker 12F to read out "freedom to operate" and then read out the explanation "you can freely do business without infringing on the intellectual property rights of a third party".
  • For display presentation, the information presentation unit 161 may collectively display on the display 12E the phrase extracted by the phrase extraction unit 131, the reading specified by the reading designation unit 141, and the explanation acquired by the explanation acquisition unit 162, for example as "FTO (freedom to operate: free to do business without infringing on the intellectual property rights of a third party)". It is also possible to display the phrase and its reading in the main text and display the explanation outside the main text as a footnote, such as "* You can freely do business without infringing on the intellectual property rights of third parties."
  • The reading confirmation unit 170 includes an inquiry unit 171 and a reply reception unit 172. If there are multiple reading candidates for a phrase extracted by the phrase extraction unit 131 and the reading designation unit 141 cannot determine the reading even by referring to the background information acquired by the background information acquisition unit 150, the inquiry unit 171 asks the information presentation target person 10 how to read the phrase. The inquiry to the information presentation target person 10 may be made by voice through the speaker 12F or by display on the display 12E.
  • The reply reception unit 172 receives the information presentation target person 10's reply to the inquiry from the inquiry unit 171.
  • The information presentation target person 10 can use any input means of the user device group 12 to reply with the correct reading of the phrase in question.
  • For example, the information presentation target person 10 may reply by voice to a user device having a voice recognition function, may input the correct reading as text on the screen of the smartphone 12B or the like, or, if the reading candidates are limited in number, may select the correct reading on the screen of the smartphone 12B or the like.
  • The reading designation unit 141 designates the reading of the phrase extracted by the phrase extraction unit 131 in accordance with the reply from the information presentation target person 10 received by the reply reception unit 172. Thereafter, unless the background information acquired by the background information acquisition unit 150 changes significantly, the reading designated here is used consistently, so redundant inquiries to the information presentation target person 10 about the same phrase can be prevented.
  • FIG. 3 shows an example of phrase registration in the phrase database 132.
  • Multiple readings are registered for each phrase, and for each reading a type, a typical usage scene, and an explanation are registered.
  • The type indicates, for example, the field in which the reading is used.
  • A typical usage scene is a typical scene in which the reading is used, and corresponds to the attributes and situation of the information presentation target person 10 (or the conversation partner) acquired by the background information acquisition unit 150.
  • The explanation, an optional item, is an explanation of the phrase under that reading; the information presentation unit 161 presents the explanation acquired by the explanation acquisition unit 162 to the information presentation target person 10.
  • Each phrase will be described below.
  • The word "遺言" (will) has the reading "yuigon", used in general contexts, and the reading "igon", used in the legal field. If the background information does not suggest a legal context, the reading designation unit 141 designates the reading "yuigon" for the word "遺言".
  • Alternatively, the type may be estimated from the attributes or situation of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150, and a reading whose type matches the estimated type may be adopted.
  • The phrase "FTO" has two readings: "freedom to operate" and "ef-tee-oh".
  • "Freedom to operate" is a reading used in the field of intellectual property.
  • If the attributes and situation of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 match or resemble the typical usage scene "examination of third parties' intellectual property rights in the business development area", the reading designation unit 141 designates the reading "freedom to operate" for the phrase "FTO".
  • At this time, the explanation acquisition unit 162 acquires from the phrase database 132 the explanation "you can freely do business without infringing on the intellectual property rights of a third party" and causes the information presentation unit 161 to present it as well.
  • "Ef-tee-oh" is simply the reading of each letter of the alphabet. If the attributes and situation of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 neither match nor resemble "examination of third parties' intellectual property rights in the business development area", the typical usage scene of the reading "freedom to operate", the reading designation unit 141 designates the reading "ef-tee-oh" for the phrase "FTO".
  • The digit "0" appearing in a middle digit of a number may be read as "tonde" (a Japanese reading that literally means "skipping") or not read at all.
  • For example, "1010 yen" may be read as "sen tonde ju-en" or as "sen-ju-en".
  • "Tonde" is a reading used in the field of finance. If the attributes and situation of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 match or resemble the typical usage scene "reading out amounts in the financial field", the reading designation unit 141 designates the reading "tonde" for the digit "0".
  • Otherwise, the reading designation unit 141 designates that the digit "0" is not read aloud.
  • The phrase "KYC" has two readings: "know your customer" and "kay-why-see". "Know your customer" is a reading used in the field of commerce. If the attributes and situation of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 match or resemble the typical usage scene "identity verification when opening a bank account, etc.", the reading designation unit 141 designates the reading "know your customer" for the phrase "KYC". At this time, the explanation acquisition unit 162 acquires the explanation "customer confirmation required when starting a commercial transaction with a customer" from the phrase database 132 and causes the information presentation unit 161 to present it as well. "Kay-why-see" is simply the reading of each letter of the alphabet.
  • If the attributes and situation of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 neither match nor resemble the typical usage scene "identity verification when opening a bank account, etc.", the reading designation unit 141 designates the reading "kay-why-see" for the phrase "KYC".
  • The word "Mita" has two readings: "Mita" and "Sanda". "Mita" is a name often used in the Kanto region of Japan, and "Sanda" is a name often used in the Kansai region of Japan. Even with reference to the background information, the reading is unlikely to be determinable. Therefore, an "inquiry required" flag is entered in the "type" column so that the reading confirmation unit 170 inquires of the information presentation target person 10 about the reading. When this flag is present, the reading confirmation unit 170 inquires of the information presentation target person 10 about the reading, except in exceptional cases where the reading designation unit 141 can determine the reading from the background information. The reading designation unit 141 designates the reading according to the response of the information presentation target person 10 to the inquiry.
  • FIG. 4 is a flowchart showing processing of the information processing device 1 according to the first embodiment.
  • “S” in the flow chart means “step”.
  • In S1, information to be presented to the information presentation target person 10 is input to the information input unit 110.
  • In S2, the word/phrase extraction unit 131 searches the word/phrase database 132 for each word/phrase included in the information input in S1.
  • In S3, the word/phrase extraction unit 131 determines whether or not the word/phrase searched for in S2 is found (hit) in the word/phrase database 132. If there is no hit, the information input in S1 contains no word or phrase requiring attention to its reading, so the information input in S1 is presented to the information presentation target person 10 as it is. If there is a hit, the word/phrase extraction unit 131 extracts the word/phrase and the process proceeds to S4.
  • In S4, the background information acquisition unit 150 acquires background information (attributes and/or situations) of the information presentation target person 10 or the conversation partner.
  • In S5, the reading designation unit 141 determines whether it is necessary to inquire of the information presentation target person 10 about how to read the words extracted in S3. If the reading designation unit 141 can designate the reading based on the background information acquired in S4, the process proceeds to S8 without inquiring of the information presentation target person 10. If the reading cannot be designated even with reference to the background information acquired in S4, or if the "inquiry required" flag is set as in the example of "Mita" described above, an inquiry to the information presentation target person 10 is determined to be required and the process proceeds to S6.
  • In S6, the inquiry unit 171 inquires of the information presentation target person 10 about how to read the words determined in S5 to require inquiry.
  • In S7, the response receiving unit 172 receives the response of the information presentation target person 10 to the inquiry in S6.
  • In S8, the reading designation unit 141 refers to the background information acquired in S4 and designates how to read the words extracted in S3.
  • When an inquiry has been made, the reading designation unit 141 designates the reading according to the response of the information presentation target person 10 received in S7.
  • In S9, the explanation acquisition unit 162 acquires, from the word/phrase database 132, the explanation registered for each phrase whose reading was designated in S8.
  • In S10, the information presentation unit 161 presents the information input in S1 to the information presentation target person 10 in accordance with the reading designated in S8. The explanation of any phrase whose explanation was acquired in S9 is presented to the information presentation target person 10 together with the information input in S1.
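The S1–S10 flow above can be sketched in simplified form. The database layout, the context tags, and the matching rule below are illustrative assumptions, not the actual registration format of the word/phrase database 132:

```python
# Sketch of the S1-S10 reading-designation flow (illustrative assumptions throughout).
PHRASE_DB = {
    "FTO":  {"readings": {"ip": "freedom to operate"},
             "default": "ef-tee-oh", "ask": False},
    "KYC":  {"readings": {"commerce": "know your customer"},
             "default": "kay-why-see", "ask": False},
    "Mita": {"readings": {}, "default": None, "ask": True},  # "inquiry required" flag
}

def designate_reading(word, background, inquire=None):
    """Return a reading for `word`, given background info as a set of context tags."""
    entry = PHRASE_DB.get(word)
    if entry is None:
        return word                          # S3: no hit, present the word as-is
    for scene, reading in entry["readings"].items():
        if scene in background:              # background matches a typical usage scene
            return reading
    if entry["ask"] and inquire is not None:
        return inquire(word)                 # S5-S7: ask the person directly
    return entry["default"] or word          # fall back to the letter-by-letter reading

print(designate_reading("KYC", {"commerce"}))   # -> know your customer
print(designate_reading("KYC", {"sports"}))     # -> kay-why-see
```

The `ask` field plays the role of the "inquiry required" flag: when the background information cannot settle the reading, the person is asked directly, mirroring S5 to S7.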
  • FIG. 5 is a functional block diagram of the information processing device 1 according to the second embodiment.
  • the same reference numerals are given to the same constituent elements as in the above-described embodiment, and the description thereof is omitted.
  • the presentation mode specification unit 142 refers to the background information about the information presentation target person 10 acquired by the background information acquisition unit 150 and specifies the presentation mode of the information input by the information input unit 110 .
  • The presentation modes registered in the presentation mode database 133, which is accessible by the presentation mode designation unit 142, include at least one of the speed at which the input information is presented, the amount of the input information to be presented, the tone in which the input information is presented, the volume, and the voice used to read out the input information.
  • the information presentation unit 161 presents the information input by the information input unit 110 to the information presentation target person 10 in accordance with the presentation mode specified by the presentation mode specification unit 142 .
  • FIG. 6 shows an example of presentation mode registration in the presentation mode database 133 .
  • The presentation mode database 133 is configured as a table associating the background information (attribute/situation) of the information presentation target person 10 acquired by the background information acquisition unit 150 with the information presentation mode (speed/information amount/tone/volume/voice).
  • the presentation mode designating unit 142 may be configured by machine-learnable artificial intelligence, and a table held in the presentation mode database 133 may be machine-learned in advance as training data.
  • the presentation mode specifying unit 142 can not only quickly process cases that match the table, but also flexibly process cases that do not match the table while performing autonomous machine learning.
  • When the attribute of the information presentation target person 10 is "minor", the information presentation speed is set to "slow" and the information presentation amount to "small". The reading speed is slowed down when the information is presented through the speaker 12F, and the display speed is slowed down when the information is presented on the display 12E; the amount of information is reduced by editing such as deletion.
  • The tone of information presentation is also set to "gentle": when the information is presented through the speaker 12F the reading tone is softened, and when the information is presented on the display 12E, hiragana characters are used in preference to kanji characters and the polite "desu/masu" style is used instead of the plain "de-aru" style, softening the expression.
  • When the attribute of the information presentation target person 10 is "elderly", basically the same information presentation mode as for "minor" is considered suitable. In addition, the volume of information presented through the speaker 12F is increased, and when information is presented on the display 12E, the display size of characters is made "larger". If the attribute of the information presentation target person 10 is "person whose native language is not the presentation language", the information processing ability in the presentation language is assumed to be lower, so, as for "minor" and "elderly", the information presentation speed is set to "slow" and the amount of information presented is "decreased". When the attribute of the information presentation target person 10 is "person from a specific region", the information may be presented using the accent or dialect of that person's home region.
  • When the information presentation target person 10 is in a relaxed situation such as at home, the information presentation speed is set to "slow" and the information presentation amount to "small" so as not to disturb the relaxed atmosphere or create noise for the family and neighbors.
  • When the information presentation target person 10 is in a "working/studying" situation, there is no problem even if the information presentation amount is "large", because information processing ability is enhanced by active brain activity. The volume is kept low so as not to disturb concentration.
  • When the concentration of the information presentation target person 10 is particularly high, temporarily withholding information other than that of high importance or urgency for the information presentation target person 10 allows him or her to remain concentrated.
  • When the information presentation target person 10 is placed in an "emergency" situation, only the information necessary to deal with the emergency needs to be conveyed concisely and reliably. The information presentation speed is therefore made "faster", the amount of information presented "smaller", the tone of information presentation "severe", and the volume "larger".
  • When the information presentation target person 10 is in a situation such as being unwell, the information presentation speed is set to "slow" so that even an information presentation target person 10 with reduced information processing ability can understand, and the tone of information presentation is made gentle and the volume low so as not to disturb his or her rest.
  • the information presentation mode may be determined for each place or event. For example, information may be presented to the information presentation target person 10 using terms, expressions, accents, dialects, character voices, event information, etc. specific to a place or event.
  • The weather around the information presentation target person 10 can be estimated from temperature and humidity information, or the weather forecast for that place at that date and time can be obtained from a predetermined weather forecast server based on the date/time and place information, so information can be presented according to the weather. For example, a soft voice is used when it is raining, a crisp voice when it is sunny, and a tense voice during a typhoon.
  • FIG. 7 is a flowchart showing processing of the information processing device 1 according to the second embodiment.
  • In S11, information to be presented to the information presentation target person 10 is input to the information input unit 110.
  • In S12, the background information acquisition unit 150 acquires background information (attributes and/or situations) of the information presentation target person 10 or the conversation partner.
  • In S13, the presentation mode specifying unit 142 refers to the background information acquired in S12 and to the presentation mode database 133, and specifies the presentation mode of the information input in S11.
  • In S14, the information presentation unit 161 presents the information input in S11 to the information presentation target person 10 in accordance with the presentation mode specified in S13.
  • According to the second embodiment, information can be presented in an appropriate manner that matches the attributes and circumstances of the information presentation target person 10.
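The attribute/situation table of FIG. 6 together with the S11–S14 flow amounts to a table lookup. The rows and field values below are rough illustrations of the examples in the text, not the actual contents of the presentation mode database 133:

```python
# Sketch of presentation-mode designation (second embodiment); table values are
# rough illustrations of the examples in the text, not the real database contents.
DEFAULT_MODE = {"speed": "normal", "amount": "normal", "tone": "neutral",
                "volume": "normal", "voice": "standard"}

PRESENTATION_MODE_DB = {
    ("attribute", "minor"):     {"speed": "slow", "amount": "small", "tone": "gentle"},
    ("attribute", "elderly"):   {"speed": "slow", "amount": "small", "volume": "large"},
    ("situation", "working"):   {"amount": "large", "volume": "small"},
    ("situation", "emergency"): {"speed": "fast", "amount": "small",
                                 "tone": "severe", "volume": "large"},
}

def designate_mode(background):
    """Merge the default mode with every table row matching the background info."""
    mode = dict(DEFAULT_MODE)
    for key, overrides in PRESENTATION_MODE_DB.items():
        if key in background:
            mode.update(overrides)
    return mode

mode = designate_mode({("situation", "emergency")})
print(mode["speed"], mode["tone"], mode["volume"])   # -> fast severe large
```

A machine-learned presentation mode designation unit, as suggested above, would replace this exact-match lookup with a model trained on the table, so that backgrounds not listed in the table also receive sensible modes.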
  • FIG. 8 is a functional block diagram of the information processing device 1 according to the third embodiment.
  • the same reference numerals are given to the same constituent elements as in the above-described embodiment, and the description thereof is omitted.
  • the information input unit 110 receives information on minutes to be presented to the information presentation target person 10 .
  • the phrase extraction unit 131 extracts phrases from the minutes information input to the information input unit 110 .
  • the word/phrase extraction unit 131 searches the word/phrase database 132 for each word/phrase included in the information of the minutes input to the information input unit 110 .
  • In the word/phrase database 132, phrases whose presentation mode should be changed according to the background information or attributes of the information presentation target person 10 are registered in advance. The extracted phrases are passed to the minutes processing unit 143.
  • The minutes processing unit 143 refers to the background information about each information presentation target person 10 held in the information presentation target person database 153 and processes the information of the minutes input to the information input unit 110 to suit each information presentation target person 10. Specifically, the minutes processing unit 143 changes the presentation mode of the phrases extracted by the phrase extraction unit 131 in accordance with the background information of the information presentation target person 10.
  • the information presentation unit 161 presents the information of the minutes processed by the minutes processing unit 143 to the information presentation target person 10 .
  • An electronic file of the minutes processed for each information presentation target person 10, or an e-mail containing the minutes processed for each information presentation target person 10 in its body, is sent individually to each information presentation target person 10.
  • FIG. 9 shows an example of word/phrase registration in the word/phrase database 132 .
  • For each of the words "AAA" to "GGG", a type, a disclosable range, a well-known range, and an explanation to be added when disclosing outside the well-known range are registered.
  • the type is a pattern of each word, and examples include "company name”, “business name”, “product name”, “service name”, “project name”, “technical name”, and “organization name”.
  • The disclosable range is the range in which each word may be disclosed, and is specified by the organization to which a person belongs, such as a company or division, a title such as president or manager, membership in a project or task force, or the like.
  • When the minutes are provided to an information presentation target person 10 outside the disclosable range, the minutes processing unit 143 performs processing such as deleting the word or paraphrasing it into words that can be disclosed, or conceals it.
  • The well-known range is the range in which each word is well known, and is specified by affiliated organization, title, membership, and the like, in the same way as the disclosable range.
  • The explanation added when disclosing outside the well-known range is a supplementary explanation added by the explanation acquisition unit 162 when the minutes are provided to an information presentation target person 10 who is within the disclosable range but outside the well-known range; typically it is an outline of each word. Such outlines are not included in the minutes for information presentation target persons 10 within the well-known range, since the words are already known to them, but they are included in the minutes for information presentation target persons 10 outside the well-known range, since they help in understanding unknown words.
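The three-way handling implied by the disclosable range and the well-known range can be sketched as a per-word decision. Representing each range as a plain set of recipient identifiers is a simplifying assumption made only for illustration, not the registration format of FIG. 9:

```python
# Per-word decision for the minutes (third embodiment): conceal, keep, or keep with
# a supplementary explanation. Ranges as sets of recipient IDs are an assumption.
def handle_word(word, recipient, disclosable, well_known, explanation):
    if recipient not in disclosable:
        return None                           # outside disclosable range: delete/conceal
    if recipient not in well_known:
        return f"{word} ({explanation})"      # disclosable but not well known: add outline
    return word                               # well known: include as-is

# A "DDD"-like word: disclosable only to "A"-"C" and not widely known.
print(handle_word("DDD", "A", {"A", "B", "C"}, set(), "outline of the DDD service"))
# -> DDD (outline of the DDD service)
print(handle_word("DDD", "F", {"A", "B", "C"}, set(), "outline of the DDD service"))
# -> None
```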
  • FIG. 10 shows an example of registration of background information in the information presentation target person database 153.
  • Each information presentation target person 10 is listed as a minutes delivery target person, and an outside-company flag, an affiliated organization, and a title are registered for each minutes delivery target person.
  • The outside-company flag is a flag indicating that the minutes delivery target person is outside the company.
  • The illustrated example is the information presentation target person database 153 constructed by "AAA Corporation", and the outside-company flag is set for the minutes delivery target person "F", who belongs to "XXX Corporation" outside "AAA Corporation".
  • When the minutes are sent to a minutes delivery target person for whom the outside-company flag is set, each confidential word registered in the word/phrase database 132 (each word whose disclosable range is limited to "AAA Corporation") must be carefully processed (or concealed). These processes are performed by the minutes processing unit 143.
  • the minutes processing unit 143 or the information presenting unit 161 can determine whether or not to forward the minutes to the person in charge of final confirmation according to the presence or absence of the external flag.
  • The minutes processing unit 143 performs no processing of these words for any of the minutes delivery target persons "A" to "F".
  • These words and phrases are included as they are in the minutes sent to "A" to "E", who belong to "AAA Corporation".
  • No supplementary explanation is added.
  • Supplementary explanations of these words (an outline of AAA Corporation or an outline of the BBB business) are added to the minutes sent to the minutes delivery target person "F", who belongs to "XXX Corporation".
  • The minutes processing unit 143 performs concealment processing such as deleting the word "DDD".
  • Since the word "DDD" is not widely known, a supplementary explanation of the word "DDD" (an outline of the DDD service) is added to the minutes sent to the delivery target persons "A" to "C", who are within the disclosable range.
  • The minutes processing unit 143 processes the word "GGG", for example by paraphrasing it into other words that can be disclosed.
  • The well-known range of the word "GGG" is only "AAA Corporation / GGG Office", so a supplementary explanation of the word "GGG" (an outline of the GGG Office) is added to the minutes sent to the minutes delivery target persons "A" to "E", who are outside the well-known range.
  • FIG. 11 is a flowchart showing processing of the information processing device 1 according to the third embodiment.
  • In S15, the information of the minutes to be presented to the information presentation target persons 10 is input to the information input unit 110.
  • In S16, the word/phrase extraction unit 131 searches the word/phrase database 132 for each word/phrase included in the information of the minutes input in S15.
  • In S17, the word/phrase extraction unit 131 determines whether or not the word/phrase searched for in S16 is found (hit) in the word/phrase database 132. If there is no hit, the information input in S15 includes no words requiring attention when the minutes are sent to each information presentation target person 10, so the minutes input in S15 are sent to the information presentation target persons 10 as they are. If there is a hit, the word/phrase extraction unit 131 extracts the word/phrase and the process proceeds to S18.
  • In S18, the minutes processing unit 143 designates one of the minutes delivery target persons "A" to "F" registered in the information presentation target person database 153.
  • In S19, the minutes processing unit 143 acquires from the information presentation target person database 153 background information such as the "outside-company flag", "organization", and "title" of the minutes delivery target person designated in S18.
  • In S20, the minutes processing unit 143 searches the word/phrase database 132 based on the background information acquired in S19 and determines whether or not there is a word/phrase outside the disclosable range. If there is, the process proceeds to S21 and the minutes processing unit 143 deletes or corrects the word/phrase.
  • In S22, the minutes processing unit 143 searches the word/phrase database 132 based on the background information acquired in S19 and determines whether or not there is a word/phrase outside the well-known range. If there is, the process proceeds to S23 and the explanation acquisition unit 162 adds a supplementary explanation to the word/phrase.
  • In S24, the minutes processing unit 143 determines whether or not all the minutes delivery target persons "A" to "F" have been designated in S18. If an undesignated minutes delivery target person remains, the process returns to S18, a new minutes delivery target person is designated, and the processes of S19 to S23 are repeated. When the processes of S18 to S23 have been completed for all the minutes delivery target persons "A" to "F", the process proceeds to S25 and the information presentation unit 161 sends the minutes processed for each minutes delivery target person to that person.
  • According to the third embodiment, the minutes can be appropriately processed according to the attributes and circumstances of the information presentation target person 10.
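A minimal sketch of the S18–S25 loop, producing processed minutes for each delivery target person. The word entries, ranges, and recipients below are invented stand-ins, not the actual contents of FIGS. 9 and 10:

```python
# Sketch of the S18-S25 per-recipient loop; word entries and ranges are invented
# stand-ins, not the actual contents of FIGS. 9 and 10.
WORD_DB = {
    "DDD": {"disclosable": {"A", "B", "C"}, "known": set(),
            "explanation": "outline of the DDD service"},
    "GGG": {"disclosable": {"A", "B", "C", "D", "E"}, "known": set(),
            "explanation": "outline of the GGG Office"},
}
RECIPIENTS = ["A", "B", "C", "D", "E", "F"]          # "F" is outside the company

def process_for(recipient, words):
    """Return the minutes words as they appear for one delivery target person."""
    out = []
    for w in words:
        entry = WORD_DB.get(w)
        if entry is None:
            out.append(w)                                        # unregistered: keep
        elif recipient not in entry["disclosable"]:
            continue                                             # S20-S21: delete
        elif recipient not in entry["known"]:
            out.append(f'{w} ({entry["explanation"]})')          # S22-S23: supplement
        else:
            out.append(w)
    return out

minutes = {r: process_for(r, ["meeting", "DDD", "GGG"]) for r in RECIPIENTS}  # S18/S24
print(minutes["F"])   # -> ['meeting']
```

For the outside recipient "F", both registered words are concealed; inside recipients receive the words with supplementary outlines attached where they fall outside the well-known range.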
  • The technical idea of the third embodiment may be applied to the first embodiment or the second embodiment. That is, when presenting the input information to the information presentation target person in voice or text, the phrase database may be referred to in order to determine whether each phrase is within the disclosable range and within the well-known range, and the voice or text to be output may be changed according to the determination result.
  • FIG. 12 is a functional block diagram of the information processing device 1 according to the fourth embodiment.
  • the same reference numerals are given to the same constituent elements as in the above-described embodiment, and the description thereof is omitted.
  • the privacy determination unit 144 refers to background information about the information presentation target person 10 acquired by the background information acquisition unit 150 and determines whether or not privacy protection is necessary.
  • The criterion for determining whether privacy protection is necessary can be set arbitrarily based on the attributes of the information presentation target person 10 acquired by the attribute acquisition unit 151 and the situation of the information presentation target person 10 detected by the situation detection unit 152.
  • Protection of AAA Corporation's privacy means, as in the third embodiment above, keeping confidential information related to AAA Corporation's business, products, services, projects, technology, organization, and the like concealed so that it is not leaked outside the company.
  • When privacy protection is determined to be necessary, the anonymization processing unit 145 performs concealment processing on at least part of the information input by the information input unit 110.
  • For example, the anonymization processing unit 145 performs concealment processing such as deleting personal information by which the information presentation target person 10 can be identified, or replacing it with other information. When calling the information presentation target person 10 by automatic voice from the speaker 12F functioning as the information presentation unit 161 in a hospital, government office, or the like, the call would normally identify the person by name, such as "Mr. AAA, please come to the counter"; when privacy protection is required, such identifying information is deleted or replaced with other information.
  • When the privacy of AAA Corporation is to be protected, the anonymization processing unit 145 performs concealment processing such as deleting, or replacing with other information, information that can identify the information presentation target person 10 himself or herself, the affiliated AAA Corporation, or AAA Corporation's business, products, services, projects, technologies, organizations, and the like.
  • The information presentation unit 161 presents the information subjected to the concealment processing by the anonymization processing unit 145 to the information presentation target person 10.
  • FIG. 13 is a flowchart showing processing of the information processing device 1 according to the fourth embodiment.
  • In S26, information to be presented to the information presentation target person 10 is input to the information input unit 110.
  • In S27, the background information acquisition unit 150 acquires background information (attributes and/or situations) of the information presentation target person 10.
  • In S28, the privacy determination unit 144 determines the necessity of privacy protection by referring to the background information about the information presentation target person 10 acquired in S27.
  • If privacy protection is determined to be necessary in S28, in S29 the anonymization processing unit 145 performs concealment processing on at least part of the information input in S26.
  • In S30, the information presentation unit 161 presents the information subjected to the concealment processing in S29 to the information presentation target person 10.
  • According to the fourth embodiment, privacy can be appropriately protected according to the attributes and circumstances of the information presentation target person 10.
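A minimal sketch of the fourth embodiment's behavior for the counter-call example. The privacy criterion and the reception-number replacement are assumptions made for illustration, not the patented implementation:

```python
# Sketch of the fourth embodiment's counter-call example; the privacy criterion and
# the reception-number replacement are assumptions made for illustration.
import re

def needs_privacy(background):
    # Assumed criterion: announcements made in a public place require protection.
    return background.get("situation") == "public place"

def present(text, background, name, reception_no):
    """Replace the identifying name with a reception number when privacy is needed."""
    if needs_privacy(background):
        return re.sub(re.escape(name), f"number {reception_no}", text)  # concealment
    return text

call = "Mr. AAA, please come to the counter"
print(present(call, {"situation": "public place"}, "Mr. AAA", 42))
# -> number 42, please come to the counter
```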
  • each device described in the embodiments can be realized by hardware resources or software resources, or by cooperation between hardware resources and software resources.
  • Processors, ROMs, RAMs, and other LSIs can be used as hardware resources.
  • Programs such as operating systems and applications can be used as software resources.
  • the present invention relates to information processing technology.
  • 1 information processing device, 10 information presentation target person, 12 user device group, 110 information input unit, 120 background information database, 121 attribute information database, 122 situation history database, 12E display, 12F speaker, 131 phrase extraction unit, 132 phrase database, 133 presentation mode database, 141 reading designation unit, 142 presentation mode designation unit, 143 minutes processing unit, 144 privacy determination unit, 145 anonymization processing unit, 150 background information acquisition unit, 151 attribute acquisition unit, 152 situation detection unit, 153 information presentation target person database, 161 information presentation unit, 162 explanation acquisition unit, 170 reading confirmation unit, 171 inquiry unit, 172 response receiving unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An information processing device 1 comprises: an information input unit 110 into which information to be presented to an information presentation subject 10 is inputted; a phrase extraction unit 131 which extracts a phrase registered in a phrase database 132 from the information inputted into the information input unit 110; a phonetic reading designation unit 141 that refers to background information pertaining to the information presentation subject 10 acquired by a background information acquisition unit 150 and designates a phonetic reading for the phrase extracted by the phrase extraction unit 131; and an information presentation unit 161 that presents the information inputted into the information input unit 110 to the information presentation subject 10 in accordance with the phonetic reading designated by the phonetic reading designation unit 141.

Description

Information processing device, information processing method, and information processing program

The present invention relates to information processing technology.

The development of chatbots and virtual assistants (hereinafter collectively referred to as AI assistants) is underway with the aim of supporting users by means of artificial intelligence (AI). Chatbots are widely used in online customer service: in response to inquiries entered as text by customers visiting a website, they automatically select and return an appropriate response from a group of prepared responses. Virtual assistants are implemented in smart devices such as smart speakers, smartphones, and smartwatches, as well as in mobile applications, and automatically respond to user inquiries made by voice and the like, drawing on the vast amount of information available on the Internet.

Japanese Patent Application Laid-Open No. 2021-18803

Many conventional AI assistants respond to inquiries from users in a fixed or uniform manner. Although there are personalization efforts to customize responses for individual users, the present inventors recognized that there is room for further improvement.

The present invention has been made in view of this situation, and its purpose is to provide an information processing apparatus that presents information appropriately in accordance with the user.

An information processing apparatus according to a first aspect of the present invention includes: an information input unit into which information is input; a reading designation unit that refers to background information about an information presentation target person and designates how to read the input information; and an information presentation unit that presents the input information to the information presentation target person in accordance with the designated reading.

An information processing apparatus according to a second aspect of the present invention includes: an information input unit into which information is input; a presentation mode designation unit that refers to background information about an information presentation target person and designates the presentation mode of the input information; and an information presentation unit that presents the input information to the information presentation target person in accordance with the designated presentation mode.

An information processing apparatus according to a third aspect of the present invention includes: an information input unit into which information on minutes is input; a minutes processing unit that refers to background information about an information presentation target person and processes the input information on the minutes to suit the information presentation target person; and an information presentation unit that presents the processed information on the minutes to the information presentation target person.

An information processing apparatus according to a fourth aspect of the present invention includes: an information input unit into which information is input; a privacy determination unit that refers to background information about an information presentation target person and determines whether privacy protection is necessary; a concealment processing unit that performs concealment processing on at least part of the input information when privacy protection is determined to be necessary; and an information presentation unit that presents the information subjected to the concealment processing to the information presentation target person.

It should be noted that any combination of the above constituent elements, and any conversion of the expression of the present invention between a method, a device, a system, a recording medium, a computer program, and the like, are also effective as aspects of the present invention.

According to the present invention, information can be presented appropriately in accordance with the user.
FIG. 1 schematically shows an example in which the information processing device of the present embodiment is configured as a client-server model. FIG. 2 is a functional block diagram of the information processing device according to the first embodiment. FIG. 3 shows an example of registration of words in the word/phrase database. FIG. 4 is a flowchart showing the processing of the information processing device according to the first embodiment. FIG. 5 is a functional block diagram of the information processing device according to the second embodiment. FIG. 6 shows an example of registration of presentation modes in the presentation mode database. FIG. 7 is a flowchart showing the processing of the information processing device according to the second embodiment. FIG. 8 is a functional block diagram of the information processing device according to the third embodiment. FIG. 9 shows an example of registration of words in the word/phrase database. FIG. 10 shows an example of registration of background information in the information presentation target person database. FIG. 11 is a flowchart showing the processing of the information processing device according to the third embodiment. FIG. 12 is a functional block diagram of the information processing device according to the fourth embodiment. FIG. 13 is a flowchart showing the processing of the information processing device according to the fourth embodiment.
FIG. 1 schematically shows an example in which the information processing device 1 of the present embodiment is configured as a client-server model. The information processing device 1 comprises a server 11 that performs information processing centrally, a user device group 12 at least one of which functions as a client for the user 10, and a network 13 that interconnects the server 11 and the user device group 12.

Although specific examples will be described later, various kinds of information to be presented to the user 10 are input to the server 11 from an information input unit 110. For convenience of illustration the information input unit 110 is shown on the server 11 side, but it may instead be provided on the client side. For example, the functions of the information input unit 110 may be implemented in one or more user devices belonging to the user device group 12. In this case, a user device functioning as the information input unit 110 generates various kinds of information to be presented to the user 10 based on operations by the user 10 or on the situation detection described later. Information generated by a user device is sent to the server 11 via the network 13 and subjected to the information processing described later. The information processed by the server 11 is then sent back to the user device group 12 via the network 13 and presented to the user 10.

The functions of the information input unit 110 may also be implemented in the server 11. In this case, the server 11 functioning as the information input unit 110 generates information to be presented to the user 10 based on various kinds of information held by itself or acquired via the network 13. The information to be presented to the user 10 that the server 11 generates includes, but is not limited to, information generated by another user 10' operating the server 11 itself or another user device 12' connected to the server 11 via the network 13, which the other user 10' intends to convey to the user 10.
The server 11 applies, to the information input from the information input unit 110, information processing that depends on background information such as the attributes and situation of the user 10. Examples of attributes of the user 10 include age, gender, birthplace, nationality, professional qualifications, occupation, and affiliated organization. This attribute information may be held in advance in a background information database 120 accessible to the server 11, or attribute information held in the user device group 12 may be read out in real time via the network 13 when the server 11 performs its processing. As for the situation of the user 10, the server 11 interprets it using situation detection information detected from the user 10 in real time by the user device group 12 as primary information, while also referring to situation history information held in the background information database 120 as secondary information.

The user device group 12 is one or more arbitrary devices that perform at least one of input and output of information concerning the user 10. A user device is not limited to a device owned and used by the user 10; it may be a device installed at a place where the user 10 is temporarily or regularly present, or a device used by a third party around the user 10. User devices are broadly classified into those that both input and output information concerning the user 10, those that only input information, and those that only output information. Examples of the first, input/output type include a personal computer 12A, a smartphone 12B, a tablet, a smart speaker, and wearable devices such as a smartwatch.

Examples of the second, input-only type include a camera 12C; a watch 12D having functions for measuring biosignals and the like; a microphone for capturing the speech of the user 10 and the sounds around the user 10; and various sensors capable of measuring, with or without contact with the user 10, the condition or state of the user 10 or the circumstances and environment around the user 10 (hereinafter also referred to as the situation of the user 10, or the situation in which the user 10 is placed). In the first and second types of user devices, "input information concerning the user" includes both information that the user inputs by operating the user device (mainly the first type) and information that the user device measures about the user's situation.

Examples of the third, output-only type include a display 12E as a display unit and a speaker 12F as an audio output unit. The various user devices described above may be connected to one another by wire or wirelessly so as to be able to communicate; by configuring at least one of them to communicate with the network 13, a client-server model can be constructed in which the user device group 12, as the client, is connected to the server 11 via the network 13.

The client-server model described above is merely one configuration example of the information processing device 1; a standalone configuration, in which each function of the information processing device 1 is implemented in the user device group 12 used locally by the user 10, is also possible.
FIG. 2 is a functional block diagram of the information processing device 1 according to the first embodiment. The information processing device 1 comprises an information input unit 110, a word/phrase extraction unit 131, a word/phrase database 132, a reading designation unit 141, a background information database 120, a background information acquisition unit 150, an information presentation unit 161, an explanation acquisition unit 162, and a reading confirmation unit 170. These functional blocks are realized through the cooperation of hardware resources, such as a computer's central processing unit, memory, input devices, output devices, and peripheral devices connected to the computer, and software executed using them. Regardless of the type or location of the computer, each functional block may be realized with the hardware resources of a single computer or by combining hardware resources distributed across multiple computers.

Information to be presented to the information presentation target person 10 is input to the information input unit 110. The type of input information is not particularly limited as long as it contains linguistic information from which the word/phrase extraction unit 131 in the subsequent stage can extract words. Examples include text information, image information, audio information, and video information. When non-text information such as image, audio, or video information is input to the information input unit 110, the linguistic information contained in it is converted into text information for word extraction using image recognition technology, speech recognition technology, or the like.
The word/phrase extraction unit 131 extracts words and phrases from the linguistic information input to the information input unit 110. Specifically, the word/phrase extraction unit 131 looks up each word contained in the input linguistic information in the word/phrase database 132. Words having multiple readings are registered in advance in the word/phrase database 132; the word/phrase extraction unit 131 extracts the words found by the search and provides them to the reading designation unit 141 in the subsequent stage. Specific examples of the registered words will be described later; words whose reading should change depending on the background information of the information presentation target person 10, that is, the person's attributes or the situation in which the person is placed, are registered in the word/phrase database 132. The registered words are typically nouns, but words of other parts of speech, such as verbs, adjectives, adverbs, pronouns, auxiliary verbs, conjunctions, articles, and interjections, may also be registered in the word/phrase database 132.
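The lookup performed by the word/phrase extraction unit 131 can be sketched as follows. This is a minimal illustration only: the dictionary layout and names are assumptions, and real Japanese input would first need tokenization by morphological analysis, which is omitted here.

```python
# Hypothetical word/phrase database: word -> list of registered readings.
# Only words with multiple readings need disambiguation downstream.
PHRASE_DB = {
    "遺言": ["いごん", "ゆいごん"],
    "FTO": ["freedom to operate", "ef-tee-oh"],
}

def extract_phrases(tokens):
    """Return the tokens that are registered in the word/phrase database,
    preserving their order of appearance."""
    return [t for t in tokens if t in PHRASE_DB]
```

For example, given pre-tokenized input containing 遺言 and FTO, only those two registered words are handed to the reading designation stage.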
The reading designation unit 141 refers to the background information about the information presentation target person 10 and designates the reading of each word extracted by the word/phrase extraction unit 131. A specific processing example of the reading designation unit 141 will be described later with reference to FIG. 3. The background information about the information presentation target person 10 that the reading designation unit 141 refers to is acquired by the background information acquisition unit 150, which comprises an attribute acquisition unit 151 and a situation detection unit 152.
The attribute acquisition unit 151 acquires, as background information, attribute information of the information presentation target person 10 such as age, gender, birthplace, nationality, professional qualifications, occupation, and affiliated organization. This attribute information may be held in advance in an attribute information database 121 included in the background information database 120 accessible to the background information acquisition unit 150, or it may be held in advance in the user device group 12 (FIG. 1) used by the information presentation target person 10. If there is a discrepancy between the attribute information held in the attribute information database 121 and that held in the user device group 12, the attribute information with the most recent storage date, or the attribute information held by a majority of the devices, is provided to the reading designation unit 141 as the more reliable attribute information.
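The tie-breaking rule just described, preferring a value held by a majority of sources and otherwise the most recently stored value, can be sketched as follows. The record layout and the precedence between the two criteria are assumptions for illustration; the description above does not fix which criterion is checked first.

```python
from collections import Counter

def resolve_attribute(records):
    """records: list of dicts like {"value": ..., "stored": "YYYY-MM-DD"},
    one per source (database entry or user device).
    Prefer a value held by a strict majority of sources; otherwise fall
    back to the value with the most recent storage date."""
    counts = Counter(r["value"] for r in records)
    value, n = counts.most_common(1)[0]
    if n > len(records) / 2:
        return value
    return max(records, key=lambda r: r["stored"])["value"]
```

For instance, two devices reporting "lawyer" outvote one newer "judge" entry, while between two equally supported values the newer one wins.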
The situation detection unit 152 detects, as background information, the situation in which the information presentation target person 10 is placed. Specifically, the situation detection unit 152 acquires various kinds of situation information that directly or indirectly indicate the situation of the information presentation target person 10 from the user device group 12 (FIG. 1) used by, or located around, the information presentation target person 10.

For example, the situation detection unit 152 analyzes conversations of the information presentation target person 10 to detect the situation. In this case, a smart speaker, smartphone, or similar device equipped with a virtual assistant having speech recognition functions as the user device that listens to conversation as situation information about the information presentation target person 10. The conversation captured by the user device is analyzed by the device itself or by an analysis engine implemented in the situation detection unit 152, and the situation of the information presentation target person 10 is detected. For example, from the content, speed, cadence, and tone of what the information presentation target person 10 or the conversation partner says, the atmosphere can be detected, such as whether the person is in a formal or casual setting. Furthermore, by detecting topics, terms, expressions, accents, dialects, and the like that are characteristic of a particular age, gender, birthplace, nationality, professional qualification, occupation, or organization, the attributes of the conversation participants, including the information presentation target person 10, and the situation in which the conversation is taking place can be inferred with high accuracy.

In addition to or instead of detecting the situation by analyzing conversation as above, the situation of the information presentation target person 10 may be detected based on other situation information obtainable from the user device group 12. For example, information that the information presentation target person 10 inputs by operating the personal computer 12A, smartphone 12B, tablet, or the like is as valuable as situation information as what the person says in conversation. If there is a camera that photographs the information presentation target person 10 or the place where that person is, useful situation information can be obtained, such as the person's facial expression, posture, gestures, and clothing, as well as the interior, furniture, and degree of crowding of the place. The location of the information presentation target person 10 can also be detected by a positioning sensor such as a GPS (Global Positioning System) sensor built into a smartphone or the like.

When the information presentation target person 10 wears a wearable device such as a smartwatch capable of measuring biosignals such as heart rate, body temperature, blood pressure, respiration, and perspiration, the biometric information obtained from it can also be used as situation information suggesting the situation of the information presentation target person 10. Similarly, if the user device group 12 includes sensors that measure information about the environment of the information presentation target person 10, such as temperature, humidity, and brightness, such environmental measurement information can also be used as situation information. Date-and-time information obtainable from a clock can likewise be referred to as background information in a broad sense. Individually, each of these kinds of biometric, environmental, and date-and-time information only indirectly suggests the situation of the information presentation target person 10; combined with other situation information, however, they make it possible to detect the person's situation from many angles, including mental and physical states such as physical condition and mood.

The situation information obtainable from the user device group 12 as exemplified above falls broadly into four types: "conversation", "operation input", "image", and "measurement information". These types are used below as appropriate to simplify the description. The situation detection unit 152 is configured with artificial intelligence so that it can integrate such diverse types of situation information and make an accurate judgment of the situation. To support this judgment, a situation history database 122 is provided in the background information database 120. The situation history database 122 holds, as past history data of the information presentation target person 10 or of third parties, a large amount of reference data in which situation information acquired by various user devices is associated with a description of the situation the user was in at the time.

For example, as history data from a case where a user consulted a lawyer about inheritance in the past, the terms "will", "lawsuit", and "defendant" detected from conversation by a smart speaker serving as a user device, the circumstance "several people in suits talking in a private room" detected by a camera likewise serving as a user device, and the description of the user's situation "the user is consulting a lawyer about inheritance" are held in the situation history database 122 as mutually associated reference data. Likewise, if the information presentation target person 10 works at a particular place B, such as a coffee shop, during time period A on weekdays, reference data associating "time period A" measured by the clock built into a smartphone or the like and "place B" measured by a GPS sensor with the description "at work" of the person's situation is held in the situation history database 122.
The situation detection unit 152 searches the situation history database 122 based on the group of situation information acquired from the user device group 12 and finds reference data containing a similar group of situation information. Since the "description of the situation the user was in" contained in the found reference data is very likely to represent the current situation of the information presentation target person 10, the situation detection unit 152 can detect the person's situation with high accuracy by referring to the situation history database 122. Alternatively, the situation detection unit 152 may be configured with artificial intelligence capable of machine learning and trained in advance using the reference data held in the situation history database 122 as training data. In that case, the trained situation detection unit 152 can judge the situation quickly without referring to the situation history database 122.
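The history lookup can be sketched as a nearest-match search over the reference data. The description only says the matching data is "similar", so the Jaccard overlap used here is an illustrative assumption, as are all names in the sketch.

```python
def jaccard(a, b):
    # Set overlap as a toy similarity measure between two groups of
    # situation information.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def detect_situation(observed, history):
    """history: list of (situation_info_set, description) reference pairs.
    Return the situation description attached to the reference data whose
    situation information is most similar to what was observed."""
    best = max(history, key=lambda ref: jaccard(observed, ref[0]))
    return best[1]
```

Using the two reference examples above, observing "will" and "lawsuit" would map to the inheritance-consultation description, while observing "time period A" and "place B" would map to "at work".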
The information presentation unit 161 presents the information input to the information input unit 110 to the information presentation target person 10 in accordance with the reading designated by the reading designation unit 141. Specifically, when the information presentation unit 161 presents the information, the reading designated by the reading designation unit 141 is assigned to each word extracted by the word/phrase extraction unit 131. When the information presentation unit 161 presents information to the information presentation target person 10 through the speaker 12F as an audio output unit (which may be a speaker built into the smartphone 12B or the like), it causes the speaker 12F to output speech that reads each extracted word aloud according to the designated reading.
When the information presentation unit 161 presents information to the information presentation target person 10 on the display 12E as a display unit (which may be a display built into the smartphone 12B or the like), it causes the display 12E to display the reading designated by the reading designation unit 141 for each word extracted by the word/phrase extraction unit 131. For example, if the reading "igon" (いごん) is designated for the word 遺言 ("will") in the example of FIG. 3 described later, the furigana いごん may be displayed above the kanji 遺言; the reading may be displayed in parentheses after the kanji, as in 遺言（いごん）; or only the hiragana reading いごん may be displayed instead of the kanji.
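The three display variants just mentioned (reading above the word, reading in parentheses, reading only) can be sketched as simple formatters. For the first variant, an HTML ruby annotation is used here as one possible rendering; the disclosure does not specify any particular markup, and all function names are illustrative.

```python
def ruby_html(word, reading):
    # Render the reading above the word using an HTML ruby annotation.
    return f"<ruby>{word}<rt>{reading}</rt></ruby>"

def parenthesized(word, reading):
    # 遺言（いごん） style: reading in full-width parentheses after the word.
    return f"{word}（{reading}）"

def reading_only(word, reading):
    # Show only the reading in place of the word.
    return reading
```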
The explanation acquisition unit 162 acquires an explanation of the information input to the information input unit 110. Specifically, it acquires an explanation of each word extracted by the word/phrase extraction unit 131 from the word/phrase database 132. For example, for the word "FTO" in the example of FIG. 3 described later, if the reading "freedom to operate" is designated by the reading designation unit 141, the explanation acquisition unit 162 acquires the corresponding explanation "being able to conduct business freely without infringing the intellectual property rights of third parties". The information presentation unit 161 presents the explanation acquired by the explanation acquisition unit 162 to the information presentation target person 10 together with the information input to the information input unit 110.

For example, the information presentation unit 161 causes the speaker 12F to read aloud "freedom to operate" and then to read aloud the explanation "being able to conduct business freely without infringing the intellectual property rights of third parties". Alternatively, the information presentation unit 161 may have the display 12E show the word extracted by the word/phrase extraction unit 131, the reading designated by the reading designation unit 141, and the explanation acquired by the explanation acquisition unit 162 together, as in "FTO (freedom to operate: being able to conduct business freely without infringing the intellectual property rights of third parties)"; or it may show the word and reading in the main text, as in "FTO (freedom to operate)*", and display the explanation outside the main text as a footnote or the like, as in "* Being able to conduct business freely without infringing the intellectual property rights of third parties".

The reading confirmation unit 170 comprises an inquiry unit 171 and a reply reception unit 172. When there are multiple reading candidates for a word extracted by the word/phrase extraction unit 131 and the reading designation unit 141 cannot settle on a reading even by referring to the background information acquired by the background information acquisition unit 150, the inquiry unit 171 asks the information presentation target person 10 for the reading. The inquiry to the information presentation target person 10 may be made by voice through the speaker 12F or by a display on the display 12E.
The reply reception unit 172 receives the reply of the information presentation target person 10 to the inquiry from the inquiry unit 171. The information presentation target person 10 can reply with the correct reading of the word in question using any input means of the user device group 12. For example, the person may reply by voice to a user device with speech recognition, type the correct reading as text on the screen of the smartphone 12B or the like, or, when the reading candidates are limited, select the correct reading on the screen of the smartphone 12B or the like. The reading designation unit 141 designates the reading of the word extracted by the word/phrase extraction unit 131 according to the reply received by the reply reception unit 172. Thereafter, as long as the background information acquired by the background information acquisition unit 150 does not change significantly, the reading designated here is used consistently. This prevents repeated inquiries to the information presentation target person 10 about the same word.
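The ask-once behavior, inquiring only when background information leaves more than one candidate and then reusing the confirmed answer, amounts to a small cache. In this illustrative sketch the `ask_user` callback stands in for the round trip through the inquiry unit 171 and reply reception unit 172; the cache structure is an assumption.

```python
confirmed_readings = {}  # word -> reading confirmed by the target person

def designate_reading(word, candidates, ask_user):
    """candidates: readings remaining after consulting background
    information. If a single candidate remains, use it; otherwise ask
    the person once and cache the answer so the same word never
    triggers a second inquiry."""
    if len(candidates) == 1:
        return candidates[0]
    if word not in confirmed_readings:
        confirmed_readings[word] = ask_user(word, candidates)
    return confirmed_readings[word]
```

Asking about 遺言 twice results in only one actual inquiry; a word with a single surviving candidate is never asked about at all.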
 図3は、語句データベース132における語句の登録例を示す。各語句について複数の読み方が登録されており、各読み方についてタイプ、典型的な利用シーン、説明が登録されている。タイプは、その読み方が使用される分野等の類型である。典型的な利用シーンは、その読み方が使用される典型的な場面であり、背景情報取得部150が取得する情報提示対象者10(または話し相手)の属性や置かれた状況に対応する。任意項目としての説明は、その読み方での語句の説明であり、説明取得部162が取得したものを情報提示部161が情報提示対象者10に提示する。以下、各語句について説明する。 FIG. 3 shows an example of word/phrase registration in the word/phrase database 132 . Multiple readings are registered for each word, and for each reading, a type, a typical usage scene, and an explanation are registered. The type is a type such as a field in which the reading is used. A typical usage scene is a typical scene in which the reading is used, and corresponds to the attribute and the situation of the information presentation target person 10 (or the conversation partner) acquired by the background information acquisition unit 150 . The description as an optional item is a description of a word or phrase in that reading, and the information presentation section 161 presents the information presentation target person 10 with the description acquired by the description acquisition section 162 . Each term will be described below.
 The phrase 「遺言」 (will) has two readings, "igon" and "yuigon". "Igon" is the reading used in the field of law. When the attributes or circumstances of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 match or resemble the typical usage scene "conversation among legal experts such as lawyers", the reading designation unit 141 designates the reading "igon" for the phrase 「遺言」. "Yuigon" is the reading commonly used outside the field of law. When the acquired attributes or circumstances do not resemble the typical usage scene of "igon", the reading designation unit 141 designates the reading "yuigon". Alternatively, the type may be estimated from the acquired attributes or circumstances, and the reading whose type matches the estimated type may be adopted.
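The selection rule in this example can be sketched as follows. This is a simplification under the assumption that background information is reduced to a set of keywords matched against the scene text; the patent does not specify the similarity judgment in this form, and the function and entry names are illustrative.

```python
# Hypothetical sketch: pick the reading whose typical usage scene matches
# the acquired background information; otherwise fall back to the reading
# used outside the field, as with "igon"/"yuigon" above.
def designate_reading(entries, background_keywords):
    for entry in entries:
        scene = entry.get("scene") or ""
        if any(keyword in scene for keyword in background_keywords):
            return entry["reading"]
    # no scene matched: prefer the general-purpose reading
    for entry in entries:
        if entry["type"] == "general":
            return entry["reading"]
    return entries[0]["reading"]

WILL_ENTRIES = [
    {"reading": "igon", "type": "law",
     "scene": "conversation among legal experts such as lawyers"},
    {"reading": "yuigon", "type": "general",
     "scene": "everyday conversation"},
]
```

With this sketch, `designate_reading(WILL_ENTRIES, ["lawyer"])` yields "igon", while background keywords unrelated to law fall through to "yuigon".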
 The phrase "FTO" has two readings, "freedom to operate" and "ef-tee-oh". "Freedom to operate" is the reading used in the field of intellectual property. When the attributes or circumstances of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 match or resemble the typical usage scene "review of third-party intellectual property rights in a business area", the reading designation unit 141 designates the reading "freedom to operate" for the phrase "FTO". At this time, the description acquisition unit 162 acquires the description "being able to do business freely without infringing the intellectual property rights of third parties" from the phrase database 132 and causes the information presentation unit 161 to present it as well. "Ef-tee-oh" is the letter-by-letter reading. When the acquired attributes or circumstances do not resemble the typical usage scene of "freedom to operate", the reading designation unit 141 designates the reading "ef-tee-oh" for the phrase "FTO".
 The phrase 「0」 appearing in an intermediate digit of a number is sometimes read as "tonde" and sometimes not read at all. For example, 「1010円」 may be read as "sen tonde ju en" or as "sen ju en". "Tonde" is a reading used in the field of finance. When the attributes or circumstances of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 match or resemble the typical usage scene "reading out amounts of money in the financial field", the reading designation unit 141 designates the reading "tonde" for 「0」. On the other hand, when the acquired attributes or circumstances do not resemble that typical usage scene, the reading designation unit 141 designates that 「0」 is not read at all.
 The phrase "KYC" has two readings, "know your customer" and "kay-why-see". "Know your customer" is the reading used in the field of commerce. When the attributes or circumstances of the information presentation target person 10 or the conversation partner acquired by the background information acquisition unit 150 match or resemble the typical usage scene "identity verification such as when opening a bank account", the reading designation unit 141 designates the reading "know your customer" for the phrase "KYC". At this time, the description acquisition unit 162 acquires the description "customer verification required when starting a commercial transaction with a customer" from the phrase database 132 and causes the information presentation unit 161 to present it as well. "Kay-why-see" is the letter-by-letter reading. When the acquired attributes or circumstances do not resemble the typical usage scene of "know your customer", the reading designation unit 141 designates the reading "kay-why-see" for the phrase "KYC".
 The phrase 「三田」 has two readings, "Mita" and "Sanda". "Mita" is a name common in the Kanto region of Japan, while "Sanda" is a name common in the Kansai region, so the reading designation unit 141 is unlikely to be able to determine the reading even by referring to the background information acquired by the background information acquisition unit 150. Therefore, an "inquiry required" flag is entered in the type column so that the reading confirmation unit 170 asks the information presentation target person 10 for the reading. When this flag is set, the reading confirmation unit 170 in principle inquires of the information presentation target person 10, except in the exceptional case where the reading designation unit 141 was able to determine the reading from the background information. The reading designation unit 141 then designates the reading according to the target person's reply to the inquiry.
 FIG. 4 is a flowchart showing the processing of the information processing device 1 according to the first embodiment. In the flowchart, "S" means "step". In S1, information to be presented to the information presentation target person 10 is input to the information input unit 110. In S2, the phrase extraction unit 131 searches the phrase database 132 for each phrase contained in the information input in S1. In S3, the phrase extraction unit 131 determines whether the phrase searched for in S2 was found (hit) in the phrase database 132. If there is no hit, the information input in S1 contains no phrase requiring attention to its reading, so the process proceeds to S10, where the information presentation unit 161 presents the information input in S1 to the information presentation target person 10 as-is. If there is a hit, the phrase extraction unit 131 extracts the phrase and the process proceeds to S4.
 In S4, the background information acquisition unit 150 acquires the background information (attributes and/or circumstances) of the information presentation target person 10 or the conversation partner. In S5, the reading designation unit 141 determines whether it is necessary to ask the information presentation target person 10 for the reading of the phrase extracted in S3. If the reading designation unit 141 can designate the reading based on the background information acquired in S4, the process proceeds to S8 without querying the target person. If the reading cannot be designated even by referring to the acquired background information, or if the "inquiry required" flag is set as in the 「三田」 example of FIG. 3, the reading designation unit 141 determines that an inquiry to the information presentation target person 10 is necessary and the process proceeds to S6.
 In S6, the inquiry unit 171 asks the information presentation target person 10 for the reading of the phrase determined in S5 to require an inquiry. In S7, the reply receiving unit 172 receives the target person's reply to the inquiry of S6. In S8, the reading designation unit 141 designates the reading of the phrase extracted in S3 by referring to the background information acquired in S4; for a phrase whose reading was queried in S6, the reading designation unit 141 designates the reading according to the reply received in S7. In S9, the description acquisition unit 162 acquires the description registered in the phrase database 132 for the phrase whose reading was designated in S8. In S10, the information presentation unit 161 presents the information input in S1 to the information presentation target person 10 according to the reading designated in S8; for a phrase whose description was acquired in S9, that description is presented together with the input information.
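Under the same simplifying assumptions as before (phrases matched by substring, background information reduced to keywords), the S1–S10 flow can be sketched compactly. All names are illustrative, and `ask_user` stands in for the inquiry unit 171 and reply receiving unit 172.

```python
# Compact sketch of the FIG. 4 flow (S1-S10); hypothetical names.
def designate_readings(text, phrase_db, background_keywords, ask_user):
    readings = {}
    for phrase, entries in phrase_db.items():
        if phrase not in text:                      # S2/S3: search, extract
            continue
        reading = None
        for entry in entries:                       # S5/S8: use background
            scene = entry.get("scene") or ""
            if any(k in scene for k in background_keywords):
                reading = entry["reading"]
                break
        if reading is None:                         # S6/S7: ask the user
            reading = ask_user(phrase, [e["reading"] for e in entries])
        readings[phrase] = reading                  # applied at S10
    return readings

DB = {"三田": [{"reading": "mita", "scene": "Kanto region"},
              {"reading": "sanda", "scene": "Kansai region"}]}
```

For example, with background keywords containing "Kanto region" the reading "mita" is designated without any inquiry, while with no usable background the supplied `ask_user` callback is consulted instead.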
 According to the first embodiment described above, by referring to the background information about the information presentation target person 10, information can be presented with a reading appropriate to the target person's attributes and circumstances.
 FIG. 5 is a functional block diagram of the information processing device 1 according to the second embodiment. Components equivalent to those of the foregoing embodiment are given the same reference numerals and their description is omitted. The presentation mode designation unit 142 refers to the background information about the information presentation target person 10 acquired by the background information acquisition unit 150 and designates the presentation mode of the information input via the information input unit 110. As illustrated in FIG. 6 described later, the presentation modes registered in the presentation mode database 133 accessible to the presentation mode designation unit 142 include at least one of the speed at which the input information is presented, the amount of input information presented, the tone in which the input information is presented, and the voice with which the input information is read aloud. The information presentation unit 161 presents the information input via the information input unit 110 to the information presentation target person 10 according to the presentation mode designated by the presentation mode designation unit 142.
 FIG. 6 shows an example of presentation mode registration in the presentation mode database 133. The presentation mode database 133 is configured as a table associating the background information (attributes/circumstances) of the information presentation target person 10 acquired by the background information acquisition unit 150 with presentation modes (speed/amount of information/tone/volume/voice). FIG. 6 shows a simple example, but an elaborate table can be constructed from countless combinations of background information elements and presentation mode elements. Alternatively, the presentation mode designation unit 142 may be implemented with machine-learnable artificial intelligence and trained in advance using the table held in the presentation mode database 133 as training data. In that case, the presentation mode designation unit 142 can not only quickly handle cases that match the table but also flexibly handle cases that do not, learning autonomously as it goes. Some examples from FIG. 6 are described below.
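The table lookup can be sketched as follows. The attribute keys and mode values are illustrative stand-ins for the rows of FIG. 6, and the merge-on-update behavior for combined backgrounds is an assumption, not something the patent prescribes.

```python
# Hypothetical sketch of the presentation mode table of FIG. 6:
# background attributes/circumstances map to presentation parameters.
MODE_TABLE = {
    "minor":     {"speed": "slow", "amount": "small",
                  "tone": "gentle", "voice": "female"},
    "night":     {"speed": "slow", "amount": "small",
                  "tone": "gentle", "volume": "quiet"},
    "emergency": {"speed": "fast", "amount": "small",
                  "tone": "stern", "volume": "loud"},
}

def designate_mode(background_keys):
    """Merge the rows matching the acquired background information;
    later keys override earlier ones."""
    mode = {}
    for key in background_keys:
        mode.update(MODE_TABLE.get(key, {}))
    return mode
```

A learned model, as the text suggests, could replace this exact-match lookup while keeping the same input (background keys) and output (a mode dictionary).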
 When the attribute of the information presentation target person 10 is "minor", the information processing capacity is assumed to be lower than that of an adult, so the presentation speed is set to "slow" and the amount of information to "small". To slow the presentation speed, the read-aloud speed is lowered when information is presented through the speaker 12F, and the display speed is lowered when it is presented on the display 12E. To reduce the amount of information, the information input via the information input unit 110 is edited, for example by replacing complex, difficult wording with simple, plain wording and by deleting redundant wording whose removal does not change the meaning. Also, when the attribute is "minor", the tone or nuance of presentation is preferably set to "gentle" and the voice to "female" so as not to cause needless fear. To make the tone gentle, the read-aloud tone is softened when the speaker 12F is used; when the display 12E is used, the expression is softened, for example by using more hiragana than kanji and by using the polite "desu/masu" style rather than the plain "da/dearu" style.
 When the attribute of the information presentation target person 10 is "elderly person", essentially the same presentation mode as for "minor" is considered suitable, but because hearing and eyesight may be impaired, the presentation volume of the speaker 12F is set to "loud", and when information is presented on the display 12E, the character display size is set to "large". When the attribute is "person whose native language is not the presentation language", the information processing capacity in the presentation language is assumed to be lower, so, as with "minor" and "elderly person", the presentation speed is set to "slow" and the amount of information to "small". When the attribute is "person from a specific region", the information may be presented interwoven with the accent or dialect of that region.
 When the circumstance of the information presentation target person 10 is "nighttime", the presentation speed is set to "slow", the amount of information to "small", the tone to "gentle", and the volume to "quiet" so as not to disturb a relaxed atmosphere or create noise for family members or neighbors. When the circumstance is "working/studying", the amount of information may be "large" without problems because active brain activity heightens information processing capacity, but the volume is kept "quiet" so as not to break concentration. In a situation where the target person's concentration is especially high, presentation of anything other than highly important or urgent information may be temporarily withheld so as not to disturb that concentration. When the circumstance is "in conversation", the presentation speed is set to "slow" and the amount of information to "small" so that even a target person in mid-conversation can grasp the main points, and the volume is kept "quiet" so as not to interrupt the ongoing conversation.
 When the circumstance of the information presentation target person 10 is "emergency", only the information necessary to deal with the emergency must be conveyed concisely and reliably, so the presentation speed is set to "fast", the amount of information to "small", the tone to "stern", and the volume to "loud". When the circumstance is "fatigued/unwell", the presentation speed is set to "slow" and the amount of information to "small" so that a target person with reduced information processing capacity can still understand, and the tone is set to "gentle" and the volume to "quiet" so as not to disturb the target person's rest. When the circumstance is "noisy environment", the presentation speed is set to "slow", the amount of information to "small", and the volume to "loud" so that the target person can grasp the main points despite the noise.
 When the circumstance of the information presentation target person 10 is "at a specific place" or "participating in a specific event", the presentation mode may be determined per place or per event. For example, information may be presented to the target person using terms, expressions, accents, dialects, character voices, event information, and the like specific to the place or event. Alternatively, the weather around the target person can be predicted from temperature and humidity information, or the forecast for that place and time can be obtained from a predetermined weather forecast server based on date/time and location information, and information can then be presented in accordance with the weather: for example, a mellow voice when it is raining, a crisp voice when it is sunny, and an urgent voice in the middle of a typhoon.
 FIG. 7 is a flowchart showing the processing of the information processing device 1 according to the second embodiment. In S11, information to be presented to the information presentation target person 10 is input to the information input unit 110. In S12, the background information acquisition unit 150 acquires the background information (attributes and/or circumstances) of the information presentation target person 10 or the conversation partner. In S13, the presentation mode designation unit 142 designates the presentation mode of the information input in S11 by referring to the background information acquired in S12 and the presentation mode database 133. In S14, the information presentation unit 161 presents the information input in S11 to the information presentation target person 10 according to the presentation mode designated in S13.
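Applying a designated mode at S14 can be sketched minimally as follows. The truncation is a crude stand-in for condensing the information, and the words-per-minute mapping is a hypothetical parameter for a text-to-speech engine; neither is specified by the patent.

```python
# Hypothetical sketch of S13-S14: apply the designated presentation
# mode when handing text to an output device.
def present(text, mode):
    if mode.get("amount") == "small":
        text = text[:80]  # crude stand-in for condensing the information
    rate = {"slow": 90, "fast": 180}.get(mode.get("speed"), 130)
    return {"text": text, "words_per_minute": rate}
```

In practice the "amount" reduction would be the editing described earlier (replacing difficult wording, deleting redundancy) rather than simple truncation.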
 According to the second embodiment described above, by referring to the background information about the information presentation target person 10, information can be presented in a mode appropriate to the target person's attributes and circumstances.
 FIG. 8 is a functional block diagram of the information processing device 1 according to the third embodiment. Components equivalent to those of the foregoing embodiments are given the same reference numerals and their description is omitted. Information on minutes to be presented to the information presentation target person 10 is input to the information input unit 110. The phrase extraction unit 131 extracts phrases from the minutes information input to the information input unit 110. Specifically, the phrase extraction unit 131 searches the phrase database 132 for each phrase contained in the input minutes information. Phrases whose presentation mode should be changed according to the background information or attributes of the information presentation target person 10 are registered in advance in the phrase database 132, and the phrase extraction unit 131 extracts the phrases hit by the search and provides them to the downstream minutes processing unit 143.
 The minutes processing unit 143 refers to the background information about the information presentation target person 10 held in the information presentation target person database 153 and processes the minutes information input to the information input unit 110 to suit the target person. Specifically, the minutes processing unit 143 changes the presentation mode of the phrases extracted by the phrase extraction unit 131 in accordance with the target person's background information. The information presentation unit 161 presents the minutes processed by the minutes processing unit 143 to the information presentation target person 10. Typically, an electronic file of the minutes processed for each target person, or an e-mail whose body contains the minutes processed for that person, is sent to each information presentation target person 10 individually.
 FIG. 9 shows an example of phrase registration in the phrase database 132. For each of the phrases "AAA" to "GGG", a type, a disclosable range, an already-known range, and a description to be added when disclosing outside the already-known range are registered. The type is the category of each phrase, for example "company name", "business name", "product name", "service name", "project name", "technology name", or "organization name". The disclosable range is the range within which each phrase may be disclosed, and is specified by affiliation such as company or business division, title such as president or general manager, membership in a project or task force, and so on. As described later, when minutes to be provided to an information presentation target person 10 contain a phrase outside that person's disclosable range, the minutes processing unit 143 applies processing or concealment, such as deleting the phrase or rephrasing it with a disclosable phrase.
 The already-known range is the range within which each phrase is already known and, like the disclosable range, is specified by affiliation, title, membership, and so on. The description to be added when disclosing outside the already-known range is a supplementary description that the description acquisition unit 162 adds when minutes are provided to an information presentation target person 10 who is inside the disclosable range but outside the already-known range, typically information outlining the phrase. Such outlines are not included in the minutes for target persons within the already-known range, to whom they are already familiar, but are included for target persons outside that range, whom they help to understand the unknown phrase.
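The per-phrase rule described here can be sketched as follows. This is a simplification in which the two ranges are represented as sets of organization names (titles and memberships omitted); the field names and placeholder text are assumptions for illustration.

```python
# Hypothetical sketch of the FIG. 9 rule: conceal phrases outside the
# recipient's disclosable range; append the supplementary description
# for recipients inside the disclosable range but outside the
# already-known range.
def render_phrase(phrase, entry, recipient_orgs):
    disclosable = (entry["disclosable"] is None            # no restriction
                   or bool(recipient_orgs & entry["disclosable"]))
    if not disclosable:
        return "[concealed]"
    known = bool(recipient_orgs & entry["known"])
    if not known and entry.get("note"):
        return f"{phrase} ({entry['note']})"
    return phrase

DDD = {"disclosable": {"AAA Corp/BBB division managers"},
       "known": set(), "note": "outline of the DDD service"}
```

Applied to the "DDD" example below, a recipient inside the disclosable range receives the phrase with its outline appended, while a recipient outside it receives a concealed placeholder (in the text, deletion or paraphrase).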
 FIG. 10 shows an example of background information registration in the information presentation target person database 153. Each information presentation target person 10 is listed as a minutes recipient, and an external flag, an affiliation, and a title are registered for each recipient.
 The external flag indicates that a minutes recipient is a person outside the company. The illustrated example is the information presentation target person database 153 built by "AAA Corporation", and the external flag is set for the minutes recipient "F", who belongs to "XXX Corporation" outside "AAA Corporation". When sending minutes to the externally flagged recipient "F", each confidential phrase registered in the phrase database 132 (each phrase whose disclosable range is limited to "AAA Corporation") must be carefully processed (or concealed). This processing is performed by the minutes processing unit 143, but a rule may also be adopted whereby, before the information presentation unit 161 sends the processed minutes to the externally flagged recipient "F", a person in charge at "AAA Corporation" manually performs a final check that no confidential information remains in the minutes. In this case, the minutes processing unit 143 or the information presentation unit 161 can decide, according to the presence or absence of the external flag, whether the minutes need to be routed to the person in charge of the final check.
 Next, specific processing according to the table of FIG. 10 will be described for some of the phrases of FIG. 9. The phrases "AAA" and "BBB" in FIG. 9 have no disclosure restriction, so the minutes processing unit 143 applies no processing for any of the minutes recipients "A" to "F". Moreover, both phrases are already known within "AAA Corporation", so no supplementary description of them is added to the minutes sent to the recipients "A" to "E", who belong to "AAA Corporation". On the other hand, supplementary descriptions of these phrases (an outline of AAA Corporation or an outline of the BBB business) are added to the minutes sent to the recipient "F", who belongs to "XXX Corporation".
 The phrase "DDD" in Fig. 9 has a disclosure range of "AAA Corporation / BBB division, department manager or above", so the minutes processing unit 143 applies concealment processing, such as deleting the phrase "DDD", in the minutes sent to recipients "D" to "F", who fall outside that range. Since "DDD" is not yet well known, a supplementary explanation of the phrase (an overview of the DDD service) is added to the minutes sent to recipients "A" to "C", who are within the disclosure range.
 The phrase "GGG" in Fig. 9 has a disclosure range of "AAA Corporation", so in the minutes sent to recipient "F", who is outside that range, the minutes processing unit 143 applies processing such as rephrasing "GGG" into another phrase that may be disclosed. The already-known range of "GGG" is only "AAA Corporation / GGG office", so a supplementary explanation of the phrase (an overview of the GGG office) is added to the minutes sent to recipients "A" to "E", all of whom are outside that range.
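The per-phrase decisions of Figs. 9 and 10 can be summarized as two checks: redact when the recipient is outside the disclosure range, otherwise attach an explanation when the recipient is outside the already-known range. The following sketch is illustrative only; the database entries, scope encodings, and explanation strings are assumptions standing in for the phrase database 132.

```python
# Hypothetical phrase database: disclosure/known ranges are modeled as sets
# of recipient IDs rather than organization/title strings, for brevity.
PHRASES = {
    "DDD": {"disclosure": {"A", "B", "C"},            # BBB division managers and above
            "known": set(),                            # not yet well known to anyone
            "explanation": "outline of the DDD service"},
    "GGG": {"disclosure": {"A", "B", "C", "D", "E"},   # all of AAA Corporation
            "known": set(),                            # only the GGG office knows it
            "explanation": "outline of the GGG office"},
}

def process_phrase(phrase: str, recipient: str):
    """Return ("REDACT", None) when the phrase must be concealed for this
    recipient, or ("KEEP", explanation-or-None) otherwise."""
    entry = PHRASES[phrase]
    if recipient not in entry["disclosure"]:
        return ("REDACT", None)                        # delete or reword the phrase
    if recipient not in entry["known"]:
        return ("KEEP", entry["explanation"])          # add a supplementary explanation
    return ("KEEP", None)
```

For example, recipient "D" is outside the disclosure range of "DDD" and gets a redaction, while recipient "A" keeps the phrase with an explanation attached.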
 Fig. 11 is a flowchart showing the processing of the information processing apparatus 1 according to the third embodiment. In S15, the minutes information to be presented to the information presentation target persons 10 is input to the information input unit 110. In S16, the phrase extraction unit 131 searches the phrase database 132 for each phrase contained in the minutes input in S15. In S17, the phrase extraction unit 131 determines whether a phrase searched for in S16 was found (hit) in the phrase database 132. If there is no hit, the information input in S15 contains no phrase requiring attention when sending the minutes to the information presentation target persons 10, so the process proceeds to S25 and the information presentation unit 161 sends the minutes input in S15 to the information presentation target persons 10 as they are. If there is a hit, the phrase extraction unit 131 extracts the phrase and the process proceeds to S18.
 In S18, the minutes processing unit 143 designates one of the minutes recipients "A" to "F" registered in the information presentation target person database 153. In S19, the minutes processing unit 143 acquires from the information presentation target person database 153 the background information, such as the outside-company flag, affiliated organization, and title, of the recipient designated in S18. In S20, the minutes processing unit 143 searches the phrase database 132 on the basis of the background information acquired in S19 and determines whether any phrase falls outside the disclosure range. If such a phrase exists, the process proceeds to S21, where the minutes processing unit 143 deletes or revises that phrase.
 In S22, the minutes processing unit 143 searches the phrase database 132 on the basis of the background information acquired in S19 and determines whether any phrase falls outside the already-known range. If such a phrase exists, the process proceeds to S23, where the explanation acquisition unit 162 adds a supplementary explanation of that phrase. In S24, the minutes processing unit 143 determines whether all the minutes recipients "A" to "F" have been designated in S18. If an undesignated recipient remains, the process returns to S18, a new recipient is designated, and S19 to S24 are repeated. When S18 to S23 have been completed for all the recipients "A" to "F", the process proceeds to S25, where the information presentation unit 161 sends the minutes, processed for each recipient, to the respective recipients.
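The S15 to S25 loop can be sketched end to end as follows. This is a hedged, minimal model, not the claimed implementation: the phrase database, recipient list, and redaction marker are invented for illustration, and the step numbers in the comments map each branch back to the flowchart.

```python
# Hypothetical phrase database and recipient list (stand-ins for 132 and 153).
PHRASE_DB = {
    "DDD": {"disclosure": {"A", "B", "C"}, "known": set(),
            "note": "DDD: outline of the DDD service"},
}
RECIPIENTS = ["A", "D"]

def process_minutes(text: str) -> dict:
    """Return a per-recipient version of the minutes."""
    hits = [p for p in PHRASE_DB if p in text]       # S16-S17: search for phrases
    if not hits:
        return {r: text for r in RECIPIENTS}         # S25: send unmodified
    out = {}
    for r in RECIPIENTS:                             # S18/S24: loop over recipients
        doc, notes = text, []
        for p in hits:
            entry = PHRASE_DB[p]                     # S19-S20: check background vs. ranges
            if r not in entry["disclosure"]:
                doc = doc.replace(p, "[redacted]")   # S21: delete/revise the phrase
            elif r not in entry["known"]:
                notes.append(entry["note"])          # S23: attach an explanation
        out[r] = doc + ("\n" + "\n".join(notes) if notes else "")
    return out
```

Running it on a line containing "DDD" yields a redacted copy for recipient "D" and an annotated copy for recipient "A".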
 According to the third embodiment described above, by referring to the background information about the information presentation target persons 10, the minutes can be processed appropriately in accordance with each person's attributes and situation. The technical idea of the third embodiment may also be applied to the first or second embodiment. That is, when the input information is presented to an information presentation target person by voice or text, the phrase database may be consulted to determine whether each phrase is within the disclosure range and within the already-known range, and the output voice or text may be changed according to the determination results.
 Fig. 12 is a functional block diagram of the information processing apparatus 1 according to the fourth embodiment. Components equivalent to those of the foregoing embodiments are given the same reference signs and their description is omitted. The privacy determination unit 144 refers to the background information about the information presentation target person 10 acquired by the background information acquisition unit 150 and determines whether privacy protection is necessary. The criterion for this determination can be set arbitrarily on the basis of the attributes of the information presentation target person 10 acquired by the attribute acquisition unit 151 and the situation of the information presentation target person 10 detected by the situation detection unit 152.
 For example, if the attribute of the information presentation target person 10 is "celebrity" and the situation is "in a public place crowded with an unspecified number of people", such as a hospital or a government office, it is determined that the privacy of the celebrity must be protected. Likewise, if the attribute is "belongs to AAA Corporation" and the situation is "persons not belonging to AAA Corporation are nearby", it is determined that the privacy of the person or of AAA Corporation must be protected. Protecting the privacy of AAA Corporation here means, as in the third embodiment, concealing confidential information about AAA Corporation's businesses, products, services, projects, technologies, organization, and the like so that it does not leak outside the company.
 When the privacy determination unit 144 determines that privacy protection is necessary, the concealment processing unit 145 applies concealment processing to at least part of the information input to the information input unit 110. In the first example above, where the information presentation target person 10 is a celebrity, the concealment processing unit 145 deletes personal information that could identify the person, or replaces it with other information from which the person cannot be identified. For example, when the information presentation target person 10 is called by automated voice from the speaker 12F functioning as the information presentation unit 161 at a hospital or government office, the call would normally use the real name, such as "Mr. AAA, please come to the counter". Instead, the real name is concealed and the call becomes, for example, "Would the gentleman who checked in around 11 o'clock regarding BBB please come to the counter". Alternatively, by registering in advance a nickname (CCC) different from the person's real name (AAA), the call may be "Mr. CCC, please come to the counter".
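The announcement example above can be sketched as a small substitution rule: use the real name when no protection is needed, otherwise prefer a pre-registered nickname and fall back to a non-identifying description. The profile fields and phrasing below are assumptions for illustration, not the patented behavior of unit 145.

```python
def make_announcement(profile: dict, needs_privacy: bool) -> str:
    """Compose a counter call, concealing the real name when required."""
    if not needs_privacy:
        return f"{profile['real_name']}, please come to the counter."
    # Prefer a pre-registered nickname; otherwise use a description
    # from which the person cannot be identified.
    alias = profile.get("nickname") or profile.get(
        "description", "the next visitor")
    return f"{alias}, please come to the counter."
```

With a registered nickname the call becomes "CCC, please come to the counter."; with neither nickname nor description, the generic fallback still avoids the real name.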
 Similarly, in the second example above, where the information presentation target person 10 belongs to AAA Corporation, the concealment processing unit 145 applies concealment processing such as deleting, or replacing with other information, any information that could identify the person, AAA Corporation, or AAA Corporation's businesses, products, services, projects, technologies, organization, and the like. The information presentation unit 161 presents the information processed by the concealment processing unit 145 to the information presentation target person 10.
 Fig. 13 is a flowchart showing the processing of the information processing apparatus 1 according to the fourth embodiment. In S26, the information to be presented to the information presentation target person 10 is input to the information input unit 110. In S27, the background information acquisition unit 150 acquires the background information (attributes and/or situation) of the information presentation target person 10. In S28, the privacy determination unit 144 refers to the background information acquired in S27 and determines whether privacy protection is necessary. In S29, if privacy protection was determined to be necessary in S28, the concealment processing unit 145 applies concealment processing to at least part of the information input in S26. In S30, the information presentation unit 161 presents the information processed in S29 to the information presentation target person 10.
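The S26 to S30 pipeline can be sketched as three small functions: a determination rule, a concealment step, and a presentation step that chains them. Every rule and string below is an illustrative assumption; in particular, the celebrity/crowd criterion merely echoes the example given for unit 144 and could be any configured policy.

```python
def privacy_needed(background: dict) -> bool:
    """S28 (privacy determination unit 144): an example criterion based on
    attribute and detected situation."""
    return (background.get("attribute") == "celebrity"
            and background.get("situation") == "crowded public place")

def anonymize(text: str, real_name: str) -> str:
    """S29 (concealment processing unit 145): replace identifying info."""
    return text.replace(real_name, "[anonymous]")

def present(text: str, background: dict, real_name: str) -> str:
    """S30 (information presentation unit 161): present, concealing first
    when the determination in S28 requires it."""
    if privacy_needed(background):
        text = anonymize(text, real_name)
    return text
```

A celebrity in a crowded place is called anonymously; in all other cases the input information passes through unchanged.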
 According to the fourth embodiment described above, by referring to the background information about the information presentation target person 10, privacy can be protected appropriately in accordance with the person's attributes and situation.
 The present invention has been described above on the basis of embodiments. The embodiments are illustrative; it will be understood by those skilled in the art that various modifications of the combinations of their components and processing steps are possible, and that such modifications are also within the scope of the present invention.
 The functional configuration of each apparatus described in the embodiments can be realized by hardware resources, by software resources, or by the cooperation of hardware and software resources. Processors, ROM, RAM, and other LSIs can be used as hardware resources; programs such as operating systems and applications can be used as software resources.
 The present invention relates to information processing technology.
 1 information processing apparatus, 10 information presentation target person, 12 user device group, 110 information input unit, 120 background information database, 121 attribute information database, 122 situation history database, 12E display, 12F speaker, 131 phrase extraction unit, 132 phrase database, 133 presentation mode database, 141 reading designation unit, 142 presentation mode designation unit, 143 minutes processing unit, 144 privacy determination unit, 145 concealment processing unit, 150 background information acquisition unit, 151 attribute acquisition unit, 152 situation detection unit, 153 information presentation target person database, 161 information presentation unit, 162 explanation acquisition unit, 170 reading confirmation unit, 171 inquiry unit, 172 reply reception unit.

Claims (21)

  1.  An information processing apparatus comprising:
     an information input unit into which information is input;
     a reading designation unit that designates, by referring to background information about an information presentation target person, how the input information is to be read; and
     an information presentation unit that presents the input information to the information presentation target person in accordance with the designated reading.
  2.  The information processing apparatus according to claim 1, wherein the information presentation unit causes a voice output unit to output a voice that reads out the input information in accordance with the designated reading.
  3.  The information processing apparatus according to claim 1 or 2, wherein the information presentation unit causes a display to display the designated reading together with the input information.
  4.  The information processing apparatus according to any one of claims 1 to 3, further comprising an inquiry unit that, when there are multiple candidate readings of the input information, inquires of the information presentation target person about the reading,
     wherein the reading designation unit designates how the input information is to be read in accordance with the information presentation target person's reply to the inquiry.
  5.  An information processing apparatus comprising:
     an information input unit into which information is input;
     a presentation mode designation unit that designates, by referring to background information about an information presentation target person, a presentation mode of the input information; and
     an information presentation unit that presents the input information to the information presentation target person in accordance with the designated presentation mode.
  6.  The information processing apparatus according to claim 5, wherein the presentation mode includes at least one of a speed at which the input information is presented, an amount of the input information that is presented, a tone in which the input information is presented, and a voice in which the input information is read out.
  7.  An information processing apparatus comprising:
     an information input unit into which information of minutes is input;
     a minutes processing unit that processes, by referring to background information about an information presentation target person, the input information of the minutes in accordance with the information presentation target person; and
     an information presentation unit that presents the processed information of the minutes to the information presentation target person.
  8.  An information processing apparatus comprising:
     an information input unit into which information is input;
     a privacy determination unit that determines, by referring to background information about an information presentation target person, whether privacy protection is necessary;
     a concealment processing unit that applies concealment processing to at least part of the input information when privacy protection is determined to be necessary; and
     an information presentation unit that presents the concealed information to the information presentation target person.
  9.  The information processing apparatus according to any one of claims 1 to 8, further comprising a situation detection unit that detects, as background information, the situation in which the information presentation target person is placed.
  10.  The information processing apparatus according to claim 9, wherein the situation detection unit detects the situation by analyzing conversation of the information presentation target person.
  11.  The information processing apparatus according to any one of claims 1 to 10, further comprising an attribute acquisition unit that acquires, as background information, attributes of the information presentation target person.
  12.  The information processing apparatus according to claim 11, wherein the attributes include at least one of age, gender, birthplace, nationality, professional qualification, occupation, and affiliated organization.
  13.  The information processing apparatus according to any one of claims 1 to 12, further comprising an explanation acquisition unit that acquires an explanation of the input information,
     wherein the information presentation unit presents the acquired explanation together with the input information.
  14.  An information processing method comprising:
     an information input step in which information is input;
     a reading designation step of designating, by referring to background information about an information presentation target person, how the input information is to be read; and
     an information presentation step of presenting the input information to the information presentation target person in accordance with the designated reading.
  15.  An information processing program causing a computer to execute:
     an information input step in which information is input;
     a reading designation step of designating, by referring to background information about an information presentation target person, how the input information is to be read; and
     an information presentation step of presenting the input information to the information presentation target person in accordance with the designated reading.
  16.  An information processing method comprising:
     an information input step in which information is input;
     a presentation mode designation step of designating, by referring to background information about an information presentation target person, a presentation mode of the input information; and
     an information presentation step of presenting the input information to the information presentation target person in accordance with the designated presentation mode.
  17.  An information processing program causing a computer to execute:
     an information input step in which information is input;
     a presentation mode designation step of designating, by referring to background information about an information presentation target person, a presentation mode of the input information; and
     an information presentation step of presenting the input information to the information presentation target person in accordance with the designated presentation mode.
  18.  An information processing method comprising:
     an information input step in which information of minutes is input;
     a minutes processing step of processing, by referring to background information about an information presentation target person, the input information of the minutes in accordance with the information presentation target person; and
     an information presentation step of presenting the processed information of the minutes to the information presentation target person.
  19.  An information processing program causing a computer to execute:
     an information input step in which information of minutes is input;
     a minutes processing step of processing, by referring to background information about an information presentation target person, the input information of the minutes in accordance with the information presentation target person; and
     an information presentation step of presenting the processed information of the minutes to the information presentation target person.
  20.  An information processing method comprising:
     an information input step in which information is input;
     a privacy determination step of determining, by referring to background information about an information presentation target person, whether privacy protection is necessary;
     a concealment processing step of applying concealment processing to at least part of the input information when privacy protection is determined to be necessary; and
     an information presentation step of presenting the concealed information to the information presentation target person.
  21.  An information processing program causing a computer to execute:
     an information input step in which information is input;
     a privacy determination step of determining, by referring to background information about an information presentation target person, whether privacy protection is necessary;
     a concealment processing step of applying concealment processing to at least part of the input information when privacy protection is determined to be necessary; and
     an information presentation step of presenting the concealed information to the information presentation target person.
PCT/JP2021/014513 2021-04-05 2021-04-05 Information processing device, information processing method, and information processing program WO2022215120A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/014513 WO2022215120A1 (en) 2021-04-05 2021-04-05 Information processing device, information processing method, and information processing program

Publications (1)

Publication Number Publication Date
WO2022215120A1 true WO2022215120A1 (en) 2022-10-13

Family

ID=83546295


Country Status (1)

Country Link
WO (1) WO2022215120A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002207728A (en) * 2001-01-12 2002-07-26 Fujitsu Ltd Phonogram generator, and recording medium recorded with program for realizing the same
JP2008139969A (en) * 2006-11-30 2008-06-19 Fuji Xerox Co Ltd Conference minutes generation device, conference information management system, and program
JP2011102910A (en) * 2009-11-11 2011-05-26 Nippon Telegr & Teleph Corp <Ntt> Voice reading method reflecting acoustic sense characteristic, device thereof, and program
JP2013065284A (en) * 2011-08-11 2013-04-11 Apple Inc Method for removing ambiguity of multiple readings in language conversion
JP2016122183A (en) * 2014-12-09 2016-07-07 アップル インコーポレイテッド Disambiguating heteronyms in speech synthesis
JP2019008477A (en) * 2017-06-22 2019-01-17 富士通株式会社 Discrimination program, discrimination device and discrimination method
WO2020110744A1 (en) * 2018-11-28 2020-06-04 ソニー株式会社 Information processing device, information processing method, and program
JP2020149628A (en) * 2019-03-15 2020-09-17 エヌ・ティ・ティ・コミュニケーションズ株式会社 Information processing device, information processing method and program



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21935928

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21935928

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP