WO2023134069A1 - Method, Device, and Readable Storage Medium for Identifying Entity Relationships - Google Patents

Method, Device, and Readable Storage Medium for Identifying Entity Relationships

Info

Publication number
WO2023134069A1
WO2023134069A1 · PCT/CN2022/089938
Authority
WO
WIPO (PCT)
Prior art keywords
entity
relationship
vector
word
loss function
Prior art date
Application number
PCT/CN2022/089938
Other languages
English (en)
French (fr)
Inventor
杨坤
王燕蒙
王少军
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2023134069A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3346Query execution using probabilistic model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3347Query execution using vector based model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the present application relates to the technical field of artificial intelligence, and in particular to a method, device, electronic equipment, and computer-readable storage medium for identifying entity relationships.
  • The knowledge graph is an important research area within NLP, and it aims to enable more intelligent search engines. With the development of the technology, it can be applied to scenarios such as intelligent search, intelligent question answering, and personalized recommendation, so how to build a knowledge graph has become a hot topic in the NLP field.
  • The first step in building a knowledge graph is to extract information from text, and a key technology for information extraction is relation extraction. After the discrete entities in the text are identified, the relationship between each pair of entities must be identified. Since multiple relationships may hold between two entities, this is a multi-label classification problem.
  • The current method for document-level relation extraction modifies the loss function with a dynamic threshold. For each sample there is a threshold value: after the model gives the probability that an entity pair belongs to each relationship, it also gives a dynamic threshold, and the relationship categories scoring above this threshold are taken as the categories of that entity pair.
  • Accordingly, there is an urgent need for a method for identifying entity relationships.
  • The present application provides an entity relationship recognition method, device, electronic equipment, and computer-readable storage medium, the main purpose of which is to train the entity relationship recognition model cyclically through a constructed loss function, so as to solve the problem of poor recognition performance in existing entity recognition processes.
  • the present application provides a method for identifying entity relationships, which is applied to electronic devices, and the method includes:
  • Preprocessing the training sample by using a pre-built entity relationship recognition model to obtain a word vector of each word in the training sample;
  • processing the entity pairs of the training sample according to the obtained word vectors, and obtaining the entity vector of each entity in all entity pairs of the training sample, wherein an entity pair consists of any two entities of the training sample;
  • the entity relationship recognition is performed on the text to be recognized.
  • The present application also provides a device for identifying entity relationships, the device comprising:
  • the word vector acquisition module is used to preprocess the training sample by using the pre-built entity relationship recognition model, and obtain the word vector of each word in the training sample;
  • the entity vector acquisition module is used to process the entity pairs of the training samples according to the acquired word vectors, and acquire the entity vectors of each entity in all the entity pairs of the training samples, wherein the entity pairs are composed of the Any two entities of the training sample;
  • an additional feature vector acquisition module, configured to weight and sum the word vectors of each entity of the entity pair through a preset weight matrix, and obtain an additional feature vector of each entity of the entity pair;
  • a prediction vector acquisition module configured to process each entity vector of the entity pair and the additional feature vector through an activation function to obtain the predicted probability of the entity pair relationship category;
  • a relationship category loss function value acquisition module, used to process the predicted probability of the entity pair's relationship category through the pre-built loss function to obtain the relationship category loss function value;
  • a model training completion module configured to cyclically obtain the loss function value of the relationship category until the loss function value of the relationship category converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
  • the relationship category recognition module is used to identify the entity relationship of the text to be recognized through the trained entity relationship recognition model.
  • The present application also provides an electronic device, which includes: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor, so that the at least one processor can execute the steps of the above-mentioned entity relationship identification method.
  • the present application also provides a computer-readable storage medium, at least one instruction is stored in the computer-readable storage medium, and the at least one instruction is executed by a processor in the electronic device to realize the above-mentioned A method for identifying entity relationships.
  • In the solutions of the present application, the training sample is preprocessed using the pre-built entity relationship recognition model to obtain the word vector of each word in the training sample; according to the obtained word vectors, the entity pairs of the training sample are processed to obtain the entity vector of each entity in all entity pairs, wherein an entity pair consists of any two entities of the training sample; the word vectors of each entity of the entity pair are weighted and summed through a preset weight matrix to obtain the additional feature vector of each entity of the entity pair; each entity vector of the entity pair and the additional feature vector are processed through an activation function to obtain the predicted probability of the entity pair's relationship category; this predicted probability is processed through the pre-built loss function to obtain the relationship category loss function value; the relationship category loss function value is obtained in a loop until it converges to a preset range, completing the iterative training of the entity relationship recognition model; finally, entity relationship recognition is performed on the text to be recognized through the trained entity relationship recognition model.
  • The main purpose of this application is to train the entity relationship recognition model cyclically through the constructed loss function, thereby improving the recognition of entity relationships.
  • FIG. 1 is a schematic flowchart of an entity relationship identification method provided by an embodiment of the present application
  • FIG. 2 is a block diagram of an entity relationship identification device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an internal structure of an electronic device implementing an entity relationship identification method provided by an embodiment of the present application.
  • The terms "mobile device" and/or "equipment" refer generally to wireless communication devices, and more specifically to one or more of the following: portable electronic devices, telephones (e.g., cellular phones, smartphones), computers (e.g., laptops, tablets), portable media players, personal digital assistants (PDAs), or any other electronic device with networking capabilities.
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technique of computer science that attempts to understand the nature of intelligence and produce a new kind of intelligent machine that can respond in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive subject that involves a wide range of fields, including both hardware-level technology and software-level technology.
  • Artificial intelligence basic technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes several major directions such as computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Artificial intelligence technology has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, autonomous driving, drones, robots, intelligent medical care, and intelligent customer service. It is believed that with the development of technology, artificial intelligence will be applied in more fields and deliver increasingly important value.
  • Machine learning (Machine Learning, ML for short) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers simulate or implement human learning behaviors to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their performance.
  • Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and its application pervades all fields of artificial intelligence.
  • Machine learning and deep learning usually include techniques such as artificial neural network, belief network, reinforcement learning, transfer learning, inductive learning, and teaching learning.
  • the present application provides a method for identifying entity relationships.
  • FIG. 1 it is a schematic flowchart of an entity relationship identification method provided by an embodiment of the present application.
  • the method may be performed by a device, and the device may be implemented by software and/or hardware.
  • the method for identifying entity relationships includes:
  • S1 Preprocess the training samples by using the pre-built entity relationship recognition model, and obtain the word vector of each word in the training samples;
  • S2 Process the entity pairs of the training samples according to the obtained word vectors, and obtain the entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training sample;
  • S3 Perform a weighted summation process on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an additional feature vector of each entity of the entity pair;
  • S4 Process each entity vector of the entity pair and the additional feature vector through an activation function to obtain the predicted probability of the entity pair relationship category;
  • S5 Process the predicted probability of the entity pair's relationship category through a pre-built loss function to obtain the relationship category loss function value;
  • S6 cyclically acquire the loss function value of the relationship category until the loss function value of the relationship category converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
  • S7 Perform entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
  • In this way, the entity relationship recognition model is trained cyclically through the constructed loss function, so as to solve the existing problem of poor recognition performance in the entity recognition process.
  • step S1 the training sample is preprocessed using the pre-built entity relationship recognition model, and the word vector of each word in the training sample is obtained, including:
  • According to the initial word vector, determine the corresponding image feature vector, radical feature vector, and pinyin feature vector of the target word;
  • A word vector corresponding to the target word is generated according to the initial word vector, image feature vector, radical feature vector, pinyin feature vector, and a preset weight matrix.
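The fusion step above can be sketched as follows. The patent does not specify the concrete weight matrix or feature dimensions, so this sketch makes the simplifying assumption that the preset weight matrix reduces to one scalar weight per feature channel (a normalized 4-way combination):

```python
import numpy as np

def fuse_word_features(initial_vec, image_vec, radical_vec, pinyin_vec, weights):
    """Fuse the four per-character feature vectors into one word vector.

    Assumption: the preset weight matrix is modeled as one scalar weight per
    feature channel, normalized so the fusion is a convex combination.
    """
    feats = np.stack([np.asarray(f, dtype=float) for f in
                      (initial_vec, image_vec, radical_vec, pinyin_vec)])  # (4, d)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize the channel weights
    return w @ feats         # (d,) fused word vector

# Toy example with 3-dimensional feature vectors and equal weights.
v = fuse_word_features([1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [1, 1, 1, 1])
```

In practice the weights would be learned jointly with the rest of the entity relationship recognition model rather than fixed by hand.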
  • the training sample is preprocessed by using the constructed entity relationship recognition model to obtain the word vector of each word of the training sample, wherein, the formula used to obtain the word vector of each word of the training sample is:
  • h_1, h_2, ..., h_l represent the word vectors of the words in the training sample;
  • x_1, x_2, ..., x_l represent the words in the training sample;
  • l represents the length of the training sample.
  • BERT is a model that has been pre-trained on massive data to produce word vectors. After the training sample is input, BERT outputs a vector for each word, and this vector can be used as that word's word vector.
  • For the vector representation of an entity: when inputting to the entity relationship recognition model, insert "*" before the tokens of the entity, and use the vector corresponding to "*" as the vector representation of that mention. Since an entity may appear multiple times in the text, multiple vector representations of the entity can be obtained; these are combined using the formula in step S2, and the result of that formula is used as the final vector representation of the entity.
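The aggregation of multiple mention vectors into one entity vector can be sketched as follows. The formula of step S2 is not reproduced in this text, so logsumexp pooling, a smooth maximum commonly used for exactly this purpose in document-level relation extraction, is assumed here:

```python
import numpy as np

def pool_entity_mentions(mention_vecs):
    """Pool the '*'-marker vectors of all mentions of one entity into a single
    entity vector.

    Assumption: logsumexp pooling (a smooth version of max pooling), since the
    patent text omits the concrete formula of step S2.
    """
    m = np.asarray(mention_vecs, dtype=float)   # (num_mentions, d)
    return np.log(np.exp(m).sum(axis=0))        # (d,) entity vector

# Two mentions of the same entity, each represented by a 2-dim marker vector.
e = pool_entity_mentions([[0.0, 1.0], [1.0, 0.0]])
```

With a single mention, the pooled vector equals that mention's vector, which is a desirable property of this choice.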
  • step S2 according to the obtained word vector, the entity pair of the training sample is processed, and the entity vector of each entity in all entity pairs of the training sample is obtained, including:
  • The final vector of an entity can be expressed by the following formula, that is, the formula used to obtain the entity vector of each entity in all entity pairs of the training samples:
  • step S3 the word vector of each entity of the entity pair is weighted and summed through the preset weight matrix, and an additional feature vector of each entity of the entity pair is obtained, including:
  • an additional feature vector of each entity of the entity pair is obtained.
  • A_(s,o) = A_s · A_o
  • A_s represents the attention weight of the head entity S of the entity pair with respect to the other tokens;
  • A_o represents the attention weight of the tail entity O of the entity pair with respect to the other tokens;
  • A_(s,o) represents the weight of the entity pair on all other tokens;
  • H represents the number of attention heads;
  • q_(s,o) represents the weight of the entity pair with respect to the other tokens, aggregated over the H attention heads.
  • Through the weight matrix, the weight of the head entity S of the entity pair with respect to the other tokens is obtained as A_s, and the weight of the tail entity O with respect to the other tokens as A_o; their product gives A_(s,o) = A_s · A_o.
  • From A_(s,o), the top-k most important tokens for this entity pair can be selected in descending order of weight, and a weighted sum with the corresponding top-k token vectors yields an additional feature vector c_(s,o).
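The construction of c_(s,o) in step S3 can be sketched as follows, assuming that attn_s and attn_o are the model's attention distributions of the head and tail entity over all tokens, and that the pair weight is their element-wise product (the normalization details are assumptions):

```python
import numpy as np

def localized_context(token_vecs, attn_s, attn_o, topk):
    """Additional feature vector c_(s,o) for one entity pair (step S3)."""
    a = np.asarray(attn_s, dtype=float) * np.asarray(attn_o, dtype=float)
    a = a / a.sum()                          # A_(s,o): joint importance per token
    idx = np.argsort(a)[::-1][:topk]         # indices of the top-k tokens
    w = a[idx] / a[idx].sum()                # renormalize over the kept tokens
    H = np.asarray(token_vecs, dtype=float)  # (seq_len, d) token vectors
    return w @ H[idx]                        # weighted sum of the top-k vectors

# Three tokens; the pair attends mostly to the first two, so the third token
# is dropped by the top-2 selection.
c = localized_context([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]],
                      attn_s=[0.5, 0.4, 0.1], attn_o=[0.5, 0.4, 0.1], topk=2)
```

Multiplying the two attention distributions keeps only tokens that matter to both entities, which is what makes the context vector specific to the pair rather than to either entity alone.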
  • step S4 the activation function is used to process each entity vector of the entity pair and the additional feature vector to obtain the predicted probability of the entity pair relationship category, including:
  • S41 Process each entity vector of the entity pair together with the additional feature vector through a tanh activation function, obtaining a tanh result value for each entity;
  • S42 Take the product of the obtained tanh result values of the two entities, and then input it into the sigmoid function to obtain the predicted probability of the entity pair's relationship category.
  • the predicted probability of the entity for a relationship can be expressed as the following formula, that is, the formula used to obtain the predicted probability of the entity for the relationship category is:
  • Z_s represents the vector obtained after the entity vector e_s passes through the fully connected layer and then the activation function;
  • W_s represents a parameter matrix;
  • Z_o represents the vector obtained after the entity vector e_o passes through the fully connected layer and then the activation function;
  • b_r represents the bias term;
  • σ represents the sigmoid function;
  • W_r represents a parameter matrix;
  • r represents a particular relationship.
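Steps S41/S42 can be sketched as follows. The exact layer shapes, and the choice to concatenate c_(s,o) to each entity vector before the fully connected layer, are assumptions, since the formula itself is not reproduced in this text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_relation_prob(e_s, e_o, c, W_s, W_o, W_r, b_r):
    """Predicted probability that entity pair (s, o) holds relation r.

    Assumption: each entity vector is concatenated with the pair's additional
    feature vector c_(s,o) before its fully connected layer.
    """
    z_s = np.tanh(W_s @ np.concatenate([e_s, c]))  # head representation (S41)
    z_o = np.tanh(W_o @ np.concatenate([e_o, c]))  # tail representation (S41)
    return sigmoid(z_s @ W_r @ z_o + b_r)          # bilinear product + sigmoid (S42)

# With zero weight matrices both tanh outputs are 0, so the probability is
# sigmoid(b_r); with b_r = 0 that is exactly 0.5.
p = predict_relation_prob(np.ones(2), np.ones(2), np.ones(2),
                          np.zeros((2, 4)), np.zeros((2, 4)), np.eye(2), 0.0)
```

A separate W_r and b_r would be held for each relation r, giving one independent probability per relation, consistent with the multi-label setting described above.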
  • L represents the loss function;
  • L_1 represents the loss function of one entity in the entity pair;
  • L_2 represents the loss function of the other entity in the entity pair;
  • P represents the set of categories that the entity pair belongs to;
  • N represents the set of categories that the entity pair does not belong to;
  • TH represents the preset threshold category;
  • σ represents the sigmoid function;
  • r represents a particular relationship;
  • K represents a parameter.
  • In step S6, the relationship category loss function value is obtained in a loop, and when it converges to a preset range, the loop stops, so as to complete the iterative training of the constructed entity relationship recognition model, including:
  • The basis for judging whether an entity pair belongs to a relationship category is whether the pair's score for that category is higher than its score for the threshold category. Under this loss function, if the entity pair's score for a category is much higher than that of the threshold category, the entity relationship recognition model has learned how to judge that category; the loss function value for that category is then lower, and the model pays more attention to the other categories that have not yet been learned well.
  • In this way, the trained entity relationship recognition model learns to discriminate not only relationship categories with many samples, but also relationship categories with relatively few samples.
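The threshold-category behavior described above can be sketched as follows, assuming TH is scored as one extra class and each part of the loss is a softmax cross-entropy over its competing group (these index and grouping conventions are assumptions, not taken from the patent):

```python
import numpy as np

def adaptive_threshold_loss(logits, positive_idx, th_idx):
    """Threshold-category loss for one entity pair (steps S5/S6).

    Assumption: positive relation classes should outscore the threshold class
    TH, and TH should outscore all negative classes; each requirement is a
    softmax cross-entropy over its own group.
    """
    logits = np.asarray(logits, dtype=float)
    pos = list(positive_idx)
    neg = [i for i in range(len(logits)) if i not in pos and i != th_idx]

    # First part: every positive class competes against {positives, TH}.
    group1 = pos + [th_idx]
    logZ1 = np.log(np.exp(logits[group1]).sum())
    part1 = -sum(logits[r] - logZ1 for r in pos)

    # Second part: TH competes against {negatives, TH}.
    group2 = neg + [th_idx]
    logZ2 = np.log(np.exp(logits[group2]).sum())
    part2 = -(logits[th_idx] - logZ2)

    return part1 + part2

# Class 0 is the true relation, class 2 is the threshold class TH.
good = adaptive_threshold_loss([10.0, -10.0, 0.0], positive_idx=[0], th_idx=2)
bad = adaptive_threshold_loss([-10.0, 10.0, 0.0], positive_idx=[0], th_idx=2)
```

A pair whose positive class already clears TH by a wide margin contributes almost nothing to the loss, which is exactly the "well-learned categories fade out" behavior described above.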
  • In step S7, entity relationship recognition is performed on the text to be recognized through the trained entity relationship recognition model, and the relationship category of each entity pair is obtained. That is to say, after the model is trained with this loss function, the relationship category of an entity pair in a new article can be predicted according to the above steps.
  • The constructed entity relationship recognition model is used to preprocess the training samples to obtain the word vector of each word of the training sample; according to the obtained word vectors, the entity pairs of the training samples are processed to obtain the entity vector of each entity in all entity pairs; the word vectors of each entity of the entity pair are weighted and summed through the weight matrix to obtain the additional feature vector of each entity of the entity pair; each entity vector of the entity pair and the additional feature vector are processed through an activation function to obtain the predicted probability of the entity pair's relationship category; this predicted probability is processed through the constructed loss function to obtain the relationship category loss function value; the relationship category loss function value is obtained in a loop, and when it converges to the preset range the loop stops, completing the iterative training of the constructed entity relationship recognition model; finally, entity relationship recognition is performed on the text to be recognized through the trained model, and the relationship category of the entity pair is obtained.
  • the apparatus 100 for identifying the entity relationship described in this application may be installed in an electronic device.
  • the entity relationship identification device 100 may include: a word vector acquisition module 101, an entity vector acquisition module 102, an additional feature vector acquisition module 103, a prediction vector acquisition module 104, a relationship category loss function value acquisition module 105, Model training completion module 106 and relationship category identification module 107 .
  • the module described in this application can also be called a unit, which refers to a series of computer program segments that can be executed by the processor of the electronic device and can complete fixed functions, and are stored in the memory of the electronic device.
  • each module/unit is as follows:
  • the word vector acquisition module 101 is used to preprocess the training sample by using the pre-built entity relationship recognition model, and obtain the word vector of each word in the training sample;
  • the entity vector obtaining module 102 is configured to process the entity pairs of the training samples according to the obtained word vectors, and obtain the entity vectors of each entity in all the entity pairs of the training samples, wherein the entity pairs consist of any two entities of the above training samples;
  • an additional feature vector acquisition module 103, configured to perform a weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix, and obtain an additional feature vector of each entity of the entity pair;
  • a prediction vector acquisition module 104 configured to process each entity vector of the entity pair and the additional feature vector through an activation function, and obtain the predicted probability of the entity pair relationship category;
  • a relationship category loss function value acquisition module 105, used to process the predicted probability of the entity pair's relationship category through the pre-built loss function to obtain the relationship category loss function value;
  • a model training completion module 106 configured to cyclically obtain the loss function value of the relationship category until the loss function value of the relationship category converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
  • the relationship category identification module 107 is configured to perform entity relationship identification on the text to be recognized through the trained entity relationship identification model.
  • The constructed entity relationship recognition model is used to preprocess the training samples to obtain the word vector of each word of the training sample; according to the obtained word vectors, the entity pairs of the training samples are processed to obtain the entity vector of each entity in all entity pairs; the word vectors of each entity of the entity pair are weighted and summed through the weight matrix to obtain the additional feature vector of each entity of the entity pair; each entity vector of the entity pair and the additional feature vector are processed through an activation function to obtain the predicted probability of the entity pair's relationship category; this predicted probability is processed through the constructed loss function to obtain the relationship category loss function value; the relationship category loss function value is obtained in a loop, and when it converges to the preset range the loop stops, completing the iterative training of the constructed entity relationship recognition model; finally, entity relationship recognition is performed on the text to be recognized through the trained model, and the relationship category of the entity pair is obtained.
  • FIG. 3 it is a schematic structural diagram of an electronic device implementing the entity relationship identification method of the present application.
  • the electronic device 1 may include a processor 10 , a memory 11 and a bus, and may also include a computer program stored in the memory 11 and operable on the processor 10 , such as an entity relationship recognition program 12 .
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes a flash memory, a mobile hard disk, a multimedia card, a card-type memory (for example: SD or DX memory, etc.), a magnetic memory, a magnetic disk, CD etc.
  • The memory 11 may be an internal storage unit of the electronic device 1 in some embodiments, such as a hard disk of the electronic device 1.
  • the memory 11 can also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk equipped on the electronic device 1, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital , SD) card, flash memory card (Flash Card), etc.
  • the memory 11 may also include both an internal storage unit of the electronic device 1 and an external storage device.
  • The memory 11 can not only be used to store application software installed in the electronic device 1 and various data, such as the code of the entity relationship recognition program, but can also be used to temporarily store data that has been output or will be output.
  • the memory may store content that may be displayed by the electronic device or sent to other devices (eg, headphones) for display or playback by other devices.
  • the memory may also store content received from other devices. The content from other devices may be displayed, played, or used by the electronic device to perform any necessary tasks or operations that may be performed by computer processors or other components in the electronic device and/or wireless access point.
  • The processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits with the same or different functions, including one or a combination of a central processing unit (CPU), microprocessor, digital processing chip, graphics processor, and various control chips.
  • The processor 10 is the control core (Control Unit) of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, and executes the various functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 11 (such as the entity relationship recognition program) and calling the data stored in the memory 11.
  • The electronic device may also include a chipset (not shown) for controlling communications between the one or more processors and one or more of the other components of the device.
  • The electronic device may be based on a particular processor architecture, and compatible processors and chipsets are available from the corresponding processor and chipset families.
  • The one or more processors 10 may also include one or more application-specific integrated circuits (ASICs) or application-specific standard products (ASSPs) for handling specific data processing functions or tasks.
  • the bus may be a peripheral component interconnect standard (PCI for short) bus or an extended industry standard architecture (EISA for short) bus or the like.
  • the bus can be divided into address bus, data bus, control bus and so on.
  • the bus is configured to realize connection and communication between the memory 11 and at least one processor 10 and the like.
  • network and I/O interfaces may include one or more communication interfaces or network interface devices to provide for data transfer between the electronic device and other devices (eg, web servers) via a network (not shown).
  • Communication interfaces may include, but are not limited to: Body Area Network (BAN), Personal Area Network (PAN), Wired Local Area Network (LAN), Wireless Local Area Network (WLAN), Wireless Wide Area Network (WWAN), and the like.
  • User equipment may be coupled to the network via a wired connection.
  • the wireless system interface may include hardware or software to broadcast and receive messages using the Wi-Fi Direct standard and/or the IEEE 802.11 wireless standard, the Bluetooth standard, the Bluetooth low energy standard, the Wi-Gig standard, and/or any other wireless standards and/or combinations thereof.
  • a wireless system may include a transmitter and a receiver or transceiver capable of operating over a wide range of operating frequencies governed by the IEEE 802.11 wireless standard.
  • Communication interfaces may utilize acoustic, radio frequency, optical, or other signals to exchange data between the electronic device and other devices, such as access points, hosts, servers, routers, reading devices, and the like.
  • Networks may include, but are not limited to, the Internet, private networks, virtual private networks, wireless wide area networks, local area networks, metropolitan area networks, telephone networks, and the like.
  • Displays may include, but are not limited to, liquid crystal displays, light emitting diode displays, or E-InkTM displays manufactured by E Ink Corp. of Cambridge, Massachusetts, USA.
  • the display can be used to display content to the user in the form of text, images, or video.
  • the display can also operate as a touch screen display, which can enable a user to initiate commands or operations by touching the screen with certain fingers or gestures.
  • Figure 3 only shows an electronic device with certain components; those skilled in the art can understand that the structure shown in Figure 3 does not constitute a limitation on the electronic device 3, which may include fewer or more components than those illustrated, a combination of certain components, or a different arrangement of components.
  • the electronic device 1 can also include a power supply (such as a battery) for supplying power to various components.
  • the power supply can be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management device.
  • the power supply may also include one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, power status indicators and other arbitrary components.
  • the electronic device 1 may also include various sensors, bluetooth modules, Wi-Fi modules, etc., which will not be repeated here.
  • the electronic device 1 may also include a network interface; optionally, the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface, a Bluetooth interface, etc.), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
  • the electronic device 1 may further include a user interface, which may be a display (Display) or an input unit (such as a keyboard (Keyboard)).
  • the user interface may also be a standard wired interface or a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, and the like.
  • the display may also be appropriately called a display screen or a display unit, and is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
  • the entity relationship recognition program 12 stored in the memory 11 in the electronic device 1 is a combination of multiple instructions, and when running in the processor 10, it can realize:
  • preprocessing the training sample by using a pre-built entity relationship recognition model to obtain a word vector of each word in the training sample;
  • processing the entity pairs of the training sample according to the obtained word vectors to obtain the entity vector of each entity in all entity pairs of the training sample, wherein an entity pair consists of any two entities of the training sample;
  • performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
  • processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
  • processing the predicted probability through a pre-built loss function to obtain a relationship-category loss function value, and cyclically obtaining this value until it converges to a preset range, so as to complete the iterative training of the model;
  • performing entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
  • if the integrated modules/units of the electronic device 1 are realized in the form of software function units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
  • the computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by the processor in the electronic device to implement the steps of the above-mentioned entity relationship identification method, which are as follows:
  • preprocessing the training sample by using a pre-built entity relationship recognition model to obtain a word vector of each word in the training sample;
  • processing the entity pairs of the training sample according to the obtained word vectors to obtain the entity vector of each entity in all entity pairs of the training sample, wherein an entity pair consists of any two entities of the training sample;
  • performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
  • processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
  • processing the predicted probability through a pre-built loss function to obtain a relationship-category loss function value, and cyclically obtaining this value until it converges to a preset range, so as to complete the iterative training of the model;
  • performing entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units, that is, they may be located in one place, or may also be distributed to multiple network units. Part or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware, or in the form of hardware plus software function modules.
  • These computer-executable program instructions can be loaded into a general-purpose computer, special-purpose computer, processor, or other programmable data processing device to produce a specific machine, so that the instructions executed on the computer, processor, or other programmable data processing device create a component that implements one or more functions specified in a flowchart block or blocks.
  • These computer program products can also be stored in a computer-readable memory, which can instruct a computer or other programmable data processing apparatus to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture that includes an instruction component implementing one or more functions specified in a flowchart block or blocks.
  • the embodiments of the present application may provide a computer program product, which includes a computer-usable medium having computer-readable program code or program instructions embodied therein, the computer-readable program code being adapted to be executed to realize one or more functions specified in a flowchart block or blocks.
  • Computer program instructions can also be loaded onto a computer or other programmable data processing device to cause a series of operational elements or steps to be executed on the computer or other programmable device to generate a computer-implemented program, such that the instructions executed on the computer or other programmable device provide elements or steps for implementing the functions specified in the flowchart block or blocks.
  • blocks in the block diagrams or flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It should also be understood that each block in the block diagrams and flowchart illustrations, and combinations of blocks therein, can be implemented by a dedicated hardware-based computer system that performs the specified functions, elements, or steps, or by a combination of dedicated hardware and computer instructions.
  • Blockchain, essentially a decentralized database, is a series of data blocks associated with each other using cryptographic methods. Each data block contains a batch of network transaction information, which is used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Machine Translation (AREA)
  • Character Discrimination (AREA)

Abstract

This application relates to the field of artificial intelligence technology and provides an entity relationship recognition method, a device, and a readable storage medium. The method includes: using a pre-built entity relationship recognition model to obtain a character vector for each character in the training samples; obtaining, according to the obtained character vectors, an entity vector for each entity in all entity pairs of the training samples; performing weighted summation on the word vectors of each entity of an entity pair through a preset weight matrix to obtain an extra feature vector for each entity of the entity pair; obtaining the predicted probability of the relationship category of the entity pair through an activation function; completing the iterative training of the constructed entity relationship recognition model through a pre-built loss function; and performing entity relationship recognition on the text to be recognized through the trained entity relationship recognition model. The main purpose of this application is to cyclically train the entity relationship recognition model through the constructed loss function, so as to solve the existing problem of poor recognition performance in the entity recognition process.

Description

Entity relationship recognition method, device, and readable storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on January 14, 2022, with application number 202210042332.0 and invention title "Entity relationship recognition method, device, and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to an entity relationship recognition method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
The knowledge graph is an important area of NLP, aiming to enable smarter search engines. With the development of technology, it can be applied to scenarios such as intelligent search, intelligent question answering, and personalized recommendation. How to build a knowledge graph has therefore become a hot topic in the NLP field.
The first step in building a knowledge graph is information extraction from text, and a key technique of information extraction is relation extraction. After the discrete entities in a text are recognized, the relationship between every two entities needs to be identified; since there may be multiple relationships between two entities, this is a multi-label classification problem. The current approach to document-level relation extraction uses a dynamic threshold and modifies the loss function: each sample has its own threshold, and after the model gives the probability that an entity pair belongs to each relationship, a dynamic threshold is produced, and every relationship category scoring above this threshold is taken as a category of the entity pair.
However, the inventors realized that one problem not addressed by the above method is that current multi-label classification suffers from class imbalance. Imbalanced data causes the model to perform poorly when recognizing categories with little training data, degrading the final overall recognition performance.
To solve the above problem, this application urgently provides an entity relationship recognition method.
Summary
This application provides an entity relationship recognition method, an apparatus, an electronic device, and a computer-readable storage medium. Its main purpose is to cyclically train an entity relationship recognition model through a constructed loss function, so as to solve the existing problem of poor recognition performance in the entity recognition process.
To achieve the above purpose, this application provides an entity relationship recognition method applied to an electronic device, the method including:
preprocessing training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
processing entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
processing the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
cyclically obtaining the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
performing entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
To solve the above problem, this application also provides an entity relationship recognition apparatus, the apparatus including:
a character vector acquisition module, configured to preprocess training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
an entity vector acquisition module, configured to process the entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
an extra feature vector acquisition module, configured to perform weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
a prediction vector acquisition module, configured to process each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
a relationship-category loss function value acquisition module, configured to process the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
a model training completion module, configured to cyclically obtain the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
a relationship category recognition module, configured to perform entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
To solve the above problem, this application also provides an electronic device, the electronic device including:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of the above entity relationship recognition method.
To solve the above problem, this application also provides a computer-readable storage medium, the computer-readable storage medium storing at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the above entity relationship recognition method.
In the embodiments of this application, training samples are preprocessed by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples; the entity pairs of the training samples are processed according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples; weighted summation is performed on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair; each entity vector of the entity pair and the extra feature vector are processed through an activation function to obtain the predicted probability of the relationship category of the entity pair; the predicted probability of the relationship category of the entity pair is processed through a pre-built loss function to obtain a relationship-category loss function value; the relationship-category loss function value is cyclically obtained until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model; and entity relationship recognition is performed on the text to be recognized through the trained entity relationship recognition model. The main purpose of this application is to cyclically train the entity relationship recognition model through the constructed loss function, so as to solve the existing problem of poor recognition performance in the entity recognition process.
Brief Description of the Drawings
Figure 1 is a schematic flowchart of an entity relationship recognition method provided by an embodiment of this application;
Figure 2 is a schematic module diagram of an entity relationship recognition apparatus provided by an embodiment of this application;
Figure 3 is a schematic diagram of the internal structure of an electronic device implementing the entity relationship recognition method provided by an embodiment of this application.
The realization of the purpose, functional characteristics, and advantages of this application will be further explained with reference to the accompanying drawings in conjunction with the embodiments.
Detailed Description
In the following description, many specific details are set forth. However, it should be understood that the embodiments of this application can be implemented without these specific details. In other instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this specification. References to "one embodiment", "an embodiment", "exemplary embodiment", "various embodiments", etc., indicate that the embodiment so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes those particular features, structures, or characteristics. Moreover, repeated use of the phrase "in one embodiment" does not necessarily refer to the same embodiment, although it may.
As used herein, unless otherwise indicated, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of similar objects are being referred to, and is not intended to imply that the objects so described must be in a given order, whether temporally, spatially, in ranking, or in any other manner.
As used herein, unless otherwise indicated, the terms "mobile device" and/or "device" generally refer to a wireless communication device, and more specifically to one or more of the following: a portable electronic device, a telephone (e.g., a cellular telephone or smartphone), a computer (e.g., a laptop or tablet computer), a portable media player, a personal digital assistant (PDA), or any other electronic device with networking capability.
Before further describing the embodiments of this application in detail, the nouns and terms involved in the embodiments of this application are explained; they are applicable to the following interpretations.
Artificial Intelligence (AI) is the theory, method, technology, and application system that uses digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline involving a wide range of fields, including both hardware-level and software-level technologies. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
With the research and progress of artificial intelligence technology, AI has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, robots, smart healthcare, and smart customer service. It is believed that with the development of technology, AI will be applied in more fields and play an increasingly important role.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; its applications span all fields of AI. Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstration.
This application provides an entity relationship recognition method. Referring to Figure 1, it is a schematic flowchart of an entity relationship recognition method provided by an embodiment of this application. The method can be executed by an apparatus, which can be implemented by software and/or hardware.
In this embodiment, the entity relationship recognition method includes:
S1: preprocessing training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
S2: processing the entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
S3: performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
S4: processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
S5: processing the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
S6: cyclically obtaining the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
S7: performing entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
In the embodiments of this application, the entity relationship recognition model is cyclically trained through the constructed loss function, thereby solving the existing problem of poor recognition performance in the entity recognition process.
In an embodiment of this application, suppose a document
D = [x_1, x_2, ..., x_l]
where l denotes the length of the document and t denotes the t-th character x_t. For each entity in the document, the special symbol "*" is inserted at the beginning and end of the entity.
In step S1, preprocessing the training samples by using the pre-built entity relationship recognition model to obtain the character vector of each character in the training samples includes:
obtaining a target character in the training samples and determining an initial character vector corresponding to the target character;
determining, according to the initial character vector, an image feature vector, a radical feature vector, and a pinyin feature vector corresponding to the target character;
generating the character vector corresponding to the target character according to the initial character vector, the image feature vector, the radical feature vector, the pinyin feature vector, and a preset weight matrix.
Here, the constructed entity relationship recognition model is used to preprocess the training samples and obtain the character vector of each character of the training samples, using the formula:
[h_1, h_2, ..., h_l] = BERT([x_1, x_2, ..., x_l])
where h_1, h_2, ..., h_l denote the character vectors of the characters in the training sample; x_1, x_2, ..., x_l denote the characters in the training sample; and l denotes the length of the training sample.
In the embodiments of this application, BERT is a model for obtaining character vectors trained on massive data. By feeding each character of a sentence into the BERT model, the vector of each character in the last hidden layer of BERT can be obtained, and this vector can serve as the vector of that character. For the vector representation of an entity, when feeding input to the entity relationship recognition model, "*" is inserted before the tokens of the entity, and the vector corresponding to "*" is finally used as the vector representation of this entity. Since an entity may appear multiple times in the text, multiple vector representations of this entity can be obtained accordingly. These multiple vector representations are combined using the formula of step S2, and the result of this formula is taken as the final vector representation of the entity.
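The "*" marking described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the tokenizer and BERT encoder are omitted, and the function name and the (start, end) span format are assumptions. It inserts the marker before and after each entity mention and records the position of the leading "*", whose encoder hidden state would serve as the mention vector:

```python
def mark_entities(tokens, spans):
    """Insert a "*" marker before and after each entity mention.

    tokens: list of characters/tokens of the document.
    spans:  sorted, non-overlapping (start, end) half-open index pairs,
            one per entity mention.
    Returns the marked token list and the index of each leading "*",
    whose encoder output would be used as that mention's vector.
    """
    out, marker_positions = [], []
    prev = 0
    for start, end in spans:
        out.extend(tokens[prev:start])
        marker_positions.append(len(out))  # index of the leading "*"
        out.append("*")
        out.extend(tokens[start:end])
        out.append("*")
        prev = end
    out.extend(tokens[prev:])
    return out, marker_positions
```

For example, marking the span (1, 3) in "abcde" yields a, *, b, c, *, d, e, with the leading marker at index 1.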
In step S2, processing the entity pairs of the training samples according to the obtained character vectors to obtain the entity vector of each entity in all entity pairs of the training samples includes:
obtaining a word vector of each word in the training samples according to the obtained character vectors;
obtaining the entity vectors in the training samples according to the obtained word vectors.
Here, for any entity e_i, suppose the entity has N mentions; then the final vector of the entity can be expressed by the following formula, i.e., the entity vector of each entity in all entity pairs of the training samples is obtained with:
e_i = log( Σ_{j=1}^{N} exp(e_i^j) )
where e_i^j denotes the entity vector at the j-th position where the entity appears in the training sample, and N denotes the total number of times the entity appears in the training sample.
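The pooling of a recurring entity's mention vectors into one entity vector can be sketched as follows. This assumes, as the surrounding definitions suggest, an elementwise log-sum-exp over the N mention vectors; the function name is illustrative:

```python
import numpy as np

def entity_vector(mention_vecs):
    """Pool N mention vectors into one entity vector.

    Uses an elementwise log-sum-exp over the mentions, which behaves
    like a smooth maximum: strong features of any single mention
    survive in the pooled entity representation.
    """
    m = np.stack(mention_vecs)            # shape (N, d)
    return np.log(np.exp(m).sum(axis=0))  # shape (d,)
```

A single mention is returned unchanged; two identical mentions v pool to v + log 2, so the pooled vector grows only slowly with repeated identical evidence.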
In step S3, performing weighted summation on the word vectors of each entity of the entity pair through the preset weight matrix to obtain the extra feature vector of each entity of the entity pair includes:
processing the entity vector of the head entity in the entity pair through the preset weight matrix to obtain the weight of the head entity in the entity pair;
processing the entity vector of the tail entity in the entity pair through the preset weight matrix to obtain the weight of the tail entity in the entity pair;
obtaining the extra feature vector of each entity of the entity pair according to the weight of the head entity in the entity pair and the weight of the tail entity in the entity pair.
Here, the extra feature vector of each entity of the entity pair is obtained with the formulas:
A^(s,o) = A^s · A^o
q^(s,o) = (1/H) Σ_{i=1}^{H} A_i^(s,o)
where A^s denotes the weights of the head entity S of the entity pair with respect to the other tokens; A^o denotes the weights of the tail entity O of the entity pair with respect to the other tokens; A^(s,o) denotes the weights of the entity pair with respect to all other tokens; H denotes the number of heads; and q^(s,o) denotes the weights of the entity pair, aggregated over the H heads, with respect to the other tokens.
In the embodiments of this application, according to the entity relationship recognition model, the token-to-token weight matrix A_ijk of the last layer can be obtained, where 0 < i <= H, H denotes the number of multi-heads in BERT, and 0 < j, k <= l. Through this weight matrix, the weight of the head entity S of the entity pair with respect to the other tokens is A^s, and the weight of the tail entity with respect to the other tokens is A^o. Combining the two entities gives the weight of this entity pair with respect to all other tokens:
A^(s,o) = A^s · A^o
Since the entity pair has H heads, the final weight of the entity pair with respect to the other tokens is:
q^(s,o) = (1/H) Σ_{i=1}^{H} A_i^(s,o)
Furthermore, the top-k characters most important to this entity pair can be taken out according to the values of q^(s,o) from largest to smallest, and a weighted sum with the vectors corresponding to these top-k characters yields the extra feature vector c^(s,o).
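The head/tail attention combination and top-k weighted sum of this step can be sketched as follows. This is a toy NumPy sketch; the (H, L, L) attention layout and the normalization of the top-k weights are illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

def extra_feature(A, token_vecs, s_idx, o_idx, k=2):
    """Compute the extra feature vector c_(s,o) for one entity pair.

    A:           attention weights of the last layer, shape (H, L, L).
    token_vecs:  token vectors of the last layer, shape (L, d).
    s_idx/o_idx: token positions of the head and tail entity markers.
    """
    a_s = A[:, s_idx, :]          # head entity's weights over all tokens, per head
    a_o = A[:, o_idx, :]          # tail entity's weights over all tokens, per head
    q = (a_s * a_o).mean(axis=0)  # A_(s,o) = A_s . A_o, averaged over the H heads
    top = np.argsort(q)[-k:]      # indices of the k most relevant tokens
    w = q[top] / q[top].sum()     # normalized top-k weights (assumed normalization)
    return w @ token_vecs[top]    # weighted sum -> extra feature vector
```

The top-k tokens selected here are exactly the "evidence" characters described later in the text: the positions the model attends to most for both entities jointly.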
In step S4, processing each entity vector of the entity pair and the extra feature vector through the activation function to obtain the predicted probability of the relationship category of the entity pair includes:
S41: processing each entity vector of the entity pair and the extra feature vector respectively through the tanh activation function to obtain the tanh result value of each entity;
S42: multiplying the obtained tanh result values of the entities and then feeding the product into the sigmoid function to obtain the predicted probability of the relationship category of the entity pair.
For an entity pair (e_s, e_o), according to the obtained entity vectors and the extra feature vector c^(s,o), the predicted probability of the entity pair for a certain relationship can finally be expressed by the following formulas, i.e., the predicted probability of the relationship category of the entity pair is obtained with:
Z_s = tanh(W_s [e_s; c^(s,o)])
Z_o = tanh(W_o [e_o; c^(s,o)])
P_r = σ(Z_s^T · W_r · Z_o + b_r)
where Z_s denotes the vector obtained after the entity vector e_s passes through a fully connected layer and then the activation function; W_s and W_o denote parameter matrices; Z_o denotes the vector obtained after the entity vector e_o passes through a fully connected layer and then the activation function; b_r denotes a bias term; σ denotes the sigmoid function; Z_s^T denotes the transpose of Z_s; W_r denotes a parameter matrix; and r denotes a certain relationship.
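The tanh/sigmoid computation of the relation probability can be sketched as follows. The concatenation of each entity vector with the extra feature vector before the fully connected layer is an assumption drawn from the surrounding description, and all names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relation_prob(e_s, e_o, c, W_s, W_o, W_r, b_r):
    """Predicted probability that the pair (e_s, e_o) holds relation r.

    z_s = tanh(W_s [e_s; c])   -- fully connected layer + tanh
    z_o = tanh(W_o [e_o; c])
    P_r = sigmoid(z_s^T W_r z_o + b_r)
    """
    z_s = np.tanh(W_s @ np.concatenate([e_s, c]))
    z_o = np.tanh(W_o @ np.concatenate([e_o, c]))
    return sigmoid(z_s @ W_r @ z_o + b_r)
```

With all-zero parameters the bilinear logit is 0 and the probability is exactly 0.5, which is a convenient sanity check for the shapes.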
In step S5, the constructed loss function is: L = L_1 + L_2, where
L_1 = -Σ_{r∈P} σ(K(TH - logit_r)) · log( exp(logit_r) / Σ_{r'∈P∪{TH}} exp(logit_{r'}) )
L_2 = -Σ_{r∈N} σ(K(logit_r - TH)) · log( exp(logit_TH) / Σ_{r'∈N∪{TH}} exp(logit_{r'}) )
where L denotes the loss function; L_1 denotes the loss function for one entity of the entity pair; L_2 denotes the loss function for the other entity of the entity pair; P denotes the set of categories to which the entity pair belongs; N denotes the categories not belonging to the entity pair; TH denotes the preset category value; σ denotes the sigmoid function; r denotes a certain relationship; r' denotes another relationship; and K denotes a parameter.
In step S6, cyclically obtaining the relationship-category loss function value and, when the relationship-category loss function value converges to the preset range, stopping the cyclic acquisition so as to complete the iterative training of the constructed entity relationship recognition model, includes:
S61: when the relationship-category loss function value is lower than or equal to the preset category value, continuing to cyclically obtain the relationship-category loss function value;
S62: when the relationship-category loss function value is higher than the preset category value, stopping the cyclic acquisition of the relationship-category loss function value, thereby completing the training of the entity relationship recognition model.
In the embodiments of this application, based on the loss function L, the basis for judging whether an entity pair belongs to a relationship category is whether the score of the entity pair on this category is higher than the score of the threshold category. According to this loss function, if the score of this entity pair on the category is far higher than that of the threshold category, the entity relationship recognition model has already learned how to judge this category, so the loss function value of this category becomes lower and the model subsequently pays more attention to the other categories that have not been learned well.
If the score of the entity pair on the category is close to the threshold, the entity relationship recognition model has not yet learned how to judge this category, so the loss function value becomes higher, making the model pay more attention to this category in subsequent learning. That is, this is controlled by the two terms σ(K(TH - logit_r)) and σ(K(logit_r - TH)) in the loss function. For a positive sample, the score of relationship r is logit_r and the score of the preset category value is TH; the sigmoid function is monotonically increasing, so the smaller TH - logit_r is (i.e., the larger logit_r and the smaller TH), the smaller the loss function under this category becomes. Based on this improvement of the loss function, the trained entity relationship recognition model can learn to discriminate both relationship categories with many samples and relationship categories with few samples.
In the embodiments of this application, under the loss function L = L_1 + L_2, during the training of the entity relationship recognition model, if the predicted probability that a sample belongs to a certain category differs greatly from the probability of the preset category, the proportion of this category's loss in the total loss function is reduced. In this way, the proportion of hard-to-learn samples in the model's loss function increases; since the optimization objective of model training is to minimize the loss function, the model thus learns more about the features of hard-to-distinguish samples, increasing its ability to recognize them.
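The behaviour described here, where well-separated categories contribute less to the loss, can be sketched numerically as follows. This is a simplified single-example sketch; the exact placement of the σ(K(·)) weights inside L_1 and L_2 is an assumption reconstructed from the description, and the function names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_softmax_of(target, logits):
    """log of softmax probability assigned to the score `target` within `logits`."""
    logits = np.asarray(logits, dtype=float)
    m = logits.max()
    return (target - m) - np.log(np.exp(logits - m).sum())

def weighted_threshold_loss(logits, positive, th_idx, K=1.0):
    """Adaptive-threshold loss with sigmoid down-weighting (sketch).

    logits:   scores per relation class, the class at th_idx being the
              threshold ("TH") class.
    positive: indices of the gold relation classes P.
    A positive class whose score is already far above TH gets weight
    sigmoid(K * (TH - logit_r)) ~ 0, so training focuses on the classes
    that are not yet well separated from the threshold.
    """
    th = logits[th_idx]
    negative = [i for i in range(len(logits)) if i != th_idx and i not in positive]
    # L1: each positive class against the positives plus TH, down-weighted when easy
    pool1 = [logits[i] for i in positive] + [th]
    l1 = sum(-sigmoid(K * (th - logits[r])) * log_softmax_of(logits[r], pool1)
             for r in positive)
    # L2: the threshold class against the negatives plus TH
    pool2 = [logits[i] for i in negative] + [th]
    l2 = -log_softmax_of(th, pool2)
    return l1 + l2
```

A well-separated positive class (logit 10 against threshold 0) yields a near-zero loss, while an unseparated one (all logits 0) does not, so optimization shifts toward the categories the model has not yet learned.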
Meanwhile, in document-level relation extraction, the relationship between two related entities is mainly reflected in a few characters of the article, which can serve as evidence that the two entities belong to this relationship; if such evidence can be found, it is of great help to the final prediction. Since current language models, trained on large amounts of text, have better semantic representation capabilities, introducing a language model into a downstream task improves the recognition results of that task, so current NLP tasks use a language model as the bottom layer to encode the text. One intermediate result of the language model is the relevance of each character to the other characters; through this intermediate result, the characters related to the two entities can be obtained, and the top-k characters by relevance are taken out as evidence for discriminating the relationship of the two entities.
In step S7, entity relationship recognition is performed on the text to be recognized through the trained entity relationship recognition model to obtain the relationship category of the entity pair. That is, after the model is trained based on this loss function, for a new article and entity pair, the relationship category of the entity pair can be predicted according to the above steps.
In the embodiments of this application, the constructed entity relationship recognition model is used to preprocess the training samples to obtain the character vector of each character of the training samples; the entity pairs of the training samples are processed according to the obtained character vectors to obtain the entity vector of each entity in all entity pairs of the training samples; weighted summation is performed on the word vectors of each entity of the entity pair through the weight matrix to obtain the extra feature vector of each entity of the entity pair; each entity vector of the entity pair and the extra feature vector are processed through the activation function to obtain the predicted probability of the relationship category of the entity pair; the predicted probability is processed through the constructed loss function to obtain the relationship-category loss function value; the relationship-category loss function value is cyclically obtained, and when it converges to the preset range the cyclic acquisition stops, completing the iterative training of the constructed entity relationship recognition model; entity relationship recognition is performed on the text to be recognized through the trained entity relationship recognition model to obtain the relationship category of the entity pair. The main purpose of this application is to cyclically train the entity relationship recognition model through the constructed loss function, so as to solve the existing problem of poor recognition performance in the entity recognition process.
As shown in Figure 2, it is a functional module diagram of the entity relationship recognition apparatus of this application. The entity relationship recognition apparatus 100 of this application can be installed in an electronic device. According to the implemented functions, the entity relationship recognition apparatus 100 may include: a character vector acquisition module 101, an entity vector acquisition module 102, an extra feature vector acquisition module 103, a prediction vector acquisition module 104, a relationship-category loss function value acquisition module 105, a model training completion module 106, and a relationship category recognition module 107. The modules of this application, which may also be called units, are a series of computer program segments that can be executed by the processor of the electronic device and can complete fixed functions, and are stored in the memory of the electronic device.
In this embodiment, the functions of the modules/units are as follows:
the character vector acquisition module 101 is configured to preprocess training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
the entity vector acquisition module 102 is configured to process the entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
the extra feature vector acquisition module 103 is configured to perform weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
the prediction vector acquisition module 104 is configured to process each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
the relationship-category loss function value acquisition module 105 is configured to process the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
the model training completion module 106 is configured to cyclically obtain the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
the relationship category recognition module 107 is configured to perform entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
In the embodiments of this application, the constructed entity relationship recognition model is used to preprocess the training samples to obtain the character vector of each character; the entity pairs of the training samples are processed according to the obtained character vectors to obtain the entity vector of each entity in all entity pairs; weighted summation is performed on the word vectors of each entity of the entity pair through the weight matrix to obtain the extra feature vector of each entity; each entity vector of the entity pair and the extra feature vector are processed through the activation function to obtain the predicted probability of the relationship category of the entity pair; the predicted probability is processed through the constructed loss function to obtain the relationship-category loss function value; the relationship-category loss function value is cyclically obtained, and when it converges to the preset range the cyclic acquisition stops, completing the iterative training of the constructed entity relationship recognition model; entity relationship recognition is performed on the text to be recognized through the trained model to obtain the relationship category of the entity pair. The main purpose of this application is to cyclically train the entity relationship recognition model through the constructed loss function, so as to solve the existing problem of poor recognition performance in the entity recognition process.
As shown in Figure 3, it is a schematic structural diagram of an electronic device implementing the entity relationship recognition method of this application.
The electronic device 1 may include a processor 10, a memory 11, and a bus, and may also include a computer program stored in the memory 11 and runnable on the processor 10, such as the entity relationship recognition program 12.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memories (e.g., SD or DX memory), magnetic memories, magnetic disks, optical disks, etc. In some embodiments, the memory 11 may be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. In other embodiments, the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in removable hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 can be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of a data audit program, but also to temporarily store data that has been output or will be output. The memory can store content that can be displayed by the electronic device or sent to other devices (e.g., headphones) to be displayed or played by them. The memory can also store content received from other devices; this content from other devices can be displayed, played, or used by the electronic device to perform any necessary tasks or operations implementable by a computer processor or other components in the electronic device and/or a wireless access point.
In some embodiments, the processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or multiple integrated circuits with the same or different functions packaged together, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, etc. The processor 10 is the control unit of the electronic device; it uses various interfaces and lines to connect the components of the entire electronic device, and executes the various functions of the electronic device 1 and processes data by running or executing programs or modules stored in the memory 11 (for example, a data audit program) and calling data stored in the memory 11. The electronic device may also include a chipset (not shown) for controlling communications between the one or more processors and one or more of the other components of the user device. In particular embodiments, the electronic device may be based on a particular architecture, and the processor and chipset may come from a corresponding family of processors and chipsets. The one or more processors 104 may also include one or more application-specific integrated circuits (ASICs) or application-specific standard products (ASSPs) for handling specific data processing functions or tasks.
The bus may be a peripheral component interconnect (PCI) bus or an extended industry standard architecture (EISA) bus, etc. The bus can be divided into an address bus, a data bus, a control bus, and so on. The bus is configured to realize connection and communication between the memory 11, the at least one processor 10, and the like.
In addition, the network and I/O interfaces may include one or more communication interfaces or network interface devices to provide for data transfer between the electronic device and other devices (e.g., web servers) via a network (not shown). Communication interfaces may include, but are not limited to: a body area network (BAN), a personal area network (PAN), a wired local area network (LAN), a wireless local area network (WLAN), a wireless wide area network (WWAN), and the like. The user equipment may be coupled to the network via a wired connection. However, the wireless system interface may include hardware or software to broadcast and receive messages using the Wi-Fi Direct standard and/or the IEEE 802.11 wireless standard, the Bluetooth standard, the Bluetooth Low Energy standard, the Wi-Gig standard, and/or any other wireless standards and/or combinations thereof.
A wireless system may include a transmitter and a receiver, or a transceiver capable of operating over the wide range of operating frequencies governed by the IEEE 802.11 wireless standard. Communication interfaces may utilize acoustic, radio frequency, optical, or other signals to exchange data between the electronic device and other devices such as access points, hosts, servers, routers, reading devices, and the like. The network may include, but is not limited to, the Internet, private networks, virtual private networks, wireless wide area networks, local area networks, metropolitan area networks, telephone networks, and the like.
The display may include, but is not limited to, a liquid crystal display, a light-emitting diode display, or an E-Ink™ display manufactured by E Ink Corp. of Cambridge, Massachusetts. The display can be used to display content to the user in the form of text, images, or video. In particular instances, the display can also operate as a touch-screen display, which can enable a user to initiate commands or operations by touching the screen with certain fingers or gestures.
Figure 3 only shows an electronic device with certain components. Those skilled in the art can understand that the structure shown in Figure 3 does not constitute a limitation on the electronic device 3, which may include fewer or more components than those illustrated, a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may also include a power supply (such as a battery) for supplying power to the components. Preferably, the power supply can be logically connected to the at least one processor 10 through a power management device, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management device. The power supply may also include one or more DC or AC power supplies, recharging devices, power failure detection circuits, power converters or inverters, power status indicators, and other arbitrary components. The electronic device 1 may also include various sensors, a Bluetooth module, a Wi-Fi module, etc., which will not be repeated here.
Further, the electronic device 1 may also include a network interface; optionally, the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface, a Bluetooth interface, etc.), which is usually used to establish a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may also include a user interface, which may be a display or an input unit (such as a keyboard); optionally, the user interface may also be a standard wired interface or a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (organic light-emitting diode) touch device, etc. The display may also be appropriately called a display screen or a display unit, and is used for displaying information processed in the electronic device 1 and for displaying a visualized user interface.
It should be understood that the embodiments are for illustration only, and the scope of the patent application is not limited by this structure.
The entity relationship recognition program 12 stored in the memory 11 of the electronic device 1 is a combination of multiple instructions; when running in the processor 10, it can realize:
preprocessing training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
processing the entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
processing the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
cyclically obtaining the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
performing entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
Specifically, for the specific implementation method of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to Figure 1, which will not be repeated here. It should be emphasized that, to further ensure the privacy and security of the similarity semantic processing results of each group of semantic units to be processed mentioned above, the similarity semantic processing of each group of semantic units to be processed can also be stored in a node of a blockchain.
Further, if the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
In the embodiments of this application, a computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by the processor in the electronic device to implement the steps of the above entity relationship recognition method; the specific method is as follows:
preprocessing training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
processing the entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
processing the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
cyclically obtaining the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
performing entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
In the several embodiments provided in this application, it should be understood that the disclosed device, apparatus, and method can be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the modules is only a logical function division, and there may be other division methods in actual implementation.
The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical units; that is, they may be located in one place, or distributed to multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional modules in the embodiments of this application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit. The above integrated units can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
Certain embodiments of this application are described above with reference to block diagrams and flowcharts of systems and methods and/or computer program products according to exemplary embodiments of this application. It should be understood that one or more blocks of the block diagrams and flowcharts, and combinations of blocks therein, can each be implemented by computer-executable program instructions. Likewise, according to some embodiments of this application, some blocks of the block diagrams and flowcharts may not necessarily be executed in the order presented, or may even not need to be executed at all.
These computer-executable program instructions can be loaded into a general-purpose computer, special-purpose computer, processor, or other programmable data processing device to produce a specific machine, so that the instructions executed on the computer, processor, or other programmable data processing device create a component for implementing one or more functions specified in a flowchart block or blocks. These computer program products can also be stored in a computer-readable memory, which can instruct a computer or other programmable data processing apparatus to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture that includes an instruction component implementing one or more functions specified in a flowchart block or blocks. For example, the embodiments of this application may provide a computer program product, which includes a computer-usable medium having computer-readable program code or program instructions embodied therein, the computer-readable program code being adapted to be executed to realize one or more functions specified in a flowchart block or blocks. Computer program instructions can also be loaded onto a computer or other programmable data processing device to cause a series of operational elements or steps to be executed on the computer or other programmable device to generate a computer-implemented program, such that the instructions executed on the computer or other programmable device provide elements or steps for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks in the block diagrams or flowcharts support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It should also be understood that each block of the block diagrams and flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based computer system that performs the specified functions, elements, or steps, or by a combination of dedicated hardware and computer instructions.
The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database, a series of data blocks generated in association using cryptographic methods; each data block contains a batch of network transaction information, used to verify the validity of its information (anti-counterfeiting) and generate the next block. The blockchain can include the underlying blockchain platform, the platform product service layer, and the application service layer.
For those skilled in the art, it is obvious that this application is not limited to the details of the above exemplary embodiments, and can be implemented in other specific forms without departing from the spirit or basic characteristics of this application.
Therefore, from any point of view, the embodiments should be regarded as exemplary and non-limiting. The scope of this application is defined by the appended claims rather than by the above description, and it is therefore intended that all changes falling within the meaning and scope of the equivalent elements of the claims are included in this application. Any reference signs in the claims should not be regarded as limiting the claims involved.
Although certain embodiments of this application have been described in conjunction with what are currently considered the most practical and various embodiments, it should be understood that this application is not limited to the disclosed embodiments, but is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims. Although specific terms are used herein, they are used in a general and descriptive sense only and not for purposes of limitation.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application and not to limit them. Although this application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of this application can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of this application.

Claims (20)

  1. An entity relationship recognition method, applied to an electronic device, wherein the method includes:
    preprocessing training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
    processing entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
    performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
    processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
    processing the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
    cyclically obtaining the relationship-category loss function value until the relationship-category loss function value converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
    performing entity relationship recognition on a text to be recognized through the trained entity relationship recognition model.
  2. The entity relationship recognition method according to claim 1, wherein preprocessing the training samples by using the pre-built entity relationship recognition model to obtain the character vector of each character in the training samples includes:
    obtaining a target character in the training samples and determining an initial character vector corresponding to the target character;
    determining, according to the initial character vector, an image feature vector, a radical feature vector, and a pinyin feature vector corresponding to the target character;
    generating the character vector corresponding to the target character according to the initial character vector, the image feature vector, the radical feature vector, the pinyin feature vector, and a preset weight matrix.
  3. The entity relationship recognition method according to claim 1, wherein processing the entity pairs of the training samples according to the obtained character vectors to obtain the entity vector of each entity in all entity pairs of the training samples includes:
    obtaining a word vector of each word in the training samples according to the obtained character vectors;
    obtaining the entity vectors in the training samples according to the obtained word vectors.
  4. The entity relationship recognition method according to claim 3, wherein performing weighted summation on the word vectors of each entity of the entity pair through the preset weight matrix to obtain the extra feature vector of each entity of the entity pair includes:
    processing the entity vector of the head entity in the entity pair through the preset weight matrix to obtain the weight of the head entity in the entity pair;
    processing the entity vector of the tail entity in the entity pair through the preset weight matrix to obtain the weight of the tail entity in the entity pair;
    obtaining the extra feature vector of each entity of the entity pair according to the weight of the head entity in the entity pair and the weight of the tail entity in the entity pair.
  5. The entity relationship recognition method according to claim 1, wherein processing each entity vector of the entity pair and the extra feature vector through the activation function to obtain the predicted probability of the relationship category of the entity pair includes:
    processing each entity vector of the entity pair and the extra feature vector respectively through the tanh activation function to obtain the tanh result value of each entity;
    multiplying the obtained tanh result values of the entities and then feeding the product into the sigmoid function to obtain the predicted probability of the relationship category of the entity pair.
  6. The entity relationship recognition method according to claim 1, wherein cyclically obtaining the relationship-category loss function value and, when the relationship-category loss function value converges to the preset range, stopping the cyclic acquisition of the relationship-category loss function value, so as to complete the iterative training of the constructed entity relationship recognition model, includes:
    when the relationship-category loss function value is lower than or equal to a preset category value, continuing to cyclically obtain the relationship-category loss function value;
    when the relationship-category loss function value is higher than the preset category value, stopping the cyclic acquisition of the relationship-category loss function value, thereby completing the training of the entity relationship recognition model.
  7. The entity relationship recognition method according to claim 6, wherein the constructed loss function is: L = L_1 + L_2, where
    L_1 = -Σ_{r∈P} σ(K(TH - logit_r)) · log( exp(logit_r) / Σ_{r'∈P∪{TH}} exp(logit_{r'}) )
    L_2 = -Σ_{r∈N} σ(K(logit_r - TH)) · log( exp(logit_TH) / Σ_{r'∈N∪{TH}} exp(logit_{r'}) )
    where L denotes the loss function; L_1 denotes the loss function for one entity of the entity pair; L_2 denotes the loss function for the other entity of the entity pair; P denotes the set of categories to which the entity pair belongs; N denotes the categories not belonging to the entity pair; TH denotes the preset category value; σ denotes the sigmoid function; r denotes a certain relationship; r' denotes another relationship; and K denotes a parameter.
  8. An entity relationship recognition apparatus, wherein the apparatus includes:
    a character vector acquisition module, configured to preprocess training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
    an entity vector acquisition module, configured to process the entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
    an extra feature vector acquisition module, configured to perform weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
    a prediction vector acquisition module, configured to process each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
    a relationship-category loss function value acquisition module, configured to process the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
    a model training completion module, configured to cyclically obtain the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
    a relationship category recognition module, configured to perform entity relationship recognition on the text to be recognized through the trained entity relationship recognition model.
  9. An electronic device, wherein the electronic device includes:
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the steps of an entity relationship recognition method, wherein
    the entity relationship recognition method includes:
    preprocessing training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
    processing entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
    performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
    processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
    processing the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
    cyclically obtaining the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
    performing entity relationship recognition on a text to be recognized through the trained entity relationship recognition model.
  10. The electronic device according to claim 9, wherein
    preprocessing the training samples by using the pre-built entity relationship recognition model to obtain the character vector of each character in the training samples includes:
    obtaining a target character in the training samples and determining an initial character vector corresponding to the target character;
    determining, according to the initial character vector, an image feature vector, a radical feature vector, and a pinyin feature vector corresponding to the target character;
    generating the character vector corresponding to the target character according to the initial character vector, the image feature vector, the radical feature vector, the pinyin feature vector, and a preset weight matrix.
  11. The electronic device according to claim 9, wherein
    processing the entity pairs of the training samples according to the obtained character vectors to obtain the entity vector of each entity in all entity pairs of the training samples includes:
    obtaining a word vector of each word in the training samples according to the obtained character vectors;
    obtaining the entity vectors in the training samples according to the obtained word vectors.
  12. The electronic device according to claim 11, wherein
    performing weighted summation on the word vectors of each entity of the entity pair through the preset weight matrix to obtain the extra feature vector of each entity of the entity pair includes:
    processing the entity vector of the head entity in the entity pair through the preset weight matrix to obtain the weight of the head entity in the entity pair;
    processing the entity vector of the tail entity in the entity pair through the preset weight matrix to obtain the weight of the tail entity in the entity pair;
    obtaining the extra feature vector of each entity of the entity pair according to the weight of the head entity in the entity pair and the weight of the tail entity in the entity pair.
  13. The electronic device according to claim 9, wherein
    processing each entity vector of the entity pair and the extra feature vector through the activation function to obtain the predicted probability of the relationship category of the entity pair includes:
    processing each entity vector of the entity pair and the extra feature vector respectively through the tanh activation function to obtain the tanh result value of each entity;
    multiplying the obtained tanh result values of the entities and then feeding the product into the sigmoid function to obtain the predicted probability of the relationship category of the entity pair.
  14. The electronic device according to claim 9, wherein
    cyclically obtaining the relationship-category loss function value and, when the relationship-category loss function value converges to the preset range, stopping the cyclic acquisition of the relationship-category loss function value, so as to complete the iterative training of the constructed entity relationship recognition model, includes:
    when the relationship-category loss function value is lower than or equal to a preset category value, continuing to cyclically obtain the relationship-category loss function value;
    when the relationship-category loss function value is higher than the preset category value, stopping the cyclic acquisition of the relationship-category loss function value, thereby completing the training of the entity relationship recognition model.
  15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements an entity relationship recognition method, wherein
    the entity relationship recognition method includes:
    preprocessing training samples by using a pre-built entity relationship recognition model to obtain a character vector of each character in the training samples;
    processing entity pairs of the training samples according to the obtained character vectors to obtain an entity vector of each entity in all entity pairs of the training samples, wherein an entity pair consists of any two entities of the training samples;
    performing weighted summation on the word vectors of each entity of the entity pair through a preset weight matrix to obtain an extra feature vector of each entity of the entity pair;
    processing each entity vector of the entity pair and the extra feature vector through an activation function to obtain the predicted probability of the relationship category of the entity pair;
    processing the predicted probability of the relationship category of the entity pair through a pre-built loss function to obtain a relationship-category loss function value;
    cyclically obtaining the relationship-category loss function value until it converges to a preset range, so as to complete the iterative training of the entity relationship recognition model;
    performing entity relationship recognition on a text to be recognized through the trained entity relationship recognition model.
  16. The computer-readable storage medium according to claim 15, wherein
    preprocessing the training samples by using the pre-built entity relationship recognition model to obtain the character vector of each character in the training samples includes:
    obtaining a target character in the training samples and determining an initial character vector corresponding to the target character;
    determining, according to the initial character vector, an image feature vector, a radical feature vector, and a pinyin feature vector corresponding to the target character;
    generating the character vector corresponding to the target character according to the initial character vector, the image feature vector, the radical feature vector, the pinyin feature vector, and a preset weight matrix.
  17. The computer-readable storage medium according to claim 15, wherein
    processing the entity pairs of the training samples according to the obtained character vectors to obtain the entity vector of each entity in all entity pairs of the training samples includes:
    obtaining a word vector of each word in the training samples according to the obtained character vectors;
    obtaining the entity vectors in the training samples according to the obtained word vectors.
  18. The computer-readable storage medium according to claim 17, wherein
    performing weighted summation on the word vectors of each entity of the entity pair through the preset weight matrix to obtain the extra feature vector of each entity of the entity pair includes:
    processing the entity vector of the head entity in the entity pair through the preset weight matrix to obtain the weight of the head entity in the entity pair;
    processing the entity vector of the tail entity in the entity pair through the preset weight matrix to obtain the weight of the tail entity in the entity pair;
    obtaining the extra feature vector of each entity of the entity pair according to the weight of the head entity in the entity pair and the weight of the tail entity in the entity pair.
  19. The computer-readable storage medium according to claim 15, wherein
    processing each entity vector of the entity pair and the extra feature vector through the activation function to obtain the predicted probability of the relationship category of the entity pair includes:
    processing each entity vector of the entity pair and the extra feature vector respectively through the tanh activation function to obtain the tanh result value of each entity;
    multiplying the obtained tanh result values of the entities and then feeding the product into the sigmoid function to obtain the predicted probability of the relationship category of the entity pair.
  20. The computer-readable storage medium according to claim 15, wherein
    cyclically obtaining the relationship-category loss function value and, when the relationship-category loss function value converges to the preset range, stopping the cyclic acquisition of the relationship-category loss function value, so as to complete the iterative training of the constructed entity relationship recognition model, includes:
    when the relationship-category loss function value is lower than or equal to a preset category value, continuing to cyclically obtain the relationship-category loss function value;
    when the relationship-category loss function value is higher than the preset category value, stopping the cyclic acquisition of the relationship-category loss function value, thereby completing the training of the entity relationship recognition model.
PCT/CN2022/089938 2022-01-14 2022-04-28 Entity relationship recognition method, device, and readable storage medium WO2023134069A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210042332.0 2022-01-14
CN202210042332.0A CN114385817A (zh) 2022-01-14 2022-01-14 Entity relationship recognition method, device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2023134069A1 true WO2023134069A1 (zh) 2023-07-20

Family

ID=81201077

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/089938 WO2023134069A1 (zh) 2022-01-14 2022-04-28 实体关系的识别方法、设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN114385817A (zh)
WO (1) WO2023134069A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114385817A (zh) * 2022-01-14 2022-04-22 平安科技(深圳)有限公司 实体关系的识别方法、设备及可读存储介质
CN116028880B (zh) * 2023-02-07 2023-07-04 Alipay (Hangzhou) Information Technology Co., Ltd. Method for training a behavior intention recognition model, behavior intention recognition method, and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522557A (zh) * 2018-11-16 2019-03-26 Sun Yat-sen University Training method and apparatus for a text relation extraction model, and readable storage medium
US20200218744A1 (en) * 2019-01-07 2020-07-09 International Business Machines Corporation Extracting entity relations from semi-structured information
CN111814469A (zh) * 2020-07-13 2020-10-23 Beijing University of Posts and Telecommunications Relation extraction method and apparatus based on a tree capsule network
CN113128203A (zh) * 2021-03-30 2021-07-16 Beijing University of Technology Attention-mechanism-based relation extraction method, system, device, and storage medium
WO2021174774A1 (zh) * 2020-07-30 2021-09-10 Ping An Technology (Shenzhen) Co., Ltd. Neural network relation extraction method, computer device, and readable storage medium
CN114385817A (zh) * 2022-01-14 2022-04-22 Ping An Technology (Shenzhen) Co., Ltd. Entity relationship recognition method, device, and readable storage medium


Also Published As

Publication number Publication date
CN114385817A (zh) 2022-04-22

Similar Documents

Publication Publication Date Title
US20230016365A1 (en) Method and apparatus for training text classification model
CN111897964B Text classification model training method, apparatus, device, and storage medium
US11544550B2 (en) Analyzing spatially-sparse data based on submanifold sparse convolutional neural networks
WO2022022421A1 Language representation model ***, pre-training method, apparatus, device, and medium
US20200293874A1 (en) Matching based intent understanding with transfer learning
WO2023134069A1 Entity relationship recognition method, device, and readable storage medium
CN111602147A Machine learning model based on non-local neural networks
CN112131366A Method, apparatus, and storage medium for training a text classification model and for text classification
WO2021169347A1 Method and apparatus for extracting text keywords
CN112800292B Cross-modal retrieval method based on modality-specific and shared feature learning
WO2022227211A1 Bert-based multi-intent recognition method for passages, device, and readable storage medium
WO2020073533A1 Automatic question answering method and apparatus
US20240185602A1 (en) Cross-Modal Processing For Vision And Language
WO2021139316A1 Method and apparatus for establishing an expression recognition model, computer device, and storage medium
US20220230061A1 (en) Modality adaptive information retrieval
CN113761190A Text recognition method, apparatus, computer-readable medium, and electronic device
WO2023226309A1 Model training method and related device
CN113761887A Matching method and apparatus based on text processing, computer device, and storage medium
Zhang Voice keyword retrieval method using attention mechanism and multimodal information fusion
CN118035945B Processing method for a label recognition model and related apparatus
CN114662484A Semantic recognition method and apparatus, electronic device, and readable storage medium
CN116821307B Content interaction method and apparatus, electronic device, and storage medium
WO2023173554A1 Method and apparatus for recognizing non-compliant agent scripts, electronic device, and storage medium
CN114510942A Method for obtaining entity words, model training method, apparatus, and device
CN113850078A Machine-learning-based multi-intent recognition method, device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22919710

Country of ref document: EP

Kind code of ref document: A1