WO2022083093A1 - Probability calculation method and apparatus in a graph, computer device, and storage medium - Google Patents

Probability calculation method and apparatus in a graph, computer device, and storage medium

Info

Publication number
WO2022083093A1
WO2022083093A1 (PCT/CN2021/090491)
Authority
WO
WIPO (PCT)
Prior art keywords
probability
variable
node
neural network
information
Prior art date
Application number
PCT/CN2021/090491
Other languages
English (en)
French (fr)
Inventor
白祚
罗炳峰
莫洋
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2022083093A1 publication Critical patent/WO2022083093A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present application relates to the field of artificial intelligence, and in particular to a probability calculation method and apparatus in a graph, a computer device, and a storage medium.
  • As the number of managed variables grows, the code logic becomes very complex, so such systems can often maintain only a limited number of variables, which limits the richness of the generated text.
  • Hard-coded variable sampling usually follows a topological ordering, so it can only model dependency networks that form a directed acyclic graph and cannot model cyclic dependencies.
  • Because the constraints between variables are hard-coded in the system, the reusability of such systems is often low, resulting in high development costs, which in turn limits the development of such applications.
  • The purpose of the embodiments of the present application is to propose a probability calculation method and apparatus in a graph, a computer device, and a storage medium, so as to reduce the complexity of variable management and thereby reduce development costs.
  • An embodiment of the present application provides a probability calculation method in a graph, adopting the following technical solution:
  • constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph;
  • acquiring the target node input by the user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, where an associated node is an upper-level node connected to the target node;
  • acquiring the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
  • calculating the probability of each variable in the target node according to the at least one piece of variable information.
  • The step of constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph, specifically includes:
  • connecting the nodes in sequence to obtain a probability graph.
  • The step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically includes:
  • calculating the probability of each variable in the target node according to the first number of cases and the second number of cases.
  • The step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
  • outputting the probability of each variable in the target node to the user.
  • The step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
  • outputting the probability of each variable in the target node to the user.
  • Before the step of outputting the probability of each variable in the target node through the trained neural network, the method further includes training the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n), where σ is the activation function, w_k^n is the weight obtained by training the k-th neuron of the n-th layer of the target neural network model's multilayer perceptron according to the output of the (n-1)-th layer, b_k^n is the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) denotes the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input.
  • After the step of constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph, the method further includes:
  • storing the connection relationships between the multiple nodes in the blockchain.
  • An embodiment of the present application also provides a probability calculation apparatus in a graph, adopting the following technical solution:
  • an acquisition module, used to construct a probability graph according to the information relationships in nodes and obtain the connection relationships between multiple nodes in the probability graph;
  • a determination module, configured to acquire the target node input by the user and determine at least one associated node of the target node according to the connection relationships between the multiple nodes, where an associated node is an upper-level node connected to the target node;
  • an information acquisition module, used to acquire the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
  • a probability calculation module, configured to calculate the probability of each variable in the target node according to the at least one piece of variable information.
  • An embodiment of the present application also provides a computer device, adopting the following technical solution:
  • a computer device comprising at least one connected processor, a memory, and an input-output unit, wherein the memory is used to store computer-readable instructions, and the processor is used to invoke the computer-readable instructions in the memory to execute the following steps: constructing a probability graph according to the information relationships in nodes; acquiring the target node input by the user and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, where an associated node is an upper-level node connected to the target node; acquiring the variable information input by the user for each of the associated nodes to obtain at least one piece of variable information; and calculating the probability of each variable in the target node according to the at least one piece of variable information.
  • The embodiments of the present application also provide a computer-readable storage medium, adopting the following technical solution: the storage medium stores computer-readable instructions which, when executed, construct a probability graph according to the information relationships in nodes; acquire the target node input by the user and determine at least one associated node of the target node according to the connection relationships between the multiple nodes, where an associated node is an upper-level node connected to the target node; acquire the variable information input by the user for each of the associated nodes to obtain at least one piece of variable information; and calculate the probability of each variable in the target node according to the at least one piece of variable information.
  • The probability graph is used for sampling, and it can also be used to conveniently calculate the probability of the target node based on the probabilities of the values of each node's variables. This yields the causal relationship behind the target node's probability, that is, the causal relationship between the target keyword of the text and its associated keywords, which reduces the system's debugging cost, lowers the complexity of variable management, and reduces development costs.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2-1 is a flowchart of an embodiment of the probability calculation method in a graph according to the present application;
  • FIG. 2-2 is a schematic diagram of a probability graph according to the probability calculation method in a graph of the present application;
  • FIG. 3 is a schematic structural diagram of an embodiment of the probability calculation apparatus in a graph according to the present application;
  • FIG. 4 is a schematic structural diagram of an embodiment of the computer device according to the present application.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various communication client applications may be installed on the terminal devices 101 , 102 and 103 , such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, and the like.
  • The terminal devices 101, 102, and 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptops, desktop computers, and the like.
  • the server 105 may be a server that provides various services, such as a background server that provides support for the pages displayed on the terminal devices 101 , 102 , and 103 .
  • the probability calculation method in the graph provided by the embodiment of the present application is generally executed by the server/terminal device, and correspondingly, the probability calculation device in the graph is generally set in the server/terminal device.
  • terminal devices, networks and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks and servers according to implementation needs.
  • The probability calculation method in a graph comprises the following steps:
  • Step 201: construct a probability graph according to the information relationships in nodes, and obtain the connection relationships between multiple nodes in the probability graph.
  • The electronic device on which the probability calculation method in a graph runs (for example, the server/terminal device shown in FIG. 1) can receive user requests from the server through a wired or wireless connection.
  • The above wireless connection methods may include, but are not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connection methods now known or developed in the future.
  • The values of a node's own variables are stored in the node.
  • Taking the guarantee responsibility as an example, values such as train accidents and elevator accidents are stored.
  • Related nodes are also stored: for example, the compensation amount is related to the disability status and the guarantee responsibility, and the disability status is related to the time and the guarantee responsibility.
  • A knowledge graph is constructed through these related node relationships.
  • Step 202: Obtain the target node input by the user, and determine at least one associated node of the target node according to the connection relationships among the multiple nodes, where an associated node is an upper-level node connected to the target node.
  • The target node is the node whose probability is to be calculated;
  • an associated node is an upper-level node connected to the target node in the graph.
  • The nodes linked to the disability status are the guarantee responsibility and the event.
  • The associated node is the upper-level node connected to the target node.
  • The directed edges between nodes represent the dependencies between nodes.
  • A node's incoming edges indicate the variables its value depends on, and its outgoing edges indicate which variables' values depend on the current node.
  • The value of each output node depends on the values of its input nodes: the inputs determine the output, and the output depends on the values of the inputs.
  • For example, a directed edge runs from "guarantee responsibility" to "event".
  • The value of each target node in the probabilistic graphical model depends on the values of its associated nodes.
  • The topology of the probabilistic graphical model hides the details of the constraint relationships, such as the variable-level details between the elevator falling and grade 1-10 disability or total disability, and more vividly shows the dependency paths between the variables.
  • Step 203: Obtain the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information.
  • Each variable has a value.
  • The value of each node obeys a specific conditional probability, and the conditioning variables of that conditional probability are the variables corresponding to the current variable's incoming edges.
  • The variable "disability status" has two incoming edges, corresponding to the two variables "guarantee responsibility" and "event"; the value of "disability status" therefore depends on the conditional probability P(disability status | guarantee responsibility, event).
  • The conditional probability distribution of each node in the probability graph describes how the variable values affect one another: when a variable's value in one node changes, the associated node correspondingly affects the value probabilities of its target node's variables according to the conditional probability distribution.
  • Step 204: Calculate the probability of each variable in the target node according to the at least one piece of variable information.
  • When the number of historical cases reaches a preset value, the probability is calculated from the number of historical cases.
  • Otherwise, if a probability relationship is preset, the probability value is calculated through the probability relationship; if no probability relationship is preset, the probability is calculated through a neural network.
  • The probability graph is used for sampling, and it can also be used to conveniently calculate the probability of the target node based on the probabilities of the values of each node's variables. This yields the causal relationship behind the target node's probability, that is, the causal relationship between the target keyword of the text and its associated keywords, which reduces the system's debugging cost, lowers the complexity of variable management, and reduces development costs.
  • The step of constructing a probability graph according to the information relationships in nodes, and obtaining the connection relationships between multiple nodes in the probability graph, specifically includes:
  • Node information stores the information of all nodes and the relationships between all nodes. It can be stored on a hard disk or in a corresponding database, with the node information arranged in a table in order; the node information stored on the hard disk or in the database is then retrieved.
  • An ID is generated for each storage address corresponding to the node information, yielding multiple IDs; the memory address of each piece of node information in set A is found. With large amounts of data, one piece of node information can be stored at multiple memory addresses, one memory address corresponding to one ID; with little data, one piece of node information can be stored at a single memory address corresponding to one ID. Taking a set A that stores 10 nodes in array form as an example, a[0] stores the first node's information, a[1] the second, a[2] the third, and so on.
  • The system determines the length of the array according to the number of array elements specified by the user. In the example above, 10 array elements are specified, so the system creates an array object of length 10. Once the array object is created, the length of the allocated memory cannot be changed; only the values of the data in memory can be changed.
  • The nodes are connected in sequence to obtain a probability graph: the obtained probability graph connects the nodes one by one in the order in which each node was retrieved.
  • The node information includes the association relationships between nodes, the variables in a node, and the values of the variables; the information relationship refers to the association relationship between nodes; and the order is the order of reading by ID.
  • In text generation (Data2Text) tasks, there are often complex dependencies between variables. For example, in the above example, some of the variables involved have the relationships shown in Figure 2-2.
  • The values of the input variables must conform to the inherent logical constraints between the variables.
  • For example, to help customers understand insurance terms, given the disease a customer wants to know about, the other correlated variables must be sampled in order to generate claim cases that aid understanding.
  • The variables used for text generation and the constraints between them are managed using the probability graph, so the reasonableness of variable values can easily be checked and variable values can be sampled according to the constraint relationships.
  • The event is a node;
  • the elevator falling and the train derailment in the node are this node's variable information, that is, the internal variable and the corresponding variable values.
  • When the variable's value is 1, it denotes the elevator falling; when the variable's value is 2, the train derails.
  • The step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically includes:
  • calculating the probability of each variable in the target node according to the first number of cases and the second number of cases.
  • The input information is the values of nodes.
  • For example, to calculate the compensation amount, the input information is the disability status and the guarantee responsibility.
  • When the risk cause needs to be calculated, the guarantee responsibility must be input; the values of each dependency variable and the corresponding values of the current variable are enumerated. For example, 1,000 train derailments may be recorded (the first number of cases), of which 300 caused blindness in one eye.
  • Taking Figure 2-2 as an example, the compensation amount (target node) is associated with the guarantee responsibility (associated node) and the disability status (associated node). Suppose the guarantee responsibility is preset to the (variable value) train accident and the disability status to the (variable value) total disability; then among the historical cases the compensation amount takes the (variable value) 1,000,000 yuan in 60% of cases, the (variable value) 900,000 yuan in 10%, and the (variable value) 1,100,000 yuan in 30%. The user's input information is acquired, and when the user's input is total disability and a train accident, the output compensation amount is 1,000,000 yuan with probability 60%, 900,000 yuan with 10%, and 1,100,000 yuan with 30%.
  • In this way, the probability between nodes can be calculated in a preset manner, thereby improving the accuracy of the probability calculation.
  • The step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
  • outputting the probability of each variable in the target node to the user.
  • The value of each variable is used as a parameter of the probability distribution that the values of the variables in the target node obey, and the current variable is then sampled from that distribution.
  • For example, suppose the associated node of the target node "temperature displayed by sensor" is "true temperature",
  • and the target node and associated node obey the normal distribution P(temperature displayed by sensor | true temperature) = Norm(true temperature, 1);
  • the true temperature is then a parameter of the probability distribution that "temperature displayed by sensor" obeys.
  • When a true temperature is input, a probability distribution over the temperature displayed by the sensor is obtained.
  • The probability relationship refers to the functional relationship between two variables.
  • For example, high blood pressure is related to age.
  • In this way, the probability between nodes can be calculated by means of a probability function, thereby improving the accuracy of the probability calculation.
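  • To make the parameterized-distribution case concrete, the following minimal Python sketch samples the sensor reading given a true temperature. The Norm(true temperature, 1) form is taken from the example above; the function name is an illustrative assumption, not part of the application.

```python
import random

def displayed_temperature_distribution(true_temperature: float):
    """P(displayed temperature | true temperature) = Norm(true_temperature, 1):
    the associated node's value is a parameter (the mean) of the target
    node's distribution."""
    mean, std_dev = true_temperature, 1.0
    return lambda: random.gauss(mean, std_dev)

# Given a true temperature, sample the sensor's displayed temperature.
sample = displayed_temperature_distribution(25.0)
print([round(sample(), 2) for _ in range(3)])  # three values near 25
```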
  • The step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
  • outputting the probability of each variable in the target node to the user.
  • The value of the current variable is the output of a model
  • whose input is the variables that the current node depends on. Suppose the variable "diabetes" depends on the variables "age" and "weight". Then P(diabetes | age, weight) = f(age, weight),
  • where f(age, weight) is a function whose inputs are "age" and "weight" and whose output is the probability of having diabetes.
  • For example, in the logistic regression form p = e^(w1*age + w2*weight) / (1 + e^(w1*age + w2*weight)), w1 and w2 are weights.
  • In this way, the probability between nodes can be calculated by means of an artificial intelligence model, thereby improving the accuracy of the probability calculation.
  • When the number of historical cases reaches a preset value, the probability is calculated from the number of historical cases.
  • Before the step of outputting the probability of each variable in the target node through the trained neural network, the method further includes training the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n), where σ is the activation function,
  • f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model,
  • i is any positive integer,
  • n is a natural number; when n is the last layer of the target neural network model,
  • f_i^n refers to the output of the target neural network model,
  • and f_i^(n-1) indicates the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input.
  • The training data are the values of each input node,
  • and the label is the value of the corresponding target node.
  • The guarantee responsibility is a train accident,
  • the disability status is total disability,
  • and the label is 1,000,000 yuan, 900,000 yuan, or 1,100,000 yuan.
  • Variable D can take the values d1, d2, and d3, and the values of variables A, B, and C are 0.1, 0.5, and 0.3, respectively; the input vector [0.1, 0.5, 0.3] passes through several fully connected layers and a softmax layer to yield the probability distribution over D's three values.
  • The method further includes:
  • storing the connection relationships between the multiple nodes in the blockchain.
  • The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms.
  • A blockchain is essentially a decentralized database: a chain of data blocks produced in association with one another using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • The blockchain can include the underlying blockchain platform, the platform product service layer, and the application service layer.
  • The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), or the like.
  • The present application provides an embodiment of a probability calculation apparatus in a graph, which corresponds to the method embodiment shown in FIG. 2-1.
  • The apparatus can be specifically applied to various electronic devices.
  • The probability calculation apparatus 300 in a graph includes: an acquisition module 301, a determination module 302, an information acquisition module 303, and a probability calculation module 304. Specifically:
  • the acquisition module 301 is used to construct a probability graph according to the information relationships in nodes and obtain the connection relationships between multiple nodes in the probability graph;
  • the determination module 302 is configured to acquire the target node input by the user and determine at least one associated node of the target node according to the connection relationships between the multiple nodes, where an associated node is an upper-level node connected to the target node;
  • the information acquisition module 303 is configured to acquire the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
  • the probability calculation module 304 is configured to calculate the probability of each variable in the target node according to the at least one piece of variable information.
  • The above apparatus reduces the complexity of variable management, thereby reducing development costs.
  • The acquisition module includes a set storage submodule, an address generation submodule, a memory reading submodule, and a connection submodule.
  • The set storage submodule is used to acquire node information;
  • the address generation submodule is used to generate an ID for each storage address corresponding to the node information, to obtain multiple IDs;
  • the memory reading submodule is used to read, in ascending order of the IDs, the node information and the information relationships of the nodes stored at the storage address corresponding to each ID;
  • the connection submodule is used to connect the nodes in sequence according to the node information and the information relationships of the nodes, to obtain a probability graph.
  • The information acquisition module includes a first statistics submodule, a second statistics submodule, and a probability calculation submodule.
  • The first statistics submodule is used to count the number of historical cases for each value of each variable of the target node, to obtain the first number of cases;
  • the second statistics submodule counts the number of historical cases for each value of each variable of the associated nodes, to obtain the second number of cases;
  • the probability calculation submodule is configured to calculate the probability of each variable in the target node according to the first number of cases and the second number of cases.
  • The information acquisition module includes an association probability submodule, a probability calculation submodule, and a probability output submodule.
  • The association probability submodule is configured to obtain a probability relationship according to the connection relationship between the associated node and the target node;
  • the probability calculation submodule is configured to calculate the probability of each variable in the target node according to the at least one piece of variable information and the probability relationship of each variable;
  • the probability output submodule is used to output the probability of each variable in the target node to the user.
  • The information acquisition module includes a variable input submodule, a model input submodule, and a model output submodule.
  • The variable input submodule is used to input the at least one piece of variable information into the trained neural network;
  • the model input submodule is used to output the probability of each variable in the target node through the trained neural network;
  • the probability of each variable in the target node is output to the user.
  • The text splicing apparatus further includes a training data acquisition submodule, a training data input submodule, a training submodule, and a deployment submodule.
  • The training data acquisition submodule is used to acquire multiple pieces of training data and the labels corresponding to the training data;
  • the training data input submodule is used to input the training data and the corresponding labels into the initial neural network model;
  • the training submodule is used to train the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n) to obtain the target neural network model, where σ is the activation function, w_k^n represents the weight obtained by training the k-th neuron of the n-th layer of the multilayer perceptron of the target neural network model according to the output of the (n-1)-th layer of the multilayer perceptron, b_k^n represents the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) indicates the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input;
  • the deployment submodule is used to deploy the target neural network model.
  • The text splicing apparatus further includes a blockchain submodule.
  • The blockchain submodule is used to store the connection relationships between the multiple nodes in the blockchain.
  • FIG. 4 is a block diagram of the basic structure of the computer device of this embodiment.
  • The computer device 4 includes a memory 41, a processor 42, and a network interface 43 that communicate with one another through a system bus. It should be noted that only the computer device 4 with components 41-43 is shown in the figure, but it should be understood that implementing all of the shown components is not required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, and the like.
  • The computer device may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or other computing device.
  • The computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote control, a touch pad, or a voice control device.
  • The memory 41 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like.
  • The memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or internal memory of the computer device 4.
  • The memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the computer device 4.
  • The memory 41 may also include both the internal storage unit of the computer device 4 and its external storage device.
  • The memory 41 is generally used to store the operating system and various application software installed on the computer device 4, such as the computer-readable instructions of the probability calculation method in a graph.
  • The memory 41 can also be used to temporarily store various types of data that have been output or will be output.
  • The processor 42 may, in some embodiments, be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute the computer-readable instructions stored in the memory 41 or to process data, for example computer-readable instructions for executing the probability calculation method in a graph; the steps of the probability calculation method described above are executed, and the specific implementation will not be repeated here.
  • The network interface 43 may include a wireless network interface or a wired network interface, and is generally used to establish a communication connection between the computer device 4 and other electronic devices.
  • The present application also provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions executable by at least one processor, so as to cause the at least one processor to perform the steps of the probability calculation method in a graph as described above.
  • The methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation.
  • The technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or a CD-ROM), including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the various embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A probability calculation method in a graph, comprising: constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph; acquiring a target node input by a user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node; acquiring the variable information input by the user for each associated node, to obtain at least one piece of variable information; and calculating the probability of each variable in the target node according to the at least one piece of variable information. Blockchain technology is also involved: the connection relationships between the multiple nodes in the probability graph are stored in a blockchain. A probability calculation apparatus in a graph, a computer device, and a storage medium are also provided, so as to reduce development costs.

Description

Probability calculation method and apparatus in a graph, computer device, and storage medium
This application is based on, and claims priority from, Chinese invention patent application No. 202011150139.6, filed on October 23, 2020 and entitled "Probability calculation method and apparatus in a graph, computer device, and storage medium".
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a probability calculation method and apparatus in a graph, a computer device, and a storage medium.
Background
In the field of text generation, scenarios frequently arise in which text must be generated under given constraints: for example, given insurance claim clauses, generating claim cases that help users understand them; or, given a product's basic parameters and usage, generating descriptions of actual usage scenarios to stimulate users' desire to buy. These scenarios can usually be abstracted as follows: given variable definitions, the constraints between variables, and the values of certain variables, infer and sample the values of the unknown variables, and then generate the final text in a Data2Text manner. The inventors found, however, that traditional automatic text generation systems handle such problems by simply hard-coding the constraints between variables into the system, so they can usually handle only specific scenarios. On the one hand, because the code logic becomes very complex as the number of managed variables grows, such systems can often maintain only a limited number of variables, which limits the richness of the generated text. On the other hand, hard-coded variable sampling usually follows a topological ordering, so it can only model dependency networks that form a directed acyclic graph and cannot model cyclic dependencies. At the same time, because the constraints between variables are hard-coded into the system, the reusability of such systems is usually low, keeping development costs high and in turn limiting the development of such applications.
Summary
The purpose of the embodiments of the present application is to propose a probability calculation method and apparatus in a graph, a computer device, and a storage medium, so as to reduce the complexity of variable management and thereby reduce development costs.
To solve the above technical problem, an embodiment of the present application provides a probability calculation method in a graph, adopting the following technical solution:
constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph;
acquiring a target node input by a user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
acquiring the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
calculating the probability of each variable in the target node according to the at least one piece of variable information.
Further, the step of constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph, specifically includes:
acquiring node information;
generating an ID for each storage address corresponding to the node information, to obtain multiple IDs;
reading, in ascending order of the IDs, the node information and the information relationships of the nodes stored at the storage address corresponding to each ID;
connecting the nodes in sequence according to the node information and the information relationships of the nodes, to obtain a probability graph.
Further, the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically includes:
counting the number of historical cases for each value of each variable of the target node, to obtain a first number of cases;
counting the number of historical cases for each value of each variable of the associated nodes, to obtain a second number of cases;
calculating the probability of each variable in the target node according to the first number of cases and the second number of cases.
Further, the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
obtaining a probability relationship according to the connection relationship between the associated node and the target node;
calculating the probability of each variable in the target node according to the at least one piece of variable information and the probability relationship of each variable;
outputting the probability of each variable in the target node to the user.
Further, the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
inputting the at least one piece of variable information into a trained neural network;
outputting the probability of each variable in the target node through the trained neural network;
outputting the probability of each variable in the target node to the user.
Further, before the step of outputting the probability of each variable in the target node through the trained neural network, the method further includes:
acquiring multiple pieces of training data and the labels corresponding to the training data;
inputting the training data and the corresponding labels into the initial neural network model;
training the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n) to obtain the target neural network model, where σ is the activation function, w_k^n represents the weight obtained by training the k-th neuron of the n-th layer of the multilayer perceptron of the target neural network model according to the output of the (n-1)-th layer of the multilayer perceptron, b_k^n represents the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) represents the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input;
deploying the target neural network model.
Further, after the step of constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph, the method further includes:
storing the connection relationships between the multiple nodes in a blockchain.
To solve the above technical problem, an embodiment of the present application further provides a probability calculation apparatus in a graph, adopting the following technical solution:
an acquisition module, configured to construct a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph;
a determination module, configured to acquire a target node input by a user, and determine at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
an information acquisition module, configured to acquire the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
a probability calculation module, configured to calculate the probability of each variable in the target node according to the at least one piece of variable information.
To solve the above technical problem, an embodiment of the present application further provides a computer device, adopting the following technical solution:
a computer device comprising at least one connected processor, a memory, and an input-output unit, wherein the memory is configured to store computer-readable instructions, and the processor is configured to invoke the computer-readable instructions in the memory to execute the following steps of the probability calculation method in a graph:
constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph;
acquiring a target node input by a user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
acquiring the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
calculating the probability of each variable in the target node according to the at least one piece of variable information.
To solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, adopting the following technical solution:
a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the following steps of the probability calculation method in a graph:
constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph;
acquiring a target node input by a user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
acquiring the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
calculating the probability of each variable in the target node according to the at least one piece of variable information.
Compared with the prior art, the embodiments of the present application mainly have the following beneficial effects:
The present application uses the probability graph for sampling, and the probability graph can also be used to conveniently calculate the probability of the target node based on the probabilities of the values of each node's variables. In text analysis, the calculated probabilities can provide insight and assist in analyzing the values of each variable, thereby yielding the causal relationship behind the target node's probability, that is, the causal relationship between the target keyword of the text and its associated keywords. This reduces the system's debugging cost, lowers the complexity of variable management, and in turn reduces development costs.
Brief Description of the Drawings
To explain the solutions in the present application more clearly, the drawings needed in the description of its embodiments are briefly introduced below. Obviously, the drawings described below correspond to some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2-1 is a flowchart of an embodiment of the probability calculation method in a graph according to the present application;
Fig. 2-2 is a schematic diagram of a probability graph according to the probability calculation method in a graph of the present application;
Fig. 3 is a schematic structural diagram of an embodiment of the probability calculation apparatus in a graph according to the present application;
Fig. 4 is a schematic structural diagram of an embodiment of the computer device according to the present application.
Detailed Description
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification are only for the purpose of describing specific embodiments and are not intended to limit the present application. The terms "comprising" and "having", and any variations thereof, in the specification, claims, and the above description of the drawings are intended to cover a non-exclusive inclusion. The terms "first", "second", and the like in the specification, claims, or drawings are used to distinguish different objects, not to describe a specific order.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is the medium used to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be various electronic devices that have a display screen and support web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptops, desktop computers, and the like.
The server 105 may be a server that provides various services, for example a background server that provides support for the pages displayed on the terminal devices 101, 102, 103.
It should be noted that the probability calculation method in a graph provided by the embodiments of the present application is generally executed by the server/terminal device, and correspondingly, the probability calculation apparatus in a graph is generally arranged in the server/terminal device.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative; there may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to Fig. 2-1, a flowchart of an embodiment of the probability calculation method in a graph according to the present application is shown. The probability calculation method in a graph comprises the following steps:
Step 201: construct a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph.
In this embodiment, the electronic device on which the probability calculation method in a graph runs (for example, the server/terminal device shown in Fig. 1) can receive user requests from the server through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra wideband), and other wireless connection methods now known or developed in the future.
In this embodiment, a node stores the values of its own variables. Taking the guarantee responsibility as an example, values such as train accidents and elevator accidents are stored. Related nodes are also stored: for example, the compensation amount is related to the disability status and the guarantee responsibility, and the disability status is related to the time and the guarantee responsibility. A knowledge graph is constructed through these related node relationships.
Step 202: acquire the target node input by the user, and determine at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node.
In this embodiment, the target node is the node whose probability is to be calculated, and an associated node is an upper-level node connected to the target node in the graph. Taking the disability status in Fig. 2-2 as an example, if a node has a connection pointing to the target node, an association can be considered to exist; the nodes associated with "disability status" are "guarantee responsibility" and "event". The associated node is the upper-level node connected to the target node. The directed edges between nodes represent the dependencies between them: a node's incoming edges indicate the variables its value depends on, and its outgoing edges indicate which variables' values depend on the current node. That is, the value of each output node depends on the values of its input nodes; the inputs determine the output, and the output depends on the values of the inputs. For example, Fig. 2-2 contains a directed edge from "guarantee responsibility" to "event". The value of each target node in the probabilistic graphical model depends on the values of its associated nodes. The topology of the probabilistic graphical model hides the details of the constraint relationships, such as the variable-level details between the elevator falling and grade 1-10 disability or total disability, and more vividly shows the dependency paths between the variables.
Step 203: acquire the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information.
In this embodiment, each node contains multiple variables, and each variable has a value. The value of each node obeys a specific conditional probability, whose conditioning variables are the variables corresponding to the current variable's incoming edges. As shown in Fig. 2-2, the variable "disability status" has two incoming edges, corresponding to the two variables "guarantee responsibility" and "event"; the value of "disability status" therefore depends on the conditional probability P(disability status | guarantee responsibility, event). The conditional probability distribution of each node in the probability graph describes how the variable values influence one another: when a variable's value in one node changes, the associated node correspondingly affects the value probabilities of its target node's variables according to the conditional probability distribution.
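A minimal Python sketch of how such a conditional probability distribution might be attached to a node is given below. The table layout and function name are illustrative assumptions rather than the application's data structure; the 0.3/0.7 entries use the derailment statistics from the historical-case example later in this description.

```python
# Conditional probability table (CPT) for the target node "disability status",
# keyed by the values of its two parent (associated) nodes.
cpt = {
    ("train accident", "train derailment"): {
        "blindness in one eye": 0.3,  # 300 of 1,000 historical derailments
        "paraplegia": 0.7,            # 700 of 1,000 historical derailments
    },
    # Other parent-value combinations would be filled in the same way.
}

def p_disability(guarantee_responsibility: str, event: str) -> dict:
    """Return P(disability status | guarantee responsibility, event)."""
    return cpt[(guarantee_responsibility, event)]

print(p_disability("train accident", "train derailment"))
```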
Step 204: calculate the probability of each variable in the target node according to the at least one piece of variable information.
When the number of historical cases reaches a preset value, the probability is calculated from the number of historical cases. When it does not, the system checks whether a probability relationship has been preset: if so, the probability value is calculated through the probability relationship; if not, the probability is calculated through a neural network.
The present application uses the probability graph for sampling, and the probability graph can also be used to conveniently calculate the probability of the target node based on the probabilities of the values of each node's variables. In text analysis, the calculated probabilities can provide insight and assist in analyzing the values of each variable, thereby yielding the causal relationship behind the target node's probability, that is, the causal relationship between the target keyword of the text and its associated keywords. This reduces the system's debugging cost, lowers the complexity of variable management, and in turn reduces development costs.
In some optional implementations, the step of constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph, specifically includes:
Acquiring node information. The node information stores the information of all nodes and the relationships between all nodes. It can be stored on a hard disk or in a corresponding database, with the node information arranged in a table in order; the node information stored on the hard disk or in the database is then retrieved.
Generating an ID for each storage address corresponding to the node information, to obtain multiple IDs; the memory address of each piece of node information in set A is found. With a large amount of data, one piece of node information can be stored at multiple memory addresses, one memory address corresponding to one ID; with little data, one piece of node information can be stored at a single memory address corresponding to one ID. Taking a set A that stores 10 nodes in array form as an example, a[0] stores the first node's information, a[1] the second, a[2] the third, and so on. The array exists in memory: the information in a[0] is at one memory location and a[1] at another, and the corresponding memory can be located quickly through the array to retrieve its data. When the memory is initialized, the system determines the length of the array according to the number of array elements specified by the user. In this example, 10 array elements are specified, so the system creates an array object of length 10; once the array object is created, the length of the allocated memory cannot be changed, and only the values of the data in memory can be changed.
Reading, in ascending order of the IDs, the node information and the information relationships of the nodes (i.e., the associated nodes) stored at the storage address corresponding to each ID, until all have been read, thereby retrieving all the node information and the corresponding connections.
Connecting the nodes in sequence according to the node information and the information relationships of the nodes, to obtain a probability graph; the obtained probability graph connects the nodes one by one in the order in which each node was retrieved.
In the above implementation, the node information includes the association relationships between nodes, the variables in a node, and the values of the variables; the information relationship refers to the association relationship between nodes; and the sequence is the order of reading by ID. The computer can read the node data in memory and generate the graph from that data. Taking Fig. 2-2 as an example, the "compensation amount" node stores its connected upper-level nodes, "guarantee responsibility" and "disability status"; likewise, every node stores the upper-level nodes connected to it, and by reading every node the preliminary construction of the probability graph is completed. In text generation (Data2Text) tasks, variables often have complex dependencies; for example, some of the variables involved in the example above have the relationships shown in Fig. 2-2. To ensure that the generated text is reasonable, the input variable values must conform to the inherent logical constraints between the variables. In many scenarios, variable values that satisfy the logical constraints must also be sampled to generate text: for example, to help customers understand insurance terms, given the disease a customer wants to know about, the other correlated variables must be sampled in order to generate claim cases that aid the customer's understanding. Using a probability graph to manage the variables used for text generation and the constraints between them makes it easy to check the reasonableness of variable values and to sample variable values according to the constraint relationships. Taking the event node as an example, the event is a node, and the elevator falling and the train derailment in the node are this node's variable information, that is, the internal variable and its corresponding values: for example, when the variable's value is 1 it denotes the elevator falling, and when the value is 2, the train derails.
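The following minimal Python sketch illustrates this construction step under the assumptions above (ID-ordered reads, each record carrying the names of its upper-level nodes). The record layout is an illustrative assumption rather than the application's storage format; the parent lists follow the Fig. 2-2 discussion.

```python
# Illustrative node records: (ID, node name, names of upper-level nodes).
records = [
    (0, "guarantee responsibility", []),
    (1, "event", ["guarantee responsibility"]),
    (2, "disability status", ["guarantee responsibility", "event"]),
    (3, "compensation amount", ["guarantee responsibility", "disability status"]),
]

# Read records in ascending ID order and connect each node to its parents.
graph = {}
for _, name, parents in sorted(records, key=lambda record: record[0]):
    graph[name] = list(parents)

# The associated nodes of a user-supplied target node are its stored parents.
print(graph["disability status"])  # ['guarantee responsibility', 'event']
```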
In some optional implementations, the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically includes:
counting the number of historical cases for each value of each variable of the target node, to obtain a first number of cases;
counting the number of historical cases for each value of each variable of the associated nodes, to obtain a second number of cases;
calculating the probability of each variable in the target node according to the first number of cases and the second number of cases.
In the above implementation, taking Fig. 2-2 as an example, the input information is the values of nodes: for example, to calculate the compensation amount, the input information is the disability status and the guarantee responsibility; to calculate the risk cause, the guarantee responsibility must be input. The values of each dependency variable and the corresponding values of the current variable are enumerated. For example, suppose 1,000 train derailments have been recorded (a first number of cases), of which 300 caused blindness in one eye (a second number of cases) and 700 caused paraplegia (a second number of cases); then the preset is "(target node) event = train derailment (variable value) | (associated node) disease = ((variable value) blindness in one eye / 30%, (variable value) paraplegia / 70%)", where the left side ("train derailment") is the input variable and the right side is the probability distribution over the values of the target node variable "disease". When the input "event = train derailment" is received, the output is disease = (blindness in one eye / 30%, paraplegia / 70%). Again taking Fig. 2-2 as an example, the compensation amount (target node) is associated with the guarantee responsibility (associated node) and the disability status (associated node). Suppose the guarantee responsibility is the (variable value) train accident and the disability status is the (variable value) total disability; then among the historical cases the compensation amount takes the (variable value) 1,000,000 yuan in 60% of cases, the (variable value) 900,000 yuan in 10%, and the (variable value) 1,100,000 yuan in 30%. The user's input information is acquired, and when the user's input is total disability and a train accident, the output compensation amount is 1,000,000 yuan with probability 60%, 900,000 yuan with 10%, and 1,100,000 yuan with 30%. The compensation amount varies because it is not fully determined by the two factors "total disability" and "train accident"; other factors may also exist, and the result is simply obtained from the historical statistics of these two factors. In this way, the probabilities between nodes can be calculated in a preset manner, improving the accuracy of the probability calculation.
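A minimal Python sketch of this counting approach, using the derailment numbers from the example above; the function name and signature are illustrative assumptions.

```python
def conditional_probabilities(target_value_counts: dict,
                              associated_case_count: int) -> dict:
    """Estimate P(target value | associated value) as the share of historical
    cases with the associated value that also show each target value."""
    return {value: count / associated_case_count
            for value, count in target_value_counts.items()}

# 1,000 recorded train derailments, of which 300 caused blindness in one eye
# and 700 caused paraplegia (the numbers used in the example above).
distribution = conditional_probabilities(
    {"blindness in one eye": 300, "paraplegia": 700}, associated_case_count=1000
)
print(distribution)  # {'blindness in one eye': 0.3, 'paraplegia': 0.7}
```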
In some optional implementations, the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
obtaining a probability relationship according to the connection relationship between the associated node and the target node;
calculating the probability of each variable in the target node according to the at least one piece of variable information and the probability relationship of each variable;
outputting the probability of each variable in the target node to the user.
In the above implementation, the value of each variable is used as a parameter of the probability distribution that the values of the variables in the target node obey, and the current variable is then sampled from that distribution. For example, suppose the associated node of the target node "temperature displayed by the sensor" is "true temperature", and the target node and the associated node obey the normal distribution P(temperature displayed by the sensor | true temperature) = Norm(true temperature, 1); here the "true temperature" is a parameter of the probability distribution that the "temperature displayed by the sensor" obeys. When a true temperature is input, a probability distribution over the displayed temperature is obtained. The same reasoning applies with multiple variables. A probability relationship refers to a functional relationship between two variables, e.g. y = kx, where y and k can be understood as forming a linear probability expression. For example, high blood pressure is related to age, and this correlation can be calculated with a sigmoid probability model, where sigmoid(x) = 1 / (1 + e^(-x)). If the functional expression between age x and the probability y of high blood pressure is y = sigmoid(0.05x - 2), then for an age of 60, y = sigmoid(0.05 * 60 - 2) = sigmoid(1) = 1 / (1 + e^(-1)) ≈ 0.73. In this way, the probabilities between nodes can be calculated by means of a probability function, improving the accuracy of the probability calculation.
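A minimal Python sketch of this preset probability relationship, reproducing the age example above:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def p_high_blood_pressure(age: float) -> float:
    """Preset probability relationship y = sigmoid(0.05 * age - 2)."""
    return sigmoid(0.05 * age - 2)

print(round(p_high_blood_pressure(60), 2))  # sigmoid(1), approximately 0.73
```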
In some optional implementations, the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further includes:
inputting the at least one piece of variable information into a trained neural network;
outputting the probability of each variable in the target node through the trained neural network;
outputting the probability of each variable in the target node to the user.
In the above implementation, the value of the current variable is the output of a model whose input is the variables that the current node depends on. Suppose the variable "diabetes" depends on the variables "age" and "weight"; then P(diabetes | age, weight) = f(age, weight), where f(age, weight) is a function whose inputs are "age" and "weight" and whose output is the probability of having diabetes. The functional form computed by the neural network can take many shapes, such as logistic regression, a multilayer perceptron, or a decision tree. Among these, logistic regression is the simplest form: p = e^(w1*age + w2*weight) / (1 + e^(w1*age + w2*weight)), where w1 and w2 are weights. In this way, the probabilities between nodes can be calculated by means of an artificial intelligence model, improving the accuracy of the probability calculation. When the number of historical cases reaches a preset value, the probability is calculated from the number of historical cases; when it does not, the system checks whether a probability relationship has been preset, and if so calculates the probability value through it; if no probability relationship is preset, the probability is calculated through the neural network.
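A minimal Python sketch of the logistic regression form above; the weight values are illustrative assumptions, since the application leaves w1 and w2 unspecified.

```python
import math

# Illustrative weights (assumed values, not from the application).
w1, w2 = 0.03, 0.01

def p_diabetes(age: float, weight: float) -> float:
    """p = e^(w1*age + w2*weight) / (1 + e^(w1*age + w2*weight))."""
    z = w1 * age + w2 * weight
    return math.exp(z) / (1.0 + math.exp(z))

print(round(p_diabetes(age=60, weight=70), 3))
```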
In some optional implementations, before the step of outputting the probability of each variable in the target node through the trained neural network, the method further includes:
acquiring multiple pieces of training data and the labels corresponding to the training data;
inputting the training data and the corresponding labels into the initial neural network model;
training the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n) to obtain the target neural network model, where σ is the activation function, w_k^n represents the weight obtained by training the k-th neuron of the n-th layer of the multilayer perceptron of the target neural network model according to the output of the (n-1)-th layer of the multilayer perceptron, b_k^n represents the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) represents the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input;
deploying the target neural network model.
In the above implementation, the training data are the values of each input node, and the label is the value of the corresponding target node: the guarantee responsibility is a train accident, the disability status is total disability, and the label is 1,000,000 yuan, 900,000 yuan, or 1,100,000 yuan. For example, suppose a multi-layer fully connected neural network (multi-layer perceptron) is used as the model, variable D can take the three values d1, d2, d3, and variables A, B, C take the values 0.1, 0.5, 0.3 respectively; then the input vector x is [0.1, 0.5, 0.3], which passes through several fully connected layers x_i = W_(i-1) x_(i-1) (x_i being the feature vector of layer i), followed by a softmax layer, to obtain the probability distribution over the three values of variable D.
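The following minimal Python sketch reproduces this forward pass with a single fully connected layer followed by softmax; the layer weights are random stand-ins for trained ones, so only the shape of the computation, not the specific output, reflects the example.

```python
import math
import random

def fully_connected(x, weights, biases):
    """One layer x_i = W_(i-1) * x_(i-1) + b, as in the text."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def softmax(z):
    exps = [math.exp(v - max(z)) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Variables A, B, C form the input vector; D has the values d1, d2, d3.
x = [0.1, 0.5, 0.3]
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in x] for _ in range(3)]
biases = [0.0, 0.0, 0.0]

p_d = softmax(fully_connected(x, weights, biases))
print(p_d, round(sum(p_d), 6))  # a distribution over d1, d2, d3 summing to 1
```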
In some optional implementations, after the step of constructing a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph, the method further includes:
storing the connection relationships between the multiple nodes in a blockchain.
The blockchain referred to in this application is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks produced in association with one another using cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may comprise a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through computer-readable instructions, which can be stored in a computer-readable storage medium; when executed, the program may include the processes of the embodiments of the above methods. The aforementioned storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), or the like.
It should be understood that although the steps in the flowchart of the drawings are displayed sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowchart may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
With further reference to Fig. 3, as an implementation of the method shown in Fig. 2-1 above, the present application provides an embodiment of a probability calculation apparatus in a graph. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2-1, and the apparatus can be applied to various electronic devices.
As shown in Fig. 3, the probability calculation apparatus 300 in a graph of this embodiment comprises: an acquisition module 301, a determination module 302, an information acquisition module 303, and a probability calculation module 304. Specifically:
the acquisition module 301 is configured to construct a probability graph according to the information relationships in nodes, to obtain the connection relationships between multiple nodes in the probability graph;
the determination module 302 is configured to acquire the target node input by the user, and determine at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
the information acquisition module 303 is configured to acquire the variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
the probability calculation module 304 is configured to calculate the probability of each variable in the target node according to the at least one piece of variable information.
The above apparatus reduces the complexity of variable management, thereby reducing development costs.
Further, the acquisition module includes a set storage submodule, an address generation submodule, a memory reading submodule, and a connection submodule.
The set storage submodule is configured to acquire node information;
the address generation submodule is configured to generate an ID for each storage address corresponding to the node information, to obtain multiple IDs;
the memory reading submodule is configured to read, in ascending order of the IDs, the node information and the information relationships of the nodes stored at the storage address corresponding to each ID;
the connection submodule is configured to connect the nodes in sequence according to the node information and the information relationships of the nodes, to obtain a probability graph.
Further, the information acquisition module includes a first statistics submodule, a second statistics submodule, and a probability calculation submodule.
The first statistics submodule is configured to count the number of historical cases for each value of each variable of the target node, to obtain a first number of cases;
the second statistics submodule counts the number of historical cases for each value of each variable of the associated nodes, to obtain a second number of cases;
the probability calculation submodule is configured to calculate the probability of each variable in the target node according to the first number of cases and the second number of cases.
Further, the information acquisition module includes an association probability submodule, a probability calculation submodule, and a probability output submodule.
The association probability submodule is configured to obtain a probability relationship according to the connection relationship between the associated node and the target node;
the probability calculation submodule is configured to calculate the probability of each variable in the target node according to the at least one piece of variable information and the probability relationship of each variable;
the probability output submodule is configured to output the probability of each variable in the target node to the user.
Further, the information acquisition module includes a variable input submodule, a model input submodule, and a model output submodule.
The variable input submodule is configured to input the at least one piece of variable information into the trained neural network;
the model input submodule is configured to output the probability of each variable in the target node through the trained neural network;
the probability of each variable in the target node is output to the user.
Further, the text splicing apparatus further includes a training data acquisition submodule, a training data input submodule, a training submodule, and a deployment submodule.
The training data acquisition submodule is configured to acquire multiple pieces of training data and the labels corresponding to the training data;
the training data input submodule is configured to input the training data and the corresponding labels into the initial neural network model;
the training submodule is configured to train the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n) to obtain the target neural network model, where σ is the activation function, w_k^n represents the weight obtained by training the k-th neuron of the n-th layer of the multilayer perceptron of the target neural network model according to the output of the (n-1)-th layer of the multilayer perceptron, b_k^n represents the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) represents the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input;
the deployment submodule is configured to deploy the target neural network model.
Further, the text splicing apparatus further includes a blockchain submodule.
The blockchain submodule is configured to store the connection relationships between the multiple nodes in a blockchain.
To solve the above technical problem, an embodiment of the present application further provides a computer device. Referring specifically to Fig. 4, Fig. 4 is a block diagram of the basic structure of the computer device of this embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43, which communicate with one another through a system bus. It should be noted that the figure only shows the computer device 4 with components 41-43, but it should be understood that implementing all of the illustrated components is not required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable gate arrays (Field-Programmable Gate Array, FPGA), digital signal processors (Digital Signal Processor, DSP), embedded devices, and the like.
The computer device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or other computing device. The computer device can perform human-computer interaction with the user through a keyboard, a mouse, a remote control, a touch pad, or a voice control device.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as the hard disk or internal memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the computer device 4. Of course, the memory 41 may also include both the internal storage unit of the computer device 4 and its external storage device. In this embodiment, the memory 41 is generally used to store the operating system and various application software installed on the computer device 4, such as the computer-readable instructions of the probability calculation method in a graph. In addition, the memory 41 can also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may, in some embodiments, be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 42 is generally used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to run the computer-readable instructions stored in the memory 41 or to process data, for example to run the computer-readable instructions of the probability calculation method in a graph, executing the steps of the probability calculation method described above; the specific implementation will not be repeated here.
The network interface 43 may include a wireless network interface or a wired network interface, and the network interface 43 is generally used to establish a communication connection between the computer device 4 and other electronic devices.
The present application also provides another embodiment, namely a computer-readable storage medium storing computer-readable instructions executable by at least one processor, so as to cause the at least one processor to perform the steps of the probability calculation method in a graph as described above.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored on a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
Obviously, the embodiments described above are only some of the embodiments of the present application rather than all of them. The drawings show preferred embodiments of the present application, but do not limit its patent scope. The present application can be implemented in many different forms; rather, these embodiments are provided so that the disclosure of the present application will be understood more thoroughly and comprehensively. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art can still modify the technical solutions described in the foregoing specific embodiments, or substitute equivalents for some of their technical features. Any equivalent structure made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present application.

Claims (20)

  1. A probability calculation method in a graph, comprising the following steps:
    constructing a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph;
    acquiring a target node input by a user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
    acquiring variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
    calculating the probability of each variable in the target node according to the at least one piece of variable information.
  2. The probability calculation method in a graph according to claim 1, wherein the step of constructing a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph, specifically comprises:
    acquiring node information;
    generating an ID for each storage address corresponding to the node information, to obtain multiple IDs;
    reading, in ascending order of the IDs, the node information and the information relationships of the nodes stored at the storage address corresponding to each ID;
    connecting the nodes in sequence according to the node information and the information relationships of the nodes, to obtain the probability graph.
  3. The probability calculation method in a graph according to claim 1 or 2, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically comprises:
    counting the number of historical cases for each value of each variable of the target node, to obtain a first number of cases;
    counting the number of historical cases for each value of each variable of the associated nodes, to obtain a second number of cases;
    calculating the probability of each variable in the target node according to the first number of cases and the second number of cases.
  4. The probability calculation method in a graph according to claim 1 or 2, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further comprises:
    obtaining a probability relationship according to the connection relationship between the associated node and the target node;
    calculating the probability of each variable in the target node according to the at least one piece of variable information and the probability relationship of each variable;
    outputting the probability of each variable in the target node to the user.
  5. The probability calculation method in a graph according to claim 1 or 2, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further comprises:
    inputting the at least one piece of variable information into a trained neural network;
    outputting the probability of each variable in the target node through the trained neural network;
    outputting the probability of each variable in the target node to the user.
  6. The probability calculation method in a graph according to claim 5, wherein before the step of outputting the probability of each variable in the target node through the trained neural network, the method further comprises:
    acquiring multiple pieces of training data and labels corresponding to the training data;
    inputting the training data and the corresponding labels into the initial neural network model;
    training the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n) to obtain the target neural network model, where σ is the activation function, w_k^n represents the weight obtained by training the k-th neuron of the n-th layer of the multilayer perceptron of the target neural network model according to the output of the (n-1)-th layer of the multilayer perceptron, b_k^n represents the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) represents the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input;
    deploying the target neural network model.
  7. The probability calculation method in a graph according to claim 6, wherein after the step of constructing a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph, the method further comprises:
    storing the connection relationships between the multiple nodes in a blockchain.
  8. A probability calculation apparatus in a graph, comprising:
    an acquisition module, configured to construct a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph;
    a determination module, configured to acquire a target node input by a user, and determine at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
    an information acquisition module, configured to acquire variable information in the at least one associated node, to obtain at least one piece of variable information;
    a probability calculation module, configured to calculate the probability of each variable in the target node according to the at least one piece of variable information.
  9. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions, wherein the processor, when executing the computer-readable instructions, implements the following steps of the probability calculation method in a graph:
    constructing a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph;
    acquiring a target node input by a user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
    acquiring variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
    calculating the probability of each variable in the target node according to the at least one piece of variable information.
  10. The computer device according to claim 9, wherein the step of constructing a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph, specifically comprises:
    acquiring node information;
    generating an ID for each storage address corresponding to the node information, to obtain multiple IDs;
    reading, in ascending order of the IDs, the node information and the information relationships of the nodes stored at the storage address corresponding to each ID;
    connecting the nodes in sequence according to the node information and the information relationships of the nodes, to obtain the probability graph.
  11. The computer device according to claim 9 or 10, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically comprises:
    counting the number of historical cases for each value of each variable of the target node, to obtain a first number of cases;
    counting the number of historical cases for each value of each variable of the associated nodes, to obtain a second number of cases;
    calculating the probability of each variable in the target node according to the first number of cases and the second number of cases.
  12. The computer device according to claim 9 or 10, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further comprises:
    obtaining a probability relationship according to the connection relationship between the associated node and the target node;
    calculating the probability of each variable in the target node according to the at least one piece of variable information and the probability relationship of each variable;
    outputting the probability of each variable in the target node to the user.
  13. The computer device according to claim 9 or 10, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further comprises:
    inputting the at least one piece of variable information into a trained neural network;
    outputting the probability of each variable in the target node through the trained neural network;
    outputting the probability of each variable in the target node to the user.
  14. The computer device according to claim 13, wherein before the step of outputting the probability of each variable in the target node through the trained neural network, the method further comprises:
    acquiring multiple pieces of training data and labels corresponding to the training data;
    inputting the training data and the corresponding labels into the initial neural network model;
    training the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n) to obtain the target neural network model, where σ is the activation function, w_k^n represents the weight obtained by training the k-th neuron of the n-th layer of the multilayer perceptron of the target neural network model according to the output of the (n-1)-th layer of the multilayer perceptron, b_k^n represents the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) represents the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input;
    deploying the target neural network model.
  15. A computer-readable storage medium, wherein computer-readable instructions are stored on the computer-readable storage medium, and when executed by a processor the computer-readable instructions implement the following steps of the probability calculation method in a graph:
    constructing a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph;
    acquiring a target node input by a user, and determining at least one associated node of the target node according to the connection relationships between the multiple nodes, the associated node being an upper-level node connected to the target node;
    acquiring variable information input by the user for each of the associated nodes, to obtain at least one piece of variable information;
    calculating the probability of each variable in the target node according to the at least one piece of variable information.
  16. The computer-readable storage medium according to claim 15, wherein the step of constructing a probability graph according to information relationships in nodes, to obtain connection relationships between multiple nodes in the probability graph, specifically comprises:
    acquiring node information;
    generating an ID for each storage address corresponding to the node information, to obtain multiple IDs;
    reading, in ascending order of the IDs, the node information and the information relationships of the nodes stored at the storage address corresponding to each ID;
    connecting the nodes in sequence according to the node information and the information relationships of the nodes, to obtain the probability graph.
  17. The computer-readable storage medium according to claim 15 or 16, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically comprises:
    counting the number of historical cases for each value of each variable of the target node, to obtain a first number of cases;
    counting the number of historical cases for each value of each variable of the associated nodes, to obtain a second number of cases;
    calculating the probability of each variable in the target node according to the first number of cases and the second number of cases.
  18. The computer-readable storage medium according to claim 15 or 16, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further comprises:
    obtaining a probability relationship according to the connection relationship between the associated node and the target node;
    calculating the probability of each variable in the target node according to the at least one piece of variable information and the probability relationship of each variable;
    outputting the probability of each variable in the target node to the user.
  19. The computer-readable storage medium according to claim 15 or 16, wherein the step of calculating the probability of each variable in the target node according to the at least one piece of variable information specifically further comprises:
    inputting the at least one piece of variable information into a trained neural network;
    outputting the probability of each variable in the target node through the trained neural network;
    outputting the probability of each variable in the target node to the user.
  20. The computer-readable storage medium according to claim 19, wherein before the step of outputting the probability of each variable in the target node through the trained neural network, the method further comprises:
    acquiring multiple pieces of training data and labels corresponding to the training data;
    inputting the training data and the corresponding labels into the initial neural network model;
    training the initial neural network model through f_i^n = σ(w_k^n · f_i^(n-1) + b_k^n) to obtain the target neural network model, where σ is the activation function, w_k^n represents the weight obtained by training the k-th neuron of the n-th layer of the multilayer perceptron of the target neural network model according to the output of the (n-1)-th layer of the multilayer perceptron, b_k^n represents the corresponding bias, and f_i^n represents the output of the n-th layer of the target neural network model after the i-th piece of training data is input into the model; i is any positive integer and n is a natural number; when n is the last layer of the target neural network model, f_i^n refers to the output of the target neural network model, and f_i^(n-1) represents the output of the (n-1)-th layer of the target neural network model after the i-th piece of training data is input;
    deploying the target neural network model.
PCT/CN2021/090491 2020-10-23 2021-04-28 Probability calculation method and apparatus in a graph, computer device, and storage medium WO2022083093A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011150139.6 2020-10-23
CN202011150139.6A CN112256886B (zh) 2020-10-23 2020-10-23 Probability calculation method and apparatus in a graph, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022083093A1 true WO2022083093A1 (zh) 2022-04-28

Family

ID=74261782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/090491 WO2022083093A1 (zh) 2020-10-23 2021-04-28 图谱中的概率计算方法、装置、计算机设备及存储介质

Country Status (2)

Country Link
CN (1) CN112256886B (zh)
WO (1) WO2022083093A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116304885A (zh) * 2023-05-11 2023-06-23 之江实验室 Event recognition method, apparatus, and device based on graph node embedding
CN117295071A (zh) * 2023-11-24 2023-12-26 易讯科技股份有限公司 Mobile node security management method and system for IPv6 networks
CN117851608A (zh) * 2024-01-06 2024-04-09 杭州威灿科技有限公司 Case graph generation method, apparatus, device, and medium
CN117934177A (zh) * 2024-03-22 2024-04-26 湖南多层次商保科技有限公司 Construction method and system for an intelligent insurance liability determination model
CN118133883A (zh) * 2024-05-06 2024-06-04 杭州海康威视数字技术股份有限公司 Graph sampling method, graph prediction method, and storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256886B (zh) * 2020-10-23 2023-06-27 平安科技(深圳)有限公司 Probability calculation method and apparatus in a graph, computer device, and storage medium
CN118209833A (zh) * 2022-12-16 2024-06-18 华为技术有限公司 Chip fault analysis method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657837A (zh) * 2018-11-19 2019-04-19 平安科技(深圳)有限公司 Default probability prediction method and apparatus, computer device, and storage medium
CN109657918A (zh) * 2018-11-19 2019-04-19 平安科技(深圳)有限公司 Risk early-warning method and apparatus for associated evaluation objects, and computer device
CN110765117A (zh) * 2019-09-30 2020-02-07 中国建设银行股份有限公司 Fraud identification method and apparatus, electronic device, and computer-readable storage medium
CN111221944A (zh) * 2020-01-13 2020-06-02 平安科技(深圳)有限公司 Text intention recognition method, apparatus, device, and storage medium
CN112256886A (zh) * 2020-10-23 2021-01-22 平安科技(深圳)有限公司 Probability calculation method and apparatus in a graph, computer device, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110837562B (zh) * 2018-08-17 2023-05-02 阿里巴巴集团控股有限公司 Case processing method, apparatus, and system
CN110110034A (zh) * 2019-05-10 2019-08-09 天津大学深圳研究院 Graph-based RDF data management method, apparatus, and storage medium
CN110232524A (zh) * 2019-06-14 2019-09-13 哈尔滨哈银消费金融有限责任公司 Construction method of a social network fraud model, anti-fraud method, and apparatus
CN111198933A (zh) * 2020-01-03 2020-05-26 北京明略软件***有限公司 Method and apparatus for searching for a target entity, electronic device, and storage medium
CN111309824B (zh) * 2020-02-18 2023-09-15 中国工商银行股份有限公司 Entity relationship graph display method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657837A (zh) * 2018-11-19 2019-04-19 平安科技(深圳)有限公司 Default probability prediction method and apparatus, computer device, and storage medium
CN109657918A (zh) * 2018-11-19 2019-04-19 平安科技(深圳)有限公司 Risk early-warning method and apparatus for associated evaluation objects, and computer device
CN110765117A (zh) * 2019-09-30 2020-02-07 中国建设银行股份有限公司 Fraud identification method and apparatus, electronic device, and computer-readable storage medium
CN111221944A (zh) * 2020-01-13 2020-06-02 平安科技(深圳)有限公司 Text intention recognition method, apparatus, device, and storage medium
CN112256886A (zh) * 2020-10-23 2021-01-22 平安科技(深圳)有限公司 Probability calculation method and apparatus in a graph, computer device, and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116304885A (zh) * 2023-05-11 2023-06-23 之江实验室 Event recognition method, apparatus, and device based on graph node embedding
CN116304885B (zh) * 2023-05-11 2023-08-22 之江实验室 Event recognition method, apparatus, and device based on graph node embedding
CN117295071A (zh) * 2023-11-24 2023-12-26 易讯科技股份有限公司 Mobile node security management method and system for IPv6 networks
CN117295071B (zh) * 2023-11-24 2024-02-02 易讯科技股份有限公司 Mobile node security management method and system for IPv6 networks
CN117851608A (zh) * 2024-01-06 2024-04-09 杭州威灿科技有限公司 Case graph generation method, apparatus, device, and medium
CN117934177A (zh) * 2024-03-22 2024-04-26 湖南多层次商保科技有限公司 Construction method and system for an intelligent insurance liability determination model
CN118133883A (zh) * 2024-05-06 2024-06-04 杭州海康威视数字技术股份有限公司 Graph sampling method, graph prediction method, and storage medium

Also Published As

Publication number Publication date
CN112256886B (zh) 2023-06-27
CN112256886A (zh) 2021-01-22

Similar Documents

Publication Publication Date Title
WO2022083093A1 (zh) Probability calculation method and apparatus in a graph, computer device, and storage medium
AU2020385264B2 (en) Fusing multimodal data using recurrent neural networks
WO2021120677A1 (zh) Warehousing model training method and apparatus, computer device, and storage medium
US11416754B1 (en) Automated cloud data and technology solution delivery using machine learning and artificial intelligence modeling
CN108280104B (zh) Method and apparatus for extracting feature information of a target object
WO2021169115A1 (zh) Risk control method and apparatus, electronic device, and computer-readable storage medium
US20220100963A1 (en) Event extraction from documents with co-reference
CN112148987A (zh) Message pushing method based on target object activity, and related device
US20220100772A1 (en) Context-sensitive linking of entities to private databases
CN111427971B (zh) Business modeling method, apparatus, system, and medium for a computer system
WO2020147409A1 (zh) Text classification method and apparatus, computer device, and storage medium
US20220100967A1 (en) Lifecycle management for customized natural language processing
EP3815342B1 (en) Adaptive user-interface assembling and rendering
US11507747B2 (en) Hybrid in-domain and out-of-domain document processing for non-vocabulary tokens of electronic documents
CN113656587B (zh) Text classification method and apparatus, electronic device, and storage medium
CN112257959A (zh) User risk prediction method and apparatus, electronic device, and storage medium
WO2021139223A1 (zh) Interpretation method and apparatus for a clustering model, computer device, and storage medium
WO2023179038A1 (zh) Data labeling method, AI development platform, computing device cluster, and storage medium
CN116661936A (zh) Page data processing method and apparatus, computer device, and storage medium
CN115412401B (zh) Method and apparatus for training a virtual network embedding model and for virtual network embedding
US20230117893A1 (en) Machine learning techniques for environmental discovery, environmental validation, and automated knowledge repository generation
CN114357180A (zh) Knowledge graph updating method and electronic device
US11783206B1 (en) Method and system for making binary predictions for a subject using historical data obtained from multiple subjects
KR102449831B1 (ko) Electronic device for providing information on new text, server for identifying new text, and operation methods thereof
US20240232606A9 (en) Computing services architect

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21881517

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21881517

Country of ref document: EP

Kind code of ref document: A1