CN115617960A - Post recommendation method and device - Google Patents

Post recommendation method and device

Info

Publication number
CN115617960A
Authority
CN
China
Prior art keywords
question
answer
candidate
competency
pair
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110791270.9A
Other languages
Chinese (zh)
Inventor
陈凯
刘志伟
方小雷
陈清财
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jinyu Intelligent Technology Co ltd
Original Assignee
Shanghai Jinyu Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jinyu Intelligent Technology Co ltd filed Critical Shanghai Jinyu Intelligent Technology Co ltd
Priority to CN202110791270.9A
Publication of CN115617960A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/332 Query formulation
    • G06F 16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3343 Query execution using phonetics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 Operations research, analysis or management
    • G06Q 10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06398 Performance of employee with respect to a job function
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/105 Human resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Databases & Information Systems (AREA)
  • Economics (AREA)
  • Artificial Intelligence (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Game Theory and Decision Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a post recommendation method and device and a computer readable storage medium. The post recommendation method comprises the following steps: presenting a plurality of questions to a candidate for a plurality of competencies of the candidate; collecting answer texts of the candidate for answering the questions, and forming question-answer pairs by the answer texts and the corresponding question texts; segmenting the question text and the answer text of each question-answer pair to respectively obtain a plurality of question sentences and a plurality of answer sentences of each question-answer pair; performing information interaction, semantic reasoning and result classification on the question sentences and the answer sentences of each question-answer pair by using a pre-trained reasoning model so as to respectively obtain the scoring results of the candidate aiming at each competency; and determining a recommended position suitable for the candidate according to the scoring result of the candidate aiming at each competence and at least one competence combination required by the recruiting position.

Description

Post recommendation method and device
Technical Field
The invention belongs to the technical field of Artificial Intelligence (AI), and particularly relates to a post recommendation method based on Artificial Intelligence, a device for executing the post recommendation method and a computer-readable storage medium.
Background
With the development of artificial intelligence technology, automated post recommendation technologies and systems are emerging. Some existing post recommendation technologies rely on basic natural language processing to perform word segmentation or keyword extraction on a candidate's resume content, current post description and target post description, and complete the matching or similarity calculation between resumes and posts based on the weights of certain keywords, thereby achieving simple post recommendation. Other, relatively more advanced post recommendation technologies train a word vector model on external encyclopedic data to represent the candidate's personal semantic vector (including resume content and personal information) and the post semantic vector, and then obtain the candidate's suitability for the current post by calculating semantic similarity. Still more advanced post recommendation techniques further train a matching model (a simple neural network model, such as a convolutional neural network) on the resume content and the post content on top of the word vectors, to predict the candidate's suitability for the post. Although these methods can realize automatic post recommendation to a certain extent, they still have the following disadvantages:
(1) Weak theoretical support. Recommending posts based only on the resume and the post requirements shows, at most, that the candidate's work experience matches the background required by the post to a certain extent; it can hardly guarantee that the recommended candidate is qualified for the actual work. In other words, the recommended candidates still require tedious manual interviews or video interviews for further qualification, so the burden on enterprise interviewers is not substantially relieved.
(2) Coarse technology. Whether based on word segmentation and keyword extraction or on similarity calculation and matching models built from basic word vector technology, these approaches cannot deeply mine the intrinsic association between a candidate's work experience and the post requirements. As a result, such methods have difficulty ensuring that the recommended candidate actually fits the corresponding post.
(3) Lack of assessment of the candidate's competencies. According to an interview-assisting technology disclosed by Shanghai intelligent science and technology Limited (application No. 202011060966.6), competency refers to an individual's comprehensive qualities and the non-technical abilities that accompany a person's sustainable development throughout life, mainly including sense of responsibility for work, the ability to communicate and coordinate with colleagues, time management ability, stress tolerance, and professional enthusiasm for the position held. Competency is an important factor in assessing whether a candidate is qualified for a recruiting post. However, no existing post recommendation technology examines a candidate's competencies, so whether the candidate is qualified for the recruiting post cannot be comprehensively and accurately evaluated.
To address these pain points, the invention provides a post recommendation technology which, on one hand, uses artificial intelligence to perform question-answer interaction on the question texts and the candidate's answer texts from the perspective of competency assessment, thereby improving the accuracy with which the artificial intelligence evaluates each competency of the candidate, and, on the other hand, automatically recommends a suitable post for the candidate according to the evaluation results of the candidate's competencies.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In order to overcome the defects in the prior art, the invention provides a post recommendation method, a post recommendation device and a computer readable storage medium, which are used for performing question-answer interaction on question texts and answer texts of candidate answering questions by using an artificial intelligence technology from the perspective of competence assessment, so that the evaluation precision of the artificial intelligence on various competence of the candidate is improved, and a proper post is automatically recommended for the candidate according to the evaluation result of the competence of the candidate.
Specifically, the post recommendation method provided by the first aspect of the present invention includes the following steps: presenting a plurality of questions to a candidate for a plurality of competencies of the candidate; collecting answer texts of the candidate for answering the questions, and forming question-answer pairs by the answer texts and the corresponding question texts respectively; segmenting the question text and the answer text of each question-answer pair to respectively obtain a plurality of question sentences and a plurality of answer sentences of each question-answer pair; performing information interaction, semantic reasoning and result classification on the question sentences and the answer sentences of each question-answer pair by using a pre-trained reasoning model so as to respectively obtain the scoring results of the candidate aiming at each competency; and determining a recommended position suitable for the candidate according to the scoring result of the candidate aiming at each competence and at least one competence combination required by the recruiting position. By executing the steps, on one hand, the post recommendation method can perform question-answer interaction on the question texts and the answer texts of the candidate for answering the questions by using an artificial intelligence technology from the perspective of competence assessment, so that the assessment precision of the artificial intelligence on each competence of the candidate is improved, and on the other hand, a proper post can be automatically recommended for the candidate according to the assessment result of each competence of the candidate.
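For illustration only, the following is a minimal sketch of how the claimed scoring steps might be orchestrated in code. The helper names (for example inference_model.score) are hypothetical placeholders rather than the disclosed implementation, and splitting sentences on the Chinese full stop is an assumption.

```python
# Hypothetical orchestration of the claimed scoring steps; all helper names
# are placeholders, not the disclosed implementation.
from typing import Dict

def score_candidate(questions: Dict[str, str],
                    answers: Dict[str, str],
                    inference_model) -> Dict[str, int]:
    """Return one scoring result per competency for a single candidate."""
    scores = {}
    for competency, q_text in questions.items():
        a_text = answers[competency]                     # collected answer text
        q_sents = [s for s in q_text.split("。") if s]   # segment the question text
        a_sents = [s for s in a_text.split("。") if s]   # segment the answer text
        # information interaction + semantic reasoning + result classification
        scores[competency] = inference_model.score(q_sents, a_sents)
    return scores
```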
Preferably, in some embodiments of the invention, the step of presenting to the candidate a plurality of questions regarding the candidate's competencies comprises: determining the plurality of competencies according to competency models of a plurality of posts; retrieving one or more questions corresponding to each of the competencies from a pre-constructed competency question bank; and playing video or audio of each of the questions to the candidate.
Preferably, in some embodiments of the present invention, before performing the step of presenting the plurality of questions to the candidate with the plurality of competencies for the candidate, the position recommendation method further comprises the steps of: interviewing a plurality of employees at a plurality of posts to obtain interview records of the plurality of employees; scoring the interview record of each said employee according to a predefined competency dimension to obtain a score for each said employee for each said competency; dividing the multiple employees into excellent employees and common employees of all the posts according to work performance; performing a difference check on the scores of the excellent employees and the common employees in each competency to determine a plurality of competencies which the excellent employees in each post should have; constructing a competency model of each post according to a plurality of competencies which the excellent employees of each post should have; and preparing at least one question for each of the competency and constructing the competency question bank based on a plurality of competency that the excellent employees of the plurality of posts should possess.
Optionally, in some embodiments of the present invention, the step of collecting answer texts of the candidate answering the questions comprises: collecting video or audio of the candidate answering each question; and performing voice text transcription on audio data in the video or audio to acquire answer texts of the candidate for answering the questions.
Optionally, in some embodiments of the present invention, the step of segmenting the question text and the answer text of each question-answer pair includes: segmenting the question text and the answer text of each question-answer pair by taking sentences as units; or the RDF triples are taken as units, and the question text and the answer text of each question-answer pair are segmented.
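As a concrete illustration of segmentation in units of sentences, the sketch below splits a text on common Chinese and Latin end-of-sentence marks. The exact segmentation rules are not specified in the filing, so this is only one reasonable assumption; RDF-triple segmentation would instead require a syntactic or semantic parser and is not shown.

```python
import re

def split_sentences(text: str) -> list:
    """Split a question or answer text into sentences on end-of-sentence marks."""
    parts = re.split(r"(?<=[。！？!?；;])\s*", text.strip())
    return [p for p in parts if p]

print(split_sentences("请介绍一次你在压力下完成任务的经历。当时你是如何安排时间的？"))
# -> ['请介绍一次你在压力下完成任务的经历。', '当时你是如何安排时间的？']
```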
Optionally, in some embodiments of the present invention, the inference model includes a pre-trained coding module and a pre-trained relationship graph network module. The step of performing information interaction, semantic reasoning and result classification on the question sentences and the answer sentences of each question-answer pair comprises the following steps: vector representation is carried out on each question and each answer in each question-answer pair by using the coding module so as to respectively generate a coding representation vector of each question and each answer in each question-answer pair and a coding representation vector of each answer; utilizing the relational graph network module to respectively carry out iterative reasoning on the coding expression vector of each question and the coding expression vector of each answer sentence in each question-answer pair so as to realize information interaction between each question and each answer sentence in each question-answer pair and respectively generate the logical relationship expression vector of each question and each answer sentence in each question-answer pair; semantic reasoning is carried out on the logical relationship expression vector of each question and answer pair and the logical relationship expression vector of each answer pair respectively to generate a semantic expression vector of each question and answer pair; and performing result classification according to the semantic expression vectors of the question-answer pairs to respectively obtain the scoring results of the candidate aiming at all the competency.
Preferably, in some embodiments of the present invention, the step of performing iterative inference on the coded representation vector of each question sentence in each question-answer pair and the coded representation vector of each answer sentence respectively includes: respectively performing weighted integration on the coded representation vector of each question in each question-answer pair according to the coded representation vector of each question in each question-answer pair, the coded representation vectors of the rest questions and the coded representation vectors of the answer sentences so as to realize information interaction between each question in each question-answer pair and the rest questions and the answer sentences and respectively generate the logical relationship representation vectors of each question; and respectively performing weighted integration on the coding expression vector of each answer sentence in each question-answer pair according to the coding expression vector of each answer sentence in each question-answer pair, the coding expression vectors of the rest answer sentences and the coding expression vectors of the question sentences so as to realize information interaction between each answer sentence in each question-answer pair and the rest answer sentences and the question sentences and respectively generate the logical relationship expression vectors of each answer sentence.
Preferably, in some embodiments of the present invention, the step of performing iterative inference on the coded representation vector of each question sentence in each question-answer pair and the coded representation vector of each answer sentence respectively further includes: respectively determining question nodes and neighbor nodes of question nodes and answer nodes in question-answer pairs according to the word sequences of the question texts and the answer texts in the question-answer pairs, wherein the neighbor nodes of the question nodes comprise all question nodes and all answer nodes in the question-answer pairs, the interval between the question nodes and the question nodes in the question-answer pairs is smaller than a preset window value, and the neighbor nodes of the answer nodes comprise all answer nodes and all question nodes in the question-answer pairs, the interval between the answer nodes and the question nodes in the question-answer pairs is smaller than the preset window value; respectively performing weighted integration on the code expression vector of each question node according to the code expression vectors of each question node and all the neighbor nodes of each question-answer pair so as to realize information interaction between each question node and all the neighbor nodes of each question-answer pair and respectively generate a logic relationship expression vector of each question node; and respectively performing weighted integration on the code expression vector of each question-answer node according to the code expression vectors of each question-answer node and all the neighboring nodes thereof in each question-answer pair so as to realize information interaction between each question-answer node and all the neighboring nodes thereof in each question-answer pair and respectively generate the logic relationship expression vector of each question-answer node.
Optionally, in some embodiments of the invention, the inference model further comprises a pre-trained semantic inference module. The step of performing semantic reasoning on the logical relationship expression vector of each question and answer pair and the logical relationship expression vector of each answer pair to generate a semantic expression vector of each question and answer pair includes: and performing semantic reasoning on the logical relationship expression vector of each question and answer pair and the logical relationship expression vector of each answer sentence in each question and answer pair respectively by using the semantic reasoning module to realize semantic interaction between the question and answer pairs and generate the semantic expression vector of each question and answer pair respectively.
Preferably, in some embodiments of the present invention, the inference model further comprises a plurality of pre-trained classification network modules. The step of performing result classification according to the semantic expression vector of each question-answer pair to respectively obtain the scoring results of the candidate for each competency comprises: and respectively inputting the semantic expression vectors of the question-answer pairs into corresponding classification network modules so as to respectively obtain the scoring results of the candidate for each competency.
Optionally, in some embodiments of the present invention, the step of determining a recommended position suitable for the candidate according to the scoring result of the candidate for each of the competencies and the combination of competencies required by at least one recruiting position comprises: screening the scoring results of the candidate for all the competencies according to a preset scoring threshold value to determine at least one qualified competencies of the candidate; comparing the at least one qualified competency with a combination of competencies required for at least one recruiting position to determine that the at least one qualified competency can fully cover the at least one recruiting position for the combination of competencies; and determining at least one recruiting position for which the at least one qualified competency can fully cover the combination of competency as the recommended position for the candidate.
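The screening and comparison described above amounts to a set-coverage check. The sketch below is a minimal, assumed rendering of that logic with made-up competency names and a made-up score threshold.

```python
from typing import Dict, List, Set

def recommend_posts(scores: Dict[str, int],
                    post_requirements: Dict[str, List[Set[str]]],
                    threshold: int = 3) -> List[str]:
    """A post is recommended when at least one of its required competency
    combinations is fully covered by the candidate's qualified competencies."""
    qualified = {c for c, s in scores.items() if s >= threshold}
    return [post for post, combos in post_requirements.items()
            if any(combo.issubset(qualified) for combo in combos)]

# Illustrative (made-up) data:
reqs = {"office clerk": [{"responsibility", "communication", "time management"},
                         {"responsibility", "communication", "diligence"}]}
print(recommend_posts({"responsibility": 4, "communication": 5,
                       "time management": 2, "diligence": 4}, reqs))
# -> ['office clerk']  (the second combination is fully covered)
```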
Preferably, in some embodiments of the present invention, after the step of determining the recommended positions suitable for the candidate according to the scoring result of the candidate for each of the competencies and the combination of competencies required for at least one recruiting position is performed, the position recommendation method further comprises the steps of: screening a recommended position list of a plurality of candidates according to the recruiting positions provided by the recruitment enterprise to determine recommended candidates of the recruiting positions, wherein at least one recommended position suitable for the corresponding candidate is recorded in the recommended position list; and recommending the candidate for recommendation to the recruitment enterprise.
Preferably, in some embodiments of the present invention, after the step of recommending the recommendation candidate to the recruiting enterprise is performed, the position recommendation method further comprises the steps of: and providing answer texts of the candidate for recommendation for answering each question and corresponding question texts thereof, and/or videos and/or audios of the candidate for recommendation for answering each question, and/or scoring results of the candidate for recommendation for each item of competency to the recruitment enterprise.
Optionally, in some embodiments of the present invention, before performing the step of presenting the plurality of questions to the candidate with the plurality of competencies for the candidate, the position recommendation method further comprises the steps of: respectively proposing a plurality of corresponding problems to a plurality of candidate samples according to the plurality of competencies; respectively collecting answer text samples of each candidate person sample for answering each question, and forming question-answer pair samples by each answer text sample and the corresponding question texts thereof; segmenting the question text and the answer text sample of each question-answer pair sample to obtain a plurality of question samples and a plurality of answer samples of each question-answer pair sample; respectively scoring competency of answer text samples of the candidate person samples for answering the questions according to the competency of the candidate person samples; and training the inference model to perform the functions of information interaction, semantic inference and/or result classification by using a plurality of question samples and a plurality of answer sentence samples of each question-answer pair sample and the competency scores of corresponding candidate samples for answering each question.
According to a second aspect of the present invention, there is provided the post recommendation apparatus comprising a memory and a processor. The processor is connected to the memory and configured to implement the post recommendation method provided by the first aspect of the present invention. By implementing the post recommendation method, the post recommendation device can, on one hand, perform question-answer interaction on the question texts and the answer texts of the candidate answering the questions by using an artificial intelligence technology from the perspective of competency assessment, so that the assessment precision of the artificial intelligence on each competency of the candidate is improved, and, on the other hand, automatically recommend a suitable post for the candidate according to the assessment result of each competency of the candidate.
The above computer-readable storage medium provided according to a third aspect of the present invention has computer instructions stored thereon. The computer instructions, when executed by the processor, implement the above-mentioned position recommendation method provided by the first aspect of the present invention. By implementing the position recommendation method, the computer-readable storage medium can perform question-answer interaction on the question texts and the answer texts of the candidate for answering the questions by using an artificial intelligence technology from the perspective of competence assessment, so that the assessment accuracy of the artificial intelligence on each competence of the candidate is improved, and on the other hand, a proper position can be automatically recommended for the candidate according to the assessment result of each competence of the candidate.
Drawings
The above features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments thereof in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
Fig. 1 illustrates a flow diagram provided in accordance with some embodiments of the present invention for building a competency model and a competency question bank.
FIG. 2 illustrates a flow diagram for training inference models provided in accordance with some embodiments of the present invention.
FIG. 3 illustrates a schematic diagram of a relationship graph network provided in accordance with some embodiments of the present invention.
Fig. 4 shows a flow diagram of a method of position recommendation provided according to some embodiments of the invention.
Detailed Description
The following description is given by way of example of the present invention, and other advantages and features of the present invention will become apparent to those skilled in the art from the detailed description. While the invention will be described in connection with the preferred embodiments, there is no intent to limit the features of the invention to those embodiments. On the contrary, the invention is described in connection with the embodiments for the purpose of covering alternatives or modifications that may be extended based on the claims of the present invention. In the following description, numerous specific details are included to provide a thorough understanding of the invention. The invention may, however, be practiced without these particulars. Moreover, some specific details have been omitted from the description in order to avoid confusing or obscuring the focus of the present invention.
As described above, in order to overcome the defects of weak theoretical support, rough technology and lack of qualification on the competence of the candidate in the prior art, the invention provides a position recommending method, a position recommending device and a computer readable storage medium, which are used for performing question-answer interaction on the question text and the answer text of the candidate for answering each question by using an artificial intelligence technology from the standpoint of qualification on the competence, thereby improving the evaluation precision of the artificial intelligence on each competence of the candidate and automatically recommending a proper position for the candidate according to the evaluation result of each competence of the candidate.
In some non-limiting embodiments, the post recommendation method provided by the first aspect of the present invention may be implemented by the post recommendation apparatus provided by the second aspect of the present invention, using a competency model and an inference model that are constructed and trained in advance. Specifically, the post recommendation apparatus provided by the second aspect of the present invention may include a memory and a processor. The memory includes, but is not limited to, the above-described computer-readable storage medium provided by the third aspect of the invention having computer instructions stored thereon. The processor is connected to the memory and configured to execute the computer instructions stored in the memory to implement the post recommendation method according to the first aspect of the present invention.
The working principle of the post recommendation device will be described below with reference to some methods for constructing competence models, some methods for training inference models, and some methods for performing post recommendation by using the competence models and the inference models. It will be appreciated by those skilled in the art that these methods of constructing competency models, training inference models, and position recommendations are merely non-limiting examples provided by the present invention, which are intended to clearly illustrate the broad concepts of the invention and provide specific examples for facilitating its implementation by the public, and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating a process for constructing a competency model and a competency question bank according to some embodiments of the present invention.
As shown in fig. 1, in some embodiments of the present invention, during the construction phase of the competency model and the competency question bank, a constructor may first determine a plurality of posts and interview a plurality of employees at each post to obtain interview records of the employees. Then, for each post, the constructor can score the interview record of each employee according to a plurality of predefined competency dimensions such as professional knowledge, skill level, sense of responsibility, stress tolerance, time management ability, communication ability, adaptability and enthusiasm for the position, so as to obtain the score of each employee of the post on each competency.
In some embodiments, the task of scoring the interview record of each employee on the multiple competencies required for a post may be implemented by a pre-trained competency assessment model. The competency assessment model is an artificial intelligence model that preliminarily determines each employee's score on each competency based on a neural network model (such as a CNN, an RNN or a variant thereof) and a classification network; for its basic principle, reference may be made to the prior art disclosed by Shanghai intelligent technology limited (application No. 202011060966.6).
Optionally, in other embodiments, the work of scoring the interview record of each employee for multiple competencies required by the post may also be obtained by an expert in the human resources field manually labeling the performance of the interview content in each competency dimension.
After the score of each employee at each post on each competency is obtained, the constructor of the competency model and the competency question bank can sort the interview content of each employee to obtain a competency topic set Q = [q_1, q_2, ..., q_k] for each employee and the corresponding competency scores S = [s_1, s_2, ..., s_k]. Then, the constructor can classify the employees at each post into excellent employees and common employees according to their actual work performance at the corresponding post, and, combining the competency levels L = [l_q1C1, l_q2C1, ..., l_qkC1, l_q1C2, l_q2C2, ..., l_qkC2] provided by the competency assessment model or by human resource experts, calculate the total scores of the excellent employee group C_1 and the common employee group C_2 over the competency dimensions:

total_score_C1 = s_1 * l_q1C1 + s_2 * l_q2C1 + ... + s_k * l_qkC1
total_score_C2 = s_1 * l_q1C2 + s_2 * l_q2C2 + ... + s_k * l_qkC2

where l_qiCj denotes the competency level of the j-th employee group C_j on the i-th competency question q_i.
Further, the constructor can also calculate, for the excellent employee group C_1 and the common employee group C_2, the mean total score, the mean competency level and the standard deviation over each competency dimension, by combining the competency scores S and the competency levels L. Thereafter, the constructor can perform difference tests between group C_1 and group C_2 on the mean total score, the mean competency level and the standard deviation in each competency dimension, and thereby determine the k competencies that distinguish excellent employees from common employees.

Specifically, if, in a certain competency dimension, the average total score of the excellent employee group C_1 of a post is significantly higher than that of the common employee group C_2 of the post, the constructor of the competency model and the competency question bank can conclude that this competency clearly distinguishes the excellent employees of the post from the common employees. In this way, the constructor can check, one by one, whether each competency clearly distinguishes the excellent employees of the post from the common employees, thereby determining the plurality of competencies that the excellent employees of one post should have, and further determining the plurality of competencies that the excellent employees of each post should have.
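The filing does not name the statistical test used for the difference check. Assuming a two-sample t-test on the per-employee totals, the selection of discriminative competencies could look like the sketch below; all data shown are made up.

```python
import numpy as np
from scipy import stats

def discriminative_competencies(excellent: dict, common: dict, alpha: float = 0.05) -> list:
    """Keep the competencies on which the excellent group scores significantly higher."""
    selected = []
    for comp, exc_scores in excellent.items():
        t, p = stats.ttest_ind(exc_scores, common[comp], equal_var=False)
        if t > 0 and p < alpha:        # excellent-group mean is significantly higher
            selected.append(comp)
    return selected

excellent = {"time management": np.array([4.5, 4.8, 4.2, 4.6]),
             "stress tolerance": np.array([3.9, 4.1, 3.8, 4.0])}
common = {"time management": np.array([3.1, 3.4, 2.9, 3.3]),
          "stress tolerance": np.array([3.7, 4.0, 3.6, 3.9])}
print(discriminative_competencies(excellent, common))   # expected: ['time management']
```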
After determining the plurality of competencies that the excellent employees of each post should have, the constructor may build the competency model of each post based on those competencies. Meanwhile, the constructor can prepare at least one question for each competency and construct the competency question bank based on the competencies that the excellent employees of each post should have, so that the competency model can call up the questions for each competency to comprehensively assess each competency of a candidate.
In some preferred embodiments, the competency combination that distinguishes an excellent employee from a common employee on one post is not unique. For example, an excellent office clerk needs to have a sense of responsibility for work, good communication and coordination ability and time management ability to ensure that work is finished on time. On the other hand, another office clerk who has a sense of responsibility, is hard-working, and has good communication and coordination ability, but lacks good time management ability, can also ensure that work is finished on time and thus meets the standard of an excellent employee at the post. In this case, the constructor of the competency model and the competency question bank can also configure a plurality of different competency combinations for the same post when constructing the competency model and the competency question bank, so that excellent talents meeting the requirements of the recruiting post can be screened comprehensively and individually.
It will be appreciated that the above constructor is described in a non-limiting manner and includes, but is not limited to, a person skilled in the art who performs the above construction method, as well as processors and other associated equipment that execute the above construction method.
After completing the construction of a large number (e.g., more than 1000, 2000, 5000 or 10000) of competency models and competency question banks, a person skilled in the art can retrieve one or more questions corresponding to each competency from the pre-constructed competency question banks according to all the competencies involved in the competency models of the posts, and play video or audio of each question to a candidate sample for the subsequent training of the inference model and for the post recommendation procedure that uses the inference model.
Referring to fig. 2, fig. 2 illustrates a flow diagram for training inference models provided in accordance with some embodiments of the present invention.
As shown in fig. 2, in some embodiments of the present invention, in the training phase of the inference model, a trainer may first use the competency models constructed in the above embodiments to retrieve, from the competency question bank constructed in the above embodiments, questions examining the various competencies required by a large number of posts, so as to construct a question set Q = [q_1, q_2, ..., q_k]. Thereafter, the trainer may recruit a large number of candidate samples to answer the questions, and collect the answer text samples of the candidate samples to construct an answer set A = [a_1, a_2, ..., a_k]. Then, the trainer can engage experts in the field of human resources to score the abilities of the candidate samples in the various competency dimensions based on the question set Q and the answer set A, yielding scores T = [T_1, T_2, ..., T_k], so that data on each competency of a large number of candidate samples are obtained as the basis for subsequently training the inference model. In some embodiments, the score T of each candidate sample in each competency dimension may be graded on a two-level, three-level, five-level or other multi-level scale, with a higher score indicating a stronger ability of the candidate sample in the corresponding competency dimension.
After a question set Q and an answer set A of a plurality of candidate person samples and a single score T of the plurality of candidate person samples in each competence dimension are obtained, a trainer can construct a reasoning model and trains an information interaction module, a semantic reasoning module and/or a result classification module of the reasoning model based on the sample data, so that the trained reasoning model has the function of accurately evaluating each competence of the candidate persons.
Specifically, the trainer of the inference model may first take each answer text sample a_i and its corresponding question text q_i to form a question-answer pair sample <q_i, a_i>. Then, for each question-answer pair sample <q_i, a_i>, the trainer segments the question text q_i in units of sentences to obtain a plurality of question sentence samples [q_s1, q_s2, ..., q_sc], and segments the answer text sample a_i to obtain a plurality of answer sentence samples [a_s1, a_s2, ..., a_sp], where c denotes the number of question sentences in the question text q_i and p denotes the number of answer sentences in the answer text sample a_i.
Then, the trainer can use a coding module built from sequence models such as CNN, RNN, LSTM, GRU or BERT to encode each question sentence sample in [q_s1, q_s2, ..., q_sc], obtaining the coded representation vectors [r_qs1, r_qs2, ..., r_qsc], and to encode each answer sentence sample in [a_s1, a_s2, ..., a_sp], obtaining the coded representation vectors [r_as1, r_as2, ..., r_asp]. For a specific scheme of encoding and representing text data with a sequence model, reference may be made to the prior art of application No. 202011060966.6, which is not repeated here.
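By way of example, a coding module could be realized with any of the sequence models listed above. The sketch below assumes a bidirectional GRU over token embeddings (one of the listed options, chosen arbitrarily) and returns one coded representation vector per sentence.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Maps each tokenized sentence to a single coded representation vector."""
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (num_sentences, max_len) -> (num_sentences, 2 * hidden)
        _, h = self.gru(self.emb(token_ids))
        return torch.cat([h[0], h[1]], dim=-1)        # concatenate both directions

encoder = SentenceEncoder(vocab_size=30000)
fake_sentences = torch.randint(1, 30000, (5, 20))     # 5 sentences, 20 tokens each
r = encoder(fake_sentences)                           # r[i] is the vector of sentence i
print(r.shape)                                        # torch.Size([5, 512])
```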
Thereafter, the trainer can construct a relational graph network module based on graph models (including but not limited to GATConv, RelGraphConv, GCN and TAGConv), and use the relational graph network module to perform information interaction among the question sentence samples [q_s1, q_s2, ..., q_sc] and the answer sentence samples [a_s1, a_s2, ..., a_sp] of each question-answer pair sample <q_i, a_i>, so as to represent the deep or shallow dependency information and the semantic association among the question sentences, between the question sentences and the answer sentences, and among the answer sentences, thereby improving the accuracy with which the artificial intelligence evaluates each competency of the candidate.
Taking the relational graph network RelGraphConv as an example, the trainer can construct a relational graph network according to the coded representation vector of each question sentence and those of the remaining question sentences (e.g., r_qs1 and r_qs2 ~ r_qsc), the coded representation vector of each question sentence and those of the answer sentences (e.g., r_qs1 and r_as1 ~ r_asp), and the coded representation vector of each answer sentence and those of the remaining answer sentences (e.g., r_as1 and r_as2 ~ r_asp) in the question-answer pair sample <q_i, a_i>, and realize information interaction among the question sentences, between the question sentences and the answer sentences, and among the answer sentences by means of information transmission between nodes in the relational graph network.
Referring to fig. 3, fig. 3 illustrates a schematic diagram of a relationship graph network provided in accordance with some embodiments of the present invention.
In the embodiment shown in FIG. 3, the question-answer pair sample <q_i, a_i> may include two question sentences q_1 ~ q_2 and three answer sentences a_1 ~ a_3. Correspondingly, the relational graph network may include these five nodes q_1 ~ q_2 and a_1 ~ a_3. When iterative reasoning is performed with the relational graph network, the representation vector h_i of each node is weighted and integrated based on its own h_i and the neighbor nodes h_j associated with it, i.e.

[weighted-integration formula, disclosed only as an image in the original filing: h_i is updated from w_i * h_i and the neighbor terms w_j * h_j over its nc neighbor nodes]

where w_i, w_n and w_j are all weights to be learned, and nc denotes the number of neighbor nodes of node h_i. Therefore, the relational graph network can realize information interaction among the question sentences, between the question sentences and the answer sentences, and among the answer sentences by means of information transmission between nodes, and represent the semantic association among them.
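Since the exact update formula appears only as an image in the original filing, the sketch below is a plausible reading of the described weighted integration, not the disclosed formula: each node vector h_i is combined with the mean of its transformed neighbor vectors h_j using learned weights.

```python
import torch
import torch.nn as nn

class RelGraphLayer(nn.Module):
    """One round of iterative reasoning over the question/answer sentence nodes."""
    def __init__(self, dim: int):
        super().__init__()
        self.w_self = nn.Linear(dim, dim, bias=False)    # weight applied to h_i itself
        self.w_neigh = nn.Linear(dim, dim, bias=False)   # weight shared by neighbors h_j

    def forward(self, h: torch.Tensor, neighbors: list) -> torch.Tensor:
        out = []
        for i, nbrs in enumerate(neighbors):
            agg = torch.stack([self.w_neigh(h[j]) for j in nbrs]).mean(dim=0)  # (1/nc) sum
            out.append(torch.relu(self.w_self(h[i]) + agg))
        return torch.stack(out)   # logical relationship representation vectors
```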
Compared with the prior art that adopts a traditional sequence model, the present invention uses a graph model to model the question-answer pair sample <q_i, a_i> as a relational graph network, which effectively shortens the distance between each question sentence and each answer sentence, thereby avoiding the gradient-vanishing problem caused by an excessive number of sentences when modeling with a sequence model, and ensuring deep interaction of the causal relationship between each question sentence and each answer sentence within the same question-answer pair sample <q_i, a_i>.
Based on the above description, the relational graph network can perform iterative reasoning on the coded representation vectors [r_qs1, r_qs2, ..., r_qsc] of the question sentence samples and the coded representation vectors [r_as1, r_as2, ..., r_asp] of the answer sentence samples in the question-answer pair sample <q_i, a_i>, so as to obtain the logical relationship representation vectors [l_qs1, l_qs2, ..., l_qsc] of the question sentence samples q_si and the logical relationship representation vectors [l_as1, l_as2, ..., l_asp] of the answer sentence samples a_si. These logical relationship representation vectors [l_qs1, ..., l_qsc] and [l_as1, ..., l_asp] can characterize the semantic association among the question sentences, between the question sentences and the answer sentences, and among the answer sentences of the question-answer pair sample <q_i, a_i>.
It will be appreciated by those skilled in the art that the relational graph network shown in fig. 3 comprising only 5 nodes is merely an example of a simple structure provided by the present invention, and is intended to clearly illustrate the main concepts of the present invention and provide a concrete solution for the implementation by the public without limiting the scope of the present invention.
Alternatively, in other embodiments, for a complex relational graph network comprising a large number of question sentence nodes and answer sentence nodes, the trainer may determine the neighbor nodes of each question sentence node q_si and each answer sentence node a_si in each question-answer pair sample <q_i, a_i> according to the word order of the question text q_i and the answer text sample a_i and a preset window value. For example, with a preset window value of 2, the trainer may determine all question sentence nodes whose distance from the question sentence node q_si is less than 2 (i.e., q_s(i-1) and q_s(i+1)), together with all answer sentence nodes a_s1 ~ a_sp in the question-answer pair sample <q_i, a_i>, as the neighbor nodes of the question sentence node q_si. Similarly, the trainer may determine all answer sentence nodes whose distance from the answer sentence node a_si is less than 2 (i.e., a_s(i-1) and a_s(i+1)), together with all question sentence nodes q_s1 ~ q_sc in the question-answer pair sample <q_i, a_i>, as the neighbor nodes of the answer sentence node a_si.

Thereafter, the trainer can construct a relational graph network of the question text q_i and the answer text sample a_i in the question-answer pair sample <q_i, a_i> according to each question sentence node q_si and all of its neighbor nodes, and each answer sentence node a_si and all of its neighbor nodes. This relational graph network can perform iterative reasoning on each node of the question-answer pair sample <q_i, a_i> to realize information interaction and semantic association between adjacent nodes. Thus, the relational graph network can perform weighted integration on the coded representation vector of each question sentence node q_si according to the coded representation vectors of that node and all of its neighbor nodes, so as to realize information interaction between each question sentence node q_si and all of its neighbor nodes and generate the logical relationship representation vectors [l_qs1, l_qs2, ..., l_qsc] of the question sentence nodes. Similarly, the relational graph network can perform weighted integration on the coded representation vector of each answer sentence node a_si according to the coded representation vectors of that node and all of its neighbor nodes, so as to realize information interaction between each answer sentence node a_si and all of its neighbor nodes and generate the logical relationship representation vectors [l_as1, l_as2, ..., l_asp] of the answer sentence nodes.
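The neighbor definition above can be written down directly. The sketch below builds, for an assumed node ordering (question sentences first, then answer sentences), the neighbor lists that a graph layer such as the one sketched earlier could consume.

```python
def build_neighbors(num_q: int, num_a: int, window: int = 2) -> list:
    """Nodes 0..num_q-1 are question sentences; num_q..num_q+num_a-1 are answer sentences."""
    neighbors = []
    for i in range(num_q):                                   # question sentence nodes
        near_q = [j for j in range(num_q) if j != i and abs(j - i) < window]
        all_a = list(range(num_q, num_q + num_a))            # every answer sentence node
        neighbors.append(near_q + all_a)
    for i in range(num_a):                                    # answer sentence nodes
        near_a = [num_q + j for j in range(num_a) if j != i and abs(j - i) < window]
        all_q = list(range(num_q))                            # every question sentence node
        neighbors.append(near_a + all_q)
    return neighbors

print(build_neighbors(num_q=2, num_a=3, window=2))
# -> [[1, 2, 3, 4], [0, 2, 3, 4], [3, 0, 1], [2, 4, 0, 1], [3, 0, 1]]
```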
By configuring the setting interface of the window value, a trainer can set an appropriate window value to give consideration to the performance of both the evaluation precision and the processing speed according to the requirement of the inference model on the evaluation precision of each competence of a candidate and the data processing capacity of the processors of the training equipment and the post recommendation device. The specific value of the window value does not affect the basic requirement of those skilled in the art for constructing a relationship graph network, and is not described herein in detail.
It will be appreciated by those skilled in the art that the above scheme of segmenting text in units of sentences is only a non-limiting example provided by the present invention, intended to clearly illustrate the main concept of the invention and to provide a concrete scheme convenient for the public to implement, but not to limit the scope of the invention. Optionally, in other embodiments, the trainer may also segment the question text q_i and the answer text sample a_i of the question-answer pair sample <q_i, a_i> based on a subject-predicate-object (RDF) triple structure, so as to likewise obtain a plurality of question sentence samples [q_s1, q_s2, ..., q_sc] and a plurality of answer sentence samples [a_s1, a_s2, ..., a_sp] of the question-answer pair sample <q_i, a_i>. Thereafter, the trainer can encode each question sentence sample q_si and each answer sentence sample a_si as described above and construct a relational graph network to perform information interaction between adjacent nodes, thereby generating the logical relationship representation vectors [l_qs1, l_qs2, ..., l_qsc] of the question sentence samples and the logical relationship representation vectors [l_as1, l_as2, ..., l_asp] of the answer sentence samples in each question-answer pair sample <q_i, a_i>. In this scheme, the specific processes of coded representation and information interaction are similar to those of the above embodiments and are not repeated here.
After constructing the relational graph network module of the inference model, the trainer can continue to construct a semantic reasoning module based on graph models such as the Graph Attention Network (GAT), and use the semantic reasoning module to perform semantic reasoning on the logical relationship representation vectors [l_qs1, l_qs2, ..., l_qsc] of the question sentence samples q_si and the logical relationship representation vectors [l_as1, l_as2, ..., l_asp] of the answer sentence samples a_si in the question-answer pair sample <q_i, a_i>, so as to further strengthen the semantic-level interaction among the question sentences, between the question sentences and the answer sentences, and among the answer sentences within a single question-answer pair sample <q_i, a_i>, thereby further improving the accuracy with which the artificial intelligence evaluates each competency of the candidate.
Specifically, for the relational graph network module shown in fig. 3, the semantic reasoning module at the back end may have the same two question sentence nodes q_1 ~ q_2 and three answer sentence nodes a_1 ~ a_3. The difference between the semantic reasoning module and the relational graph network module is that, in the semantic reasoning module, the representation vector h_i of each question sentence node q_si is no longer the coded representation vector r_qsi described above, but the logical relationship representation vector l_qsi obtained after the inference iterations of the relational graph network module. Correspondingly, the representation vector h_i of each answer sentence node a_si in the semantic reasoning module is no longer the coded representation vector r_asi described above, but the logical relationship representation vector l_asi obtained after the inference iterations of the relational graph network module.
When semantic reasoning is performed with the semantic reasoning module, the representation vector h_i of each node of the graph attention network is weighted and integrated based on its own h_i and the neighbor nodes h_j associated with it, i.e.

[attention formulas, disclosed only as images in the original filing: attention weights are computed between h_i and each neighbor h_j, and h_i is updated as the attention-weighted combination over its nc neighbor nodes]

where w_i, w_n and w_j are all weights to be learned, and nc denotes the number of neighbor nodes of node h_i. In this way, the graph attention network can perform semantic interaction among the question sentences, between the question sentences and the answer sentences, and among the answer sentences by means of information transmission between nodes, thereby realizing semantic reasoning among them.
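Again, the attention formulas are disclosed only as images, so the sketch below is an assumed single-head, GAT-style aggregation consistent with the surrounding description rather than the disclosed formula: attention weights between a node and its neighbors are learned, and the node vector is replaced by the attention-weighted combination.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    """Single-head graph-attention update over the logical relationship vectors."""
    def __init__(self, dim: int):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h: torch.Tensor, neighbors: list) -> torch.Tensor:
        out = []
        for i, nbrs in enumerate(neighbors):
            idx = [i] + nbrs                                  # include the node itself
            hi = self.w(h[i]).expand(len(idx), -1)
            hj = self.w(h[idx])
            e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
            alpha = torch.softmax(e, dim=0)                   # attention over neighbors
            out.append(torch.relu((alpha.unsqueeze(-1) * hj).sum(dim=0)))
        return torch.stack(out)   # semantic representation vectors h_1 .. h_(c+p)
```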
Compared with the prior art that adopts a traditional sequence model, the present invention uses a graph attention network model to perform semantic-reasoning modeling on the question-answer pair sample <q_i, a_i>, which effectively shortens the distance between each question sentence and each answer sentence, thereby avoiding the gradient-vanishing problem caused by an excessive number of sentences when modeling with a sequence model, and ensuring deep semantic interaction between the question sentences and the answer sentences within the same question-answer pair sample <q_i, a_i>.
Based on the above description, the semantic reasoning module can carry out further semantic reasoning on the logical relationship representation vectors [l_qs1, l_qs2, …, l_qsc] of the question sentence samples and the logical relationship representation vectors [l_as1, l_as2, …, l_asp] of the answer sentence samples of each question-answer pair sample <q_i, a_i>, so as to obtain the semantic representation vectors [h_1, h_2, …, h_(c+p)] of each question-answer pair sample <q_i, a_i>. These semantic representation vectors [h_1, h_2, …, h_(c+p)] can characterize the semantic association among the question sentences, between the question sentences and the answer sentences, and among the answer sentences of the question-answer pair sample <q_i, a_i>.
In some embodiments, for the case in which the answer text samples of different candidate samples contain different numbers of answer sentences, the trainer may further carry out a maximum pooling (max-pooling) or mean pooling (mean-pooling) operation on the obtained semantic representation vectors [h_1, h_2, …, h_(c+p)] to obtain a semantic representation vector H of uniform dimension.
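As a minimal illustration of this pooling step, the Python/NumPy sketch below collapses a variable number of sentence-level vectors into one fixed-size vector; the library choice and variable names are assumptions made only for the example.

import numpy as np

def pool_semantic_vectors(vectors, mode="max"):
    """Collapse (c + p) sentence vectors of shape (d,) into one vector of shape (d,)."""
    stacked = np.stack(vectors)              # shape: (c + p, d)
    if mode == "max":
        return stacked.max(axis=0)           # max-pooling over the sentence axis
    return stacked.mean(axis=0)              # mean-pooling over the sentence axis

# Usage: a pair with 3 sentences and a pair with 7 sentences both yield a (d,) vector,
# so the downstream classification heads always receive the same input size.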
After the semantic reasoning module of the inference model has been built, the trainer can further build a plurality of classification network modules at the back end of the semantic reasoning module. Each classification network module can obtain the semantic representation vector of the question-answer pair sample <q_i, a_i> corresponding to its competency dimension, and determine a corresponding classification result according to the obtained semantic representation vector. The classification result indicates the scoring result T_i of the candidate sample for the corresponding competency. For the basic principle of determining a classification result from a semantic representation vector by using a classification network, reference may be made to the prior art with the patent number/application number 202011060966.6, which is not described herein again.
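One way to picture the plurality of classification network modules is as a set of small per-competency heads sharing the same pooled input. The Python/NumPy sketch below is a simplified stand-in for whatever classifier the referenced prior art actually uses; the logistic form, dimensions and threshold are assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CompetencyHead:
    """A single classification network module for one competency dimension."""
    def __init__(self, dim, rng):
        self.w = rng.normal(scale=0.1, size=dim)    # learned weights
        self.b = 0.0                                 # learned bias

    def score(self, H):
        # Returns a binary scoring result T_i for this competency.
        return int(sigmoid(self.w @ H + self.b) >= 0.5)

rng = np.random.default_rng(0)
heads = [CompetencyHead(dim=128, rng=rng) for _ in range(5)]   # one head per competency
H = rng.normal(size=128)                                       # pooled semantic vector
T = [head.score(H) for head in heads]                          # e.g. [1, 0, 1, 1, 0]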
Based on the above construction process, the trainer can construct an inference model comprising an encoding module, a relational graph network module, a semantic reasoning module and a plurality of classification network modules. In this scheme in which a plurality of classification network modules are deployed, the entire inference model can be regarded as an end-to-end multi-task model based on single-question scoring. The interaction information of the question-answer pair samples <q_i, a_i> can be transferred, through the graph nodes of the relational graph network module, to the semantic reasoning module and to each classification network module at the back end, so that the entire inference model can be optimized as a whole.
After the construction of the inference model has been completed, the trainer can input sample data such as the question set Q, the answer set A and the competency scoring results T of the candidate samples in each competency dimension into the constructed inference model in turn, so as to synchronously train the learning parameters of the encoding module, the relational graph network module, the semantic reasoning module and/or the classification network modules, such that the trained inference model possesses the functions of encoded representation, information interaction, semantic reasoning and classification of competency scoring results.
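To make this synchronous (end-to-end) training concrete, the highly compressed Python/PyTorch sketch below sums one loss per competency so that a single backward pass updates all modules jointly. The toy Stage modules, dimensions and random data are placeholders, not the actual implementation.

import torch
from torch import nn

class Stage(nn.Module):
    """Toy stand-in for the encoding / graph / reasoning stages (a single linear layer)."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
    def forward(self, x):
        return torch.relu(self.lin(x))

dim, n_competencies = 32, 4
encoder, graph_net, reasoner = Stage(dim), Stage(dim), Stage(dim)
heads = nn.ModuleList(nn.Linear(dim, 2) for _ in range(n_competencies))   # binary scores

params = (list(encoder.parameters()) + list(graph_net.parameters())
          + list(reasoner.parameters()) + list(heads.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, dim)                              # 8 question-answer pair samples (toy)
labels = torch.randint(0, 2, (n_competencies, 8))    # one scoring result per competency

H = reasoner(graph_net(encoder(x)))                  # forward pass through shared modules
loss = sum(loss_fn(head(H), labels[k]) for k, head in enumerate(heads))
optimizer.zero_grad()
loss.backward()                                      # one backward pass updates every module
optimizer.step()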
It will be appreciated by those skilled in the art that the above-described scheme of synchronously training the inference model is only a non-limiting embodiment provided by the present invention, intended to clearly illustrate the main concept of the present invention and to provide a concrete scheme convenient for the public to implement, not to limit the scope of the present invention. Optionally, in other embodiments, the trainer may also label the intermediate parameters output by each module step by step, and train each module of the inference model separately based on each intermediate parameter, so as to achieve the same training effect.
It will be further understood by those skilled in the art that the above-described trainer is a non-limiting description, and includes, but is not limited to, persons skilled in the art who carry out the above training method, as well as processors and other related devices that carry out the above training method.
After the training of the inference model is completed, the user can use the post recommendation device provided by the second aspect of the present invention, and implement the post recommendation method provided by the first aspect of the present invention by using the pre-constructed competency model, the competency question bank, and the trained inference model.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a position recommendation method according to some embodiments of the present invention.
In some embodiments of the invention, as shown in fig. 4, when making a position recommendation for a candidate, the post recommendation device may first query the competency models of a large number of positions to determine the competencies to which these competency models relate. Thereafter, the post recommendation device may retrieve one or more questions corresponding to each competency from a pre-built competency question bank, and play video or audio of each question to the candidate so as to ask the candidate. By selecting questions according to the plurality of competencies involved in the competency models of a large number of positions, the invention can comprehensively assess the candidate's competencies in all aspects, thereby accurately and comprehensively recommending positions across all fields, industries and types for the candidate.
The candidate may answer each question in turn based on the video or audio played by the post recommendation device. Meanwhile, the post recommendation device can collect the video or audio in which the candidate answers each question. Then, the post recommendation device may extract the audio data of the candidate's answer content from the collected video, and transcribe the extracted audio data into text by means of Automatic Speech Recognition (ASR), so as to obtain the answer texts A = [a_1, a_2, …, a_k] of the candidate answering each question.
It will be appreciated by those skilled in the art that the above-described schemes of presenting the questions to the candidate via video or audio are only some non-limiting embodiments provided by the present invention, intended to clearly illustrate the main concept of the present invention and to provide some specific schemes convenient for the public to implement, not to limit the scope of protection of the present invention. Optionally, in other embodiments, the post recommendation device may also present the plurality of questions to the candidate by displaying the question texts on a display screen, a touch screen or another human-computer interaction interface, so as to achieve the same effect. In addition, the post recommendation device may also directly collect the answer texts of the candidate answering each question through a human-computer interaction interface such as a physical or virtual keyboard, thereby omitting the ASR speech-to-text transcription step.
After obtaining the answer texts A of the candidate answering each question, the post recommendation device may combine each answer text a_i with its corresponding question text q_i to form a question-answer pair <q_i, a_i>. Then, the post recommendation device may adopt a segmentation mode consistent with the training flow of the inference model, taking sentences or RDF triples as the unit, to segment the question text q_i of each question-answer pair <q_i, a_i> into a plurality of question sentences [q_s1, q_s2, …, q_sc], and the answer text a_i of each question-answer pair <q_i, a_i> into a plurality of answer sentences [a_s1, a_s2, …, a_sp], where c represents the number of question sentences in the question text q_i and p represents the number of answer sentences in the answer text a_i.
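A compact way to express this pairing-and-segmentation step is sketched below in Python; the regular-expression sentence splitter is a deliberately crude assumption standing in for whichever sentence or RDF-triple segmenter is actually used.

import re

def split_sentences(text):
    """Very rough sentence segmentation on common end-of-sentence punctuation."""
    parts = re.split(r"[。！？.!?]+", text)
    return [s.strip() for s in parts if s.strip()]

def build_pairs(questions, answers):
    """Pair each answer text a_i with its question text q_i and segment both."""
    pairs = []
    for q_i, a_i in zip(questions, answers):
        pairs.append({
            "question_sentences": split_sentences(q_i),   # [q_s1, ..., q_sc]
            "answer_sentences": split_sentences(a_i),     # [a_s1, ..., a_sp]
        })
    return pairs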
After the segmentation has yielded the plurality of question sentences [q_s1, q_s2, …, q_sc] and the plurality of answer sentences [a_s1, a_s2, …, a_sp] of a question-answer pair <q_i, a_i>, the post recommendation device can use the trained encoding module to encode, pair by pair, the plurality of question sentences [q_s1, q_s2, …, q_sc] of each question-answer pair <q_i, a_i>, so as to obtain the encoded representation vectors [r_qs1, r_qs2, …, r_qsc] of the question sentences of that question-answer pair <q_i, a_i>. Similarly, the post recommendation device can also use the encoding module to encode, pair by pair, the plurality of answer sentences [a_s1, a_s2, …, a_sp] of each question-answer pair <q_i, a_i>, so as to obtain the encoded representation vectors [r_as1, r_as2, …, r_asp] of the answer sentences of that question-answer pair <q_i, a_i>.
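The patent does not spell out which encoder produces these vectors, so the Python sketch below uses a generic pretrained transformer with mean pooling purely as an illustrative assumption; the model name "bert-base-chinese" is not taken from the patent.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")   # assumed encoder
model = AutoModel.from_pretrained("bert-base-chinese")

def encode_sentences(sentences):
    """Return one encoded representation vector r per sentence (mean-pooled token states)."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state           # (n, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)             # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)      # (n, 768)

r_q = encode_sentences(["q_s1 ...", "q_s2 ..."])    # encoded question-sentence vectors
r_a = encode_sentences(["a_s1 ...", "a_s2 ..."])    # encoded answer-sentence vectors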
After obtaining the encoded representation vectors [r_qs1, r_qs2, …, r_qsc] of the question sentences and the encoded representation vectors [r_as1, r_as2, …, r_asp] of the answer sentences of a question-answer pair <q_i, a_i>, the post recommendation device can input, pair by pair, these encoded representation vectors into the trained relational graph network module, and use the nodes of the relational graph network module to carry out iterative inference on each question sentence and each answer sentence of the question-answer pair <q_i, a_i>, so as to realize the information interaction between the question sentences and the answer sentences of the question-answer pair <q_i, a_i>, and respectively generate the logical relationship representation vectors [l_qs1, l_qs2, …, l_qsc] of the question sentences and the logical relationship representation vectors [l_as1, l_as2, …, l_asp] of the answer sentences of the question-answer pair <q_i, a_i>.
In particular, for the simple relational graph network with only a few nodes shown in FIG. 3, the relational graph network module can realize the information interaction among the question sentences, between the question sentences and the answer sentences, and among the answer sentences by means of the information transfer between the nodes of the relational graph network, and thereby represent the semantic association among the question sentences, between the question sentences and the answer sentences, and among the answer sentences. When the relational graph network is used for iterative inference, the relational graph network module may, based on the learning weights w_i, w_n and w_j determined in the preceding training step, carry out weighted integration on the representation vector h_i of each node respectively, for example in a form such as
h_i' = σ( w_i·h_i + (w_n / Nc) · Σ_{j=1..Nc} w_j·h_j )
Here, the representation vector h_i of each question sentence node is the corresponding encoded representation vector from [r_qs1, r_qs2, …, r_qsc] of the question-answer pair <q_i, a_i>, and the representation vector h_i of each answer sentence node is the corresponding encoded representation vector from [r_as1, r_as2, …, r_asp] of the question-answer pair <q_i, a_i>. The relational graph network module can iterate over the encoded representation vectors [r_qs1, r_qs2, …, r_qsc] of the question sentences and [r_as1, r_as2, …, r_asp] of the answer sentences of each question-answer pair <q_i, a_i>, so as to obtain, pair by pair, the logical relationship representation vectors [l_qs1, l_qs2, …, l_qsc] of the question sentences and [l_as1, l_as2, …, l_asp] of the answer sentences of each question-answer pair <q_i, a_i>. These logical relationship representation vectors [l_qs1, l_qs2, …, l_qsc] and [l_as1, l_as2, …, l_asp] can characterize the semantic association among the question sentences, between the question sentences and the answer sentences, and among the answer sentences of each question-answer pair <q_i, a_i>.
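As an illustration of this iterative weighted integration over a small, fully connected graph of question and answer sentence nodes, the Python/NumPy sketch below performs a few message-passing rounds; the tanh nonlinearity, the number of iterations and the random weights are assumptions made only to keep the example self-contained.

import numpy as np

def relational_graph_iterate(nodes, w_i, w_n, w_j, steps=2):
    """nodes: (n, d) encoded vectors of the c question and p answer sentences."""
    h = nodes.copy()
    for _ in range(steps):
        new_h = np.empty_like(h)
        for i in range(len(h)):
            neighbors = np.delete(h, i, axis=0)            # every other node is a neighbor
            agg = (w_n / len(neighbors)) * (w_j * neighbors).sum(axis=0)
            new_h[i] = np.tanh(w_i * h[i] + agg)           # weighted integration
        h = new_h
    return h                                               # logical relationship vectors l

rng = np.random.default_rng(1)
encoded = rng.normal(size=(5, 16))             # e.g. 2 question + 3 answer sentence vectors
l_vectors = relational_graph_iterate(encoded, w_i=0.6, w_n=0.3, w_j=0.8)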
Compared with the prior art that adopts a traditional sequence model, the graph model adopted by the invention can effectively shorten the distance between each question sentence and each answer sentence, thereby avoiding the vanishing-gradient problem caused by an excessive number of sentences when modeling with a sequence model, and ensuring deep interaction of causal relationships between the question sentences and the answer sentences of the same question-answer pair <q_i, a_i>.
Optionally, in other embodiments of the present invention, for a complex relational graph network containing a large number of question sentence nodes and a large number of answer sentence nodes, the relational graph network module may further determine, according to the word order of the question text q_i and the answer text a_i of each question-answer pair <q_i, a_i> and a preset window value, the neighbor nodes of each question sentence node q_si and of each answer sentence node a_si in each question-answer pair <q_i, a_i>. For example, for an embodiment in which the preset window value is 2, the relational graph network module may determine all question sentence nodes q_s(i-1) and q_s(i+1) whose interval from the question sentence node q_si in the question-answer pair <q_i, a_i> is less than 2, together with all answer sentence nodes a_s1 to a_sp of the question-answer pair <q_i, a_i>, as the neighbor nodes of the question sentence node q_si. Similarly, the relational graph network module may determine all answer sentence nodes a_s(i-1) and a_s(i+1) whose interval from the answer sentence node a_si in the question-answer pair <q_i, a_i> is less than 2, together with all question sentence nodes q_s1 to q_sc of the question-answer pair <q_i, a_i>, as the neighbor nodes of the answer sentence node a_si. The relational graph network module can then carry out weighted integration on the encoded representation vector of each question sentence node q_si according to the encoded representation vectors of that question sentence node q_si and of all its neighbor nodes, so as to realize the information interaction between each question sentence node q_si of the question-answer pair <q_i, a_i> and all its neighbor nodes, and respectively generate the logical relationship representation vectors [l_qs1, l_qs2, …, l_qsc] of the question sentence nodes. Similarly, the relational graph network module can carry out weighted integration on the encoded representation vector of each answer sentence node a_si according to the encoded representation vectors of that answer sentence node a_si and of all its neighbor nodes, so as to realize the information interaction between each answer sentence node a_si of the question-answer pair <q_i, a_i> and all its neighbor nodes, and respectively generate the logical relationship representation vectors [l_as1, l_as2, …, l_asp] of the answer sentence nodes.
By providing a setting interface for the window value, a user can set an appropriate window value according to the requirements of the inference model on the precision with which each competency of a candidate is evaluated and according to the data-processing capacity of the processors of the training device and the post recommendation device, so as to balance evaluation precision against processing speed. The specific value of the window does not affect the basic scheme by which those skilled in the art construct the relational graph network, and is not described herein in detail.
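The neighbor rule described above can be written down directly; the Python sketch below builds the neighbor lists for a given window value, under the assumption (taken from the example above) that every answer sentence node neighbors every question sentence node and vice versa.

def build_neighbors(num_questions, num_answers, window=2):
    """Return neighbor index lists for question nodes 0..c-1 and answer nodes 0..p-1."""
    q_neighbors, a_neighbors = {}, {}
    for i in range(num_questions):
        same_type = [j for j in range(num_questions) if j != i and abs(j - i) < window]
        q_neighbors[i] = {"questions": same_type,
                          "answers": list(range(num_answers))}      # all answer nodes
    for i in range(num_answers):
        same_type = [j for j in range(num_answers) if j != i and abs(j - i) < window]
        a_neighbors[i] = {"answers": same_type,
                          "questions": list(range(num_questions))}  # all question nodes
    return q_neighbors, a_neighbors

# Usage: with 2 question sentences, 3 answer sentences and window = 2, question node 0
# neighbors question node 1 and all three answer nodes, matching the example above.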
After obtaining the logical relationship representation vectors [l_qs1, l_qs2, …, l_qsc] of the question sentences q_si and [l_as1, l_as2, …, l_asp] of the answer sentences a_si of a question-answer pair <q_i, a_i>, the post recommendation device may input, pair by pair, the logical relationship representation vectors [l_qs1, l_qs2, …, l_qsc] and [l_as1, l_as2, …, l_asp] of each question-answer pair <q_i, a_i> into the trained semantic reasoning module, so as to initialize the nodes of the semantic reasoning module.
As described above, the representation vector h_i of each question sentence node q_si in the semantic reasoning module is the logical relationship representation vector l_qsi obtained after the inference iteration of the relational graph network module, and the representation vector h_i of each answer sentence node a_si therein is the logical relationship representation vector l_asi obtained after the inference iteration of the relational graph network module. When semantic reasoning is carried out with the semantic reasoning module, the representation vector h_i of each node of the trained graph attention network (GAT) is weighted and integrated on the basis of its own h_i and the representation vectors h_j of the neighbor nodes with which it is associated, for example in a form such as
h_i' = σ( w_i·h_i + Σ_{j=1..Nc} α_ij · w_j·h_j ), where α_ij = softmax_j( w_n · [ w_i·h_i ∥ w_j·h_j ] )
Where w_i, w_n and w_j are the learning weights determined by the aforementioned training process, and Nc represents the number of neighbor nodes of node h_i. In this way, the graph attention network can carry out semantic interaction among the question sentences, between the question sentences and the answer sentences, and among the answer sentences by means of the information transfer between nodes, thereby realizing semantic reasoning among the question sentences, between the question sentences and the answer sentences, and among the answer sentences, and respectively generating the semantic representation vectors [h_1, h_2, …, h_(c+p)] of each question-answer pair <q_i, a_i>. These semantic representation vectors [h_1, h_2, …, h_(c+p)] can characterize the semantic association among the question sentences, between the question sentences and the answer sentences, and among the answer sentences of each question-answer pair <q_i, a_i>.
Compared with the prior art that adopts a traditional sequence model, the invention adopts a graph attention network model to carry out semantic reasoning modeling on the question-answer pairs <q_i, a_i>, which can effectively shorten the distance between each question sentence and each answer sentence, thereby avoiding the vanishing-gradient problem caused by an excessive number of sentences when modeling with a sequence model, and ensuring deep semantic interaction between the question sentences and the answer sentences of the same question-answer pair <q_i, a_i>.
In some embodiments, the post recommendation device may carry out a maximum pooling (max-pooling) or mean pooling (mean-pooling) operation on the obtained semantic representation vectors [h_1, h_2, …, h_(c+p)] to obtain a semantic representation vector H of uniform dimension. In this way, even if the answer texts of different candidates contain different numbers of answer sentences, the classification network modules at the back end can still accurately determine the candidate's scoring result T_i for each competency according to the semantic representation vector H of uniform dimension.
After the semantic reasoning module has generated the semantic representation vector H of a question-answer pair <q_i, a_i>, the post recommendation device can input the semantic representation vectors output by the semantic reasoning module into the classification network modules corresponding to the respective competency dimensions, so that each classification network module determines, according to the input semantic representation vector, the classification result corresponding to its competency. The classification result indicates the candidate's scoring result T_i for that competency.
As shown in FIG. 4, after determining the scoring results T = [T_1, T_2, …, T_k] of the candidate for each competency, the post recommendation device may determine a recommended position suitable for the candidate according to these scoring results T = [T_1, T_2, …, T_k] and the competency combination C_i = [c_1, c_2, …, c_n] required by at least one recruiting position.
Specifically, in some binary-scoring embodiments, each scoring result T_i may take the value 0 or 1, where 0 indicates that the candidate does not possess the competency and 1 indicates that the candidate possesses the competency. The post recommendation device can screen the candidate's scoring results for all competencies against a preset scoring threshold of 1, so as to determine at least one qualified competency of the candidate. For example, if a candidate x has the scoring results T_x = [1, 1, 0, 1, 0], the post recommendation device may determine that the qualified competencies of candidate x comprise C_x = [c_1, c_2, c_4].
Then, the post recommendation device can adopt a simple sequential matching mode to compare, one by one, the at least one qualified competency C_x = [c_1, c_2, c_4] of candidate x with the competency combination C_i required by each recruiting position, so as to determine at least one recruiting position whose competency combination C_i can be fully covered by the at least one qualified competency C_x. For example, if the competency combination required by a recruiting position i is C_i = [c_1, c_2], the post recommendation device can determine that the at least one qualified competency C_x = [c_1, c_2, c_4] of candidate x fully covers the competency combination C_i = [c_1, c_2] required by recruiting position i, and thereby determine recruiting position i as a recommended position for candidate x.
Further, if the competency combination required by another recruiting position j is C_j = [c_2, c_4], the post recommendation device may determine that the at least one qualified competency C_x = [c_1, c_2, c_4] of candidate x also fully covers the competency combination C_j = [c_2, c_4] required by recruiting position j, thereby determining recruiting position j as a recommended position for candidate x as well, and adding recruiting position i and recruiting position j together to the recommended position list of candidate x.
Conversely, if the at least one qualified competency C_x = [c_1, c_2, c_4] of candidate x does not fully cover the competency combination C_i required by any recruiting position, the post recommendation device may determine that there is currently no recommended position suitable for candidate x, and output a result of making no recommendation.
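This coverage test is essentially a subset check; the following Python sketch reproduces the example of candidate x above, with the competency identifiers written as plain strings for illustration.

def recommend_positions(qualified, positions):
    """Return every position whose required competency combination is fully covered."""
    qualified = set(qualified)
    return [name for name, required in positions.items() if set(required) <= qualified]

qualified_x = ["c1", "c2", "c4"]                         # from T_x = [1, 1, 0, 1, 0]
positions = {"position_i": ["c1", "c2"],
             "position_j": ["c2", "c4"],
             "position_k": ["c3", "c5"]}

print(recommend_positions(qualified_x, positions))        # ['position_i', 'position_j']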
It can thus be seen that the post recommendation device provided by the present invention can, from the perspective of competency assessment, use artificial intelligence technology to carry out question-answer interaction between the question texts and the candidate's answer texts for each question, thereby improving the precision with which the artificial intelligence evaluates each competency of the candidate, and can, on the other hand, automatically recommend suitable positions for the candidate according to the assessment results of each competency of the candidate, thereby substantially freeing the enterprise interviewer.
It will be appreciated by those skilled in the art that the binary scoring described above is merely a non-limiting example of the present invention, intended to clearly illustrate the main concept of the invention and to provide a concrete scheme convenient for the public to implement, not to limit the scope of the invention.
Optionally, in other embodiments that employ ternary scoring, each scoring result T_i may take the value 0, 1 or 2, where 0 indicates that the candidate does not possess the competency, 1 indicates that the candidate basically possesses the competency, and 2 indicates that the candidate's competency is outstanding. The post recommendation device can set a corresponding scoring threshold according to the specific requirements of the recruiting enterprise, and then screen the candidate's scoring results for each competency against the set scoring threshold, so as to determine at least one qualified competency of the candidate. For example, if the recruiting enterprise urgently needs staff, the post recommendation device can set the scoring threshold to 1 and broadly screen in candidates who basically possess the required competencies, so as to increase the number of qualified candidates. As another example, if the recruiting enterprise only needs excellent talent, the post recommendation device can set the scoring threshold to 2 and finely screen for candidates with outstanding competencies, so as to further improve the quality of qualified candidates.
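For completeness, the thresholding described here amounts to a one-line filter; the Python snippet below assumes a ternary score vector and a configurable threshold.

def qualified_competencies(scores, competencies, threshold=1):
    """Keep every competency whose ternary score (0/1/2) meets the threshold."""
    return [c for c, t in zip(competencies, scores) if t >= threshold]

print(qualified_competencies([2, 1, 0, 2], ["c1", "c2", "c3", "c4"], threshold=2))  # ['c1', 'c4']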
Further, in some embodiments of the invention, in order to meet the recruitment requirements of a recruiting enterprise, the post recommendation device may also acquire, from a recruitment platform, the requirement information of a recruiting position provided by the recruiting enterprise. The post recommendation device can then screen, against the recruiting position provided by the recruiting enterprise, the recommended position lists of a large number of candidates who have been assessed in advance, determine the candidates whose recommended position lists record the corresponding position as candidates for recommendation suitable for the recruiting position, and recommend these candidates for recommendation to the recruiting enterprise. By adopting this asynchronous recruitment mode of assessing the candidates' competencies in advance and screening for suitable candidates to recommend according to the recruitment requirements of the recruiting enterprise, the invention can further free both parties to the recruitment from the time constraints of the interview, thereby comprehensively recommending suitable positions to candidates and comprehensively recommending suitable talent to recruiting enterprises.
Furthermore, after a candidate for recommendation has been recommended to the recruiting enterprise, the post recommendation device can, according to the requirements of the recruiting enterprise and the services it has purchased, provide the recruiting enterprise with the answer texts of the candidate for recommendation answering each question and the corresponding question texts, and/or the video and/or audio of the candidate for recommendation answering each question, and/or the scoring results of the candidate for recommendation for each competency, so as to provide theoretical and data support for the recommendation result, and to guide the recruiting enterprise's subsequent hiring according to the assessment results of each competency of the candidate for recommendation.
Based on the above description, the post recommendation device provided by the present invention can use the pre-constructed competency model and the pre-trained inference model to carry out question-answer interaction between the question texts and the answer texts, so as to improve the precision with which the artificial intelligence evaluates each competency of the candidate, and can automatically recommend suitable positions for the candidate according to the assessment results of each competency of the candidate.
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (16)

1. A post recommendation method is characterized by comprising the following steps:
presenting a plurality of questions to a candidate for a plurality of competencies of the candidate;
collecting answer texts of the candidate for answering the questions, and forming question-answer pairs by the answer texts and the corresponding question texts respectively;
segmenting the question text and the answer text of each question-answer pair to respectively obtain a plurality of question sentences and a plurality of answer sentences of each question-answer pair;
performing information interaction, semantic reasoning and result classification on the question sentences and the answer sentences of each question-answer pair by using a pre-trained reasoning model so as to respectively obtain the scoring results of the candidate aiming at each competency; and
and determining a recommended position suitable for the candidate according to the scoring result of the candidate aiming at each competence and at least one competence combination required by the recruiting position.
2. The position recommendation method of claim 1, wherein the step of presenting a plurality of questions to a candidate for a plurality of competencies of the candidate comprises:
determining the plurality of competencies according to a competency model for a plurality of stations;
retrieving one or more questions corresponding to each of the competencies from a pre-constructed library of competency questions; and
playing video or audio of each of the questions to the candidate.
3. A position recommendation method according to claim 2, characterized in that, before performing the step of presenting a plurality of questions to a candidate for a plurality of competencies of the candidate, the position recommendation method further comprises the steps of:
interviewing a plurality of employees at a plurality of posts to obtain interview records of the plurality of employees;
scoring the interview record of each said employee according to a predefined competency dimension to obtain a score for each said employee for each said competency;
dividing the plurality of employees into excellent employees and common employees of all posts according to work performance;
performing a difference check on the scores of the excellent employees and the common employees in each competency to determine a plurality of competencies which the excellent employees in each post should have;
constructing a competency model of each post according to a plurality of competencies which the excellent employees of each post should have; and
preparing at least one question for each of the competencies, and constructing the competency question bank based on a plurality of competencies that the excellent employees of the plurality of posts should possess.
4. The position recommendation method according to claim 2, wherein said step of collecting answer texts for said candidate to answer each of said questions comprises:
collecting video or audio of the candidate answering each question; and
and performing voice text transcription on audio data in the video or audio to obtain answer texts of the candidate for answering the questions.
5. The post recommendation method according to claim 1, wherein said step of segmenting said question text and said answer text of each of said question-answer pairs comprises:
segmenting the question text and the answer text of each question-answer pair by taking a sentence as a unit; or
And segmenting the question text and the answer text of each question-answer pair by taking the RDF triple as a unit.
6. The post recommendation method according to claim 1, wherein the inference model comprises a pre-trained coding module and a pre-trained relation graph network module, and the steps of performing information interaction, semantic reasoning and result classification on the question sentences and the answer sentences of each question-answer pair comprise:
performing vector representation on each question sentence and each answer sentence in each question-answer pair by using the coding module, so as to respectively generate a coded representation vector of each question sentence and a coded representation vector of each answer sentence in each question-answer pair;
utilizing the relational graph network module to respectively carry out iterative reasoning on the coding expression vector of each question and the coding expression vector of each answer sentence in each question-answer pair so as to realize information interaction between each question and each answer sentence in each question-answer pair and respectively generate the logical relationship expression vector of each question and each answer sentence in each question-answer pair;
performing semantic reasoning on the logical relationship expression vector of each question and answer pair and the logical relationship expression vector of each answer pair respectively to generate a semantic expression vector of each question and answer pair; and
and performing result classification according to the semantic expression vector of each question-answer pair so as to respectively obtain the scoring results of the candidate aiming at each competency.
7. The position recommendation method according to claim 6, wherein said step of performing iterative inference on the coded representation vector of each question sentence in each question-answer pair and the coded representation vector of each answer sentence respectively comprises:
respectively performing weighted integration on the coded representation vector of each question in each question-answer pair according to the coded representation vector of each question in each question-answer pair, the coded representation vectors of the rest questions and the coded representation vectors of the answer sentences so as to realize information interaction between each question in each question-answer pair and the rest questions and the answer sentences and respectively generate the logical relationship representation vectors of each question; and
and respectively carrying out weighted integration on the coded representation vector of each answer sentence in each question-answer pair according to the coded representation vector of each answer sentence, the coded representation vectors of the rest answer sentences and the coded representation vectors of the question sentences, so as to realize information interaction between each answer sentence in each question-answer pair and the rest answer sentences and the question sentences, and respectively generate a logical relationship representation vector of each answer sentence.
8. The post recommendation method according to claim 7, wherein said step of iteratively reasoning vectors representing the coded expressions of the question sentences and the answer sentences in each question-answer pair further comprises:
respectively determining the neighbor nodes of each question node and of each answer node in each question-answer pair according to the word order of the question text and the answer text of each question-answer pair, wherein the neighbor nodes of a question node comprise all question nodes in the question-answer pair whose interval from that question node is smaller than a preset window value and all answer nodes in the question-answer pair, and the neighbor nodes of an answer node comprise all answer nodes in the question-answer pair whose interval from that answer node is smaller than the preset window value and all question nodes in the question-answer pair;
respectively performing weighted integration on the coded representation vector of each question node according to the coded representation vectors of that question node and of all its neighbor nodes in each question-answer pair, so as to realize information interaction between each question node and all its neighbor nodes in each question-answer pair, and respectively generate the logical relationship representation vector of each question node; and
respectively performing weighted integration on the coded representation vector of each answer node according to the coded representation vectors of that answer node and of all its neighbor nodes in each question-answer pair, so as to realize information interaction between each answer node and all its neighbor nodes in each question-answer pair, and respectively generate the logical relationship representation vector of each answer node.
9. The post recommendation method according to claim 6, wherein the inference model further comprises a pre-trained semantic inference module, and the step of performing semantic inference on the logical relationship representation vector of each question and answer pair and the logical relationship representation vector of each answer sentence in each question and answer pair to generate the semantic representation vector of each question and answer pair comprises:
and performing semantic reasoning on the logical relationship expression vector of each question and answer pair and the logical relationship expression vector of each answer sentence in each question and answer pair respectively by using the semantic reasoning module to realize semantic interaction between the question and answer pairs and generate the semantic expression vector of each question and answer pair respectively.
10. The position recommendation method according to claim 9, wherein said inference model further comprises a plurality of pre-trained classification network modules, and said step of performing result classification according to semantic representation vectors of each said question-answer pair to obtain scoring results of said candidate for each said competency respectively comprises:
and respectively inputting the semantic expression vectors of the question-answer pairs into corresponding classification network modules so as to respectively obtain the scoring results of the candidate aiming at the competency.
11. The post recommendation method according to claim 1, wherein the step of determining the recommended position suitable for the candidate according to the scoring result of the candidate for each of the competencies and the combination of competencies required for at least one recruiting position comprises:
screening the scoring results of the candidate aiming at all the competencies according to a preset scoring threshold value so as to determine at least one qualified competencie of the candidate;
comparing the at least one qualified competency with the combination of competencies required for at least one recruiting position, so as to determine at least one recruiting position whose required combination of competencies can be fully covered by the at least one qualified competency; and
determining the at least one recruiting position whose required combination of competencies can be fully covered by the at least one qualified competency as the recommended position for the candidate.
12. The post recommendation method according to claim 11, wherein after performing the step of determining the recommended positions for the candidate based on the results of the candidate's scoring for each of the competencies and the combination of competencies required for at least one recruiting position, the post recommendation method further comprises the steps of:
screening a recommended position list of a plurality of candidates according to the recruiting positions provided by the recruitment enterprise to determine recommended candidates of the recruiting positions, wherein at least one recommended position suitable for the corresponding candidate is recorded in the recommended position list; and
and recommending the candidate for recommendation to the recruitment enterprise.
13. The post recommendation method as defined in claim 12, wherein, after performing the step of recommending the candidate for recommendation to the recruiting enterprise, the post recommendation method further comprises the steps of:
and providing answer texts of the candidate for recommendation for answering each question and corresponding question texts thereof, and/or videos and/or audios of the candidate for recommendation for answering each question, and/or scoring results of the candidate for recommendation for each item of competency to the recruitment enterprise.
14. A position recommendation method as claimed in claim 1, characterized in that, before performing the step of presenting a plurality of questions to a candidate for a plurality of competencies of the candidate, the position recommendation method further comprises the steps of:
respectively presenting a plurality of corresponding questions to a plurality of candidate samples according to the competencies;
respectively collecting answer text samples of each candidate person sample for answering each question, and forming question-answer pair samples by the answer text samples and the corresponding question texts;
segmenting the question text and the answer text sample of each question-answer pair sample to obtain a plurality of question samples and a plurality of answer samples of each question-answer pair sample;
respectively performing competency scoring, according to the plurality of competencies, on the answer text samples of each of the candidate samples answering each of the questions; and
and training the inference model to perform the functions of information interaction, semantic inference and/or result classification by using a plurality of question samples and a plurality of answer sentence samples of each question-answer pair sample and the competency scores of corresponding candidate samples for answering each question.
15. A post recommendation device, comprising:
a memory; and
a processor connected to the memory and configured to implement the position recommendation method of any of claims 1-14.
16. A computer-readable storage medium having computer instructions stored thereon, wherein the computer instructions, when executed by a processor, implement the position recommendation method of any one of claims 1-14.
CN202110791270.9A 2021-07-13 2021-07-13 Post recommendation method and device Pending CN115617960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110791270.9A CN115617960A (en) 2021-07-13 2021-07-13 Post recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110791270.9A CN115617960A (en) 2021-07-13 2021-07-13 Post recommendation method and device

Publications (1)

Publication Number Publication Date
CN115617960A true CN115617960A (en) 2023-01-17

Family

ID=84855788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110791270.9A Pending CN115617960A (en) 2021-07-13 2021-07-13 Post recommendation method and device

Country Status (1)

Country Link
CN (1) CN115617960A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116452047A (en) * 2023-04-12 2023-07-18 上海才历网络有限公司 Candidate competence evaluation method and device
CN117455430A (en) * 2023-08-31 2024-01-26 北京五八信息技术有限公司 Resume information processing method, device, equipment and storage medium based on AI
CN117455430B (en) * 2023-08-31 2024-05-17 北京五八信息技术有限公司 Resume information processing method, device, equipment and storage medium based on AI

Similar Documents

Publication Publication Date Title
CN110543557B (en) Construction method of medical intelligent question-answering system based on attention mechanism
Liu et al. Piano playing teaching system based on artificial intelligence–design and research
CN111444709A (en) Text classification method, device, storage medium and equipment
CN111275401B (en) Intelligent interview method and system based on position relation
CN112667799B (en) Medical question-answering system construction method based on language model and entity matching
Yang [Retracted] Piano Performance and Music Automatic Notation Algorithm Teaching System Based on Artificial Intelligence
CN111666376B (en) Answer generation method and device based on paragraph boundary scan prediction and word shift distance cluster matching
CN115617960A (en) Post recommendation method and device
KR20200089914A (en) Expert automatic matching system in education platform
CN114818691A (en) Article content evaluation method, device, equipment and medium
Marcolin et al. Listening to the voice of the guest: A framework to improve decision-making processes with text data
CN113705191A (en) Method, device and equipment for generating sample statement and storage medium
Shan et al. [Retracted] Research on Classroom Online Teaching Model of “Learning” Wisdom Music on Wireless Network under the Background of Artificial Intelligence
CN117609486A (en) Intelligent dialogue system in psychological field
CN111259115A (en) Training method and device for content authenticity detection model and computing equipment
Zarzour et al. Sentiment analysis based on deep learning methods for explainable recommendations with reviews
Zhai et al. A wgan-based dialogue system for embedding humor, empathy, and cultural aspects in education
Xie Recommendation of English reading in vocational colleges using linear regression training model
CN115115483B (en) Student comprehensive ability evaluation method integrating privacy protection
CN115619363A (en) Interviewing method and device
Muangnak et al. The neural network conversation model enables the commonly asked student query agents
CN116127954A (en) Dictionary-based new work specialized Chinese knowledge concept extraction method
Qi et al. Attention-based hybrid model for automatic short answer scoring
Liu et al. Suggestion mining from online reviews usingrandom multimodel deep learning
CN111428499A (en) Idiom compression representation method for automatic question-answering system by fusing similar meaning word information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination