CN113449095A - Interview data analysis method and device - Google Patents

Interview data analysis method and device

Info

Publication number
CN113449095A
CN113449095A
Authority
CN
China
Prior art keywords
interview
technical
text data
job seeker
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110751800.7A
Other languages
Chinese (zh)
Inventor
惠小珏
陈�峰
高燕煦
曹光斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110751800.7A priority Critical patent/CN113449095A/en
Publication of CN113449095A publication Critical patent/CN113449095A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/335 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2468 Fuzzy queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G06F 16/355 Class or cluster creation or modification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/205 Parsing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/105 Human resources
    • G06Q 10/1053 Employment or hiring
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Software Systems (AREA)
  • Fuzzy Systems (AREA)
  • Economics (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Tourism & Hospitality (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • General Business, Economics & Management (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention provides an interview data analysis method and device, which can be used in the technical field of artificial intelligence. The method comprises the following steps: converting acquired interview voice data into interview text data by an automatic speech recognition technology; evaluating the interview text data through a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker; and matching the interview image with a preset post standard image to generate an interview result. In this way, talent recruitment can be performed accurately, recruitment efficiency is improved, and talent loss is avoided.

Description

Interview data analysis method and device
Technical Field
The invention relates to the technical field of data processing, in particular to the technical field of artificial intelligence, and specifically to an interview data analysis method and device.
Background
At present, with the rapid development of computer network technology and the social economy, many industries need to recruit talent on a large scale. A job seeker typically goes through multiple rounds of interviews during a job search, and the interviews mainly evaluate two aspects: professional competence and comprehensive quality. In the prior art, a professional interviewer usually interviews the job seeker online and evaluates the job seeker personally. The resulting evaluation is subjective, which makes talent recruitment inaccurate and recruitment inefficient, so that talent is lost.
Disclosure of Invention
An object of the present invention is to provide an interview data analysis method, which can accurately perform talent recruitment and improve recruitment efficiency, thereby avoiding talent loss. Another object of the present invention is to provide an interview data analysis apparatus. It is yet another object of the present invention to provide a computer readable medium. It is a further object of the present invention to provide a computer apparatus.
In order to achieve the above object, the present invention discloses an interview data analysis method, including:
converting the acquired interview voice data into interview text data by an automatic speech recognition technology;
evaluating the interview text data through a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker;
and matching the interview image with a preset post standard image to generate an interview result.
Preferably, the assessment categories include technical and/or non-technical categories;
according to a preset evaluation category, the interview text data is evaluated through a specified evaluation technology, and the interview image of the job seeker is generated, wherein the method comprises the following steps:
and evaluating the interview text data by a specified evaluation technology according to the technical class and/or the non-technical class to generate an interview image of the job seeker.
Preferably, the generating the interview image of the job seeker by evaluating the interview text data according to a technical class and/or a non-technical class by a specified evaluation technique includes:
if the evaluation type is a technical type, evaluating the interview text data through a named entity recognition model to generate the technical type of the interview text data;
generating a comprehensive score of the job seeker under the technical category according to the standard answers acquired under the technical category and the interview text data through a keyword fuzzy matching technology;
and drawing the interview image of the job seeker according to the comprehensive scores in the technical categories.
Preferably, the generating, by the keyword fuzzy matching technology, the comprehensive score of the job seeker in the technical category according to the standard answer and the interview text data obtained in the technical category includes:
carrying out fuzzy matching on the standard answers and the interview text data through the keyword fuzzy matching technology, and calculating the accuracy and the recall rate;
averaging the accuracy rate and the recall rate to generate a composite score of the job seeker under the technology category.
Preferably, the generating the interview image of the job seeker by evaluating the interview text data according to a technical class and/or a non-technical class by a specified evaluation technique includes:
if the evaluation category is a non-technical category, evaluating the interview text data through a self-attention model to generate a non-technical score of the job seeker under each non-technical category;
and drawing the interview image of the job seeker according to the non-technical score under each non-technical classification.
Preferably, if the evaluation category is a technical category, the evaluating the interview text data by a named entity recognition model to generate the technical category of the interview text data includes:
generating a word vector according to the interview text data through a word vector model;
generating sentence-level features from the word vectors through a bidirectional long short-term memory network;
decoding the sentence-level features through a double affine classifier to generate a plurality of classification matrices and the classification probability of each classification matrix;
and determining the class of the classification matrix corresponding to the maximum value of the classification probability as the technical class of the interview text data.
Preferably, the interview image comprises at least one interview classification and an interview score corresponding to the at least one interview classification; the post standard graph comprises at least one standard classification and a standard score corresponding to the at least one standard classification;
the step of matching the interview image with a preset post standard image to generate an interview result comprises the following steps:
matching the interview classification with the standard classification, and judging whether the interview score is greater than or equal to the standard score;
and if the interview score is larger than or equal to the standard score, generating an interview result which is passed by the interview.
Preferably, the method further comprises:
if the interview score is smaller than the standard score, matching the interview portrait with other post standard maps to generate a matching result;
if the matching result is that the matching is successful, recommending the post corresponding to the standard diagram of the other post to the job seeker;
and if the matching result is matching failure, generating an interview result of interview failure.
The invention also discloses an interview data analysis device, which comprises:
the conversion unit is used for converting the acquired interview voice data into interview text data through an automatic speech recognition technology;
the first generation unit is used for evaluating the interview text data through a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker;
and the second generation unit is used for matching the interview image with a preset post standard image to generate an interview result.
The invention also discloses a computer-readable medium, on which a computer program is stored which, when executed by a processor, implements a method as described above.
The invention also discloses a computer device comprising a memory for storing information comprising program instructions and a processor for controlling the execution of the program instructions, the processor implementing the method as described above when executing the program.
According to the method, the acquired interview voice data is converted into interview text data through an automatic speech recognition technology; the interview text data is evaluated through a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker; and the interview image is matched with a preset post standard image to generate an interview result. In this way, talent recruitment can be performed accurately, recruitment efficiency is improved, and talent loss is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a scene schematic diagram of interview data analysis according to an embodiment of the present invention;
FIG. 2 is a flowchart of an interview data analysis method according to an embodiment of the invention;
FIG. 3 is a flowchart of another interview data analysis method according to an embodiment of the present invention;
FIG. 4 is an interview image provided by an embodiment of the invention;
FIG. 5 is another interview image provided by an embodiment of the invention;
fig. 6 is a schematic structural diagram of an interview data analysis apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the interview data analysis method and apparatus disclosed in the present application can be used in the technical field of artificial intelligence, and can also be used in any field except the technical field of artificial intelligence.
In order to facilitate understanding of the technical solutions provided in the present application, relevant background is first described. Recruitment is a competition for talent, and efficiency is a key competitive factor: the enterprise that reaches job seekers fastest often wins the talent advantage. In order to scientifically identify talent qualifications and improve recruitment efficiency, the invention provides an interview data analysis method and device that help enterprises recruit fairly and objectively: the voice data of the job seeker's answers during the interview is comprehensively analyzed to generate an interview image, and post matching is performed according to the interview image, so that the professional ability and personal quality of the job seeker can be comprehensively evaluated and talent loss can be avoided.
Fig. 1 is a schematic view of a scene of interview data analysis according to an embodiment of the present invention, as shown in fig. 1, the scene includes a job seeker 100 and an intelligent interview robot 200. Specifically, the intelligent interview robot 200 serves as a recruiter to interview the job seeker 100, an interview data analysis device is arranged inside the intelligent interview robot 200, the job seeker 100 starts interviewing, and the intelligent interview robot 200 outputs interview questions; the job seeker 100 answers the interview questions output by the intelligent interview robot 200 in a voice manner; the intelligent interview robot 200 converts the acquired interview voice data into interview text data through an automatic voice recognition technology; evaluating the interview text data by a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker; and matching the interview image with a preset post standard image to generate an interview result.
It should be noted that the intelligent interview robot 200 may output the interview question in a voice manner, or may output the interview question by displaying a text on a display screen, and the embodiment of the present invention does not limit the output manner of the interview question output by the intelligent interview robot 200.
In the technical scheme provided by the embodiment of the invention, the acquired interview voice data is converted into interview text data through an automatic speech recognition technology; the interview text data is evaluated through a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker; and the interview image is matched with a preset post standard image to generate an interview result, so that talent recruitment can be performed accurately, recruitment efficiency is improved, and talent loss is avoided.
It should be noted that the scenario of interview data analysis shown in fig. 1 is also applicable to the interview data analysis method shown in fig. 2 or fig. 3, and details thereof are not repeated herein.
The following describes an implementation process of the interview data analysis method provided by the embodiment of the invention, taking the interview data analysis device as an execution subject. It can be understood that the executing subject of the interview data analysis method provided by the embodiment of the invention includes, but is not limited to, the interview data analysis device.
Fig. 2 is a flowchart of an interview data analysis method according to an embodiment of the present invention, and as shown in fig. 2, the method includes:
Step 101, converting the acquired interview voice data into interview text data through an automatic speech recognition technology.
And 102, evaluating the interview text data through a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker.
In the embodiment of the invention, the evaluation category comprises a technical class and/or a non-technical class. And evaluating the interview text data by a specified evaluation technology according to the technical class and/or the non-technical class to generate an interview image of the job seeker. Specifically, if the evaluation category is a technical category, the interview text data is evaluated through a named entity recognition model to generate the technical category of the interview text data; generating a comprehensive score of the job seeker under the technical category according to the standard answers and the interview text data acquired under the technical category by a keyword fuzzy matching technology; and drawing an interview image of the job seeker according to the comprehensive scores in the technical categories. If the evaluation category is a non-technical category, evaluating the interview text data through a self-attention model to generate a non-technical score of the job seeker under each non-technical category; and drawing an interview image of the job seeker according to the non-technical score under each non-technical classification.
And 103, matching the interview image with a preset post standard image to generate an interview result.
In the embodiment of the invention, the interview image comprises at least one interview classification and interview scores corresponding to the at least one interview classification; the post standard chart includes at least one standard classification and a standard score corresponding to the at least one standard classification. Specifically, the interview classification is matched with the standard classification, and whether the interview score is greater than or equal to the standard score is judged; and if the interview score is greater than or equal to the standard score, generating an interview result which passes the interview.
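The matching rule in this step can be sketched as follows. This is a minimal illustration under the assumption that both the interview image and the post standard image are represented as dictionaries mapping classification names to scores; the patent does not prescribe a data structure, and the scores below are hypothetical.

```python
def match_images(interview_image, standard_image):
    """Match an interview image against a post standard image (step 103).

    Assumption: both images are dicts of classification name -> score.
    The interview passes only if every standard classification is present
    and its interview score is greater than or equal to the standard score.
    """
    for classification, standard_score in standard_image.items():
        if interview_image.get(classification, 0.0) < standard_score:
            return "fail"  # at least one score falls below the post's standard
    return "pass"          # every interview score meets or exceeds the standard

# Hypothetical scores; the classification names follow the technical
# categories named later in the text (AD, PD, ST, AM, NE).
interview = {"AD": 0.8, "PD": 0.6, "ST": 0.7, "AM": 0.9, "NE": 0.8}
standard = {"AD": 0.7, "PD": 0.5, "ST": 0.6, "AM": 0.8, "NE": 0.7}
result = match_images(interview, standard)  # -> "pass"
```

Here every interview score meets its standard score, so the result is a pass; lowering any one score below its standard would produce a fail, which in the fuller flow triggers matching against the standard images of other posts.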
In the technical scheme provided by the embodiment of the invention, the acquired interview voice data is converted into interview text data through an automatic speech recognition technology; the interview text data is evaluated through a specified evaluation technology according to a preset evaluation category to generate an interview image of the job seeker; and the interview image is matched with a preset post standard image to generate an interview result, so that talent recruitment can be performed accurately, recruitment efficiency is improved, and talent loss is avoided.
Fig. 3 is a flowchart of another interview data analysis method according to an embodiment of the present invention, as shown in fig. 3, the method includes:
step 201, converting the obtained interview voice data into interview text data by an Automatic Speech Recognition (ASR) technology.
In the embodiment of the invention, each step is executed by the interview data analysis device.
In the embodiment of the invention, when the job seeker answers an interview question, interview voice data can be entered into the interview data analysis device by voice input, so that the interview data analysis device acquires the interview voice data; the interview data analysis device then converts the interview voice data into interview text data through the ASR technology. The interview text data retains the pauses that occur during the job seeker's voice input.
And step 202, evaluating the interview text data by a specified evaluation technology according to the technical class and/or the non-technical class to generate an interview image of the job seeker.
In the embodiment of the invention, the evaluation categories include technical and/or non-technical categories, and the interview data analysis device can evaluate the job seeker on the technical categories and/or the non-technical categories. The technical category reflects the job seeker's professional competence for the post applied for, and the non-technical category reflects the job seeker's personal quality in answering the interview questions.
If the evaluation category is a technology category, step 202 specifically includes:
step 2021, evaluating the interview text data through the named entity recognition model to generate the technical category of the interview text data.
In the embodiment of the invention, the named entity recognition model is built on a double affine (Biaffine) classifier. Specifically, a preset interview question library contains a large number of interview questions and the answers corresponding to them; the interview questions and their corresponding answers are annotated with technical categories by distant supervision; and the named entity recognition model is trained offline on the annotated interview question library. The technical categories include, but are not limited to, Architecture Design (AD), Project Development (PD), System Test (ST), Application Maintenance (AM), and Network and Equipment (NE). It is to be understood that the named entity recognition model provided by the embodiment of the present invention is not limited to being built on Biaffine and may be built on other classifiers.
In the embodiment of the present invention, step 2021 specifically includes:
and 2121, generating a word vector according to the interview text data through the word vector model.
Specifically, the interview text data is input into a word vector model, and word vectors are output, wherein the word vectors are the representations of Chinese characters and words in the interview text data. The word vector model may be a BERT context vector model or a GloVe word vector, and it is understood that the word vector model provided in the embodiment of the present invention is not limited to the BERT context vector model or the GloVe word vector, and may be other word vector models.
Step 2221, generating sentence-level features from the word vectors through a bidirectional long short-term memory network (BiLSTM).
Specifically, the word vectors are input into the BiLSTM for encoding, and the sentence-level features are output.
Step 2321, decoding sentence level features through a double affine classifier to generate a plurality of classification matrixes and classification probability of each classification matrix.
Specifically, the sentence level features are input into a dual affine classifier for decoding, a plurality of classification matrixes are generated, and the classification probability of each classification matrix is generated. Wherein each classification matrix represents a technology class, and the classification probability of each classification matrix represents the probability that the interview text data is classified into the technology class.
Step 2421, determining the category of the classification matrix corresponding to the maximum value of the classification probability as the technical category of the interview text data.
In the embodiment of the invention, the maximum value of the classification probability represents the maximum possibility that the interview text data is the technical class of the classification matrix; and determining the category of the classification matrix corresponding to the maximum value of the classification probability as the technical category of the interview text data.
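Steps 2121 through 2421 can be sketched end to end with toy tensors. Everything below is illustrative rather than the patent's actual model: the feature dimension, the random parameter matrices, and the collapsing of the BiLSTM output into a single sentence feature vector are all assumptions made for the sketch.

```python
import numpy as np

CATEGORIES = ["AD", "PD", "ST", "AM", "NE"]  # technical categories from the text

def biaffine_classify(sentence_features, U):
    """Score each technical category with a biaffine form, softmax the
    scores into classification probabilities, and return the index of the
    category whose probability is the maximum (step 2421)."""
    scores = np.array([sentence_features @ U[c] @ sentence_features
                       for c in range(U.shape[0])])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                 # softmax over category scores
    return int(np.argmax(probs)), probs

rng = np.random.default_rng(0)
sentence_features = rng.standard_normal(8)        # stand-in for BiLSTM output
U = rng.standard_normal((len(CATEGORIES), 8, 8))  # one biaffine matrix per category
idx, probs = biaffine_classify(sentence_features, U)
```

`CATEGORIES[idx]` is then taken as the technical category of the interview text data.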
Step 2022, generating a comprehensive score of the job seeker under the technical category according to the standard answers and the interview text data acquired under the technical category by the keyword fuzzy matching technology.
In the embodiment of the invention, the standard answers are obtained from the interview question library under the technical category.
Specifically, fuzzy matching is carried out between the standard answers and the interview text data through the keyword fuzzy matching technology, and the accuracy rate and the recall rate are calculated; the accuracy rate and the recall rate are averaged to generate the comprehensive score of the job seeker under the technical category. The accuracy rate represents the proportion of keywords the job seeker answered correctly for a certain question to the total number of keywords in the answer, and the recall rate represents the proportion of keywords the job seeker answered correctly to the number of keywords in the complete correct answer to the question.
The calculation process of the comprehensive score of the job seeker under the technical category is described below by using a specific example:
The calculation is illustrated with the interview question "How many configuration methods does Spring provide?". The technical category of this interview question is architecture design, so the standard answer corresponding to the question is obtained under architecture design; the interview text data and the standard answer are input into the keyword fuzzy matching technology, which outputs the accuracy rate and the recall rate. Specifically, the keywords of the interview text are XML and API, and the keywords of the standard answer are XML, annotation, and Java. The job seeker answered 1 keyword correctly out of the 2 keywords in the answer, so the accuracy rate of the answer is 50%; the full correct answer has 3 keywords, so the recall rate of the answer is 33%; averaging the accuracy rate and the recall rate gives the job seeker a comprehensive score of 41.5% under architecture design.
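The arithmetic of this scoring can be reproduced with a short function. One simplification is loudly assumed here: "fuzzy matching" is approximated by exact set intersection, since the patent leaves the fuzzy-matching rule unspecified.

```python
def composite_score(answer_keywords, standard_keywords):
    """Average of accuracy rate and recall rate for one interview answer.

    Assumption: keyword fuzzy matching is simplified to exact set
    intersection; the patent does not specify the matching rule.
    """
    hits = set(answer_keywords) & set(standard_keywords)
    # Accuracy rate: correct keywords / total keywords in the answer.
    accuracy = len(hits) / len(set(answer_keywords)) if answer_keywords else 0.0
    # Recall rate: correct keywords / keywords in the full correct answer.
    recall = len(hits) / len(set(standard_keywords)) if standard_keywords else 0.0
    return (accuracy + recall) / 2

# Worked example: answer keywords {XML, API} vs. standard {XML, annotation, Java}.
score = composite_score({"XML", "API"}, {"XML", "annotation", "Java"})
```

This gives an accuracy rate of 1/2 and a recall rate of 1/3, so the score is about 0.417; the 41.5% in the text follows from rounding the recall rate to 33% before averaging.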
Further, if there are a plurality of interview questions in a certain technical category and a plurality of composite scores are obtained, the plurality of composite scores are averaged, and the average is determined as the final composite score of the job seeker in the technical category.
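The final aggregation described above is a plain mean over the per-question comprehensive scores (a minimal sketch with hypothetical scores):

```python
from statistics import mean

def final_composite_score(question_scores):
    """Average the comprehensive scores of all interview questions in one
    technical category to get the job seeker's final score there."""
    return mean(question_scores)

# Hypothetical per-question scores under a single technical category.
final = final_composite_score([0.415, 0.60, 0.50])
```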
Step 2023, drawing the interview image of the job seeker according to the comprehensive scores in the technical categories.
In the embodiment of the present invention, the type of the interview image may be a radar chart; it is to be understood that the type of the interview image provided in the embodiment of the present invention is not limited to a radar chart and may be another type of graph.
Fig. 4 is an interview image provided in an embodiment of the present invention. As shown in fig. 4, taking a radar chart as the type of the interview image as an example, the comprehensive score of architecture design is 0.8, the comprehensive score of project development is 0.6, the comprehensive score of system test is 0.7, the comprehensive score of application maintenance is 0.9, and the comprehensive score of network and equipment is 0.8. The comprehensive scores of the job seeker on each technical category can be seen intuitively from the radar chart, which objectively reflects the professional ability of the job seeker and provides a reliable basis for subsequent post matching and post recommendation.
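The geometry of such a radar chart is simple to compute: category i is placed at angle 2πi/n with a radius equal to its score. The sketch below uses the Fig. 4 scores; the coordinate convention (first axis along +x) is an assumption, and a plotting library would then draw the resulting polygon.

```python
import numpy as np

def radar_vertices(scores):
    """Return the (x, y) vertices of a radar-chart polygon: the i-th
    category sits at angle 2*pi*i/n, at a radius equal to its score."""
    values = np.asarray(list(scores.values()), dtype=float)
    angles = np.linspace(0.0, 2 * np.pi, len(values), endpoint=False)
    return values * np.cos(angles), values * np.sin(angles)

# Comprehensive scores from Fig. 4.
fig4_scores = {"architecture design": 0.8, "project development": 0.6,
               "system test": 0.7, "application maintenance": 0.9,
               "network and equipment": 0.8}
xs, ys = radar_vertices(fig4_scores)
```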
If the evaluation category is non-technical, step 202 specifically includes:
step 3021, evaluating the interview text data through the self-attention model to generate a non-technical score of the job seeker under each non-technical classification.
In the embodiment of the invention, the self-attention model can capture the internal relevance of the interview text data. Because the interview text data includes the pauses and invalid vocabulary produced while the job seeker answers the interview questions, the quality of the interview text data can be evaluated through the self-attention model, that is, the job seeker's personal quality is evaluated. The invalid vocabulary includes, but is not limited to, filler words unrelated to the answer, such as "um", "this", and "then". Specifically, the interview text data is input into the self-attention model, and the non-technical scores of the job seeker are output.
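The self-attention mechanism named here can be sketched in a few lines of numpy. This shows only the attention computation itself; the actual scoring model (its trained weights, pooling, and score head) is not disclosed by the patent and is not reproduced here.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over token vectors x of shape
    (n_tokens, dim): each output row is a convex combination of all rows,
    which is how the model captures the internal relevance of the text."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                     # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ x

tokens = np.eye(4, 8)         # 4 toy one-hot token vectors of dimension 8
mixed = self_attention(tokens)
```

In the patent's use, a model built on this mechanism would be trained so that answers full of pauses and filler words receive lower non-technical scores.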
In embodiments of the present invention, the non-technical classifications include, but are not limited to, behavioral capacity, language organization and coordination, and knowledge mastery.
Step 3022: draw the interview portrait of the job seeker according to the non-technical score under each non-technical classification.
In this embodiment of the present invention, the interview portrait may be a radar chart. It should be understood that the type of interview portrait provided in this embodiment of the present invention is not limited to a radar chart and may be another type of chart.
In this embodiment of the present invention, the speech of the job seeker during the interview is comprehensively analyzed to generate a personalized interview portrait, so that the comprehensive ability of the job seeker can be evaluated objectively, the interview process becomes more efficient and scientific, and the interview costs of both enterprises and job seekers are reduced.
Fig. 5 is another interview portrait provided by an embodiment of the present invention. As shown in Fig. 5, taking a radar chart as the type of the interview portrait as an example, the comprehensive score for drive is 0.7, the comprehensive score for language organization ability is 0.6, and the comprehensive score for knowledge mastery is 0.8. The comprehensive score of the job seeker in each non-technical classification can be seen intuitively from the radar chart, the personal quality of the job seeker is reflected objectively, and a reliable basis is provided for subsequent post matching and post recommendation.
Step 203: match the interview portrait with a preset post standard portrait, and judge whether the interview score is greater than or equal to the standard score. If the interview score is greater than or equal to the standard score, perform step 204; if the interview score is less than the standard score, perform step 205.
In this embodiment of the present invention, the interview portrait includes at least one interview classification and an interview score corresponding to each interview classification; the post standard portrait includes at least one standard classification and a standard score corresponding to each standard classification. If the evaluation category is the technical category, the interview classification represents a technical category, and the corresponding interview score represents the comprehensive score under that technical category; if the evaluation category is the non-technical category, the interview classification represents a non-technical classification, and the corresponding interview score represents the comprehensive score under that non-technical classification.
In this embodiment of the present invention, the post standard portrait is preset according to the requirements of each post, and the post standard portrait may also be a radar chart.
Specifically, the interview classification is matched with the standard classification. If the corresponding interview score is greater than or equal to the standard score, the job seeker meets the post standard, and step 204 continues to be executed; if the corresponding interview score is less than the standard score, the job seeker does not meet the post standard, and step 205 is executed.
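The matching in steps 203 to 205 can be sketched as a per-classification score comparison. This is a minimal sketch under the assumption that a post matches only when every standard score is met or exceeded; the post names and threshold values are invented for illustration.

```python
# Sketch of step 203: compare an interview portrait against a post standard portrait.
# Assumption: a post matches only if every standard classification is met or exceeded.

def matches_post(portrait: dict, standard: dict) -> bool:
    return all(portrait.get(cat, 0.0) >= score for cat, score in standard.items())

portrait = {"architecture design": 0.8, "project development": 0.6, "system testing": 0.7}
post_a = {"architecture design": 0.7, "system testing": 0.7}  # met -> interview passes
post_b = {"project development": 0.8}                          # not met -> try other posts

def recommend(portrait: dict, posts: dict) -> list:
    # Steps 205/206: match the portrait against the standard portraits of other posts
    # and collect the posts that can be recommended to the job seeker.
    return [name for name, std in posts.items() if matches_post(portrait, std)]
```

Here `matches_post(portrait, post_a)` succeeds, so an interview-passed result would be generated; against `post_b` it fails, so the portrait would be matched against other posts via `recommend`.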
Step 204: if the interview score is greater than or equal to the standard score, generate an interview result indicating that the interview is passed.

In this embodiment of the present invention, if the interview score is greater than or equal to the standard score, the job seeker meets the post standard, and an interview result indicating that the interview is passed is generated, showing that the job seeker has passed this round of interviews. If there is a next round of interviews, the job seeker can enter the next round; if there is no next round, the job seeker is notified that the interview is successful.
Step 205: if the interview score is less than the standard score, match the interview portrait with the standard portraits of other posts to generate a matching result. If the matching result is that the matching is successful, perform step 206; if the matching result is that the matching fails, perform step 207.

In this embodiment of the present invention, if the interview score is less than the standard score and the job seeker does not meet the post standard, the interview portrait of the job seeker is matched with the standard portraits of other posts to generate a matching result. The matching process is the same as in step 203 and is not described again here. The matching result is either that the matching is successful or that the matching fails. If the matching is successful, the job seeker is suitable for one of the other posts, and step 206 continues to be executed; if the matching fails, there is no post suitable for the job seeker, and step 207 continues to be executed.
Step 206: if the matching result is that the matching is successful, recommend the post corresponding to the standard portrait of the other post to the job seeker.

In this embodiment of the present invention, if the matching is successful, the matched post is recommended to the job seeker, and the job seeker is asked about his or her intention.

In this embodiment of the present invention, when the job seeker is not suitable for the post for which he or she is interviewing, the interview portrait of the job seeker is matched with the standard portraits of other posts; if the matching is successful, a post suitable for the job seeker is recommended to the job seeker.
Step 207: if the matching result is that the matching fails, generate an interview result indicating that the interview fails.

In this embodiment of the present invention, if the matching fails, the job seeker is notified that the interview fails, indicating that there is no post suitable for the job seeker.
In the technical solution of the interview data analysis method provided by the embodiment of the present invention, the acquired interview voice data is converted into interview text data through automatic speech recognition technology; the interview text data is evaluated through a specified evaluation technique according to a preset evaluation category to generate an interview portrait of the job seeker; and the interview portrait is matched with a preset post standard portrait to generate an interview result. In this way, talent recruitment can be performed accurately, recruitment efficiency is improved, and talent loss is avoided.
Fig. 6 is a schematic structural diagram of an interview data analysis apparatus according to an embodiment of the present invention. The interview data analysis apparatus is configured to execute the above interview data analysis method. As shown in Fig. 6, the interview data analysis apparatus includes: a conversion unit 11, a first generation unit 12, and a second generation unit 13.

The conversion unit 11 is configured to convert the acquired interview voice data into interview text data through automatic speech recognition technology.

The first generation unit 12 is configured to evaluate the interview text data through a specified evaluation technique according to a preset evaluation category to generate an interview portrait of the job seeker.

The second generation unit 13 is configured to match the interview portrait with a preset post standard portrait to generate an interview result.
In this embodiment of the present invention, the first generation unit 12 is specifically configured to evaluate the interview text data through the specified evaluation technique according to the technical category and/or the non-technical category to generate the interview portrait of the job seeker.

In this embodiment of the present invention, the first generation unit 12 specifically includes: a first evaluation subunit 121, a first generation subunit 122, and a first drawing subunit 123.

The first evaluation subunit 121 is configured to, if the evaluation category is the technical category, evaluate the interview text data through a named entity recognition model to generate the technical category of the interview text data.

The first generation subunit 122 is configured to generate a comprehensive score of the job seeker under the technical category from the standard answers acquired under the technical category and the interview text data through a keyword fuzzy matching technique.

The first drawing subunit 123 is configured to draw the interview portrait of the job seeker according to the comprehensive score under each technical category.
In this embodiment of the present invention, the first generation unit 12 further specifically includes: a second evaluation subunit 124 and a second drawing subunit 125.

The second evaluation subunit 124 is configured to, if the evaluation category is the non-technical category, evaluate the interview text data through a self-attention model to generate a non-technical score of the job seeker under each non-technical classification.

The second drawing subunit 125 is configured to draw the interview portrait of the job seeker according to the non-technical score under each non-technical classification.
In this embodiment of the present invention, the first evaluation subunit 121 is specifically configured to: generate word vectors from the interview text data through a word vector model; generate sentence-level features from the word vectors through a bidirectional long short-term memory (BiLSTM) network; decode the sentence-level features through a biaffine classifier to generate a plurality of classification matrices and a classification probability for each classification matrix; and determine the category of the classification matrix corresponding to the maximum classification probability as the technical category of the interview text data.
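The decoding step performed by the first evaluation subunit 121 can be illustrated numerically. The NumPy sketch below shows only the biaffine scoring and the per-span class selection; the feature matrix stands in for real BiLSTM outputs, and all weights are random placeholders rather than trained parameters.

```python
# Sketch of the decoder only: a biaffine scorer over sentence-level features.
import numpy as np

rng = np.random.default_rng(0)
n, d, num_classes = 6, 8, 4            # tokens, feature width, technical categories
H = rng.normal(size=(n, d))            # stand-in for BiLSTM sentence-level features

U = rng.normal(size=(num_classes, d, d))  # one bilinear form per class (learned in practice)
b = rng.normal(size=(num_classes,))       # per-class bias

# Biaffine span scoring: S[c, i, j] = h_i^T U_c h_j + b_c, giving one
# n x n "classification matrix" of span scores per technical category.
S = np.einsum("id,cde,je->cij", H, U, H) + b[:, None, None]

# Softmax over classes for every span (i, j), then take the class with
# maximum probability, as the text describes.
shifted = np.exp(S - S.max(axis=0, keepdims=True))
probs = shifted / shifted.sum(axis=0, keepdims=True)
best_class = probs.argmax(axis=0)      # shape (n, n): predicted category per span
```

In the patented pipeline the argmax over classification probabilities, not the raw scores, determines the technical category assigned to the interview text.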
In this embodiment of the present invention, the second generation unit 13 specifically includes a first matching subunit 131 and a second generation subunit 132.

The first matching subunit 131 is configured to match the interview classification with the standard classification and judge whether the interview score is greater than or equal to the standard score.

The second generation subunit 132 is configured to generate an interview result indicating that the interview is passed if the interview score is greater than or equal to the standard score.

In this embodiment of the present invention, the second generation unit 13 further includes a second matching subunit 133, a recommendation subunit 134, and a third generation subunit 135.

The second matching subunit 133 is configured to match the interview portrait with the standard portraits of other posts to generate a matching result if the interview score is less than the standard score.

The recommendation subunit 134 is configured to recommend the post corresponding to the standard portrait of the other post to the job seeker if the matching result is that the matching is successful.

The third generation subunit 135 is configured to generate an interview result indicating that the interview fails if the matching result is that the matching fails.
In this embodiment of the present invention, the first generation subunit 122 is specifically configured to: perform fuzzy matching between the standard answers and the interview text data through the keyword fuzzy matching technique, and calculate an accuracy rate and a recall rate; and average the accuracy rate and the recall rate to generate the comprehensive score of the job seeker under the technical category.
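The scoring performed by the first generation subunit 122 reduces to averaging precision (the "accuracy rate" in the patent's wording) and recall over matched keywords. The sketch below substitutes exact-token overlap for the keyword fuzzy matching technique; the keyword sets are invented for illustration.

```python
# Sketch of the composite scoring: average precision and recall of keyword matches
# between the standard answer and the job seeker's transcribed answer.
# Exact-set intersection stands in for the fuzzy matcher here.

def composite_score(standard_keywords: set, answer_keywords: set) -> float:
    if not answer_keywords or not standard_keywords:
        return 0.0
    hits = standard_keywords & answer_keywords
    precision = len(hits) / len(answer_keywords)  # matched / keywords the job seeker used
    recall = len(hits) / len(standard_keywords)   # matched / keywords in the standard answer
    return (precision + recall) / 2

score = composite_score({"index", "cache", "shard"}, {"cache", "shard", "queue", "log"})
```

With these example sets, precision is 2/4 and recall is 2/3, so the comprehensive score is their mean.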
In the solution of the embodiment of the present invention, the acquired interview voice data is converted into interview text data through automatic speech recognition technology; the interview text data is evaluated through a specified evaluation technique according to a preset evaluation category to generate an interview portrait of the job seeker; and the interview portrait is matched with a preset post standard portrait to generate an interview result. In this way, talent recruitment can be performed accurately, recruitment efficiency is improved, and talent loss is avoided.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer device, which may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
An embodiment of the present invention provides a computer device, including a memory and a processor, where the memory is used to store information including program instructions, and the processor is used to control execution of the program instructions, and the program instructions are loaded and executed by the processor to implement the steps of the embodiment of the interview data analysis method.
Referring now to FIG. 7, shown is a schematic diagram of a computer device 600 suitable for use in implementing embodiments of the present application.
As shown in Fig. 7, the computer apparatus 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the computer apparatus 600. The CPU 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), and the like, as well as a speaker and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (11)

1. A method of interview data analysis, the method comprising:
converting the acquired interview voice data into interview text data through automatic speech recognition technology;
evaluating the interview text data through a specified evaluation technique according to a preset evaluation category to generate an interview portrait of the job seeker; and
matching the interview portrait with a preset post standard portrait to generate an interview result.
2. The interview data analysis method according to claim 1, wherein the evaluation categories comprise a technical category and/or a non-technical category;
the evaluating the interview text data through a specified evaluation technique according to a preset evaluation category to generate an interview portrait of the job seeker comprises:
evaluating the interview text data through the specified evaluation technique according to the technical category and/or the non-technical category to generate the interview portrait of the job seeker.
3. The interview data analysis method according to claim 2, wherein the evaluating the interview text data through the specified evaluation technique according to the technical category and/or the non-technical category to generate the interview portrait of the job seeker comprises:
if the evaluation category is the technical category, evaluating the interview text data through a named entity recognition model to generate the technical category of the interview text data;
generating a comprehensive score of the job seeker under the technical category from the standard answers acquired under the technical category and the interview text data through a keyword fuzzy matching technique; and
drawing the interview portrait of the job seeker according to the comprehensive score under each technical category.
4. The interview data analysis method according to claim 3, wherein the generating a comprehensive score of the job seeker under the technical category from the standard answers acquired under the technical category and the interview text data through a keyword fuzzy matching technique comprises:
performing fuzzy matching between the standard answers and the interview text data through the keyword fuzzy matching technique, and calculating an accuracy rate and a recall rate; and
averaging the accuracy rate and the recall rate to generate the comprehensive score of the job seeker under the technical category.
5. The interview data analysis method according to claim 2, wherein the evaluating the interview text data through the specified evaluation technique according to the technical category and/or the non-technical category to generate the interview portrait of the job seeker comprises:
if the evaluation category is the non-technical category, evaluating the interview text data through a self-attention model to generate a non-technical score of the job seeker under each non-technical classification; and
drawing the interview portrait of the job seeker according to the non-technical score under each non-technical classification.
6. The interview data analysis method according to claim 3, wherein the evaluating the interview text data through a named entity recognition model to generate the technical category of the interview text data, if the evaluation category is the technical category, comprises:
generating word vectors from the interview text data through a word vector model;
generating sentence-level features from the word vectors through a bidirectional long short-term memory network;
decoding the sentence-level features through a biaffine classifier to generate a plurality of classification matrices and a classification probability for each classification matrix; and
determining the category of the classification matrix corresponding to the maximum classification probability as the technical category of the interview text data.
7. The method according to claim 1, wherein the interview portrait comprises at least one interview classification and an interview score corresponding to the at least one interview classification, and the post standard portrait comprises at least one standard classification and a standard score corresponding to the at least one standard classification;
the matching the interview portrait with a preset post standard portrait to generate an interview result comprises:
matching the interview classification with the standard classification, and judging whether the interview score is greater than or equal to the standard score; and
if the interview score is greater than or equal to the standard score, generating an interview result indicating that the interview is passed.
8. The interview data analysis method according to claim 7, further comprising:
if the interview score is less than the standard score, matching the interview portrait with the standard portraits of other posts to generate a matching result;
if the matching result is that the matching is successful, recommending the post corresponding to the standard portrait of the other post to the job seeker; and
if the matching result is that the matching fails, generating an interview result indicating that the interview fails.
9. An interview data analysis apparatus, comprising:
a conversion unit configured to convert the acquired interview voice data into interview text data through automatic speech recognition technology;
a first generation unit configured to evaluate the interview text data through a specified evaluation technique according to a preset evaluation category to generate an interview portrait of the job seeker; and
a second generation unit configured to match the interview portrait with a preset post standard portrait to generate an interview result.
10. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the interview data analysis method according to any one of claims 1 to 8.
11. A computer device comprising a memory for storing information including program instructions and a processor for controlling the execution of the program instructions, wherein the program instructions are loaded and executed by the processor to implement the interview data analysis method of any one of claims 1-8.
CN202110751800.7A 2021-07-02 2021-07-02 Interview data analysis method and device Pending CN113449095A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110751800.7A CN113449095A (en) 2021-07-02 2021-07-02 Interview data analysis method and device


Publications (1)

Publication Number Publication Date
CN113449095A true CN113449095A (en) 2021-09-28

Family

ID=77815048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110751800.7A Pending CN113449095A (en) 2021-07-02 2021-07-02 Interview data analysis method and device

Country Status (1)

Country Link
CN (1) CN113449095A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663042A (en) * 2022-02-11 2022-06-24 北京斗米优聘科技发展有限公司 Intelligent telephone calling recruitment method and device, electronic equipment and storage medium
CN118014545A (en) * 2024-02-19 2024-05-10 海安新知人工智能科技有限公司 Recruitment interview AI scoring algorithm

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544104A (en) * 2018-11-01 2019-03-29 平安科技(深圳)有限公司 A kind of recruitment data processing method and device
CN111695338A (en) * 2020-04-29 2020-09-22 平安科技(深圳)有限公司 Interview content refining method, device, equipment and medium based on artificial intelligence
WO2021051586A1 (en) * 2019-09-18 2021-03-25 平安科技(深圳)有限公司 Interview answer text classification method, device, electronic apparatus and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination