WO2018000271A1 - Intent scene recognition method and system based on a user portrait - Google Patents

Intent scene recognition method and system based on a user portrait

Info

Publication number
WO2018000271A1
Authority
WO
WIPO (PCT)
Prior art keywords
intention
intent
user
candidate
module
Prior art date
Application number
PCT/CN2016/087756
Other languages
English (en)
French (fr)
Inventor
王昊奋
邱楠
杨新宇
Original Assignee
深圳狗尾草智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳狗尾草智能科技有限公司
Priority to PCT/CN2016/087756 priority Critical patent/WO2018000271A1/zh
Priority to CN201680001743.8A priority patent/CN106489148A/zh
Publication of WO2018000271A1 publication Critical patent/WO2018000271A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/953 - Querying, e.g. by the use of web search engines
    • G06F16/9535 - Search customisation based on user profiles and personalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • the present invention relates to the field of data processing technologies, and in particular, to a method and system for intent scene recognition based on a user portrait.
  • the user portrait, also known as a persona, is an effective tool for profiling target users and linking user needs with design direction.
  • for example, during product development the user portrait can be used to position and plan the product; in implementation, it can be treated as a collection of tags characterizing the user, such as basic attributes (age, gender, education) or the user's interest features; during product promotion, potential customer groups can be mined from user portraits for targeted product recommendations.
  • user portraits are gradually being applied to more fields.
  • as the demand for fast, accurate access to information grows, intent recognition technology is emerging; it analyzes the user's intent and returns a concise, accurate answer.
  • existing intent recognition, however, judges intent only from the information entered in the current interaction; it is not personalized to the user, is error-prone, and the user experience is not good enough.
  • the present invention provides a method and system for intent scene recognition based on a user portrait, which combine the user portrait vector with the vectors of conventional intent recognition to generate personalized answers, deliver a personalized question-answering experience, and greatly improve accuracy.
  • the present invention provides an intent scene recognition method based on a user portrait, comprising: Step 1: a user provides a multimodal input, which undergoes multimodal input conversion into text; Step 2: intent recognition is performed on the converted text, and each resulting intent is scored (in embodiments, intent recognition may be performed in a conventional way); Step 3: a similarity is computed between the vector generated from the user portrait and the vector of each candidate intent; Step 4: the similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
  • steps 2-4 are performed on a server.
  • the present invention also provides an intent scene recognition method based on a user portrait, comprising: Step 1: a user inputs text; Step 2: intent recognition is performed on the text, and each resulting candidate intent is scored; Step 3: a similarity is computed between the vector generated from the user portrait and the vector of each candidate intent; Step 4: the similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
  • steps 2-4 are performed on a server.
  • the present invention further provides an intent scene recognition system based on a user portrait, comprising: a multimodal input conversion module for converting the user's multimodal input into text; an intent recognition module for performing intent recognition on the converted text and scoring each resulting candidate intent; a user portrait similarity calculation module for computing the similarity between the vector generated from the user portrait and the vector of each candidate intent; and an intent output module for taking the computed similarity as another score, weighting and adding it to the score obtained in the conventional way, reranking, and outputting the highest-scoring candidate intent as the final intent.
  • the intent recognition module, the user portrait similarity calculation module, and the intent output module are on a server.
  • the present invention further provides an intent scene recognition system based on a user portrait, comprising: a text input module for receiving text input by the user; an intent recognition module for performing intent recognition on the received text and scoring each candidate intent; a user portrait similarity calculation module for computing the similarity between the vector generated from the user portrait and the vector of each candidate intent; and an intent output module for taking the computed similarity as another score, weighting and adding it to the score obtained in the conventional way, reranking, and outputting the highest-scoring candidate intent as the final intent.
  • the intent recognition module, the user portrait similarity calculation module, and the intent output module are on a server.
  • the similarity between the user portrait vector and each intent recognition vector is computed, and the computed similarity is weighted together with the intent recognition score obtained in the conventional way, yielding a more accurate result.
  • FIG. 1 is a flowchart of an intent scene recognition method based on a user portrait according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to an embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to another embodiment of the present invention.
  • FIG. 1 is a flowchart of an intent scene recognition method based on a user portrait according to an embodiment of the present invention, comprising the following steps:
  • Step 1: The user provides a multimodal input, which undergoes multimodal input conversion into text. For example, the user inputs speech, and the speech is converted into a text question.
  • multimodal input includes, but is not limited to, video, face, expression, scene, voiceprint, fingerprint, iris and pupil, light sensing, and other information.
  • Step 2: Intent recognition is performed on the converted text, and each resulting intent is scored.
  • the intent recognition can be performed in a conventional manner;
  • Step 3: A similarity is computed between the vector generated from the user portrait and the vector of each candidate intent;
  • Step 4: The similarity computed in Step 3 is taken as another score, weighted and added to the scores obtained in the conventional way; the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
  • in Step 1, the user can also directly input a text question, and Steps 2-4 are performed on the server.
  • that is, the steps of speech input and speech conversion are not strictly necessary.
  • FIG. 2 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to an embodiment of the present invention, comprising a multimodal input conversion module (e.g., a voice input conversion module), an intent recognition module, a user portrait similarity calculation module, and an intent output module.
  • the voice input conversion module converts the user's speech input into text; the intent recognition module performs intent recognition on the converted text and scores each resulting candidate intent; the user portrait similarity calculation module computes the similarity between the vector generated from the user portrait and the vector of each candidate intent; and the intent output module takes the computed similarity as another score, weights and adds it to the score obtained in the conventional way, reranks, and outputs the highest-scoring candidate intent as the final intent.
  • FIG. 3 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to another embodiment of the present invention, including a text input module, an intent recognition module, a user portrait similarity calculation module, and an intent output module.
  • the text input module is configured to receive text input by the user and send the text to the intent recognition module;
  • the intent recognition module is configured to perform intent recognition on the received text and score each resulting candidate intent;
  • the user portrait similarity calculation module is configured to compute the similarity between the vector generated from the user portrait and the vector of each candidate intent; and the intent output module is configured to take the computed similarity as another score, weight and add it to the score obtained in the conventional way, rerank, and output the highest-scoring candidate intent as the final intent.
  • the intent recognition module, the user portrait similarity calculation module, and the intent output module are on the server.
  • the method and system for intent scene recognition based on a user portrait realize intent recognition by means of the user portrait, satisfy the need for personalization, and improve accuracy.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Machine Translation (AREA)

Abstract

An intent scene recognition method and system based on a user portrait, comprising: Step 1: a user provides a multimodal input, which undergoes multimodal input conversion into text; Step 2: intent recognition is performed on the converted text, and each resulting candidate intent is scored; Step 3: a similarity is computed between the vector generated from the user portrait and the vector of each candidate intent; Step 4: the similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent. The method and system realize intent recognition by means of the user portrait, satisfy the need for personalization, and improve accuracy.

Description

Intent scene recognition method and system based on a user portrait
Technical Field
The present invention relates to the field of data processing technologies, and in particular to an intent scene recognition method and system based on a user portrait.
Background Art
A user portrait, also known as a persona, is an effective tool for profiling target users and linking user needs with design direction. For example, during product development it can be used to position and plan the product; in implementation, the user portrait can be treated as a collection of tags characterizing the user, such as basic attributes like age, gender, and education, or the user's interest features; during product promotion, potential customer groups can be mined from user portraits for targeted product recommendations. With the continued development of information technology, user portraits are gradually being applied in more fields.
As the demand for fast, accurate access to information keeps growing, intent recognition technology is emerging; it analyzes the user's intent and returns a concise, accurate answer. Existing intent recognition, however, judges intent only from the information entered in the current interaction; it is not personalized to the user, is error-prone, and the user experience is not good enough.
Summary of the Invention
In view of the deficiencies of the prior art, the present invention provides an intent scene recognition method and system based on a user portrait, which combine the user portrait vector with the vectors of conventional intent recognition to generate personalized answers, deliver a personalized question-answering experience, and greatly improve accuracy.
To solve the above technical problem, the present invention provides an intent scene recognition method based on a user portrait, comprising: Step 1: a user provides a multimodal input, which undergoes multimodal input conversion into text; Step 2: intent recognition is performed on the converted text, and each resulting intent is scored (in embodiments of the present invention, intent recognition may be performed in a conventional way); Step 3: a similarity is computed between the vector generated from the user portrait and the vector of each candidate intent; Step 4: the similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
Preferably, Steps 2-4 are performed on a server.
The present invention also provides an intent scene recognition method based on a user portrait, comprising: Step 1: a user inputs text; Step 2: intent recognition is performed on the text, and each resulting candidate intent is scored; Step 3: a similarity is computed between the vector generated from the user portrait and the vector of each candidate intent; Step 4: the similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
Preferably, Steps 2-4 are performed on a server.
To solve the above technical problem, the present invention further provides an intent scene recognition system based on a user portrait, comprising: a multimodal input conversion module for converting the user's multimodal input into text; an intent recognition module for performing intent recognition on the converted text and scoring each resulting candidate intent; a user portrait similarity calculation module for computing the similarity between the vector generated from the user portrait and the vector of each candidate intent; and an intent output module for taking the computed similarity as another score, weighting and adding it to the score obtained in the conventional way, reranking, and outputting the highest-scoring candidate intent as the final intent.
Preferably, the intent recognition module, the user portrait similarity calculation module, and the intent output module are on a server.
To solve the above technical problem, the present invention further provides an intent scene recognition system based on a user portrait, comprising: a text input module for receiving text input by the user; an intent recognition module for performing intent recognition on the received text and scoring each resulting candidate intent; a user portrait similarity calculation module for computing the similarity between the vector generated from the user portrait and the vector of each candidate intent; and an intent output module for taking the computed similarity as another score, weighting and adding it to the score obtained in the conventional way, reranking, and outputting the highest-scoring candidate intent as the final intent.
Preferably, the intent recognition module, the user portrait similarity calculation module, and the intent output module are on a server.
Overall, compared with the prior art, the technical solution of the present invention has the following beneficial effects:
1. Performing intent recognition through the user portrait satisfies the need for personalization;
2. The similarity between the user portrait vector and the intent recognition vectors is computed, and the computed similarity is weighted together with the intent recognition scores obtained in the conventional way, yielding more accurate results.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an intent scene recognition method based on a user portrait according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to another embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it.
FIG. 1 is a flowchart of an intent scene recognition method based on a user portrait according to an embodiment of the present invention, comprising the following steps:
Step 1: The user provides a multimodal input, which undergoes multimodal input conversion into text. For example, the user inputs speech, and the speech is converted into a text question. Note that "multimodal input" here includes, but is not limited to, video, face, expression, scene, voiceprint, fingerprint, iris and pupil, light sensing, and other information.
Step 2: Intent recognition is performed on the converted text, and each resulting intent is scored. In embodiments of the present invention, intent recognition may be performed in a conventional way;
Step 3: A similarity is computed between the vector generated from the user portrait and the vector of each candidate intent;
Step 4: The similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
In Step 1, the user can also directly input a text question, and Steps 2-4 are performed on a server. In other words, the speech input and speech conversion steps are not strictly necessary.
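To make Steps 2-4 concrete, here is a minimal Python sketch of the rescoring: the cosine similarity between a user portrait vector and each candidate intent vector serves as a second score, which is added with weights to the conventional intent score before reranking. The weighting factor `alpha`, the toy vectors, and all function names are illustrative assumptions; the patent does not specify how the vectors are generated or how the weights are chosen.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors; returns 0.0 for zero vectors."""
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(a @ b) / denom if denom else 0.0

def rerank_intents(candidates, portrait_vec, alpha=0.5):
    """Step 4: weighted addition of the conventional intent score and the
    user-portrait similarity (Step 3), then rerank by the combined score.

    candidates: list of (intent_name, conventional_score, intent_vector)
    """
    rescored = [
        (name,
         alpha * score + (1.0 - alpha) * cosine_similarity(portrait_vec, vec))
        for name, score, vec in candidates
    ]
    return sorted(rescored, key=lambda item: item[1], reverse=True)

# Toy example: a portrait vector leaning toward music-related interests
# lifts "play_music" above a slightly higher-scoring conventional candidate.
portrait = np.array([0.9, 0.1, 0.2])
candidates = [
    ("play_music",    0.55, np.array([0.8, 0.2, 0.1])),
    ("check_weather", 0.60, np.array([0.1, 0.9, 0.3])),
]
ranked = rerank_intents(candidates, portrait)
print(ranked[0][0])  # -> "play_music", the final output intent
```

In this toy run, "check_weather" wins on the conventional score alone (0.60 vs. 0.55), but the portrait similarity reverses the order, which is exactly the personalization effect the method aims for.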
FIG. 2 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to an embodiment of the present invention, comprising a multimodal input conversion module (for example, a voice input conversion module), an intent recognition module, a user portrait similarity calculation module, and an intent output module. The voice input conversion module converts the user's speech input into text; the intent recognition module performs intent recognition on the converted text and scores each resulting candidate intent; the user portrait similarity calculation module computes the similarity between the vector generated from the user portrait and the vector of each candidate intent; and the intent output module takes the computed similarity as another score, weights and adds it to the score obtained in the conventional way, reranks, and outputs the highest-scoring candidate intent as the final intent.
FIG. 3 is a schematic structural diagram of an intent scene recognition system based on a user portrait according to another embodiment of the present invention, comprising a text input module, an intent recognition module, a user portrait similarity calculation module, and an intent output module. The text input module receives text input by the user and sends it to the intent recognition module; the intent recognition module performs intent recognition on the received text and scores each resulting candidate intent; the user portrait similarity calculation module computes the similarity between the vector generated from the user portrait and the vector of each candidate intent; and the intent output module takes the computed similarity as another score, weights and adds it to the score obtained in the conventional way, reranks, and outputs the highest-scoring candidate intent as the final intent.
In one embodiment, the intent recognition module, the user portrait similarity calculation module, and the intent output module are on a server.
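The module structure of FIG. 2 and FIG. 3 can likewise be sketched as a small pipeline. This is a minimal illustration under assumed interfaces: the class name `IntentScenePipeline`, the callback signatures, and the default weight are hypothetical, since the patent names the modules but does not define their APIs.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

# (intent name, conventional score, intent vector)
Candidate = Tuple[str, float, np.ndarray]

def _cos(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, mirroring the sketch above."""
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return float(a @ b) / denom if denom else 0.0

@dataclass
class IntentScenePipeline:
    # Multimodal (FIG. 2) or text (FIG. 3) input conversion module.
    to_text: Callable[[object], str]
    # Intent recognition module: text -> scored candidate intents.
    recognize: Callable[[str], List[Candidate]]
    # Weight between the conventional score and the portrait similarity.
    alpha: float = 0.5

    def run(self, raw_input: object, portrait_vec: np.ndarray) -> str:
        """User portrait similarity calculation and intent output modules:
        rescore each candidate and return the highest-scoring intent."""
        text = self.to_text(raw_input)
        candidates = self.recognize(text)
        best = max(
            candidates,
            key=lambda c: self.alpha * c[1]
            + (1.0 - self.alpha) * _cos(portrait_vec, c[2]),
        )
        return best[0]

# Text-only deployment (FIG. 3): the conversion module is the identity;
# a FIG. 2 deployment would pass a speech-to-text converter instead.
pipeline = IntentScenePipeline(
    to_text=lambda x: str(x),
    recognize=lambda text: [("greet", 0.7, np.array([1.0, 0.0])),
                            ("goodbye", 0.6, np.array([0.0, 1.0]))],
)
print(pipeline.run("good night", portrait_vec=np.array([0.2, 0.9])))
```

Swapping `to_text` is the only change needed to move between the FIG. 2 and FIG. 3 embodiments, which matches the description's note that the speech input and conversion steps are optional.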
The intent scene recognition method and system based on a user portrait provided by the present invention realize intent recognition by means of the user portrait, satisfy the need for personalization, and improve accuracy.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (8)

  1. An intent scene recognition method based on a user portrait, characterized by comprising:
    Step 1: a user provides a multimodal input, which undergoes multimodal input conversion into text;
    Step 2: intent recognition is performed on the converted text, and each resulting candidate intent is scored;
    Step 3: a similarity is computed between the vector generated from the user portrait and the vector of each candidate intent;
    Step 4: the similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
  2. The intent scene recognition method based on a user portrait according to claim 1, characterized in that Steps 2-4 are performed on a server.
  3. An intent scene recognition method based on a user portrait, characterized by comprising:
    Step 1: a user inputs text;
    Step 2: intent recognition is performed on the text, and each resulting candidate intent is scored;
    Step 3: a similarity is computed between the vector generated from the user portrait and the vector of each candidate intent;
    Step 4: the similarity computed in Step 3 is taken as another score, weighted and added to the score obtained in the conventional way, the candidates are reranked, and the highest-scoring candidate intent is output as the final intent.
  4. The intent scene recognition method based on a user portrait according to claim 3, characterized in that Steps 2-4 are performed on a server.
  5. An intent scene recognition system based on a user portrait, characterized by comprising:
    a multimodal input conversion module for converting the user's multimodal input into text;
    an intent recognition module for performing intent recognition on the converted text and scoring each resulting candidate intent;
    a user portrait similarity calculation module for computing the similarity between the vector generated from the user portrait and the vector of each candidate intent; and
    an intent output module for taking the computed similarity as another score, weighting and adding it to the score obtained in the conventional way, reranking, and outputting the highest-scoring candidate intent as the final intent.
  6. The intent scene recognition system based on a user portrait according to claim 5, characterized in that the intent recognition module, the user portrait similarity calculation module, and the intent output module are on a server.
  7. An intent scene recognition system based on a user portrait, characterized by comprising: a text input module for receiving text input by the user; an intent recognition module for performing intent recognition on the received text and scoring each resulting candidate intent; a user portrait similarity calculation module for computing the similarity between the vector generated from the user portrait and the vector of each candidate intent; and an intent output module for taking the computed similarity as another score, weighting and adding it to the score obtained in the conventional way, reranking, and outputting the highest-scoring candidate intent as the final intent.
  8. The intent scene recognition system based on a user portrait according to claim 7, characterized in that the intent recognition module, the user portrait similarity calculation module, and the intent output module are on a server.
PCT/CN2016/087756 2016-06-29 2016-06-29 Intent scene recognition method and system based on a user portrait WO2018000271A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2016/087756 WO2018000271A1 (zh) 2016-06-29 2016-06-29 Intent scene recognition method and system based on a user portrait
CN201680001743.8A CN106489148A (zh) 2016-06-29 2016-06-29 Intent scene recognition method and system based on a user portrait

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/087756 WO2018000271A1 (zh) 2016-06-29 2016-06-29 Intent scene recognition method and system based on a user portrait

Publications (1)

Publication Number Publication Date
WO2018000271A1 (zh) 2018-01-04

Family

ID=58286072

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/087756 WO2018000271A1 (zh) 2016-06-29 2016-06-29 Intent scene recognition method and system based on a user portrait

Country Status (2)

Country Link
CN (1) CN106489148A (zh)
WO (1) WO2018000271A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783730A (zh) * 2019-01-03 2019-05-21 深圳壹账通智能科技有限公司 产品推荐方法、装置、计算机设备和存储介质

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109176535B (zh) * 2018-07-16 2021-10-19 北京光年无限科技有限公司 基于智能机器人的交互方法及***
CN109783733B (zh) * 2019-01-15 2020-11-06 腾讯科技(深圳)有限公司 用户画像生成装置及方法、信息处理装置及存储介质
CN111737670B (zh) * 2019-03-25 2023-08-18 广州汽车集团股份有限公司 多模态数据协同人机交互的方法、***及车载多媒体装置
CN110457447A (zh) * 2019-05-15 2019-11-15 国网浙江省电力有限公司电力科学研究院 一种电网任务型对话***
CN110136699A (zh) * 2019-07-10 2019-08-16 南京硅基智能科技有限公司 一种基于文本相似度的意图识别方法
CN114692639A (zh) * 2020-12-25 2022-07-01 华为技术有限公司 一种文本纠错方法和电子设备
CN112991004A (zh) * 2021-02-06 2021-06-18 上海红星美凯龙泛家信息服务有限公司 基于画像的兴趣分类评分方法、***及计算机存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104951428A (zh) * 2014-03-26 2015-09-30 阿里巴巴集团控股有限公司 用户意图识别方法及装置
CN105095357A (zh) * 2015-06-24 2015-11-25 百度在线网络技术(北京)有限公司 一种用于咨询数据处理的方法和装置
CN105183848A (zh) * 2015-09-07 2015-12-23 百度在线网络技术(北京)有限公司 基于人工智能的人机聊天方法和装置
CN105487663A (zh) * 2015-11-30 2016-04-13 北京光年无限科技有限公司 一种面向智能机器人的意图识别方法和***

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8311294B2 (en) * 2009-09-08 2012-11-13 Facedouble, Inc. Image classification and information retrieval over wireless digital networks and the internet
CN102800006B (zh) * 2012-07-23 2016-09-14 姚明东 基于客户购物意图挖掘的实时商品推荐方法
CN103235812B (zh) * 2013-04-24 2015-04-01 中国科学院计算技术研究所 查询多意图识别方法和***
CN105068661B (zh) * 2015-09-07 2018-09-07 百度在线网络技术(北京)有限公司 基于人工智能的人机交互方法和***


Also Published As

Publication number Publication date
CN106489148A (zh) 2017-03-08

Similar Documents

Publication Publication Date Title
WO2018000271A1 (zh) Intent scene recognition method and system based on a user portrait
WO2018000270A1 (zh) Personalized answer generation method and system based on a user portrait
WO2020143844A1 (zh) Intent analysis method and apparatus, display terminal, and computer-readable storage medium
CN107492379B Voiceprint creation and registration method and apparatus
WO2020155766A1 (zh) Rejection method, apparatus and device in intent recognition, and storage medium
US10394854B2 (en) Inferring entity attribute values
US10068588B2 (en) Real-time emotion recognition from audio signals
US20180293990A1 (en) Method and device for processing voiceprint authentication
CN109271537B Text-to-image generation method and system based on distillation learning
CN103514170B Text classification method and apparatus for speech recognition
KR20220064940A Speech generation method and apparatus, electronic device, and storage medium
Xia et al. Audiovisual speech recognition: A review and forecast
WO2021051877A1 (zh) Method for obtaining input text in an artificial-intelligence interview, and related apparatus
KR20200010650A Deep-learning-based automatic gesture recognition method and system
CN110992988A Speech emotion recognition method and apparatus based on domain adversarial training
Zhao et al. A survey on automatic emotion recognition using audio big data and deep learning architectures
JP7280705B2 Machine learning apparatus, program, and machine learning method
CN113743267A Multimodal video emotion visualization method and apparatus based on spiral and text
CN111444321A Question answering method and apparatus, electronic device, and storage medium
CN117315334A Image classification method, model training method, apparatus, device, and medium
WO2023154351A2 (en) Apparatus and method for automated video record generation
WO2020001182A1 (zh) Voiceprint recognition method, electronic apparatus, and computer-readable storage medium
US20160005336A1 (en) Sign language image input method and device
CN103761294A Query method and apparatus based on handwriting trajectory and speech recognition
CN114398896A Information entry method and apparatus, electronic device, and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906672

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16906672

Country of ref document: EP

Kind code of ref document: A1