CN110111770A - Multilingual network social translation method, system, device and medium - Google Patents

Multilingual network social translation method, system, device and medium

Info

Publication number
CN110111770A
CN110111770A CN201910389958.7A
Authority
CN
China
Prior art keywords
user
text
voice data
social
conversion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910389958.7A
Other languages
Chinese (zh)
Inventor
都风忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Puyang Peak Network Technology Co Ltd
Original Assignee
Puyang Peak Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Puyang Peak Network Technology Co Ltd
Priority to CN201910389958.7A
Publication of CN110111770A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/263 Language identification
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/005 Language recognition
    • G10L 15/04 Segmentation; Word boundary detection
    • G10L 15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L 51/07 User-to-user messaging characterised by the inclusion of specific contents
    • H04L 51/10 Multimedia information
    • H04L 51/52 User-to-user messaging for supporting social networking services

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Machine Translation (AREA)

Abstract

The invention discloses a multilingual network social translation method, system, device and medium, comprising: obtaining a user's voice data during online social interaction in real time, and identifying the language the user is using; converting the voice data into text according to the identified language, and translating the converted text into target text according to the language used by the user's social contact; and returning both the converted text and the target text to the user and the user's social contact. The invention removes the communication barrier that arises when people who speak different languages interact online, supports diverse application scenarios, meets one-to-many communication needs, effectively improves social efficiency, and can also help the user and their social contacts learn each other's languages.

Description

Multilingual network social translation method, system, device and medium
Technical field
The present invention relates to the technical field of online social networking, and in particular to a multilingual network social translation method, system, device and medium.
Background art
Many translation applications are available on the market today, but the functionality they offer is essentially simple text translation or speech translation performed one request at a time. The user's operation flow is relatively passive, there is a noticeable translation delay, and the effect of dynamic, synchronized translation cannot be achieved. Such applications also cannot satisfy the translation needs of varied social scenarios such as voice messages, text chat and on-screen conversation.
Summary of the invention
In view of the above defects in the prior art, the present invention provides a multilingual network social translation method, system, device and medium. The user's voice data is first converted into text according to the language the user is using, and the converted text is then translated into target text according to the language used by the user's social contact. Voice read-aloud and real-time text correction functions are also provided, which effectively improve the social efficiency of people who speak different languages and allow each side to understand the other's expression and communicate without barriers.
The specific content of the invention is as follows:
A multilingual network social translation method, comprising:
obtaining a user's voice data during online social interaction in real time, and identifying the language used by the user, the languages including Chinese, English, Japanese, Korean, French, Latin, Portuguese and the like;
converting the voice data into text according to the identified language, and translating the converted text into target text according to the language used by the user's social contact;
returning the converted text and the target text to the user and the user's social contact.
Further, the voice data includes dialog-style segmented voice data, video-call voice data and voice-call voice data.
The dialog-style segmented voice data is similar to the voice messages sent one by one in WeChat; under normal circumstances, each dialog-style voice segment lasts less than 60 seconds.
Further, when the voice data is dialog-style segmented voice data, after the converted text and the target text are returned to the user and the user's social contact, the method further includes:
providing the user and the user's social contact with a voice read-aloud function for the target text, according to the language used by the user's social contact.
Further, after the converted text and the target text are returned to the user and the user's social contact, the method further includes:
obtaining the user's manual operation information in real time; if the manual operation is an adjustment to the converted text, correcting the corresponding target text in real time according to the user's adjustment, and updating in real time, according to the adjustment and the correction, the converted text and the target text that have been returned to the user and the user's social contact.
A multilingual network social translation system, comprising:
a voice data acquisition module, configured to obtain a user's voice data during online social interaction in real time;
a language identification module, configured to identify the language used by the user according to the voice data, the languages including Chinese, English, Japanese, Korean, French, Latin, Portuguese and the like;
a text conversion module, configured to convert the voice data into text according to the language identified by the language identification module;
a target text translation module, configured to translate the text converted by the text conversion module into target text according to the language used by the user's social contact;
a text return module, configured to return the converted text and the target text to the user and the user's social contact.
Further, the voice data includes dialog-style segmented voice data, video-call voice data and voice-call voice data.
The dialog-style segmented voice data is similar to the voice messages sent one by one in WeChat; under normal circumstances, each dialog-style voice segment lasts less than 60 seconds.
Further, when the voice data is dialog-style segmented voice data, the system further includes a voice return module, specifically configured to:
after the text return module has been executed, provide the user and the user's social contact with a voice read-aloud function for the target text, according to the language used by the user's social contact.
Further, the system includes a text correction module, specifically configured to:
after the text return module has been executed, obtain the user's manual operation information in real time; if the manual operation is an adjustment to the converted text, correct the corresponding target text in real time according to the user's adjustment, and update in real time, according to the adjustment and the correction, the converted text and the target text that have been returned to the user and the user's social contact.
The method and system of the invention can be implemented independently, or can be integrated into existing social tools in the form of a plug-in or the like, for example into WeChat, Weibo, or video-call and voice-call software, providing users with an efficient and convenient real-time conversation translation environment.
An electronic device, comprising: a housing, a processor, a memory, a circuit board and a power supply circuit, wherein the circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit supplies power to each circuit or component of the electronic device; the memory stores executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the foregoing multilingual network social translation method.
A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the foregoing multilingual network social translation method.
The beneficial effects of the present invention are as follows:
The present invention converts a user's voice data during online social interaction into text, translates the text according to the language used by the social contact, and returns both the converted text and the translated text to the user and the social contact to read; application scenarios are diverse, and one-to-many communication needs can be met. The present invention can also correct and update the translated text in real time according to the user's modifications to the converted text, avoiding distorted meaning caused by dialects or speech-recognition errors, conveying an accurate and complete statement to the social contact in real time, avoiding communication problems caused by ambiguity, and improving the user experience. The method and system of the invention can be embedded in or integrated into existing social software and are convenient to use; they solve the communication barrier that arises when people who speak different languages interact socially, effectively improve social efficiency, and can also help the user and their social contacts learn each other's languages.
Brief description of the drawings
In order to more clearly illustrate the specific embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the specific embodiments or the prior art are briefly described below. In all of the drawings, similar elements or parts are generally identified by similar reference numerals, and the elements or parts are not necessarily drawn to scale.
Fig. 1 is a flow chart of a multilingual network social translation method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a scenario interaction with a voice read-aloud function according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a scenario interaction in an on-screen conversation scenario according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a scenario interaction in a voice-call scenario according to an embodiment of the present invention;
Fig. 5 is a structural diagram of a multilingual network social translation system according to an embodiment of the present invention;
Fig. 6 is a structural diagram of another multilingual network social translation system according to an embodiment of the present invention;
Fig. 7 is a structural diagram of a third multilingual network social translation system according to an embodiment of the present invention;
Fig. 8 is a structural schematic diagram of an electronic device according to an embodiment of the present invention.
Specific embodiment
The technical solutions of the present invention are described in detail below with reference to the embodiments and the accompanying drawings. The following embodiments are only used to clearly illustrate the technical solutions of the present invention; they serve only as examples and cannot be used to limit the scope of protection of the present invention.
It should be noted that, unless otherwise indicated, the technical or scientific terms used in this application shall have the ordinary meaning understood by a person of ordinary skill in the art to which the present invention belongs.
As shown in Fig. 1, an embodiment of a multilingual network social translation method of the present invention comprises:
S11: obtaining a user's voice data during online social interaction in real time, and identifying the language used by the user; the languages include Chinese, English, Japanese, Korean, French, Latin, Portuguese and the like;
S12: converting the voice data into text according to the identified language, and translating the converted text into target text according to the language used by the user's social contact;
S13: returning the converted text and the target text to the user and the user's social contact. Under normal circumstances, on the user's side the converted text is displayed on top and the target text underneath, while on the social contact's side the display is reversed, with the target text on top and the converted text underneath. This display lets the user and the social contact each see the text in their own language first, which makes reading easier and improves the usage experience.
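For illustration only, a minimal Python sketch of one way the S11-S13 flow could be organized is given below. The helper functions identify_language, speech_to_text and translate are hypothetical stand-ins, since the patent does not name any particular speech-recognition or machine-translation engine.

    from dataclasses import dataclass

    # Placeholder stubs: a real implementation would call actual language-identification,
    # speech-recognition and machine-translation services here; these names are illustrative.
    def identify_language(voice_data: bytes) -> str:
        return "zh"

    def speech_to_text(voice_data: bytes, language: str) -> str:
        return "<recognized text>"

    def translate(text: str, source: str, target: str) -> str:
        return f"<{text} translated {source}->{target}>"

    @dataclass
    class RenderedMessage:
        top_text: str     # shown first, in the reader's own language
        bottom_text: str  # the other-language version shown underneath

    def handle_voice_message(voice_data: bytes, contact_language: str):
        user_language = identify_language(voice_data)               # S11: identify the speaker's language
        converted_text = speech_to_text(voice_data, user_language)  # S12: convert speech to text
        target_text = translate(converted_text, user_language, contact_language)  # S12: translate
        # S13: return both versions; each side sees its own language on top
        for_user = RenderedMessage(top_text=converted_text, bottom_text=target_text)
        for_contact = RenderedMessage(top_text=target_text, bottom_text=converted_text)
        return for_user, for_contact

Returning both renderings together is what allows each side to read its own language first, as described above.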
Preferably, the voice data includes dialog-style segmented voice data, video-call voice data and voice-call voice data.
The dialog-style segmented voice data is similar to the voice messages sent one by one in WeChat; under normal circumstances, each dialog-style voice segment lasts less than 60 seconds.
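Purely as an illustrative sketch, the three voice data types and the 60-second guideline for dialog-style segments might be modelled as follows; the type names and the helper function are not taken from the patent.

    from enum import Enum, auto

    class VoiceDataType(Enum):
        DIALOG_SEGMENT = auto()  # WeChat-style voice messages, sent one by one
        VIDEO_CALL = auto()      # continuous audio from a video call
        VOICE_CALL = auto()      # continuous audio from a voice call

    MAX_DIALOG_SEGMENT_SECONDS = 60  # usual upper bound for one dialog-style segment

    def is_valid_dialog_segment(duration_seconds: float) -> bool:
        """Return True if a dialog-style voice segment is within the usual duration limit."""
        return 0 < duration_seconds < MAX_DIALOG_SEGMENT_SECONDS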
Preferably, when the voice data is dialog-style segmented voice data, after the converted text and the target text are returned to the user and the user's social contact, the method further includes:
providing the user and the user's social contact with a voice read-aloud function for the target text, according to the language used by the user's social contact. The voice read-aloud function can be exposed through an icon: when the user or the social contact wants to listen to the message, they tap the corresponding button. A schematic diagram of this scenario interaction is shown in Fig. 2.
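A minimal sketch of how the read-aloud trigger could be handled is shown below, assuming a generic text_to_speech placeholder; the patent does not specify any particular speech-synthesis engine.

    def text_to_speech(text: str, language: str) -> bytes:
        # Placeholder: a real implementation would call an actual TTS service here.
        return b"<synthesized audio>"

    def on_read_aloud_tapped(target_text: str, contact_language: str) -> bytes:
        """Handle a tap on the read-aloud icon (cf. Fig. 2).

        The target text is already in the social contact's language, so the audio
        is synthesized in that language for either party to listen to.
        """
        return text_to_speech(target_text, contact_language)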
Preferably, after the converted text and the target text are returned to the user and the user's social contact, the method further includes:
obtaining the user's manual operation information in real time; if the manual operation is an adjustment to the converted text, correcting the corresponding target text in real time according to the user's adjustment, and updating in real time, according to the adjustment and the correction, the converted text and the target text that have been returned to the user and the user's social contact. A schematic diagram of the scenario interaction in an on-screen conversation scenario is provided in Fig. 3, and a schematic diagram of the scenario interaction in a voice-call scenario is provided in Fig. 4.
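The correction flow could be sketched roughly as follows; ChatMessage, translate and push_update are illustrative placeholders rather than elements defined by the patent.

    from dataclasses import dataclass

    def translate(text: str, source: str, target: str) -> str:
        # Placeholder for an actual machine-translation call.
        return f"<{text} translated {source}->{target}>"

    @dataclass
    class ChatMessage:
        converted_text: str  # text recognized from the user's speech
        target_text: str     # translated text shown to the social contact

    def on_manual_edit(message: ChatMessage, edited_text: str,
                       user_language: str, contact_language: str, push_update) -> None:
        """Apply the user's adjustment, correct the translation, and refresh both sides."""
        message.converted_text = edited_text                                           # the adjustment
        message.target_text = translate(edited_text, user_language, contact_language)  # the correction
        push_update(message)  # update the already-returned texts on both screens in real time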
As shown in Fig. 5, an embodiment of a multilingual network social translation system of the present invention comprises:
a voice data acquisition module 51, configured to obtain a user's voice data during online social interaction in real time;
a language identification module 52, configured to identify the language used by the user according to the voice data; the languages include Chinese, English, Japanese, Korean, French, Latin, Portuguese and the like;
a text conversion module 53, configured to convert the voice data into text according to the language identified by the language identification module 52;
a target text translation module 54, configured to translate the text converted by the text conversion module 53 into target text according to the language used by the user's social contact;
a text return module 55, configured to return the converted text and the target text to the user and the user's social contact.
Preferably, the voice data includes dialog-style segmented voice data, video-call voice data and voice-call voice data.
The dialog-style segmented voice data is similar to the voice messages sent one by one in WeChat; under normal circumstances, each dialog-style voice segment lasts less than 60 seconds.
Preferably, as shown in Fig. 6, when the voice data is dialog-style segmented voice data, the system further includes a voice return module 56, specifically configured to:
after the text return module 55 has been executed, provide the user and the user's social contact with a voice read-aloud function for the target text, according to the language used by the user's social contact.
Preferably, as shown in Fig. 7, the system further includes a text correction module 57, specifically configured to:
after the text return module 55 has been executed, obtain the user's manual operation information in real time; if the manual operation is an adjustment to the converted text, correct the corresponding target text in real time according to the user's adjustment, and update in real time, according to the adjustment and the correction, the converted text and the target text that have been returned to the user and the user's social contact.
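Purely for illustration, the following Python sketch shows one way modules 51-55 could be wired together; the class and method names are invented for the sketch, and the optional voice return module 56 and text correction module 57 would be attached in the same manner.

    class VoiceDataAcquisitionModule:    # module 51
        def acquire(self) -> bytes:
            return b"<voice data>"

    class LanguageIdentificationModule:  # module 52
        def identify(self, voice_data: bytes) -> str:
            return "zh"

    class TextConversionModule:          # module 53
        def convert(self, voice_data: bytes, language: str) -> str:
            return "<converted text>"

    class TargetTextTranslationModule:   # module 54
        def translate(self, text: str, source: str, target: str) -> str:
            return f"<text in {target}>"

    class TextReturnModule:              # module 55
        def return_texts(self, converted: str, target: str) -> None:
            print(converted, target)

    class TranslationSystem:
        """Wires modules 51-55 together in the order described for Fig. 5."""
        def __init__(self) -> None:
            self.acquisition = VoiceDataAcquisitionModule()
            self.identification = LanguageIdentificationModule()
            self.conversion = TextConversionModule()
            self.translation = TargetTextTranslationModule()
            self.text_return = TextReturnModule()

        def process(self, contact_language: str) -> None:
            voice = self.acquisition.acquire()
            language = self.identification.identify(voice)
            converted = self.conversion.convert(voice, language)
            target = self.translation.translate(converted, language, contact_language)
            self.text_return.return_texts(converted, target)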
The method and system of the invention can be implemented independently, or can be integrated into existing social tools in the form of a plug-in or the like, for example into WeChat, Weibo, or video-call and voice-call software, providing users with an efficient and convenient real-time conversation translation environment.
An embodiment of the present invention also provides an electronic device which can carry out the flow of the embodiment shown in Fig. 1. As shown in Fig. 8, the electronic device may include: a housing 81, a processor 82, a memory 83, a circuit board 84 and a power supply circuit 85, wherein the circuit board 84 is arranged inside the space enclosed by the housing 81, and the processor 82 and the memory 83 are arranged on the circuit board 84; the power supply circuit 85 supplies power to each circuit or component of the electronic device; the memory 83 stores executable program code; and the processor 82 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 83, so as to execute the foregoing multilingual network social translation method.
For the specific execution of the above steps by the processor 82, and for the further steps executed by the processor 82 by running the executable program code, reference may be made to the description of the embodiment shown in Fig. 1, and details are not repeated here.
The electronic device exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication functions and have voice and data communication as their main purpose. This type of terminal includes smart phones (for example, the iPhone), multimedia phones, feature phones and low-end phones.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. This type of terminal includes PDA, MID and UMPC devices, for example the iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. This type of device includes audio and video players (for example, the iPod), handheld devices, e-book readers, smart toys and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server consists of a processor, hard disk, memory, system bus and so on; its architecture is similar to that of a general-purpose computer, but because it must provide highly reliable services, it has higher requirements on processing capability, stability, reliability, security, scalability, manageability and the like.
(5) Other electronic devices with data interaction functions.
An embodiment of the present invention also provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the foregoing multilingual network social translation method.
The present invention converts a user's voice data during online social interaction into text, translates the text according to the language used by the social contact, and returns both the converted text and the translated text to the user and the social contact to read; application scenarios are diverse, and one-to-many communication needs can be met. The present invention can also correct and update the translated text in real time according to the user's modifications to the converted text, avoiding distorted meaning caused by dialects or speech-recognition errors, conveying an accurate and complete statement to the social contact in real time, avoiding communication problems caused by ambiguity, and improving the user experience. The method and system of the invention can be embedded in or integrated into existing social software and are convenient to use; they solve the communication barrier that arises when people who speak different languages interact socially, effectively improve social efficiency, and can also help the user and their social contacts learn each other's languages.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and are not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention, and they shall all be covered by the scope of the claims and the description of the present invention.

Claims (10)

1. A multilingual network social translation method, characterized by comprising:
obtaining a user's voice data during online social interaction in real time, and identifying the language used by the user;
converting the voice data into text according to the identified language, and translating the converted text into target text according to the language used by the user's social contact;
returning the converted text and the target text to the user and the user's social contact.
2. The method according to claim 1, characterized in that the voice data includes dialog-style segmented voice data, video-call voice data and voice-call voice data.
3. The method according to claim 2, characterized in that, when the voice data is dialog-style segmented voice data, after the converted text and the target text are returned to the user and the user's social contact, the method further comprises:
providing the user and the user's social contact with a voice read-aloud function for the target text, according to the language used by the user's social contact.
4. The method according to claim 2, characterized in that, after the converted text and the target text are returned to the user and the user's social contact, the method further comprises:
obtaining the user's manual operation information in real time; if the manual operation is an adjustment to the converted text, correcting the corresponding target text in real time according to the user's adjustment, and updating in real time, according to the adjustment and the correction, the converted text and the target text that have been returned to the user and the user's social contact.
5. A multilingual network social translation system, characterized by comprising:
a voice data acquisition module, configured to obtain a user's voice data during online social interaction in real time;
a language identification module, configured to identify the language used by the user according to the voice data;
a text conversion module, configured to convert the voice data into text according to the language identified by the language identification module;
a target text translation module, configured to translate the text converted by the text conversion module into target text according to the language used by the user's social contact;
a text return module, configured to return the converted text and the target text to the user and the user's social contact.
6. The system according to claim 5, characterized in that the voice data includes dialog-style segmented voice data, video-call voice data and voice-call voice data.
7. The system according to claim 6, characterized in that, when the voice data is dialog-style segmented voice data, the system further comprises a voice return module, specifically configured to:
after the text return module has been executed, provide the user and the user's social contact with a voice read-aloud function for the target text, according to the language used by the user's social contact.
8. The system according to claim 6, characterized in that the system further comprises a text correction module, specifically configured to:
after the text return module has been executed, obtain the user's manual operation information in real time; if the manual operation is an adjustment to the converted text, correct the corresponding target text in real time according to the user's adjustment, and update in real time, according to the adjustment and the correction, the converted text and the target text that have been returned to the user and the user's social contact.
9. An electronic device, characterized in that the electronic device comprises: a housing, a processor, a memory, a circuit board and a power supply circuit, wherein the circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit supplies power to each circuit or component of the electronic device; the memory stores executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute the method according to any one of claims 1-4.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the method according to any one of claims 1-4.
CN201910389958.7A 2019-05-10 2019-05-10 Multilingual network social translation method, system, device and medium Pending CN110111770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910389958.7A CN110111770A (en) 2019-05-10 2019-05-10 Multilingual network social translation method, system, device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910389958.7A CN110111770A (en) 2019-05-10 2019-05-10 Multilingual network social translation method, system, device and medium

Publications (1)

Publication Number Publication Date
CN110111770A true CN110111770A (en) 2019-08-09

Family

ID=67489407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910389958.7A Pending CN110111770A (en) 2019-05-10 2019-05-10 Multilingual network social translation method, system, device and medium

Country Status (1)

Country Link
CN (1) CN110111770A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447397A (en) * 2020-03-27 2020-07-24 深圳市贸人科技有限公司 Translation method and translation device based on video conference
CN111696552A (en) * 2020-06-05 2020-09-22 北京搜狗科技发展有限公司 Translation method, translation device and earphone
CN113286217A (en) * 2021-04-23 2021-08-20 北京搜狗智能科技有限公司 Call voice translation method and device and earphone equipment
CN113628626A (en) * 2020-05-09 2021-11-09 阿里巴巴集团控股有限公司 Speech recognition method, device and system and translation method and system

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101207586A (en) * 2006-12-19 2008-06-25 国际商业机器公司 Method and system for real-time automatic communication
CN101867632A (en) * 2009-06-12 2010-10-20 刘越 Mobile phone speech instant translation system and method
CN101957814A (en) * 2009-07-16 2011-01-26 刘越 Instant speech translation system and method
CN104010267A (en) * 2013-02-22 2014-08-27 三星电子株式会社 Method and system for supporting a translation-based communication service and terminal supporting the service
CN104394265A (en) * 2014-10-31 2015-03-04 小米科技有限责任公司 Automatic session method and device based on mobile intelligent terminal
CN104754536A (en) * 2013-12-27 2015-07-01 ***通信集团公司 Method and system for realizing communication between different languages
CN104965824A (en) * 2015-06-11 2015-10-07 胡开标 Real-time text and speech translation system
CN105185375A (en) * 2015-08-10 2015-12-23 联想(北京)有限公司 Information processing method and electronic equipment
CN106847256A (en) * 2016-12-27 2017-06-13 苏州帷幄投资管理有限公司 A kind of voice converts chat method
CN107111613A (en) * 2014-10-08 2017-08-29 阿德文托尔管理有限公司 Computer based translation system and method
CN107343113A (en) * 2017-06-26 2017-11-10 深圳市沃特沃德股份有限公司 Audio communication method and device
CN107577675A (en) * 2017-09-06 2018-01-12 叶进蓉 A kind of method and device for being translated to voice call
CN108304389A (en) * 2017-12-07 2018-07-20 科大讯飞股份有限公司 Interactive voice interpretation method and device
CN108352006A (en) * 2015-11-06 2018-07-31 苹果公司 Intelligent automation assistant in instant message environment
US20190116210A1 (en) * 2017-10-18 2019-04-18 International Business Machines Corporation Identifying or creating social network groups of interest to attendees based on cognitive analysis of voice communications

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101207586A (en) * 2006-12-19 2008-06-25 国际商业机器公司 Method and system for real-time automatic communication
CN101867632A (en) * 2009-06-12 2010-10-20 刘越 Mobile phone speech instant translation system and method
CN101957814A (en) * 2009-07-16 2011-01-26 刘越 Instant speech translation system and method
CN104010267A (en) * 2013-02-22 2014-08-27 三星电子株式会社 Method and system for supporting a translation-based communication service and terminal supporting the service
CN104754536A (en) * 2013-12-27 2015-07-01 ***通信集团公司 Method and system for realizing communication between different languages
CN107111613A (en) * 2014-10-08 2017-08-29 阿德文托尔管理有限公司 Computer based translation system and method
CN104394265A (en) * 2014-10-31 2015-03-04 小米科技有限责任公司 Automatic session method and device based on mobile intelligent terminal
CN104965824A (en) * 2015-06-11 2015-10-07 胡开标 Real-time text and speech translation system
CN105185375A (en) * 2015-08-10 2015-12-23 联想(北京)有限公司 Information processing method and electronic equipment
CN108352006A (en) * 2015-11-06 2018-07-31 苹果公司 Intelligent automation assistant in instant message environment
CN106847256A (en) * 2016-12-27 2017-06-13 苏州帷幄投资管理有限公司 A kind of voice converts chat method
CN107343113A (en) * 2017-06-26 2017-11-10 深圳市沃特沃德股份有限公司 Audio communication method and device
CN107577675A (en) * 2017-09-06 2018-01-12 叶进蓉 A kind of method and device for being translated to voice call
US20190116210A1 (en) * 2017-10-18 2019-04-18 International Business Machines Corporation Identifying or creating social network groups of interest to attendees based on cognitive analysis of voice communications
CN108304389A (en) * 2017-12-07 2018-07-20 科大讯飞股份有限公司 Interactive voice interpretation method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111447397A (en) * 2020-03-27 2020-07-24 深圳市贸人科技有限公司 Translation method and translation device based on video conference
CN111447397B (en) * 2020-03-27 2021-11-23 深圳市贸人科技有限公司 Video conference based translation method, video conference system and translation device
CN113628626A (en) * 2020-05-09 2021-11-09 阿里巴巴集团控股有限公司 Speech recognition method, device and system and translation method and system
CN111696552A (en) * 2020-06-05 2020-09-22 北京搜狗科技发展有限公司 Translation method, translation device and earphone
CN111696552B (en) * 2020-06-05 2023-09-22 北京搜狗科技发展有限公司 Translation method, translation device and earphone
CN113286217A (en) * 2021-04-23 2021-08-20 北京搜狗智能科技有限公司 Call voice translation method and device and earphone equipment

Similar Documents

Publication Publication Date Title
CN110111770A (en) Multilingual network social translation method, system, device and medium
US10489112B1 (en) Method for user training of information dialogue system
CN105117391B (en) Interpreter language
US20200167528A1 (en) Dialog generation method, apparatus, and electronic device
CN106878566B (en) Voice control method, mobile terminal apparatus and speech control system
US11830482B2 (en) Method and apparatus for speech interaction, and computer storage medium
CN110175012B (en) Skill recommendation method, skill recommendation device, skill recommendation equipment and computer readable storage medium
CN111261144A (en) Voice recognition method, device, terminal and storage medium
CN109616096A (en) Construction method, device, server and the medium of multilingual tone decoding figure
CN113168336A (en) Client application of phone based on experiment parameter adaptation function
CN109119071A (en) A kind of training method and device of speech recognition modeling
CN110136713A (en) Dialogue method and system of the user in multi-modal interaction
CN104123114A (en) Method and device for playing voice
CN111563151A (en) Information acquisition method, session configuration device and storage medium
CN108933968A (en) A kind of conversion method of message format, device, storage medium and android terminal
CN109036409A (en) A kind of method and device thereof of intelligent sound control operating software
CN110308800B (en) Input mode switching method, device, system and storage medium
CN110418181A (en) To the method for processing business of smart television, device, smart machine and storage medium
CN112447168A (en) Voice recognition system and method, sound box, display device and interaction platform
CN104679733A (en) Voice conversation translation method, device and system
CN109408815A (en) Dictionary management method and system for voice dialogue platform
CN104347081A (en) Method and device for testing scene statement coverage
CN111508481B (en) Training method and device of voice awakening model, electronic equipment and storage medium
CN110473524B (en) Method and device for constructing voice recognition system
CN108882006A (en) A kind of conversion method of message format, device, storage medium and android terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190809)