CN111933126A - Voice compiling method and device, electronic equipment and computer readable storage medium - Google Patents

Voice compiling method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN111933126A
Authority
CN
China
Prior art keywords
compiling
component
voice
speech
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910395525.2A
Other languages
Chinese (zh)
Inventor
韩琪玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority claimed from CN201910395525.2A
Publication of CN111933126A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 — Speech recognition
    • G10L 15/08 — Speech classification or search
    • G10L 15/18 — Speech classification or search using natural language modelling
    • G10L 15/1822 — Parsing for meaning understanding
    • G10L 15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 15/26 — Speech to text systems
    • G10L 2015/223 — Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)

Abstract

The embodiments of the invention disclose a voice compiling method and device, an electronic device, and a computer-readable storage medium. The method comprises the following steps: acquiring input voice; performing intent recognition on the input voice to generate a compiling instruction, where intent recognition means performing semantic recognition on the input voice to obtain intent information; and executing the compiling instruction to obtain a voice compiling result. This technical solution gives users, especially users without a professional software development background, an effective way to carry out software development tasks such as website construction, web page generation, and application development conveniently, quickly, and accurately. It thereby provides great convenience to users and helps popularize the internet and improve service efficiency and quality.

Description

Voice compiling method and device, electronic equipment and computer readable storage medium
Technical Field
The embodiment of the invention relates to the technical field of compiling processing, in particular to a voice compiling method and device, electronic equipment and a computer readable storage medium.
Background
As society develops, internet websites, web pages, and internet-based applications play an irreplaceable role in people's learning, work, and daily life, and they also generate a large number of development demands. However, software development is a fairly specialized skill: for users without a professional software development background, website construction, web page generation, and application development are almost out of reach. This inconveniences users and hinders the popularization of the internet and the improvement of service efficiency and quality.
Disclosure of Invention
The embodiment of the invention provides a voice compiling method and device, electronic equipment and a computer readable storage medium.
In a first aspect, an embodiment of the present invention provides a speech compiling method.
Specifically, the speech compiling method includes:
acquiring input voice;
performing intention recognition on the input voice to generate a compiling instruction, wherein the intention recognition is to perform semantic recognition on the input voice to obtain intention information;
and executing the compiling instruction to obtain a voice compiling result.
With reference to the first aspect, in a first implementation manner of the first aspect, the acquiring the input speech includes:
displaying a voice compilation control component in response to a compilation page of the audio compilation component being accessed;
initializing an audio component in response to the speech compilation control component being started;
and starting the audio assembly to obtain input voice.
With reference to the first aspect and the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the displaying a speech compilation control component in response to a compilation page of an audio compilation component being accessed is implemented as:
and triggering a message communication component in response to the compiling page of the audio compiling component being accessed, so as to establish the connection between the audio compiling component and the voice compiling control component, and displaying the voice compiling control component.
With reference to the first aspect, the first implementation manner of the first aspect, and the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the initializing the audio component in response to the voice compilation control component being started is implemented as:
in response to the voice compilation control component being started, establishing a connection between the voice compilation control component and a voice recognition component and initializing the audio component.
With reference to the first implementation manner of the first aspect, the second implementation manner of the first aspect, and the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the starting the audio component and acquiring the input voice is implemented as:
and starting the audio assembly, acquiring input voice and processing the input voice.
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, the third implementation manner of the first aspect, and the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the present disclosure further includes:
and in response to receiving a voice compiling control component closing voice instruction, closing the voice compiling control component and the audio component and the connection between the voice compiling control component and the voice recognition component.
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, the third implementation manner of the first aspect, the fourth implementation manner of the first aspect, and the fifth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the performing intent recognition on the input speech to generate the compiling instruction includes:
responding to the connection between the voice recognition component and the voice compiling control component and the fact that the voice recognition component receives input voice, and converting the input voice into characters;
and sending the characters to an intention identification component for intention identification to generate a compiling instruction.
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, the third implementation manner of the first aspect, the fourth implementation manner of the first aspect, the fifth implementation manner of the first aspect, and the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect, the sending the text to the intention recognition component for intention recognition, and generating a compiling instruction are implemented as:
transmitting the characters back to an audio component, and sending a service interface calling request to the intention identification component;
and responding to the confirmation of the service interface calling request, sending the words to an intention identification component for intention identification, and generating a compiling instruction.
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, the third implementation manner of the first aspect, the fourth implementation manner of the first aspect, the fifth implementation manner of the first aspect, the sixth implementation manner of the first aspect, and the seventh implementation manner of the first aspect, in an eighth implementation manner of the first aspect, the executing the compiling instruction to obtain the voice compiling result includes:
sending the compiling instruction to an audio compiling component through a message communication component;
determining and acquiring a compiling element related to the compiling instruction;
and executing the compiling instruction based on the compiling element to obtain a voice compiling result.
With reference to the first aspect, the first implementation manner of the first aspect, the second implementation manner of the first aspect, the third implementation manner of the first aspect, the fourth implementation manner of the first aspect, the fifth implementation manner of the first aspect, the sixth implementation manner of the first aspect, the seventh implementation manner of the first aspect, and the eighth implementation manner of the first aspect, in a ninth implementation manner of the first aspect, the disclosure further includes:
and displaying the voice compiling result.
In a second aspect, an embodiment of the present invention provides a speech compiling apparatus.
Specifically, the speech compiling apparatus includes:
an acquisition module configured to acquire an input voice;
a generation module configured to perform intent recognition on the input speech to generate a compiling instruction, wherein the intent recognition is to perform semantic recognition on the input speech to obtain intent information;
and the execution module is configured to execute the compiling instruction to obtain a voice compiling result.
With reference to the second aspect, in a first implementation manner of the second aspect, the obtaining module includes:
a display sub-module configured to display the voice compilation control component in response to a compilation page of the audio compilation component being accessed;
an initialization sub-module configured to initialize an audio component in response to the speech compilation control component being started;
and the acquisition sub-module is configured to start the audio component and acquire input voice.
With reference to the second aspect and the first implementation manner of the second aspect, in a second implementation manner of the second aspect, the display sub-module is configured to:
and triggering a message communication component in response to the compiling page of the audio compiling component being accessed, so as to establish the connection between the audio compiling component and the voice compiling control component, and displaying the voice compiling control component.
With reference to the second aspect, the first implementation manner of the second aspect, and the second implementation manner of the second aspect, in a third implementation manner of the second aspect, the initialization submodule is configured to:
in response to the voice compilation control component being started, establishing a connection between the voice compilation control component and a voice recognition component and initializing the audio component.
With reference to the first implementation manner of the second aspect, the second implementation manner of the second aspect, and the third implementation manner of the second aspect, in a fourth implementation manner of the second aspect, the obtaining sub-module is configured to:
and starting the audio assembly, acquiring input voice and processing the input voice.
With reference to the second aspect, the first implementation manner of the second aspect, the second implementation manner of the second aspect, the third implementation manner of the second aspect, and the fourth implementation manner of the second aspect, in a fifth implementation manner of the second aspect of the present disclosure, the obtaining module further includes:
a shutdown submodule configured to shut down the speech compilation control component and the audio component, and a connection between the speech compilation control component and the speech recognition component, in response to receiving a speech compilation control component shutdown speech instruction.
With reference to the second aspect, the first implementation manner of the second aspect, the second implementation manner of the second aspect, the third implementation manner of the second aspect, the fourth implementation manner of the second aspect, and the fifth implementation manner of the second aspect, in a sixth implementation manner of the second aspect, the generating module includes:
a conversion sub-module configured to convert the input speech into text in response to the speech recognition component establishing a connection with the speech compilation control component and receiving the input speech;
and a generation sub-module configured to send the characters to an intention recognition component for intention recognition and generate a compiling instruction.
With reference to the second aspect, the first implementation manner of the second aspect, the second implementation manner of the second aspect, the third implementation manner of the second aspect, the fourth implementation manner of the second aspect, the fifth implementation manner of the second aspect, and the sixth implementation manner of the second aspect, in a seventh implementation manner of the second aspect, the generation submodule is configured to:
transmitting the characters back to an audio component, and sending a service interface calling request to the intention identification component;
and responding to the confirmation of the service interface calling request, sending the words to an intention identification component for intention identification, and generating a compiling instruction.
With reference to the second aspect, the first implementation manner of the second aspect, the second implementation manner of the second aspect, the third implementation manner of the second aspect, the fourth implementation manner of the second aspect, the fifth implementation manner of the second aspect, the sixth implementation manner of the second aspect, and the seventh implementation manner of the second aspect, in an eighth implementation manner of the second aspect, the executing module includes:
the sending submodule is configured to send the compiling instruction to the audio compiling component through the message communication component;
a determining submodule configured to determine and acquire the compiling instruction related compiling element;
and the execution submodule is configured to execute the compiling instruction based on the compiling element to obtain a voice compiling result.
With reference to the second aspect, the first implementation manner of the second aspect, the second implementation manner of the second aspect, the third implementation manner of the second aspect, the fourth implementation manner of the second aspect, the fifth implementation manner of the second aspect, the sixth implementation manner of the second aspect, the seventh implementation manner of the second aspect, and the eighth implementation manner of the second aspect, in a ninth implementation manner of the second aspect, the present disclosure further includes:
a display module configured to display the speech compilation result.
In a third aspect, an embodiment of the present invention provides an electronic device that includes a memory and a processor. The memory is used to store one or more computer instructions that enable the speech compiling apparatus to execute the speech compiling method of the first aspect, and the processor is configured to execute the computer instructions stored in the memory. The electronic device may further include a communication interface through which it communicates with other devices or a communication network.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium for storing computer instructions used by the speech compiling apparatus, including the computer instructions required by the speech compiling apparatus to execute the speech compiling method of the first aspect.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
according to the technical scheme, the compiling instruction is generated by performing intention recognition on the input voice, and the voice compiling result is obtained after the compiling instruction is executed. The technical scheme can provide an effective way for users, especially users lacking a professional software development background, to conveniently, quickly and accurately develop software such as website construction, webpage generation and application development, thereby providing great convenience for users, and being beneficial to popularization of the internet and improvement of service efficiency and quality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of embodiments of the invention.
Drawings
Other features, objects and advantages of embodiments of the invention will become more apparent from the following detailed description of non-limiting embodiments thereof, when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 illustrates a flow diagram of a method of speech compilation according to an embodiment of the present invention;
FIG. 2 shows a flowchart of step S101 of the speech compilation method according to the embodiment shown in FIG. 1;
FIG. 3 is a flowchart illustrating a step S101 of a speech compiling method according to another embodiment illustrated in FIG. 1;
FIG. 4 is a flowchart illustrating step S102 of the speech compiling method according to the embodiment illustrated in FIG. 1;
FIG. 5 shows a flowchart of step S103 of the speech compilation method according to the embodiment shown in FIG. 1;
FIG. 6 illustrates a flow diagram of a speech compilation method according to another embodiment of the present invention;
FIG. 7 is a block diagram showing a configuration of a speech compiling apparatus according to an embodiment of the present invention;
FIG. 8 is a block diagram illustrating the structure of an obtaining module 701 of the speech compiling apparatus according to the embodiment shown in FIG. 7;
FIG. 9 is a block diagram illustrating a structure of an obtaining module 701 of the speech compiling apparatus according to another embodiment illustrated in FIG. 7;
FIG. 10 is a block diagram showing a structure of a generation module 702 of the speech compiling apparatus according to the embodiment shown in FIG. 7;
fig. 11 is a block diagram illustrating an execution module 703 of the speech compiling apparatus according to the embodiment illustrated in fig. 7;
FIG. 12 is a block diagram showing a configuration of a speech compiling apparatus according to another embodiment of the present invention;
FIG. 13 shows a block diagram of an electronic device according to an embodiment of the invention;
FIG. 14 is a block diagram of a computer system suitable for implementing a speech compilation method according to an embodiment of the invention.
Detailed Description
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the embodiments of the present invention, it is to be understood that terms such as "including" or "having", etc., are intended to indicate the presence of the features, numbers, steps, actions, components, parts, or combinations thereof disclosed in the present specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, actions, components, parts, or combinations thereof may be present or added.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
According to the technical solution provided by the embodiments of the invention, a compiling instruction is generated by performing intent recognition on the input voice, and a voice compiling result is obtained by executing the compiling instruction. This gives users, especially users without a professional software development background, an effective way to carry out software development tasks such as website construction, web page generation, and application development conveniently, quickly, and accurately. It therefore provides great convenience to users and helps popularize the internet and improve service efficiency and quality.
Fig. 1 shows a flowchart of a speech compiling method according to an embodiment of the invention, and as shown in fig. 1, the speech compiling method includes the following steps S101 to S103:
in step S101, an input voice is acquired;
in step S102, performing intent recognition on the input speech to generate a compiling instruction, where the intent recognition is to perform semantic recognition on the input speech to obtain intent information;
in step S103, the compiling instruction is executed to obtain a voice compiling result.
As mentioned above, as society develops, internet websites, web pages, and internet-based applications play an irreplaceable role in people's learning, work, and daily life, and they also generate a large number of development demands. However, software development is a fairly specialized skill: for users without a professional software development background, website construction, web page generation, and application development are almost out of reach. This inconveniences users and hinders the popularization of the internet and the improvement of service efficiency and quality.
In view of the above, this embodiment proposes a speech compiling method that generates a compiling instruction by performing intent recognition on the input speech and obtains a speech compiling result by executing that instruction. This gives users, especially users without a professional software development background, an effective way to carry out software development tasks such as website construction, web page generation, and application development conveniently, quickly, and accurately. It therefore provides great convenience to users and helps popularize the internet and improve service efficiency and quality.
In an optional implementation manner of this embodiment, the input speech refers to speech input by a user through a speech input device such as a microphone, where the input speech is related to content that the user wants to compile, a compiling effect that the user wants to achieve, or a compiling purpose that the user wants to achieve, that is, the input speech may include one or more of the following data: compiling content, compiling components, compiling operations, compiling effects, compiling destinations, compiling data, and so forth.
In an optional implementation manner of this embodiment, the intention recognition is to perform semantic recognition on the input speech to obtain intention information, or requirement information, that is, to obtain the intention or requirement of the user through the speech input by the user.
In an optional implementation manner of this embodiment, the compiling instruction refers to an instruction that is obtained by performing intent recognition according to the input speech and can be directly executed to achieve a preset compiling purpose or compiling effect, and a compiling result corresponding to the speech input by the user can be obtained after the compiling instruction is executed, so that difficulty in software development can be reduced, and a professional threshold is eliminated, thereby providing great convenience for the user, and facilitating popularization of the internet and improvement of service efficiency and quality.
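To make the end-to-end flow concrete, the following is a minimal TypeScript sketch of steps S101–S103. The intent recognition endpoint, the instruction shape, and the stand-in executor are assumptions for illustration only; the patent does not prescribe any particular API.

```typescript
// Hypothetical types and service calls; the patent does not prescribe an API.
interface CompileInstruction {
  action: string;          // e.g. "add", "delete", "adjust"
  target: string;          // e.g. "button", "image-carousel"
  options?: Record<string, unknown>;
}

// Step S102: intent recognition turns recognized text into a compile instruction.
async function recognizeIntent(text: string): Promise<CompileInstruction> {
  // Placeholder endpoint standing in for the intent recognition component.
  const response = await fetch("/api/intent", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return (await response.json()) as CompileInstruction;
}

// Steps S101–S103 chained together: recognized speech text in, compile result out.
async function compileFromSpeech(recognizedText: string): Promise<string> {
  const instruction = await recognizeIntent(recognizedText);   // S102
  return executeInstruction(instruction);                      // S103
}

// Minimal stand-in for S103 so the sketch is self-contained.
function executeInstruction(instruction: CompileInstruction): string {
  return `<!-- compiled: ${instruction.action} ${instruction.target} -->`;
}
```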
In an optional implementation manner of this embodiment, as shown in fig. 2, the step S101, that is, the step of acquiring the input voice, includes the following steps S201 to S203:
in step S201, in response to a compilation page of the audio compilation component being accessed, displaying a voice compilation control component;
in step S202, in response to the voice compilation control component being started, initializing an audio component;
in step S203, the audio component is turned on to obtain the input voice.
To acquire the user's voice input effectively and obtain accurate voice content related to the compiling operation, this embodiment first detects whether a compilation page of the audio compilation component has been accessed, i.e., whether the user intends to perform voice compilation. Once access to the compilation page is detected, the user is assumed to want to perform voice compilation, and the voice compilation control component is displayed in response, so that the user can control the different components involved in the subsequent voice compilation process. If the user then starts the voice compilation control component, for example by clicking its start button, the audio component is initialized in response to that start operation, the audio component is turned on, and the voice input by the user is captured.
The audio compiling component is a user-oriented component, which generally operates at a front end to provide a visual compiling interface for a user to browse, input command information of the user, and output information of a compiling result. The compilation interface of the audio compilation component may provide one or more of the following data: audio compilation component introduction information, audio compilation component usage information, audio compilation component launch entries, speech compilation control component information, speech compilation control component launch entries, historical audio compilation data, current audio compilation related information, current audio compilation results, and the like.
In an optional implementation manner of the embodiment, the user can access the compiling page of the audio compiling component through the browser to further improve the use convenience. The configuration parameters of the browser are related to the characteristics of the audio compiling component and the required operating environment and operating parameters thereof, and a person skilled in the art can select a proper browser according to the requirements of practical application.
In an optional implementation manner of this embodiment, the audio component refers to a component capable of acquiring voice data, such as a microphone. After the audio component is initialized in response to the voice compilation control component being started, it can enter an operating state to acquire the user's input voice.
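As an illustration only, a browser-side front end could wire steps S201–S203 together roughly as follows. The element id and the use of getUserMedia are assumptions, not part of the disclosed method.

```typescript
// Hypothetical front-end wiring for steps S201–S203; the element id is an assumption.
let micStream: MediaStream | null = null;

// S201: when the compilation page is opened, reveal the voice compilation control.
function onCompilationPageAccessed(): void {
  const control = document.getElementById("voice-compile-control");
  if (control) control.hidden = false;
}

// S202/S203: when the control is started, initialize and open the audio component.
async function onControlStarted(): Promise<void> {
  micStream = await navigator.mediaDevices.getUserMedia({ audio: true }); // S202: initialize
  // S203: the stream is now live; hand it to the speech recognition pipeline.
  console.log("audio component started, tracks:", micStream.getAudioTracks().length);
}
```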
In an optional implementation manner of this embodiment, the step S201, that is, the step of displaying the speech compilation control component in response to the compilation page of the audio compilation component being accessed, may be implemented as:
and triggering a message communication component in response to the compiling page of the audio compiling component being accessed, so as to establish the connection between the audio compiling component and the voice compiling control component, and displaying the voice compiling control component.
In order to enable normal communication between the audio compilation component and the voice compilation control component, when the compilation page of the audio compilation component is accessed, the message communication component is triggered immediately to establish a communication connection between the audio compilation component and the voice compilation control component for carrying communication messages, and the voice compilation control component is displayed so that it can receive the user's voice messages.
The audio compiling component and the voice compiling control component can be connected by a websocket, the websocket is a protocol for full-duplex communication on a single TCP connection, and the message communication between the audio compiling component and the voice compiling control component can be effectively supported.
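A hedged sketch of such a full-duplex channel, assuming a placeholder WebSocket URL and message format, might look like this:

```typescript
// Illustrative only: the endpoint name is assumed, not taken from the patent.
function connectCompileChannel(): WebSocket {
  const socket = new WebSocket("wss://example.invalid/compile-channel");

  socket.addEventListener("open", () => {
    // Connection between the audio compilation component and the voice compilation
    // control component is established; the control component can now be displayed.
    console.log("message communication component: channel open");
  });

  socket.addEventListener("message", (event: MessageEvent<string>) => {
    // Compile instructions arriving from the control side are handled here.
    console.log("received compile message:", event.data);
  });

  return socket;
}
```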
In an optional implementation manner of this embodiment, the step S202, that is, the step of initializing the audio component in response to the voice compilation control component being started, may be implemented as:
in response to the voice compilation control component being started, establishing a connection between the voice compilation control component and a voice recognition component and initializing the audio component.
In order to enable normal communication between a speech compiling control component and a speech recognition component, after the speech compiling control component is started, a communication connection between the speech compiling control component and the speech recognition component is immediately established so as to carry communication messages, and initialization operation is carried out on the audio component, so that the speech recognition component can rapidly and effectively recognize relevant speech.
Similar to the previous implementation manner, the connection between the speech compiling control component and the speech recognition component may also be a websocket connection.
In an optional implementation manner of this embodiment, the step S203 of turning on the audio component and acquiring the input voice may be implemented as:
and starting the audio assembly, acquiring input voice and processing the input voice.
In order to improve the accuracy of subsequent voice recognition, after the audio component acquires the input voice, the audio component can also perform processing such as voice denoising and voice optimization on the input voice so as to remove noise such as background sound and microphone noise in the input voice data and improve the definition of the voice data.
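One possible (assumed) way to approximate this denoising step in a browser is to combine the built-in capture constraints with a simple Web Audio filter; the patent does not prescribe any particular technique.

```typescript
// A rough illustration of "denoising/optimization"; cutoff and constraints are assumptions.
async function openCleanedMicrophone(): Promise<BiquadFilterNode> {
  // Built-in constraints already suppress some background and microphone noise.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { noiseSuppression: true, echoCancellation: true, autoGainControl: true },
  });

  // Optional extra step: a high-pass filter to attenuate low-frequency rumble.
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const highpass = ctx.createBiquadFilter();
  highpass.type = "highpass";
  highpass.frequency.value = 80; // Hz; assumed cutoff, tune as needed
  source.connect(highpass);
  return highpass; // downstream nodes (e.g. a recorder) can connect here
}
```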
In an optional implementation manner of this embodiment, the step S101, that is, the step of acquiring the input speech, may further include a step of turning off the relevant speech compiling component, that is, as shown in fig. 3, the step S101 includes the following steps S301 to S304:
in step S301, in response to a compilation page of the audio compilation component being accessed, displaying a voice compilation control component;
in step S302, in response to the voice compilation control component being started, initializing an audio component;
in step S303, the audio component is turned on to obtain an input voice;
in step S304, in response to receiving a speech compilation control component closing speech instruction, closing the speech compilation control component and the audio component, and the connection between the speech compilation control component and the speech recognition component.
After finishing using the audio compilation component, the user can send a voice closing instruction to the voice compilation control component. After the closing instruction is received, the voice compilation control component, the audio component, and the connection between the voice compilation control component and the speech recognition component are closed.
The closing voice instruction can be any instruction carrying closing semantics, such as 'goodbye', 'bye-bye', or 'that's all for now'.
In an optional implementation manner of this embodiment, instructions carrying closing semantics, such as 'goodbye', 'bye-bye', or 'that's all for now', may be stored in advance. When voice data is received, it is matched against the pre-stored voice data; if the match succeeds, the corresponding closing operation is performed, and if it fails, the normal voice data processing flow continues. Alternatively, this can be implemented by text matching: command text carrying closing semantics, such as 'goodbye', 'bye-bye', or 'that's all for now', is stored in advance, the received voice data is converted into text, and the text is matched against the pre-stored command text.
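A minimal sketch of the text-matching variant, with an assumed list of closing phrases, could be:

```typescript
// Hedged sketch; the phrase list and socket handling are assumptions for illustration.
const CLOSE_PHRASES = ["goodbye", "bye-bye", "that's all for now"];

function isCloseCommand(recognizedText: string): boolean {
  const normalized = recognizedText.trim().toLowerCase();
  return CLOSE_PHRASES.some((phrase) => normalized.includes(phrase));
}

function handleRecognizedText(text: string, controlSocket: WebSocket): void {
  if (isCloseCommand(text)) {
    controlSocket.close();        // drop the control/recognition connection
    // ...stop the audio component here as well (step S304)
  } else {
    // fall through to the normal intent recognition flow
  }
}
```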
In another optional implementation manner of this embodiment, the audio component may directly send the voice or the converted text to the intention recognition component for intention recognition after receiving the voice data, and if it is determined that the user intention is to close the voice compiling function after recognition, execute the corresponding closing operation.
In an alternative implementation manner of this embodiment, as shown in fig. 4, the step S102 of performing intent recognition on the input speech to generate a compiling instruction includes the following steps S401 to S402:
in step S401, in response to the connection being established between the speech recognition component and the speech compilation control component and the input speech being received by the speech recognition component, converting the input speech into text;
in step S402, the characters are sent to the intention recognition component for intention recognition, and a compiling instruction is generated.
After the connection between the speech recognition component and the voice compilation control component is established and the speech recognition component receives the input voice, it performs real-time speech recognition on that input, converts it into text, and sends the resulting text to the intent recognition component for intent recognition, so as to generate the compiling instruction required for the compilation work.
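The patent does not name a specific recognizer; purely as an illustration, a browser could obtain real-time text from the microphone with the Web Speech API roughly as follows (constructor names vary across browsers, hence the dynamic lookup):

```typescript
// Illustrative browser-side speech-to-text; not the recognizer specified by the patent.
function startRecognition(onText: (text: string) => void): void {
  // Typings vary across TypeScript versions, so the constructor is looked up dynamically.
  const Ctor =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  if (!Ctor) throw new Error("Speech recognition not available in this browser");

  const recognizer = new Ctor();
  recognizer.continuous = true;        // keep listening across utterances
  recognizer.interimResults = false;   // only deliver finalized text

  recognizer.onresult = (event: any) => {
    const last = event.results[event.results.length - 1];
    onText(last[0].transcript);        // hand the text to intent recognition
  };
  recognizer.start();
}
```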
In an optional implementation manner of this embodiment, in step S402, that is, the step of sending the text to the intention identification component for intention identification, and generating a compiling instruction may be implemented as:
transmitting the characters back to an audio component, and sending a service interface calling request to the intention identification component;
and responding to the confirmation of the service interface calling request, sending the words to an intention identification component for intention identification, and generating a compiling instruction.
When the characters obtained by converting the voice recognition component are sent to the intention recognition component, the characters can be asynchronously returned to the audio component, a service interface calling request is sent to the intention recognition component, and after the service interface calling request is confirmed, the characters are sent to the intention recognition component for intention recognition.
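A sketch of this hand-off, with assumed endpoint names and message shapes standing in for the service interface calling request, might be:

```typescript
// Assumed shapes: the patent names a "service interface calling request" but not its form.
interface IntentResult {
  instruction: { action: string; target: string };
}

async function forwardTextForIntent(
  text: string,
  audioSocket: WebSocket,
): Promise<IntentResult> {
  // Asynchronously echo the recognized text back to the audio component for display.
  audioSocket.send(JSON.stringify({ type: "transcript", text }));

  // Hypothetical two-step call: request access to the service interface, then invoke it.
  const grant = await fetch("/api/intent/handshake", { method: "POST" });
  if (!grant.ok) throw new Error("service interface call request was not confirmed");

  const response = await fetch("/api/intent/recognize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  return (await response.json()) as IntentResult;
}
```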
In an alternative implementation manner of this embodiment, as shown in fig. 5, the step S103 of executing the compiling instruction to obtain the speech compiling result includes the following steps S501 to S503:
in step S501, the compiling instruction is sent to an audio compiling component via a message communication component;
in step S502, a compiling element related to the compiling instruction is determined and acquired;
in step S503, the compiling instruction is executed based on the compiling element, and a voice compiling result is obtained.
When the compiling operation is executed according to the compiling instruction, the compiling instruction can be sent to an audio compiling component through a message communication component, and the audio compiling component determines and acquires compiling elements related to the compiling instruction according to the compiling instruction, such as a compiling implementation component, API (application program interface) configuration, linkage configuration and the like; then, the compiling instruction is executed based on the compiling element, for example, corresponding compiling implementation components are added, dragged, and corresponding codes are executed, so that a voice compiling result corresponding to the compiling instruction can be obtained, for example, a complete web page is generated or an internet application capable of realizing a certain function is generated.
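The following sketch illustrates one way such an executor could resolve compiling elements and apply an instruction to a page model; the registry contents and instruction fields are assumptions, not part of the disclosure.

```typescript
// Illustrative executor; the element registry and instruction fields are assumptions.
interface CompileElement {
  tag: string;                          // compile implementation component, e.g. "button"
  defaults: Record<string, string>;     // stand-in for API/linkage configuration
}

const ELEMENT_REGISTRY: Record<string, CompileElement> = {
  "navigation-bar": { tag: "nav", defaults: { class: "nav" } },
  "submit-button": { tag: "button", defaults: { type: "submit" } },
};

function executeCompileInstruction(
  instruction: { action: string; target: string; text?: string },
  page: HTMLElement,
): HTMLElement {
  const element = ELEMENT_REGISTRY[instruction.target];     // S502: resolve compile element
  if (!element) throw new Error(`unknown compile element: ${instruction.target}`);

  // S503: execute the instruction, here by adding the component to the page model.
  const node = document.createElement(element.tag);
  Object.entries(element.defaults).forEach(([k, v]) => node.setAttribute(k, v));
  if (instruction.text) node.textContent = instruction.text;
  page.appendChild(node);
  return node;                                              // part of the compile result
}
```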
In an optional implementation manner of this embodiment, the method further includes a step of displaying the speech compilation result, that is, as shown in fig. 6, the method includes the following steps S601 to S604:
in step S601, an input voice is acquired;
in step S602, performing intent recognition on the input speech to generate a compiling instruction;
in step S603, the compiling instruction is executed to obtain a speech compiling result;
in step S604, the speech compilation result is displayed.
After the speech compiling result is obtained, in order to let the user see the final compiling effect more intuitively, this implementation may display the speech compiling result in the page currently visible to the user. If the user is not satisfied with the result, wants to adjust it, or wants to add some functions, the voice compiling operation can be restarted, and elements can be deleted, added, or adjusted following the flow described above.
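Displaying the result could be as simple as the following sketch, where the preview container id is an assumption:

```typescript
// Small illustrative sketch: rendering the compile result into the currently visible page.
function showCompileResult(resultHtml: string): void {
  // "preview" is an assumed container id for the visible page area.
  const preview = document.getElementById("preview");
  if (preview) preview.innerHTML = resultHtml;   // step S604: display the result
}
```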
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention.
Fig. 7 is a block diagram illustrating a structure of a speech compiling apparatus according to an embodiment of the present invention, which may be implemented as part or all of an electronic device by software, hardware, or a combination of both. As shown in fig. 7, the speech compiling apparatus includes:
an obtaining module 701 configured to obtain an input voice;
a generating module 702 configured to perform intent recognition on the input speech to generate a compiling instruction, wherein the intent recognition is to perform semantic recognition on the input speech to obtain intent information;
the execution module 703 is configured to execute the compiling instruction to obtain a voice compiling result.
As mentioned above, as society develops, internet websites, web pages, and internet-based applications play an irreplaceable role in people's learning, work, and daily life, and they also generate a large number of development demands. However, software development is a fairly specialized skill: for users without a professional software development background, website construction, web page generation, and application development are almost out of reach. This inconveniences users and hinders the popularization of the internet and the improvement of service efficiency and quality.
In view of the above, this embodiment proposes a speech compiling apparatus that generates a compiling instruction by performing intent recognition on the input speech and obtains a speech compiling result by executing that instruction. This gives users, especially users without a professional software development background, an effective way to carry out software development tasks such as website construction, web page generation, and application development conveniently, quickly, and accurately. It therefore provides great convenience to users and helps popularize the internet and improve service efficiency and quality.
In an optional implementation manner of this embodiment, the input speech refers to speech input by a user through a speech input device such as a microphone, where the input speech is related to content that the user wants to compile, a compiling effect that the user wants to achieve, or a compiling purpose that the user wants to achieve, that is, the input speech may include one or more of the following data: compiling content, compiling components, compiling operations, compiling effects, compiling destinations, compiling data, and so forth.
In an optional implementation manner of this embodiment, the intention recognition is to perform semantic recognition on the input speech to obtain intention information, or requirement information, that is, to obtain the intention or requirement of the user through the speech input by the user.
In an optional implementation manner of this embodiment, the compiling instruction refers to an instruction that is obtained by performing intent recognition according to the input speech and can be directly executed to achieve a preset compiling purpose or compiling effect, and a compiling result corresponding to the speech input by the user can be obtained after the compiling instruction is executed, so that difficulty in software development can be reduced, and a professional threshold is eliminated, thereby providing great convenience for the user, and facilitating popularization of the internet and improvement of service efficiency and quality.
In an optional implementation manner of this embodiment, as shown in fig. 8, the obtaining module 701 includes:
a display sub-module 801 configured to display a voice compilation control component in response to a compilation page of the audio compilation component being accessed;
an initialization sub-module 802 configured to initialize an audio component in response to the speech compilation control component being started;
an obtaining sub-module 803 configured to turn on the audio component to obtain the input voice.
To acquire the user's voice input effectively and obtain accurate voice content related to the compiling operation, in this embodiment the obtaining module 701 first detects whether a compilation page of the audio compilation component has been accessed, i.e., whether the user intends to perform voice compilation. Once access to the compilation page is detected, the user is assumed to want to perform voice compilation, and in response the display sub-module 801 displays the voice compilation control component so that the user can control the different components involved in the subsequent voice compilation process. If the user then starts the voice compilation control component, for example by clicking its start button, the initialization sub-module 802 initializes the audio component in response to that start operation, and the obtaining sub-module 803 turns on the audio component to capture the voice input by the user.
The audio compiling component is a user-oriented component, which generally operates at a front end to provide a visual compiling interface for a user to browse, input command information of the user, and output information of a compiling result. The compilation interface of the audio compilation component may provide one or more of the following data: audio compilation component introduction information, audio compilation component usage information, audio compilation component launch entries, speech compilation control component information, speech compilation control component launch entries, historical audio compilation data, current audio compilation related information, current audio compilation results, and the like.
In an optional implementation manner of the embodiment, the user can access the compiling page of the audio compiling component through the browser to further improve the use convenience. The configuration parameters of the browser are related to the characteristics of the audio compiling component and the required operating environment and operating parameters thereof, and a person skilled in the art can select a proper browser according to the requirements of practical application.
In an optional implementation manner of this embodiment, the audio component refers to a component capable of acquiring voice data, such as a microphone. After the audio component is initialized in response to the voice compilation control component being started, it can enter an operating state to acquire the user's input voice.
In an optional implementation manner of this embodiment, the display sub-module 801 may be configured to:
and triggering a message communication component in response to the compiling page of the audio compiling component being accessed, so as to establish the connection between the audio compiling component and the voice compiling control component, and displaying the voice compiling control component.
In order to enable normal communication between the audio compilation component and the voice compilation control component, when the compilation page of the audio compilation component is accessed, the message communication component is triggered immediately to establish a communication connection between the audio compilation component and the voice compilation control component for carrying communication messages, and the voice compilation control component is displayed so that it can receive the user's voice messages.
The audio compiling component and the voice compiling control component can be connected by a websocket, the websocket is a protocol for full-duplex communication on a single TCP connection, and the message communication between the audio compiling component and the voice compiling control component can be effectively supported.
In an optional implementation manner of this embodiment, the initialization sub-module 802 may be configured to:
in response to the voice compilation control component being started, establishing a connection between the voice compilation control component and a voice recognition component and initializing the audio component.
In order to enable normal communication between a speech compiling control component and a speech recognition component, after the speech compiling control component is started, a communication connection between the speech compiling control component and the speech recognition component is immediately established so as to carry communication messages, and initialization operation is carried out on the audio component, so that the speech recognition component can rapidly and effectively recognize relevant speech.
Similar to the previous implementation manner, the connection between the speech compiling control component and the speech recognition component may also be a websocket connection.
In an optional implementation manner of this embodiment, the obtaining sub-module 803 may be configured to:
and starting the audio assembly, acquiring input voice and processing the input voice.
In order to improve the accuracy of subsequent voice recognition, after the audio component acquires the input voice, the audio component can also perform processing such as voice denoising and voice optimization on the input voice so as to remove noise such as background sound and microphone noise in the input voice data and improve the definition of the voice data.
In an optional implementation manner of this embodiment, the obtaining module 701 may further include a part for turning off a relevant speech compiling component, that is, as shown in fig. 9, the obtaining module 701 includes:
a display sub-module 901 configured to display a voice compilation control component in response to a compilation page of the audio compilation component being accessed;
an initialization submodule 902 configured to initialize an audio component in response to the speech compilation control component being started;
an obtaining sub-module 903 configured to turn on the audio component and obtain an input voice;
a closing sub-module 904 configured to close the speech compilation control component and the audio component, and the connection between the speech compilation control component and the speech recognition component, in response to receiving a speech compilation control component closing speech instruction.
After the audio compilation component is used by the user, a voice closing instruction can be sent to the voice compilation control component, and the closing sub-module 904 closes the voice compilation control component and the audio component and the connection between the voice compilation control component and the voice recognition component after receiving the voice closing instruction.
The closing voice instruction can be any instruction carrying closing semantics, such as 'goodbye', 'bye-bye', or 'that's all for now'.
In an optional implementation manner of this embodiment, instructions carrying closing semantics, such as 'goodbye', 'bye-bye', or 'that's all for now', may be stored in advance. When voice data is received, it is matched against the pre-stored voice data; if the match succeeds, the corresponding closing operation is performed, and if it fails, the normal voice data processing flow continues. Alternatively, this can be implemented by text matching: command text carrying closing semantics, such as 'goodbye', 'bye-bye', or 'that's all for now', is stored in advance, the received voice data is converted into text, and the text is matched against the pre-stored command text.
In another optional implementation manner of this embodiment, the audio component may directly send the voice or the converted text to the intention recognition component for intention recognition after receiving the voice data, and if it is determined that the user intention is to close the voice compiling function after recognition, execute the corresponding closing operation.
In an optional implementation manner of this embodiment, as shown in fig. 10, the generating module 702 includes:
a conversion submodule 1001 configured to convert an input voice into text in response to a connection being established between the voice recognition component and the voice compilation control component and the voice recognition component receiving the input voice;
the generating sub-module 1002 is configured to send the text to the intention identifying component for intention identification, and generate a compiling instruction.
The connection is established between the speech recognition component and the speech compiling control component, and after the speech recognition component receives the input speech, the conversion sub-module 1001 performs real-time speech recognition on the input speech and converts the input speech into characters, and the generation sub-module 1002 sends the converted characters to the intention recognition component for intention recognition, so as to generate a compiling instruction required by compiling work.
In an optional implementation manner of this embodiment, the generating sub-module 1002 may be configured to:
transmitting the characters back to an audio component, and sending a service interface calling request to the intention identification component;
and responding to the confirmation of the service interface calling request, sending the words to an intention identification component for intention identification, and generating a compiling instruction.
When the characters obtained by converting the voice recognition component are sent to the intention recognition component, the characters can be asynchronously returned to the audio component, a service interface calling request is sent to the intention recognition component, and after the service interface calling request is confirmed, the characters are sent to the intention recognition component for intention recognition.
In an optional implementation manner of this embodiment, as shown in fig. 11, the executing module 703 includes:
the sending submodule 1101 is configured to send the compiling instruction to an audio compiling component through a message communication component;
a determining submodule 1102 configured to determine and obtain the compiling instruction related compiling element;
an execution sub-module 1103 configured to execute the compiling instruction based on the compiling element to obtain a voice compiling result.
When the compiling operation is executed according to the compiling instruction, the sending submodule 1101 sends the compiling instruction to the audio compiling assembly through the message communication assembly, and the determining submodule 1102 determines and obtains a compiling element related to the compiling instruction according to the compiling instruction, such as a compiling implementation assembly, API configuration, linkage configuration and the like; the execution sub-module 1103 executes the compiling instruction based on the compiling element, for example, adding, dragging, executing a corresponding compiling implementation component, and so on, so as to obtain a voice compiling result corresponding to the compiling instruction, for example, generating a complete web page or an internet application that can implement a certain function.
In an optional implementation manner of this embodiment, the apparatus further includes a portion for displaying the result of the speech compilation, that is, as shown in fig. 12, the apparatus includes:
an obtaining module 1201 configured to obtain an input voice;
a generating module 1202 configured to perform intent recognition on the input speech to generate a compiling instruction;
an executing module 1203 configured to execute the compiling instruction to obtain a voice compiling result;
a display module 1204 configured to display the speech compilation result.
After the voice compiling result is obtained, in order to let the user see the final compiling effect more intuitively, the display module 1204 in this implementation may be further configured to display the voice compiling result in the page currently visible to the user. If the user is not satisfied with the voice compiling result, wants to adjust it, or wants to add certain functions, the voice compiling operation can be restarted to delete, add, or adjust content according to the flow described above.
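For a browser-based compiling page, displaying the result could be as simple as rendering the generated markup into the currently visible page. The sketch below assumes the voice compiling result can be represented as an HTML string; this is an assumption made for illustration only:

function displayCompilationResult(result: { html: string }, containerId: string): void {
  const container = document.getElementById(containerId);
  if (container === null) {
    throw new Error(`No visible container with id "${containerId}"`);
  }
  // Render the generated markup in the page currently visible to the user, so
  // the compiling effect can be reviewed before another voice compiling round.
  container.innerHTML = result.html;
}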
Fig. 13 is a block diagram illustrating a structure of an electronic device according to an embodiment of the present invention. As shown in fig. 13, the electronic device 1300 includes a memory 1301 and a processor 1302; wherein,
the memory 1301 is used to store one or more computer instructions, which are executed by the processor 1302 to implement any of the method steps described above.
FIG. 14 is a schematic diagram of a computer system suitable for implementing a speech compilation method according to an embodiment of the invention.
As shown in fig. 14, the computer system 1400 includes a Central Processing Unit (CPU) 1401 which can execute various processes in the above-described embodiments according to a program stored in a Read Only Memory (ROM) 1402 or a program loaded from a storage portion 1408 into a Random Access Memory (RAM) 1403. In the RAM 1403, various programs and data necessary for the operation of the system 1400 are also stored. The CPU 1401, ROM 1402, and RAM 1403 are connected to each other via a bus 1404. An input/output (I/O) interface 1405 is also connected to the bus 1404.
The following components are connected to the I/O interface 1405: an input portion 1406 including a keyboard, a mouse, and the like; an output portion 1407 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 1408 including a hard disk and the like; and a communication portion 1409 including a network interface card such as a LAN card or a modem. The communication portion 1409 performs communication processing via a network such as the Internet. A drive 1410 is also connected to the I/O interface 1405 as necessary. A removable medium 1411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1410 as necessary, so that a computer program read out therefrom is installed into the storage portion 1408 as needed.
In particular, according to an embodiment of the present invention, the method described above may be implemented as a computer software program. For example, an embodiment of the present invention includes a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the voice compiling method. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1409 and/or installed from the removable medium 1411.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, an embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be a computer-readable storage medium included in the apparatus in the foregoing embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the embodiments of the present invention.
The foregoing description is only exemplary of the preferred embodiments of the invention and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention according to the embodiments of the present invention is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present invention.

Claims (22)

1. A speech compilation method, comprising:
acquiring input voice;
performing intention recognition on the input voice to generate a compiling instruction, wherein the intention recognition is to perform semantic recognition on the input voice to obtain intention information;
and executing the compiling instruction to obtain a voice compiling result.
2. The method of claim 1, wherein the obtaining input speech comprises:
displaying a voice compilation control component in response to a compilation page of the audio compilation component being accessed;
initializing an audio component in response to the speech compilation control component being started;
and starting the audio component to obtain input voice.
3. The method of claim 2, wherein the displaying a voice compilation control component in response to the compilation page of the audio compilation component being accessed is implemented as:
and triggering a message communication component in response to the compiling page of the audio compiling component being accessed, so as to establish the connection between the audio compiling component and the voice compiling control component, and displaying the voice compiling control component.
4. The method according to claim 2 or 3, wherein the initializing the audio component in response to the voice compilation control component being started is implemented as:
in response to the voice compilation control component being started, establishing a connection between the voice compilation control component and a voice recognition component and initializing the audio component.
5. The method according to any of claims 2-4, wherein the starting the audio component to obtain input voice is implemented as:
starting the audio component, acquiring input voice, and processing the input voice.
6. The method of any of claims 2-5, further comprising:
and, in response to receiving a voice instruction to close the voice compilation control component, closing the voice compilation control component and the audio component, as well as the connection between the voice compilation control component and the voice recognition component.
7. The method according to any one of claims 1-6, wherein the performing intent recognition on the input speech generates compiled instructions, comprising:
in response to a connection being established between the voice recognition component and the voice compilation control component and the voice recognition component receiving input voice, converting the input voice into text;
and sending the text to an intention recognition component for intention recognition to generate a compiling instruction.
8. The method of claim 7, wherein the sending the text to an intention recognition component for intention recognition to generate a compiling instruction is implemented as:
transmitting the text back to an audio component, and sending a service interface call request to the intention recognition component;
and, in response to the service interface call request being confirmed, sending the text to the intention recognition component for intention recognition to generate a compiling instruction.
9. The method according to any one of claims 1-8, wherein said executing said compiled instructions to obtain a result of speech compilation comprises:
sending the compiling instruction to an audio compiling component through a message communication component;
determining and acquiring a compiling element related to the compiling instruction;
and executing the compiling instruction based on the compiling element to obtain a voice compiling result.
10. The method of any of claims 1-9, further comprising:
and displaying the voice compiling result.
11. A voice compiling apparatus, comprising:
an acquisition module configured to acquire an input voice;
a generation module configured to perform intent recognition on the input speech to generate a compiling instruction, wherein the intent recognition is to perform semantic recognition on the input speech to obtain intent information;
and the execution module is configured to execute the compiling instruction to obtain a voice compiling result.
12. The apparatus of claim 11, wherein the obtaining module comprises:
a display sub-module configured to display the voice compilation control component in response to a compilation page of the audio compilation component being accessed;
an initialization sub-module configured to initialize an audio component in response to the speech compilation control component being started;
and the acquisition sub-module is configured to start the audio component and acquire input voice.
13. The apparatus of claim 12, wherein the display sub-module is configured to:
and triggering a message communication component in response to the compiling page of the audio compiling component being accessed, so as to establish the connection between the audio compiling component and the voice compiling control component, and displaying the voice compiling control component.
14. The apparatus of claim 12 or 13, wherein the initialization submodule is configured to:
in response to the voice compilation control component being started, establishing a connection between the voice compilation control component and a voice recognition component and initializing the audio component.
15. The apparatus of any of claims 12-14, wherein the acquisition sub-module is configured to:
and starting the audio component, acquiring input voice, and processing the input voice.
16. The apparatus of any of claims 12-15, wherein the obtaining module further comprises:
a closing sub-module configured to close the speech compilation control component and the audio component, as well as the connection between the speech compilation control component and the speech recognition component, in response to receiving a voice instruction to close the speech compilation control component.
17. The apparatus according to any of claims 11-16, wherein the generation module comprises:
a conversion sub-module configured to convert input speech into text in response to a connection being established between the speech recognition component and the speech compilation control component and the speech recognition component receiving the input speech;
and a generation sub-module configured to send the text to an intention recognition component for intention recognition and generate a compiling instruction.
18. The apparatus of claim 17, wherein the generation submodule is configured to:
transmitting the text back to an audio component, and sending a service interface call request to the intention recognition component;
and, in response to the service interface call request being confirmed, sending the text to the intention recognition component for intention recognition to generate a compiling instruction.
19. The apparatus according to any one of claims 11-18, wherein the execution module comprises:
the sending submodule is configured to send the compiling instruction to the audio compiling component through the message communication component;
a determining sub-module configured to determine and acquire the compiling elements related to the compiling instruction;
and the execution submodule is configured to execute the compiling instruction based on the compiling element to obtain a voice compiling result.
20. The apparatus of any of claims 11-19, further comprising:
a display module configured to display the speech compilation result.
21. An electronic device comprising a memory and a processor; wherein,
the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method steps of any of claims 1-10.
22. A computer-readable storage medium having stored thereon computer instructions, characterized in that the computer instructions, when executed by a processor, carry out the method steps of any of claims 1-10.
CN201910395525.2A 2019-05-13 2019-05-13 Voice compiling method and device, electronic equipment and computer readable storage medium Pending CN111933126A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910395525.2A CN111933126A (en) 2019-05-13 2019-05-13 Voice compiling method and device, electronic equipment and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN111933126A true CN111933126A (en) 2020-11-13

Family

ID=73282666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910395525.2A Pending CN111933126A (en) 2019-05-13 2019-05-13 Voice compiling method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111933126A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170300187A1 (en) * 2016-04-15 2017-10-19 Naver Corporation Application producing apparatus, system, method, and non-transitory computer readable medium
CN107783763A (en) * 2017-09-29 2018-03-09 乐蜜有限公司 A kind of application program generation method, device, server and readable storage medium storing program for executing
CN108287720A (en) * 2018-02-08 2018-07-17 深圳创维-Rgb电子有限公司 software compilation method, device, equipment and storage medium
CN109542414A (en) * 2018-11-09 2019-03-29 深圳市海勤科技有限公司 A kind of autonomous compiling system of volume production software


Similar Documents

Publication Publication Date Title
EP3494499B1 (en) Initializing a conversation with an automated agent via selectable graphical element
US11086598B2 (en) Providing a communications channel between instances of automated assistants
KR20130112885A (en) Methods and apparatus for providing input to a speech-enabled application program
US20150113409A1 (en) Visual and voice co-browsing framework
CN108027725B (en) Method, device and equipment for guiding terminal equipment operation
CN113094143B (en) Cross-application message sending method and device, electronic equipment and readable storage medium
CN107770380B (en) Information processing method and device
US11741958B2 (en) Using structured audio output to detect playback and/or to adapt to misaligned playback in wireless speakers
US10997963B1 (en) Voice based interaction based on context-based directives
CN110268400B (en) Improving interaction with an electronic chat interface
US20190347067A1 (en) User interface interaction channel
US8855615B2 (en) Short messaging service for extending customer service delivery channels
CN110519373B (en) Method and device for pushing information
CN111933126A (en) Voice compiling method and device, electronic equipment and computer readable storage medium
WO2014101413A1 (en) Contact information processing method and apparatus
WO2023046105A1 (en) Message sending method and apparatus and electronic device
CN110543290A (en) Multimodal response
CN113641439B (en) Text recognition and display method, device, electronic equipment and medium
US11656844B2 (en) Providing a communications channel between instances of automated assistants
CN111147353B (en) Method and device for identifying friend, computer storage medium and electronic equipment
CN110597525A (en) Method and apparatus for installing applications
CN113141298B (en) Message processing method, message processing device, storage medium and electronic equipment
CN117540805A (en) Data processing method, device, electronic equipment and storage medium
CN114464165A (en) Voice service method, device, electronic equipment and storage medium
KR20150107066A (en) Messenger service system, method and apparatus for messenger service using common word in the system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201113