WO2018120820A1 - Method and apparatus for producing presentations - Google Patents


Info

Publication number
WO2018120820A1
WO2018120820A1 (PCT/CN2017/094599)
Authority
WO
WIPO (PCT)
Prior art keywords
presentation
switching
speech
time
action
Prior art date
Application number
PCT/CN2017/094599
Other languages
English (en)
Chinese (zh)
Inventor
吴亮
黄薇
高峰
钟恒
Original Assignee
北京奇虎科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京奇虎科技有限公司 filed Critical 北京奇虎科技有限公司
Publication of WO2018120820A1 publication Critical patent/WO2018120820A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/14Tree-structured documents
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/166Editing, e.g. inserting or deleting

Definitions

  • The present application relates to the field of web technologies, and in particular to a method for creating a presentation and a device for creating a presentation.
  • To support distance learning, the operation of the presentation is usually recorded on video while the user speaks, keeping the user's speech synchronized with the presentation.
  • However, the video data obtained by recording the operation of the presentation is bulky and occupies a lot of storage space.
  • For transmission, the video data is often compressed, which reduces its resolution and leaves the content of the presentation blurry.
  • The present application has been made to provide a method for creating a presentation, and a corresponding device for creating a presentation, that overcome the above problems or at least partially solve them.
  • According to one aspect, a method of creating a presentation is provided, including: loading a web page generated for the presentation; configuring a presentation element in the web page; adding audio data to the presentation element on a time axis so that the audio data is played synchronously when the presentation element is played according to the time axis; and configuring a presentation switching action on the presentation element so that the presentation element is played according to the presentation switching action.
  • According to another aspect, a device for creating a presentation is provided, including:
  • a web page loading module, adapted to load a web page generated for the presentation;
  • a presentation element configuration module, adapted to configure a presentation element in the web page;
  • an audio data adding module, adapted to add audio data to the presentation element on a time axis, so as to synchronously play the audio data when the presentation element is played according to the time axis;
  • a presentation switching action configuration module, adapted to configure a presentation switching action on the presentation element, so as to play the presentation element according to the presentation switching action.
  • Also provided is a computer program comprising computer readable code which, when run on a terminal device, causes the terminal device to perform any of the methods for creating a presentation described above.
  • Also provided is a computer readable medium storing the computer program of a method of creating a presentation as described above.
  • The embodiment of the present application loads, in a client, a web page generated for a presentation, configures a presentation element in the web page, and adds audio data to the presentation element on a time axis, so that the audio data is played synchronously when the presentation element is played according to the time axis.
  • In other words, the web page is used as the carrier for creating the presentation, and the audio data is played in synchronization with the presentation elements, allowing the user to view the contents of the presentation while listening to the accompanying speech.
  • Compared with video data, using web elements as presentation elements can greatly reduce the data volume and the storage space occupied; moreover, because web elements are drawn and loaded directly in the web page without compression, their sharpness is preserved.
  • On the other hand, configuring a presentation switching action on the presentation element allows the presentation element to be played according to the switching action during playback, which increases the synchronization precision between the presentation and the audio data.
  • FIG. 1 is a flow chart of the steps of a method for creating a presentation according to an embodiment of the present application;
  • FIGS. 2A-2C illustrate example diagrams of configuring a presentation element in accordance with one embodiment of the present application;
  • 3A-3D illustrate example diagrams of editing a presentation element and audio data playback order, in accordance with one embodiment of the present application
  • FIGS. 4A-4D illustrate example diagrams of playing presentation elements and audio data in accordance with one embodiment of the present application
  • FIGS. 5A-5B illustrate example diagrams of recording audio data in accordance with one embodiment of the present application
  • FIGS. 6A-6D illustrate example diagrams of an additional speech switching action in accordance with an embodiment of the present application
  • FIGS. 7A-7B are diagrams showing an example of deleting a speech switching action according to an embodiment of the present application.
  • FIGS. 8A-8B illustrate example diagrams of a mobile speech switching action in accordance with one embodiment of the present application
  • FIG. 9 is a structural block diagram of a device for fabricating a presentation according to an embodiment of the present application.
  • Figure 10 schematically shows a block diagram of a terminal device for performing the method according to the present application
  • Fig. 11 schematically shows a storage unit for holding or carrying program code implementing the method according to the present application.
  • Referring to FIG. 1, a flow chart of the steps of a method for creating a presentation according to an embodiment of the present application is shown. Specifically, the method may include the following steps:
  • Step 101: Load a web page generated for the presentation.
  • the user can log in to the server by using a user account on a client such as a browser, and send a request for generating a presentation to the server.
  • The server can then create a new presentation and assign it a unique presentation identifier, such as slide_id (slide ID), which is used to generate a unique URL (Uniform Resource Locator) for editing the presentation; this editing URL is returned to the client.
  • The client accesses the editing URL to load a web page, which is the carrier of the presentation; that is, the content of the presentation can be edited in the web page.
  • The information of the presentation can be displayed in an area such as the user center, and the client can also directly load the web page by using the editing URL, which is not limited by the embodiment of the present application.
  • When the presentation is to be played, the presentation ID is used to generate a unique playback URL for the presentation, and this URL is returned to the client.
  • The client can access the playback URL to load the web page, which is the carrier of the presentation; that is, the presentation can be played in the web page.
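As a rough sketch of the URL scheme described above, the server might derive both the editing URL and the playback URL from the unique slide_id. The host and path layout below are illustrative assumptions, not part of the disclosure:

```javascript
// Hypothetical helper: derive editing and playback URLs from slide_id.
// The host and path structure are assumptions for illustration only.
function makePresentationUrls(slideId, base = "https://slides.example.com") {
  const id = encodeURIComponent(slideId);
  return {
    editUrl: `${base}/presentation/${id}/edit`, // returned after creation
    playUrl: `${base}/presentation/${id}/play`, // returned when playing
  };
}

const urls = makePresentationUrls("a1b2c3");
console.log(urls.editUrl); // https://slides.example.com/presentation/a1b2c3/edit
```

Because both URLs are pure functions of the presentation identifier, the server only needs to store content under the slide_id for either URL to resolve.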
  • Step 102: Configure a presentation element in the web page.
  • The presentation elements can include one or more of the following: text, images, images of specified shapes, lines, tables, frames, and code.
  • The user can put a presentation element into the editing state by clicking it or the like.
  • When a presentation element enters the editing state, an editing operation bar for that element pops up in the web page, displaying the element parameters of the presentation element for the user to adjust.
  • For example, for a table, the editing operation bar popped up in the web page lets the user set element parameters such as the number of rows, the number of columns, cell margins, border width, and border color.
  • After editing, the user can save manually, or a script executed by the client in the web page can save automatically.
  • During saving, the parameters configured for the presentation elements of the web page can be synchronized to the server, which stores them under the presentation (identified by the presentation ID) for subsequent loading.
  • When editing continues later, the client loads the web page with the editing URL and, according to the previously set element parameters, loads the corresponding presentation elements for the user to continue editing; this embodiment of the present application does not limit this.
  • Step 103: Add audio data to the presentation element on a time axis to synchronously play the audio data when the presentation element is played according to the time axis.
  • To control the playing of the presentation, the client can configure a time axis and set the playing time of each presentation element on it.
  • The user can record audio data, such as a speech, and the client adds the audio data to the presentation elements so that the presentation elements and the audio data are played together along the time axis and the two remain synchronized.
  • For example, the user can set the playing times of the presentation elements so that, as time passes and the audio data plays, the presentation elements are switched in order, displaying the text "Quiet Night Thoughts", "Li Bai", "Before the bed, bright moonlight" in sequence.
  • During playback, a timing control is displayed in the lower left corner; as time passes, the audio data is played and the presentation elements are switched in order, that is, the text "Quiet Night Thoughts", "Li Bai", "Before the bed, bright moonlight" is displayed.
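The time-axis behaviour described above can be sketched as a small data structure. The element texts and start times here are illustrative, matching the poem example:

```javascript
// Each presentation element records the time (in seconds) at which it
// appears on the time axis; at any playback time, the visible elements
// are those whose start time has already passed.
const elements = [
  { text: "Quiet Night Thoughts", startTime: 0.0 },
  { text: "Li Bai", startTime: 2.0 },
  { text: "Before the bed, bright moonlight", startTime: 4.5 },
];

function visibleAt(elements, currentTime) {
  return elements
    .filter((el) => el.startTime <= currentTime)
    .map((el) => el.text);
}

console.log(visibleAt(elements, 3.0)); // [ 'Quiet Night Thoughts', 'Li Bai' ]
```

Playing the audio and evaluating `visibleAt` against the audio's current time is what keeps the elements and the speech synchronized.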
  • step 103 may include the following sub-steps:
  • Sub-step S11: Call the recorder to record audio data to the presentation element.
  • When recording is needed, the microphone can be called to collect the original audio data, and the recorder is called to record the audio data.
  • For example, a recording control can be loaded in the web page; after the user clicks the recording control, recording starts, and a visual element of the audio data is displayed on the time-axis element.
  • the sub-step S11 may include the following sub-steps:
  • Sub-step S111: Acquire the original audio stream data collected by the microphone.
  • Sub-step S112: Transmit the original audio stream data to the recorder.
  • Sub-step S113: In the recorder, visualize the original audio stream data according to the recording parameters, and convert the original audio stream data into audio data of a specified format.
  • the client can obtain the original audio stream data collected by the microphone through the getUserMedia interface provided by WebRTC (Web Real-Time Communication).
  • A script processing node is created by the createScriptProcessor method of the Web Audio API and is used to process the raw audio stream data with JavaScript.
  • the audio source node is connected to the processing node, and the processing node is connected to the audio output node to form a complete processing flow.
  • The processing node can listen for the AudioProcessingEvent through its onaudioprocess handler; at regular intervals the event delivers a chunk of the original audio stream data for processing.
  • During recording, the original audio stream data is visualized by the drawAudioWave method (the visual elements are generated from attributes of the original audio stream data such as frequency and waveform), and the audio data is transmitted to a Web Worker for audio processing.
  • When recording stops, audio processing is paused and a file in a format such as WAV is requested from the Web Worker; the Web Worker converts the accumulated original audio stream data into audio data in that format and returns it.
  • The Web Worker also opens a separate thread to temporarily store and process the original audio stream data, so that other processing in the client (such as the browser) can proceed normally.
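The Web Worker's conversion step can be sketched as a generic minimal WAV encoder packing raw Float32 PCM samples (as a ScriptProcessor delivers them) into a WAV byte buffer. The mono channel layout and 16-bit depth are assumptions for illustration, not taken from the disclosure:

```javascript
// Pack Float32 PCM samples into a minimal mono 16-bit little-endian WAV file.
function encodeWav(samples, sampleRate) {
  const buffer = new ArrayBuffer(44 + samples.length * 2);
  const view = new DataView(buffer);
  const writeStr = (off, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i));
  };

  writeStr(0, "RIFF");
  view.setUint32(4, 36 + samples.length * 2, true); // RIFF chunk size
  writeStr(8, "WAVE");
  writeStr(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // audio format: PCM
  view.setUint16(22, 1, true);              // channels: mono (assumption)
  view.setUint32(24, sampleRate, true);     // sample rate
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeStr(36, "data");
  view.setUint32(40, samples.length * 2, true); // data chunk size

  // Clamp floats to [-1, 1] and scale to signed 16-bit integers.
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(44 + i * 2, (s < 0 ? s * 0x8000 : s * 0x7fff) | 0, true);
  }
  return new Uint8Array(buffer);
}

const wav = encodeWav(Float32Array.from([0, 0.5, -0.5]), 44100);
console.log(wav.length); // 50 (44-byte header + 3 samples * 2 bytes)
```

Running this inside a worker keeps the byte packing off the main thread, which matches the motivation given above.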
  • step 103 may include the following sub-steps:
  • Sub-step S21: Input text information to the presentation element.
  • Sub-step S22: Convert the text information into audio data.
  • If the terminal on which the client runs is not equipped with a microphone, the user can input text information to the presentation element, and the text information can be converted into audio data through speech synthesis, also known as Text-to-Speech (TTS) technology.
  • In speech synthesis, prosodic characteristics of the text segments, such as pitch, duration, and intensity, are generated so that the synthesized speech correctly expresses the semantics and sounds natural.
  • Then, the phonetic primitives of the single words or phrases corresponding to the processed text are extracted from the speech synthesis library, the prosodic characteristics of the speech primitives are adjusted and modified using a specific speech synthesis technique, and finally speech data meeting the requirements is synthesized.
  • the manner of adding audio data is only an example.
  • When implementing the embodiment of the present application, other manners of adding audio data may be set according to actual conditions, for example, directly importing existing audio data; this is not limited here.
  • those skilled in the art may also adopt other manners of adding audio data according to actual needs, and the embodiment of the present application does not limit this.
  • the audio data on the time axis can be uploaded to the server.
  • For example, the audio data can be retrieved from the Web Worker, compressed into the specified AMR format using the amrnb.js library, and then uploaded to the server, which stores it under the presentation (identified by the presentation ID) for subsequent loading.
  • Step 104: Configure a presentation switching action on the presentation element to play the presentation element according to the presentation switching action.
  • During recording, the user may define a series of presentation switching actions in the recording editor, and the presentation elements are played according to these switching actions.
  • A presentation switching action includes a switching time and a switching operation mode; that is, each switching action corresponds to a time point in the audio data, and during playback of the presentation, when the audio reaches that time point, the corresponding switching operation mode is triggered to switch the presentation elements.
  • step 104 may include the following sub-steps:
  • Sub-step S31: Receive an add instruction for a presentation switching action.
  • Sub-step S32: Set, according to the add instruction, the switching time of the switching action and the switching operation mode for the presentation element, so that the presentation element is switched according to the switching operation mode when playback reaches the switching time.
  • In other words, a presentation switching action may be added, and the presentation element is switched according to the switching operation mode when playback reaches the switching time.
  • A newly added switching action may fall into one of the following two categories:
  • First, when a switching action is added within the time axis, its time point on the time axis is recorded as the switching time, and the switching operation mode of the presentation element is recorded, so that the presentation element is switched according to that operation mode when playback reaches the time point.
  • The recording editor records the position of the switching action (the corresponding time point) and the corresponding switching operation mode (such as the next action) and displays them in real time in the visualization area.
  • For example, a switching action identifier (a symbol with a triangle in a circle) indicates a switching action; the switching operation mode of the action near 2.3 seconds is the next action, that is, displaying the corresponding text.
  • Second, when a switching action is added outside the time axis, the user can click a blank area beyond the existing switching action identifiers to add the switching action.
  • The recording editor responds to the final state of the switching action and displays it in real time in the visualization area.
  • For example, a switching action identifier (a symbol with a triangle in a circle) represents a switching action; a switching action added outside the time axis has the next action as its switching operation mode, that is, the text "Moonlight in front of the window" is displayed, and this next action can be updated in real time.
  • step 104 may include the following sub-steps:
  • Sub-step S41: Receive a delete instruction for a presentation switching action.
  • Sub-step S42: Delete the switching action's switching time according to the delete instruction, so that the presentation element is switched according to the switching operation mode of the previous switching action when playback reaches that action's switching time.
  • In a specific implementation, the user can click a switching action identifier to pop up a delete box, and click the delete button to delete the existing switching action.
  • After deletion, the presentation element can also be updated in real time, reverting to the state of the previous switching action.
  • For example, a switching action identifier (a symbol with a triangle in a circle) indicates a switching action; the switching action near 16.2 seconds, whose switching operation mode is the next action, that is, displaying the text "Gazing at the bright moon", is deleted, and the switching action near 16.1 seconds can be updated in real time to that next action, that is, the text "Gazing at the bright moon" is displayed.
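The deletion behaviour above amounts to a simple rule: the element state at any playback time is governed by the most recent remaining switching action, so deleting an action makes playback fall back to the previous one. A minimal sketch, with illustrative times and texts:

```javascript
// The active switching action at a playback time is the latest one whose
// switching time has passed; deleting an action therefore makes playback
// fall back to the previous action's switching operation mode.
function activeAction(actions, currentTime) {
  const past = actions
    .filter((a) => a.time <= currentTime)
    .sort((a, b) => a.time - b.time);
  return past.length > 0 ? past[past.length - 1] : null;
}

let actions = [
  { time: 16.1, show: "line A" },
  { time: 16.2, show: "line B" },
];

console.log(activeAction(actions, 16.3).show); // "line B"

// Delete the action near 16.2 s: the same playback time now falls back
// to the switching action near 16.1 s.
actions = actions.filter((a) => a.time !== 16.2);
console.log(activeAction(actions, 16.3).show); // "line A"
```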
  • step 104 may include the following sub-steps:
  • Sub-step S51: Receive a move instruction for a presentation switching action.
  • Sub-step S52: Change the switching action's switching time according to the move instruction, so that the presentation element is switched according to the switching operation mode when playback reaches the changed switching time.
  • In a specific implementation, the user can change the position of a switching action relative to the recorded audio (i.e., its switching time) by clicking an existing switching action identifier and dragging it.
  • To keep the switching actions consistent, an effective time interval can be set, and the change of the switching time is valid only within this interval.
  • When the user clicks a switching action identifier, the effective time interval can be calculated: it lies between the switching time of the previous switching action and the switching time of the next switching action, without overlapping either of them.
  • A time point within the effective time interval is then determined as the new switching time of the switching action, so that the presentation element is switched according to the switching operation mode when playback reaches the changed time point.
  • For example, a switching action identifier (a symbol with a triangle in a circle) indicates a switching action, and the switching operation mode of the action near 6.5 seconds is the next action, that is, displaying the text "Suspected to be frost on the ground".
  • When the user clicks the switching action identifier, the effective time interval is calculated as 5.5 seconds to 10.3 seconds, that is, the area covered by the rectangular figure; if the user moves the identifier to near 8.5 seconds, the text "Suspected to be frost on the ground" is no longer displayed when playback passes near 7.3 seconds.
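The effective-interval rule above can be sketched as a pair of small functions. The data layout is illustrative, and the numbers mirror the 5.5 to 10.3 second example:

```javascript
// Compute the open interval within which a switching action may be moved:
// strictly between the neighbouring actions' switching times. For the
// first/last action, 0 and Infinity are assumed as the outer bounds.
function effectiveInterval(times, index) {
  const lo = index > 0 ? times[index - 1] : 0;
  const hi = index < times.length - 1 ? times[index + 1] : Infinity;
  return { lo, hi };
}

function moveAction(times, index, target) {
  const { lo, hi } = effectiveInterval(times, index);
  // Only accept the change if the target lies inside the open interval.
  if (target > lo && target < hi) {
    const next = times.slice();
    next[index] = target;
    return next;
  }
  return times; // invalid move: keep the original switching time
}

console.log(moveAction([5.5, 6.5, 10.3], 1, 8.5));  // [ 5.5, 8.5, 10.3 ]
console.log(moveAction([5.5, 6.5, 10.3], 1, 12.0)); // unchanged
```

Rejecting moves outside the interval is what prevents one switching action's time from overlapping its neighbours, as the text requires.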
  • the configuration of the above-mentioned speech switching action is only an example.
  • When implementing the embodiment of the present application, other configuration modes of the presentation switching action may be set according to the actual situation, which is not limited by the embodiment of the present application.
  • those skilled in the art may also adopt other configuration modes of the speech switching action according to actual needs, which is not limited in the embodiment of the present application.
  • In summary, the embodiment of the present application loads, in a client, a web page generated for a presentation, configures a presentation element in the web page, and adds audio data to the presentation element on a time axis, so that the audio data is played synchronously when the presentation element is played according to the time axis.
  • In other words, the web page is used as the carrier for creating the presentation, and the audio data is played in synchronization with the presentation elements, allowing the user to view the contents of the presentation while listening to the accompanying speech.
  • Compared with video data, using web elements as presentation elements can greatly reduce the data volume and the storage space occupied; moreover, because web elements are drawn and loaded directly in the web page without compression, their sharpness is preserved.
  • On the other hand, configuring a presentation switching action on the presentation element allows the presentation element to be played according to the switching action during playback, which increases the synchronization precision between the presentation and the audio data.
  • FIG. 9 a structural block diagram of a device for creating a presentation according to an embodiment of the present application is shown, which may specifically include the following modules:
  • a web page loading module 901 configured to load a web page generated for the presentation
  • a presentation element configuration module 902 adapted to configure a presentation element in the web page
  • An audio data adding module 903, configured to add audio data to the presentation element on a time axis to synchronously play the audio data when the presentation element is played according to the time axis;
  • the presentation switching action configuration module 904 is adapted to configure a presentation switching action on the presentation element to play the presentation document element in accordance with the presentation switching action.
  • the audio data adding module 903 includes:
  • a recording sub-module adapted to call the recorder to record audio data to the presentation element.
  • the recording submodule includes:
  • the original audio stream data acquiring unit is adapted to acquire the original audio stream data collected by the microphone;
  • a recorder incoming unit adapted to transmit the raw audio stream data to the recorder
  • a recorder processing unit adapted to visualize the original audio stream data in the recorder according to recording parameters, and convert the original audio stream data into audio data of a specified format.
  • the audio data adding module 903 includes:
  • a text information input submodule adapted to input text information to the presentation element
  • a text information conversion sub-module adapted to convert the text information into audio data.
  • the speech switching action includes a switching time and a switching operation mode
  • the speech switching action configuration module 904 includes:
  • a presentation switching action adding submodule, configured to set, according to an add instruction, the switching time of the switching action and the switching operation mode for the presentation element, so as to switch the presentation element according to the switching operation mode when playback reaches the switching time.
  • the speech switching action adding submodule includes:
  • a first switching time recording unit, configured to record, when a switching action is added within the time axis, the time point of the switching action on the time axis as the switching time;
  • a first switching operation mode recording unit, adapted to record the switching operation mode for the presentation element, so as to switch the presentation element according to the switching operation mode when playback reaches that time point;
  • a second switching time recording unit, adapted to record, when a switching action is added outside the time axis, the end time of the time axis as the switching time;
  • a second switching operation mode recording unit, adapted to record the switching operation mode for the presentation element, so as to switch the presentation element according to the switching operation mode when playback reaches the end time.
  • the speech switching action configuration module 904 further includes:
  • a presentation switching action deletion submodule, configured to delete the switching action's switching time according to a delete instruction, so that the presentation element is switched according to the switching operation mode of the previous switching action when playback reaches that action's switching time.
  • the speech switching action configuration module 904 further includes:
  • a move instruction receiving submodule, adapted to receive a move instruction for a switching action;
  • a presentation switching action moving submodule, adapted to change the switching action's switching time according to the move instruction, so as to switch the presentation element according to the switching operation mode when playback reaches the changed switching time.
  • the speech switching action moving submodule includes:
  • the effective time interval calculation unit is adapted to calculate an effective time interval between the switching time of the last speech switching action and the switching time of the next speech switching action;
  • the switching time determining unit is adapted to determine a time point as the switching time of the speech switching action in the valid time interval to switch the presentation document element according to the switching operation mode when playing to the changed time point.
  • the method further includes:
  • An audio uploading module adapted to upload audio data on the timeline to a server.
  • the description is relatively simple, and the relevant parts can be referred to the description of the method embodiment.
  • The various component embodiments of the present application can be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or digital signal processor (DSP) can be used in practice to implement some or all of the functions of some or all of the components of the presentation production device according to embodiments of the present application.
  • the application can also be implemented as a device or device program (e.g., a computer program and a computer program product) for performing some or all of the methods described herein.
  • Such a program implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
  • FIG. 10 illustrates a terminal device that can implement the production of a presentation according to the present application.
  • the terminal device conventionally includes a processor 1010 and a computer program product or computer readable medium in the form of a memory 1020.
  • the memory 1020 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read Only Memory), an EPROM, a hard disk, or a ROM.
  • the memory 1020 has a memory space 1030 for executing program code 1031 of any of the above method steps.
  • storage space 1030 for program code may include various program code 1031 for implementing various steps in the above methods, respectively.
  • the program code can be read from or written to one or more computer program products.
  • Such computer program products include program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 11.
  • The storage unit may have storage segments, storage space, and the like arranged similarly to the memory 1020 in the terminal device of FIG. 10.
  • the program code can be compressed, for example, in an appropriate form.
  • The storage unit comprises computer readable code 1031', i.e., code that can be read by a processor such as the processor 1010; when run by the terminal device, the code causes the terminal device to perform the steps of the methods described above.
  • "an embodiment," or "an embodiment," or "one or more embodiments" as used herein means that the particular features, structures, or characteristics described in connection with the embodiments are included in at least one embodiment of the present application.
  • phrase "in one embodiment" is not necessarily referring to the same embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of the present invention relate to a method and apparatus for producing presentations, the method comprising the steps of: loading a web page generated for a presentation; configuring a presentation element within said web page; adding audio data to said presentation element on a time axis such that, when said presentation element is played, said audio data is played synchronously along said time axis; and configuring a presentation switching action for said presentation element such that said presentation element is played according to said presentation switching action. Embodiments of the present invention use a web element as the presentation element, which, compared with video data, can greatly reduce the data volume, reducing the amount of storage space occupied, while also guaranteeing the sharpness of the web element, since the web element is rendered and loaded on a web page without any need for compression.
PCT/CN2017/094599 2016-12-26 2017-07-27 Method and apparatus for producing presentations WO2018120820A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611219547.6 2016-12-26
CN201611219547.6A CN108241597A (zh) Method and apparatus for producing a presentation

Publications (1)

Publication Number Publication Date
WO2018120820A1 true WO2018120820A1 (fr) 2018-07-05

Family

ID=62701920

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/094599 WO2018120820A1 (fr) Method and apparatus for producing presentations

Country Status (2)

Country Link
CN (1) CN108241597A (fr)
WO (1) WO2018120820A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112533054A (zh) Online video playback method and apparatus, and storage medium

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN110347848A (zh) Presentation management method and apparatus
CN111221452B (zh) Plan explanation control method
CN113177126A (zh) Method and apparatus for processing a presentation, computer storage medium, and terminal

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101299250A (zh) Online collaborative slide production service system
CN101802816A (zh) Synchronizing slide show display events with audio
CN105450944A (zh) Method and device for synchronously recording and reproducing slides and live speech

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7312803B2 (en) * 2004-06-01 2007-12-25 X20 Media Inc. Method for producing graphics for overlay on a video source
CN101344883A (zh) Method for recording a presentation
US20120317486A1 (en) * 2011-06-07 2012-12-13 Microsoft Corporation Embedded web viewer for presentation applications

Also Published As

Publication number Publication date
CN108241597A (zh) 2018-07-03

Similar Documents

Publication Publication Date Title
WO2018120819A1 (fr) Method and device for producing presentations
WO2018120821A1 (fr) Method and device for producing a presentation
US10210769B2 (en) Method and system for reading fluency training
US9552807B2 (en) Method, apparatus and system for regenerating voice intonation in automatically dubbed videos
US8548618B1 (en) Systems and methods for creating narration audio
JP5030617B2 (ja) Method, system, and program for RSS content management for rendering RSS content on a digital audio player
CN108831437B (zh) Singing voice generation method, apparatus, terminal, and storage medium
US20200058288A1 (en) Timbre-selectable human voice playback system, playback method thereof and computer-readable recording medium
WO2020098115A1 (fr) Subtitle adding method and apparatus, electronic device, and computer-readable storage medium
WO2016037440A1 (fr) Video voice conversion method and device, and server
WO2018120820A1 (fr) Method and apparatus for producing presentations
US20080027726A1 (en) Text to audio mapping, and animation of the text
US20130246063A1 (en) System and Methods for Providing Animated Video Content with a Spoken Language Segment
US20090006965A1 (en) Assisting A User In Editing A Motion Picture With Audio Recast Of A Legacy Web Page
US20110112835A1 (en) Comment recording apparatus, method, program, and storage medium
WO2012086356A1 (fr) File format, server, viewer device for digital comic, and digital comic generation device
US20180226101A1 (en) Methods and systems for interactive multimedia creation
JPH0778074A (ja) Multimedia script creation method and apparatus
Pauletto et al. Exploring expressivity and emotion with artificial voice and speech technologies
US9087512B2 (en) Speech synthesis method and apparatus for electronic system
US20080243510A1 (en) Overlapping screen reading of non-sequential text
KR102353797B1 (ko) Method and system for supporting content editing based on real-time generation of synthesized speech for video content
JP2008217447A (ja) Content generation device and content generation program
CN109299082B (zh) Big data analysis method and system
JP2022142374A (ja) Speech recognition system, speech recognition method, and program

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17886990

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 17886990

Country of ref document: EP

Kind code of ref document: A1