CN111381954B - Audio data recording method, system and terminal equipment - Google Patents

Audio data recording method, system and terminal equipment

Info

Publication number
CN111381954B
CN111381954B (application CN201811607839.6A)
Authority
CN
China
Prior art keywords
audio data
thread
recording
playing
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811607839.6A
Other languages
Chinese (zh)
Other versions
CN111381954A (en)
Inventor
熊友军
潘宇超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201811607839.6A priority Critical patent/CN111381954B/en
Publication of CN111381954A publication Critical patent/CN111381954A/en
Application granted granted Critical
Publication of CN111381954B publication Critical patent/CN111381954B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/545 Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/5018 Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

The invention is applicable to the technical field of electronics and provides an audio data recording method, an audio data recording system and terminal equipment. The method comprises the steps of: identifying a recording object according to a recording instruction of a client; if the recording object is audio data played by the client, creating a copy playing thread, wherein the copy playing thread comprises a first playing thread and a second playing thread; controlling the server to start the first playing thread and play the audio data through a standard output device; controlling the server to start the second playing thread and store the played audio data into an audio hardware abstraction layer; and starting a recording thread through the server and controlling the client to acquire the audio data stored in the audio hardware abstraction layer. The user can thus hear the played sound, while the audio data is simultaneously output to the audio hardware abstraction layer through the second playing thread, so that the client can acquire the audio data from the audio hardware abstraction layer once the recording thread is started.

Description

Audio data recording method, system and terminal equipment
Technical Field
The invention belongs to the technical field of electronics, and particularly relates to an audio data recording method, an audio data recording system and terminal equipment.
Background
The recording function generally refers to acquiring audio data from a standard input device such as a microphone and storing it. The current Android system provides an audio data recording method capable of recording the audio data being played in the system (i.e. audio data not obtained from a standard input device such as a microphone). This method has a drawback: when recording the audio being played, the system stops the playing thread that drives the standard output device (such as a loudspeaker or earphone), so the user cannot hear any sound from the standard output device while the audio data being played by the system is recorded.
In summary, when recording the audio data being played, current systems leave the user unable to hear the sound from the standard output device.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide an audio data recording method, system and terminal device, so as to solve the problem that a user currently cannot hear sound from a standard output device while recording the audio data being played by the system.
The first aspect of the present invention provides an audio data recording method, including:
identifying a recording object according to a recording instruction of the client;
If the recorded object is audio data played by a client, creating a copy playing thread, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output;
the control server starts the first playing thread to play the audio data through the standard output equipment;
The control server starts the second playing thread and stores the played audio data into an audio hardware abstraction layer;
And starting a recording thread through the server side, and controlling the client side to acquire the audio data stored in the audio hardware abstraction layer.
A second aspect of the present invention provides an audio data recording system comprising:
The identification module is used for identifying a recording object according to the recording instruction of the client;
The creation module is used for creating a copy playing thread if the recording object is the audio data played by the client, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output;
The first starting module is used for controlling the server to start the first playing thread to play the audio data through the standard output equipment;
The second starting module is used for controlling the server to start the second playing thread and storing the played audio data into the audio hardware abstraction layer;
The recording module is used for starting a recording thread through the server and controlling the client to acquire the audio data stored in the audio hardware abstraction layer.
A third aspect of the present invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
identifying a recording object according to a recording instruction of the client;
If the recorded object is audio data played by a client, creating a copy playing thread, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output;
the control server starts the first playing thread to play the audio data through the standard output equipment;
The control server starts the second playing thread and stores the played audio data into an audio hardware abstraction layer;
And starting a recording thread through the server side, and controlling the client side to acquire the audio data stored in the audio hardware abstraction layer.
A fourth aspect of the present invention provides a computer readable storage medium storing a computer program which when executed by a processor performs the steps of:
identifying a recording object according to a recording instruction of the client;
If the recorded object is audio data played by a client, creating a copy playing thread, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output;
the control server starts the first playing thread to play the audio data through the standard output equipment;
The control server starts the second playing thread and stores the played audio data into an audio hardware abstraction layer;
And starting a recording thread through the server side, and controlling the client side to acquire the audio data stored in the audio hardware abstraction layer.
According to the audio data recording method, system and terminal equipment provided by the invention, when the sound played by the client is recorded, a copy playing thread is created: the audio data is transmitted to the standard output device through the first playing thread, so that the user can hear the played sound, and at the same time the audio data is output to the audio hardware abstraction layer through the second playing thread, so that the client can acquire the audio data from the audio hardware abstraction layer once the recording thread is started. Recording of the audio data being played by the client is thus realized, effectively solving the problem that the user cannot hear sound from the standard output device while the system records the audio data being played.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flowchart of an audio data recording method according to a first embodiment of the present invention;
Fig. 2 is a schematic diagram of an implementation architecture of the audio data recording method according to the first embodiment of the present invention;
Fig. 3 is a schematic implementation flowchart of step S103, provided by a second embodiment of the present invention;
Fig. 4 is a schematic implementation flowchart of step S104, provided by a third embodiment of the present invention;
Fig. 5 is a schematic implementation flowchart of step S105, provided by a fourth embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an audio data recording system according to a fifth embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the first starting module 103 of the fifth embodiment, provided by a sixth embodiment of the present invention;
Fig. 8 is a schematic structural diagram of the second starting module 104 of the fifth embodiment, provided by a seventh embodiment of the present invention;
Fig. 9 is a schematic structural diagram of the recording module 105 of the fifth embodiment, provided by an eighth embodiment of the present invention;
Fig. 10 is a schematic diagram of a terminal device provided by a ninth embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Embodiment one:
as shown in fig. 1, the present embodiment provides an audio data recording method, which specifically includes:
Step S101: and identifying the recording object according to the recording instruction of the client.
In a specific application, the recording object includes audio data input through a standard input device (such as a microphone) and audio data played by a client. The client identifies the recording object by acquiring a recording instruction issued by a recording application installed in the terminal device. It should be noted that the recording instruction includes a recording object.
In a specific application, the client is the audio recording client AudioRecord. When the client AudioRecord recognizes that the recording object is audio data being played in the system, the input source is set to MediaRecorder.AudioSource.REMOTE_SUBMIX. Specifically, when the recording object is identified as audio data played by the client, an AudioRecord instance is created in the Java layer and its input source is designated as MediaRecorder.AudioSource.REMOTE_SUBMIX.
Note that AudioTrack is the Native-layer instance corresponding to the Java-layer playback application, and AudioRecord is the Native-layer instance corresponding to the Java-layer recording application.
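For reference, the following Java-layer sketch illustrates how the input source can be selected according to the recording object identified in step S101. It is only an illustrative sketch, not part of the claimed method: the sample rate, channel mask and buffer sizing are assumed values, and the helper class name is hypothetical. On Android, MediaRecorder.AudioSource.REMOTE_SUBMIX requires the CAPTURE_AUDIO_OUTPUT permission, which is normally granted only to system applications.

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RecordSourceSelector {
    // Hypothetical helper: choose the input source according to the recording
    // object carried in the recording instruction (step S101).
    public static AudioRecord createRecorder(boolean recordPlayback) {
        int sampleRate = 44100;                        // assumed sample rate
        int channel = AudioFormat.CHANNEL_IN_STEREO;   // assumed channel mask
        int format = AudioFormat.ENCODING_PCM_16BIT;
        int minBuf = AudioRecord.getMinBufferSize(sampleRate, channel, format);

        // REMOTE_SUBMIX captures the audio being played by the system
        // (requires CAPTURE_AUDIO_OUTPUT); MIC captures the standard input device.
        int source = recordPlayback
                ? MediaRecorder.AudioSource.REMOTE_SUBMIX
                : MediaRecorder.AudioSource.MIC;

        return new AudioRecord(source, sampleRate, channel, format, minBuf * 2);
    }
}
```

A caller would pass true when the recording object is the audio being played by the client, and false when the recording object is the standard input device, matching steps S102 and S106 respectively.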
Step S102: if the recorded object is audio data played by a client, a copy playing thread is created, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the inner recording output.
In a specific application, when it is identified that the recording object is audio data played by the client, that is, the input source of the AudioRecord instance is MediaRecorder.AudioSource.REMOTE_SUBMIX, the server AudioPolicyService creates a copy playing thread in the Native layer. The copy playing thread is created by copying the playing thread of the audio data currently being played by the client, and comprises a first playing thread and a second playing thread. The first playing thread is used for controlling the standard output device (loudspeaker, earphone, etc.) to play the audio data, and the second playing thread is used for controlling the audio data being played to be cached in the corresponding audio hardware abstraction layer.
It should be noted that the copy playing thread comprises two playing threads supporting different playing objects: one supports the standard output device and one supports the internal recording output. The playing thread corresponding to the standard output device continues to output the audio data through the standard output device, while the playing thread corresponding to the internal recording output provides the audio data to be recorded to the corresponding input device (the Audio Remote Submix audio hardware abstraction layer).
Step S103: and the control server starts the first playing thread and plays the audio data through the standard output equipment.
In a specific application, the server AudioPolicyService is controlled to start the first playing thread: the playing application installed in the terminal device sends the decoded audio data to be played to the audio client AudioTrack, which writes it into a channel (Track) of the first playing thread on the server side; the audio data is then transferred to the audio hardware abstraction layer and finally played through the standard output device. By controlling the server to start the first playing thread, the audio data can continue to be played on the standard output device.
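As an illustrative Java-layer sketch of the playback side described above, the following code writes decoded PCM data into an AudioTrack; each write() hands the data to the channel (Track) of the server playing thread, which forwards it to the audio hardware abstraction layer and the standard output device. The class name, usage/content-type attributes and buffer sizing are assumptions, not prescriptions of the method.

```java
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class PcmPlayer {
    private final AudioTrack track;

    public PcmPlayer(int sampleRate) {
        int channel = AudioFormat.CHANNEL_OUT_STEREO;
        int format = AudioFormat.ENCODING_PCM_16BIT;
        int minBuf = AudioTrack.getMinBufferSize(sampleRate, channel, format);

        track = new AudioTrack(
                new AudioAttributes.Builder()
                        .setUsage(AudioAttributes.USAGE_MEDIA)
                        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                        .build(),
                new AudioFormat.Builder()
                        .setSampleRate(sampleRate)
                        .setChannelMask(channel)
                        .setEncoding(format)
                        .build(),
                minBuf * 2,                                  // assumed buffer size
                AudioTrack.MODE_STREAM,
                AudioManager.AUDIO_SESSION_ID_GENERATE);
        track.play();
    }

    // Writes decoded PCM into the playback Track; blocks until the data is queued.
    public void write(byte[] pcm, int length) {
        track.write(pcm, 0, length);
    }

    public void release() {
        track.stop();
        track.release();
    }
}
```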
Step S104: and the control server starts the second playing thread and stores the played audio data into an audio hardware abstraction layer.
In a specific application, the server AudioPolicyService is controlled to start the second playing thread: the playing application installed in the terminal device sends the decoded audio data to be played to the audio client AudioTrack, which writes it into a channel (Track) of the second playing thread on the server side; the audio data is then written into a buffer space in the audio hardware abstraction layer supporting the internal recording output (the Audio Remote Submix abstraction layer).
Step S105: and starting a recording thread through the server side, and controlling the client side to acquire the audio data stored in the audio hardware abstraction layer.
In a specific application, when the audio data being played needs to be recorded, after the recording application program outputs a recording instruction, the server AudioPolicyService creates a recording thread while creating a copy playing thread in the Native layer.
In a specific application, the server AudioPolicyService is controlled to start the recording thread. Through the audio hardware abstraction layer, the recording thread obtains the audio data that the second playing thread stored in the buffer space of the audio hardware abstraction layer; the recording client AudioRecord then reads the audio data from the audio hardware abstraction layer and returns it to the recording application.
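The read side of step S105 can be sketched at the Java layer as follows: the AudioRecord instance created with the REMOTE_SUBMIX source is started and read in a loop, and the captured data is handed back to the recording application, here assumed to be an OutputStream sink. This is a minimal illustration of the client-side read loop, not the server-side RecordThread itself.

```java
import android.media.AudioRecord;
import java.io.IOException;
import java.io.OutputStream;

public class SubmixCaptureLoop implements Runnable {
    private final AudioRecord recorder;      // assumed to use the REMOTE_SUBMIX source
    private final OutputStream sink;         // hypothetical destination for the recording app
    private volatile boolean running = true;

    public SubmixCaptureLoop(AudioRecord recorder, OutputStream sink) {
        this.recorder = recorder;
        this.sink = sink;
    }

    @Override
    public void run() {
        byte[] buffer = new byte[4096];      // assumed chunk size
        recorder.startRecording();           // the server starts the recording thread
        try {
            while (running) {
                int n = recorder.read(buffer, 0, buffer.length);
                if (n > 0) {
                    sink.write(buffer, 0, n); // return the captured data to the recording app
                }
            }
        } catch (IOException e) {
            // assumed error handling: stop capturing if the sink fails
        } finally {
            recorder.stop();
            recorder.release();
        }
    }

    public void stopCapture() {
        running = false;
    }
}
```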
In one embodiment, the audio data recording method further includes:
Step S106: if the recorded object is the audio data input by the standard input device, the control server starts a recording thread, acquires the audio data from the standard input device through the hardware abstraction layer, and transmits the audio data to a channel of the recording thread.
In a specific application, when the client AudioRecord recognizes that the recording object is audio data input by the standard input device, the input source is designated as the standard input device (microphone). Specifically, when the recording object is identified as audio data from the standard input device, an AudioRecord instance is created in the Java layer and its input source is designated as MediaRecorder.AudioSource.MIC; the server is then controlled to start a recording thread, obtain the audio data from the standard input device through the hardware abstraction layer, and transmit the audio data to a channel of the recording thread.
Step S107: and the control client acquires the audio data from the access of the recording thread.
In a specific application, the recording client AudioRecord is controlled to read the audio data from the channel (Track) of the recording thread and return it to the recording application.
In order to more clearly explain the audio data recording method provided by the present embodiment, the audio data recording method provided by the present embodiment is described below with reference to fig. 2:
As shown in fig. 2, AudioTrack is the Native-layer instance of the Java-layer playback application, and AudioRecord is the Native-layer instance of the Java-layer recording application. The server AudioFlinger and the server AudioPolicyService are servers in the Native layer: AudioFlinger manages a plurality of playback threads and recording threads for processing the audio data to be played and the audio data to be recorded, while AudioPolicyService is mainly used for controlling the playing threads of the server, including their starting (start), stopping (stop) and releasing (release). Each client AudioTrack and client AudioRecord corresponds to one channel (Track) of a server playing thread or recording thread in the server AudioFlinger (each server thread has at most 32 tracks). The audio hardware abstraction layer is responsible for controlling the input devices and output devices; input and output devices with different characteristics require different abstraction layers to control them. There are two audio hardware abstraction layers in this embodiment: the first supports the standard input device and the standard output device, and the second supports the internal recording output (Remote Submix).
When it is identified that the recording object is audio data played by the client, the server AudioPolicyService creates a copy playing thread DuplicatingThread according to the audio data being played, where the copy playing thread includes a first playing thread PlaybackThread supporting the standard output device and a second playing thread PlaybackThread supporting the internal recording output.
The server AudioPolicyService starts the first playing thread PlaybackThread to output the audio data, through the audio hardware abstraction layer supporting the standard output device, to the standard output device located in the Linux kernel layer for playing. Meanwhile, the server AudioPolicyService starts the second playing thread PlaybackThread to store the audio data into the buffer space of the audio hardware abstraction layer supporting the internal recording output. The recording thread RecordThread is then started to read the audio data from that buffer space, return it to the AudioRecord instance of the Native layer, and from there return it to the recording application. In this way, the audio data can continue to be played while the audio data being played by the client is recorded, which lets the user easily judge whether recording is in progress and improves interactivity.
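Putting the two paths together, a hedged end-to-end Java-layer sketch might look as follows. It reuses the hypothetical RecordSourceSelector, PcmPlayer and SubmixCaptureLoop classes from the earlier sketches, assumes a system application holding CAPTURE_AUDIO_OUTPUT, and uses an illustrative output file path; it is not a definitive implementation of the described architecture.

```java
import android.media.AudioRecord;
import java.io.FileOutputStream;

public class PlayAndRecordDemo {
    // decodedFrames stands in for PCM frames produced by the playback application.
    public static void playAndCapture(PcmPlayer player, byte[][] decodedFrames)
            throws Exception {
        AudioRecord recorder = RecordSourceSelector.createRecorder(true); // REMOTE_SUBMIX
        SubmixCaptureLoop capture = new SubmixCaptureLoop(
                recorder, new FileOutputStream("/sdcard/submix.pcm"));     // assumed path
        Thread captureThread = new Thread(capture, "submix-capture");
        captureThread.start();                 // recording thread reads from the submix HAL

        for (byte[] frame : decodedFrames) {   // playback continues on the standard output
            player.write(frame, frame.length);
        }

        capture.stopCapture();
        captureThread.join();
        player.release();
    }
}
```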
According to the audio data recording method provided by this embodiment, when the client plays sound, a copy playing thread is created: the audio data is transmitted to the standard output device through the first playing thread, so that the user can hear the played sound, and at the same time the audio data is output to the audio hardware abstraction layer through the second playing thread, so that the client can acquire the audio data from the audio hardware abstraction layer once the recording thread is started. Recording of the audio data being played by the client is thus realized, effectively solving the problem that the user cannot hear sound from the standard output device when the system records the audio data being played.
Embodiment two:
As shown in fig. 3, in the present embodiment, step S103 in the first embodiment specifically includes:
step S201: the control server starts the first playing thread and writes the audio data into a channel of the first playing thread through the audio playing client.
In a specific application, the control server AudioPolicyService starts the first playing thread, and writes the audio data into the channel of the first playing thread through the audio playing client AudioTrack. It should be noted that each playing thread has its corresponding channel, through which data is transmitted.
Step S202: and transmitting the audio data to an audio hardware abstraction layer through the channel of the first playing thread.
In a specific application, the audio data is correspondingly transmitted to an audio hardware abstraction layer supporting standard output equipment through a channel of a first playing thread.
Step S203: and controlling the audio hardware abstraction layer to write the audio data into the standard output equipment for playing.
In a specific application, after audio data is transmitted to an audio hardware abstraction layer supporting standard output equipment, the audio hardware abstraction layer is controlled to write the audio data into the corresponding standard output equipment, and then the audio data is played through the standard output equipment.
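As a related Java-layer illustration of directing playback toward a particular standard output device, the following sketch asks the framework to route a playback Track to the built-in speaker or a wired headset; the audio hardware abstraction layer then drives that device. The device-selection logic and class name are assumptions for illustration and are not part of the described method.

```java
import android.content.Context;
import android.media.AudioDeviceInfo;
import android.media.AudioManager;
import android.media.AudioTrack;

public class OutputDeviceRouter {
    // Pick a standard output device and request that the playback Track use it.
    public static boolean routeToStandardOutput(Context context, AudioTrack track) {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        for (AudioDeviceInfo device : am.getDevices(AudioManager.GET_DEVICES_OUTPUTS)) {
            int type = device.getType();
            if (type == AudioDeviceInfo.TYPE_WIRED_HEADSET
                    || type == AudioDeviceInfo.TYPE_BUILTIN_SPEAKER) {
                return track.setPreferredDevice(device);  // HAL then drives this device
            }
        }
        return false;  // no standard output device found
    }
}
```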
Embodiment III:
As shown in fig. 4, in the present embodiment, step S104 in the first embodiment specifically includes:
step S301: and the control server starts the second playing thread and writes the audio data into a channel of the second playing thread through the audio playing client.
In a specific application, the control server AudioPolicyService starts the second playback thread PlaybackThread, and writes the audio data into the channel of the second playback thread through the audio playback client AudioTrack.
Step S302: a buffer is created at the audio hardware abstraction layer.
In a specific application, a buffer area is created in the audio hardware abstraction layer supporting the internal recording output, corresponding to the audio data to be stored.
In a specific application, a buffer area capable of accommodating the audio data is created according to the storage space required to be occupied by the audio data.
Step S303: and transmitting the audio data to a cache region of the audio hardware abstraction layer through the second playing thread.
In a specific application, the audio data written into the channel of the second playing thread is transferred to the buffer area created for it in the audio hardware abstraction layer supporting the internal recording output.
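Since the actual buffer area lives in the native remote-submix hardware abstraction layer, the following Java sketch is only a conceptual stand-in showing the producer/consumer relationship between the second playing thread (writer) and the recording thread (reader); the capacity and blocking behaviour are assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SubmixBuffer {
    private final BlockingQueue<byte[]> chunks;

    public SubmixBuffer(int capacityChunks) {
        // Capacity chosen from the storage space the audio data is expected to occupy.
        chunks = new ArrayBlockingQueue<>(capacityChunks);
    }

    // Called from the second playing thread: store the played audio data.
    public void write(byte[] pcmChunk) throws InterruptedException {
        chunks.put(pcmChunk);   // blocks when the buffer area is full
    }

    // Called from the recording thread: read the stored audio data.
    public byte[] read() throws InterruptedException {
        return chunks.take();   // blocks until data is available
    }
}
```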
Embodiment four:
as shown in fig. 5, in the present embodiment, step S105 in the first embodiment specifically includes:
Step S401: the control server starts a recording thread, reads audio data stored in an audio hardware abstraction layer, and stores the audio data in a channel of the recording thread.
In a specific application, the server AudioPolicyService is controlled to start the recording thread, which reads the audio data stored in the buffer area of the audio hardware abstraction layer supporting the internal recording output and stores it in the channel of the recording thread.
Step S402: and the control client acquires the audio data from the access of the recording thread.
In a specific application, the recording client AudioRecord is controlled to read the audio data from the channel (Track) of the recording thread and return it to the recording application.
Fifth embodiment:
As shown in fig. 6, the present embodiment provides an audio data recording system 100 for performing the method steps in the first embodiment, which includes an identification module 101, a creation module 102, a first start module 103, a second start module 104, and a recording module 105.
The identifying module 101 is configured to identify a recording object according to a recording instruction of the client.
The creating module 102 is configured to create a copy playing thread if the recording object is audio data played by the client, where the copy playing thread includes a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output.
The first starting module 103 is configured to control the server to start the first playing thread to play the audio data through the standard output device.
The second starting module 104 is configured to control the server to start the second playing thread, and store the played audio data in the audio hardware abstraction layer.
The recording module 105 is configured to start a recording thread through the server, and control the client to obtain audio data stored in the audio hardware abstraction layer.
In one embodiment, the audio data recording system further includes an input audio recording module and an input audio acquisition module.
The input audio recording module is used for controlling the server to start a recording thread if the recording object is the audio data input by the standard input device, acquiring the audio data from the standard input device through the hardware abstraction layer and transmitting the audio data to a channel of the recording thread.
The input audio acquisition module is used for controlling the client to acquire the audio data from the channel of the recording thread.
It should be noted that, since the audio data recording system provided in the embodiment of the present invention is based on the same concept as the embodiment of the method shown in fig. 1, the technical effects thereof are the same as the embodiment of the method shown in fig. 1, and the specific content can be referred to the description in the embodiment of the method shown in fig. 1, and the description is omitted herein.
Therefore, the audio data recording system provided by this embodiment creates a copy playing thread when the client plays sound: the audio data is transmitted to the standard output device through the first playing thread, so that the user can hear the played sound, and at the same time the audio data is output to the audio hardware abstraction layer through the second playing thread, so that the client can acquire the audio data from the audio hardware abstraction layer once the recording thread is started. Recording of the audio data being played by the client is thus realized, effectively solving the problem that the user cannot hear sound from the standard output device when the system records the audio data being played.
Example six:
As shown in fig. 7, in the present embodiment, the first starting module 103 in the fifth embodiment includes a structure for performing the steps of the method in the embodiment corresponding to fig. 3, and includes a first starting unit 201, a transmitting unit 202, and a playing unit 203.
The first starting unit 201 is configured to control the server to start the first playing thread, and write the audio data into a channel of the first playing thread through the audio playing client.
The transmission unit 202 is configured to transmit the audio data to an audio hardware abstraction layer through a channel of the first playing thread.
The playing unit 203 is configured to control the audio hardware abstraction layer to write audio data into the standard output device for playing.
Embodiment seven:
As shown in fig. 8, in the present embodiment, the second starting module 104 in the fifth embodiment includes a structure for executing the steps of the method in the embodiment corresponding to fig. 4, which includes a second starting unit 301, a cache creating unit 302, and a cache unit 303.
The second starting unit 301 is configured to control the server to start the second playing thread, and write the audio data to a channel of the second playing thread through the audio playing client.
The buffer creation unit 302 is configured to create a buffer at the audio hardware abstraction layer.
The buffer unit 303 is configured to transmit the audio data to a buffer area of the audio hardware abstraction layer through the second playing thread.
Example eight:
As shown in fig. 9, in the present embodiment, the sound recording module 105 in the fifth embodiment includes a structure for performing the steps of the method in the embodiment corresponding to fig. 5, which includes a data reading unit 401 and a data acquiring unit 402.
The data reading unit 401 is configured to control a server to start a recording thread, read audio data stored in an audio hardware abstraction layer, and store the audio data in a channel of the recording thread.
The data acquisition unit 402 is configured to control the client to acquire the audio data from the channel of the recording thread.
Example nine:
Fig. 10 is a schematic diagram of a terminal device provided in a ninth embodiment of the present invention. As shown in fig. 10, the terminal device 9 of this embodiment includes: a processor 90, a memory 91 and a computer program 92 stored in the memory 91 and executable on the processor 90. The processor 90, when executing the computer program 92, implements the steps of the audio data recording method embodiments described above, such as steps S101 to S105 shown in fig. 1. Alternatively, the processor 90, when executing the computer program 92, performs the functions of the modules/units of the system embodiments described above, such as the functions of the modules 101-105 shown in fig. 6.
Illustratively, the computer program 92 may be partitioned into one or more modules/units that are stored in the memory 91 and executed by the processor 90 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution of the computer program 92 in the terminal device 9. For example, the computer program 92 may be divided into an identification module, a creation module, a first start module, a second start module, and a recording module, each of which functions as follows:
The identification module is used for identifying a recording object according to the recording instruction of the client;
The creation module is used for creating a copy playing thread if the recording object is the audio data played by the client, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output;
The first starting module is used for controlling the server to start the first playing thread to play the audio data through the standard output equipment;
The second starting module is used for controlling the server to start the second playing thread and storing the played audio data into the audio hardware abstraction layer;
The recording module is used for starting a recording thread through the server and controlling the client to acquire the audio data stored in the audio hardware abstraction layer.
The terminal device 9 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, a processor 90 and a memory 91. It will be appreciated by those skilled in the art that fig. 10 is merely an example of the terminal device 9 and does not constitute a limitation of the terminal device 9, which may include more or fewer components than illustrated, combine certain components, or use different components; for example, the terminal device may further include input-output devices, network access devices, buses, etc.
The processor 90 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device 9. Further, the memory 91 may also include both an internal storage unit and an external storage device of the terminal device 9. The memory 91 is used for storing the computer program and other programs and data required by the terminal device. The memory 91 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the system is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the wireless terminal may refer to the corresponding process in the foregoing method embodiment, which is not described herein.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed system/terminal device and method may be implemented in other manners. For example, the system/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, systems or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or system capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content included in the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in certain jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. A method for recording audio data, comprising:
identifying a recording object according to a recording instruction of the client;
If the recorded object is audio data played by a client, creating a copy playing thread, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output;
the control server starts the first playing thread and plays the audio data through the standard output equipment;
The control server starts the second playing thread and stores the played audio data into an audio hardware abstraction layer;
Starting a recording thread through a server, controlling a client to acquire audio data stored in an audio hardware abstraction layer, and returning the audio data to a recording application program;
The control server starts the second playing thread, stores the played audio data into an audio hardware abstraction layer, and comprises the following steps: the control server starts the second playing thread and writes the audio data into a channel of the second playing thread through the audio playing client;
creating a buffer area at an audio hardware abstraction layer;
Transmitting the audio data to a buffer area of the audio hardware abstraction layer through the second playing thread; and starting the recording thread RecordThread to read the audio data from the buffer storage space of the audio hardware abstraction layer supporting the audio recording output, returning the audio data to the AudioRecord instance of the Native layer, and returning the audio data to the recording application program through the AudioRecord instance.
2. The method of claim 1, wherein the control server initiates the first playback thread to play the audio data via the standard output device, comprising:
The control server starts the first playing thread and writes the audio data into a channel of the first playing thread through the audio playing client;
Transmitting the audio data to an audio hardware abstraction layer through a channel of the first playing thread;
And controlling the audio hardware abstraction layer to write the audio data into the standard output equipment for playing.
3. The method of claim 2, wherein the controlling the client to obtain the audio data stored in the audio hardware abstraction layer by the server starting the recording thread comprises:
The control server starts a recording thread, reads audio data stored in an audio hardware abstraction layer, and stores the audio data in a channel of the recording thread;
and the control client acquires the audio data from the channel of the recording thread.
4. The method as recited in claim 1, further comprising:
if the recorded object is the audio data input by the standard input device, the control server starts a recording thread, acquires the audio data from the standard input device through the hardware abstraction layer, and transmits the audio data to a channel of the recording thread;
and the control client acquires the audio data from the channel of the recording thread.
5. An audio data recording system, comprising:
The identification module is used for identifying a recording object according to the recording instruction of the client;
The creation module is used for creating a copy playing thread if the recording object is the audio data played by the client, wherein the copy playing thread comprises a first playing thread and a second playing thread, the first playing thread is a server playing thread supporting standard output equipment, and the second playing thread is a server playing thread supporting the internal recording output;
The first starting module is used for controlling the server to start the first playing thread to play the audio data through the standard output equipment;
The second starting module is used for controlling the server to start the second playing thread and storing the played audio data into the audio hardware abstraction layer;
The recording module is used for starting a recording thread through the server, controlling the client to acquire audio data stored in the audio hardware abstraction layer, and returning the audio data to the recording application program;
the second starting module comprises:
The second starting unit is used for controlling the server to start the second playing thread and writing the audio data into a channel of the second playing thread through the audio playing client;
the buffer creation unit is used for creating a buffer area in the audio hardware abstraction layer;
The buffer unit is used for transmitting the audio data to a buffer area of the audio hardware abstraction layer through the second playing thread; and starting the recording thread RecordThread to read the audio data from the buffer storage space of the audio hardware abstraction layer supporting the audio recording output, returning the audio data to the AudioRecord instance of the Native layer, and returning the audio data to the recording application program through the AudioRecord instance.
6. The audio data recording system of claim 5, wherein the first start-up module comprises:
the first starting unit is used for controlling the server to start the first playing thread and writing the audio data into a channel of the first playing thread through the audio playing client;
the transmission unit is used for transmitting the audio data to an audio hardware abstraction layer through the channel of the first playing thread;
And the playing unit is used for controlling the audio hardware abstraction layer to write the audio data into the standard output equipment for playing.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when the computer program is executed.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 4.
CN201811607839.6A 2018-12-27 2018-12-27 Audio data recording method, system and terminal equipment Active CN111381954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811607839.6A CN111381954B (en) 2018-12-27 2018-12-27 Audio data recording method, system and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811607839.6A CN111381954B (en) 2018-12-27 2018-12-27 Audio data recording method, system and terminal equipment

Publications (2)

Publication Number Publication Date
CN111381954A CN111381954A (en) 2020-07-07
CN111381954B true CN111381954B (en) 2024-05-03

Family

ID=71222349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811607839.6A Active CN111381954B (en) 2018-12-27 2018-12-27 Audio data recording method, system and terminal equipment

Country Status (1)

Country Link
CN (1) CN111381954B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112423076B (en) * 2020-11-18 2023-05-05 湖南嘉加智能科技有限公司 Audio screen-throwing synchronous control method, equipment and computer readable storage medium
CN112615853B (en) * 2020-12-16 2023-01-10 瑞芯微电子股份有限公司 Android device audio data access method
CN113038181B (en) * 2021-03-15 2021-12-21 中国科学院计算机网络信息中心 Start-stop audio fault tolerance method and system in RTMP audio and video plug flow under Android platform
CN115134452A (en) * 2021-03-26 2022-09-30 成都鼎桥通信技术有限公司 Calling method and device of microphone resources
CN113299321B (en) * 2021-05-11 2022-07-29 南京市德赛西威汽车电子有限公司 Multi-user audio sharing method for vehicle-mounted entertainment system, vehicle-mounted system and storage medium
CN113423006B (en) * 2021-05-31 2022-07-15 惠州华阳通用电子有限公司 Multi-audio-stream audio mixing playing method and system based on main and auxiliary sound channels
CN114944171A (en) * 2022-06-06 2022-08-26 北京字跳网络技术有限公司 Audio recording method and device and electronic equipment
CN114879930B (en) * 2022-07-07 2022-09-06 北京麟卓信息科技有限公司 Audio output optimization method for android compatible environment
CN115396723A (en) * 2022-08-23 2022-11-25 北京小米移动软件有限公司 Screen recording method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889068A (en) * 2005-06-30 2007-01-03 腾讯科技(深圳)有限公司 Method for realizing audio-frequency and video frequency synchronization
CN102420010A (en) * 2011-10-17 2012-04-18 苏州阔地网络科技有限公司 Audio recording method and system thereof
CN106708612A (en) * 2015-11-18 2017-05-24 中兴通讯股份有限公司 Audio recording realization method and terminal
WO2017177873A1 (en) * 2016-04-15 2017-10-19 中兴通讯股份有限公司 System and method for synchronous audio recording and playing, and storage medium
CN107277412A (en) * 2017-07-24 2017-10-20 腾讯科技(深圳)有限公司 Video recording method and device, graphics processor and electronic equipment
WO2018120545A1 (en) * 2016-12-30 2018-07-05 华为技术有限公司 Method and device for testing latency of audio loop

Also Published As

Publication number Publication date
CN111381954A (en) 2020-07-07

Similar Documents

Publication Publication Date Title
CN111381954B (en) Audio data recording method, system and terminal equipment
US8401534B2 (en) Mobile communication terminal and method for controlling the same
US20200259879A1 (en) Interaction method and device for mobile terminal and cloud platform of unmanned aerial vehicle
US20130117248A1 (en) Adaptive media file rewind
CN110704202B (en) Multimedia recording data sharing method and terminal equipment
CN112954244A (en) Method, device and equipment for realizing storage of monitoring video and storage medium
US20160019934A1 (en) Continuing media playback after bookmarking
JP6356248B2 (en) Verify that certain information is transmitted by the application
CN105578224A (en) Multimedia data acquisition method, device, smart television and set-top box
US8726101B2 (en) Apparatus and method for tracing memory access information
CN103617135B (en) The method and device of digital independent in a kind of storage device
CN105005612A (en) Music file acquisition method and mobile terminal
KR100834543B1 (en) Method and apparatus for sharing a live presentation file over a network
CN116095397A (en) Live broadcast method, live broadcast device, electronic equipment and storage medium
KR20100050098A (en) Image processing apparatus and control method thereof
CN115248657A (en) Audio processing method, device and computer readable storage medium
US10257633B1 (en) Sound-reproducing method and sound-reproducing apparatus
CN106328174A (en) Method and device for processing recording data
CN104679697B (en) File reading, device and CD-ROM drive driving plate and CD-ROM equipment
US9681088B1 (en) System and methods for movie digital container augmented with post-processing metadata
CN111354383A (en) Audio defect positioning method and device and terminal equipment
KR101754714B1 (en) Access management method of memory card and apparatus thereof
KR20160119037A (en) Access management method of memory card, terminal device and service system thereof
CN107968728B (en) Method and device for determining rate
US20230008725A1 (en) Information processing apparatus and file recording method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant