CN113556421B - Recording data processing method, recording data processing device and storage medium - Google Patents


Info

Publication number: CN113556421B
Application number: CN202010332855.XA
Authority: CN (China)
Prior art keywords: call data, uplink, downlink, file, data
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113556421A (en)
Inventor: 郭建南
Current Assignee: Chengdu TD Tech Ltd
Original Assignee: Chengdu TD Tech Ltd
Events: application filed by Chengdu TD Tech Ltd; priority to CN202010332855.XA; publication of CN113556421A; application granted; publication of CN113556421B

Classifications

    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04M TELEPHONIC COMMUNICATION
          • H04M1/00 Substation equipment, e.g. for use by subscribers
            • H04M1/64 Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
              • H04M1/642 Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations storing speech in digital form
              • H04M1/65 Recording arrangements for recording a message from the calling party
                • H04M1/6505 Recording arrangements for recording a message from the calling party storing speech in digital form
                • H04M1/656 Recording arrangements for recording a message from the calling party for recording conversations
    • G PHYSICS
      • G11 INFORMATION STORAGE
        • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
          • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
            • G11B20/10 Digital recording or reproducing
              • G11B20/10527 Audio or video recording; Data buffering arrangements
                • G11B2020/10537 Audio or video recording
                  • G11B2020/10546 Audio or video recording specifically adapted for audio data

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

The embodiments of the present application provide a recording data processing method, a recording data processing device, and a storage medium. The method includes: collecting voice call data of the terminal device in response to a recording instruction input by a user, where the recording instruction instructs recording of the voice call data currently received by the terminal device; obtaining uplink call data and downlink call data from the voice call data; and storing the uplink call data in an uplink file and the downlink call data in a downlink file. Because the uplink call data and the downlink call data are stored separately, a user who plays back the call recording can choose to play either the uplink call data in the uplink file or the downlink call data in the downlink file, thereby meeting the user's individual needs.

Description

Recording data processing method, recording data processing device and storage medium
Technical Field
The embodiment of the application relates to the technical field of sound recording, in particular to a method and a device for processing sound recording data and a storage medium.
Background
As the security supervision of enterprises and public institutions tightens and internal control requirements rise, calls between employees and clients increasingly need to be recorded, so the demand for call recording grows day by day.
In the existing telephone recording mode, when a user talks with a contact through a mobile terminal, the user triggers a recording icon on the mobile terminal device, and the recording is made on the mobile terminal device.
However, when the user plays back the recording data stored in the mobile terminal, the uplink and downlink call data are played simultaneously; the uplink call data or the downlink call data cannot be played separately.
Disclosure of Invention
The embodiments of the present application provide a recording data processing method, a recording data processing device, and a storage medium, to solve the problem that uplink call data or downlink call data in existing call recordings cannot be played independently.
In a first aspect, an embodiment of the present application provides a method for processing recorded sound data, including:
collecting voice call data of the terminal device in response to a recording instruction input by a user, where the recording instruction instructs recording of the voice call data currently received by the terminal device;
acquiring uplink call data and downlink call data in the voice call data;
and storing the uplink call data into an uplink file, and storing the downlink call data into a downlink file.
In a possible implementation manner of the first aspect, the storing the uplink call data in an uplink file and the storing the downlink call data in a downlink file includes:
and calling a first read-write thread to store the uplink call data into the uplink file, and calling a second read-write thread to store the downlink call data into the downlink file.
In a possible implementation manner of the first aspect, before the invoking of the first read-write thread stores the uplink call data in the uplink file and the invoking of the second read-write thread stores the downlink call data in the downlink file, the method further includes:
respectively encoding and compressing the uplink call data and the downlink call data;
the step of calling a first read-write thread to store the uplink call data into the uplink file, and calling a second read-write thread to store the downlink call data into the downlink file includes:
and calling the first read-write thread to store the uplink call data after the coding compression into the uplink file, and calling the second read-write thread to store the downlink call data after the coding compression into the downlink file.
In a possible implementation manner of the first aspect, the separately encoding and compressing the uplink call data and the downlink call data includes:
encoding and compressing the uplink call data by using a first encoder;
encoding and compressing the downlink call data by using a second encoder;
the step of calling the first read-write thread to store the uplink call data after being coded and compressed into the uplink file, and the step of calling the second read-write thread to store the downlink call data after being coded and compressed into the downlink file includes:
calling the first read-write thread to read the uplink call data after the coding compression from the first coder and storing the uplink call data in the uplink file;
and calling the second read-write thread to read the encoded and compressed downlink call data from the second encoder and store the downlink call data in the downlink file.
In a possible implementation manner of the first aspect, before the acquiring the uplink call data and the downlink call data in the voice call data, the method further includes:
caching the collected voice call data in a third cache queue, where uplink call data and downlink call data in the voice call data are cached in the third cache queue in an interleaved manner;
the acquiring uplink call data and downlink call data in the voice call data includes:
acquiring the uplink call data from the third buffer queue according to the condition that each uplink call data is located between two downlink call data;
and acquiring the downlink call data from the third buffer queue according to the condition that each downlink call data is located between two uplink call data.
In a possible implementation manner of the first aspect, the acquiring voice call data of the terminal device in response to a recording instruction input by a user includes:
receiving a recording instruction input by the user;
responding to a recording instruction input by a user, and judging whether the terminal equipment is in a voice call state currently;
and if the terminal equipment is determined to be in the voice call state currently, acquiring voice call data received by the terminal equipment.
In a possible implementation manner of the first aspect, the method further includes:
receiving a recording viewing instruction input by the user, where the recording viewing instruction instructs viewing of a recording file;
and responding to the recording viewing instruction, and displaying the recording file, wherein the recording file comprises an uplink file and a downlink file.
In a second aspect, an embodiment of the present application provides an apparatus for processing recorded sound data, including:
a collecting module, configured to collect voice call data of the terminal device in response to a recording instruction input by a user, where the recording instruction instructs recording of the voice call data currently received by the terminal device;
an obtaining module, configured to obtain uplink call data and downlink call data in the voice call data;
and a storage module, configured to store the uplink call data in an uplink file and store the downlink call data in a downlink file.
In a possible implementation manner of the second aspect, the storage module is specifically configured to invoke a first read-write thread to store the uplink call data in the uplink file, and invoke a second read-write thread to store the downlink call data in the downlink file.
In a possible implementation manner of the second aspect, the apparatus further includes:
the coding module is used for respectively coding and compressing the uplink call data and the downlink call data;
the storage module is specifically configured to call the first read-write thread to store the uplink call data after being encoded and compressed into the uplink file, and call the second read-write thread to store the downlink call data after being encoded and compressed into the downlink file.
In a possible implementation manner of the second aspect, the encoding module is configured to encode and compress the uplink call data by using a first encoder; encoding and compressing the downlink call data by using a second encoder;
the storage module is specifically configured to invoke the first read-write thread to read the uplink call data after being encoded and compressed from the first encoder, and store the uplink call data in the uplink file; and calling the second read-write thread to read the encoded and compressed downlink call data from the second encoder and store the downlink call data in the downlink file.
In a possible implementation manner of the second aspect, the storage module is further configured to cache the collected voice call data in a third cache queue, where uplink call data and downlink call data in the voice call data are cached in the third cache queue in an interleaved manner;
the obtaining module is specifically configured to acquire the uplink call data from the third buffer queue according to the condition that each uplink call data is located between two downlink call data, and acquire the downlink call data from the third buffer queue according to the condition that each downlink call data is located between two uplink call data.
In a possible implementation manner of the second aspect, the apparatus further includes:
a receiving module, configured to receive a recording viewing instruction input by the user, where the recording viewing instruction instructs viewing of a recording file;
and a display module, configured to display the recording file in response to the recording viewing instruction, where the recording file includes an uplink file and a downlink file.
In a possible implementation manner of the second aspect, the receiving module is further configured to receive a recording instruction input by the user; the collecting module is specifically configured to determine, in response to the recording instruction input by the user, whether the terminal device is currently in a voice call state, and if it is determined that the terminal device is currently in the voice call state, collect the voice call data received by the terminal device.
In a third aspect, an embodiment of the present application provides a terminal device, including a processor and a memory;
the memory for storing a computer program;
the processor is configured to execute the computer program to implement the sound recording data processing method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes computer instructions, and when the instructions are executed by a computer, the computer implements the sound recording data processing method according to any one of the first aspect.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a computer program, the computer program is stored in a readable storage medium, the computer program can be read by at least one processor of a computer from the readable storage medium, and the at least one processor executes the computer program to make the computer implement the sound recording data processing method according to any one of the first aspects.
According to the recording data processing method, the recording data processing device, and the storage medium provided by the embodiments of the present application, voice call data of the terminal device is collected in response to a recording instruction input by a user, where the recording instruction instructs recording of the voice call data currently received by the terminal device; uplink call data and downlink call data are obtained from the voice call data; and the uplink call data is stored in an uplink file while the downlink call data is stored in a downlink file. Because the uplink call data and the downlink call data are stored separately, a user who plays back the call recording can choose to play either the uplink call data in the uplink file or the downlink call data in the downlink file, thereby meeting the user's individual needs.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a terminal device (e.g., a mobile phone) according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a current call interface according to an embodiment of the present application;
fig. 5A is a schematic diagram of a third buffer queue according to an embodiment of the present application;
fig. 5B is another schematic diagram of a third buffer queue according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a user interface of a sound recording file according to an embodiment of the present application;
FIG. 7 is a schematic flow chart illustrating a recording data processing method according to an embodiment of the present application;
FIG. 8A is a schematic flow chart illustrating a recording data processing method according to an embodiment of the present disclosure;
FIG. 8B is a schematic block diagram of a recording data processing process according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a process of a user viewing a recording according to an embodiment of the present application;
FIG. 10 illustrates a schematic diagram of a user interaction with a user interface of a terminal device;
FIG. 11 is a schematic diagram of a user interface for a terminal device to display query results;
FIG. 12 is a schematic structural diagram of an apparatus for processing recorded sound data according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an apparatus for processing recorded data according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an apparatus for processing recorded sound data according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
The above figures show specific embodiments of the present application, which are described in more detail below. The drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the concepts of the application to those skilled in the art with reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Fig. 1 is a schematic view of an application scenario related to an embodiment of the present application, where the application scenario includes a network device and at least two terminal devices, and a call can be performed between the at least two terminal devices.
A network device is a device in a wireless network, for example, a Radio Access Network (RAN) node that accesses a terminal to the wireless network. Currently, some examples of RAN nodes are: a gNB, a Transmission Reception Point (TRP), an evolved Node B (eNB), a Radio Network Controller (RNC), a Node B (NB), a Base Station Controller (BSC), a Base Transceiver Station (BTS), a home base station (e.g., home evolved Node B, or home Node B, HNB), a Base Band Unit (BBU), or a wireless fidelity (Wi-Fi) Access Point (AP), etc. In one network structure, a network device may include a Centralized Unit (CU) node, or a Distributed Unit (DU) node, or a RAN device including a CU node and a DU node, which is not limited herein.
Terminal device: a terminal device may be a wireless terminal device or a wired terminal device. A wireless terminal device is a device with a wireless transceiving function; it can be deployed on land (indoor or outdoor, handheld or vehicle-mounted), on the water surface (for example, on a ship), or in the air (for example, on airplanes, balloons, or satellites). The terminal device may be a mobile phone, a tablet computer (Pad), a computer with a wireless transceiving function, a Virtual Reality (VR) terminal device, an Augmented Reality (AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self driving, a wireless terminal device in remote medical treatment, a wireless terminal device in a smart grid, a wireless terminal device in transportation safety, a wireless terminal device in a smart city, a wireless terminal device in a smart home, and the like, which is not limited herein. It can be understood that, in the embodiments of the present application, a terminal device may also be referred to as User Equipment (UE).
Fig. 2 is a schematic structural diagram of a terminal device (e.g., a mobile phone) according to an embodiment of the present disclosure.
The terminal device may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. It is to be understood that the illustrated structure of the present embodiment does not constitute a specific limitation to the terminal device. In other embodiments of the present application, a terminal device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), among others. The different processing units may be separate devices or may be integrated into one or more processors. In some embodiments, the terminal device may also include one or more processors 110. The controller can be a neural center and a command center of the terminal equipment. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution. A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. This avoids repeated accesses, reduces the latency of the processor 110, and thus increases the efficiency of the terminal equipment system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose-input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal device, and may also be used to transmit data between the terminal device and the peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone.
It should be understood that the interface connection relationship between the modules in the embodiment of the present invention is only an exemplary illustration, and does not form a structural limitation on the terminal device. In other embodiments of the present application, the terminal device may also adopt different interface connection manners or a combination of multiple interface connection manners in the foregoing embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the terminal device. The charging management module 140 may also supply power to the terminal device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the terminal device can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in a terminal device may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied on the terminal device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to a terminal device, including Wireless Local Area Network (WLAN), Bluetooth, Global Navigation Satellite System (GNSS), Frequency Modulation (FM), NFC, infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the terminal device's antenna 1 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the terminal device can communicate with the network and other devices through wireless communication techniques. The wireless communication technologies may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou satellite navigation system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The terminal device can realize the display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute instructions to generate or change display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like. In some embodiments, the terminal device may include 1 or N display screens 194, N being a positive integer greater than 1.
The terminal device may implement a photographing function through the ISP, one or more cameras 193, a video codec, a GPU, one or more display screens 194, and an application processor, etc.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize the intelligent cognition and other applications of the terminal equipment, such as: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the terminal device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, data files such as music, photos, videos, and the like are saved in the external memory card.
Internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may execute the above-mentioned instructions stored in the internal memory 121, so as to enable the terminal device to execute the recording data processing method provided in some embodiments of the present application, and various functional applications, data processing, and the like. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store an operating system, and may also store one or more application programs (e.g., gallery, contacts, etc.). The data storage area can store data (such as photos, contacts, and the like) created during use of the terminal device. In addition, the internal memory 121 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like. In some embodiments, the processor 110 may cause the terminal device to execute the recording data processing method provided in the embodiments of the present application, and various functional applications and data processing, by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110.
The terminal device may implement an audio function through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playing, recording, etc. The audio module 170 is configured to convert digital audio information into an analog audio signal for output, and also configured to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also called a "horn", is used to convert the audio electrical signal into a sound signal. The terminal device can listen to music through the speaker 170A, or listen to a handsfree call. The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the terminal device answers a call or voice information, it is possible to answer a voice by bringing the receiver 170B close to the human ear. The microphone 170C, also referred to as a "microphone," is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can input a voice signal to the microphone 170C by speaking near the microphone 170C through the mouth. The terminal device may be provided with at least one microphone 170C. In other embodiments, the terminal device may be provided with two microphones 170C, so as to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the terminal device may further include three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions. The earphone interface 170D is used to connect a wired earphone. The headset interface 170D may be the USB interface 130, may be an open mobile electronic device platform (OMTP) standard interface of 3.5mm, and may also be a CTIA (cellular telecommunications industry association) standard interface.
The sensors 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The pressure sensor 180A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The pressure sensor 180A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, and the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes. The terminal device determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the terminal device detects the intensity of the touch operation according to the pressure sensor 180A. The terminal device may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, the touch operations that are applied to the same touch position but different touch operation intensities may correspond to different operation instructions. For example: and when the touch operation with the touch operation intensity smaller than the first pressure threshold value acts on the short message application icon, executing an instruction for viewing the short message. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 180B may be used to determine the motion attitude of the terminal device. In some embodiments, the angular velocity of the terminal device about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. Illustratively, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the terminal device, calculates the distance to be compensated for by the lens module according to the shake angle, and enables the lens to counteract the shake of the terminal device through reverse movement, thereby achieving anti-shake. The gyro sensor 180B may also be used for navigation, body sensing game scenes, and the like.
The acceleration sensor 180E can detect the magnitude of acceleration of the terminal device in various directions (generally, three axes). When the terminal equipment is static, the size and the direction of gravity can be detected. The method can also be used for identifying the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and the like.
A distance sensor 180F for measuring a distance. The terminal device may measure the distance by infrared or laser. In some embodiments, the scene is photographed and the terminal device may range using the distance sensor 180F to achieve fast focus.
The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The terminal device emits infrared light to the outside through the light emitting diode. The terminal device detects infrared reflected light from a nearby object using a photodiode. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device. When insufficient reflected light is detected, the terminal device may determine that there is no object near the terminal device. Using the proximity light sensor 180G, the terminal device can detect that the user is holding the terminal device close to the ear during a call, so that the screen is automatically turned off to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. The terminal device may adaptively adjust the brightness of the display screen 194 according to the perceived ambient light level. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the terminal device is in a pocket, so as to prevent accidental touch.
A fingerprint sensor 180H (also referred to as a fingerprint recognizer) for collecting a fingerprint. The terminal equipment can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering and the like. Further description of fingerprint sensors may be found in international patent application PCT/CN2017/082773 entitled "method and electronic device for handling notifications", which is incorporated herein by reference in its entirety.
The touch sensor 180K may also be referred to as a touch panel or touch-sensitive surface. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form a touch screen. The touch sensor 180K is used to detect a touch operation applied to it or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided via the display screen 194. In other embodiments, the touch sensor 180K may be disposed on the surface of the terminal device at a different position from the display screen 194.
The bone conduction sensor 180M can acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire a vibration signal of the human vocal part vibrating the bone mass. The bone conduction sensor 180M may also contact the human pulse to receive the blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, so as to realize the heart rate detection function.
The keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The terminal device may receive a key input, and generate a key signal input related to user setting and function control of the terminal device.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be attached to and detached from the terminal device by being inserted into the SIM card interface 195 or pulled out of the SIM card interface 195. The terminal device can support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a SIM card, etc. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards. The SIM card interface 195 is also compatible with external memory cards. The terminal device interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal device employs an eSIM, namely an embedded SIM card. The eSIM card can be embedded in the terminal device and cannot be separated from the terminal device.
The technical scheme of the embodiment of the application is suitable for terminal equipment with a recording function, such as terminal equipment of an Android system.
Currently, call recording can be performed in an Android system, but in the current call recording scheme, Pulse Code Modulation (PCM) data obtained from the Hardware Abstraction Layer (HAL) is encoded and compressed by an encoder and then written into a file through a read-write unit (writer). That is, the voice data of the calling party and the called party are written into the same file and are not separated. Therefore, when the user chooses to play the recorded data, the uplink or downlink call data cannot be played independently.
In order to solve the technical problem, in the embodiment of the application, the uplink call data and the downlink call data in the voice call data received by the terminal device are obtained, the uplink call data is stored in the uplink file, and the downlink call data is stored in the downlink file. The terminal device separates the uplink and downlink call data and stores the separated uplink and downlink call data in the two recording files respectively, so that when a user selects to play the call record, the user can select to play the uplink call data in the uplink file or select to play the downlink call data in the downlink file.
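Purely as an illustration of this separation (and not as the implementation of this application), the Java sketch below routes each captured PCM frame to a per-direction queue and appends it to a per-direction file. The class, method, and file names are assumptions, raw PCM is written for brevity, and the encoding and the dedicated read-write threads are described later in this specification.

```java
// Illustrative sketch only: tag each captured PCM frame by direction and
// append it to its own file. Names and file handling are assumptions.
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class DualFileRecorder {
    private static final byte[] STOP = new byte[0]; // sentinel marking end of recording

    private final BlockingQueue<byte[]> uplink = new LinkedBlockingQueue<>();
    private final BlockingQueue<byte[]> downlink = new LinkedBlockingQueue<>();

    /** Start one writer per direction, each with its own output file. */
    public void start(String uplinkPath, String downlinkPath) {
        new Thread(() -> drain(uplink, uplinkPath), "uplink-writer").start();
        new Thread(() -> drain(downlink, downlinkPath), "downlink-writer").start();
    }

    /** Called for every captured PCM frame, tagged with its direction. */
    public void onFrame(byte[] pcmFrame, boolean isDownlink) {
        (isDownlink ? downlink : uplink).offer(pcmFrame);
    }

    /** Signal both writer threads to finish. */
    public void stop() {
        uplink.offer(STOP);
        downlink.offer(STOP);
    }

    private void drain(BlockingQueue<byte[]> queue, String path) {
        try (FileOutputStream out = new FileOutputStream(path)) {
            while (true) {
                byte[] frame = queue.take();
                if (frame == STOP) {
                    break;        // recording finished
                }
                out.write(frame); // raw PCM for brevity; encoding is discussed later
            }
        } catch (IOException | InterruptedException e) {
            // a real recorder would surface this error to the caller
        }
    }
}
```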
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that, for the convenience of clearly describing the technical solutions of the embodiments of the present application, the words "first", "second", and the like are used in the embodiments of the present application to distinguish identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit quantity or execution order, nor do they denote relative importance.
Fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application. As shown in fig. 3, the method of the embodiment of the present application includes:
s101, responding to a recording instruction input by a user, and collecting voice call data of the terminal equipment.
The recording instruction is used for indicating the recording of the voice call data currently received by the terminal equipment.
The execution subject of the embodiments of the present application is a device having a recording data processing function, for example, a recording data processing device. The recording data processing device may be a terminal device, or may be a part of the terminal device, for example, a processor in the terminal device. In the embodiments of the present application, the execution subject is described by taking a terminal device as an example.
As shown in fig. 1, terminal device 1 and terminal device 2 perform a voice call through a network device. Specifically, when terminal device 1 and terminal device 2 are in a call, terminal device 1 sends the first voice data intended for terminal device 2 to the network device through the uplink between terminal device 1 and the network device. The network device sends the first voice data to terminal device 2 through the downlink between the network device and terminal device 2. Similarly, terminal device 2 sends the second voice data intended for terminal device 1 to the network device through the uplink between terminal device 2 and the network device. The network device sends the second voice data to terminal device 1 through the downlink between the network device and terminal device 1, so that the voice call between terminal device 1 and terminal device 2 is realized.
In the embodiments of the present application, voice data transmitted via an uplink is referred to as uplink call data, and voice data transmitted via a downlink is referred to as downlink call data. For example, taking the terminal device 1 as an example, the terminal device 1 records first voice data transmitted through an uplink with the network device as uplink call data, and the terminal device 1 records second voice data received from a downlink with the network device as downlink call data. Taking the terminal device 2 as an example, the terminal device 2 records the second voice data transmitted through the uplink with the network device as uplink call data, and the terminal device 2 records the first voice data received from the downlink with the network device as downlink call data.
For the sake of clarity, the recording data processing method according to the embodiment of the present application is described by taking the terminal device 1 side as an example, and the terminal device 2 side may refer to the process of the terminal device 1 side.
During a call, the terminal device receives a recording instruction input by the user, where the recording instruction instructs recording of the voice call data currently received by the terminal device.
For example, as shown in fig. 4, taking the terminal device as a smartphone as an example, icons such as recording, hang-up, mute, and keyboard are displayed on the current call interface shown in fig. 4, along with information such as the phone number of the opposite terminal (for example, 12311231122) and the current call duration (30 seconds). When recording is needed, the user inputs a recording instruction to the terminal device by triggering the recording icon in fig. 4. When the terminal device detects that the user has triggered the recording icon in fig. 4, the terminal device starts to collect the voice call data currently received by the terminal device.
S102, obtaining uplink call data and downlink call data in the voice call data.
As can be seen from fig. 1, the uplink call data is voice data that is sent by the terminal device to the opposite end through the uplink with the network device, and the downlink call data is voice data that is received by the terminal device from the downlink with the network device. In this way, the terminal device can obtain uplink call data on the uplink and obtain downlink call data on the downlink from the collected voice call data.
In some embodiments, before S102, the method further includes step A:
Step A, caching the collected voice call data in a third cache queue, where uplink call data and downlink call data in the voice call data are cached in the third cache queue in an interleaved manner.
Specifically, fig. 5A and 5B are exemplary diagrams of a third buffer queue according to an embodiment of the present application. The third buffer queue includes a plurality of buffer units, and uplink call data and downlink call data are buffered in the third buffer queue in an interleaved manner. For example, as shown in fig. 5A, according to a predefined rule, the uplink call data transmitted on the uplink is buffered in the buffer units with odd indices in the third buffer queue, and the downlink call data received on the downlink is buffered in the buffer units with even indices in the third buffer queue. Alternatively, as shown in fig. 5B, according to a predefined rule, the uplink call data transmitted on the uplink is buffered in the buffer units with even indices in the third buffer queue, and the downlink call data received on the downlink is buffered in the buffer units with odd indices in the third buffer queue.
At this time, the step of acquiring the uplink call data and the downlink call data in the voice call data in S102 may include:
S1021, acquiring the uplink call data from the third buffer queue according to the fact that each uplink call data is located between two downlink call data; and acquiring the downlink call data from the third buffer queue according to the fact that each downlink call data is located between two uplink call data.
Optionally, in the embodiment of the present application, as shown in fig. 5A and 5B, the uplink call data obtained from the third buffer queue may be buffered in a first buffer queue, and the downlink call data obtained from the third buffer queue may be buffered in a second buffer queue. In this way, the uplink call data and the downlink call data are cached in different cache queues, which facilitates subsequent read operations on the uplink call data and the downlink call data.
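As an illustration of this de-interleaving step, the Java sketch below splits an interleaved (third) buffer queue into separate uplink (first) and downlink (second) queues, assuming the fig. 5A convention with zero-based indexing (downlink frames at even indices, uplink frames at odd indices); the container types and method names are assumptions, not part of this application.

```java
// Sketch: split the interleaved third buffer queue into the first (uplink)
// and second (downlink) buffer queues, assuming the fig. 5A layout with
// zero-based indexing: even index -> downlink frame, odd index -> uplink frame.
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public final class BufferQueueSplitter {
    public final Queue<byte[]> uplinkQueue = new ArrayDeque<>();   // "first" buffer queue
    public final Queue<byte[]> downlinkQueue = new ArrayDeque<>(); // "second" buffer queue

    /** thirdQueue holds the captured frames in arrival order, directions interleaved. */
    public void split(List<byte[]> thirdQueue) {
        for (int i = 0; i < thirdQueue.size(); i++) {
            if (i % 2 == 0) {
                downlinkQueue.add(thirdQueue.get(i)); // even index: downlink call data
            } else {
                uplinkQueue.add(thirdQueue.get(i));   // odd index: uplink call data
            }
        }
    }
}
```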
S103, storing the uplink call data into an uplink file, and storing the downlink call data into a downlink file.
Specifically, according to the above steps, after obtaining uplink call data and downlink call data from the collected voice call data, the uplink call data is stored in the uplink file, and the downlink call data is stored in the downlink file.
Fig. 6 is a schematic view of a user interface of a recording file according to an embodiment of the present application. As shown in fig. 6, after the terminal device stops recording, the recording includes two files, one uplink file and one downlink file. The user can play the uplink call data by clicking the uplink file and play the downlink call data by clicking the downlink file, which realizes the separate storage of the uplink call data and the downlink call data.
That is, in the embodiment of the present application, the uplink call data and the downlink call data are respectively stored in two different files, so that when a user selects to play a call record, the uplink call data in the uplink file can be selected to be played, or the downlink call data in the downlink file can be selected to be played.
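Playing whichever file the user selects can be done with the standard Android MediaPlayer API; the minimal sketch below is only an illustration, and the file paths and format are assumptions.

```java
// Minimal playback sketch: the user's choice between uplink and downlink is
// simply the choice of which file path is handed to the player.
import android.media.MediaPlayer;
import java.io.IOException;

public final class RecordingPlayback {
    public static MediaPlayer play(String path) throws IOException {
        MediaPlayer player = new MediaPlayer();
        player.setDataSource(path); // e.g. ".../uplink.amr" or ".../downlink.amr"
        player.prepare();           // synchronous prepare is fine for a local file
        player.start();
        return player;              // the caller should call release() when done
    }
}
```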
Optionally, the file information of the uplink file includes: the call date, the call duration, and the user identification information (for example, the mobile phone number of the user) of the home terminal device (for example, terminal device 1). Similarly, the file information of the downlink file includes: the call date, the call duration, and the user identification information (for example, the mobile phone number) of the opposite terminal device (for example, terminal device 2).
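One possible way to carry this file information is to encode it in the file names; the naming pattern in the sketch below is an assumption made for illustration and is not specified by this application.

```java
// Illustrative naming only: embed the call date, call duration, and the
// relevant party's number in each file name. The pattern is an assumption.
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public final class RecordingFileNames {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss");

    /** Uplink file: tagged with the home terminal device's number. */
    public static String uplinkName(LocalDateTime callDate, long durationSec, String localNumber) {
        return String.format("uplink_%s_%ds_%s.amr", FMT.format(callDate), durationSec, localNumber);
    }

    /** Downlink file: tagged with the opposite terminal device's number. */
    public static String downlinkName(LocalDateTime callDate, long durationSec, String peerNumber) {
        return String.format("downlink_%s_%ds_%s.amr", FMT.format(callDate), durationSec, peerNumber);
    }
}
```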
In some implementation manners of the embodiment of the present application, the storing the uplink call data in the uplink file and the storing the downlink call data in the downlink file in S103 may include:
and S1031, calling the first read-write thread to store the uplink call data into the uplink file, and calling the second read-write thread to store the downlink call data into the downlink file.
Specifically, before the terminal device stores the uplink call data into the uplink file and the downlink call data into the downlink file, a processor in the terminal device creates two read-write units (writers) in the recorder of the terminal device and starts two threads, denoted as a first read-write thread and a second read-write thread, which perform the read-write operations respectively. When creating a writer, attribute information, for example "whether it is a downlink (mIsDownLink)", is passed in, so that it is determined which direction of data the writer performs read-write operations on. If the value of mIsDownLink is true, the writer performs read-write operations on the downlink (downLink) data; if the value of mIsDownLink is false, the writer operates on the uplink (upLink) data.
It is assumed that the first read-write thread of the embodiment of the present application is used for processing data on an uplink, and the second read-write thread is used for processing data on a downlink.
In the storage process of the voice call data, a processor in the terminal equipment calls a first read-write thread to store the uplink call data into an uplink file, and calls a second read-write thread to store the downlink call data into a downlink file.
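The following sketch illustrates one possible shape of such a read-write unit; only the mIsDownLink attribute is taken from the description above, while the queue, the stream handling, and the class name are illustrative assumptions.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of one read-write unit ("writer"). The recorder creates two of these,
// one with mIsDownLink = true (downlink file) and one with mIsDownLink = false
// (uplink file), and runs each on its own thread.
final class CallTrackWriter implements Runnable {
    private final boolean mIsDownLink;
    private final BlockingQueue<byte[]> frames;   // first or second buffer queue
    private final FileOutputStream out;           // uplink or downlink file
    private volatile boolean running = true;

    CallTrackWriter(boolean isDownLink, BlockingQueue<byte[]> frames, FileOutputStream out) {
        this.mIsDownLink = isDownLink;
        this.frames = frames;
        this.out = out;
    }

    boolean isDownLink() { return mIsDownLink; }

    void finish() { running = false; }

    @Override
    public void run() {
        try {
            while (running || !frames.isEmpty()) {
                byte[] frame = frames.poll(20, TimeUnit.MILLISECONDS);
                if (frame != null) {
                    out.write(frame);             // append this track's audio to its own file
                }
            }
            out.flush();
        } catch (IOException | InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Under these assumptions, the first read-write thread would be started as, for example, new Thread(new CallTrackWriter(false, uplinkQueue, uplinkFile)).start(), and the second as its mIsDownLink = true counterpart for the downlink file.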
According to the recording data processing method provided by the embodiment of the present application, the voice call data of the terminal device is acquired in response to a recording instruction input by a user, where the recording instruction is used to instruct recording of the voice call data currently received by the terminal device; the uplink call data and the downlink call data in the voice call data are obtained; the uplink call data is stored in an uplink file and the downlink call data is stored in a downlink file. The uplink call data and the downlink call data are thus stored separately, so that when the user chooses to play a call recording, the user can choose to play the uplink call data in the uplink file or the downlink call data in the downlink file, thereby meeting the individual requirements of the user.
On the basis of the above embodiment, the embodiment of the present application further includes a process of encoding and compressing call data. Fig. 7 is another schematic flow chart of the recording data processing method according to the embodiment of the present application, and as shown in fig. 7, the method according to the embodiment of the present application includes:
S201, responding to a recording instruction input by a user, and acquiring voice call data of the terminal device.
In a possible implementation manner of the embodiment of the present application, as shown in fig. 8A, the foregoing S201 may include:
and S2011, receiving a recording instruction input by a user.
The recording instruction is used for indicating the recording of the voice call data currently received by the terminal equipment.
The manner in which the user inputs the recording instruction to the terminal device may refer to the related description in S101, and is not described herein again.
And S2012, responding to the recording instruction input by the user, and judging whether the terminal device is currently in a voice call state.
After receiving the recording instruction input by the user, the terminal device needs to determine whether it is currently in a voice call state, that is, whether the data source currently received by the terminal device is voice call (voice_call) data. If yes, S2013 is executed; otherwise, the existing recording mode is adopted, that is, a writer is called to store the data received by the terminal device in a single file.
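A minimal sketch of such a voice-call-state check is given below, assuming an Android-style terminal where the call mode of AudioManager can be queried; the embodiment does not specify a particular API, so this is only one possible implementation.

```java
import android.content.Context;
import android.media.AudioManager;

// Illustrative check for "is the terminal currently in a voice call".
final class CallStateChecker {
    private final AudioManager audioManager;

    CallStateChecker(Context context) {
        this.audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    }

    boolean isInVoiceCall() {
        // MODE_IN_CALL indicates an established telephony call.
        return audioManager != null && audioManager.getMode() == AudioManager.MODE_IN_CALL;
    }
}
```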
And S2013, if it is determined that the terminal device is currently in the voice call state, acquiring the voice call data received by the terminal device, and executing the following S202 to S204.
S202, obtaining uplink call data and downlink call data in the voice call data.
The specific implementation process of S202 may refer to the specific description of S102, which is not described herein again.
And S203, respectively encoding and compressing the uplink call data and the downlink call data.
In this step, the uplink call data and the downlink call data are encoded and compressed respectively, so that the data volume of the uplink call data and the downlink call data can be reduced, and an excessively large storage space is not occupied when they are stored in the terminal device.
At this time, the above S1031 may be replaced with the following S204.
And S204, calling the first read-write thread to store the uplink call data after the coding compression into an uplink file, and calling the second read-write thread to store the downlink call data after the coding compression into a downlink file.
In a possible implementation manner of the embodiment of the present application, the terminal device may include two encoders, which are denoted as a first encoder and a second encoder, where the first encoder is configured to encode and compress uplink call data, and the second encoder is configured to encode and compress downlink call data. Thus, S203 described above may include S2031 as follows.
S2031, encoding and compressing the uplink call data by using the first encoder; and encoding and compressing the downlink call data by using the second encoder.
Correspondingly, the step S204 may include:
S2041, calling the first read-write thread to read the encoded and compressed uplink call data from the first encoder and store it in the uplink file; and calling the second read-write thread to read the encoded and compressed downlink call data from the second encoder and store it in the downlink file.
Specifically, referring to fig. 8B, the terminal device includes a first encoder and a second encoder. The terminal device obtains the uplink call data and the downlink call data from the voice call data, inputs the uplink call data into the first encoder so that the first encoder encodes and compresses it, and inputs the downlink call data into the second encoder so that the second encoder encodes and compresses it. Then, the terminal device calls the first read-write thread to read the encoded and compressed uplink call data from the first encoder and store it in the uplink file, and calls the second read-write thread to read the encoded and compressed downlink call data from the second encoder and store it in the downlink file.
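The following sketch abstracts the dual-encoder pipeline described above; the Encoder interface is a placeholder assumption (in practice this role could be played by a hardware or MediaCodec-based speech encoder), and only the wiring of two encoders to two files reflects the description.

```java
import java.io.FileOutputStream;
import java.io.IOException;

// Placeholder abstraction of a speech encoder; the interface itself is assumed.
interface Encoder {
    byte[] encode(byte[] pcmFrame);   // returns the compressed frame
}

// Sketch of the dual-encoder wiring in S2031/S2041.
final class DualTrackPipeline {
    private final Encoder uplinkEncoder;     // "first encoder"
    private final Encoder downlinkEncoder;   // "second encoder"

    DualTrackPipeline(Encoder uplinkEncoder, Encoder downlinkEncoder) {
        this.uplinkEncoder = uplinkEncoder;
        this.downlinkEncoder = downlinkEncoder;
    }

    // Called from the first read-write thread for each uplink frame.
    void writeUplink(byte[] pcmFrame, FileOutputStream uplinkFile) throws IOException {
        uplinkFile.write(uplinkEncoder.encode(pcmFrame));
    }

    // Called from the second read-write thread for each downlink frame.
    void writeDownlink(byte[] pcmFrame, FileOutputStream downlinkFile) throws IOException {
        downlinkFile.write(downlinkEncoder.encode(pcmFrame));
    }
}
```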
According to the recording data processing method provided by the embodiment of the present application, the voice call data of the terminal device is acquired in response to a recording instruction input by a user; the uplink call data and the downlink call data in the voice call data are obtained; the uplink call data is input into the first encoder for encoding and compression, and the downlink call data is input into the second encoder for encoding and compression; the first read-write thread is called to read the encoded and compressed uplink call data from the first encoder and store it in an uplink file, and the second read-write thread is called to read the encoded and compressed downlink call data from the second encoder and store it in a downlink file. This split-track recording method separates the calling party and the called party in an ordinary telephone call recording. After the recording is finished, the user sees two recording files which respectively store the call data of the calling party and the call data of the called party. The two files are synchronized in time, the uplink and downlink sounds correspond to the two files one to one, the recording is clear, and the individual requirements of the user are met.
On the basis of the above embodiments, the embodiment of the present application further relates to a process in which the user views the recording after the recording is finished. Fig. 9 is a schematic diagram of a process of a user viewing a recording according to an embodiment of the present application. As shown in fig. 9, the embodiment of the present application includes:
S301, receiving a recording viewing instruction input by the user.
The recording viewing instruction is used for instructing to view the target recording file.
S302, responding to the recording viewing instruction, and displaying the recording files, wherein the recording files comprise an uplink file and a downlink file.
For example, referring to fig. 10 and fig. 11, fig. 10 shows a schematic diagram of a user interface between the user and the terminal device, and fig. 11 shows a schematic diagram of a user interface on which the terminal device displays the query result. As shown in fig. 10, the user inputs a recording viewing instruction to the terminal device by triggering the call recording icon on the user interface of the terminal device. When receiving the recording viewing instruction input by the user, the terminal device jumps to the user interface shown in fig. 11 to display the recording files, where the recording files include an uplink file and a downlink file.
Optionally, the user interface shown in fig. 11 includes a search box, and the user may input the file name of the target recording file in the search box, so that the terminal device queries the target recording file and displays the query result.
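As a rough illustration of this query, the following sketch filters recording file names by a keyword entered in the search box; the matching rule (case-insensitive substring) and the class and method names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative search over recording file names entered in the search box.
final class RecordingSearch {
    static List<String> query(List<String> allFileNames, String keyword) {
        List<String> hits = new ArrayList<>();
        String needle = keyword.toLowerCase();
        for (String name : allFileNames) {
            if (name.toLowerCase().contains(needle)) {
                hits.add(name);     // keep any file whose name contains the keyword
            }
        }
        return hits;
    }
}
```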
According to the embodiment of the present application, the terminal device displays the recording files in response to the recording viewing instruction of the user, where the recording files include an uplink file and a downlink file, so that the user can trigger playing of the uplink call data from the displayed uplink file and trigger playing of the downlink call data from the displayed downlink file.
Fig. 12 is a schematic structural diagram of a recording data processing apparatus according to an embodiment of the present application. The recording data processing apparatus may be a terminal device, or may be a component (e.g., an integrated circuit, a chip, etc.) of a terminal device. As shown in fig. 12, the recording data processing apparatus 100 may include: an acquisition module 101, an obtaining module 102, and a storage module 103.
The acquisition module 101 is configured to acquire voice call data of the terminal device in response to a recording instruction input by a user, where the recording instruction is used to instruct recording of the voice call data currently received by the terminal device;
an obtaining module 102, configured to obtain uplink call data and downlink call data in the voice call data;
the storage module 103 is configured to store the uplink call data in an uplink file, and store the downlink call data in a downlink file.
The recording data processing apparatus according to the embodiment of the present application may be configured to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects thereof are similar and will not be described herein again.
In a possible implementation manner, the storage module 103 is specifically configured to call a first read-write thread to store the uplink call data in an uplink file, and call a second read-write thread to store the downlink call data in a downlink file.
Fig. 13 is a schematic structural diagram of a recording data processing apparatus according to an embodiment of the present application. On the basis of the above embodiments, the apparatus of the embodiment of the present application further includes an encoding module 104;
the encoding module 104 is configured to encode and compress the uplink call data and the downlink call data respectively;
the storage module 103 is specifically configured to invoke a first read-write thread to store the uplink call data after being encoded and compressed into an uplink file, and invoke a second read-write thread to store the downlink call data after being encoded and compressed into a downlink file.
In a possible implementation manner, the encoding module 104 is configured to encode and compress the uplink call data by using a first encoder, and to encode and compress the downlink call data by using a second encoder;
the storage module 103 is specifically configured to invoke a first read-write thread to read the encoded and compressed uplink call data from the first encoder, and store the uplink call data in an uplink file; and calling a second read-write thread to read the encoded and compressed downlink call data from the second encoder and store the downlink call data in a downlink file.
In a possible implementation manner, the storage module 103 is further configured to cache the collected voice call data in a third cache queue, and uplink call data and downlink call data in the voice call data are cross-cached in the third cache queue;
the obtaining module 102 is specifically configured to acquire the uplink call data from the third buffer queue based on the fact that each piece of uplink call data is located between two pieces of downlink call data, and to acquire the downlink call data from the third buffer queue based on the fact that each piece of downlink call data is located between two pieces of uplink call data.
The recording data processing apparatus according to the embodiment of the present application may be configured to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects thereof are similar and will not be described herein again.
Fig. 14 is a schematic structural diagram of a recording data processing apparatus according to an embodiment of the present application. On the basis of the above embodiment, the apparatus of the embodiment of the present application further includes: a receiving module 105 and a display module 106;
the receiving module 105 is configured to receive a recording viewing instruction input by a user, where the recording viewing instruction is used to instruct viewing of a recording file;
and the display module 106 is configured to display the recording files in response to the recording viewing instruction, where the recording files include an uplink file and a downlink file.
In a possible implementation manner, the receiving module 105 is further configured to receive a recording instruction input by a user;
the acquisition module 101 is specifically configured to determine, in response to the recording instruction input by the user, whether the terminal device is currently in a voice call state, and if it is determined that the terminal device is currently in the voice call state, to acquire the voice call data received by the terminal device.
The recording data processing apparatus according to the embodiment of the present application may be configured to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects thereof are similar and will not be described herein again.
Fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application. The terminal device 600 may implement the functions executed by the terminal device in the above method embodiments, and the functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above functions.
In one possible design, the terminal device 600 includes a processor 601, a transceiver 602, and a memory 603. The processor 601 is configured to support the terminal device 600 in performing the corresponding functions in the above method. The transceiver 602 is configured to support communication between the terminal device 600 and other terminal devices or network devices. The memory 603 is coupled to the processor 601 and stores the program instructions and data necessary for the terminal device 600.
When the terminal device 600 is powered on, the processor 601 may read the program instructions and data in the memory 603, interpret and execute the program instructions, and process the data of the program instructions. When data is transmitted, the processor 601 performs baseband processing on the data to be transmitted, and outputs a baseband signal to the transceiver 602, and the transceiver 602 performs radio frequency processing on the baseband signal and transmits the radio frequency signal to the outside in the form of electromagnetic waves through the antenna. When there is data to be transmitted to the terminal, the transceiver 602 receives a radio frequency signal through the antenna, converts the radio frequency signal into a baseband signal, and outputs the baseband signal to the processor 601, and the processor 601 converts the baseband signal into data and processes the data.
Those skilled in the art will appreciate that fig. 15 shows only one memory 603 and one processor 601 for ease of illustration. In an actual terminal device 600, there may be multiple processors 601 and multiple memories 603. The memory 603 may also be referred to as a storage medium or a storage device, etc., which is not limited in this application.
Based on such understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In addition, the method embodiments and the device embodiments may also refer to each other, and the same or corresponding contents in different embodiments may be referred to each other, which is not described in detail.

Claims (9)

1. A method for processing recorded data, comprising:
the method comprises the steps of responding to a recording instruction input by a user, and acquiring voice call data of the terminal equipment, wherein the recording instruction is used for indicating to record the voice call data currently received by the terminal equipment;
caching the collected voice call data into a third cache queue, wherein uplink call data and downlink call data in the voice call data are cached in the third cache queue in a crossed manner;
acquiring the uplink call data from the third cache queue based on the fact that each piece of uplink call data is located between two pieces of downlink call data; and acquiring the downlink call data from the third cache queue based on the fact that each piece of downlink call data is located between two pieces of uplink call data;
and storing the uplink call data into an uplink file, and storing the downlink call data into a downlink file, so that a user can select to play the uplink call data in the uplink file or the downlink call data in the downlink file.
2. The method of claim 1, wherein the storing the uplink call data into an uplink file and storing the downlink call data into a downlink file comprises:
and calling a first read-write thread to store the uplink call data into the uplink file, and calling a second read-write thread to store the downlink call data into the downlink file.
3. The method of claim 2, wherein before the invoking of the first read-write thread saves the uplink call data into the uplink file and the invoking of the second read-write thread saves the downlink call data into the downlink file, the method further comprises:
respectively encoding and compressing the uplink call data and the downlink call data;
the step of calling a first read-write thread to store the uplink call data into the uplink file, and calling a second read-write thread to store the downlink call data into the downlink file includes:
and calling the first read-write thread to store the uplink call data after the coding compression into the uplink file, and calling the second read-write thread to store the downlink call data after the coding compression into the downlink file.
4. The method of claim 3, wherein the separately encoding and compressing the uplink call data and the downlink call data comprises:
encoding and compressing the uplink call data by using a first encoder;
encoding and compressing the downlink call data by using a second encoder;
the step of calling the first read-write thread to store the uplink call data after being coded and compressed into the uplink file, and the step of calling the second read-write thread to store the downlink call data after being coded and compressed into the downlink file includes:
calling the first read-write thread to read the uplink call data after the coding compression from the first coder and storing the uplink call data in the uplink file;
and calling the second read-write thread to read the encoded and compressed downlink call data from the second encoder and store the downlink call data in the downlink file.
5. The method according to any one of claims 1-3, wherein the collecting voice call data of the terminal device in response to the recording instruction input by the user comprises:
receiving a recording instruction input by the user;
responding to a recording instruction input by a user, and judging whether the terminal equipment is in a voice call state currently;
and if the terminal equipment is determined to be in the voice call state currently, acquiring voice call data received by the terminal equipment.
6. The method according to any one of claims 1-3, further comprising:
receiving a recording viewing instruction input by the user, wherein the recording viewing instruction is used for indicating to view a recording file;
and responding to the recording viewing instruction, and displaying the recording file, wherein the recording file comprises an uplink file and a downlink file.
7. An apparatus for processing recorded sound data, comprising:
the terminal equipment comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for responding to a recording instruction input by a user and acquiring voice call data of the terminal equipment, and the recording instruction is used for indicating to record the voice call data currently received by the terminal equipment;
the acquisition module is used for caching the collected voice call data into a third cache queue, wherein uplink call data and downlink call data in the voice call data are cached in the third cache queue in a crossed manner;
acquiring the uplink call data from the third cache queue based on the fact that each piece of uplink call data is located between two pieces of downlink call data, and acquiring the downlink call data from the third cache queue based on the fact that each piece of downlink call data is located between two pieces of uplink call data;
and the storage module is used for storing the uplink call data into an uplink file and storing the downlink call data into a downlink file so that a user can select to play the uplink call data in the uplink file or the downlink call data in the downlink file.
8. A terminal device, comprising: a processor and a memory;
the memory for storing a computer program;
the processor for executing the computer program to implement the recording data processing method according to any one of claims 1 to 6.
9. A computer-readable storage medium characterized in that the storage medium includes computer instructions which, when executed by a computer, cause the computer to implement the sound recording data processing method according to any one of claims 1 to 6.
CN202010332855.XA 2020-04-24 2020-04-24 Recording data processing method, recording data processing device and storage medium Active CN113556421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010332855.XA CN113556421B (en) 2020-04-24 2020-04-24 Recording data processing method, recording data processing device and storage medium

Publications (2)

Publication Number Publication Date
CN113556421A CN113556421A (en) 2021-10-26
CN113556421B true CN113556421B (en) 2023-01-24

Family

ID=78101265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010332855.XA Active CN113556421B (en) 2020-04-24 2020-04-24 Recording data processing method, recording data processing device and storage medium

Country Status (1)

Country Link
CN (1) CN113556421B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116264598A (en) * 2021-12-14 2023-06-16 荣耀终端有限公司 Multi-screen collaborative communication method, system, terminal and storage medium
CN118214794A (en) * 2022-01-10 2024-06-18 荣耀终端有限公司 Method and device for transmitting call audio data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778185A (en) * 2009-01-13 2010-07-14 联芯科技有限公司 On-line recording method and device of mobile terminal

Also Published As

Publication number Publication date
CN113556421A (en) 2021-10-26

Similar Documents

Publication Publication Date Title
CN110401767B (en) Information processing method and apparatus
CN114173000B (en) Method, electronic equipment and system for replying message and storage medium
CN112312366B (en) Method, electronic equipment and system for realizing functions through NFC (near field communication) tag
WO2021083128A1 (en) Sound processing method and apparatus thereof
CN112119641B (en) Method and device for realizing automatic translation through multiple TWS (time and frequency) earphones connected in forwarding mode
CN112351156A (en) Lens switching method and device
CN113556421B (en) Recording data processing method, recording data processing device and storage medium
CN113973398A (en) Wireless network connection method, electronic equipment and chip system
CN114880251A (en) Access method and access device of storage unit and terminal equipment
CN113126948A (en) Audio playing method and related equipment
JP2022501968A (en) File transfer method and electronic device
CN109285563B (en) Voice data processing method and device in online translation process
CN111886849B (en) Information transmission method and electronic equipment
CN113129916A (en) Audio acquisition method, system and related device
CN113901485B (en) Application program loading method, electronic device and storage medium
WO2022135195A1 (en) Method and apparatus for displaying virtual reality interface, device, and readable storage medium
CN114257680B (en) Caller identification method, user equipment, storage medium and electronic equipment
CN114698078A (en) Transmission power adjustment method, electronic device, and storage medium
CN115706755A (en) Echo cancellation method, electronic device, and storage medium
CN114116610A (en) Method, device, electronic equipment and medium for acquiring storage information
CN114827098A (en) Method and device for close shooting, electronic equipment and readable storage medium
CN114430441A (en) Incoming call prompting method, system, electronic equipment and storage medium
CN114466238A (en) Frame demultiplexing method, electronic device and storage medium
WO2020062308A1 (en) Location information processing method and related device
CN116048236B (en) Communication method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant