CN115186124A - Audio searching method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115186124A
CN115186124A (application CN202210723192.3A)
Authority
CN
China
Prior art keywords: audio, track, lossless audio, lossless, whole
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210723192.3A
Other languages
Chinese (zh)
Inventor
战旭宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd filed Critical Hisense Mobile Communications Technology Co Ltd
Priority to CN202210723192.3A priority Critical patent/CN115186124A/en
Publication of CN115186124A publication Critical patent/CN115186124A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/61 Indexing; Data structures therefor; Storage structures
    • G06F16/63 Querying

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses an audio searching method and apparatus, an electronic device, and a storage medium, belonging to the technical field of audio processing. The method includes: the electronic device obtains a search term, where the search term is used to search for an audio name and/or an audio performer; based on a pre-constructed directory tree, searching the stored whole-track lossless audio for the playing time information of the single-track lossless audio matching the search term; and outputting the playing time information of the single-track lossless audio. The directory tree stores the audio description information of each single-track lossless audio in the whole-track lossless audio, and the audio description information includes an audio name, an audio performer, and playing time information. In this way, a user can locate any single-track lossless audio within the whole-track lossless audio by audio name and/or audio performer, which makes whole-track lossless audio more convenient to play and improves the user experience.

Description

Audio searching method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of audio processing technologies, and in particular, to an audio search method and apparatus, an electronic device, and a storage medium.
Background
As people's cultural life grows richer, the pursuit of sound quality keeps rising, so lossless audio, such as lossless music, is becoming increasingly common.
Generally, lossless audio includes whole-track lossless audio and single-track lossless audio. A whole-track lossless audio file typically plays for 40 to 50 minutes and contains multiple single-track lossless audios. In an audio player, whole-track lossless audio is displayed as a single audio file, so if a user wants to play one single-track lossless audio within it, the user must either locate that track by manually fast-forwarding through the whole-track lossless audio, or first split the whole-track lossless audio into single-track lossless audios with a separate audio processing tool and then select one track to play. The playing mode of whole-track lossless audio is therefore cumbersome, which also hinders its distribution.
Hence, the prior art has the problem that whole-track lossless audio is cumbersome to play.
Disclosure of Invention
The embodiments of the application provide an audio searching method and apparatus, an electronic device, and a storage medium, aiming to solve the prior-art problem that whole-track lossless audio is cumbersome to play.
In a first aspect, an embodiment of the present application provides an audio search method, including:
an electronic device obtains a search term, where the search term is used to search for an audio name and/or an audio performer;
based on a pre-constructed directory tree, searching the stored whole-track lossless audio for playing time information of the single-track lossless audio matching the search term, where the directory tree stores audio description information of each single-track lossless audio in the whole-track lossless audio, and the audio description information includes an audio name, an audio performer, and playing time information;
and outputting the playing time information of the single-track lossless audio.
In some embodiments, the directory tree is constructed according to the following steps:
parsing the compact disc image auxiliary (CUE) file of the whole-track lossless audio to obtain the audio description information of each single-track lossless audio in the whole-track lossless audio;
constructing a first-level leaf node for the whole-track lossless audio, and mounting the first-level leaf node under a pre-constructed root node;
constructing a second-level leaf node for each single-track lossless audio, and mounting each second-level leaf node under the first-level leaf node;
and constructing a third-level leaf node for each piece of audio description information of the single-track lossless audio, and mounting each third-level leaf node under the corresponding second-level leaf node to obtain the directory tree.
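The construction steps above can be sketched as follows. This is a minimal illustration, assuming the CUE file has already been parsed into per-track description dicts; the dict-based node layout and all field names ("children", "value", etc.) are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the three-level directory tree described above.

def build_directory_tree(albums):
    """albums maps a whole-track audio file name to a list of per-track
    description dicts parsed from its CUE file."""
    root = {"name": "root", "children": []}
    for album_file, tracks in albums.items():
        # First-level leaf node: one per whole-track lossless audio.
        album_node = {"name": album_file, "children": []}
        root["children"].append(album_node)
        for track in tracks:
            # Second-level leaf node: one per single-track lossless audio.
            track_node = {"name": track["title"], "children": []}
            album_node["children"].append(track_node)
            # Third-level leaf nodes: one per piece of audio description
            # information (name, performer, start time, end time).
            for key in ("title", "performer", "start", "end"):
                track_node["children"].append({"name": key, "value": track[key]})
    return root
```

Keeping every piece of description information in its own third-level leaf is what later lets a search match on one leaf (say, the performer) and read the timing from its siblings.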
In some embodiments, searching the stored whole-track lossless audio for the playing time information of the single-track lossless audio matching the search term based on the pre-constructed directory tree includes:
searching the third-level leaf nodes of the directory tree for a target leaf node matching the search term;
determining the single-track lossless audio corresponding to the parent node of the target leaf node as the single-track lossless audio matching the search term;
and reading the playing time information of the single-track lossless audio from the sibling nodes of the target leaf node.
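A minimal sketch of this lookup: match a third-level leaf against the search term, treat its parent node as the matching single-track audio, and read the timing from the sibling leaves. The nested-dict tree layout and names below are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical search over a three-level directory tree.

def search_tracks(root, term):
    results = []
    term = term.lower()
    for album_node in root["children"]:            # first-level nodes
        for track_node in album_node["children"]:  # second-level nodes
            leaves = {leaf["name"]: leaf["value"]
                      for leaf in track_node["children"]}  # third-level nodes
            # Target leaf node: an audio-name or performer leaf matching the term.
            if term in str(leaves.get("title", "")).lower() or \
               term in str(leaves.get("performer", "")).lower():
                # Sibling leaves supply the playing time information.
                results.append((album_node["name"], track_node["name"],
                                leaves.get("start"), leaves.get("end")))
    return results
```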
In some embodiments, further comprising:
and before searching the stored whole-track lossless audio for the playing time information of the single-track lossless audio matching the search term based on the pre-constructed directory tree, determining that the function of searching for single-track lossless audio within whole-track lossless audio is enabled.
In some embodiments, the electronic device is a terminal or a server.
In some embodiments, the electronic device is the terminal, the playing time information includes at least a start time, and the method further includes:
after outputting the playing time information of the single-track lossless audio, in response to a playback request, playing the whole-track lossless audio from the start time so as to play the single-track lossless audio.
In some embodiments, the playing time information further includes an end time, and the method further includes:
after the whole-track lossless audio starts playing from the start time, stopping playback when it is determined that the whole-track lossless audio has been played to the end time, thereby completing playback of the single-track lossless audio.
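The stop condition above can be sketched as a simple playback window, assuming the playing time information has been resolved to start and end offsets in seconds; the generator and its names are illustrative, not the patent's implementation:

```python
# Hypothetical playback-window filter: positions before the start time are
# skipped, and reaching the end time stops playback, so only the single-track
# lossless audio within the whole-track file is heard.

def playback_window(positions, start, end):
    """positions: an iterable of playback positions in seconds."""
    for pos in positions:
        if pos < start:
            continue   # not yet at the single track's start time
        if pos >= end:
            break      # end time reached: stop playing
        yield pos
```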
In some embodiments, further comprising:
and in response to a request to delete the whole-track lossless audio, deleting the whole-track lossless audio and deleting the audio description information of each single-track lossless audio in the whole-track lossless audio from the directory tree.
In a second aspect, an embodiment of the present application provides an audio search apparatus, disposed in an electronic device, including:
an acquisition module, configured to obtain a search term, where the search term is used to search for an audio name and/or an audio performer;
a search module, configured to search, based on a pre-constructed directory tree, the stored whole-track lossless audio for playing time information of the single-track lossless audio matching the search term, where the directory tree stores audio description information of each single-track lossless audio in the whole-track lossless audio, and the audio description information includes an audio name, an audio performer, and playing time information;
and an output module, configured to output the playing time information of the single-track lossless audio.
In some embodiments, a construction module is further included for constructing the directory tree according to the following steps:
parsing the compact disc image auxiliary (CUE) file of the whole-track lossless audio to obtain the audio description information of each single-track lossless audio in the whole-track lossless audio;
constructing a first-level leaf node for the whole-track lossless audio, and mounting the first-level leaf node under a pre-constructed root node;
constructing a second-level leaf node for each single-track lossless audio, and mounting each second-level leaf node under the first-level leaf node;
and constructing a third-level leaf node for each piece of audio description information of the single-track lossless audio, and mounting each third-level leaf node under the corresponding second-level leaf node to obtain the directory tree.
In some embodiments, the search module is specifically configured to:
search the third-level leaf nodes of the directory tree for a target leaf node matching the search term;
determine the single-track lossless audio corresponding to the parent node of the target leaf node as the single-track lossless audio matching the search term;
and read the playing time information of the single-track lossless audio from the sibling nodes of the target leaf node.
In some embodiments, further comprising:
and a determination module, configured to determine, before the stored whole-track lossless audio is searched for the playing time information of the single-track lossless audio matching the search term based on the pre-constructed directory tree, that the function of searching for single-track lossless audio within whole-track lossless audio is enabled.
In some embodiments, the electronic device is a terminal or a server.
In some embodiments, the electronic device is the terminal, the playing time information includes at least a start time, and the apparatus further includes:
a playback module, configured to play, in response to a playback request after the playing time information of the single-track lossless audio is output, the whole-track lossless audio from the start time so as to play the single-track lossless audio.
In some embodiments, the playing time information further includes an end time, and the playback module is further configured to:
after the whole-track lossless audio starts playing from the start time, stop playback when it is determined that the whole-track lossless audio has been played to the end time, thereby completing playback of the single-track lossless audio.
In some embodiments, further comprising:
and a deletion module, configured to delete, in response to a request to delete the whole-track lossless audio, the whole-track lossless audio, and to delete the audio description information of each single-track lossless audio in the whole-track lossless audio from the directory tree.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the audio search method described above.
In a fourth aspect, embodiments of the present application provide a storage medium; when a computer program in the storage medium is executed by a processor of an electronic device, the electronic device is enabled to execute the above audio search method.
In the embodiments of the application, the electronic device obtains a search term, where the search term is used to search for an audio name and/or an audio performer; based on a pre-constructed directory tree, the stored whole-track lossless audio is searched for the playing time information of the single-track lossless audio matching the search term; and the playing time information of the single-track lossless audio is output. The directory tree stores the audio description information of each single-track lossless audio in the whole-track lossless audio, and the audio description information includes an audio name, an audio performer, and playing time information. In this way, a user can locate any single-track lossless audio within the whole-track lossless audio by audio name and/or audio performer, which makes whole-track lossless audio more convenient to play and improves the user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is an application scenario diagram of an audio search method provided in an embodiment of the present application;
fig. 2 is an application scenario diagram of another audio search method provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 4 is a block diagram of a software structure of a terminal according to an embodiment of the present disclosure;
fig. 5 is an interaction flowchart of an audio search method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a search interface provided by an embodiment of the present application;
fig. 7 is a flowchart for constructing a directory tree according to an embodiment of the present application;
FIG. 8 is a content diagram of a CUE file of full-track lossless audio according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a directory tree according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a search results page provided by an embodiment of the present application;
FIG. 11 is a diagram illustrating search results of another audio search method according to an embodiment of the present application;
FIG. 12 is a search interface diagram of another audio search method provided in an embodiment of the present application;
fig. 13 is a flowchart of an audio searching method according to an embodiment of the present application;
FIG. 14 is a flowchart of another audio searching method provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of an audio playback interface according to an embodiment of the present application;
FIG. 16 is a schematic diagram of another audio playback interface provided in an embodiment of the present application;
FIG. 17 is a schematic diagram of an interface for deleting full-track lossless audio according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of an audio search apparatus according to an embodiment of the present application;
fig. 19 is a schematic structural diagram of another audio search apparatus according to an embodiment of the present application;
fig. 20 is a schematic hardware structure diagram of an electronic device for implementing an audio search method according to an embodiment of the present application.
Detailed Description
In order to solve the prior-art problem that whole-track lossless audio is cumbersome to play, embodiments of the present application provide an audio searching method and apparatus, an electronic device, and a storage medium.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it should be understood that the preferred embodiments described herein are merely for illustrating and explaining the present application, and are not intended to limit the present application, and that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
To facilitate understanding of the present application, the technical terms used herein are first explained:
Lossless audio refers to audio without loss of sound quality. Common lossless audio formats include FLAC, WAV, and APE. The lossless compression formats are essentially compressed forms of WAV and are decoded back to WAV during playback, so playing WAV directly saves decoding overhead and is smoother; however, a WAV file is too large to be convenient to use. To keep sound quality while remaining convenient, audio is generally compressed into the APE format: as a lossless compression format, APE reduces file size through a more refined encoding scheme, and the restored data is identical to the source file, so file integrity is guaranteed. Compared with FLAC, APE has error-checking capability but provides no error correction, which helps keep files lossless and clean. Its other characteristic is a compression ratio of about 55%, higher than that of FLAC, so the compressed file is roughly half the size of the original CD and easy to store.
Whole-track lossless audio is one form of lossless audio: an APE file compressed from an entire CD, containing multiple single-track lossless audios.
Single-track lossless audio is another form of lossless audio. Taking single-track lossless music as an example, a single-track lossless music is a single song divided from the whole-track lossless music. To select songs at will from whole-track lossless music, the support of a compact disc image auxiliary (CUE) file is needed; the CUE file records information such as the start and end times of each single-track lossless music within the whole-track lossless music.
A CUE file is written in a plain-text format. It plays an important role as a companion to the disc image file: it tells burning software what format to burn, what content to burn, where each track starts and ends, what information to attach, and so on.
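Extracting per-track description information from such a file can be sketched as follows, using standard CUE conventions: per-track TITLE/PERFORMER lines and INDEX 01 timestamps in mm:ss:ff format at 75 frames per second. The function and field names are illustrative assumptions, not the patent's implementation:

```python
import re

FRAMES_PER_SECOND = 75  # CUE INDEX timestamps are mm:ss:ff, 75 frames per second

def cue_time_to_seconds(mmssff):
    mm, ss, ff = (int(x) for x in mmssff.split(":"))
    return mm * 60 + ss + ff / FRAMES_PER_SECOND

def parse_cue(text):
    """Return one description dict per TRACK entry in a CUE sheet."""
    tracks, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("TRACK "):
            current = {"title": "", "performer": "", "start": None}
            tracks.append(current)
        elif current is not None:
            # TITLE/PERFORMER lines before the first TRACK describe the
            # whole disc and are skipped here.
            m = re.match(r'TITLE "(.*)"', line)
            if m:
                current["title"] = m.group(1)
            m = re.match(r'PERFORMER "(.*)"', line)
            if m:
                current["performer"] = m.group(1)
            m = re.match(r"INDEX 01 (\d+:\d+:\d+)", line)
            if m:
                current["start"] = cue_time_to_seconds(m.group(1))
    return tracks
```

Since a CUE sheet records only start positions, each track's end time can be taken as the next track's start time, and the last track's end as the total duration of the whole-track file.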
First, it should be noted that the execution subject in the embodiments of the present application may be a terminal or a server. Moreover, the audio searching method provided by the embodiments of the application can be applied to whole-track lossless audio such as whole-track lossless music, whole-track lossless crosstalk (xiangsheng, a comic-dialogue performance), and whole-track lossless story collections. In whole-track lossless music, the audio name is the song title and the audio performer is the singer; in whole-track lossless crosstalk, the audio name is the title of the piece and the audio performer is the crosstalk performer; in a whole-track lossless story collection, the audio name is the story title and the audio performer is the storyteller.
An application scenario of the embodiment of the present application is described below.
When the execution subject of the embodiment of the present application is a server, fig. 1 is an application scenario diagram of an audio search method provided by the embodiment of the present application, and includes a terminal 100 and a server 200.
Various applications may be installed on the terminal 100. For example, in this embodiment, an application capable of playing music is installed on the terminal 100; the user searches for music through the application, and the application may also receive and play music recommended by the server 200 for the user to enjoy.
The server 200 can provide various network services for the terminal 100. For the various music files presented on the terminal 100, such as whole-track lossless music and single songs, the server 200 can be regarded as the background server providing the corresponding network services. For example, in the embodiments of the present application, the server 200 may receive a search request sent by the terminal 100, perform a music search, and return the search results; the server 200 may also push music matching the user's listening preferences to the terminal 100 for the user to choose from.
The server 200 may be a server, a server cluster formed by a plurality of servers, or a cloud computing center.
Specifically, the server 200 may include a processor 210 (CPU), a storage device 220, an input device 230, an output device 240, and the like. The input device 230 may include a keyboard, a mouse, a touch screen, and the like, and the output device 240 may include a display device such as a Liquid Crystal Display (LCD) or a Cathode Ray Tube (CRT).
The processor 210 executes program instructions by invoking the storage device 220.
The storage device 220 may include Read-Only Memory (ROM) and Random Access Memory (RAM), and provides the processor 210 with the program instructions and data stored therein.
The terminal 100 and the server 200 are connected via a network to communicate with each other. Optionally, the network uses standard communication technologies and/or protocols. The network is typically the Internet, but can be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired, or wireless network, a private network, or a virtual private network, or any combination thereof. In some embodiments, data exchanged over the network is represented using technologies and/or formats including HyperText Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN), and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
It should be noted that the application architecture diagram in the embodiments of the present application is intended to illustrate the technical solution more clearly and does not limit it. The technical solution is not limited to music-listening applications; it also applies to similar problems under other application architectures and in other business applications.
When the execution subject of the embodiments of the present application is a terminal, fig. 2 is an application scenario diagram of another audio search method provided in the embodiments of the present application, which includes the terminal 100 and the memory 300. The terminal 100 obtains the search term input by the user and searches the whole-track lossless audio stored in the memory 300 for the playing time information of the single-track lossless audio matching the search term; the memory 300 returns the search result to the terminal 100; and the terminal 100 outputs and presents the search result.
Although only a single terminal 100 and memory 300 are described in detail in the present application, those skilled in the art will understand that the terminal 100 and memory 300 shown are intended to represent the operation of terminals and memories according to the teachings of the present application, and are not meant to imply any limitation on the number, type, or location of terminals 100 or memories 300. It should be noted that the underlying concepts of the example embodiments of the present application are not altered if additional modules are added to or removed from the illustrated environment.
The storage 300 in the embodiment of the present application may be, for example, a cache system, or may also be a hard disk storage, a memory storage, or the like. In addition, the audio searching method provided by the application is not only suitable for the application scene shown in fig. 2, but also suitable for any device with audio searching requirements.
In addition, although the memory 300 in fig. 2 is external to the terminal 100, in some embodiments the memory 300 may also be internal to the terminal 100.
In any of the foregoing scenarios, the terminal in the embodiments of the present application may be any of various electronic devices such as a mobile phone, a tablet computer (e.g., an iPad), a wearable device, and an on-board unit, which is not limited in the embodiments of the present application.
Fig. 3 is a schematic structural diagram of a terminal 100 provided in an embodiment of the present application, and it should be understood that the terminal 100 shown in fig. 3 is only an example, and the terminal 100 may have more or less components than those shown in fig. 3, may combine two or more components, or may have a different component configuration. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of the terminal 100 according to an exemplary embodiment is shown in fig. 3. As shown in fig. 3, the terminal 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, an audio playback component 170, a Wireless Fidelity (Wi-Fi) module 180, a processor 190, a Bluetooth module 1100, and a power supply 1200.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 190 for processing; the uplink data may be transmitted to the base station. In general, RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 190 performs various functions of the terminal 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 120 stores an operating system that enables the terminal 100 to operate. The memory 120 may store an operating system and various application programs, and may also store codes for performing the methods described in the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal 100, and particularly, the display unit 130 may include a touch screen 1301 disposed on the front surface of the terminal 100 and capable of collecting touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal 100. Specifically, the display unit 130 may include a display screen 1302 disposed on the front surface of the terminal 100. The display screen 1302 may be a color liquid crystal screen, and may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display various graphical user interfaces described herein.
The touch screen 1301 may cover the display screen 1302, or the touch screen 1301 and the display screen 1302 may be integrated to implement the input and output functions of the terminal 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to processor 190 for conversion into digital image signals.
The terminal 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, etc.
Audio circuitry 160, audio playback component 170 may provide an audio interface between a user and terminal 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 171, and convert the electrical signal into a sound signal by the speaker 171 for output. The terminal 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 172 converts the collected sound signals into electrical signals, converts the electrical signals into audio data after being received by the audio circuit 160, and outputs the audio data to the RF circuit 110 to be transmitted to, for example, another terminal or outputs the audio data to the memory 120 for further processing. In the present application, the microphone 172 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and the terminal 100 can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the Wi-Fi module 180, and provides wireless broadband internet access for the user.
The processor 190 is a control center of the terminal 100, connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 190 may include one or more processing units; processor 190 may also integrate an application processor, which primarily handles operating systems, user interfaces, and applications, etc., with a baseband processor, which primarily handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into processor 190. In the present application, the processor 190 may run an operating system, an application program, a user interface display, and a touch response, so as to implement the audio control method provided in the embodiment of the present application. Further, processor 190 is coupled to display unit 130.
The bluetooth module 1100 is used for performing information interaction with other bluetooth devices having the bluetooth module through a bluetooth protocol. For example, the terminal 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module through the bluetooth module 1100, so as to perform data interaction.
The terminal 100 also includes a power supply 1200 (e.g., a battery) that provides power to the various components. The power supply may be logically coupled to processor 190 through a power management system to manage charging, discharging, and power consumption functions through the power management system. The terminal 100 may also be configured with power buttons for powering the terminal on and off, and for locking the screen.
Fig. 4 is a block diagram of a software structure of the terminal 100 according to an embodiment of the present disclosure. The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided, from top to bottom, into four layers: an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 4, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 4, the application framework layers may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether a status bar exists, lock the screen, capture screenshots, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the terminal 100, such as management of call states (including connected, hung up, etc.).
The resource manager provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to the application.
The notification manager enables an application to display notification information in the status bar and can be used to convey notification-type messages, which may disappear automatically after a short stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like. The notification manager may also present notifications that appear in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the terminal vibrates, or an indicator light flashes.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL).
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, composition, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The workflow of the software and hardware of the terminal 100 is exemplified below in connection with the opening of a multimedia sound scene of a game application.
When the touch screen 1301 receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as touch coordinates and a time stamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking as an example a touch operation that is a click operation whose corresponding control is the icon of a game application: the game application calls an interface of the application framework layer to start the game application, further starts the audio driver by calling the kernel layer, and plays a prompt tone, background sound, or other multimedia sound of the game application through the speaker 171.
After introducing the application scenario of the embodiment of the present application, the following describes an audio search method proposed in the present application with reference to a flowchart.
First, the scheme of the present application is described by taking a scenario in which a terminal and a server jointly act as the execution subjects of the embodiment of the present application as an example.
Fig. 5 is an interaction flowchart of an audio search method provided in an embodiment of the present application, including the following steps.
In step 501, the terminal determines a search word in response to a search operation.
Referring to fig. 6, fig. 6 is a schematic view of a search interface provided in an embodiment of the present application. Information such as a search box, recommended search terms (such as audio performer 1, audio name 1, and audio performer 2), a hot search list, and a topic list may be displayed on the search interface. The search term input in the search box may be an audio name, an audio performer, or both an audio name and an audio performer.
In step 502, the terminal transmits the search word to the server.
In step 503, the server searches, based on a pre-constructed directory tree, the stored whole-track lossless audio for the playing time information of the single-track lossless audio matched with the search term.
The playing time information includes at least a start time, and may further include an end time.
In specific implementation, the server may construct a directory tree according to the steps shown in fig. 7:
in step 5031a, the CUE file of the entire-track lossless audio is parsed to obtain audio description information of each single-track lossless audio in the entire-track lossless audio.
Fig. 8 is a content diagram of a CUE file of whole-track lossless audio according to an embodiment of the present application. In the file, the first row, PERFORMER "unknown artist", represents the audio performer of the whole-track lossless audio (which may be empty); the second row, TITLE "unknown title", represents the audio name of the whole-track lossless audio; the third row, FILE "cdimage.ape" WAVE, represents the file name of the whole-track lossless audio; the fourth row, TRACK 01 AUDIO, represents the serial number of single-track lossless audio 1 in the whole-track lossless audio; the fifth row, TITLE "TRACK 01", represents the audio name of single-track lossless audio 1; the sixth row, PERFORMER "unknown artist", represents the audio performer of single-track lossless audio 1; the seventh row, INDEX 01 00, represents the start time of single-track lossless audio 1; the eighth row, TRACK 02 AUDIO, represents the serial number of single-track lossless audio 2 in the whole-track lossless audio; the ninth row, TITLE "TRACK 02", represents the audio name of single-track lossless audio 2; the tenth row, PERFORMER "unknown artist", represents the audio performer of single-track lossless audio 2; the eleventh row, INDEX 00 04, represents the pre-gap index of single-track lossless audio 2; and the twelfth row, INDEX 01 04, represents the start time of single-track lossless audio 2.
From the CUE file contents shown in FIG. 8, the server can know the audio description information of each single-track lossless audio in the whole-track lossless audio.
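The parsing in step 5031a can be sketched as follows. This is a minimal, hypothetical Python sketch (the function and field names, such as `parse_cue`, `title`, `performer`, and `start`, are illustrative and not from the patent) that extracts the per-track audio description information from CUE text like that of fig. 8:

```python
# Hypothetical sketch of step 5031a: parse a CUE sheet into per-track
# audio description entries. Field names are illustrative.
def parse_cue(text):
    tracks = []
    current = None  # description entry of the track being parsed
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("TRACK"):
            current = {}            # a new TRACK row opens a new entry
            tracks.append(current)
        elif line.startswith("TITLE") and current is not None:
            current["title"] = line.split('"')[1]
        elif line.startswith("PERFORMER") and current is not None:
            current["performer"] = line.split('"')[1]
        elif line.startswith("INDEX 01") and current is not None:
            current["start"] = line.split()[2]  # mm:ss:ff start time
    return tracks

cue = '''PERFORMER "unknown artist"
TITLE "unknown title"
FILE "cdimage.ape" WAVE
  TRACK 01 AUDIO
    TITLE "TRACK 01"
    PERFORMER "unknown artist"
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    TITLE "TRACK 02"
    PERFORMER "unknown artist"
    INDEX 01 04:48:26
'''

tracks = parse_cue(cue)
```

Top-level TITLE and PERFORMER rows (describing the whole-track audio) are skipped here because no TRACK entry is open yet; a fuller parser would record them separately.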
In step 5032a, a primary leaf node is constructed for the whole-track lossless audio and mounted under a pre-constructed root node.
In step 5033a, a secondary leaf node is constructed for each single-track lossless audio and mounted under the primary leaf node.
In step 5034a, a tertiary leaf node is constructed for each piece of audio description information of the single-track lossless audio, and each tertiary leaf node is mounted under the corresponding secondary leaf node to obtain the directory tree.
Fig. 9 is a schematic structural diagram of a directory tree according to an embodiment of the present application. The directory tree includes a root node, primary leaf nodes, secondary leaf nodes, and tertiary leaf nodes. A plurality of primary leaf nodes are mounted under the root node, and each primary leaf node represents one whole-track lossless audio; at least one secondary leaf node is mounted under each primary leaf node, and each secondary leaf node represents one single-track lossless audio in the whole-track lossless audio corresponding to that primary leaf node; a plurality of tertiary leaf nodes are mounted under each secondary leaf node, and each tertiary leaf node represents one piece of audio description information of the single-track lossless audio corresponding to that secondary leaf node, such as the audio name, the audio performer, or the audio playing time information.
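The construction in steps 5032a to 5034a over a structure like that of fig. 9 might look like the following Python sketch, assuming a simple illustrative `Node` type (all names here are hypothetical, not from the patent):

```python
# Illustrative node type and construction routine for the directory tree.
class Node:
    def __init__(self, label, value=None):
        self.label = label      # node kind, e.g. "whole-track", "performer"
        self.value = value      # the stored description value
        self.children = []

    def mount(self, child):
        """Mount `child` under this node and return the child."""
        self.children.append(child)
        return child

def build_directory_tree(albums):
    """albums maps each whole-track audio name to a list of per-track
    description entries (name, performer, playing time information)."""
    root = Node("root")
    for album_name, entries in albums.items():
        primary = root.mount(Node("whole-track", album_name))           # step 5032a
        for e in entries:
            secondary = primary.mount(Node("single-track", e["name"]))  # step 5033a
            # step 5034a: one tertiary leaf per piece of description info
            secondary.mount(Node("audio_name", e["name"]))
            secondary.mount(Node("performer", e["performer"]))
            secondary.mount(Node("play_time", e["play_time"]))
    return root

root = build_directory_tree({
    "whole-track lossless audio 1": [
        {"name": "audio name 1", "performer": "audio performer 1",
         "play_time": "00:00:00"},
    ],
})
```

The entries passed to `build_directory_tree` would come from parsing the CUE file, as described in step 5031a.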
In specific implementation, after the server obtains the search term, it may search the tertiary leaf nodes of the directory tree for a target leaf node matching the search term, then determine the single-track lossless audio corresponding to the parent node of the target leaf node as the single-track lossless audio matched with the search term, and finally read the playing time information of that single-track lossless audio from a sibling node of the target leaf node.
For example, after the search term "audio name 1" is input in the search box of fig. 6, the terminal may send "audio name 1" to the server, and the server searches the directory tree shown in fig. 9 for a three-level leaf node matching "audio name 1". Only one three-level leaf node shown in fig. 9 matches "audio name 1", so that leaf node may be determined as the target leaf node; then, the "single-track lossless audio 1-1" corresponding to the parent node of the target leaf node may be determined as the single-track lossless audio matched with the search term; and finally, the playing time information of "single-track lossless audio 1-1" (i.e., "audio playing time information 1") may be read from a sibling node of the target leaf node.
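The lookup described above can be illustrated with a self-contained Python sketch, here modeling the three-level tree as nested dicts; the tree contents, including the end times, are illustrative values and not taken from the patent:

```python
# Self-contained sketch of the lookup over a three-level directory tree,
# modeled as nested dicts; all contents are illustrative values.
tree = {
    "whole-track lossless audio 1": {
        "single-track lossless audio 1-1": {
            "audio_name": "audio name 1",
            "performer": "audio performer 1",
            "play_time": ("00:00:00", "04:48:26"),
        },
        "single-track lossless audio 1-2": {
            "audio_name": "audio name 2",
            "performer": "audio performer 1",
            "play_time": ("04:48:26", "09:15:00"),
        },
    },
}

def search(tree, term):
    """Return (single-track name, playing time info) for the first
    tertiary leaf whose audio name or performer matches `term`."""
    for album, tracks in tree.items():
        for track, leaves in tracks.items():
            if term in (leaves["audio_name"], leaves["performer"]):
                # the playing time is read from the sibling (same-level) leaf
                return track, leaves["play_time"]
    return None
```

Here the match against the tertiary leaves, the step up to the parent (the single-track name), and the read from the sibling playing-time leaf correspond to the three operations described in the preceding paragraphs.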
In step 504, the server sends the search result to the terminal, the search result at least including the playing time information of the single-track lossless audio.
In particular, the search result may further include an audio name of the single-track lossless audio, an audio performer, and an audio name of the full-track lossless audio.
In step 505, the terminal presents the search results.
Fig. 10 is a schematic diagram of a search result page provided in an embodiment of the present application. The search result interface includes a search box and a search result area: the currently input search term is displayed in the search box, and the audio description information of the single-track lossless audio matched with the search term, such as the audio name, the audio performer, the audio playing time information, and the audio name of the whole-track lossless audio to which the single-track lossless audio belongs, is displayed in the search result area.
In addition, in some embodiments, when the input search term is incorrect, when the corresponding whole-track lossless audio does not exist in the directory tree, or in similar cases, the server may fail to find the playing time information of single-track lossless audio matching the search term, and the terminal may display the search result page shown in fig. 11.
It should be noted that fig. 10 and fig. 11 are only examples of the search result page and do not limit it; a skilled person can design the search result page as needed.
In step 506, the terminal responds to the playing request and plays based on the playing time information of the single-track lossless audio.
In some embodiments, the playing time information includes a start time. In this case, after responding to the play request, the terminal plays the whole-track lossless audio from the start time; that is, it plays the single-track lossless audio matched with the search term and, if any, the single-track lossless audio that follows it in the whole-track lossless audio.
In some embodiments, the playing time information includes a start time and an end time. After responding to the play request, the terminal plays the whole-track lossless audio from the start time and stops playing when it determines that the whole-track lossless audio has been played to the end time; that is, only the single-track lossless audio matched with the search term in the whole-track lossless audio is played.
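The two playback behaviors can be sketched as a small Python helper that converts CUE-style time stamps into a seek position and an optional stop position. The mm:ss:ff stamp format, with 75 frames per second, is standard CUE notation; the function names are illustrative, not from the patent:

```python
# Hedged sketch of the two playback behaviors: convert CUE-style
# mm:ss:ff stamps (ff = frames, 75 per second on a CD) to seconds and
# derive a (seek_to, stop_at) pair.
def cue_time_to_seconds(stamp):
    mm, ss, ff = (int(part) for part in stamp.split(":"))
    return mm * 60 + ss + ff / 75.0

def playback_window(play_time_info):
    """stop_at is None when only a start time is available, in which
    case playback simply continues to the end of the whole-track audio."""
    seek_to = cue_time_to_seconds(play_time_info[0])
    stop_at = (cue_time_to_seconds(play_time_info[1])
               if len(play_time_info) > 1 else None)
    return seek_to, stop_at
```

A player would seek to `seek_to` before starting playback and, when `stop_at` is set, stop once the playback position reaches it, so that only the matched single-track audio is heard.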
In addition, a function key for allowing search of single-track lossless audio within whole-track lossless audio may further be provided. As shown in fig. 12, this function key may be displayed below the search box: if the function key is turned on, single-track lossless audio within whole-track lossless audio can be searched; if the function key is turned off, single-track lossless audio within whole-track lossless audio is not searched.
Fig. 13 is a flowchart of an audio search method provided in an embodiment of the present application, where the execution subject of the method is a server, and the method includes the following steps:
in step 1301, a search term sent by a first terminal is received, wherein the search term includes an audio name and/or an audio performer.
In step 1302, if it is determined that the function of searching the single-track lossless audio in the full-track lossless audio is enabled, the playing time information of the single-track lossless audio matched with the search word is searched from the stored full-track lossless audio based on the pre-constructed directory tree.
For details of this step, refer to step 503 above; they are not repeated here.
In step 1303, the search result is sent to the first terminal, and the search result at least includes the playing time information of the single-track lossless audio.
At this time, the server outputting the playing time information of the single-track lossless audio means sending the search result to the first terminal.
In step 1304, a deletion request for a whole-track lossless audio file sent by a second terminal is received.
In step 1305, based on the deletion request, the corresponding whole-track lossless audio is deleted, and the audio description information of each single-track lossless audio in the whole-track lossless audio is deleted from the directory tree.
The first terminal and the second terminal may be the same terminal or different terminals.
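Steps 1304 and 1305 amount to removing a primary leaf node together with everything mounted beneath it. A minimal Python sketch over a nested-dict representation of the directory tree (the contents are illustrative, not from the patent):

```python
# Sketch of steps 1304-1305: deleting the primary node drops the
# description info of every single-track audio under it in one step.
tree = {
    "whole-track lossless audio 1": {
        "single-track lossless audio 1-1": {"audio_name": "audio name 1"},
        "single-track lossless audio 1-2": {"audio_name": "audio name 2"},
    },
    "whole-track lossless audio 2": {
        "single-track lossless audio 2-1": {"audio_name": "audio name 3"},
    },
}

def delete_whole_track(tree, name):
    """Handle a deletion request; returns True if the audio existed."""
    return tree.pop(name, None) is not None

deleted = delete_whole_track(tree, "whole-track lossless audio 1")
```

Because the per-track description entries hang under the whole-track node, a single removal keeps the directory tree consistent with the stored audio files.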
The following describes the scheme of the present application by taking the terminal as the execution subject of the embodiment of the present application as an example.
Fig. 14 is a flowchart of another audio searching method according to an embodiment of the present application, including the following steps.
In step 1401, in response to a search operation, a search term is determined, wherein the search term is an audio name and/or an audio performer.
In step 1402, the playing time information of the single-track lossless audio matching the search term is searched from the stored whole-track lossless audio based on the pre-constructed directory tree.
In specific implementation, the stored CUE file of the whole-track lossless audio may be parsed to obtain the audio description information of each single-track lossless audio in the whole-track lossless audio; a primary leaf node is constructed for the whole-track lossless audio and mounted under a pre-constructed root node; a secondary leaf node is constructed for each single-track lossless audio and mounted under the primary leaf node; and a tertiary leaf node is constructed for each piece of audio description information of the single-track lossless audio, with each tertiary leaf node mounted under the corresponding secondary leaf node to obtain the directory tree.
In specific implementation, after the terminal obtains the search term, it may search the tertiary leaf nodes of the directory tree for a target leaf node matching the search term, then determine the single-track lossless audio corresponding to the parent node of the target leaf node as the single-track lossless audio matched with the search term, and finally read the playing time information of that single-track lossless audio from a sibling node of the target leaf node.
In step 1403, the search results are presented, wherein the search results at least include playback time information of the single-track lossless audio.
In addition, the search result may further include the audio name of the single-track lossless audio, the audio performer, the audio name of the full-track lossless audio, and the like, and the search result page is shown in fig. 10 or fig. 11.
In step 1404, playback is performed based on the playback time information of the single-track lossless audio in response to the playback request.
The playing interface of the terminal may be as shown below:
fig. 15 is a schematic diagram of an audio playing interface according to an embodiment of the present application, where the audio playing interface may display playing time information of the single-track lossless audio 3 matched with the search word, an audio name, an audio performer name, and audio words of the audio, where the audio playing time information includes a start time of the single-track lossless audio 3 in the whole-track lossless audio, that is, 10.
Fig. 16 is a schematic diagram of another audio playing interface according to an embodiment of the present application. This playing interface may display the audio information of each single-track lossless audio in the whole-track lossless audio, mark the single-track lossless audio 3 matched with the search term, and directly locate to the start time of the single-track lossless audio 3, i.e., the 10th minute.
In some embodiments, the playing time information includes a start time. In this case, after responding to the play request, the terminal plays the whole-track lossless audio from the start time; that is, it plays the single-track lossless audio matched with the search term and, if any, the single-track lossless audio that follows it in the whole-track lossless audio.
In some embodiments, the playing time information includes a start time and an end time. After responding to the play request, the terminal plays the whole-track lossless audio from the start time and stops playing when it determines that the whole-track lossless audio has been played to the end time; that is, only the single-track lossless audio matched with the search term in the whole-track lossless audio is played.
In step 1405, in response to the deletion operation, the entire track of lossless audio to be deleted is determined.
Referring to fig. 17, when the user selects any whole-track lossless audio for deletion and clicks the delete button, a pop-up window may be displayed asking the user to confirm that the file is to be deleted, and prompt information may be output to remind the user that the file cannot be recovered after deletion, and so on.
In step 1406, the determined entire track of lossless audio is deleted, and the audio description information of each single track of lossless audio in the entire track of lossless audio is deleted from the directory tree.
Based on the same technical concept, an embodiment of the present application further provides an audio search apparatus. Since the principle by which the audio search apparatus solves the problem is similar to that of the audio search method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Fig. 18 is a schematic structural diagram of an audio search apparatus according to an embodiment of the present disclosure, which is disposed in an electronic device and includes an obtaining module 1801, a searching module 1802, and an output module 1803.
An obtaining module 1801, configured to obtain a search term, where the search term is used to search for an audio name and/or an audio performer;
a searching module 1802, configured to search, based on a pre-constructed directory tree, stored whole-track lossless audio for the playing time information of the single-track lossless audio matched with the search term, where the audio description information of each single-track lossless audio in the whole-track lossless audio is stored in the directory tree, and the audio description information includes an audio name, an audio performer, and playing time information;
an output module 1803, configured to output the playing time information of the single-track lossless audio.
Fig. 19 is a schematic structural diagram of another audio search apparatus provided in the embodiment of the present application;
in some embodiments, a construction module 1804 is further included for constructing the directory tree according to the following steps:
parsing the compact disc cue sheet (CUE) file of the whole-track lossless audio to obtain the audio description information of each single-track lossless audio in the whole-track lossless audio;
constructing a primary leaf node for the whole-track lossless audio, and mounting the primary leaf node under a pre-constructed root node;
constructing a secondary leaf node for each single-track lossless audio, and mounting the secondary leaf node under the primary leaf node;
and constructing a tertiary leaf node for each piece of audio description information of the single-track lossless audio, and mounting each tertiary leaf node under the corresponding secondary leaf node to obtain the directory tree.
In some embodiments, the lookup module 1802 is specifically configured to:
searching a target leaf node matched with the search word from three levels of leaf nodes of the directory tree;
determining the single-track lossless audio corresponding to the upper-level node of the target leaf node as the single-track lossless audio matched with the search word;
and reading the playing time information of the single-track lossless audio from the peer node of the target leaf node.
In some embodiments, further comprising:
a determining module 1805, configured to determine to start a function of searching for single-track lossless audio in full-track lossless audio before searching for playing time information of single-track lossless audio matching the search term from stored full-track lossless audio based on a pre-constructed directory tree.
In some embodiments, the electronic device is a terminal or a server.
In some embodiments, the electronic device is the terminal, the playing time information at least includes a start time, and the method further includes:
a playing module 1806, configured to play the entire-track lossless audio from the start time in response to a playing request after outputting the playing time information of the single-track lossless audio, so as to play the single-track lossless audio.
In some embodiments, the playing time information further includes an end time, and the playing module 1806 is further configured to:
after the whole-track lossless audio is played from the starting time, if the whole-track lossless audio is determined to be played to the ending time, the playing is stopped, and the single-track lossless audio is played.
In some embodiments, further comprising:
a deleting module 1807, configured to, in response to a deletion request for the entire-track lossless audio, delete the entire-track lossless audio, and delete the audio description information of each single-track lossless audio in the entire-track lossless audio from the directory tree.
The division of the modules in the embodiments of the present application is schematic, and only one logic function division is provided, and in actual implementation, there may be another division manner, and in addition, each function module in each embodiment of the present application may be integrated in one processor, may also exist alone physically, or may also be integrated in one module by two or more modules. The coupling of the various modules to each other may be through interfaces that are typically electrical communication interfaces, but mechanical or other forms of interfaces are not excluded. Thus, modules described as separate components may or may not be physically separate, may be located in one place, or may be distributed in different locations on the same or different devices. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
Having described the audio search method and apparatus according to an exemplary embodiment of the present application, an electronic device according to another exemplary embodiment of the present application is described next.
An electronic device 2000 implemented according to this embodiment of the present application is described below with reference to fig. 20. The electronic device 2000 shown in fig. 20 is only an example, and should not bring any limitation to the functions and the range of use of the embodiment of the present application.
As shown in fig. 20, the electronic device 2000 is represented in the form of a general electronic device. Components of the electronic device 2000 may include, but are not limited to: the at least one processor 2001, the at least one memory 2002, and a bus 2003 that couples various system components including the memory 2002 and the processor 2001.
Bus 2003 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, or a local bus using any of a variety of bus architectures.
The memory 2002 may include readable media in the form of volatile memory, such as Random Access Memory (RAM) 20021 and/or cache memory 20022, and may further include Read Only Memory (ROM) 20023.
The memory 2002 may also include a program/utility 20025 having a set (at least one) of program modules 20024, such program modules 20024 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The electronic device 2000 may also communicate with one or more external devices 2004 (e.g., keyboard, pointing device, etc.), with one or more devices that enable a user to interact with the electronic device 2000, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 2000 to communicate with one or more other electronic devices. Such communication may occur through input/output (I/O) interface 2005. Also, the electronic device 2000 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 2006. As shown, the network adapter 2006 communicates with other modules for the electronic device 2000 over a bus 2003. It should be understood that although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 2000, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment, there is also provided a storage medium in which a computer program is stored, the computer program being executable by a processor of an electronic device, the electronic device being capable of performing the above-mentioned audio search method. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, an electronic device of the present application may include at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores a computer program executable by the at least one processor, and the computer program, when executed by the at least one processor, may cause the at least one processor to perform the steps of any of the audio search methods provided by the embodiments of the present application.
In an exemplary embodiment, a computer program product is also provided which, when run on an electronic device, enables the electronic device to implement any of the exemplary methods provided herein.
Also, a computer program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for audio search in the embodiments of the present application may be a CD-ROM and include program code, and may be run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In situations involving remote computing devices, the remote computing device may be connected to the user's computing device over any kind of network, such as a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such division is merely exemplary and not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more units described above may be embodied in one unit. Conversely, the features and functions of one unit described above may be further divided so as to be embodied by a plurality of units.
Further, while the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all changes and modifications that fall within the scope of the present application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An audio search method, comprising:
the electronic equipment acquires a search word, wherein the search word is used for searching an audio name and/or an audio performer;
searching playing time information of the single-track lossless audio matched with the search word from stored whole-track lossless audio based on a pre-constructed directory tree, wherein audio description information of each single-track lossless audio in the whole-track lossless audio is stored in the directory tree, and the audio description information comprises an audio name, an audio performer and playing time information;
and outputting the playing time information of the single-track lossless audio.
2. The method of claim 1, wherein the directory tree is constructed according to the following steps:
analyzing the CD-image auxiliary CUE file of the whole-track lossless audio to obtain the audio description information of each single-track lossless audio in the whole-track lossless audio;
constructing a primary leaf node for the whole-track lossless audio, and mounting the primary leaf node under a pre-constructed root node;
constructing a secondary leaf node for each single-track lossless audio, and mounting the secondary leaf node under the primary leaf node;
and constructing a tertiary leaf node for each item of audio description information of the single-track lossless audio, and mounting each tertiary leaf node under the secondary leaf node to obtain the directory tree.
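The construction steps of claim 2 can be sketched in Python. The patent itself gives no code; the CUE grammar handled below is deliberately simplified (only `PERFORMER`, `TRACK`, `TITLE`, and `INDEX 01` lines), and the nested-dict node layout and all function names are illustrative assumptions:

```python
def parse_cue(cue_text):
    """Parse a minimal CUE sheet into per-track audio description records.
    Only PERFORMER / TRACK / TITLE / INDEX 01 lines are handled (assumption)."""
    tracks, current, album_performer = [], None, ""
    for line in cue_text.splitlines():
        line = line.strip()
        if line.startswith("PERFORMER") and current is None:
            album_performer = line.split('"')[1]          # album-level performer
        elif line.startswith("TRACK"):
            current = {"name": "", "performer": album_performer, "start": None}
            tracks.append(current)
        elif line.startswith("TITLE") and current is not None:
            current["name"] = line.split('"')[1]
        elif line.startswith("PERFORMER") and current is not None:
            current["performer"] = line.split('"')[1]     # track-level override
        elif line.startswith("INDEX 01") and current is not None:
            mm, ss, ff = line.split()[2].split(":")       # MM:SS:FF, 75 frames/s
            current["start"] = int(mm) * 60 + int(ss) + int(ff) / 75.0
    for prev, nxt in zip(tracks, tracks[1:]):
        prev["end"] = nxt["start"]                        # a track ends where the next begins
    if tracks:
        tracks[-1]["end"] = None                          # last track runs to the end of the file
    return tracks

def build_directory_tree(root, album_file, cue_text):
    """Mount a primary (album) node under the root, a secondary node per
    track, and the tertiary description entries under each track node."""
    album_node = {"file": album_file, "tracks": []}        # primary leaf node
    for t in parse_cue(cue_text):
        album_node["tracks"].append({                      # secondary leaf node
            "name": t["name"],                             # tertiary entries:
            "performer": t["performer"],                   # name, performer,
            "start": t["start"], "end": t["end"],          # playing time info
        })
    root.append(album_node)
    return root
```

A real CUE parser would also need to honor the `FILE` directive and quoted-string escaping; this sketch only illustrates how the three node levels are mounted.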
3. The method of claim 2, wherein searching for playback time information of single-track lossless audio matching the search term from stored full-track lossless audio based on a pre-constructed directory tree comprises:
searching for a target leaf node matching the search word among the tertiary leaf nodes of the directory tree;
determining the single-track lossless audio corresponding to the parent node of the target leaf node as the single-track lossless audio matching the search word;
and reading the playing time information of the single-track lossless audio from the peer nodes of the target leaf node.
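The lookup of claim 3 can be sketched as a walk over such a tree. The nested-dict shape assumed here (`{"file": …, "tracks": [{"name", "performer", "start", "end"}]}`) is an illustration, not the patent's prescribed data layout:

```python
def search_tracks(root, query):
    """Match the search word against the tertiary name/performer entries
    and read the playing time information from their peer entries."""
    query, hits = query.lower(), []
    for album_node in root:                       # primary (album) nodes
        for track_node in album_node["tracks"]:   # secondary (track) nodes
            if (query in track_node["name"].lower()
                    or query in track_node["performer"].lower()):
                hits.append({
                    "file": album_node["file"],   # whole-track audio to open
                    "name": track_node["name"],
                    "start": track_node["start"], # playing time information
                    "end": track_node["end"],
                })
    return hits
```

A case-insensitive substring match is used for simplicity; the patent leaves the matching rule open.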
4. The method of any of claims 1-3, further comprising:
and before searching, based on the pre-constructed directory tree, for the playing time information of the single-track lossless audio matching the search word in the stored whole-track lossless audio, determining that a function of searching for single-track lossless audio within whole-track lossless audio has been enabled.
5. The method of any of claims 1-3, wherein the electronic device is a terminal or a server.
6. The method of claim 5, wherein the electronic device is the terminal, the playing time information includes at least a start time, and further comprising:
after outputting the playing time information of the single-track lossless audio, in response to a playback request, playing the whole-track lossless audio from the start time so as to play the single-track lossless audio.
7. The method of claim 6, wherein the play time information further includes an end time, further comprising:
after the whole-track lossless audio starts playing from the start time, stopping playback upon determining that the whole-track lossless audio has been played to the end time, thereby completing playback of the single-track lossless audio.
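One way to realize the playback window of claims 6-7 is to hand the start and end times to an external player. The sketch below builds an `ffplay` invocation (this assumes ffmpeg is installed; the patent does not prescribe any particular player or decoder):

```python
def playback_command(album_file, start, end=None):
    """Build an ffplay invocation that seeks to the track's start time and
    exits at its end time, so only the single track is heard."""
    cmd = ["ffplay", "-nodisp", "-autoexit", "-ss", f"{start:.2f}"]
    if end is not None:
        cmd += ["-t", f"{end - start:.2f}"]  # stop when the end time is reached
    cmd.append(album_file)                   # the whole-track lossless audio file
    return cmd
```

The returned list can be passed to `subprocess.run`; omitting `end` (the last track) simply lets playback run to the end of the file.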
8. The method of claim 1, further comprising:
in response to a deletion request for the whole-track lossless audio, deleting the whole-track lossless audio, and deleting the audio description information of each single-track lossless audio in the whole-track lossless audio from the directory tree.
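The deletion of claim 8 amounts to removing the audio file and unmounting its subtree, which discards all per-track description nodes in one step. A minimal sketch, again assuming the illustrative nested-dict tree used above:

```python
import os

def delete_album(root, album_file):
    """Delete the whole-track file (if present on disk) and unmount its
    subtree, which removes every per-track description node with it."""
    if os.path.exists(album_file):
        os.remove(album_file)
    # dropping the primary node discards its secondary and tertiary children
    root[:] = [node for node in root if node["file"] != album_file]
    return root
```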
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
10. A storage medium, wherein, when a computer program stored in the storage medium is executed by a processor of an electronic device, the electronic device is enabled to perform the method according to any one of claims 1-8.
CN202210723192.3A 2022-06-23 2022-06-23 Audio searching method and device, electronic equipment and storage medium Pending CN115186124A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210723192.3A CN115186124A (en) 2022-06-23 2022-06-23 Audio searching method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210723192.3A CN115186124A (en) 2022-06-23 2022-06-23 Audio searching method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115186124A 2022-10-14

Family

ID=83515234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210723192.3A Pending CN115186124A (en) 2022-06-23 2022-06-23 Audio searching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115186124A (en)

Similar Documents

Publication Publication Date Title
US20160210363A1 (en) Contextual search using natural language
US11132333B2 (en) File access with different file hosts
EP3627311B1 (en) Computer application promotion
US20130159853A1 (en) Managing playback of supplemental information
CN113037929B (en) Information relay output method and device, electronic equipment and storage medium
US10402647B2 (en) Adapted user interface for surfacing contextual analysis of content
CN114020197B (en) Cross-application message processing method, electronic device and readable storage medium
CN114374813B (en) Multimedia resource management method, recorder and server
US20220021922A1 (en) Playlist switching method, apparatus and system, terminal and storage medium
CN113709026A (en) Method, device, storage medium and program product for processing instant communication message
US20230139886A1 (en) Device control method and device
KR102368945B1 (en) Encoded associations with external content items
CN114827745B (en) Video subtitle generation method and electronic equipment
JP7254842B2 (en) A method, system, and computer-readable recording medium for creating notes for audio files through interaction between an app and a website
CN115186124A (en) Audio searching method and device, electronic equipment and storage medium
KR20190084051A (en) Select layered content
CN112786022B (en) Terminal, first voice server, second voice server and voice recognition method
US20100120531A1 (en) Audio content management for video game systems
CN117097793B (en) Message pushing method, terminal and server
WO2024093700A1 (en) Service hopping method and device, and storage medium
CN113031903B (en) Electronic equipment and audio stream synthesis method thereof
CN113656636A (en) Single music information processing method and terminal equipment
CN113672423A (en) Method for restoring analysis file of album file and terminal equipment
CN115174504A (en) Interface display method, terminal equipment and storage medium
CN113805706A (en) Text input method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination