WO2014206289A1 - Method and apparatus for outputting log information - Google Patents


Publication number
WO2014206289A1
Authority
WO
WIPO (PCT)
Prior art keywords
log information
cache queue
log
system thread
information cache
Prior art date
Application number
PCT/CN2014/080705
Other languages
French (fr)
Inventor
Siguang LI
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2014206289A1
Priority to US14/824,469 (published as US20150347305A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0875 Addressing of a memory level with dedicated cache, e.g. instruction or stack
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466 Performance evaluation by tracing or monitoring
    • G06F 11/3476 Data logging
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/46 Caching storage objects of specific type in disk cache
    • G06F 2212/463 File

Definitions

  • the present disclosure relates to the field of information technology, in particular to a method and an apparatus for outputting log information.
  • various threads configure their respective log information into the log information sharing file in a certain order: while one thread is configuring its outputted log information into the sharing file, the other threads must wait until that thread has completed the operation before they can configure their own log information into the file.
  • The embodiments of the present disclosure disclose a method and an apparatus for outputting log information that can improve the task execution efficiency of various threads.
  • a method for outputting log information is provided.
  • the method is implemented in a device having a processor.
  • in the device, a system thread acquires a plurality of pieces of log information from a plurality of application threads.
  • the system thread establishes a log information cache queue.
  • the system thread caches each piece of the log information from the plurality of pieces of log information into the established log information cache queue.
  • the system thread configures the log information located at the front of the log information cache queue into a log file.
  • an apparatus for outputting log information includes a hardware processor and a non-transitory storage medium configured to store the following units implemented by the hardware processor: an acquiring unit, a caching unit, and a configuring unit.
  • the acquiring unit is configured to acquire a plurality of pieces of log information which have been outputted by a plurality of application threads.
  • the caching unit is configured to cache each piece of the log information from the plurality of pieces of log information acquired by the acquiring unit in proper order into a log information cache queue which has been established by a system thread.
  • the configuring unit is configured to configure the log information, which is cached by the caching unit and located at the front of the log information cache queue, into a log file.
  • a device for outputting log information, including a processor and a non-transitory storage medium accessible to the processor.
  • the device is configured to: establish a log information cache queue by a system thread in the device; acquire a plurality of pieces of log information outputted from a plurality of application threads; cache each piece of the log information into the log information cache queue; and configure the log information located at the front of the log information cache queue into a log file.
  • the method and the apparatus for outputting log information disclosed in the embodiments of the present disclosure first acquire the plurality of pieces of log information outputted by the plurality of application threads, then cache each piece of the log information in proper order into the log information cache queue established by the system thread, and finally configure the log information located at the front of the log information cache queue into the log information sharing file.
  • the embodiments of the present disclosure establish and maintain one log information cache queue through an independent system thread; the system thread acquires the log information from this cache queue and configures it into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into the queue, instead of waiting for the operation of configuring the log information into the sharing file to complete, which improves the task execution efficiency and performance of the various threads.
  • Figure 1 shows a flow diagram of a method for outputting log information disclosed in the embodiments of the present disclosure.
  • Figure 2 shows a flow diagram of another method for outputting log information disclosed in the embodiments of the present disclosure.
  • Figure 3 shows an example structural schematic diagram of an apparatus for outputting log information disclosed in the embodiments of the present disclosure.
  • Figure 4 shows an example structural schematic diagram of another apparatus for outputting log information disclosed in the embodiments of the present disclosure.
  • Figure 5 shows an example schematic diagram of a log information cache queue disclosed in the embodiments of the present disclosure.
  • module may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.
  • the term module or unit may include memory (shared, dedicated, or group) that stores code executed by the processor.
  • the exemplary environment may include a server, a client, and a communication network.
  • the server and the client may be coupled through the communication network for information exchange, such as sending/receiving identification information, sending/receiving data files such as splash screen images, etc.
  • although one client and one server are shown in the environment, any number of terminals or servers may be included, and other devices may also be included.
  • the communication network may include any appropriate type of communication network for providing network connections to the server and client or among multiple servers or clients.
  • the communication network may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless.
  • the disclosed methods and apparatus may be implemented, for example, in a wireless network that includes at least one client.
  • the client may refer to any appropriate user terminal with certain computing capabilities, such as a personal computer (PC), a work station computer, a server computer, a hand-held computing device (tablet), a smart phone or mobile phone, or any other user-side computing device.
  • the client may include a network access device.
  • the client may be stationary or mobile.
  • a server may refer to one or more server computers configured to provide certain server functionalities, such as database management and search engines.
  • a server may also include one or more processors to execute computer programs in parallel.
  • the embodiments of the present disclosure disclose a method for outputting the log information; as shown in Figure 1, the method includes:
  • a system thread in a terminal device acquires a plurality of pieces of log information from a plurality of application threads.
  • the system thread may acquire the plurality of pieces of log information which have been outputted by the plurality of application threads running in the terminal device.
  • when an application thread runs, there may be a large amount of log information to be outputted; the log information is configured to record result data of various operations performed in the process of running the various application threads.
  • [028] 102: The system thread establishes a log information cache queue. To improve efficiency and reduce the waiting time of the various application threads, the log information cache queue is established and maintained by an independent system thread.
  • the device caches each piece of the log information from the plurality of pieces of log information into the established log information cache queue.
  • the device may cache each piece of the log information from the plurality of pieces of log information in a proper order into the log information cache queue.
  • the log information cache queue may be configured to save the log information which has been outputted by different application threads.
  • the form in which the log information is cached may be a memory address to which the cached log information corresponds, or any other form.
  • the embodiments of the present disclosure do not set any limit to the form of the log information.
  • the operation in which the various application threads cache the outputted log information into the log information cache queue is performed in the memory, and the time consumed for this caching operation in the memory is very short. Thus, compared with the operation in which the various application threads directly configure the log information into the log information sharing file, this operation significantly reduces the time consumed and further improves the task execution efficiency of the various threads.
  • the terminal device establishes and maintains a log information cache queue using an independent system thread.
  • the terminal device then acquires the log information from this log information cache queue through the system thread and configures it into the log file.
  • the size of the log information cache queue may be configured according to the memory size of the terminal device.
  • An example data structure of the log information cache queue is shown below:
  • the array "queue" holds pointers to the log information and is used for identifying the position of each piece of log information in the pointer array.
  • the constant “QUEUE_SIZE” represents the length of the pointer array of the log information, and it is used for identifying the length of the log information cache queue.
  • the integer variable "head” represents a dequeue subscript position of the log information, and it is used for identifying a position of the log information, which has been acquired from the log information cache queue, in the pointer array.
  • the integer variable “tail” represents an enqueue subscript position of the log information, and it is used for identifying a position of the log information, which needs to be saved into the log information cache queue, in the pointer array.
  • the Boolean variable "full" is used for identifying whether there is any remaining storage space in the log information cache queue or not.
  • the Boolean variable "empty” is used for identifying whether the log information cache queue is empty or not.
  • because the log information cache queue in the embodiments of the present disclosure is a shared resource among a plurality of threads, it is necessary to add a mutual exclusion lock to the log information cache queue when performing the operations of saving log information into the queue and of acquiring log information from the queue, so as to ensure the integrity of the operations on the shared resource; the queue is unlocked after the operations have been completed.
  • the specific procedure of caching the log information into the log information cache queue may include: first adding the mutual exclusion lock to the log information cache queue prior to caching, then determining whether the "full" flag to which the log information cache queue corresponds is true or not; if the flag is true, the memory space of the log information cache queue is full and is not capable of saving this log information; otherwise, the log information is cached into the queue and the queue is then unlocked.
  • the process of determining whether the memory space of the log information cache queue is full or not can specifically include: after assigning the pointer to a piece of log information to the "queue" array at the subscript position "tail", adding 1 to the "tail" value; determining whether the current "tail" value is equal to the maximum length of the array; if it is, configuring the "tail" value to 0 and then determining whether the "tail" value is equal to the "head" value; if it is not, directly determining whether the "tail" value is equal to the "head" value. When the "tail" value is equal to the "head" value, it is indicated that enqueue operations have been performed on this cache queue without corresponding dequeue operations, or that the amount of log information enqueued is larger than the amount dequeued, so the log information cache queue is full and the "full" flag is configured to true.
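The enqueue procedure above can be sketched as follows. This is a minimal self-contained Python illustration, not the patent's implementation; the state dictionary simply mirrors the "queue"/"head"/"tail"/"full"/"empty" fields, and QUEUE_SIZE is an arbitrary example length:

```python
import threading

QUEUE_SIZE = 4  # illustrative

# Minimal shared state mirroring the fields named in the description.
q = {"queue": [None] * QUEUE_SIZE, "head": 0, "tail": 0,
     "full": False, "empty": True}
lock = threading.Lock()  # the mutual exclusion lock


def enqueue(item):
    """Cache one piece of log information: lock, refuse if full, store at
    "tail", advance and wrap "tail", and raise "full" when "tail" catches
    up with "head"."""
    with lock:
        if q["full"]:                   # no remaining storage space
            return False
        q["queue"][q["tail"]] = item    # assign the pointer at "tail"
        q["tail"] += 1
        if q["tail"] == QUEUE_SIZE:     # reached the maximum array length
            q["tail"] = 0               # wrap to the start of the array
        if q["tail"] == q["head"]:      # enqueue caught up with dequeue
            q["full"] = True
        q["empty"] = False
        return True
```

Note the order of checks matches the description: the wraparound comparison against the array length happens before the "tail" == "head" test that sets the "full" flag.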
  • the system thread configures the log information located at the front of the log information cache queue into a log file.
  • the system thread may configure the log information, which is located at the front of the log information cache queue, into a log information sharing file.
  • the log information sharing file may specifically be a device file or a regular file and may be configured to save the log information which has been outputted by various threads.
  • the log information cache queue in the embodiments of the present disclosure may specifically be a first-in, first-out queue, so acquiring the log information from the log information cache queue is to acquire one piece of log information from the front.
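The drain step can be sketched as a loop in which the system thread repeatedly takes the piece of log information at the front of the FIFO queue and appends it to the log file. The file path and the sample entries below are hypothetical:

```python
import collections
import os
import tempfile

# Illustrative FIFO cache queue already holding some outputted log entries.
cache_queue = collections.deque(["log 2", "log 1", "log 3"])

# Hypothetical log information sharing file.
path = os.path.join(tempfile.mkdtemp(), "app.log")

with open(path, "a") as log_file:
    while cache_queue:
        entry = cache_queue.popleft()   # acquire one piece from the front
        log_file.write(entry + "\n")    # configure it into the log file

with open(path) as f:
    lines = f.read().splitlines()       # entries appear in FIFO order
```

Because the queue is first-in, first-out, the file preserves the order in which the entries were cached, not the order in which threads happened to produce them.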
  • the method disclosed in the embodiments of the present disclosure for outputting the log information first acquires the plurality of pieces of log information outputted by the plurality of application threads, then caches each piece of the log information in proper order into the log information cache queue established by the system thread, and finally configures the log information located at the front of the log information cache queue into the log information sharing file.
  • the embodiments of the present disclosure establish and maintain one log information cache queue through an independent system thread; the system thread acquires the log information from this cache queue and configures it into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into the queue, instead of waiting for the operation of configuring the log information into the sharing file to complete, which improves the task execution efficiency and performance of the various threads.
  • the embodiments of the present disclosure disclose another method for outputting the log information; as shown in Figure 2, the method includes:
  • [045] 201: acquiring the plurality of pieces of log information which have been outputted by the plurality of application threads.
  • the log information is configured to record result data of various operations which have been performed in the process of running various application threads.
  • [047] 202a: caching each piece of the log information from the plurality of pieces of log information in proper order into the log information cache queue which has been established by the system thread.
  • the system thread is configured to establish and maintain the log information cache queue.
  • the log information cache queue may be configured to save the log information which has been outputted by different application threads, and the form whereby the log information is saved into the log information cache queue may specifically be the memory address to which the saved log information corresponds.
  • the size of the log information cache queue can be specifically configured according to the memory size of the terminal device, and the specific data structure of the log information cache queue can be made with reference to the data structure in Figure 1 and will not be described with unnecessary details here.
  • the operation in which the various application threads cache the log information, which has been outputted, into the log information cache queue may be performed in the memory.
  • the time consumed for the caching operation in the memory is very short.
  • this operation can significantly reduce the time consumed for the operation and further improve the task execution efficiency of the various threads.
  • the disclosed method manages the log information sharing file through a log information cache queue.
  • the log information cache queue is a shared resource accessible to a plurality of threads.
  • the step 202a may include caching each piece of the log information into the log information cache queue in chronological order of the output time to which each piece of log information corresponds.
  • the step of caching each piece of the log information into the log information cache queue which has been established by the system thread can specifically include: first configuring the mutual exclusion lock for the log information cache queue, then caching the log information into the log information cache queue which has been configured with the mutual exclusion lock and finally unlocking the log information cache queue.
  • for example, suppose there are a thread 1, a thread 2 and a thread 3 which output log information at present.
  • the log information outputted respectively by the thread 1, the thread 2 and the thread 3 is log information 1, log information 2 and log information 3.
  • if the output sequence of the log information is the log information 2, the log information 1 and then the log information 3, first configure the mutual exclusion lock for the log information cache queue, cache the log information 2 into the queue, and unlock the queue; then cache the log information 1 and the log information 3 into the log information cache queue in the same manner.
  • the sort order of each piece of log information in the log information cache queue at this time can be as shown in Figure 5.
  • step 202b, performed in parallel with the step 202a: configuring the system thread into the suspended state if no log information exists in the log information cache queue.
  • when the system thread determines that an application thread has performed an operation of caching into the log information cache queue, the system thread re-enters the normal operating status.
  • the application thread can wake up the system thread so that it enters the normal operating status.
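One way to sketch this suspend/wake behavior is with a condition variable: the system thread waits while the cache queue is empty, and an application thread that enqueues log information notifies it. The condition-variable mechanism is an assumption for illustration; the patent does not name a specific wake-up primitive:

```python
import collections
import threading

cache_queue = collections.deque()
cond = threading.Condition()  # assumed wake-up mechanism
written = []                  # stands in for the log file


def system_thread():
    with cond:
        while not cache_queue:          # suspended state while queue is empty
            cond.wait()
        written.append(cache_queue.popleft())


def application_thread():
    with cond:
        cache_queue.append("log 1")     # cache the outputted log information
        cond.notify()                   # wake the system thread


t = threading.Thread(target=system_thread)
t.start()
application_thread()
t.join()
```

The `while` guard (rather than a plain `if`) keeps the sketch correct regardless of whether the application thread enqueues before or after the system thread reaches the wait.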
  • the log information sharing file may specifically be a device file or a regular file and may be configured to save the log information which has been outputted by various threads.
  • the log information cache queue in the embodiments of the present disclosure may specifically be a first-in, first-out queue, so each time of acquiring the log information from the log information cache queue is to acquire one piece of log information from the front.
  • the process to realize the specific procedure of acquiring the log information from the log information cache queue can include: first adding the mutual exclusion lock to this log information cache queue prior to acquiring the log information from the log information cache queue, then extracting the log information from a queue array, the dequeue subscript position of which is "head,” adding 1 to the "head” value to make the pointer to the log information point to the dequeue position of the next piece of log information, and then unlocking the log information cache queue to complete this operation of acquiring the log information.
  • the step of determining whether any log information still exists in the log information cache queue or not can specifically include: after extracting the log information from the "queue" array at the dequeue subscript position "head" and adding 1 to the "head" value, first determining whether the current "head" value is equal to the maximum length of the array; if it is, configuring the "head" value to 0 and then determining whether the "head" value is equal to the "tail" value; if it is not, directly determining whether the "head" value is equal to the "tail" value. When the "head" value is equal to the "tail" value, it is indicated that dequeue operations have been performed on this log information cache queue without new enqueue operations, or that the amount of log information dequeued is larger than the amount enqueued, so the log information cache queue is empty and the "empty" flag is configured to true.
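The dequeue bookkeeping just described can be sketched as follows. This single-threaded Python illustration omits the mutual exclusion lock for brevity and starts from an assumed state with two entries already cached; all names and values are illustrative:

```python
QUEUE_SIZE = 4  # illustrative

# Assumed starting state: two entries cached, head at 0, tail at 2.
q = {"queue": ["log 1", "log 2", None, None],
     "head": 0, "tail": 2, "full": False, "empty": False}


def dequeue():
    """Extract at "head", advance and wrap "head", and raise the "empty"
    flag when "head" catches up with "tail"."""
    if q["empty"]:
        return None
    item = q["queue"][q["head"]]       # extract at the dequeue subscript
    q["queue"][q["head"]] = None
    q["head"] += 1
    if q["head"] == QUEUE_SIZE:        # reached the maximum array length
        q["head"] = 0                  # wrap to the start of the array
    if q["head"] == q["tail"]:         # dequeue caught up with enqueue
        q["empty"] = True
    q["full"] = False                  # a slot was just freed
    return item
```

Clearing the "full" flag after a successful dequeue mirrors the enqueue side, where a successful enqueue clears "empty".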
  • [059] 204: releasing the memory space to which the log information corresponds in the log information cache queue.
  • the other method disclosed in the embodiments of the present disclosure for outputting the log information first acquires the plurality of pieces of log information outputted by the plurality of application threads, then caches each piece of the log information in proper order into the log information cache queue established by the system thread, and finally configures the log information located at the front of the log information cache queue into the log information sharing file.
  • the embodiments of the present disclosure establish and maintain one log information cache queue through an independent system thread; the system thread acquires the log information from this cache queue and configures it into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into the queue, instead of waiting for the operation of configuring the log information into the sharing file to complete, which improves the task execution efficiency and performance of the various threads.
  • the embodiments of the present disclosure disclose an apparatus 300 for outputting the log information; the apparatus can be applied to the terminal device, such as a cell phone, computer or notebook PC, and as shown in Figure 3, the apparatus 300 includes a hardware processor 310 and a non-transitory storage medium 320 configured to store the following units implemented by the hardware processor: an acquiring unit 321, a caching unit 322 and a configuring unit 323.
  • the acquiring unit 321 may be configured to acquire the plurality of pieces of log information which have been outputted by the plurality of application threads.
  • the caching unit 322 may be configured to cache each piece of the log information from the plurality of pieces of log information, which have been acquired by the acquiring unit 321, in proper order into the log information cache queue which has been established by the system thread.
  • the configuring unit 323 may be configured to configure the log information, which is cached by the caching unit 322 and located at the front of the log information cache queue, into the log information sharing file.
  • the apparatus may be implemented in a terminal device, such as a cell phone, computer or notebook PC, as shown in Figure 4.
  • the apparatus includes a hardware processor 410 and storage medium 420 configured to store the following units implemented by the hardware processor: an acquiring unit 41, a caching unit 42, a configuring unit 43, a creating unit 44, an unlocking unit 45, and a releasing unit 46.
  • the storage medium 420 may be transitory or non-transitory.
  • the acquiring unit 41 may be configured to acquire the plurality of pieces of log information which have been outputted by the plurality of application threads.
  • the caching unit 42 may be configured to cache each piece of the log information from the plurality of pieces of log information, which have been acquired by the acquiring unit 41, in proper order into the log information cache queue which has been established by the system thread.
  • the configuring unit 43 may be configured to configure the log information, which is located at the front of the log information cache queue, into the log information sharing file.
  • the creating unit 44 may be configured to create the system thread, where the system thread is configured to establish and maintain the log information cache queue.
  • the caching unit 42 may be configured to cache each piece of the log information in proper order into the log information cache queue in chronological order of the output time to which each piece of log information corresponds.
  • the configuring unit 43 may be configured to configure the mutual exclusion lock for the log information cache queue.
  • the caching unit 42 may be configured to cache the log information into the log information cache queue which has been configured with the mutual exclusion lock.
  • the unlocking unit 45 may be configured to unlock the log information cache queue.
  • the configuring unit 43 may further be configured to configure the system thread into the suspended state if the log information does not exist.
  • the releasing unit 46 may be configured to release the memory space to which the log information corresponds in the log information cache queue.
  • the apparatus disclosed in the embodiments of the present disclosure for outputting the log information first acquires the plurality of pieces of log information outputted by the plurality of application threads, then caches each piece of the log information in proper order into the log information cache queue established by the system thread, and finally configures the log information located at the front of the log information cache queue into the log information sharing file.
  • the embodiments of the present disclosure establish and maintain one log information cache queue through an independent system thread; the system thread acquires the log information from this cache queue and configures it into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into the queue, instead of waiting for the operation of configuring the log information into the sharing file to complete, which improves the task execution efficiency and performance of the various threads.
  • the apparatus that is disclosed in the embodiments of the present disclosure for outputting the log information can realize the embodiments of the method disclosed above.
  • the method and the apparatus that are disclosed in the embodiments of the present disclosure for outputting the log information may be applied to, without limitation, the field of information technology.
  • the whole or partial flow of the methods in the abovementioned embodiments may be realized through a computer program which instructs related hardware; the program may be stored in a computer-readable storage medium, and when executed the program may carry out the flows of the embodiments of the abovementioned methods.
  • the storage medium may be a disk, compact disk, read-only memory (ROM), or random access memory (RAM), etc.


Abstract

A method and an apparatus for outputting log information are disclosed in the field of information technology. In the method: a system thread acquires a plurality of pieces of log information from a plurality of application threads. The system thread establishes a log information cache queue. The system thread caches each piece of the log information from the plurality of pieces of log information into the established log information cache queue. The system thread configures the log information located at the front of the log information cache queue into a log file.

Description

Method and Apparatus for Outputting Log Information
CROSS-REFERENCE TO RELATED APPLICATIONS
[001 ] This application is a continuation of Chinese Patent Application No. 201310260929.3, filed on June 26, 2013, which is hereby incorporated herein by reference in its entirety.
FIELD
[002] The present disclosure relates to the field of information technology, in particular to a method and an apparatus for outputting log information.
BACKGROUND
[003] Along with the continuous development of terminal devices, there are more and more types of application programs in the terminal devices. In general, in a process of running an application program, there are always a plurality of threads which exist simultaneously, and each thread has a large amount of log information, which needs to be outputted to a log information sharing file, for the purpose of debugging and positioning problems in the process of running the application program.
[004] At present, various threads configure their respective log information into the log information sharing file according to a certain order, i.e., when a certain thread is performing an operation of configuring the log information, which has been outputted, into the log information sharing file, other threads need to wait until this thread has completed the operation of configuring the log information into the log information sharing file and can then configure the log information into the log information sharing file again. Therefore, outputting the log information through the existing output mode of log information will make the waiting time become relatively long before the various threads configure the log information into the log information sharing file, and the operation time consumed for the various threads to configure the log information, which has been outputted, into the log information sharing file is also relatively long, so as to cause the task execution efficiency of the various threads to be relatively low.
SUMMARY
[005] The embodiments of the present disclosure disclose a method and an apparatus for outputting log information and can improve the task execution efficiency of various threads.
[006] In a first aspect, a method for outputting log information is provided. The method is implemented in a device having a processor. In the method, a system thread in the device acquires a plurality of pieces of log information from a plurality of application threads. The system thread establishes a log information cache queue. The system thread caches each piece of the log information from the plurality of pieces of log information into the established log information cache queue. The system thread configures the log information located at the front of the log information cache queue into a log file.
[007] In a second aspect, an apparatus for outputting log information is provided. The apparatus includes a hardware processor and a non-transitory storage medium configured to store the following units implemented by the hardware processor: an acquiring unit, a caching unit, and a configuring unit. The acquiring unit is configured to acquire a plurality of pieces of log information which have been outputted by a plurality of application threads. The caching unit is configured to cache each piece of the log information from the plurality of pieces of log information acquired by the acquiring unit in proper order into a log information cache queue which has been established by a system thread. The configuring unit is configured to configure the log information, which is cached by the caching unit and located at the front of the log information cache queue, into a log file.
[008] In a third aspect, a device is provided for outputting log information, including a processor and a non-transitory storage medium accessible to the processor. The device is configured to: establish a log information cache queue by a system thread in the device; acquire a plurality of pieces of log information outputted from a plurality of application threads; cache each piece of the acquired log information into the log information cache queue; and configure the log information located at the front of the log information cache queue into a log file.
[009] The method and the apparatus disclosed in the embodiments of the present disclosure for outputting the log information include first acquiring the plurality of pieces of log information which have been outputted by the plurality of application threads, then caching each piece of the log information in proper order into the log information cache queue which has been established by the system thread, and finally configuring the log information located at the front of the log information cache queue into the log information sharing file. In the current situation, the various threads directly configure their respectively outputted log information into the log information sharing file according to a certain order; that is, while one thread is configuring its outputted log information into the log information sharing file, the other threads must wait until that thread has completed the operation before they can configure their own log information into the file. In contrast, the embodiments of the present disclosure establish and maintain one log information cache queue through the configuration of an independent system thread, acquire the log information from this log information cache queue through the system thread, and configure the acquired log information into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into this log information cache queue, without waiting for the operation of configuring the log information into the log information sharing file to complete, so as to improve the task execution efficiency and performance of the various threads.
BRIEF DESCRIPTION OF THE DRAWINGS
[010] In order to more clearly explain the technical solution in the embodiments of the present disclosure, a brief introduction is given to the attached drawings required for use in the description of the embodiments or prior art below. Obviously, the attached drawings in the following description are merely some embodiments of the present disclosure, and for those of ordinary skill in the art, they may also acquire other drawings according to these attached drawings under the precondition of not making creative efforts.
[011 ] Figure 1 shows a flow diagram of a method, which is disclosed in the embodiments of the present disclosure, for outputting log information;
[012] Figure 2 shows a flow diagram of another method, which is disclosed in the embodiments of the present disclosure, for outputting log information;
[013] Figure 3 shows an example structural schematic diagram of an apparatus, which is disclosed in the embodiments of the present disclosure, for outputting log information;
[014] Figure 4 shows an example structural schematic diagram of another apparatus, which is disclosed in the embodiments of the present disclosure, for outputting log information; and
[015] Figure 5 shows an example schematic diagram of a log information cache queue disclosed in the embodiments of the present disclosure.
DETAILED DESCRIPTION OF THE DRAWINGS
[016] Reference throughout this specification to "one embodiment," "an embodiment," "example embodiment," or the like in the singular or plural means that one or more particular features, structures, or characteristics described in connection with an embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment," "in anexample embodiment," or the like in the singular or plural in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[017] The terminology used in the description of the invention herein is for the purpose of describing particular examples only and is not intended to be limiting of the invention. As used in the description of the invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "may include," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, operations, elements, components, and/or groups thereof.
[018] As used herein, the term "module" or "unit" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module or unit may include memory (shared, dedicated, or group) that stores code executed by the processor.
[019] The exemplary environment may include a server, a client, and a communication network. The server and the client may be coupled through the communication network for information exchange, such as sending/receiving identification information, sending/receiving data files such as splash screen images, etc. Although only one client and one server are shown in the environment, any number of terminals or servers may be included, and other devices may also be included.
[020] The communication network may include any appropriate type of communication network for providing network connections to the server and client or among multiple servers or clients. For example, communication network may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless. In a certain embodiment, the disclosed methods and apparatus may be implemented, for example, in a wireless network that includes at least one client.
[021 ] In some cases, the client may refer to any appropriate user terminal with certain computing capabilities, such as a personal computer (PC), a work station computer, a server computer, a hand-held computing device (tablet), a smart phone or mobile phone, or any other user-side computing device. In various embodiments, the client may include a network access device. The client may be stationary or mobile.
[022] A server, as used herein, may refer to one or more server computers configured to provide certain server functionalities, such as database management and search engines. A server may also include one or more processors to execute computer programs in parallel.
[023] The solutions in the embodiments of the present disclosure are clearly and completely described in combination with the attached drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only a part, but not all, of the embodiments of the present disclosure. On the basis of the embodiments of the present disclosure, all other embodiments acquired by those of ordinary skill in the art under the precondition that no creative efforts have been made shall be covered by the protective scope of the present disclosure.
[024] In order to further clarify the advantages of the solutions in the present disclosure, the present disclosure is further described in detail in combination with the attached drawings and the embodiments below.
[025] The embodiments of the present disclosure disclose a method for outputting the log information; as shown in Figure 1, the method includes:
[026] 101: A system thread in a terminal device acquires a plurality of pieces of log information from a plurality of application threads. The system thread may acquire the plurality of pieces of log information which have been outputted by the plurality of application threads running in the terminal device.
[027] Here, when an application thread runs, there may be a large amount of log information to be outputted, where the log information is configured to record result data of various operations performed in the process of running various application threads.
[028] 102: The system thread establishes a log information cache queue. To improve efficiency and reduce the waiting time for the various application threads, the log information cache queue is established and maintained by an independent system thread.
[029] 103: The device caches each piece of the log information from the plurality of pieces of log information into the established log information cache queue. The device may cache each piece of the log information from the plurality of pieces of log information in a proper order into the log information cache queue.
[030] Here, the log information cache queue may be configured to save the log information which has been outputted by different application threads. For example, the log information may be cached as the memory address to which the log information corresponds, or in any other form; the embodiments of the present disclosure do not set any limit to the form of the log information. The operation in which the various application threads cache the outputted log information into the log information cache queue is performed in memory, and the time consumed by a caching operation in memory is very short. Thus, in comparison with the operation in which the various application threads directly configure the log information into the log information sharing file, this operation significantly reduces the time consumed and further improves the task execution efficiency of the various threads.
[031 ] For the embodiments of the present disclosure, the terminal device establishes and maintains a log information cache queue using an independent system thread. The terminal device then acquires the log information from this log information cache queue through the system thread so as to complete the operation of configuring the log information into the log information sharing file. The size of the log information cache queue may be configured according to the memory size of the terminal device. An example data structure of the log information cache queue is shown below:
[032] struct log_queue
{
    void* queue[QUEUE_SIZE];
    int head;
    int tail;
    bool full;
    bool empty;
};
Here, "queue" represents a pointer array for the log information; each element is used for identifying the position of one piece of log information in the pointer array. The constant "QUEUE_SIZE" represents the length of the pointer array of the log information, and it is used for identifying the length of the log information cache queue. The integer variable "head" represents a dequeue subscript position of the log information, and it is used for identifying the position, in the pointer array, of the log information which has been acquired from the log information cache queue. The integer variable "tail" represents an enqueue subscript position of the log information, and it is used for identifying the position, in the pointer array, of the log information which needs to be saved into the log information cache queue. The Boolean variable "full" is used for identifying whether there is any remaining storage space in the log information cache queue or not. The Boolean variable "empty" is used for identifying whether the log information cache queue is empty or not.
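As a concrete illustration, the structure above can be written out as a self-contained C sketch together with the initialization that step 102 implies. The struct name, the capacity value, and the `log_queue_init` helper are illustrative assumptions of this sketch, not definitions from the present disclosure:

```c
#include <stdbool.h>

#define QUEUE_SIZE 8  /* illustrative capacity; the disclosure sizes the queue by device memory */

/* Restatement of the cache-queue structure described above, so the sketch
   is self-contained. */
struct log_queue {
    void *queue[QUEUE_SIZE]; /* pointer array holding the cached log information */
    int head;                /* dequeue subscript position */
    int tail;                /* enqueue subscript position */
    bool full;               /* true when the queue has no remaining storage space */
    bool empty;              /* true when the queue holds no log information */
};

/* Step 102: the system thread establishes an initially empty cache queue. */
void log_queue_init(struct log_queue *q)
{
    q->head = 0;
    q->tail = 0;
    q->full = false;
    q->empty = true;
}
```

A freshly established queue is empty and not full, with both subscripts at the front of the array.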
[038] As the log information cache queue in the embodiments of the present disclosure is a resource shared by a plurality of threads, it is necessary to add a mutual exclusion lock to the log information cache queue when performing the operations of saving the log information into the log information cache queue and of acquiring the log information from the log information cache queue, so as to ensure the integrity of operations on the shared resource. The log information cache queue is unlocked after the operations have been completed.
[039] For the embodiments of the present disclosure, the specific procedure of caching the log information into the log information cache queue may include: first adding the mutual exclusion lock to this log information cache queue prior to caching the log information into it, and then determining whether the "full" flag to which the log information cache queue corresponds is true or not. If the flag is true, it is indicated that the memory space of the log information cache queue is full and cannot save this log information; at this time, this log information cache queue is unlocked, and a prompt message is transmitted to the system thread which maintains this log information cache queue, so as to prompt the system thread that log information which can be acquired and configured into the log information sharing file exists in the log information cache queue. If the "full" flag to which the log information cache queue corresponds is false, it is indicated that the memory space of the log information cache queue is not full; at this time, the pointer to this log information is assigned to the "queue" array at the subscript position "tail," so as to complete the enqueue operation of this log information; the log information cache queue is then unlocked, and a prompt message is transmitted to the system thread so as to prompt the system thread that log information which may be configured into the log information sharing file exists in the log information cache queue.
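The enqueue procedure just described can be sketched in C. The disclosure does not name a threading API, so a POSIX pthread mutex stands in for the mutual exclusion lock here, and all names are illustrative; the prompt message to the system thread is noted but not implemented in this fragment:

```c
#include <pthread.h>
#include <stdbool.h>

#define QUEUE_SIZE 4  /* illustrative capacity */

/* Hypothetical queue mirroring the structure above, extended with a mutex
   standing in for the mutual exclusion lock described in the text. */
struct log_queue {
    void *queue[QUEUE_SIZE];
    int head, tail;
    bool full, empty;
    pthread_mutex_t lock;
};

/* Enqueue as described in paragraph [039]: lock, check the "full" flag,
   store the pointer at "tail" if there is room, unlock. Returns false
   when the queue was full and the entry could not be cached. */
bool log_enqueue(struct log_queue *q, void *entry)
{
    bool cached = false;
    pthread_mutex_lock(&q->lock);
    if (!q->full) {
        q->queue[q->tail] = entry;      /* assign the pointer at subscript "tail" */
        if (++q->tail == QUEUE_SIZE)    /* wrap at the array length */
            q->tail = 0;
        q->full = (q->tail == q->head); /* paragraph [040]'s full test */
        q->empty = false;
        cached = true;
    }
    pthread_mutex_unlock(&q->lock);
    /* In the full flow, a prompt message would now be transmitted to wake
       the system thread; that signalling step is omitted in this sketch. */
    return cached;
}
```

With a capacity of four, the fourth enqueue fills the queue and a fifth attempt is refused until the system thread dequeues an entry.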
[040] Here, the process of determining whether the memory space of the log information cache queue is full or not can specifically include: adding 1 to the "tail" value after having assigned the pointer to any one piece of log information to the "queue" array at the subscript position "tail," and determining whether the current "tail" value is equal to the maximum length of the array or not. If it is equal to the maximum length of the array, the "tail" value is configured to 0, and it is then determined whether the "tail" value is equal to the "head" value or not; if it is unequal to the maximum length of the array, it is directly determined whether the "tail" value is equal to the "head" value or not. When the "tail" value is equal to the "head" value, it is indicated either that only enqueue operations of the log information have been performed in this cache queue and no dequeue operation has occurred, or that the amount of the log information enqueued exceeds the amount dequeued by the upper limit of the amount of log information which can be cached into the log information cache queue; either case causes the memory space of the log information cache queue to become full, and at this time the "full" flag is configured to "true." When the "tail" value is unequal to the "head" value, it is indicated that the amount of the log information enqueued and the amount dequeued are kept balanced in this cache queue, so that the memory space of the log information cache queue has not become full; at this moment, the "full" flag is configured to "false" so as to identify that the memory space of the current log information cache queue is not full yet and that this log information can be saved at this time.
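The index arithmetic of this paragraph can be isolated into a small helper, shown here without any locking; `QUEUE_SIZE` and the function name are illustrative assumptions of this sketch:

```c
#include <stdbool.h>

#define QUEUE_SIZE 4  /* illustrative array length */

/* After the pointer has been stored at subscript "tail": advance "tail" by 1,
   wrapping to 0 at the array length, and report whether the advanced "tail"
   has caught up with "head" (i.e., whether the queue has become full). */
bool advance_tail_marks_full(int *tail, int head)
{
    *tail += 1;
    if (*tail == QUEUE_SIZE)
        *tail = 0;
    return *tail == head;
}
```

Starting from an empty queue (both subscripts 0), three advances leave room, and the fourth wraps "tail" back onto "head," marking the queue full.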
[041 ] 104: The system thread configures the log information located at the front of the log information cache queue into a log file. For example, the system thread may configure the log information which is located at the front of the log information cache queue into a log information sharing file.
[042] Here, the log information sharing file may specifically be a device file or a regular file and may be configured to save the log information which has been outputted by various threads. The log information cache queue in the embodiments of the present disclosure may specifically be a first-in, first-out queue, so acquiring the log information from the log information cache queue means acquiring one piece of log information from the front.
[043] The method disclosed in the embodiments of the present disclosure for outputting the log information includes first acquiring the plurality of pieces of log information which have been outputted by the plurality of application threads, then caching each piece of the log information in proper order into the log information cache queue which has been established by the system thread, and finally configuring the log information located at the front of the log information cache queue into the log information sharing file. In the current situation, the various threads directly configure their respectively outputted log information into the log information sharing file according to a certain order; that is, while one thread is configuring its outputted log information into the log information sharing file, the other threads must wait until that thread has completed the operation before they can configure their own log information into the file. In contrast, the embodiments of the present disclosure establish and maintain one log information cache queue through the configuration of an independent system thread, acquire the log information from this log information cache queue through the system thread, and configure the acquired log information into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into this log information cache queue, without waiting for the operation of configuring the log information into the log information sharing file to complete, so as to improve the task execution efficiency and performance of the various threads.
[044] Further, the embodiments of the present disclosure disclose another method for outputting the log information; as shown in Figure 2, the method includes:
[045] 201: acquiring the plurality of pieces of log information which have been outputted by the plurality of application threads.
[046] Here, when each application thread runs, there may be a large amount of log information to be outputted. The log information is configured to record result data of various operations which have been performed in the process of running various application threads.
[047] 202a: caching each piece of the log information from the plurality of pieces of log information in proper order into the log information cache queue which has been established by the system thread.
[048] Here, the system thread is configured to establish and maintain the log information cache queue. The log information cache queue may be configured to save the log information which has been outputted by different application threads, and the form in which the log information is saved into the log information cache queue may specifically be the memory address to which the saved log information corresponds. The size of the log information cache queue can be specifically configured according to the memory size of the terminal device, and the specific data structure of the log information cache queue can be made with reference to the data structure in the embodiment of Figure 1 and will not be repeated here.
[049] For the embodiments of the present disclosure, the operation in which the various application threads cache the outputted log information into the log information cache queue may be performed in memory. The time consumed by a caching operation in memory is very short. Thus, in comparison with the operation in which the various application threads directly configure the log information into the log information sharing file, this operation can significantly reduce the time consumed and further improve the task execution efficiency of the various threads; the disclosed method manages the log information sharing file through a log information cache queue. The log information cache queue is a shared resource accessible to a plurality of threads. Thus, it may be necessary to add a mutual exclusion lock to the log information cache queue when saving the log information into it and when acquiring the log information from it, so as to ensure the integrity of operations on the shared resource, and to perform the unlocking operation after the operations have been completed.
[050] For the embodiments of the present disclosure, the times at which the various application threads output the log information follow a chronological sequence. For example, the step 202a may include caching each piece of the log information in proper order into the log information cache queue in chronological order of the output time to which each piece of log information corresponds. Here, the step of caching each piece of the log information into the log information cache queue which has been established by the system thread can specifically include: first configuring the mutual exclusion lock for the log information cache queue, then caching the log information into the log information cache queue which has been configured with the mutual exclusion lock, and finally unlocking the log information cache queue.
[051] For example, there are three application threads, i.e., a thread 1, a thread 2 and a thread 3, which output the log information at present. The log information which is outputted respectively by the thread 1, the thread 2 and the thread 3 is log information 1, log information 2 and log information 3. After sorting according to the chronological sequence of the output time of each piece of log information, the sequence of the outputted log information is the log information 2, the log information 1 and the log information 3. At this time, the mutual exclusion lock is first configured for the log information cache queue, the log information 2 is then cached into this log information cache queue, and the log information cache queue is finally unlocked; the log information 1 and the log information 3 are then cached into the log information cache queue in the same manner. The sort order of each piece of log information in the log information cache queue at this time can be as shown in Figure 5.
[052] Step 202b, performed in parallel with the step 202a: configuring the system thread into the suspended state if no log information exists in the log information cache queue.
[053] Here, through configuring the system thread into the suspended state, it is feasible to conserve the system resources occupied by the system thread in order to provide more system resources for other application threads, so as to further improve the task execution efficiency of the various application threads.
[054] Further, when this system thread judges that there is any application thread which performs the operation of caching into the log information cache queue, this system thread re-enters the normal operating status. Here, the application thread can wake up the system thread to enter the normal operating status by means of transmitting an enqueue prompt message to the system thread.
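The suspend/wake interaction of steps 202b and [054] maps naturally onto a condition variable. The sketch below assumes POSIX pthreads, since the disclosure does not specify a threading API, and all names are illustrative; the "enqueue prompt message" is modelled as a signal on the condition variable:

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative shared state between the system thread and the application
   threads: a flag guarded by a mutex, plus a condition variable on which
   the system thread suspends. */
struct wait_state {
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    bool            has_entries;
};

/* Step 202b: the system thread enters the suspended state while no log
   information exists, releasing its CPU time to the application threads. */
void system_thread_wait(struct wait_state *w)
{
    pthread_mutex_lock(&w->lock);
    while (!w->has_entries)              /* loop guards against spurious wakeups */
        pthread_cond_wait(&w->ready, &w->lock);
    pthread_mutex_unlock(&w->lock);
}

/* Paragraph [054]: an application thread wakes the system thread by
   transmitting the "enqueue prompt message" after caching log information. */
void app_thread_notify(struct wait_state *w)
{
    pthread_mutex_lock(&w->lock);
    w->has_entries = true;               /* log information was just enqueued */
    pthread_mutex_unlock(&w->lock);
    pthread_cond_signal(&w->ready);      /* wake the suspended system thread */
}
```

The predicate loop around `pthread_cond_wait` is the standard idiom: the system thread re-checks whether log information exists each time it wakes, so a stray wakeup simply puts it back to sleep.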
[055] 203: configuring the log information located in the front of the log information cache queue, into the log information sharing file.
[056] Here, the log information sharing file may specifically be a device file or a regular file and may be configured to save the log information which has been outputted by various threads. The log information cache queue in the embodiments of the present disclosure may specifically be a first-in, first-out queue, so each acquisition of the log information from the log information cache queue retrieves one piece of log information from the front.
[057] For the embodiments of the present disclosure, the specific procedure of acquiring the log information from the log information cache queue can include: first adding the mutual exclusion lock to this log information cache queue prior to acquiring the log information from it, then extracting the log information from the "queue" array at the dequeue subscript position "head," adding 1 to the "head" value to make the pointer to the log information point to the dequeue position of the next piece of log information, and then unlocking the log information cache queue to complete this operation of acquiring the log information. When it is necessary to acquire the log information from the log information cache queue again, the mutual exclusion lock is first added to this log information cache queue, the log information in the next dequeue position to which the abovementioned pointer points is then acquired, 1 is added to the "head" value again to make the pointer point to the dequeue position of the next piece of log information, and the log information cache queue is then unlocked to complete this operation of acquiring the log information. The rest can be done in the same manner until all the log information which has been cached into the log information cache queue is extracted.
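The dequeue procedure just described can be sketched as a C counterpart to the enqueue operation, again assuming a POSIX mutex for the mutual exclusion lock and illustrative names:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_SIZE 4  /* illustrative capacity */

/* Hypothetical queue mirroring the structure described earlier, with a
   mutex standing in for the mutual exclusion lock. */
struct log_queue {
    void *queue[QUEUE_SIZE];
    int head, tail;
    bool full, empty;
    pthread_mutex_t lock;
};

/* Dequeue as described in paragraph [057]: lock, take the entry at "head",
   advance "head" (wrapping at the array length), unlock. Returns NULL when
   the queue holds no log information. */
void *log_dequeue(struct log_queue *q)
{
    void *entry = NULL;
    pthread_mutex_lock(&q->lock);
    if (!q->empty) {
        entry = q->queue[q->head];
        if (++q->head == QUEUE_SIZE)        /* wrap at the array length */
            q->head = 0;
        q->full = false;                    /* one slot was just freed */
        q->empty = (q->head == q->tail);    /* paragraph [058]: empty when head catches tail */
    }
    pthread_mutex_unlock(&q->lock);
    return entry;
}
```

Each call retrieves exactly one piece of log information from the front of the first-in, first-out queue, as the text specifies.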
[058] Here, the step of determining whether any log information still exists in the log information cache queue or not can specifically include: after extracting the log information from the "queue" array at the dequeue subscript position "head" and adding 1 to the "head" value, first determining whether the current "head" value is equal to the maximum length of the array or not. If it is equal to the maximum length of the array, the "head" value is configured to 0, and it is then determined whether the "head" value is equal to the "tail" value or not; if it is unequal to the maximum length of the array, it is directly determined whether the "head" value is equal to the "tail" value or not. When the "head" value is equal to the "tail" value, it is indicated that all the log information which has been cached into the log information cache queue has been extracted; at this time, the "empty" flag is configured to "true" so as to identify that the current queue is empty. When the "head" value is unequal to the "tail" value, it is indicated that log information which can be acquired still remains in the cache queue; at this time, the "empty" flag is configured to "false" so as to identify that the current log information cache queue is not empty and still caches log information which can be acquired.
[059] 204: releasing the memory space to which the log information corresponds in the log information cache queue.
[060] Here, by releasing the memory space to which the log information corresponds in the log information cache queue, it is feasible to make the memory space that holds the log information to be outputted available to other threads and to ensure the sustainability of the memory space of the log information cache queue.
[061] The other method, which is disclosed in the embodiments of the present disclosure, for outputting the log information includes first acquiring the plurality of pieces of log information which have been outputted by the plurality of application threads, then caching each piece of the log information in proper order into the log information cache queue which has been established by the system thread, and finally configuring the log information located in the front of the log information cache queue into the log information sharing file. This contrasts with the current situation, whereby the various threads directly configure their respectively outputted log information into the log information sharing file in a certain order: while one thread is configuring its outputted log information into the log information sharing file, the other threads must wait until that thread has completed the operation before they can configure their own log information into the file. The embodiments of the present disclosure instead establish and maintain one log information cache queue through an independent system thread, acquire the log information from this queue through the system thread, and configure the acquired log information into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into this queue, without waiting for the operation of configuring the log information into the log information sharing file to complete, which improves the task execution efficiency and performance of the various threads.
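The producer/consumer arrangement described above can be sketched as follows. This is a minimal illustration under assumed names (`start_system_thread`, a `None` shutdown marker); Python's thread-safe `queue.Queue` stands in for the patent's mutex-guarded array, and the patent itself suspends the system thread rather than shutting it down.

```python
import io
import queue
import threading

def start_system_thread(shared_file):
    """Sketch of the independent system thread: it maintains the log
    information cache queue and is the sole writer of the sharing file."""
    cache = queue.Queue()              # log information cache queue

    def run():
        while True:
            log = cache.get()          # acquire the piece at the front of the queue
            if log is None:            # shutdown marker (illustrative only)
                return
            shared_file.write(log + "\n")  # configure into the sharing file

    worker = threading.Thread(target=run)
    worker.start()
    return cache, worker

# Application threads just enqueue and continue with other tasks; none of
# them ever blocks on the log information sharing file itself.
shared = io.StringIO()                 # stands in for the sharing file
cache, worker = start_system_thread(shared)
for i in range(3):
    cache.put("log %d" % i)            # returns immediately
cache.put(None)                        # ask the system thread to finish
worker.join()
```

Because only the system thread touches `shared_file`, no cross-thread file lock is needed; contention is confined to the short enqueue/dequeue operations on the queue.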
[062] Further, as the specific realization of the method shown in Figure 1, the embodiments of the present disclosure disclose an apparatus 300 for outputting the log information. The apparatus can be applied to a terminal device, such as a cell phone, computer or notebook PC. As shown in Figure 3, the apparatus 300 includes a hardware processor 310 and a non-transitory storage medium 320 configured to store the following units implemented by the hardware processor: an acquiring unit 321, a caching unit 322 and a configuring unit 323.
[063] The acquiring unit 321 may be configured to acquire the plurality of pieces of log information which have been outputted by the plurality of application threads.
[064] The caching unit 322 may be configured to cache each piece of the log information from the plurality of pieces of log information, which have been acquired by the acquiring unit 321, in proper order into the log information cache queue which has been established by the system thread.

[065] The configuring unit 323 may be configured to configure the log information, which is cached by the caching unit 322 and located in the front of the log information cache queue, into the log information sharing file.
[066] It is necessary to state that other relevant descriptions of various functional units related to the apparatus, which is disclosed in the embodiments of the present disclosure, for outputting the log information can be made with reference to the corresponding description in Figure 1 and will not be described with unnecessary details here.
[067] Yet further, as the realization of the method shown in Figure 2, the embodiments of the present disclosure disclose another apparatus for outputting the log information. The apparatus may be implemented in a terminal device, such as a cell phone, computer or notebook PC. As shown in Figure 4, the apparatus includes a hardware processor 410 and a storage medium 420 configured to store the following units implemented by the hardware processor: an acquiring unit 41, a caching unit 42, a configuring unit 43, a creating unit 44, an unlocking unit 45, and a releasing unit 46. The storage medium 420 may be transitory or non-transitory.
[068] The acquiring unit 41 may be configured to acquire the plurality of pieces of log information which have been outputted by the plurality of application threads.
[069] The caching unit 42 may be configured to cache each piece of the log information from the plurality of pieces of log information, which have been acquired by the acquiring unit 41, in proper order into the log information cache queue which has been established by the system thread.
[070] The configuring unit 43 may be configured to configure the log information, which is cached by the caching unit 42 and located in the front of the log information cache queue, into the log information sharing file.
[071 ] The creating unit 44 may be configured to create the system thread, where the system thread is configured to establish and maintain the log information cache queue.
[072] The caching unit 42 may be configured to cache each piece of the log information in proper order into the log information cache queue in chronological order of the output time to which each piece of log information corresponds.

[073] The configuring unit 43 may be configured to configure the mutual exclusion lock for the log information cache queue.
[074] The caching unit 42 may be configured to cache the log information into the log information cache queue which has been configured with the mutual exclusion lock.
[075] The unlocking unit 45 may be configured to unlock the log information cache queue.
[076] The configuring unit 43 may further be configured to configure the system thread into the suspended state if the log information does not exist.
[077] The releasing unit 46 may be configured to release the memory space to which the log information corresponds in the log information cache queue.
[078] It is necessary to state that other relevant descriptions of various functional units related to the apparatus, which is disclosed in the embodiments of the present disclosure, for outputting the log information can be made with reference to the corresponding description in Figure 2 and will not be described with unnecessary details here.
[079] The apparatus, which is disclosed in the embodiments of the present disclosure, for outputting the log information first acquires the plurality of pieces of log information which have been outputted by the plurality of application threads, then caches each piece of the log information in proper order into the log information cache queue which has been established by the system thread, and finally configures the log information located in the front of the log information cache queue into the log information sharing file. This contrasts with the current situation, whereby the various threads directly configure their respectively outputted log information into the log information sharing file in a certain order: while one thread is configuring its outputted log information into the log information sharing file, the other threads must wait until that thread has completed the operation before they can configure their own log information into the file. The embodiments of the present disclosure instead establish and maintain one log information cache queue through an independent system thread, acquire the log information from this queue through the system thread, and configure the acquired log information into the log information sharing file. Other threads can therefore execute other tasks immediately after caching their outputted log information into this queue, without waiting for the operation of configuring the log information into the log information sharing file to complete, which improves the task execution efficiency and performance of the various threads.
[080] The apparatus that is disclosed in the embodiments of the present disclosure for outputting the log information can realize the embodiments of the method disclosed above. For the realization of specific functions, please refer to the descriptions in the embodiments of the method, and they will not be described with unnecessary details here. The method and the apparatus that are disclosed in the embodiments of the present disclosure for outputting the log information may be applied to, without limitation, the field of information technology.
[081 ] Those of ordinary skill in the art may understand that the realization of the whole or partial flow in the method in the abovementioned embodiments may be completed through a computer program which instructs related hardware, the program may be stored in a computer-readable storage medium, and this program may include the flow of the embodiments of the abovementioned various methods at the time of execution. Here, the storage medium may be a disk, compact disk, read-only memory (ROM), or random access memory (RAM), etc.
The embodiments described above are only a few example embodiments of the present disclosure, but the protective scope of the present disclosure is not limited to these. Any modification or replacement that can be easily thought of by those skilled in the present art within the technical scope disclosed by the present disclosure shall be covered by the protective scope of the present disclosure. Therefore, the protective scope of the present disclosure shall be subject to the protective scope of the claims.

Claims

What is claimed is:
1. A method for outputting log information, comprising:
acquiring, by a system thread in a terminal device having a processor, a plurality of pieces of log information from a plurality of application threads;
establishing, by the system thread, a log information cache queue;
caching, by the system thread, each piece of the log information from the plurality of pieces of log information into the established log information cache queue; and
configuring, by the system thread, the log information located in the front of the log information cache queue into a log file.
2. The method of claim 1, wherein the method further comprises the following before acquiring the plurality of pieces of log information:
creating, by the terminal device, the system thread configured to establish and maintain the log information cache queue.
3. The method of claim 1, wherein caching each piece of the log information from the plurality of pieces of log information in proper order into the log information cache queue comprises:
caching each piece of the log information in proper order into the log information cache queue in a chronological order of output time corresponding to each piece of log information.
4. The method of claim 3, wherein caching each piece of the log information into the log information cache queue comprises:
configuring a mutual exclusion lock for the log information cache queue;
caching the log information into the log information cache queue configured with the mutual exclusion lock; and
unlocking the log information cache queue.
5. The method of any one of claims 1 to 4, wherein the method further comprises the following after the step of acquiring the plurality of pieces of log information from the plurality of application threads:
configuring the system thread into a suspended state if the log information does not exist.
6. The method of any one of claims 1 to 4, wherein the method further comprises the following after the step of configuring the log information located in the front of the log information cache queue into the log file:
releasing a memory space to which the log information corresponds in the log information cache queue.
7. An apparatus for outputting log information, comprising a hardware processor and a non-transitory storage medium configured to store the following modules implemented by the hardware processor:
an acquiring unit configured to acquire a plurality of pieces of log information outputted from a plurality of application threads;
a caching unit configured to cache each piece of the log information from the plurality of pieces of log information acquired by the acquiring unit into a log information cache queue established by a system thread; and
a configuring unit configured to configure the log information located in the front of the log information cache queue into a log file.
8. The apparatus of claim 7, further comprising:
a creating unit configured to create the system thread, wherein the system thread is configured to establish and maintain the log information cache queue.
9. The apparatus of claim 7, wherein the caching unit is configured to cache each piece of the log information in proper order into the log information cache queue in a chronological order of output time corresponding to each piece of log information.
10. The apparatus of claim 9, further comprising an unlocking unit, wherein: the configuring unit is further configured to configure a mutual exclusion lock for the log information cache queue;
the caching unit is configured to cache the log information into the log information cache queue configured with the mutual exclusion lock; and
the unlocking unit is configured to unlock the log information cache queue.
11. The apparatus of any one of claims 7 to 10, wherein the configuring unit is further configured to configure the system thread into a suspended state if the log information does not exist.
12. The apparatus of any one of claims 7 to 10, further comprising:
a releasing unit configured to release a memory space to which the log information corresponds in the log information cache queue.
13. A device for outputting log information, comprising a processor and a non-transitory storage medium accessible to the processor, the device configured to: establish a log information cache queue by a system thread in the device;
acquire a plurality of pieces of log information outputted from a plurality of application threads;
cache each piece of the log information from the acquired plurality of pieces of log information into the log information cache queue; and configure the log information located in the front of the log information cache queue into a log file.
14. The device of claim 13, further configured to:
create the system thread, wherein the system thread is configured to establish and maintain the log information cache queue.
15. The device of claim 13, further configured to cache each piece of the log information in proper order into the log information cache queue in a chronological order of output time corresponding to each piece of log information.
16. The device of claim 15, further configured to:
configure a mutual exclusion lock for the log information cache queue;
cache the log information into the log information cache queue configured with the mutual exclusion lock; and
unlock the log information cache queue.
17. The device of any one of claims 13 to 16, further configured to configure the system thread into a suspended state if the log information does not exist.
18. The device of any one of claims 13 to 16, further configured to release a memory space to which the log information corresponds in the log information cache queue.
PCT/CN2014/080705 2013-06-26 2014-06-25 Method and apparatus for outputting log information WO2014206289A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/824,469 US20150347305A1 (en) 2013-06-26 2015-08-12 Method and apparatus for outputting log information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310260929.3A CN104252405B (en) 2013-06-26 2013-06-26 The output intent and device of log information
CN201310260929.3 2013-06-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/824,469 Continuation US20150347305A1 (en) 2013-06-26 2015-08-12 Method and apparatus for outputting log information

Also Published As

Publication number Publication date
CN104252405B (en) 2018-02-27
US20150347305A1 (en) 2015-12-03
CN104252405A (en) 2014-12-31

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14817737; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.06.2016))
122 Ep: pct application non-entry in european phase (Ref document number: 14817737; Country of ref document: EP; Kind code of ref document: A1)