CN112752144B - Wireless media interaction method and system - Google Patents


Info

Publication number
CN112752144B
CN112752144B (application CN202110008610.6A)
Authority
CN
China
Prior art keywords
information
content
audio data
terminal
data stream
Prior art date
Legal status (assumption, not a legal conclusion)
Active
Application number
CN202110008610.6A
Other languages
Chinese (zh)
Other versions
CN112752144A (en)
Inventor
盛亚婷 (Sheng Yating)
王天尧 (Wang Tianyao)
Current Assignee (the listed assignees may be inaccurate)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202110008610.6A priority Critical patent/CN112752144B/en
Publication of CN112752144A publication Critical patent/CN112752144A/en
Application granted granted Critical
Publication of CN112752144B publication Critical patent/CN112752144B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4394Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/233Processing of audio elementary streams
    • H04N21/2335Processing of audio elementary streams involving reformatting operations of audio signals, e.g. by converting from one coding standard to another
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4126The peripheral being portable, e.g. PDAs or mobile phones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43637Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephonic Communication Services (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present application disclose a wireless media interaction method and system. The method comprises the following steps: after receiving a trigger signal to start collecting information, a second terminal begins recording first audio data while a first terminal plays a first content data stream; the second terminal sends the recorded second audio data to a server, or determines characteristic information from the recorded second audio data and sends the characteristic information to the server; the second terminal receives second content information, associated with the first content data stream, pushed by the server; and the second terminal displays the second content information. The wireless media interaction method and system disclosed in the embodiments of the present application ensure that a user can conveniently and reliably interact with media.

Description

Wireless media interaction method and system
This application is a divisional application of the application filed on February 17, 2016, with application number 201610088055.1, entitled "A wireless media interaction method, system, and server".
Technical Field
The present disclosure relates to the field of media information interaction technologies, and in particular, to a wireless media interaction method and system.
Background
With the continuous progress of internet technology, viewers are no longer satisfied with purely passive viewing when watching television; they want to participate in media programs and interact with them.
Existing media interaction methods generally work as follows: while a media program is displayed, a television-station logo or a two-dimensional code is shown on the screen. A viewer can scan the logo or code with a client device such as a mobile phone or tablet computer and, based on the scanned logo or code, is linked to the interaction platform corresponding to the currently playing media program, thereby interacting with it.
In implementing the present application, the inventors found at least the following problems in the prior art: when a user scans a television-station logo or two-dimensional code with a client device, the refresh rate of the television screen can cause recognition to fail; the recognition rate is low, and the user may need to scan several times before succeeding. Moreover, to improve the recognition success rate during scanning, the client must be kept relatively close to the television screen, which is inconvenient for the user.
Disclosure of Invention
The embodiment of the application aims to provide a wireless media interaction method and system so as to ensure that a user can conveniently and successfully realize interaction with media.
In order to solve the above technical problems, the embodiments of the present application provide a wireless media interaction method and system that are implemented as follows:
a wireless media interaction method applied to a second terminal, the method comprising: after receiving a trigger signal for starting to collect information, starting to record first audio data in the process of playing the first content data stream by the first terminal; transmitting the recorded second audio data to a server; or determining characteristic information according to the recorded second audio data, and sending the characteristic information to a server; receiving second content information which is pushed by the server and is associated with the first content data stream; and displaying the second content information.
A wireless media interaction method applied to a second terminal, the method comprising: after receiving a trigger signal for starting to collect information, starting to record first audio data in the process of playing a first content data stream mixed with first sound wave data by a first terminal; transmitting the recorded second audio data to a server; or determining characteristic information according to the recorded second audio data, and sending the characteristic information to a server; receiving second content information which is pushed by the server and is associated with the first content data stream; and displaying the second content information.
A wireless media interaction system, comprising: the system comprises a program issuing platform, a server, a first terminal and a second terminal; the program distribution platform is used for sending a first content data stream to the server; the server is used for acquiring first audio data in a first content data stream; the first audio data has first characteristic information; the server sends the first content data stream to a first terminal for playing; the first terminal is used for receiving and playing the first content data stream sent by the server; the second terminal is used for beginning to record the first audio data in the process of playing the first content data stream by the first terminal after receiving the trigger signal for beginning to collect information, and sending the recorded second audio data to the server; the server is further configured to receive second audio data recorded from the second terminal, and determine second feature information according to the second audio data; matching the second characteristic information with the first characteristic information; when the second characteristic information is matched with the first characteristic information, the server pushes second content information pre-associated with the first content data stream to the second terminal; wherein the second content information includes: interactive content information of the first content data stream.
A wireless media interaction system, comprising: the system comprises a program issuing platform, a server, a first terminal and a second terminal; the program distribution platform is used for sending a first content data stream to the server; the server is used for generating first sound wave data corresponding to the first content data stream; the first sound wave data has first characteristic information; the server sends a first content data stream mixed with the first sound wave data to a first terminal for playing; the first terminal is used for receiving and playing a first content data stream mixed with the first sound wave data sent by the server; the second terminal is used for starting to record the first audio data in the process of playing the first content data stream mixed with the first sound wave data by the first terminal after receiving the trigger signal for starting to acquire information, and sending the recorded second audio data to the server; the server is further configured to receive second audio data recorded from the second terminal, and determine second feature information according to the second audio data; matching the second characteristic information with the first characteristic information; when the second characteristic information is matched with the first characteristic information, the server pushes second content information pre-associated with the first content data stream to the second terminal; wherein the second content information includes: interactive content information of the first content data stream.
According to the technical solutions provided by the embodiments of the present application, a user can connect to a media program through audio data or sound wave data at the second terminal the user is holding, achieving wireless media interaction. The playback volume of the media program is generally sufficient for the client to record the audio data related to the interactive content, so the success rate of connecting the user to the media interaction content is improved and the user can reliably interact with the media. Meanwhile, in the method provided by the embodiments of the present application, the second terminal only needs to be within audio-collection distance of the first terminal playing the media, which is convenient for the user to operate. On the other hand, because the sound wave data is directly superimposed on the first content data stream, the stream does not need lossy processing such as compression, and the audio-visual playback of the media program is not affected.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram illustrating the composition of one embodiment of a wireless media interactive system of the present application;
FIG. 2 is a flow chart of one embodiment of a wireless media interaction method of the present application;
FIG. 3 is a flow chart of an embodiment of a server-based wireless media interaction method of the present application;
FIG. 4 is a block diagram of one embodiment of a server in a wireless media interactive system of the present application.
Detailed Description
The embodiment of the application provides a wireless media interaction method and system.
In order to better understand the technical solutions in the present application, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, shall fall within the scope of the present application.
FIG. 1 is a schematic diagram illustrating the composition of an embodiment of a wireless media interaction system according to the present application. The data connections between the devices in the wireless media interaction system are shown in fig. 1.
FIG. 2 is a flow chart of one embodiment of a wireless media interaction method of the present application. As shown in fig. 2, the wireless media interaction method may include:
s101: the program distribution platform transmits a first content data stream to a server.
The first content data stream may be used to describe media program content. The first content data stream may include audio data of the media program. The first content data stream may further comprise picture data of the media program and/or program information of the media program.
The program information may include: program identification, program name, and/or program airtime.
The program distribution platform may send the first content data stream to the server.
S102: the server obtains first audio data in a first content data stream or first sound wave data corresponding to the first content data generated by a sound wave encoder.
The server obtaining the first audio data in the first content data stream may include: the server acquires all or part of the content of the audio data in the first content data stream, and takes all or part of the acquired audio data as first audio data. For example, audio data in the first content data stream for reminding a user to participate in the interaction of the media program may be obtained as the first audio data.
The first sound wave data corresponding to the first content data stream may be generated by a sound wave encoder according to program information in the first content data stream. For example, the first sound wave data may be generated by the sound wave encoder based on the program identifier or the program name.
Further, the frequency of the first sound wave data may lie in a range inaudible to the human ear.
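As a concrete illustration of how a sound wave encoder might place a program identifier into an inaudible band, the sketch below uses simple binary FSK with two near-ultrasonic tones. This is a hypothetical minimal example, not the patent's actual encoder; the sample rate, tone frequencies, bit duration, and amplitude are all assumed values.

```python
import math

SAMPLE_RATE = 48000        # Hz; high enough to carry near-ultrasonic tones
F0, F1 = 18500.0, 19500.0  # assumed tone frequencies for bits 0 and 1
BIT_SECONDS = 0.05         # assumed duration of each bit tone

def encode_program_id(program_id: int, n_bits: int = 16) -> list:
    """Encode a program identifier as a sequence of inaudible FSK tones."""
    samples = []
    n = int(SAMPLE_RATE * BIT_SECONDS)       # samples per bit
    for i in range(n_bits - 1, -1, -1):      # most significant bit first
        freq = F1 if (program_id >> i) & 1 else F0
        for k in range(n):
            # low amplitude so the tone stays unobtrusive when mixed in
            samples.append(0.1 * math.sin(2 * math.pi * freq * k / SAMPLE_RATE))
    return samples
```

A real encoder would additionally add a synchronization preamble and error correction so the second terminal can locate and verify the payload in a noisy recording.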
S103: the server determines first characteristic information corresponding to a first content data stream according to the first audio data or the first sound wave data.
The server may determine first characteristic information corresponding to a first content data stream according to the first audio data or the first sound wave data.
Specifically: when the server acquires first audio data, extracting audio feature information of the first audio data according to a first extraction rule, and taking the audio feature information as first feature information; or when the server acquires the first sound wave data, the information for generating the sound wave data can be used as the first characteristic information, or the sound wave data can be decoded, and the information obtained by decoding can be used as the first characteristic information. For example, the program identifier for generating acoustic data may be used as the first feature information.
Extracting the audio feature information of the first audio data according to the first extraction rule may specifically include: performing a Fourier transform on the first audio data frame by frame; extracting, in the frequency domain, the frequency dense points of each transformed frame; forming a cross vector from the frequency dense points of two adjacent transformed frames; and using the cross vectors as the first characteristic information.
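The frame-wise extraction described above can be sketched in pure Python as follows. This is a toy illustration under assumed parameters: a 64-sample frame and a naive single-peak DFT search stand in for the "frequency dense points"; a production system would use an FFT library, windowing, and several peaks per frame.

```python
import math

FRAME = 64  # samples per frame (deliberately tiny for illustration)

def frame_peak_bin(frame: list) -> int:
    """Return the DFT bin with the largest magnitude (the 'frequency dense point')."""
    best_bin, best_mag = 0, -1.0
    for b in range(1, FRAME // 2):  # skip DC; positive frequencies only
        re = sum(x * math.cos(2 * math.pi * b * k / FRAME) for k, x in enumerate(frame))
        im = sum(-x * math.sin(2 * math.pi * b * k / FRAME) for k, x in enumerate(frame))
        mag = re * re + im * im
        if mag > best_mag:
            best_bin, best_mag = b, mag
    return best_bin

def fingerprint(samples: list) -> list:
    """Pair the peak bins of adjacent frames into 'cross vectors'."""
    peaks = [frame_peak_bin(samples[i:i + FRAME])
             for i in range(0, len(samples) - FRAME + 1, FRAME)]
    return list(zip(peaks, peaks[1:]))
```

Pairing peaks across adjacent frames makes the feature robust to absolute volume, since only the locations of spectral peaks, not their heights, enter the fingerprint.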
After the first feature information is determined, a correspondence between the first content data stream and the first feature information may be established. The first content data stream and the first feature information may be in a one-to-one correspondence or a one-to-many correspondence.
S104: the server sends the first content data stream or the first content data stream mixed with the first sound wave data to a first terminal for playing.
The first terminal may be a playback device for playing back the first content data stream. For example, the device may be a television, a tablet computer, a mobile phone, etc.
The server may send the first content data stream to a first terminal for playback. When the server acquires the first sound wave data corresponding to the first content data generated by the sound wave encoder in S102, the server may further send the first content data stream mixed with the first sound wave data to the first terminal for playing.
Mixing the first sound wave data into the first content data stream may include superimposing the first sound wave data onto the first content data stream.
Further, the first sound wave data may be mixed into the first content data stream once per preset time interval; alternatively, the first sound wave data may be superimposed onto the first audio data of the first content data stream.
When the first sound wave data is mixed into the first content data stream, it is simply superimposed; no data processing such as compression or re-encoding of the first content data stream is required. The first content data is therefore not degraded, and the audio-visual effect of the program is unaffected during playback.
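The superposition step can be illustrated as plain sample-wise addition, which is exactly why no compression or re-encoding of the stream is needed. The function below is a hypothetical sketch; `interval` corresponds to the preset time interval expressed in samples.

```python
def mix_sound_wave(content: list, marker: list, interval: int) -> list:
    """Additively superimpose `marker` into `content` every `interval` samples.

    Plain sample-wise addition: the content stream itself is never
    compressed or re-encoded, so it is not degraded.
    """
    mixed = list(content)  # copy; leave the original stream untouched
    pos = 0
    while pos + len(marker) <= len(mixed):
        for k, m in enumerate(marker):
            mixed[pos + k] += m
        pos += interval
    return mixed
```

In practice the marker amplitude would be kept small (and its frequency inaudible, as noted above) so the superposition does not audibly alter the program.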
S105: and the first terminal receives and plays the first content data stream sent by the server or the first content data stream mixed with the first sound wave data.
S106: the second terminal records the audio information played by the first terminal and sends the recorded audio information or second characteristic information determined according to the audio information to a server.
The second terminal may be a multimedia device having a recording function. For example, the device can be a mobile phone or a tablet computer.
In another embodiment, the second terminal may begin to record the audio information played by the first terminal after receiving a trigger signal to start collecting information. The trigger signal may be actively generated by the user: for example, a vibration signal from the user shaking the phone, the user touching a trigger area on the display screen of the second terminal, or the user pressing a trigger button on the second terminal.
The trigger signal may further include the client application on the second terminal being launched in the background.
After the second terminal receives the trigger signal, the audio information played by the first terminal can be recorded.
The audio information recorded by the second terminal may include: the second audio data or the second audio data mixed with the second sound wave data.
The second terminal may send the recorded audio information to a server.
In another embodiment, the second terminal may determine second feature information according to the recorded audio information, and send the second feature information to a server.
The determining the second characteristic information according to the recorded audio information may specifically include: when the recorded audio information comprises second audio data, extracting feature information of the second audio data according to a first extraction rule, and taking the extracted feature information as second feature information; or when the recorded audio information is second audio data mixed with second sound wave data, decoding the second sound wave data, and taking the decoded information as second characteristic information.
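For the branch where the recorded audio contains second sound wave data, decoding can be sketched as measuring the signal energy at each candidate tone frequency per bit window and picking the stronger one. This assumes the same hypothetical FSK scheme (two near-ultrasonic tones, fixed bit duration) as a stand-in for the acoustic code, which the patent does not specify.

```python
import math

SAMPLE_RATE = 48000        # Hz; must match the assumed encoder
F0, F1 = 18500.0, 19500.0  # assumed tone frequencies for bits 0 and 1
BIT_SECONDS = 0.05         # assumed duration of each bit tone

def tone_power(frame: list, freq: float) -> float:
    """Signal power at `freq`, via direct correlation with a complex tone."""
    re = sum(x * math.cos(2 * math.pi * freq * k / SAMPLE_RATE)
             for k, x in enumerate(frame))
    im = sum(x * math.sin(2 * math.pi * freq * k / SAMPLE_RATE)
             for k, x in enumerate(frame))
    return re * re + im * im

def decode_bits(samples: list, n_bits: int = 16) -> list:
    """Recover the bit sequence by comparing per-window power at F0 vs F1."""
    n = int(SAMPLE_RATE * BIT_SECONDS)  # samples per bit window
    bits = []
    for i in range(n_bits):
        frame = samples[i * n:(i + 1) * n]
        bits.append(1 if tone_power(frame, F1) > tone_power(frame, F0) else 0)
    return bits
```

A deployed decoder would first band-pass filter the recording and align to a synchronization preamble before slicing bit windows.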
S107: the server may receive the recorded audio information sent from the second terminal or the second characteristic information sent from the second terminal.
S108: the server matches second characteristic information determined or received according to the audio information with the first characteristic information.
The server may match second characteristic information determined or received from the audio information with the first characteristic information.
If the server receives second characteristic information sent by the second terminal, it can match the second characteristic information against the first characteristic information directly.
If the server receives the recorded audio information sent by the second terminal, second characteristic information can be determined according to the audio information, and the second characteristic information is matched with the first characteristic information.
The method for determining the second feature information according to the audio information may be the same as the method for determining the second feature information according to the recorded audio information by the second terminal in step S106, which is not described herein.
S109: when the second characteristic information is matched with the first characteristic information, the server pushes second content information pre-associated with the first content data stream to the second terminal.
The server may push second content information pre-associated with the first content data stream to the second terminal when the second characteristic information matches the first characteristic information. The second content information may include: interactive content information of the first content data stream. For example, it may be an interactive message or an interactive page, etc.
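The matching decision in S109 can be sketched as comparing the recorded fingerprint against the reference and pushing content only when enough cross vectors agree. The ratio test and the 0.6 threshold below are illustrative assumptions, not values from the patent.

```python
def match_ratio(first: list, second: list) -> float:
    """Fraction of the recorded fingerprint's cross vectors found in the reference."""
    if not second:
        return 0.0
    ref = set(first)  # first characteristic information (reference fingerprint)
    hits = sum(1 for v in second if v in ref)
    return hits / len(second)

def decide_push(first: list, second: list, threshold: float = 0.6) -> bool:
    """Push the pre-associated interactive content only on a sufficient match."""
    return match_ratio(first, second) >= threshold
```

A recording made in a noisy living room will rarely match the reference exactly, which is why a threshold on the agreement ratio, rather than exact equality, is the natural decision rule here.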
The following describes the wireless media interaction method from the server's perspective.
FIG. 3 is a flow chart of an embodiment of a server-based wireless media interaction method of the present application. As shown in fig. 3, the method may include:
s201: first audio data in the first content data stream is acquired or first sound wave data corresponding to the first content data stream is generated.
S202: and determining first characteristic information corresponding to the first content data stream according to the first audio data or the first sound wave data.
S203: and sending the first content data stream or the first content data stream mixed with the first sound wave data to a first terminal for playing.
S204: and receiving the audio information recorded by the second terminal or receiving second characteristic information determined by the second terminal according to the audio information.
S205: and matching the second characteristic information determined or received according to the audio information with the first characteristic information.
S206: and pushing second content information pre-associated with the first content data stream to the second terminal when the second characteristic information is matched with the first characteristic information.
The specific content of each step in the above embodiment may refer to the embodiment of the wireless media interaction method shown in fig. 2, and is not repeated here.
According to the wireless media interaction method provided by this embodiment, the user can connect to a media program through audio data or sound wave data at the second terminal, achieving wireless media interaction. The playback volume of the media program is generally sufficient for the client to record the audio data related to the interactive content, so the success rate of connecting the user to the media interaction content is improved and the user can reliably interact with the media. Meanwhile, in the method provided by the embodiments of the present application, the second terminal only needs to be within audio-collection distance of the first terminal playing the media, which is convenient for the user to operate. On the other hand, because the sound wave data is directly superimposed on the first content data stream, the stream does not need lossy processing such as compression, and the audio-visual playback of the media program is not affected.
A wireless media interaction system of the present application is described below. FIG. 1 is a schematic diagram illustrating the composition of an embodiment of a wireless media interaction system according to the present application. As shown in fig. 1, the wireless media interaction system may include: program distribution platform 100, server 200, first terminal 300, and second terminal 400.
Wherein,
the program distribution platform 100 may be configured to send a first content data stream to a server 200.
The server 200 may be configured to obtain first audio data in a first content data stream or first sound wave data generated by a sound wave encoder and corresponding to the first content data stream; the server 200 determines first characteristic information corresponding to a first content data stream according to the first audio data or the first sound wave data; the server 200 sends the first content data stream or the first content data stream mixed with the first sound wave data to the first terminal 300 for playing;
the first terminal 300 may be configured to receive and play the first content data stream sent from the server 200 or the first content data stream mixed with the first sound wave data.
The second terminal 400 may be configured to record audio information played by the first terminal 300, and send the recorded audio information to the server 200; or may be used to record the audio information played by the first terminal 300, determine second feature information according to the audio information, and send the second feature information to the server 200.
The server 200 is further configured to receive the recorded audio information sent from the second terminal 400 or the second feature information sent from the second terminal 400, and match the second feature information determined or received according to the audio information with the first feature information; when the second characteristic information matches the first characteristic information, the server 200 pushes second content information, which is associated with the first content data stream in advance, to the second terminal 400.
FIG. 4 is a block diagram of one embodiment of a server in a wireless media interactive system of the present application. As shown in fig. 4, the server 200 may include: an audio/acoustic data acquisition module 201, a first characteristic information determination module 202, a first content data stream transmission module 203, an information reception module 204, a characteristic information matching module 205, and a second content information push module 206.
Wherein,
the audio/sound wave data obtaining module 201 may be configured to obtain audio data in the first content data stream or generate sound wave data corresponding to the first content data stream.
The first characteristic information determining module 202 may be configured to determine first characteristic information corresponding to a first content data stream according to the first audio data or the first sound wave data in the audio/sound wave data obtaining module 201.
The first content data stream sending module 203 may be configured to send the first content data stream or the first content data stream mixed with the first sound wave data to the first terminal 300 for playing.
The information receiving module 204 may be configured to receive audio information recorded by the second terminal 400 or receive second characteristic information determined by the second terminal 400 according to the audio information.
The characteristic information matching module 205 may be configured to match the first characteristic information with the second characteristic information, which is either received by the information receiving module 204 or determined according to the audio information received by that module.
The second content information pushing module 206 may be configured to push, to the second terminal 400, second content information associated in advance with the first content data stream when the characteristic information matching module 205 determines that the second characteristic information matches the first characteristic information.
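The frame-wise Fourier fingerprinting recited in the claims — transforming the audio frame by frame, extracting the frequency dense points of each frame, and joining the points of adjacent frames into cross vectors used as characteristic information — can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation; the frame size, the number of peaks per frame, the positional (non-sliding) comparison, and the match threshold are all assumptions introduced for the example.

```python
import numpy as np

def extract_features(audio, frame_size=1024, peaks_per_frame=5):
    """Frame-wise FFT fingerprint, loosely following the claimed extraction
    rule: transform each frame, keep the strongest frequency bins (the
    'frequency dense points'), and pair adjacent frames' points into
    cross vectors used as characteristic information."""
    frames = [audio[i:i + frame_size]
              for i in range(0, len(audio) - frame_size + 1, frame_size)]
    peak_sets = []
    for frame in frames:
        spectrum = np.abs(np.fft.rfft(frame))
        # indices of the highest-magnitude frequency bins, in bin order
        peaks = np.sort(np.argsort(spectrum)[-peaks_per_frame:])
        peak_sets.append(peaks)
    # cross vectors: pair each frame's peaks with the next frame's peaks
    return [tuple(zip(a, b)) for a, b in zip(peak_sets, peak_sets[1:])]

def match(features_a, features_b, threshold=0.5):
    """Declare a match when enough cross vectors agree position by position
    (a real system would also search over time offsets)."""
    n = min(len(features_a), len(features_b))
    if n == 0:
        return False
    hits = sum(1 for a, b in zip(features_a, features_b) if a == b)
    return hits / n >= threshold
```

In use, the server would run `extract_features` over the first audio data to obtain the first characteristic information, run it again over the recorded second audio data, and call `match` to decide whether to push the second content information.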
The wireless media interaction system, the server, and the client provided in this embodiment correspond to the method embodiments of the present application, respectively, so the method embodiments can be realized and their technical effects achieved; details are not repeated herein.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually fabricating an integrated circuit chip, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the source code before compiling must likewise be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained merely by programming the method flow, with a little logic, into an integrated circuit using one of the hardware description languages described above.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller may also be implemented as part of the control logic of a memory.
Those skilled in the art will also appreciate that, in addition to implementing the controller purely in computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a kind of hardware component, and the means included in it for performing various functions may also be regarded as structures within the hardware component. Indeed, means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general purpose hardware platform. Based on such an understanding, the essence of the technical solutions of the present application, or the part contributing to the prior art, may be embodied in the form of a software product. In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The computer software product may include instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the various embodiments, or portions of embodiments, of the present application. The computer software product may be stored in a memory, which may include volatile memory in a computer-readable medium, random access memory (RAM), and/or nonvolatile memory such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
In this specification, the embodiments are described in a progressive manner; for identical and similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief, and for relevant parts reference may be made to the description of the method embodiments.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Although the present application has been described by way of embodiments, those of ordinary skill in the art will recognize that there are many variations and modifications of the present application that do not depart from its spirit, and it is intended that the appended claims encompass such variations and modifications.

Claims (15)

1. A wireless media interaction method, applied to a second terminal, the method comprising:
after receiving a trigger signal for starting information acquisition, starting to record first audio data in the process of playing a first content data stream by a first terminal, wherein the first content data stream is sent to the first terminal by a server to be played, the first audio data has first characteristic information, and the first characteristic information comprises audio characteristic information of the first audio data extracted according to a first extraction rule; the first extraction rule specifically includes: performing Fourier transform on the audio data frame by frame; extracting frequency dense points in each frame of the Fourier transformed audio data in a frequency domain; forming a cross vector by frequency dense points extracted from two adjacent frames of the audio data after Fourier transformation; taking the cross vector as characteristic information;
transmitting the recorded second audio data to a server; or determining second characteristic information according to the recorded second audio data, and sending the second characteristic information to a server; so that the server matches the first characteristic information of the first audio data with the second characteristic information of the second audio data, and when the first characteristic information is matched with the second characteristic information, pushing second content information associated with the first content data stream to the second terminal;
receiving second content information which is pushed by the server and is associated with the first content data stream;
and displaying the second content information.
2. The method of claim 1, wherein after receiving the trigger signal for starting to collect information, starting to record the first audio data in the process that the first terminal plays the first content data stream, including:
and after receiving a vibration signal of a user shaking the mobile phone, starting to record first audio data in the process of playing the first content data stream by the first terminal.
3. The method of claim 1, wherein the second content information comprises: interactive content information of the first content data stream.
4. A method according to claim 3, wherein the first content data stream and the first characteristic information are in a one-to-one correspondence or a one-to-many correspondence.
5. A method according to claim 3, wherein the second characteristic information comprises: and extracting characteristic information of the recorded audio data according to the first extraction rule.
6. The method of claim 1, wherein the first audio data during the playing of the first content data stream by the first terminal comprises: all or part of the audio data in the first content data stream.
7. The method of claim 1, wherein the first content data stream comprises: audio data of a media program, picture data of the media program and/or program information of the media program.
8. The method of claim 7, wherein the program information comprises: program identification, program name, and/or program airtime.
9. A wireless media interaction method, applied to a second terminal, the method comprising:
after receiving a trigger signal for starting to collect information, starting to record first audio data in the process of playing a first content data stream mixed with first sound wave data by a first terminal; the process of playing the first content data stream by the first terminal comprises the following steps: the method comprises the steps that a server extracts first audio data and program information from a first content data stream, a sound encoder is utilized to generate first sound wave data according to the program information, the first sound wave data are overlapped in the first content data stream, the first content data stream mixed with the first sound wave data is sent to a first terminal to be played, and the first audio data have first characteristic information;
transmitting the recorded second audio data to a server; or determining second characteristic information according to the recorded second audio data, and sending the second characteristic information to a server; the second audio data includes audio data mixed with second sound wave data;
receiving second content information which is pushed by the server and is associated with the first content data stream; the second content information is pushed to the second terminal by the server when first characteristic information of the first audio data matches second characteristic information of the second audio data; the first characteristic information includes: data information used for generating the first sound wave data, or information obtained by decoding the first sound wave data; the second characteristic information includes: information obtained by decoding the second sound wave data;
and displaying the second content information.
10. The method of claim 9, wherein after receiving the trigger signal for starting to collect information, starting to record the first audio data in the process that the first terminal plays the first content data stream mixed with the first sound wave data, comprising:
and after receiving a vibration signal of a user shaking the mobile phone, starting to record first audio data in the process of playing the first content data stream mixed with the first sound wave data by the first terminal.
11. The method of claim 9, wherein the second content information comprises: interactive content information of the first content data stream.
12. The method of claim 9, wherein superimposing first acoustic data in the first content data stream comprises: the first sound wave data is mixed in the first content data stream once at preset time intervals.
13. The method of claim 9, wherein superimposing first acoustic data in the first content data stream comprises: superimposing the first sound wave data in first audio data of the first content data stream; the first audio data is all or part of the content of the audio data in the first content data stream.
14. A wireless media interactive system, comprising: the system comprises a program issuing platform, a server, a first terminal and a second terminal; wherein,
the program distribution platform is used for sending a first content data stream to the server;
the server is used for acquiring first audio data in a first content data stream; the first audio data has first characteristic information; transmitting the first content data stream to a first terminal for playing; the first characteristic information comprises audio characteristic information of the first audio data extracted according to a first extraction rule; the first extraction rule specifically includes: performing Fourier transform on the audio data frame by frame; extracting frequency dense points in each frame of the Fourier transformed audio data in a frequency domain; forming a cross vector by frequency dense points extracted from two adjacent frames of the audio data after Fourier transformation; taking the cross vector as characteristic information;
the first terminal is used for receiving and playing the first content data stream sent by the server;
the second terminal is used for beginning to record the first audio data in the process of playing the first content data stream by the first terminal after receiving the trigger signal for beginning to collect information, and sending the recorded second audio data to the server;
the server is further configured to receive second audio data recorded from the second terminal, and determine second feature information according to the second audio data; matching the second characteristic information with the first characteristic information; pushing second content information pre-associated with the first content data stream to the second terminal when the second characteristic information is matched with the first characteristic information; wherein the second content information includes: interactive content information of the first content data stream.
15. A wireless media interactive system, comprising: the system comprises a program issuing platform, a server, a first terminal and a second terminal; wherein,
the program distribution platform is used for sending a first content data stream to the server;
the server is used for extracting first audio data and program information from a first content data stream, and generating first sound wave data by utilizing a sound wave encoder according to the program information; the first sound wave data has first characteristic information; the first characteristic information includes: the data information is used for generating the first sound wave data or information obtained by decoding the first sound wave data; superimposing the first sound wave data in the first content data stream; transmitting a first content data stream mixed with the first sound wave data to a first terminal for playing;
the first terminal is used for receiving and playing a first content data stream mixed with the first sound wave data sent by the server;
the second terminal is used for starting to record the first audio data in the process of playing the first content data stream mixed with the first sound wave data by the first terminal after receiving the trigger signal for starting to acquire information, and sending the recorded second audio data to the server; the second audio data includes audio data mixed with second sound wave data;
the server is further configured to receive second audio data recorded from the second terminal, and determine second feature information according to the second audio data; matching the second characteristic information with the first characteristic information; when the second characteristic information is matched with the first characteristic information, the server pushes second content information pre-associated with the first content data stream to the second terminal; wherein the second characteristic information includes: information obtained by decoding the second sound wave data; the second content information includes: interactive content information of the first content data stream.
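Claims 9 to 15 describe a sound wave encoder that turns program information into first sound wave data, superimposes it on the content stream, and later recovers the information by decoding the sound wave data captured in the second terminal's recording. A minimal illustration of such an encoder/decoder pair, using frequency-shift keying on near-ultrasonic tones, might look like the following; the sample rate, tone frequencies, bit timing, and the use of a plain integer identifier are assumptions for the sketch, not values from the patent.

```python
import numpy as np

SR = 48000               # sample rate (assumed)
F0, F1 = 18000, 19000    # near-ultrasonic FSK tones for bits 0/1 (assumed)
BIT_LEN = 480            # samples per bit: 10 ms, so tones fall on exact FFT bins

def encode_acoustic(program_id: int, bits: int = 16) -> np.ndarray:
    """Encode a program identifier as an FSK tone burst that could be
    superimposed on the program audio (a simplified 'sound wave encoder')."""
    t = np.arange(BIT_LEN) / SR
    chunks = []
    for i in range(bits - 1, -1, -1):          # most significant bit first
        f = F1 if (program_id >> i) & 1 else F0
        chunks.append(np.sin(2 * np.pi * f * t))
    return np.concatenate(chunks)

def decode_acoustic(signal: np.ndarray, bits: int = 16) -> int:
    """Recover the identifier by checking which tone dominates each bit slot."""
    bin0 = round(F0 * BIT_LEN / SR)            # FFT bin of the 0-tone
    bin1 = round(F1 * BIT_LEN / SR)            # FFT bin of the 1-tone
    value = 0
    for i in range(bits):
        chunk = signal[i * BIT_LEN:(i + 1) * BIT_LEN]
        spectrum = np.abs(np.fft.rfft(chunk))
        value = (value << 1) | (1 if spectrum[bin1] > spectrum[bin0] else 0)
    return value
```

Because the tones sit well above typical program audio, the identifier survives mixing with the content stream; in this simplified model the server would decode the second sound wave data from the recording and compare the recovered identifier with the one used to generate the first sound wave data.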
CN202110008610.6A 2016-02-17 2016-02-17 Wireless media interaction method and system Active CN112752144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110008610.6A CN112752144B (en) 2016-02-17 2016-02-17 Wireless media interaction method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110008610.6A CN112752144B (en) 2016-02-17 2016-02-17 Wireless media interaction method and system
CN201610088055.1A CN107094262B (en) 2016-02-17 2016-02-17 Wireless media interaction method, system and server

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201610088055.1A Division CN107094262B (en) 2016-02-17 2016-02-17 Wireless media interaction method, system and server

Publications (2)

Publication Number Publication Date
CN112752144A CN112752144A (en) 2021-05-04
CN112752144B true CN112752144B (en) 2024-03-08

Family

ID=59645973

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110008610.6A Active CN112752144B (en) 2016-02-17 2016-02-17 Wireless media interaction method and system
CN201610088055.1A Active CN107094262B (en) 2016-02-17 2016-02-17 Wireless media interaction method, system and server

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201610088055.1A Active CN107094262B (en) 2016-02-17 2016-02-17 Wireless media interaction method, system and server

Country Status (1)

Country Link
CN (2) CN112752144B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108833964B (en) * 2018-06-11 2022-01-25 阿依瓦(北京)技术有限公司 Real-time continuous frame information implantation identification system
CN108769262B (en) * 2018-07-04 2023-11-17 厦门声连网信息科技有限公司 Large-screen information pushing system, large-screen equipment and method
CN112637147B (en) * 2020-12-13 2022-08-05 青岛希望鸟科技有限公司 Method, terminal and server for establishing and connecting communication service through audio
WO2023102804A1 (en) * 2021-12-09 2023-06-15 青岛希望鸟科技有限公司 Method for creating and connecting communication service through audio, and terminal and server

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393137B1 (en) * 1999-06-17 2002-05-21 Raytheon Company Multi-resolution object classification method employing kinematic features and system therefor
CN103763586A (en) * 2014-01-16 2014-04-30 北京酷云互动科技有限公司 Television program interaction method and device and server
CN104023247A (en) * 2014-05-29 2014-09-03 腾讯科技(深圳)有限公司 Methods and devices for obtaining and pushing information and information interaction system
CN104050259A (en) * 2014-06-16 2014-09-17 上海大学 Audio fingerprint extracting method based on SOM (Self Organized Mapping) algorithm
CN104519373A (en) * 2014-12-16 2015-04-15 微梦创科网络科技(中国)有限公司 Media program interaction method and related equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8593502B2 (en) * 2006-01-26 2013-11-26 Polycom, Inc. Controlling videoconference with touch screen interface
CN103123787B (en) * 2011-11-21 2015-11-18 金峰 A kind of mobile terminal and media sync and mutual method
CN103873935A (en) * 2012-12-17 2014-06-18 联想(北京)有限公司 Data processing method and device
US9965524B2 (en) * 2013-04-03 2018-05-08 Salesforce.Com, Inc. Systems and methods for identifying anomalous data in large structured data sets and querying the data sets
CN103402118B (en) * 2013-07-05 2017-12-01 Tcl集团股份有限公司 A kind of media program interaction method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393137B1 (en) * 1999-06-17 2002-05-21 Raytheon Company Multi-resolution object classification method employing kinematic features and system therefor
CN103763586A (en) * 2014-01-16 2014-04-30 北京酷云互动科技有限公司 Television program interaction method and device and server
CN104023247A (en) * 2014-05-29 2014-09-03 腾讯科技(深圳)有限公司 Methods and devices for obtaining and pushing information and information interaction system
CN104378683A (en) * 2014-05-29 2015-02-25 腾讯科技(深圳)有限公司 Program based interaction method and device
CN104050259A (en) * 2014-06-16 2014-09-17 上海大学 Audio fingerprint extracting method based on SOM (Self Organized Mapping) algorithm
CN104519373A (en) * 2014-12-16 2015-04-15 微梦创科网络科技(中国)有限公司 Media program interaction method and related equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Testing vibration signals in printing equipment based on a data acquisition card; Huang Yingwei; Chu Xiaoke; Guangdong Printing (Issue 05); full text *

Also Published As

Publication number Publication date
CN107094262A (en) 2017-08-25
CN107094262B (en) 2021-02-12
CN112752144A (en) 2021-05-04

Similar Documents

Publication Publication Date Title
CN112752144B (en) Wireless media interaction method and system
US10341694B2 (en) Data processing method and live broadcasting method and device
US9877066B2 (en) Synchronization of multimedia streams
US8704948B2 (en) Apparatus, systems and methods for presenting text identified in a video image
TW201132122A (en) System and method in a television for providing user-selection of objects in a television program
US20100122277A1 (en) device and a method for playing audio-video content
KR20180050961A (en) Method and device for decoding multimedia file
CN110663079A (en) Method and system for correcting input generated using automatic speech recognition based on speech
KR101991188B1 (en) Promotion information processing method, device, and apparatus, and non-volatile computer storage medium
US20160150284A1 (en) Dynamic channel selection for live and previously broadcast content
KR102063463B1 (en) Multimedia information reproduction method and system, standardization server, live broadcasting terminal
CN103108229A (en) Method for identifying video contents in cross-screen mode through audio frequency
CN102196268A (en) Method, device and system for processing multimedia data
US20180225445A1 (en) Display apparatus and method for controlling display apparatus thereof
CN110809169B (en) Internet comment information directional shielding system and method
CN111918074A (en) Live video fault early warning method and related equipment
KR20030016406A (en) Content analysis apparatus
KR101180783B1 (en) User Customized Broadcasting Service Method Using TTS
CN112738564A (en) Data processing method and device, electronic equipment and storage medium
CN111581403B (en) Data processing method, device, electronic equipment and storage medium
US20230105009A1 (en) Remote control button detection
EP4386653A1 (en) Placing orders for a subject included in a multimedia segment
CN106454533A (en) A method and device for displaying play records
CN118233664A (en) Video processing method and device and electronic equipment
CN114979729A (en) Video data processing method and device and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant