CN114979729A - Video data processing method and device and vehicle - Google Patents
- Publication number
- CN114979729A (application number CN202110187609.4A)
- Authority
- CN
- China
- Prior art keywords
- video stream
- terminal
- video
- information
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4122—Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
Abstract
The invention provides a video data processing method comprising the steps of: acquiring a video stream for screen projection; writing cropping information into the video stream to generate a target video stream; and acquiring target play data corresponding to the cropping information in the target video stream. The invention also discloses a video data processing device and a vehicle. With the video data processing method, device and vehicle, a terminal can process the acquired video stream to extract the target play data, so that a local region of the screen-projected video can be emphasized: the information of interest in the video is filtered out on its own and then displayed to the user, improving the user experience of the product.
Description
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a method and an apparatus for processing video data, and a vehicle.
Background
With continuing technological development there are more and more ways to play video. Video sharing between playing terminals has become one of the favored playing modes in daily life, and screen projection is currently an important way of sharing video.
At present, it is common to synchronize a video file played on a smart device such as a mobile phone, tablet or computer to another device for playing and viewing by screen casting. For example, in typical software products that project from a mobile phone to a vehicle, the video stream generated at the mobile phone end is encoded and sent to the vehicle end, which decodes it and displays it in full screen. A local region of the screen-projected video cannot be emphasized, so the information of interest in the video cannot be filtered out on its own and then displayed to the user.
Disclosure of Invention
The invention aims to provide a video data processing method, device and vehicle that use a terminal to process an acquired video stream and extract target play data, solving the problem that a local region of a screen-projected video cannot be emphasized.
Specifically, the present invention provides a method for processing video data, comprising the steps of: acquiring a video stream for screen projection; writing cropping information into the video stream to generate a target video stream; and acquiring the play data corresponding to the cropping information in the target video stream.
Specifically, the present invention provides a method for processing video data, comprising the steps of: a first terminal sends a video stream for screen projection to a second terminal; the second terminal writes cropping information into the video stream to generate a target video stream; and the second terminal acquires the play data corresponding to the cropping information in the target video stream.
Specifically, the present invention provides a video data processing apparatus, comprising: a memory and a processor; the memory is used for storing a computer program; the processor is adapted to execute the computer program to implement the steps of the method of processing video data as described above.
Specifically, the invention provides a vehicle comprising a second terminal, wherein the second terminal is configured to, after acquiring a video stream for screen projection, write cropping information into the video stream to generate a target video stream, and to acquire the play data corresponding to the cropping information in the target video stream.
According to the video data processing method, device and vehicle, after the video stream for screen projection is obtained, cropping information is written into the video stream, a target video stream is generated, and the play data corresponding to the cropping information in the target video stream is obtained. A local region of the screen-projected video can thus be emphasized on the terminal: the information of interest in the video is filtered out on its own and then displayed to the user, improving the user experience of the product.
Drawings
Fig. 1 is a schematic view of an application scenario of a method for processing video data according to an embodiment of the present invention;
fig. 2 shows a block diagram of a terminal;
fig. 3 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating writing of cropping information in a method for processing video data according to an embodiment of the present invention;
fig. 5 is a partial flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 6 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating writing of cropping information in a method for processing video data according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a device for processing video data according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention.
All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
As shown in fig. 1, an application scenario of a video data processing method according to an embodiment of the present invention includes a mobile terminal 11 and a vehicle end 12. The mobile terminal 11 is configured to send a video stream for projection to the vehicle end 12 and may further generate cropping information according to an operation instruction. The vehicle end 12 is configured to write the cropping information into the video stream to generate a target video stream, to obtain the target play data corresponding to the cropping information in the target video stream, and to control the playing device to play that data; it may also receive an input operation instruction and, after sending the operation instruction to the mobile terminal 11, receive the cropping information sent by the mobile terminal 11. Cropping of the target video stream is achieved during decoding by the Android hardware decoder, so that a local region of the screen-projected video can be emphasized: the information of interest in the video is filtered out on its own and displayed to the user, improving the user experience of the product.
In an embodiment, the mobile terminal 11 may be a smart phone, smart television, television box, tablet computer, or the like, configured to send a video stream for screen projection to the vehicle end 12 and, optionally, to generate the cropping information according to an operation instruction.
In an embodiment, the vehicle end 12 may be a vehicle-mounted terminal, a personal computer (PC), an all-in-one machine, a laptop computer, or the like, configured to write the cropping information into the video stream to generate a target video stream, to acquire the target play data corresponding to the cropping information in the target video stream, and to control the playing device to play the target play data; it may also receive an input operation instruction, send the operation instruction to the mobile terminal 11, and then receive the cropping information sent by the mobile terminal 11. In one embodiment, the vehicle-mounted terminal includes a vehicle-mounted entertainment host that integrates audio/video playing, navigation, vehicle-mounted communication and other functions to give drivers a comfortable and convenient driving experience; typical functions include AM/FM/digital/satellite broadcasting, CD/DVD playing, multimedia peripheral access, rear-seat entertainment, navigation, camera integration, Bluetooth connection and communication connection.
Fig. 2 shows a block diagram of a terminal. The structure shown in fig. 2 is applicable to the mobile terminal 11 and the car end 12. As shown in FIG. 2, the terminal 10 includes a memory 102, a memory controller 104, one or more (only one shown) processors 106, a peripherals interface 108, a radio frequency module 110, a positioning module 112, a camera module 114, an audio module 116, a screen 118, and a key module 120. These components communicate with each other via one or more communication buses/signal lines 122.
It will be appreciated that the configuration shown in FIG. 2 is merely exemplary, and that terminal 10 may include more or fewer components than shown in FIG. 2, or may have a different configuration than shown in FIG. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
The memory 102 may be used to store software programs and modules, such as the program instructions/modules corresponding to the video data processing method in the embodiments of the present invention. The processor 106 executes various functional applications and data processing by running the software programs and modules stored in the memory 102, thereby implementing the above-mentioned video data processing method.
The memory 102 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 102 may further include memory located remotely from the processor 106, which may be connected to the terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. Access to the memory 102 by the processor 106 and possibly other components may be under the control of the memory controller 104.
In some embodiments, the peripheral interface 108, the processor 106, and the memory controller 104 may be implemented in a single chip. In other examples, they may each be implemented by a separate chip.
The RF module 110 is used for receiving and transmitting electromagnetic waves and converting between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 110 may include various existing circuit elements for performing these functions, such as an antenna, an RF transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory, and so forth. The RF module 110 may communicate with various networks such as the internet, an intranet or a wireless network, or communicate with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network, and may use various communication standards, protocols and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, 802.11b, 802.11g and/or 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (WiMAX), other suitable protocols for instant messaging and e-mail, and any other suitable communication protocol, including protocols not yet developed.
The location module 112 is used for obtaining the current location of the terminal 10. Examples of the positioning module 112 include, but are not limited to, a Global Positioning System (GPS), a wireless local area network-based positioning technology, or a mobile communication network-based positioning technology.
The camera module 114 is used to take a picture or video. The pictures or videos taken may be stored in the memory 102 and transmitted through the radio frequency module 110.
The screen 118 provides an output interface between the terminal 10 and the user. In particular, the screen 118 displays video output to the user, the content of which may include text, graphics, video, and any combination thereof; some output results correspond to particular user interface objects. It is understood that the screen 118 may also include a touch screen, which provides both an output and an input interface between the terminal 10 and the user. In addition to displaying video output, the touch screen receives user input, such as clicks, swipes and other gesture operations, so that user interface objects respond to this input. The technique for detecting user input may be resistive, capacitive, or any other possible touch detection technique. Specific examples of touch screen display units include, but are not limited to, liquid crystal displays and light-emitting polymer displays.
The key module 120 also provides an interface for user input to the terminal 10, and the user may press various keys to cause the terminal 10 to perform various functions.
First embodiment
An embodiment of the present invention provides a method for processing video data, and fig. 3 is a flowchart of the method for processing video data according to the embodiment of the present invention. As shown in fig. 3, the method comprises the steps of:
step 310: a video stream for projection is acquired.
Specifically, the video stream may be, but is not limited to, video data in the H.264 encoding format together with audio data in the AAC encoding format or PCM format; video data in other formats (e.g. MPEG-4) and audio data in other formats (e.g. APE) may be transcoded into H.264 video data and AAC or PCM audio data. Of course, the video stream is not limited to a stream used for screen projection; it may be any video stream on which cropping is to be performed. Transcoding the video data into the H.264 format specifically comprises: decoding the video data with a video decoder, then encoding the decoded video data with an H.264 video encoder. Transcoding the audio data into the AAC or PCM format specifically comprises: decoding the audio data with an audio decoder, then encoding the decoded data with an AAC or PCM audio encoder. The decoding of the video data is performed by a video decoder supporting the source encoding format (if the video data is in the MPEG-4 format, an MPEG-4 video decoder), after which the data is encoded by an H.264 video encoder; the decoding of the audio data is performed by an audio decoder supporting the source encoding format (if the audio data is in the APE format, an APE audio decoder), after which the data is encoded by an AAC or PCM audio encoder.
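The two-stage transcode described above (decode with a decoder for the source format, then re-encode video to H.264 and audio to AAC or PCM) can be sketched as a dispatch plan. The decoder and encoder names below are illustrative placeholders, not real library identifiers:

```python
# Hypothetical sketch of choosing the decode/re-encode pair per elementary
# stream; a real pipeline would use a codec library such as FFmpeg bindings.
VIDEO_DECODERS = {"mpeg4": "mpeg4_decoder", "h264": "h264_decoder"}
AUDIO_DECODERS = {"ape": "ape_decoder", "aac": "aac_decoder", "pcm": "pcm_reader"}

def transcode_plan(video_fmt: str, audio_fmt: str) -> dict:
    """Return the (decoder, encoder) pair each elementary stream needs."""
    if video_fmt not in VIDEO_DECODERS or audio_fmt not in AUDIO_DECODERS:
        raise ValueError("unsupported source format")
    return {
        # video is always re-encoded to H.264, audio to AAC
        "video": (VIDEO_DECODERS[video_fmt], "h264_encoder"),
        "audio": (AUDIO_DECODERS[audio_fmt], "aac_encoder"),
    }

print(transcode_plan("mpeg4", "ape"))
```

The point of the dispatch is that the decoder is chosen by the source format while the encoder is fixed by the target formats named in the text.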
Three H.264 concepts are involved: the H.264 slice, the H.264 slice header, and the H.264 picture parameter set (PPS). (1) An H.264 slice is an integer number of macroblocks or macroblock pairs in raster scan order within a particular slice group; these macroblocks or macroblock pairs are not necessarily contiguous in raster scan order within the picture, and the address of each macroblock is obtained from the address of the first macroblock of the slice (described in the slice header) and the macroblock-to-slice-group mapping. (2) An H.264 slice header is the portion of an encoded slice that contains data elements related to the first or all macroblocks in the slice. (3) An H.264 picture parameter set is a syntax structure containing syntax elements that apply to zero or more coded pictures, determined by the picture parameter set identifier (pic_parameter_set_id) in each slice header.
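For orientation, identifiers such as pic_parameter_set_id (and most slice-header and parameter-set fields) are transmitted in H.264 as unsigned Exp-Golomb codes, ue(v). A minimal decoding sketch, using '0'/'1' strings in place of a real bit reader:

```python
# Decode one H.264 ue(v) Exp-Golomb value: <n> leading zero bits, a '1',
# then <n> info bits; codeNum = 2**n - 1 + info.
def read_ue(bits: str, pos: int = 0):
    """Return (value, next_position) decoded from a '0'/'1' bit string."""
    zeros = 0
    while bits[pos + zeros] == "0":
        zeros += 1
    info = bits[pos + zeros + 1 : pos + 2 * zeros + 1]
    value = (1 << zeros) - 1 + (int(info, 2) if info else 0)
    return value, pos + 2 * zeros + 1

# '1' -> 0, '010' -> 1, '011' -> 2, '00100' -> 3, '00101' -> 4 ...
assert read_ue("1")[0] == 0
assert read_ue("00101")[0] == 4
```

This is only the entropy-coding layer; a real parser also handles emulation-prevention bytes and the surrounding NAL unit syntax.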
In this embodiment, the video stream from which the target play data is to be acquired is selected and opened. After the video stream is opened, the information in its header is read first to determine whether the stream contains audio, the corresponding decoding operations are performed according to the header information, and finally the content of the video stream is read. The video stream of this embodiment is, of course, a stream that adopts the H.264 coding standard.
Step 320: and writing the cutting information into the video stream to generate a target video stream.
Specifically, the cropping information may be, but is not limited to, the frame cropping parameters (frame_crop offsets) in the H.264 SPS. The target video stream is a video stream containing the cropping information. The H.264 SPS (sequence parameter set) is a syntax structure containing syntax elements that apply to zero or more entire coded video sequences; the referenced picture parameter set is determined by the syntax element pic_parameter_set_id in the slice header, and the referenced sequence parameter set is determined by the sequence parameter set identifier (seq_parameter_set_id) in the picture parameter set. When the content of the video stream is read, the H.264 SPS data is obtained from the read content, the SPS belonging to the content portion of the file container.
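To illustrate how the SPS frame cropping parameters determine what is displayed: for a 4:2:0 stream coded as frame pictures (frame_mbs_only_flag = 1), the H.264 specification maps the four frame_crop_*_offset fields to luma samples with a crop unit of 2 in each direction. A minimal sketch under those assumptions:

```python
# Compute the displayed (cropped) size from the coded macroblock grid and
# the SPS frame_crop offsets. CropUnitX = CropUnitY = 2 holds for 4:2:0
# frame pictures (frame_mbs_only_flag = 1); other chroma formats differ.
def cropped_size(pic_width_in_mbs_minus1, pic_height_in_map_units_minus1,
                 crop_left, crop_right, crop_top, crop_bottom):
    width = (pic_width_in_mbs_minus1 + 1) * 16   # coded width in luma samples
    height = (pic_height_in_map_units_minus1 + 1) * 16
    return (width - 2 * (crop_left + crop_right),
            height - 2 * (crop_top + crop_bottom))

# 1920x1088 coded size cropped to 1920x1080: crop_bottom = 4
print(cropped_size(119, 67, 0, 0, 0, 4))
```

The same mechanism the encoder uses to trim macroblock padding (1088 → 1080 above) is what the embodiment reuses to expose only a region of interest.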
In an embodiment, as shown in fig. 4, the step of writing the cropping information into the video stream to generate the target video stream specifically comprises:
step 321: decoding the video stream to generate a decoded video file;
step 322: synthesizing the decoded video file and the cropping information to generate the target video stream.
Specifically, after the video stream for screen projection is acquired, the video stream is decoded to generate a decoded video file. The cropping information (the frame_crop parameters of the H.264 SPS) and the decoded video file are then re-synthesized into a stream carrying an updated SPS, thereby generating the target video stream.
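The re-synthesis step above amounts to re-encoding an SPS whose frame_cropping_flag is set and whose four offsets carry the cropping information. These fields are ue(v) Exp-Golomb coded; a minimal encoding sketch (bit strings only, offsets assumed to be already expressed in crop units, the rest of the SPS and emulation-prevention bytes omitted):

```python
# Encode one unsigned Exp-Golomb ue(v) value as a '0'/'1' bit string:
# write (len-1) zeros, then the binary form of (value + 1).
def write_ue(value: int) -> str:
    code = value + 1
    return "0" * (code.bit_length() - 1) + format(code, "b")

def crop_bits(left, right, top, bottom) -> str:
    """frame_cropping_flag = 1 followed by the four frame_crop offsets."""
    return "1" + "".join(write_ue(v) for v in (left, right, top, bottom))

print(crop_bits(0, 0, 0, 4))
```

The resulting bit run would be spliced into the SPS in place of the original cropping fields before the stream is handed to the decoder.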
In one embodiment, as shown in fig. 5, the step of writing the cropping information into the video stream to generate the target video stream comprises:
step 510: receiving an input operation instruction and sending the operation instruction to the first terminal;
step 520: receiving the cropping information sent by the first terminal, the cropping information being generated by the first terminal according to the operation instruction.
Specifically, the operation instruction may be, but is not limited to, an operation instruction from the user cropping the video on the screen, an operation instruction generated automatically upon a request of the device, or an operation instruction generated by recognizing the user's voice. The operation instruction is sent to the first terminal over a wireless or wired connection; for example, the vehicle head unit sends the operation instruction to a mobile phone via Bluetooth. Of course, in one embodiment, the step of writing the cropping information into the video stream to generate the target video stream may be preceded by: receiving an input operation instruction and generating the cropping information according to the operation instruction.
Step 330: and acquiring the playing data corresponding to the cutting information in the target video stream.
Specifically, the Android hardware decoder decodes the target video stream containing the cropping information and extracts the play data corresponding to the cropping information. The play data can be output by the local video player on the local display screen and/or through the local loudspeaker, realizing screen-projected playing of the video. In one embodiment, the step of obtaining the target play data corresponding to the cropping information in the target video stream is followed by: controlling the playing device to play the play data.
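A sketch of taking the play data for the cropped region once the decoder has produced full frames. `extract_play_region` is a hypothetical helper operating on frames represented as nested lists; a real implementation would read the hardware decoder's output buffers and its crop metadata instead:

```python
# Illustrative only: treat a frame as rows of pixels and return the
# sub-rectangle selected by the cropping information (left, top, width,
# height in pixels).
def extract_play_region(frame, left, top, width, height):
    return [row[left:left + width] for row in frame[top:top + height]]

# 8x6 dummy frame whose "pixels" record their own coordinates
frame = [[(x, y) for x in range(8)] for y in range(6)]
sub = extract_play_region(frame, 2, 1, 4, 3)
assert len(sub) == 3 and len(sub[0]) == 4
assert sub[0][0] == (2, 1)
```

This is the step that lets only the region of interest, rather than the full projected frame, reach the playing device.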
According to the video data processing method, after the video stream for screen projection is obtained, cropping information is written into the video stream, a target video stream is generated, and the play data corresponding to the cropping information in the target video stream is obtained. A local region of the screen-projected video can thus be emphasized on the terminal: the information of interest in the video is filtered out on its own and then displayed to the user, improving the user experience of the product.
Second embodiment
An embodiment of the present invention provides a method for processing video data, and fig. 6 is a flowchart of the method for processing video data according to the embodiment of the present invention. As shown in fig. 6, the method includes the steps of:
step 610: the first terminal sends a video stream for screen projection to the second terminal.
Specifically, the video stream may be, but is not limited to, video data in the H.264 encoding format together with audio data in the AAC encoding format or PCM format; video data in other formats (e.g. MPEG-4) and audio data in other formats (e.g. APE) may be transcoded into H.264 video data and AAC or PCM audio data. Of course, the video stream is not limited to a stream used for screen projection; it may be any video stream on which cropping is to be performed. Transcoding the video data into the H.264 format specifically comprises: decoding the video data with a video decoder, then encoding the decoded video data with an H.264 video encoder. Transcoding the audio data into the AAC or PCM format specifically comprises: decoding the audio data with an audio decoder, then encoding the decoded data with an AAC or PCM audio encoder. The decoding of the video data is performed by a video decoder supporting the source encoding format (if the video data is in the MPEG-4 format, an MPEG-4 video decoder), after which the data is encoded by an H.264 video encoder; the decoding of the audio data is performed by an audio decoder supporting the source encoding format (if the audio data is in the APE format, an APE audio decoder), after which the data is encoded by an AAC or PCM audio encoder.
Three H.264 concepts are involved: the H.264 slice, the H.264 slice header, and the H.264 picture parameter set (PPS). (1) An H.264 slice is an integer number of macroblocks or macroblock pairs in raster scan order within a particular slice group; these macroblocks or macroblock pairs are not necessarily contiguous in raster scan order within the picture, and the address of each macroblock is obtained from the address of the first macroblock of the slice (described in the slice header) and the macroblock-to-slice-group mapping. (2) An H.264 slice header is the portion of an encoded slice that contains data elements related to the first or all macroblocks in the slice. (3) An H.264 picture parameter set is a syntax structure containing syntax elements that apply to zero or more coded pictures, determined by the picture parameter set identifier (pic_parameter_set_id) in each slice header.
In this embodiment, the video stream from which the target play data is to be acquired is selected and opened. After the video stream is opened, the information in its header is read first to determine whether the stream contains audio, the corresponding decoding operations are performed according to the header information, and finally the content of the video stream is read. The video stream of this embodiment is, of course, a stream that adopts the H.264 coding standard.
Step 620: and the second terminal writes the cutting information into the video stream to generate a target video stream.
Specifically, the cropping information may be, but is not limited to, the frame cropping parameters (frame_crop offsets) in the H.264 SPS. The target video stream is a video stream containing the cropping information. The H.264 SPS (sequence parameter set) is a syntax structure containing syntax elements that apply to zero or more entire coded video sequences; the referenced picture parameter set is determined by the syntax element pic_parameter_set_id in the slice header, and the referenced sequence parameter set is determined by the sequence parameter set identifier (seq_parameter_set_id) in the picture parameter set. When the content of the video stream is read, the H.264 SPS data is obtained from the read content, the SPS belonging to the content portion of the file container.
In an embodiment, the step of the second terminal writing the cropping information into the video stream to generate the target video stream specifically comprises: after the second terminal decodes the video stream to generate a decoded video file, synthesizing the decoded video file and the cropping information to generate the target video stream.
Specifically, after the video stream for screen projection is acquired, the second terminal decodes the video stream to generate a decoded video file. The cropping information (the frame_crop parameters of the H.264 SPS) and the decoded video file are then re-synthesized into a stream carrying an updated SPS, thereby generating the target video stream.
In one embodiment, as shown in fig. 7, the step of writing the cropping information into the video stream by the second terminal and generating the target video stream includes:
step 621: the second terminal receives an input operation instruction, sends the operation instruction to the first terminal, and then receives the cropping information sent by the first terminal;
step 622: the first terminal generates the cropping information according to the operation instruction.
Specifically, the operation instruction may be, but is not limited to, an operation instruction by which the user crops the video on the screen, an operation instruction generated automatically in response to a request from the device, or an operation instruction generated by recognizing the user's voice. The operation instruction is sent to the first terminal over a wireless or wired connection; for example, the in-vehicle head unit sends the operation instruction to a mobile phone via Bluetooth. Of course, in one embodiment, the step of writing the cropping information into the video stream to generate the target video stream may be preceded by the steps of: receiving an input operation instruction, and generating the cropping information according to the operation instruction.
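How an on-screen cropping gesture maps from display coordinates to video-frame coordinates is not specified in the embodiment. A minimal sketch, assuming the projected video is scaled to fill the whole display with no letterboxing:

```python
def display_rect_to_video_rect(rect, display_size, video_size):
    """Scale a rectangle drawn on the display into video-frame coordinates.

    rect is (x, y, w, h) in display pixels; assumes the video fills the
    entire display, so only uniform axis scaling is needed.
    """
    dx, dy = display_size
    vx, vy = video_size
    x, y, w, h = rect
    sx, sy = vx / dx, vy / dy
    return (round(x * sx), round(y * sy), round(w * sx), round(h * sy))

# A gesture over the top-left quarter of a 1280x720 head-unit display,
# applied to a 1920x1080 source video.
print(display_rect_to_video_rect((0, 0, 640, 360), (1280, 720), (1920, 1080)))
```

The resulting video-frame rectangle is what the first terminal could encode into cropping information.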
Step 630: the second terminal acquires the play data corresponding to the cropping information in the target video stream.
Specifically, the target video stream containing the cropping information is decoded by the Android hardware decoder, and the play data corresponding to the cropping information is extracted. The play data can then be output on the local display screen and/or loudspeaker by the local video player, thereby realizing screen-projected playback of the video. In one embodiment, the step in which the second terminal acquires the target play data corresponding to the cropping information in the target video stream includes: the second terminal controls the playback device to play the play data.
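A hardware decoder that honours the SPS cropping fields outputs only the crop window. Conceptually, the effect is equivalent to slicing that window out of each decoded frame, as in this illustrative sketch (plain Python standing in for the Android decoder, with offsets given directly in luma samples for simplicity):

```python
def apply_crop(frame, left, right, top, bottom):
    """Return the crop window of a decoded frame.

    frame is a list of rows (each row a list of samples), standing in
    for one decoded luma plane; the four offsets trim each edge.
    """
    height = len(frame)
    width = len(frame[0])
    return [row[left:width - right] for row in frame[top:height - bottom]]

# A toy 6x4 "frame" whose sample values encode their own coordinates.
frame = [[(r, c) for c in range(6)] for r in range(4)]
window = apply_crop(frame, left=1, right=1, top=1, bottom=1)
print(len(window), len(window[0]))  # 2 4
```

In the actual embodiment this slicing happens inside the hardware decoder, so the player receives only the emphasized region as play data.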
The foregoing describes in detail an embodiment of a method for processing video data, and based on the method for processing video data described in the foregoing embodiment, an embodiment of the present invention further provides a device corresponding to the method.
Fig. 8 is a schematic diagram illustrating a processing apparatus for video data according to an embodiment of the present invention, and as shown in fig. 8, the apparatus includes a memory 81 and a processor 82.
The memory 81 is used for storing computer programs;
the processor 82 is configured to execute a computer program to implement the steps of the video data processing method provided in any one of the above embodiments. Since the embodiment of the device part and the embodiment of the method part correspond to each other, the embodiment of the device part is described with reference to the embodiment of the method part, and is not described again here.
The embodiment of the present application provides a device for processing video data, which can use the terminal to process the acquired video stream and extract the target play data, so that a local part of the screen-projected video can be emphasized; after the information of interest in the video is filtered out separately, that information is displayed to the user, improving the user experience of the product.
The embodiment of the present invention also provides a vehicle including the above second terminal, wherein the second terminal is configured to, after acquiring a video stream for screen projection, write the cropping information into the video stream to generate a target video stream, and to acquire the play data corresponding to the cropping information in the target video stream.
In the foregoing, an embodiment of a method for processing video data is described in detail, and based on the method for processing video data described in the foregoing embodiment, an embodiment of the present invention further provides a computer-readable storage medium corresponding to the method.
A computer-readable storage medium has a computer program stored thereon; when executed by a processor, the computer program instructions implement the method of processing video data provided by any of the above embodiments. Since the embodiment of the computer-readable storage medium portion and the embodiment of the method portion correspond to each other, please refer to the description of the method portion, which is not repeated here.
According to the video data processing method, device, and vehicle described above, after the video stream for screen projection is acquired, the cropping information is written into the video stream to generate the target video stream, and the play data corresponding to the cropping information in the target video stream is acquired. In this way, a local part of the screen-projected video can be emphasized on the terminal, and after the information of interest in the video is filtered out separately, that information is displayed to the user, improving the user experience of the product.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Although the present invention has been described with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the present invention.
Claims (10)
1. A method for processing video data, the method comprising the steps of:
acquiring a video stream for screen projection;
writing cropping information into the video stream to generate a target video stream;
and acquiring play data corresponding to the cropping information in the target video stream.
2. The method for processing video data according to claim 1, wherein the step of writing the cropping information into the video stream to generate the target video stream specifically comprises:
decoding the video stream to generate a decoded video file;
and synthesizing the decoded video file and the cropping information to generate the target video stream.
3. The method for processing video data according to claim 1, wherein the step of writing the cropping information into the video stream to generate the target video stream comprises:
receiving an input operation instruction and sending the operation instruction to a first terminal;
and receiving the cropping information sent by the first terminal, wherein the cropping information is generated by the first terminal according to the operation instruction.
4. The method for processing video data according to claim 1, wherein the step of writing the cropping information into the video stream to generate the target video stream comprises:
receiving an input operation instruction, and generating the cropping information according to the operation instruction.
5. The method for processing video data according to any of claims 1 to 4, wherein the step of acquiring the play data corresponding to the cropping information in the target video stream comprises:
controlling a playback device to play the play data.
6. A method for processing video data, the method comprising the steps of:
the first terminal sends a video stream for screen projection to a second terminal;
the second terminal writes cropping information into the video stream to generate a target video stream;
and the second terminal acquires play data corresponding to the cropping information in the target video stream.
7. The method for processing video data according to claim 6, wherein the step in which the second terminal writes the cropping information into the video stream and generates the target video stream specifically comprises:
after the second terminal decodes the video stream to generate a decoded video file, synthesizing the decoded video file and the cropping information to generate the target video stream.
8. The method for processing video data according to claim 6, wherein the step in which the second terminal writes the cropping information into the video stream to generate the target video stream comprises:
the second terminal receives an input operation instruction, sends the operation instruction to the first terminal, and then receives the cropping information sent by the first terminal;
and the first terminal generates the cropping information according to the operation instruction.
9. An apparatus for processing video data, the apparatus comprising: a memory and a processor;
the memory is used for storing a computer program;
the processor is adapted to execute the computer program to implement the steps of the method of processing video data according to any of claims 1-8.
10. A vehicle comprising a second terminal, wherein the second terminal is configured to, after acquiring a video stream for screen projection, write cropping information into the video stream to generate a target video stream, and to acquire play data corresponding to the cropping information in the target video stream.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110187609.4A CN114979729A (en) | 2021-02-18 | 2021-02-18 | Video data processing method and device and vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110187609.4A CN114979729A (en) | 2021-02-18 | 2021-02-18 | Video data processing method and device and vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114979729A true CN114979729A (en) | 2022-08-30 |
Family
ID=82954317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110187609.4A Pending CN114979729A (en) | 2021-02-18 | 2021-02-18 | Video data processing method and device and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114979729A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109547724A (en) * | 2018-12-21 | 2019-03-29 | 广州华多网络科技有限公司 | A kind of processing method of video stream data, electronic equipment and storage device |
CN112118558A (en) * | 2020-06-30 | 2020-12-22 | 上汽通用五菱汽车股份有限公司 | Vehicle screen display method, vehicle and computer readable storage medium |
- 2021-02-18: CN application CN202110187609.4A filed, published as CN114979729A (status: Pending)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109547724A (en) * | 2018-12-21 | 2019-03-29 | 广州华多网络科技有限公司 | A kind of processing method of video stream data, electronic equipment and storage device |
CN112118558A (en) * | 2020-06-30 | 2020-12-22 | 上汽通用五菱汽车股份有限公司 | Vehicle screen display method, vehicle and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7894854B2 (en) | Image/audio playback device of mobile communication terminal | |
CN104980788B (en) | Video encoding/decoding method and device | |
CN109729420B (en) | Picture processing method and device, mobile terminal and computer readable storage medium | |
KR101800889B1 (en) | Device and method for playing music | |
US20070288954A1 (en) | Wallpaper setting apparatus and method for audio channel in digital multimedia broadcasting service | |
JP7100052B2 (en) | Electronic device and its control method | |
US11956497B2 (en) | Audio processing method and electronic device | |
US20210211777A1 (en) | Information Presenting Method, Terminal Device, Server and System | |
WO2005069618A1 (en) | Portable audio/video system for mobile devices | |
CN108132769A (en) | A kind of audio data play method and dual-screen mobile terminal | |
CN112099750A (en) | Screen sharing method, terminal, computer storage medium and system | |
US7768578B2 (en) | Apparatus and method of receiving digital multimedia broadcasting | |
CN115552518B (en) | Signal encoding and decoding method and device, user equipment, network side equipment and storage medium | |
CN105374358A (en) | Adaptive audio output method, adaptive audio output device, audio transmitting end and adaptive audio output system | |
CN105808198A (en) | Audio file processing method and apparatus applied to android system and terminal | |
CN105898320A (en) | Panorama video decoding method and device and terminal equipment based on Android platform | |
CN116368460A (en) | Audio processing method and device | |
CN104135668B (en) | There is provided and obtain the method and device of digital information | |
CN114979729A (en) | Video data processing method and device and vehicle | |
CN100563334C (en) | In the video telephone mode of wireless terminal, send the method for view data | |
KR101688946B1 (en) | Signal processing apparatus and method thereof | |
US20090033803A1 (en) | Broadcast receiver capable of displaying broadcast-related information using data service and method of controlling the broadcast receiver | |
KR101799863B1 (en) | Signal processing apparatus and method thereof | |
JP2008166977A (en) | Audio device and audio system | |
CN117406654B (en) | Sound effect processing method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||