CN116828243A - Hardware encoding and decoding method, mobile terminal, computer device and storage medium - Google Patents

Hardware encoding and decoding method, mobile terminal, computer device and storage medium

Info

Publication number
CN116828243A
CN116828243A (application number CN202310662094.8A)
Authority
CN
China
Prior art keywords
codec
decoding
coding
mobile terminal
hardware
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310662094.8A
Other languages
Chinese (zh)
Inventor
王庆民
张定乾
崔传凯
李琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qishuo Shenzhen Technology Co ltd
Original Assignee
Qishuo Shenzhen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qishuo Shenzhen Technology Co ltd filed Critical Qishuo Shenzhen Technology Co ltd
Priority to CN202310662094.8A priority Critical patent/CN116828243A/en
Publication of CN116828243A publication Critical patent/CN116828243A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/42607 Internal components of the client for processing the incoming bitstream
    • H04N21/4392 Processing of audio elementary streams involving audio buffer management
    • H04N21/44004 Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer
    • H04N21/440218 Reformatting operations of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N21/440263 Reformatting operations of video signals by altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N21/440281 Reformatting operations of video signals by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the application discloses a hardware encoding and decoding method, a mobile terminal, a computer device and a computer-readable storage medium. The method comprises the following steps: acquiring source code, a first toolkit of the mobile terminal and a first codec framework of the mobile terminal, wherein the source code is a code package for implementing hardware encoding and hardware decoding, the first toolkit is the native toolkit of the mobile terminal, and the first codec framework is the native codec framework of the mobile terminal; modifying the first toolkit according to the source code to generate a codec, and adding the codec into the first codec framework to obtain a second codec framework, wherein the codec can be used to call the mobile terminal hardware to perform audio and video encoding and decoding; and, when audio and video information is acquired, calling the codec in the second codec framework according to a preset function, so as to perform hardware audio and video encoding and decoding on the audio and video information. Thus, through this modification, the application obtains a codec capable of efficient hardware encoding and decoding.

Description

Hardware encoding and decoding method, mobile terminal, computer device and storage medium
Technical Field
The present application relates to video encoding and decoding technology, and in particular, to a hardware encoding and decoding method, a mobile terminal, a computer device, and a computer readable storage medium.
Background
With the rapid development and popularization of mobile terminals, the demand on video encoding and decoding efficiency has greatly increased. However, in the prior art, for the sake of compatibility, most mobile terminals use ffmpeg integrated with the MediaCodec-based Android codec framework, driving the related hardware through software to perform hybrid software/hardware encoding and decoding. The problems caused by this approach are obvious: it is cumbersome, poorly scalable, and inefficient. How to improve the encoding and decoding efficiency of video is a technical problem to be solved by those skilled in the art.
The foregoing description is provided for general background information and does not necessarily constitute prior art.
Disclosure of Invention
Based on this, it is necessary to address the above-mentioned problems by providing a hardware encoding and decoding method, a mobile terminal, a computer device, and a computer-readable storage medium that enable video hardware encoding and decoding on a mobile terminal platform.
The application solves the technical problems by adopting the following technical scheme:
the application provides a hardware encoding and decoding method applied to a mobile terminal, comprising the following steps: acquiring source code, a first toolkit of the mobile terminal, and a first codec framework of the mobile terminal, wherein the source code is a code package for implementing hardware encoding and hardware decoding, the first toolkit is the native toolkit of the mobile terminal, and the first codec framework is the native codec framework of the mobile terminal; modifying the first toolkit according to the source code to generate a codec, and adding the codec into the first codec framework to obtain a second codec framework, wherein the codec can be used to call the mobile terminal hardware to perform audio and video encoding and decoding; and, when audio and video information is acquired, calling the codec in the second codec framework according to a preset function, so as to perform hardware audio and video encoding and decoding on the audio and video information.
In an alternative embodiment of the present application, modifying the first toolkit according to the source code to generate the codec includes: adding the source code content into the corresponding local directories of the first toolkit to obtain a second toolkit; and cross-compiling the second toolkit using the first codec framework to obtain the codec.
In an alternative embodiment of the present application, adding the codec into the first codec framework to obtain the second codec framework includes: adding the codec into the first codec framework and packaging the framework, thereby obtaining the second codec framework.
In an alternative embodiment of the present application, invoking the codec in the second codec framework according to the preset function includes: invoking the codec in the second codec framework; and acquiring configuration parameters and injecting them into the codec through the ioctl function, so as to perform hardware audio and video encoding and decoding on the audio and video information, wherein the configuration parameters correspond to the requirements on the codec output.
In an alternative embodiment of the present application, invoking the codec in the second codec framework includes: parsing the audio and video information, and if the audio and video information includes YUV data, opening a hardware-encoding node in the codec, wherein the hardware-encoding node enables the codec to hardware-encode the input data. Performing hardware audio and video encoding and decoding on the audio and video information includes: sending the YUV data into an encoding queue of the codec; and the codec hardware-encoding the YUV data according to the configuration parameters to output encoded AVC data.
In an alternative embodiment of the present application, invoking the codec in the second codec framework includes: parsing the audio and video information, and if the audio and video information includes AVC data, opening a hardware-decoding node in the codec, wherein the hardware-decoding node enables the codec to hardware-decode the input data. Performing hardware audio and video encoding and decoding on the audio and video information includes: sending the AVC data into a decoding queue of the codec; and the codec hardware-decoding the AVC data according to the configuration parameters to output fully decoded NV12 data.
In an alternative embodiment of the present application, obtaining the source code includes: acquiring hardware device information of the mobile terminal, and acquiring the corresponding source code according to the hardware device information.
The application also provides a mobile terminal, comprising: an acquisition module, configured to acquire source code, a first toolkit of the mobile terminal, and a first codec framework of the mobile terminal, wherein the source code is a code package for implementing hardware encoding and hardware decoding, the first toolkit is the native toolkit of the mobile terminal, and the first codec framework is the native codec framework of the mobile terminal; a modification module, configured to modify the first toolkit according to the source code to generate a codec, and to add the codec into the first codec framework to obtain a second codec framework, wherein the codec can be used to call the mobile terminal hardware to perform audio and video encoding and decoding; and a codec module, configured to call the codec in the second codec framework according to a preset function when audio and video information is acquired, so as to perform hardware audio and video encoding and decoding on the audio and video information.
The application also provides a computer device comprising a processor and a memory: the processor is configured to execute the computer program stored in the memory to implement the method as described above.
The application also provides a computer readable storage medium storing a computer program which when executed by a processor implements a method as described above.
The embodiment of the application has the following beneficial effects:
according to the application, the mobile terminal can be modified through the source code at the mobile terminal platform to obtain the codec which can be used for hard coding and hard decoding, so that when audio and video coding and decoding are needed, the codec can be used for coding and decoding the audio and video information by utilizing the hardware of the mobile terminal, and the coding and decoding efficiency of the audio and video can be improved by directly utilizing the hardware, meanwhile, the complexity of coding and decoding processing is reduced without additionally calling software to process the audio and video information, and the expansibility of application is increased.
The foregoing description is only an overview of the technical solutions of the present application. In order that they may be more clearly understood and implemented in accordance with the contents of the specification, preferred embodiments of the application are described in detail below with reference to the accompanying drawings. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Wherein:
FIG. 1 is a flow chart of a hardware encoding and decoding method according to an embodiment;
FIG. 2 is a flow chart of hardware encoding according to an embodiment;
FIG. 3 is a flow chart of hardware decoding according to an embodiment;
fig. 4 is a schematic structural diagram of a mobile terminal according to an embodiment;
fig. 5 is a schematic block diagram of a computer device according to an embodiment.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In existing mobile terminals, software/hardware encoding and decoding of audio and video is implemented by integrating the MediaCodec-based Android codec framework. In practice this approach is cumbersome, poorly scalable, inefficient, and problem-prone. To solve these problems, the present application proposes a hardware encoding and decoding method. For a clear description of the hardware encoding and decoding method provided in this embodiment, please refer to FIGS. 1 to 3.
Step S110: acquiring source code, a first toolkit of the mobile terminal, and a first codec framework of the mobile terminal, wherein the source code is a code package for implementing hardware encoding and hardware decoding, the first toolkit is the native toolkit of the mobile terminal, and the first codec framework is the native codec framework of the mobile terminal.
In an embodiment, the method provided by the application is applied to a mobile terminal. Specific forms of the mobile terminal include, but are not limited to, a mobile phone, a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), and a wearable device (such as a smart watch). A first toolkit and a first codec framework of the mobile terminal are obtained in preparation for the modification. The first toolkit may be the NDK (Native Development Kit); an existing copy may be obtained from the storage of the mobile terminal, or the latest version may be downloaded from the official site, and the specific acquisition process is not limited. The first codec framework may be the native codec framework of the mobile terminal, such as ffmpeg; likewise, an existing copy may be retrieved from the mobile terminal's storage or the latest version downloaded from the official site, and the specific acquisition process is not limited.
In one embodiment, obtaining source code includes: and acquiring hardware equipment information of the mobile terminal, and correspondingly acquiring corresponding source codes according to the hardware equipment information.
In one embodiment, the source code is a code package that enables the mobile terminal to implement hardware encoding and hardware decoding. It will be appreciated that hardware encoding and decoding are performed by the hardware of the mobile terminal (e.g., CPU or GPU). The modification therefore must target the version and model of the mobile terminal hardware, and the source code must be acquired accordingly. Hence, before the source code is acquired, the hardware device information of the mobile terminal needs to be determined. The hardware device information records the hardware version, model, and similar details, so that the relevant source code can be acquired correspondingly. In a preferred embodiment, the hardware of the mobile terminal may be a Qualcomm platform, such as the Snapdragon 865; correspondingly, the code package for the Snapdragon 865 needs to be downloaded from the Qualcomm site to complete the acquisition of the source code.
Step S120: modifying the first toolkit according to the source code to generate a codec, and adding the codec into the first codec framework to obtain a second codec framework, wherein the codec can be used to call the mobile terminal hardware to perform audio and video encoding and decoding.
In one embodiment, modifying the first toolkit according to the source code to generate the codec includes: adding the source code content into the corresponding local directories of the first toolkit to obtain a second toolkit; and cross-compiling the second toolkit using the first codec framework to obtain the codec.
In one embodiment, adding the codec into the first codec framework to obtain the second codec framework includes: adding the codec into the first codec framework and packaging the framework, thereby obtaining the second codec framework.
In one embodiment, the modification process can be divided into the modification of the toolkit and the modification of the framework. For the modification of the toolkit, that is, of the NDK, and for convenience of description, taking the Qualcomm source-code modification of the mobile terminal's NDK as an example, the specific adding process may be:
1. Copy msm_ion.h, v4l2-controls.h, videodev2.h and ion.h to toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/linux
2. Copy msm_media_info.h and msm_vidc_utils.h to toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include/media
3. Copy ion/ion.h to toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/include
4. Copy libion.so to toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/aarch64-linux-android
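The four copy steps above can be sketched as a shell script. Here it runs against throw-away temporary directories with empty placeholder files standing in for the vendor headers and library, so the sketch is safe to execute anywhere; in practice NDK_ROOT and VENDOR_SRC would point at the real NDK install and the downloaded Qualcomm code package:

```shell
# Throw-away directories stand in for the real NDK install and the
# downloaded code package; placeholder files stand in for vendor headers.
NDK_ROOT=$(mktemp -d)
VENDOR_SRC=$(mktemp -d)
SYSROOT="$NDK_ROOT/toolchains/llvm/prebuilt/linux-x86_64/sysroot"

mkdir -p "$VENDOR_SRC/ion"
touch "$VENDOR_SRC/msm_ion.h" "$VENDOR_SRC/v4l2-controls.h" \
      "$VENDOR_SRC/videodev2.h" "$VENDOR_SRC/ion.h" \
      "$VENDOR_SRC/msm_media_info.h" "$VENDOR_SRC/msm_vidc_utils.h" \
      "$VENDOR_SRC/ion/ion.h" "$VENDOR_SRC/libion.so"

# Steps 1-4: copy headers and library into the NDK sysroot.
mkdir -p "$SYSROOT/usr/include/linux" "$SYSROOT/usr/include/media" \
         "$SYSROOT/usr/lib/aarch64-linux-android"
cp "$VENDOR_SRC/msm_ion.h" "$VENDOR_SRC/v4l2-controls.h" \
   "$VENDOR_SRC/videodev2.h" "$VENDOR_SRC/ion.h" "$SYSROOT/usr/include/linux/"
cp "$VENDOR_SRC/msm_media_info.h" "$VENDOR_SRC/msm_vidc_utils.h" \
   "$SYSROOT/usr/include/media/"
cp "$VENDOR_SRC/ion/ion.h" "$SYSROOT/usr/include/"
cp "$VENDOR_SRC/libion.so" "$SYSROOT/usr/lib/aarch64-linux-android/"

echo "copied linux headers: $(ls "$SYSROOT/usr/include/linux")"
```

After these copies, code compiled against the NDK sysroot can include the vendor V4L2 and ION definitions needed by the hardware codec.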
Further, for convenience of description, taking ffmpeg as the first codec framework, the modification scheme may be:
1. Add codecs to the ffmpeg-based codec framework; the specific modification code may be as follows. Adding the encoders into the first framework:
M2MENC(mpeg4,"MPEG4",AV_CODEC_ID_MPEG4);
M2MENC(h263,"H.263",AV_CODEC_ID_H263);
M2MENC(h264,"H.264",AV_CODEC_ID_H264);
M2MENC(hevc,"HEVC",AV_CODEC_ID_HEVC);
M2MENC(vp8,"VP8",AV_CODEC_ID_VP8);
and adding a decoder into the first framework:
M2MDEC(h264,"H.264",AV_CODEC_ID_H264,"h264_mp4toannexb");
M2MDEC(hevc,"HEVC",AV_CODEC_ID_HEVC,"hevc_mp4toannexb");
M2MDEC(mpeg1,"MPEG1",AV_CODEC_ID_MPEG1VIDEO,NULL);
M2MDEC(mpeg2,"MPEG2",AV_CODEC_ID_MPEG2VIDEO,NULL);
M2MDEC(mpeg4,"MPEG4",AV_CODEC_ID_MPEG4,NULL);
M2MDEC(h263,"H.263",AV_CODEC_ID_H263,NULL);
M2MDEC(vc1,"VC1",AV_CODEC_ID_VC1,NULL);
M2MDEC(vp8,"VP8",AV_CODEC_ID_VP8,NULL);
M2MDEC(vp9,"VP9",AV_CODEC_ID_VP9,NULL);
After the codecs have been added, ffmpeg is cross-compiled. The binaries and libraries under the android directory of the ffmpeg project (ffmpeg and lib) are the compiled second codec framework. The cross-compilation process is also the process of encapsulating the codec into the first codec framework. This completes the modification of the mobile terminal, yielding a second codec framework capable of hardware encoding and hardware decoding using the mobile terminal's hardware.
Step S130: when audio and video information is acquired, calling the codec in the second codec framework according to a preset function, so as to perform hardware audio and video encoding and decoding on the audio and video information.
In an embodiment, invoking the codec in the second codec framework according to the preset function includes: invoking the codec in the second codec framework; and acquiring configuration parameters and injecting them into the codec through the ioctl function, so as to perform hardware audio and video encoding and decoding on the audio and video information, wherein the configuration parameters correspond to the requirements on the codec output.
In one embodiment, the audio and video may need to be encoded or decoded into output of a predetermined kind. This requirement is realized by setting the configuration parameters, which correspond to the requirements on the codec output. Specific configuration parameters may include, but are not limited to: frame rate, bit rate, quantization parameter, coding quality, and the like. Since the second codec framework has been modified, the calling method differs from the prior art: the corresponding nodes in the second codec framework must be called through the ioctl function, and the configuration parameters injected into those nodes, so that the encoding and decoding process can be controlled as required. The implementation of the hardware codec is described below separately for the encoding process and the decoding process.
In one embodiment, the hardware encoding process is described; it includes steps S210 to S230. For clarity of the hardware encoding process provided in this embodiment, please refer to FIG. 2.
Step S210: parsing the audio and video information, and if the audio and video information includes YUV data, opening the hardware-encoding node in the codec, wherein the hardware-encoding node is used for enabling the codec to hardware-encode the input data.
In an embodiment, specifically, after receiving the audio and video information, the mobile terminal parses it; if the audio and video information includes YUV data, the mobile terminal calls the hardware-encoding node in the second codec framework through an ion-open/open instruction. Calling the hardware-encoding node starts the encoder function of the second codec framework, preparing to hardware-encode the YUV data. The hardware video encoding operation can also be initiated by the following code:
AVCodec *video_codec = avcodec_find_encoder_by_name("h264_v4l2m2m");
This operation selects the hardware encoder in the second codec framework (i.e., ffmpeg after the modification and additions are complete) for video encoding. It uses the hardware encoder named "h264_v4l2m2m", which was added and encapsulated into the first codec framework during the modification; see the foregoing for details.
Step S220: acquiring the configuration parameters, and injecting the configuration parameters into the hardware-encoding node through the ioctl function.
In an embodiment, the instruction for setting the configuration parameter may be:
ioctl(s->fd,VIDIOC_S_CTRL,&control)
Here, the function is used to set control parameters in the hardware-encoding node. VIDIOC_S_CTRL is a control command of the mobile terminal, indicating that a controller value is to be set. control is a pointer to a controller-value structure containing the id and value of the controller to be set. By calling the ioctl function, the id, value and related information of the controller are passed to the mobile terminal driver, thereby controlling the mobile terminal. In this example, VIDIOC_S_CTRL sets the controller of the mobile terminal to the value described by control. This operation may be used to adjust parameters of the mobile terminal such as frame rate, bit rate, quantization parameters, coding quality, brightness, contrast, and saturation, in order to obtain a more suitable video result.
Step S230: sending the YUV data into the encoding queue of the codec; the codec hardware-encodes the YUV data according to the configuration parameters to output the encoded AVC data.
In one embodiment, the instruction for sending data to the encoding queue may be:
ioctl(buf_to_m2mctx(avbuf)->fd,VIDIOC_QBUF,&avbuf->buf)
This instruction is an ioctl operation using the VIDIOC_QBUF command for video buffer management. In this operation, buf_to_m2mctx(avbuf)->fd specifies the mobile terminal file descriptor, and the AVBufferRef structure avbuf contains the buffer information to be submitted. The ioctl places the buffer described by avbuf->buf into the buffer queue of the mobile terminal to await processing. In this way, the data to be processed is submitted to the mobile terminal, which processes it as required. The hardware-encoding process can then be started from the command line as follows:
ffmpeg -s 1280x720 -pix_fmt nv12 -i input.yuv -vcodec h264_v4l2m2m -b:v 2M -r 25 out.h264
This is an example command line that uses FFmpeg to hardware-encode video data in YUV format to H.264. Its options have the following meanings:
-s 1280x720: the resolution of the video is set to 1280x720.
Pix_fmt nv12: the pixel format of the video is set to nv12.
-i input. Yuv: the input file is designated as a file named input.
Vcodec h264_v4l2m2m: let h264_v4l2m2m be used as video encoder. The encoder, i.e. the encoder added and encapsulated in the first codec frame as described above.
-b: v 2M: the bit rate of the video is set to 2Mbps.
-r 25: the frame rate of the video is set to 25fps.
out.h264: the output file name is designated out.h264.
Finally, the output video data will be written into the out.h264 file. After the hardware encoding process of the mobile terminal is finished, the encoded avc data in the coding queue can be obtained through the following instruction:
ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_DQBUF, &buf)
This instruction is an ioctl operation using the VIDIOC_DQBUF command for video buffer management. In this operation, ctx_to_m2mctx(ctx)->fd specifies the mobile terminal file descriptor, and the buffer structure buf receives the buffer information to be acquired. The ioctl operation fetches one already-processed buffer from the mobile terminal buffer queue and writes its information into the structure buf. In this way, the data processed by the mobile terminal can be obtained for subsequent operations.
In one embodiment, the process of hardware decoding is described, and includes steps S310 to S330. For clarity of the hardware decoding process provided in this embodiment, please refer to fig. 3.
Step S310: and analyzing the audio and video information, and if the audio and video information comprises the avc data, opening a hard decoding node in the codec, wherein the hard decoding node is used for enabling the codec to realize hardware decoding on the input data.
In an embodiment, specifically, after receiving the audio and video information the mobile terminal may parse it; if the audio and video information includes avc data, the mobile terminal may call the hard-decoding node in the second codec frame through the ion-open/open instruction. Calling the hard-decoding node starts the decoder function of the second codec frame and prepares to perform hardware decoding processing on the avc data. The operation of hardware video decoding can also be initiated by means of the following code:
AVCodec *video_codec = avcodec_find_decoder_by_name("h264_v4l2m2m");
This is an operation of video decoding using a hardware decoder in the second codec frame. It looks up the hardware decoder named "h264_v4l2m2m" for video decoding.
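The by-name selection of the decoder can be pictured with a small lookup, sketched below in plain C. The registry table and helper are hypothetical illustrations of the selection logic, not the second codec frame's real data structures:

```c
/* Hypothetical registry mimicking how a codec frame maps a name such
 * as "h264_v4l2m2m" to a hardware-backed decoder entry. */
#include <stddef.h>
#include <string.h>

struct codec_entry {
    const char *name;
    int is_hardware; /* 1 if the entry drives the V4L2 M2M device */
};

static const struct codec_entry registry[] = {
    { "h264",         0 }, /* software fallback decoder */
    { "h264_v4l2m2m", 1 }, /* hardware decoder added to the frame */
};

/* Analogue of avcodec_find_decoder_by_name(): linear name lookup,
 * returning NULL when the name is not registered. */
static const struct codec_entry *find_decoder_by_name(const char *name)
{
    for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); i++)
        if (strcmp(registry[i].name, name) == 0)
            return &registry[i];
    return NULL; /* unknown codec name */
}
```

A caller would typically try the hardware name first and fall back to the software entry when the lookup returns NULL.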
Step S320: acquiring configuration parameters, and injecting the configuration parameters into the hard-decoding node through the ioctl function.
In an embodiment, the instruction for setting the configuration parameter may be:
ioctl(s->fd, VIDIOC_S_CTRL, &control)
Here, the ioctl function is used to set control parameters in the hard-decoding node. VIDIOC_S_CTRL is a control command specific to the mobile terminal, indicating that a controller value is to be set. control is a pointer to a controller-value structure containing information such as the id and value of the controller to be set. By calling the ioctl function, the controller id, value and other information are passed to the mobile terminal driver, thereby realizing control of the mobile terminal. In this example, VIDIOC_S_CTRL sets the controller value of the mobile terminal to the value described by control. This operation may be used to adjust parameters of the mobile terminal such as the frame rate, code rate, quantization parameter, coding quality, brightness, contrast and saturation, in order to obtain a more suitable video effect.
Step S330: sending the avc data into a decoding queue of the codec; the codec performs hardware decoding on the avc data according to the setting of the configuration parameters to output the fully decoded nv12 data.
In one embodiment, the instruction for submitting data to the decoding queue may be:
ioctl(buf_to_m2mctx(avbuf)->fd, VIDIOC_QBUF, &avbuf->buf)
This instruction is an ioctl operation using the VIDIOC_QBUF command for video buffer management. In this operation, buf_to_m2mctx(avbuf)->fd specifies the mobile terminal file descriptor, and the buffer structure avbuf contains the buffer information to be submitted. The ioctl operation places the buffer described by avbuf->buf into the buffer queue of the mobile terminal to wait for processing. In this way, the data to be processed can be submitted to the mobile terminal, which processes it as required. The hardware decoding process may then be started in the following command line manner:
ffmpeg -c:v h264_v4l2m2m -i input.h264 -vcodec rawvideo -s 1280x720 -pix_fmt nv12 output.yuv
This is a command line operation for video transcoding using the second codec frame. The command decodes the H.264 video stream in the input file input.h264 using the v4l2m2m hardware-accelerated decoder in the second codec frame, and outputs the decoded result as the YUV file output.yuv in rawvideo format.
-c:v h264_v4l2m2m: specifies hardware decoding of the input video stream using the h264_v4l2m2m decoder within the second codec frame, achieving hardware acceleration through a Video4Linux2 (V4L2) Mem2Mem device. This decoder is the one encapsulated into the first codec frame of the mobile terminal in the preamble; reference is made in particular to the preamble.
-i input.h264: specifies the path and name of the input file.
-vcodec rawvideo: specifies that the codec of the output video stream is rawvideo, i.e. original video data that has not been compression-encoded.
-s 1280x720: specifies that the resolution of the output video is 1280x720 pixels.
-pix_fmt nv12: specifies the pixel format of the output video stream as nv12.
Finally, the output video data will be written into the output.yuv file. After the hardware decoding process of the mobile terminal is completed, the decoded nv12 data in the decoding queue can be obtained through the following instruction:
ioctl(ctx_to_m2mctx(ctx)->fd, VIDIOC_DQBUF, &buf)
This instruction is an ioctl operation using the VIDIOC_DQBUF command for video buffer management. In this operation, ctx_to_m2mctx(ctx)->fd specifies the mobile terminal file descriptor, and the buffer structure buf receives the buffer information to be acquired. The ioctl operation fetches one already-processed buffer from the mobile terminal buffer queue and writes its information into the structure buf. In this way, the data processed by the mobile terminal can be obtained for subsequent operations.
Therefore, the mobile terminal can be modified through the source code on the mobile terminal platform to obtain a codec usable for hard encoding and hard decoding. When audio and video encoding or decoding is needed, the codec can encode or decode the audio and video information through the hardware of the mobile terminal. Because the hardware can be used directly for encoding and decoding, the codec efficiency of the audio and video is improved; at the same time, because no additional software needs to be called to process the audio and video information, the complexity of encoding and decoding is reduced and the expansibility of the application is improved.
Fig. 4 shows an internal structural diagram of a mobile terminal in one embodiment. The mobile terminal 40 includes: the obtaining module 41 is configured to obtain a source code, a first toolkit of the mobile terminal, and a first codec frame of the mobile terminal, where the source code is a code packet for implementing hard coding and hard decoding, the first toolkit is a native toolkit of the mobile terminal, and the first codec frame is a native codec frame of the mobile terminal. The modification module 42 is configured to modify the first toolkit according to the source code to generate a codec, and add the codec to the first codec frame to obtain a second codec frame, where the codec can be used to call the mobile terminal hardware to perform the codec processing of the audio and video. And the codec module 43 is configured to call a codec in the second codec frame according to a preset function when the audio/video information is acquired, so as to perform hardware audio/video codec on the audio/video information.
FIG. 5 illustrates an internal block diagram of a computer device in one embodiment. The computer device may specifically be a terminal or a server. As shown in fig. 5, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by a processor, causes the processor to implement a hardware codec method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the hardware codec method. It will be appreciated by those skilled in the art that the structure shown in FIG. 5 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, the application also proposes a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method as described above.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer-readable storage medium, and where the program, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination of the technical features, it should be considered within the scope of this description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail, but they are not thereby to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. The hardware coding and decoding method is applied to the mobile terminal and is characterized by comprising the following steps:
acquiring a source code, a first toolkit of the mobile terminal and a first coding and decoding frame of the mobile terminal, wherein the source code is a code packet for realizing hard coding and hard decoding, the first toolkit is a native toolkit of the mobile terminal, and the first coding and decoding frame is a native coding and decoding frame of the mobile terminal;
modifying the first toolkit according to the source code to generate a coder-decoder, adding the coder-decoder into the first coder-decoder framework to obtain a second coder-decoder framework, wherein the coder-decoder can be used for calling the mobile terminal hardware to perform coding-decoding processing of audio and video;
when the audio and video information is acquired, invoking the codec in the second codec frame according to a preset function so as to perform hardware audio and video codec on the audio and video information.
2. The hardware codec method of claim 1, wherein the retrofitting the first toolkit according to the source code to generate a codec comprises:
correspondingly adding the source code content into a local directory of the first toolkit to obtain a second toolkit;
and cross-compiling the second toolkit by using the first coder and decoder, so as to obtain the coder and decoder.
3. The hardware codec method of claim 1, wherein the adding the codec to the first codec frame to obtain a second codec frame comprises:
and adding the codec into the first codec frame and packaging the codec, so as to obtain the second codec frame.
4. The hardware codec method of claim 1, wherein the invoking the codec in the second codec frame according to a preset function comprises:
invoking the codec in the second codec frame;
and acquiring configuration parameters, wherein the configuration parameters are injected into the codec through an ioctl function so as to perform hardware audio and video encoding and decoding on the audio and video information, and the configuration parameters are parameters corresponding to the requirements of encoding and decoding output results.
5. The hardware codec method of claim 4, wherein the invoking the codec in the second codec frame comprises:
analyzing the audio and video information, if the audio and video information comprises yuv data, opening a hard coding node in the coder-decoder, wherein the hard coding node is used for enabling the coder-decoder to realize hardware coding on the input data;
the hardware audio/video encoding and decoding of the audio/video information comprises the following steps:
sending the yuv data into a coding queue of the codec;
and the codec performs hardware coding on the yuv data according to the setting of the configuration parameters so as to output the coded avc data.
6. The hardware codec method of claim 4, wherein the invoking the codec in the second codec frame comprises:
analyzing the audio and video information, if the audio and video information comprises avc data, opening a hard decoding node in the codec, wherein the hard decoding node is used for enabling the codec to realize hardware decoding on the input data;
the hardware audio/video encoding and decoding of the audio/video information comprises the following steps:
sending the avc data into a decoding queue of the codec;
and the codec performs hardware decoding on the avc data according to the setting of the configuration parameter to output nv12 data after finishing decoding.
7. The hardware codec method of claim 1, wherein the obtaining source code comprises:
and acquiring the hardware equipment information of the mobile terminal, and correspondingly acquiring the corresponding source codes according to the hardware equipment information.
8. A mobile terminal, comprising:
the mobile terminal comprises an acquisition module, a coding module and a decoding module, wherein the acquisition module is used for acquiring source codes, a first tool kit of the mobile terminal and a first coding and decoding frame of the mobile terminal, the source codes are code packages for realizing hard coding and hard decoding, the first tool kit is a native tool kit of the mobile terminal, and the first coding and decoding frame is a native coding and decoding frame of the mobile terminal;
the modification module is used for modifying the first toolkit according to the source code to generate a coder-decoder, the coder-decoder is added into the first coder-decoder framework to obtain a second coder-decoder framework, and the coder-decoder can be used for calling the mobile terminal hardware to perform coding-decoding processing of the audio and video;
and the encoding and decoding module is used for calling the encoder and decoder in the second encoding and decoding frame according to a preset function when the audio and video information is acquired, so as to carry out hardware audio and video encoding and decoding on the audio and video information.
9. A computer device comprising a processor and a memory;
the processor is configured to execute a computer program stored in the memory to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method according to any of claims 1 to 7.
CN202310662094.8A 2023-06-05 2023-06-05 Hardware encoding and decoding method, mobile terminal, computer device and storage medium Pending CN116828243A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310662094.8A CN116828243A (en) 2023-06-05 2023-06-05 Hardware encoding and decoding method, mobile terminal, computer device and storage medium


Publications (1)

Publication Number Publication Date
CN116828243A true CN116828243A (en) 2023-09-29

Family

ID=88117744


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104994390A (en) * 2015-06-30 2015-10-21 湖南基石通信技术有限公司 Embedded video processor, embedded video processing system and embedded video processor construction method
CN107506219A (en) * 2017-09-21 2017-12-22 烽火通信科技股份有限公司 A kind of general version upgrade method based on android system
CN108509795A (en) * 2018-04-25 2018-09-07 厦门安胜网络科技有限公司 A kind of method, apparatus and storage medium of monitoring ELF file calling system functions
WO2022061194A1 (en) * 2020-09-18 2022-03-24 Crunch Media Works, Llc Method and system for real-time content-adaptive transcoding of video content on mobile devices
CN114938408A (en) * 2022-04-30 2022-08-23 苏州浪潮智能科技有限公司 Data transmission method, system, equipment and medium of cloud mobile phone



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination