WO2023160216A1 - Streaming media feature architecture, processing method, electronic device and readable storage medium - Google Patents


Info

Publication number
WO2023160216A1
Authority
WO
WIPO (PCT)
Prior art keywords
streaming media
chip platform
interface
chip
electronic device
Prior art date
Application number
PCT/CN2022/142605
Other languages
English (en)
French (fr)
Inventor
许集润
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Publication of WO2023160216A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00: Digital computers in general; Data processing equipment in general
    • G06F 15/16: Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F 15/163: Interprocessor communication
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of streaming media, and in particular to a streaming media feature architecture, a processing method, an electronic device, and a readable storage medium.
  • the chip platforms available to electronic devices have also diversified. Because different electronic devices target different customer groups, the chip platforms they use may also differ, so different types or models of electronic devices from a single device manufacturer may use different chip platforms.
  • the present application provides a streaming media characteristic architecture, processing method, electronic equipment and readable storage medium, which can improve development efficiency.
  • the present application provides a streaming media feature architecture, which is applied to a chip platform, and the architecture includes:
  • a docking module configured to dock with the chip platform, so as to obtain streaming media data related to the chip platform and send the processed streaming media data back to the chip platform;
  • the processing engine is provided with a characteristic algorithm independent of the chip platform, and the characteristic algorithm is used to process the streaming media data acquired by the docking module to obtain the processed streaming media data.
  • the streaming media feature architecture divides the data related to streaming media features into two parts: one part is related to the chip platform, and the other part is not. The architecture connects the data related to the chip platform with the chip platform through the docking module, and places the parts unrelated to the chip platform, such as image processing software algorithms and memory management mechanisms, in a processing engine that does not interact directly with the chip platform, thereby achieving decoupling from the chip platform. If the docking module can be reused across chip platforms, the architecture can be ported to other chip platforms directly; if the docking module cannot be reused across chip platforms, only a new docking module that can dock with the other chip platform needs to be developed, which improves development efficiency.
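  • The decoupling described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: the class and method names (`DockingModule`, `ProcessingEngine`, `run_pipeline`, etc.) are assumptions chosen to mirror the two-part split, with the platform-specific adapter behind an abstract interface and the platform-independent algorithm logic in a separate engine.

    ```python
    from abc import ABC, abstractmethod

    class DockingModule(ABC):
        """Platform-specific adapter: the only component that talks to the chip platform."""

        @abstractmethod
        def acquire_frames(self):
            """Fetch raw streaming media data from the chip platform."""

        @abstractmethod
        def emit_frames(self, frames):
            """Send processed streaming media data back to the chip platform."""

    class ProcessingEngine:
        """Platform-independent part: feature algorithms and memory management live here."""

        def __init__(self, algorithm):
            # `algorithm` is any callable with no dependency on a chip platform
            self.algorithm = algorithm

        def process(self, frames):
            return [self.algorithm(f) for f in frames]

    def run_pipeline(docking: DockingModule, engine: ProcessingEngine):
        frames = docking.acquire_frames()      # platform-specific acquisition
        processed = engine.process(frames)     # platform-independent processing
        docking.emit_frames(processed)         # platform-specific output
        return processed

    class FakePlatformDock(DockingModule):
        """Stand-in for one concrete chip platform, for demonstration."""
        def __init__(self, frames):
            self._in, self.out = frames, None
        def acquire_frames(self):
            return self._in
        def emit_frames(self, frames):
            self.out = frames
    ```

  • Porting to a different chip platform would then only require a new `DockingModule` subclass; the `ProcessingEngine` is reused unchanged.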
  • the docking module includes:
  • an input interface configured to dock with the chip platform, so as to obtain the streaming media data from the chip platform;
  • a scene interface configured to dock with the chip platform, so as to obtain the collection scene of the streaming media data from the chip platform;
  • an output interface used for docking with the chip platform, so as to send the processed streaming media data to the chip platform.
  • the docking module can be separated into three independent sub-interfaces to provide a variety of implementations.
  • the processing engine is provided with a characteristic algorithm corresponding to each collection scene, and is further configured to process the streaming media data corresponding to a collection scene through the characteristic algorithm corresponding to that collection scene, so as to obtain processed streaming media data corresponding to the collection scene.
  • the streaming media feature architecture can realize the multiple scene functions of the camera in an electronic device, and different characteristic algorithms can be set for different scenes. Streaming media collected in different scenes is processed by different characteristic algorithms, so a single streaming media feature architecture can implement the full functionality of the camera.
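  • The per-scene algorithm selection can be sketched as a simple registry lookup. The scene names and algorithms below are hypothetical; the patent does not name concrete scenes or algorithms:

    ```python
    # Hypothetical scene-specific feature algorithms (illustrative only).
    def night_mode(frame):
        return ("denoised", frame)

    def portrait_mode(frame):
        return ("beautified", frame)

    # Registry mapping each collection scene to its characteristic algorithm.
    SCENE_ALGORITHMS = {
        "night": night_mode,
        "portrait": portrait_mode,
    }

    def process_for_scene(scene, frames, registry=SCENE_ALGORITHMS):
        algo = registry[scene]        # pick the algorithm bound to this collection scene
        return [algo(f) for f in frames]
    ```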
  • the streaming media feature architecture further includes:
  • a hardware interface used for docking with hardware resources on the chip platform
  • the processing engine is further configured to use hardware resources on the chip platform through the hardware interface to process the streaming media data.
  • a hardware interface can also be provided, which docks with the hardware resources on the chip platform. If the hardware interface can be reused across chip platforms, the architecture can be ported to other chip platforms directly; if the hardware interface cannot be reused across chip platforms, only a hardware interface that can dock with the other chip platform needs to be developed, thereby improving development efficiency.
  • the hardware interface includes:
  • the first hardware interface is used for docking with the hardware computing unit on the chip platform
  • the processing engine is further configured to use a hardware computing unit on the chip platform through the first hardware interface to complete the processing of the streaming media data.
  • the hardware computing resources of the chip platform can be used through the hardware interface, that is, the computing resources of the chip platform itself can be used, and the redevelopment of software resources with the same function can be avoided, and development efficiency can be improved.
  • the hardware interface further includes:
  • the second hardware interface is used to interface with the sensor driver on the chip platform
  • the processing engine is further configured to acquire data collected by a sensor corresponding to the sensor driver through the second hardware interface, and process the stream media data based on the data collected by the sensor.
  • the data collected by the sensor can also be obtained through the hardware interface, so that the resources of the chip platform can be used when the data collected by the sensor is required to participate in data processing. It also provides the basis for cross-platform reuse.
  • processing engine is further configured to:
  • obtain the data collected by the sensor corresponding to the sensor driver through the second hardware interface, process the data collected by the sensor to obtain control information, and send the control information to the sensor driver on the chip platform, where the control information is used to control the sensor.
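  • The closed loop through the second hardware interface (read sensor data, derive control information, send it back to the driver) can be sketched as below. The driver class, the averaging step, and the `offset` control field are all illustrative assumptions, not the patent's algorithm:

    ```python
    class FakeSensorDriver:
        """Stand-in for a sensor driver on the chip platform (illustrative only)."""
        def __init__(self, readings):
            self.readings = readings
            self.last_control = None
        def read(self):
            return self.readings
        def apply(self, control):
            self.last_control = control

    def stabilize(driver):
        samples = driver.read()               # second hardware interface: read sensor data
        drift = sum(samples) / len(samples)   # trivial processing to derive control info
        control = {"offset": -drift}          # control information for the sensor
        driver.apply(control)                 # send it back to the driver on the platform
        return control
    ```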
  • the docking module is applicable to multiple types of chip platforms.
  • the present application provides a streaming media processing method, which is applied to an electronic device, and the electronic device includes: a docking module docked with the chip platform of the electronic device, and a processing engine provided with a characteristic algorithm independent of the chip platform; the method includes:
  • the docking module obtains streaming media data collected by the camera of the electronic device from the chip platform of the electronic device;
  • the processing engine processes the streaming media data through the characteristic algorithm to obtain the processed streaming media data
  • the docking module sends the processed streaming media data to the chip platform of the electronic device.
  • the method further includes:
  • the docking module obtains the collection scene when the camera of the electronic device collects the streaming media data from the chip platform of the electronic device;
  • the processing engine determines a characteristic algorithm corresponding to the collection scene
  • the processing engine processes the streaming media data through the characteristic algorithm to obtain the processed streaming media data, including:
  • the processing engine processes the streaming media data based on the characteristic algorithm corresponding to the collection scene, and obtains the processed streaming media data.
  • the electronic device further includes a first hardware interface and a second hardware interface connected to hardware resources on the chip platform; the method further includes:
  • the processing engine uses the hardware computing unit on the chip platform through the first hardware interface to process the streaming media data
  • the processing engine obtains the data collected by the sensor corresponding to the sensor driver on the chip platform through the second hardware interface, and processes the streaming media data based on the data collected by the sensor;
  • the processing engine obtains the data collected by the sensor corresponding to the sensor driver through the second hardware interface, processes the data collected by the sensor to obtain control information, and sends it to the sensor driver on the chip platform
  • the control information is used to control the sensor.
  • an electronic device including a chip platform, and the chip platform is provided with the streaming media characteristic architecture according to any one of the first aspect of the present application.
  • a chip system including a chip platform coupled with a memory, and the chip platform is provided with the streaming media characteristic architecture according to any one of the first aspect of the present application.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by one or more processors, the method in any one of the second aspects of the present application is implemented.
  • the present application provides a computer program product, which, when the computer program product is run on a device, causes the device to execute any one of the methods in the second aspect of the present application.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an image processing system provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a streaming media feature architecture provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an image processing system provided by an embodiment of the present application.
  • "one or more" refers to one, two, or more than two; "and/or" describes the association relationship of associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may each be singular or plural.
  • the character "/" generally indicates that the contextual objects are an "or" relationship.
  • references to "one embodiment” or “some embodiments” or the like in the specification of the present application means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment", "in some embodiments", "in other embodiments", etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically stated otherwise.
  • the terms “including”, “comprising”, “having” and variations thereof mean “including but not limited to”, unless specifically stated otherwise.
  • the streaming media feature framework and streaming media processing method provided by the embodiments of the present application can be applied to electronic devices provided with cameras.
  • the electronic device may be a tablet computer, a mobile phone, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA) and other electronic devices.
  • the electronic device may also be an electronic device other than the above examples.
  • the embodiment of the present application does not limit the specific type of the electronic device.
  • Fig. 1 shows a schematic structural diagram of an electronic device.
  • the electronic device 100 may include a processor 110 (chip platform), an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor 180A, a touch sensor 180K and the like.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the streaming media feature architecture provided in the embodiment of the present application may be set on the processor 110 to execute the streaming media processing method in the embodiment of the present application.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access and reduces the waiting time of the processor 110, thereby improving system efficiency.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data. Wherein, the stored program area can store an operating system and at least one application program required by a function (such as a sound playing function, an image playing function, etc.).
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 may also be disposed in the processor 110 . In some other embodiments, the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves and radiate them through the antenna 1 .
  • the wireless communication module 160 can provide solutions for wireless communication applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC) technology, infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio signals into analog audio signals for output, and also for converting analog audio input into digital audio signals.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • Speaker 170A also referred to as a "horn" is used to convert audio electrical signals into sound signals.
  • Electronic device 100 can listen to music through speaker 170A, or listen to hands-free calls.
  • Receiver 170B also called “earpiece” is used to convert audio electrical signals into sound signals.
  • the receiver 170B can be placed close to the human ear to receive the voice.
  • the microphone 170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a phone call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 can be provided with two microphones 170C, which can also implement a noise reduction function in addition to listening to voice information. In some other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions, etc.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • pressure sensors 180A such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors.
  • a capacitive pressure sensor may be comprised of at least two parallel plates with conductive material.
  • the electronic device 100 determines the intensity of pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • the touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the position of the display screen 194 .
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 may receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • Camera 193 is used to capture still images or video.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the SIM card interface 195 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the embodiment of the present application does not specifically limit the specific structure of the electronic device on which the streaming media feature architecture can be set, nor the specific structure of the execution subject of the streaming media processing method, as long as the execution subject can process streaming media by running code recording the streaming media processing method provided in the embodiments of the present application.
  • the execution subject of a streaming media processing method provided in the embodiment of the present application may be a functional module in an electronic device capable of invoking and executing a program, or a processing device applied in an electronic device, for example, a chip platform.
  • the embodiment of the present application provides a streaming media feature architecture.
  • the streaming media feature architecture can be applied to an image processing system on a chip platform.
  • some algorithms are usually set on the chip platform, for example, feature point recognition, anti-shake algorithm, motion compensation, distortion algorithm and so on.
  • an electronic device uses a chip platform with these algorithms set, it can directly use these algorithms set on the chip platform to realize some image processing functions of the camera, so as to output images or videos processed by the above algorithms.
  • the anti-shake algorithm A1 is set on the chip platform A.
  • the equipment manufacturer may develop another anti-shake algorithm A2 different from that on the chip platform A.
  • the newly developed anti-shake algorithm A2 may need to be linked with the original algorithms on the chip platform (other algorithms A3 related to the anti-shake function). Therefore, when developing the new anti-shake algorithm A2, the device manufacturer's developers need to consider adapting the newly developed anti-shake algorithm A2 to the chip platform A, and even need to consider how it links with some original algorithms A3 on the chip platform A.
  • the embodiment of the present application provides a streaming media characteristic framework.
  • the streaming media feature framework is applied in the image processing system on the chip platform; while realizing the streaming media functions (such as taking pictures, recording, etc.), it can also be decoupled from the chip platform. Therefore, the streaming media feature framework provided by the embodiment of the present application can be reused on multiple chip platforms, for example, the Qualcomm chip platform, the MediaTek chip platform, and so on. In this way, one set of streaming media feature architecture can be developed and applied to the image processing systems of electronic devices using different chip platforms, so as to realize the camera function of the electronic device.
  • the streaming media feature framework assists the image processing system in realizing the camera function of the electronic device.
  • the information required for this can be separated into two parts: one part is information related to the chip platform, and the other part is information unrelated to the chip platform.
  • Information related to the chip platform may include: the acquisition and transmission of information collected by devices such as cameras, gyroscopes, and optical image stabilization, and may also include a hardware computing unit that provides hardware support when processing the above information (the hardware computing unit set on the chip platform).
  • the information irrelevant to the chip platform may include: the memory management mechanism when processing the above information, the characteristic algorithm when processing the image, and the like.
  • the information irrelevant to the chip platform can be set inside the streaming media feature architecture.
  • a memory management mechanism can be set inside the streaming media feature architecture, and a feature algorithm library can be set.
  • the feature algorithm library includes multiple feature algorithms for realizing camera functions.
  • to realize a specific function (for example, a beauty function), multiple feature algorithms in the feature algorithm library may be required to participate, which requires the feature algorithms to cooperate with each other.
  • the information related to the chip platform can be obtained and transmitted by setting related interfaces on the streaming media characteristic framework.
  • the data structure of the relevant interface is a commonly defined data structure, so that the streaming media characteristic architecture can realize data connection with different chip platforms, thereby realizing the multiplexing of the streaming media characteristic architecture on multiple chip platforms.
  • the streaming media feature architecture that can be reused across chip platforms is provided with an input interface and an output interface. The input interface is used to obtain the source data required by the architecture (for example, frame data and synchronized metadata); the streaming media characteristic architecture performs characteristic processing on the source data to obtain a processing result, and then outputs the processing result to the chip platform through the output interface.
  • the input interface and the output interface can exist as two independent interfaces, or can be used as an independent interface to realize input and output functions.
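The input/output docking described above can be sketched in code. This is an illustrative sketch only, not part of the original embodiment; the `Frame` and `StreamingInterface` names and their methods are invented for illustration, and show the variant where input and output are merged into one independent interface:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Frame:
    """A single frame plus the metadata captured in sync with it."""
    pixels: List[int]                                  # placeholder for raw image data
    metadata: Dict[str, Any] = field(default_factory=dict)

class StreamingInterface:
    """One independent interface playing both the input and the output role."""
    def __init__(self):
        self._inbox: List[Frame] = []    # source data pushed by the chip platform
        self._outbox: List[Frame] = []   # processing results for the chip platform

    def push_source(self, frame: Frame) -> None:
        # Called by the chip platform's acquisition module.
        self._inbox.append(frame)

    def pull_source(self) -> Frame:
        # Called by the processing engine inside the architecture.
        return self._inbox.pop(0)

    def push_result(self, frame: Frame) -> None:
        # Called by the engine after characteristic processing.
        self._outbox.append(frame)

    def pull_result(self) -> Frame:
        # Called by the chip platform's camera/video output module.
        return self._outbox.pop(0)
```

Because the architecture only ever touches the commonly defined `Frame` structure, the same interface object can dock with any platform that fills it in.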
  • the streaming media feature architecture that can be reused across chip platforms is also provided with a hardware interface, which is used to call the hardware computing unit of the chip platform, so that the hardware computing unit assists in processing the source data.
  • the hardware interface may be an interface in a client-to-server (Client+Server, CS) mode, and the streaming media feature architecture calls the hardware computing unit on the chip platform through the CS interface.
  • the hardware computing unit of the chip platform integrates an AI algorithm (face recognition). Therefore, when the streaming media characteristic architecture processes the frame data sequence (taking beauty as an example), it needs to call the hardware computing unit of the chip platform through hardware interface 1. After the hardware computing unit processes the frame data and obtains the face recognition result, it returns the result to the streaming media feature architecture through hardware interface 1, and the streaming media feature architecture then beautifies the faces in the image frames based on the face recognition result.
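The CS-mode round trip above can be sketched as follows. This is a hypothetical illustration: the class names, the `"face_detect"` operation string, and the toy pixel-threshold "recognition" stand in for whatever the platform's hardware computing unit actually provides:

```python
class HardwareComputeServer:
    """Stand-in for the chip platform's hardware computing unit with an
    integrated face-recognition capability (assumed synchronous)."""
    def detect_faces(self, frame):
        # Toy rule: treat any pixel value above 200 as a face region.
        return [i for i, v in enumerate(frame) if v > 200]

class HardwareInterface1:
    """Client side of the CS-mode hardware interface."""
    def __init__(self, server):
        self._server = server

    def call(self, op, frame):
        # Dispatch the requested operation to the platform-side server.
        if op == "face_detect":
            return self._server.detect_faces(frame)
        raise ValueError("unsupported op: " + op)

def beautify(frame, hw):
    """Beauty step: ask the platform for face regions, then smooth them."""
    faces = hw.call("face_detect", frame)
    out = list(frame)
    for i in faces:
        out[i] = 128   # toy 'smoothing' of the face pixels
    return out
```

Only the client stub talks to the platform; swapping platforms means re-implementing the server side, not the beauty algorithm.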
  • the streaming media feature architecture can obtain relevant information from the sensor driver on the chip platform through the hardware interface.
  • when the streaming media feature architecture assists the image processing system in realizing certain functions, it may be necessary to process the obtained data in order to control some sensors. In this case, the streaming media feature architecture can pass the control information through the hardware interface to the driver of the relevant sensor on the chip platform.
  • for example, in optical image stabilization, the streaming media feature architecture can obtain the shake direction and displacement of the lens from the gyroscope driver (the gyroscope inside the camera) on the chip platform through the hardware interface. The obtained shake direction and displacement are processed to obtain compensation information for the lens group in the camera, and the compensation information is transmitted to the camera driver through the hardware interface to compensate the lens group inside the camera, overcoming the image blur caused by vibration of the electronic device and thus realizing the anti-shake function.
  • the hardware interface can realize the above calling functions as a single independent interface, or the above functions can be realized by multiple independent interfaces.
  • for example, the hardware interface can be divided into hardware interface 1 and hardware interface 2, where hardware interface 1 is used to interact with the hardware computing unit on the chip platform, and hardware interface 2 is used to interact with the sensor drivers on the chip platform.
  • the streaming media feature architecture may also include a scene interface, which is used to obtain from the chip platform the scene selected by the user when using the camera.
  • the scene interface may exist as an independent interface, and the independent interface is used to receive the user's scene selection.
  • the scene interface can also be combined with the input interface into a single independent interface, which is used to receive the user's scene selection and to receive the source data.
  • the scene interface can also be combined with the input interface and the output interface into a single independent interface, which is used to receive the user's scene selection, receive the source data, and output the processing results.
  • since the streaming media feature architecture can be decoupled from the chip platform, most of the functions it realizes depend on its internal feature algorithms. Therefore, the streaming media feature architecture can be configured with different feature algorithms or feature algorithm groups (a group contains multiple associated feature algorithms).
  • the characteristic algorithm may include a data conversion algorithm, an optical anti-shake algorithm, an electronic anti-shake algorithm, a beauty algorithm, and the like.
  • the data conversion algorithm converts data, within the streaming media characteristic architecture, into a data format that the chip platform can recognize, so as to realize data interconnection with the hardware computing unit on the chip platform.
  • the optical anti-shake algorithm can obtain the data collected by the gyroscope inside the camera from the sensor driver of the chip platform through the hardware interface. After processing the data, it transmits the compensation information to the sensor driver through the hardware interface, so that the camera captures stable frame data and a stable video output is obtained.
  • the electronic anti-shake algorithm can process the source data within the streaming media characteristic architecture to obtain a stable video output.
  • the beautification algorithm can beautify the preview image captured by the camera within the streaming media feature architecture, and output the beautified image to the chip platform after processing, so as to obtain the beautified preview image.
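A characteristic algorithm library along the lines described above could be sketched as a small registry. This is an assumed design, not the patent's: the class, the algorithm names, and the grouping mechanism (several cooperating algorithms behind one feature) are invented for illustration:

```python
class FeatureAlgorithmLibrary:
    """Registry of characteristic algorithms keyed by name. A 'group' lets
    several algorithms cooperate to realize one feature."""
    def __init__(self):
        self._algos = {}
        self._groups = {}

    def register(self, name, fn):
        # fn: frame -> processed frame
        self._algos[name] = fn

    def define_group(self, feature, names):
        # Order matters: algorithms run in the listed sequence.
        self._groups[feature] = list(names)

    def run(self, feature, frame):
        # A feature with no group maps to the single algorithm of that name.
        for name in self._groups.get(feature, [feature]):
            frame = self._algos[name](frame)
        return frame
```

A "stabilize" feature could then chain a format-conversion step and an electronic anti-shake step, mirroring how the text says multiple algorithms cooperate for one function.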
  • the image processing system includes a scene selection module, a data acquisition module, a streaming media characteristic framework, a camera output module, a video output module, a hardware computing unit and a sensor driver module.
  • the scene selection module is used to receive the scene selected by the user when using the camera function of the electronic device, as an example, a photographing scene, a video recording scene, a portrait mode, a night scene mode, a normal aperture, a large aperture, etc.
  • the user may select multiple scenes at one time, for example, the user chooses to take a photo, and selects a night scene mode and a large aperture.
  • the data collection module is used to obtain the image frame collected by the camera and the metadata synchronized with the image frame, and send the image frame and the synchronized metadata to the streaming media feature architecture.
  • the camera output module is used to output and display the processing results of the streaming media feature architecture in the camera scene.
  • the video output module is used to output and display the processing results of the streaming media feature architecture in the video scene.
  • the hardware computing unit is used to provide hardware computing support for the streaming media feature architecture.
  • the sensor driver is used to provide the streaming media characteristic architecture with sensor data collected by the sensor or accept the processing result of the streaming media characteristic architecture (for example, compensation information of the sensor).
  • the streaming media feature architecture receives the scene information sent by the scene selection module through the general scene interface; receives the frame data and the metadata synchronized with the frame data through the general input interface; then processes the frame data with the characteristic algorithm (or characteristic algorithm group) that matches the scene information, and sends the processing result to the corresponding camera output module or video output module through the output interface.
  • when hardware computing support is needed, the hardware computing unit of the chip platform is called through the hardware interface to assist in completing the data processing; when the data collected by a sensor of the electronic device needs to be obtained, the sensor driver of the chip platform is invoked through the hardware interface to obtain the data collected by the sensor; when a sensor of the electronic device needs to be controlled, the control information is transmitted to the sensor driver on the chip platform through the hardware interface.
  • although the scene interface, the input interface and the output interface are described here as three independent interfaces, in practical applications the above three interfaces may be combined into one interface, or exist as two interfaces.
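The module flow above (scene selection, data collection, engine, photo/video outputs) can be wired up as a sketch. Everything here is an illustrative assumption: the closure-based engine, the `"stream"` metadata key, and the sample algorithms are invented to show the routing, not taken from the patent:

```python
def make_engine(algorithms):
    """Minimal processing engine: the scene interface records the current
    scene, the input interface runs the matching algorithm and routes the
    result to the photo or video output list."""
    photo_out, video_out = [], []
    state = {"scene": None}

    def scene_interface(scene):
        # Fed by the scene selection module on the chip platform.
        state["scene"] = scene

    def input_interface(frame, metadata):
        # Fed by the data collection module; metadata travels with the frame.
        result = algorithms[state["scene"]](frame)
        target = video_out if metadata.get("stream") == "video" else photo_out
        target.append((result, metadata))

    return scene_interface, input_interface, photo_out, video_out
```

The chip platform only ever calls the two interface functions; the output lists stand in for the camera output module and the video output module.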
  • FIG. 3 is a schematic diagram of a streaming media feature architecture provided by an embodiment of the present application.
  • the streaming media feature architecture can be applied to an image processing system on a chip platform.
  • the streaming media feature architecture is provided with an input interface, an output interface, a scene interface and hardware interfaces (hardware interface 1 and hardware interface 2).
  • the streaming media feature architecture also includes a processing engine.
  • the input interface is adapted to the chip platform to receive source data input by the acquisition module on the chip platform.
  • a characteristic algorithm library is set in the processing engine module, and a plurality of characteristic algorithms are set in the characteristic algorithm library.
  • the processing engine module is used to process the received image data (photographic input) or video data (video recording input) through characteristic algorithms.
  • the processing engine is divided into a camera path and a video path.
  • the camera path receives the image frame and corresponding metadata input through the input interface, and processes the image frame input through the relevant characteristic algorithm to obtain the image processing result, and then sends the image processing result as a camera output through the output interface to the chip platform.
  • the video path receives the video-input image frame sequence and the corresponding metadata through the input interface, processes the image frame sequence through the relevant characteristic algorithms to obtain the video processing result, and then sends the video processing result as the video output to the chip platform through the output interface.
  • the output interface is used to adapt to the chip platform, so as to output the processed image data (photographic output) or video data (video output) to the upper application in the image processing system of the chip platform.
  • the hardware interface 1 is used to adapt the hardware computing unit on the chip platform, so that when the processing engine module is processing image data or video data, it calls the hardware computing unit on the chip platform to assist in completing the data processing process.
  • the hardware interface 2 is used to adapt the sensor driver on the chip platform to obtain the information collected by the sensor to assist in completing the corresponding data processing process or assist in the realization of certain functions.
  • the scene interface can determine the scene when the camera of the electronic device collects images or videos, so as to determine which characteristic algorithm or algorithms to use.
  • the following uses a scene example to introduce how the streaming media processing method provided by the embodiment of the application is implemented when the streaming media characteristic framework provided by the embodiment of the application is applied to the image processing system, for example, how it docks with the chip platform, how the video recording function is realized, and so on.
  • the user can set whether to enable video anti-shake during the recording process, or the system can be set so that the video anti-shake function is turned on whenever the recording function is enabled.
  • the user can set the video anti-shake function to be enabled during the recording process.
  • the scene selection module will determine that the current scene is the video anti-shake scene.
  • the scene selection module sends the information representing the video anti-shake scene to the scene interface of the streaming media characteristic framework.
  • the scene interface may directly send the information representing the video anti-shake scene to the processing engine or send the information representing the video anti-shake scene to the processing engine after format conversion.
  • the processing engine selects a characteristic algorithm matching the information representing the video anti-shake scene from the characteristic algorithm library.
  • the information representing the video anti-shake scene may be a unique identifier used to represent the video anti-shake scene.
  • a unique identifier corresponding to each scene may be set for the multiple scenes supported by the electronic device, and the unique identifier of the current scene is determined according to the user's scene selection when taking pictures or recording video.
  • the user can select multiple scenes at one time (for example, large aperture, anti-shake, beauty, etc.), and each scene can correspond to one characteristic algorithm or to multiple characteristic algorithms in the characteristic algorithm library.
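The unique-identifier lookup described above can be sketched as two tables. The identifier values, scene names, and algorithm names below are invented for illustration; the point is only the mapping of each scene id to one algorithm or to a cooperating group:

```python
# Unique identifiers for the scenes the device supports (assumed values).
SCENES = {"video_stabilize": 0x01, "large_aperture": 0x02, "beauty": 0x03}

# One scene id may map to a single algorithm or to several cooperating ones.
ALGO_FOR_SCENE = {
    0x01: ["ois", "eis"],            # anti-shake needs two algorithms
    0x02: ["bokeh"],
    0x03: ["face_detect", "smooth"], # beauty needs detection, then smoothing
}

def algorithms_for(selected):
    """Resolve the user's (possibly multiple) scene choices into the
    ordered list of characteristic algorithms to run."""
    names = []
    for scene in selected:
        names.extend(ALGO_FOR_SCENE[SCENES[scene]])
    return names
```

Because the mapping lives inside the architecture, a new chip platform needs no change here; only the algorithms' hardware-facing calls go through the docking interfaces.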
  • the data collection module can obtain the frame data sequence collected by the camera and the metadata of each frame data synchronization in the frame data sequence.
  • the data collection module sends the frame data sequence and metadata of each frame data synchronization in the frame data sequence to the input interface of the streaming media characteristic framework.
  • the input interface sends the frame data sequence and metadata of each frame data synchronization in the frame data sequence to the video input channel.
  • the acquired data may include a preview stream and a video stream, and the preview stream and the video stream may use different characteristic algorithms, or may use exactly the same characteristic algorithm.
  • the frame data sequence received by the video input channel is processed by the characteristic algorithm (or characteristic algorithm group) determined in advance according to the scene unique identifier (or scene unique identifier group) in the streaming media characteristic framework.
  • for example, the combination of optical anti-shake and electronic anti-shake can be adopted on a high-end chip platform (such as chip platform A).
  • the image sensor is installed on a free-floating bracket, and a gyroscope installed on the electronic device can sense the shaking direction and amplitude of the electronic device.
  • the characteristic algorithm corresponding to optical image stabilization provided by the embodiment of the present application can calculate the displacement compensation of the image sensor based on the data collected by the gyroscope, and the displacement compensation can be sent back to the image sensor to control the image sensor to float based on the displacement compensation.
  • in electronic image stabilization, the images collected by the camera are analyzed and processed through software algorithms; the edge of the image can be used to compensate the blurred part in the middle, so as to achieve anti-shake. This anti-shake technology requires frame data for its processing.
  • the feature algorithm in electronic device A needs to process the data uploaded by the gyroscope, and pass the processing result to the image sensor.
  • the displacement compensation is sent to the image sensor driver through the hardware interface 2.
  • the image sensor driver may control the image sensor to compensate for floating based on the displacement.
  • the characteristic algorithms in electronic device A also include a characteristic algorithm corresponding to electronic anti-shake, which can analyze and process the frame data collected by the image sensor to perform compensation and obtain a further stabilized image.
  • after receiving an image frame, the processing engine can process it with the data format conversion algorithm to obtain an image format that the hardware computing unit on the chip platform can recognize, and then call hardware interface 1 so that the hardware computing unit on the chip platform identifies whether the image is blurred. The hardware computing unit returns the recognition result to the processing engine, and the processing engine processes the image frame according to the recognition result through the corresponding characteristic algorithm: if the image is blurred, the characteristic algorithm compensates the image frame to obtain an anti-shake image and sends it to the output interface; if the image is clear, the clear image frame is cropped and output to the output interface. This results in a stable, stabilized video stream.
  • the output interface sends the image frame sequence and the corresponding metadata to the video output, which can upload the image frames to the upper-layer application, and the user obtains a video file processed by both optical image stabilization and electronic image stabilization.
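The per-frame decision above (blurred frames are compensated, clear frames are cropped) can be sketched as follows. This is a toy illustration under stated assumptions: frames are reduced to single numbers, `is_blurred` stands in for the hardware computing unit's blur recognition behind hardware interface 1, and a blurred frame is rebuilt from its stable neighbours:

```python
def eis_process(frames, is_blurred, crop):
    """Electronic stabilization pass over a frame sequence.

    is_blurred: callable standing in for the platform's blur recognition.
    crop: callable standing in for the characteristic cropping step.
    """
    out = []
    for i, f in enumerate(frames):
        if is_blurred(f):
            # Compensate a blurred frame from its neighbours (edges fall
            # back to the frame itself).
            left = frames[i - 1] if i > 0 else f
            right = frames[i + 1] if i + 1 < len(frames) else f
            out.append(crop((left + right) // 2))
        else:
            # Clear frames are simply cropped and passed through.
            out.append(crop(f))
    return out
```

Note the division of labour: recognition runs on the platform's hardware computing unit, while the compensation and cropping logic stays inside the reusable architecture.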
  • suppose this electronic device needs to use chip platform B, or the electronic device manufacturer develops another high-end electronic device B which, due to special needs, has to use chip platform B.
  • the camera of the electronic device B and the camera of the electronic device A use cameras of the same manufacturer and the same model.
  • the video anti-shake algorithm on electronic device B can reuse the video anti-shake algorithm on electronic device A.
  • the streaming media feature architecture adopted in electronic device A can be reused in electronic device B, because the interfaces through which the streaming media feature architecture interacts with the chip platform are interfaces with a common language structure. There is no need to develop a video anti-shake function adapted to chip platform B, nor to re-develop the other camera functions for chip platform B.
  • the embodiment of the present application uses the anti-shake algorithm as an example to describe that the streaming media characteristic framework applied in the image processing system provided by the embodiment of the present application can be applied to multiple chip platforms and can be multiplexed on multiple chip platforms.
  • the image processing system that realizes each function of the camera of the electronic device may set characteristic algorithms corresponding to each function of the camera in a characteristic algorithm library in the processing engine.
  • the embodiment of the present application also provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiment of the present application also provides a computer program product, which enables the first device to implement the steps in the foregoing method embodiments when the computer program product is run on the first device.
  • the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments in the present application can be completed by instructing related hardware through computer programs, and the computer programs can be stored in a computer-readable storage medium.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form.
  • the computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the first device, a recording medium (e.g., a hard disk, magnetic disk or optical disk), a computer memory, a read-only memory (ROM, Read-Only Memory), a random-access memory (RAM, Random Access Memory), electrical carrier signals, telecommunication signals, and software distribution media.
  • in some jurisdictions, under legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
  • the embodiment of the present application also provides a chip system, the chip system includes a processor, the processor is coupled to the memory, and the processor executes the computer program stored in the memory to implement the steps of any method embodiment of the present application.
  • the chip system can be a single chip, or a chip module composed of multiple chips.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Studio Devices (AREA)

Abstract

This application provides a streaming media characteristic architecture, a processing method, an electronic device and a readable storage medium, relating to the field of streaming media technology. The architecture divides the data related to streaming media characteristics into two parts: one part is related to the chip platform and the other is independent of it. The architecture docks the platform-related data with the chip platform through a docking module, while the platform-independent part, such as the software algorithms for image processing and the memory management mechanism, is placed in the architecture's processing engine. Such an architecture achieves data docking with the chip platform through the docking module, while data processing is concentrated in a processing engine that has no direct interaction with the chip platform, realizing decoupling from the chip platform. If the docking module can be reused across chip platforms, the architecture can be ported to other chip platforms; if it cannot, a docking module capable of docking with the other chip platform is developed, thereby improving development efficiency.

Description

Streaming media characteristic architecture, processing method, electronic device and readable storage medium
This application claims priority to Chinese patent application No. 202210193100.5, filed with the China National Intellectual Property Administration on February 28, 2022 and entitled "Streaming media characteristic architecture, processing method, electronic device and readable storage medium", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of streaming media, and in particular to a streaming media characteristic architecture, a processing method, an electronic device and a readable storage medium.
Background
With the development of technology, the chip platforms available to electronic devices have diversified. Because electronic devices target different customer groups, the chip platforms they adopt may also differ, so different types or models of electronic devices from the same manufacturer may use different chip platforms.
There may be differences between chip platforms from different vendors, or between different models of chip platform from the same vendor. As a result, some general-purpose functions of an electronic device (for example, the photographing and video-recording functions of the camera) cannot be reused across multiple chip platforms. Device manufacturers have to develop a separate image processing system (the system that implements the camera functions) adapted to each chip platform, which makes development inefficient.
Summary
This application provides a streaming media characteristic architecture, a processing method, an electronic device and a readable storage medium, which can improve development efficiency.
To achieve the above purpose, this application adopts the following technical solutions:
In a first aspect, this application provides a streaming media characteristic architecture applied to a chip platform, the architecture including:
a docking module, configured to dock with the chip platform, so as to obtain streaming media data related to the chip platform and to send processed streaming media data to the chip platform;
a processing engine, provided with characteristic algorithms independent of the chip platform, the characteristic algorithms being used to process the streaming media data obtained by the docking module to obtain the processed streaming media data.
In this application, the streaming media characteristic architecture divides the data related to streaming media characteristics into two parts: one part is related to the chip platform and the other is independent of it. The architecture docks the platform-related data with the chip platform through the docking module, and places the platform-independent part, such as the software algorithms for image processing and the memory management mechanism, in the processing engine of the architecture. Such an architecture achieves data docking with the chip platform through the docking module, while data processing is concentrated in a processing engine that has no direct interaction with the chip platform, realizing decoupling from the chip platform. If the docking module can be reused across chip platforms, the architecture can be ported to other chip platforms; if it cannot, a docking module capable of docking with the other chip platform is developed, thereby improving development efficiency.
As an implementation of the first aspect, the docking module includes:
an input interface, configured to dock with the chip platform to obtain the streaming media data from the chip platform;
a scene interface, configured to dock with the chip platform to obtain from the chip platform the capture scene of the streaming media data;
an output interface, configured to dock with the chip platform to send the processed streaming media data to the chip platform.
In this application, the docking module can also be separated by function into three independent sub-interfaces, providing diverse implementations.
As an implementation of the first aspect, the processing engine is provided with characteristic algorithms corresponding to each capture scene, and is further configured to process the streaming media data corresponding to a capture scene through the characteristic algorithm corresponding to that scene, to obtain the processed streaming media data corresponding to the scene.
In this application, the streaming media characteristic architecture can realize multiple scene functions of the camera in the electronic device; different characteristic algorithms can be set for different scenes, and streaming media captured in different scenes is processed by different characteristic algorithms, so that a single streaming media characteristic architecture can realize all functions of the camera.
As an implementation of the first aspect, the streaming media characteristic architecture further includes:
a hardware interface, configured to dock with the hardware resources on the chip platform;
the processing engine is further configured to use the hardware resources on the chip platform through the hardware interface to complete the processing of the streaming media data.
In this application, a hardware interface can also be provided to dock with the hardware resources on the chip platform. If the hardware interface can be reused across chip platforms, the architecture can be ported to other chip platforms; if it cannot, a hardware interface capable of docking with the other chip platform is developed, thereby improving development efficiency.
As an implementation of the first aspect, the hardware interface includes:
a first hardware interface, configured to dock with the hardware computing unit on the chip platform;
the processing engine is further configured to use the hardware computing unit on the chip platform through the first hardware interface to complete the processing of the streaming media data.
Where a chip platform provides a hardware computing unit, using the platform's hardware computing resources through the hardware interface means the platform's own computing resources can be used, and re-developing software resources with the same functions can be avoided, improving development efficiency.
As an implementation of the first aspect, the hardware interface further includes:
a second hardware interface, configured to dock with the sensor drivers on the chip platform;
the processing engine is further configured to obtain, through the second hardware interface, the data collected by the sensor corresponding to a sensor driver, and to process the streaming media data based on the data collected by the sensor.
In this application, the data collected by sensors can also be obtained through the hardware interface, so that the chip platform's resources can be used when sensor data needs to participate in data processing. This also provides a basis for cross-platform reuse.
As an implementation of the first aspect, the processing engine is further configured to:
obtain, through the second hardware interface, the data collected by the sensor corresponding to the sensor driver, process the data collected by the sensor to obtain control information, and send the control information to the sensor driver on the chip platform, the control information being used to control the sensor.
As an implementation of the first aspect, the docking module is applicable to multiple types of chip platforms.
In a second aspect, this application provides a streaming media processing method applied to an electronic device, the electronic device including: a docking module docked with the chip platform of the electronic device, and a processing engine provided with characteristic algorithms independent of the chip platform; the method includes:
the docking module obtaining, from the chip platform of the electronic device, the streaming media data collected by the camera of the electronic device;
the processing engine processing the streaming media data through the characteristic algorithms to obtain processed streaming media data;
the docking module sending the processed streaming media data to the chip platform of the electronic device.
As an implementation of the second aspect, the method further includes:
the docking module obtaining, from the chip platform of the electronic device, the capture scene in which the camera of the electronic device collected the streaming media data;
the processing engine determining the characteristic algorithm corresponding to the capture scene;
correspondingly, the processing engine processing the streaming media data through the characteristic algorithms to obtain the processed streaming media data includes:
the processing engine processing the streaming media data based on the characteristic algorithm corresponding to the capture scene to obtain the processed streaming media data.
As an implementation of the second aspect, the electronic device further includes a first hardware interface and a second hardware interface docked with the hardware resources on the chip platform; the method further includes:
the processing engine using the hardware computing unit on the chip platform through the first hardware interface to process the streaming media data;
and/or, the processing engine obtaining, through the second hardware interface, the data collected by the sensor corresponding to a sensor driver on the chip platform, and processing the streaming media data based on the data collected by the sensor;
and/or, the processing engine obtaining, through the second hardware interface, the data collected by the sensor corresponding to the sensor driver, processing the data collected by the sensor to obtain control information, and sending the control information to the sensor driver on the chip platform, the control information being used to control the sensor.
In a third aspect, an electronic device is provided, including a chip platform on which the streaming media characteristic architecture of any one of the first aspect of this application is set.
In a fourth aspect, a chip system is provided, including a chip platform coupled to a memory, the streaming media characteristic architecture of any one of the first aspect of this application being set on the chip platform.
In a fifth aspect, a computer-readable storage medium is provided, storing a computer program which, when executed by one or more processors, implements the method of any one of the second aspect of this application.
In a sixth aspect, this application provides a computer program product which, when run on a device, causes the device to execute the method of any one of the second aspect of this application.
It can be understood that, for the beneficial effects of the second to sixth aspects above, reference may be made to the relevant descriptions in the first aspect, which are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of this application;
FIG. 2 is a schematic structural diagram of an image processing system provided by an embodiment of this application;
FIG. 3 is a schematic structural diagram of a streaming media characteristic architecture provided by an embodiment of this application;
FIG. 4 is a schematic structural diagram of an image processing system provided by an embodiment of this application.
Detailed Description
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of this application. However, it should be clear to those skilled in the art that this application can also be implemented in other embodiments without these specific details.
It should be understood that, when used in the specification and the appended claims of this application, the term "include" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
It should also be understood that, in the embodiments of this application, "one or more" means one, two or more; "and/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist simultaneously, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the objects before and after it.
In addition, in the description of the specification and the appended claims of this application, the terms "first", "second", "third", "fourth", etc. are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
Reference in this specification to "one embodiment" or "some embodiments" and the like means that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the statements "in one embodiment", "in some embodiments", "in some other embodiments", "in still other embodiments" and the like appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
The streaming media characteristic architecture and the streaming media processing method provided by the embodiments of this application are applicable to electronic devices provided with cameras. The electronic device may be a tablet computer, a mobile phone, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA) or another electronic device, and may also be an electronic device other than the above examples. The embodiments of this application do not limit the specific type of the electronic device.
FIG. 1 shows a schematic structural diagram of an electronic device. The electronic device 100 may include a processor 110 (chip platform), an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, keys 190, a motor 191, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a touch sensor 180K, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors. For example, the streaming media characteristic architecture provided by the embodiments of this application may be set on the processor 110, which can then execute the streaming media processing method in the embodiments of this application.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction opcodes and timing signals, and complete the control of fetching and executing instructions.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves the efficiency of the system.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example saving music, video and other files in the external memory card.
The internal memory 121 may be used to store computer-executable program code, the executable program code including instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (for example, a sound playback function, an image playback function, etc.).
In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
The charging management module 140 is configured to receive charging input from a charger, where the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130.
The power management module 141 is configured to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and so on.
In some other embodiments, the power management module 141 may also be provided in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may also be provided in the same device.
The wireless communication function of the electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 can be used to cover a single or multiple communication frequency bands, and different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antennas can be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated through the antenna 1.
The wireless communication module 160 can provide solutions for wireless communication applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technology.
The electronic device 100 can implement audio functions, for example music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio signals into analog audio signal output, and also to convert analog audio input into digital audio signals. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
The speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals. The electronic device 100 can play music or hands-free calls through the speaker 170A.
The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals. When the electronic device 100 answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear.
The microphone 170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak close to the microphone 170C to input the sound signal into it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which, in addition to picking up voice, can also implement a noise reduction function. In still other embodiments, the electronic device 100 may be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording functions, etc.
The headset jack 170D is used to connect wired headsets. The headset jack 170D may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be provided on the display 194. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure according to the change in capacitance. When a touch operation acts on the display 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A, and can also calculate the touch position according to the detection signal of the pressure sensor 180A.
The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be provided on the display 194; the touch sensor 180K and the display 194 form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect touch operations on or near it, and can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display 194. In other embodiments, the touch sensor 180K may also be provided on the surface of the electronic device 100, at a position different from that of the display 194.
The keys 190 include a power key, volume keys, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to the user settings and function control of the electronic device 100.
The motor 191 can generate vibration prompts. The motor 191 can be used for vibration prompts for incoming calls, and can also be used for touch vibration feedback.
The electronic device 100 implements the display function through the GPU, the display 194, the application processor, etc. The GPU is a microprocessor for image processing, connecting the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 194 is used to display images, videos, etc. In some embodiments, the electronic device 100 may include 1 or N displays 194, where N is a positive integer greater than 1.
The camera 193 is used to capture still images or videos. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out from it to achieve contact with and separation from the electronic device 100. The electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
The embodiments of this application do not specifically limit the structure of the electronic device on which the streaming media characteristic architecture can be set, nor the structure of the execution body of the streaming media processing method, as long as it can perform processing according to the streaming media processing method provided by the embodiments of this application by running code recording that method. For example, the execution body of the streaming media processing method provided by the embodiments of this application may be a functional module in the electronic device capable of calling and executing a program, or a processing apparatus applied in the electronic device, for example a chip platform.
本申请实施例提供一种流媒体特性架构。该流媒体特性架构可以应用在芯片平台上的图像处理***中。
目前,芯片平台上通常设置有一些算法,例如,特征点识别、防抖算法、运动补偿、畸变算法等。当一个电子设备采用设置了这些算法的芯片平台时,可以直接采用芯片平台上设置的这些算法,以实现相机的一些图像处理功能,从而输出经过上述算法处理过的图像或视频。
然而,在实际应用中,设备厂商为了满足用户特定需求或者为了提高相机的拍照或录像效果,可能会在使用芯片平台A(设有算法A1)时,另外开发其他效果更好的 算法A2(A2和A1实现相同功能)。
作为示例,芯片平台A上设有防抖算法A1,设备厂商为了使得视频录制时的防抖效果更好,可能会开发与芯片平台A上的防抖算法A1不同的另一防抖算法A2,新开发的防抖算法A2可能需要与芯片平台上原有的算法(其他与实现防抖功能相关的算法A3)进行联动。因此,这就需要设备厂商的开发人员在开发新的防抖算法A2时考虑新开发的防抖算法A2与芯片平台A适配的问题,甚至还需要考虑如何与芯片平台A上原有的一些算法A3联动的问题。
这就导致,当设备厂商采用另一芯片平台B时,由于另一芯片平台B和之前的芯片平台A本身的架构不同,集成的防抖算法B1和芯片平台A上的防抖算法A1也不同,设备厂商需要针对芯片平台B的架构以及芯片平台B本身集成的算法另外开发与芯片平台B适配的防抖算法B2。在开发过程中,还需要考虑新开发的防抖算法B2与芯片平台B上原有的一些算法B3联动的问题,导致开发效率较低。
另外,随着设备厂商的研发能力越来越强大,设备厂商采用芯片平台A时,可能需要重新开发多个效果更好的算法(A4、A5、A6……),这样设备厂商在开发每个算法时均需考虑与芯片平台A适配的问题,以及考虑与芯片平台A上的其他算法联动的问题。随着新开发的算法越来越多,将这些算法移植到其他芯片平台上时,均需要重新考虑与其他芯片平台以及其他芯片平台内的其他算法适配的问题,相当于重新开发多套算法,导致开发效率很低。
为了解决上述问题,本申请实施例提供一种流媒体特性架构,当流媒体特性架构应用在芯片平台上的图像处理***中时,在实现流媒体功能(例如,拍照、录像等功能)的同时,还能够与芯片平台解耦。因此,本申请实施例提供的流媒体特性架构可以在多个芯片平台上复用,例如,高通芯片平台,联发科芯片平台等。从而实现开发一套流媒体特性架构,该套流媒体特性架构可以应用在采用不同芯片平台的电子设备上的图像处理***中,以实现电子设备的相机功能。
在具体实现时,该流媒体特性架构协助图像处理系统实现电子设备的相机功能所需要的信息可以分离为两部分,其中一部分为与芯片平台相关的信息,另一部分为与芯片平台无关的信息。
与芯片平台相关的信息可以包括:摄像头、陀螺仪、光学防抖等装置设备采集的信息的获取与传递,还可以包括在对上述信息进行处理时提供硬件支持的硬件计算单元(该硬件计算单元设置在芯片平台上)。与芯片平台无关的信息可以包括:在对上述信息进行处理时的内存管理机制、图像处理时的特性算法等。
其中,与芯片平台无关的信息可以设置在流媒体特性架构的内部。例如,流媒体特性架构内部可以设置内存管理机制,可以设置特性算法库,该特性算法库包括多个用于实现相机功能的特性算法。当然,为了实现某个特定功能(例如,美颜功能),可能需要特性算法库中的多个特性算法参与,这就需要多个特性算法相互配合。具体可参照后面实施例的相关描述。
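以下用一段示意性的Python代码对上述内存管理机制作一个极简的草图(其中FrameBufferPool等名称均为本文为便于说明而假设,并非本申请限定的实现方式):

```python
class FrameBufferPool:
    """流媒体特性架构内部的一种简化内存管理机制:
    预先分配固定数量的帧缓冲区并循环复用,避免逐帧申请/释放内存。"""

    def __init__(self, buffer_count, buffer_size):
        # 预先分配 buffer_count 个大小为 buffer_size 的缓冲区
        self._free = [bytearray(buffer_size) for _ in range(buffer_count)]
        self._in_use = 0

    def acquire(self):
        """取出一个空闲缓冲区,用于承接一帧数据。"""
        if not self._free:
            raise RuntimeError("缓冲区耗尽")
        self._in_use += 1
        return self._free.pop()

    def release(self, buf):
        """处理完成后归还缓冲区,供后续帧复用。"""
        self._in_use -= 1
        self._free.append(buf)

    @property
    def in_use(self):
        return self._in_use
```

由于内存管理机制完全位于流媒体特性架构内部,上述草图不依赖任何芯片平台接口,这正是"与芯片平台无关的信息"可以随架构整体复用的原因。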
与芯片平台相关的信息可以通过在流媒体特性架构上设置相关的接口实现上述信息的获取和传递。该相关的接口的数据结构为通用定义的数据结构,使得流媒体特性架构可以和不同的芯片平台实现数据对接,从而实现该流媒体特性架构在多个芯片平台上的复用。
作为接口的一个示例,可跨芯片平台复用的流媒体特性架构设置有输入接口和输出接口,该输入接口用于从芯片平台获取该流媒体特性架构所需的源数据(例如,帧数据以及同步的元数据等),该流媒体特性架构对源数据进行特性处理获得处理结果后,通过该输出接口将处理结果输出给芯片平台。
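上述输入接口、输出接口与通用定义的数据结构,可以用如下示意性的Python代码勾勒(SourceData、StreamingFeatureFramework等名称均为本文假设,仅为理解接口关系的草图,并非任何芯片平台的真实API):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class SourceData:
    """通用定义的数据结构:帧数据以及与其同步的元数据。"""
    frame: bytes                                           # 帧数据(示意)
    metadata: Dict[str, Any] = field(default_factory=dict)  # 同步的元数据,如时间戳、曝光参数


class StreamingFeatureFramework:
    """流媒体特性架构的接口骨架:输入接口与输出接口。"""

    def __init__(self):
        self._outputs: List[SourceData] = []

    def input_interface(self, data: SourceData) -> None:
        """输入接口:从芯片平台获取源数据,触发特性处理。"""
        result = self._process(data)
        self.output_interface(result)

    def _process(self, data: SourceData) -> SourceData:
        # 特性处理(此处仅作占位:透传帧数据并在元数据中打标记)
        processed = SourceData(frame=data.frame, metadata=dict(data.metadata))
        processed.metadata["processed"] = True
        return processed

    def output_interface(self, result: SourceData) -> None:
        """输出接口:将处理结果输出给芯片平台(此处暂存以便读取)。"""
        self._outputs.append(result)
```

只要不同芯片平台均按SourceData这种通用定义的结构与架构对接,架构内部的处理逻辑即可不加修改地复用。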
在实际应用中,该输入接口和输出接口可以作为两个独立的接口存在,也可以作为一个独立的接口实现输入输出功能。
作为接口的另一示例,可跨芯片平台复用的流媒体特性架构还设置有硬件接口,该硬件接口用于调用芯片平台的硬件计算单元,以通过芯片平台的硬件计算单元协助对源数据的处理。该硬件接口可以为客户端到服务端(Client+Server,CS)模式的接口,流媒体特性架构通过CS接口调用芯片平台上的硬件计算单元。
例如,芯片平台的硬件计算单元中集成了AI算法(识别人脸),因此,在流媒体特性架构对帧数据序列进行处理的过程中,以美颜为例,需要通过硬件接口1调用芯片平台的硬件计算单元,在硬件计算单元对帧数据进行处理获得人脸识别结果后,将人脸识别结果通过硬件接口1返回给流媒体特性架构,流媒体特性架构继续基于人脸识别结果对图像帧中的人脸进行美颜处理。
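上述通过硬件接口1调用硬件计算单元的CS模式流程,可以用如下示意代码表示(HardwareComputeUnit、beautify等名称以及人脸识别的返回值均为本文假设,仅用于说明调用关系):

```python
class HardwareComputeUnit:
    """模拟芯片平台上的硬件计算单元(服务端),集成了人脸识别AI算法。"""

    def detect_faces(self, frame):
        # 示意:返回人脸区域列表,实际由硬件上的AI算法给出
        return [(10, 10, 50, 50)] if frame.get("has_face") else []


class HardwareInterface1:
    """硬件接口1:客户端到服务端(CS)模式,代理对硬件计算单元的调用。"""

    def __init__(self, unit):
        self._unit = unit

    def call(self, op, frame):
        if op == "detect_faces":
            return self._unit.detect_faces(frame)
        raise ValueError("未知的硬件计算操作: " + op)


def beautify(frame, hw_if1):
    """美颜特性算法:先经硬件接口1获取人脸识别结果,再对人脸区域做美颜处理。"""
    faces = hw_if1.call("detect_faces", frame)
    out = dict(frame)
    out["beautified_regions"] = faces  # 示意:仅记录被美颜处理的区域
    return out
```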
作为接口的另一示例,在流媒体特性架构对源数据进行处理时,需要基于其他传感器采集的信息协助处理。这种情况下,流媒体特性架构可以通过硬件接口从芯片平台上的传感器驱动获取相关的信息。另外,在流媒体特性架构协助图像处理***实现某些功能时,可能需要对获得的数据进行处理以对某些传感器进行控制,在这种情况下,流媒体特性架构可以通过硬件接口将控制信息传递给芯片平台上相关传感器的驱动。
例如,在光学防抖中,流媒体特性架构可以通过硬件接口从芯片平台上的陀螺仪(摄像头内部的陀螺仪)驱动获取镜头的抖动方向和位移量,流媒体特性架构采用匹配的特性算法对获取的抖动方向和位移量进行处理,从而得到摄像头内的镜片组的补偿信息,并将该补偿信息通过硬件接口传输给摄像头驱动,以对摄像头内部的镜片组进行补偿,克服因电子设备振动导致的影像模糊,从而实现防抖功能。
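上述光学防抖的处理流程可以概括为如下示意代码(GyroDriver、ois_step等名称为本文假设,补偿模型也做了大幅简化,仅用于说明数据在硬件接口两侧的流向):

```python
class GyroDriver:
    """模拟芯片平台上的陀螺仪驱动(示意)。"""

    def read(self):
        # 返回镜头的抖动方向与位移量(示意数据)
        return {"direction": (1.0, -0.5), "displacement": 0.2}


class CameraDriver:
    """模拟摄像头驱动:接收补偿信息,驱动镜片组反向移动。"""

    def __init__(self):
        self.last_compensation = None

    def apply_compensation(self, comp):
        self.last_compensation = comp


def ois_step(gyro, camera):
    """光学防抖特性算法的一次处理:读取抖动信息,计算补偿,下发驱动。"""
    shake = gyro.read()
    dx, dy = shake["direction"]
    d = shake["displacement"]
    # 简化模型:补偿量与抖动方向相反、幅度相同
    comp = (-dx * d, -dy * d)
    camera.apply_compensation(comp)
    return comp
```

数据的获取(陀螺仪驱动)与补偿的下发(摄像头驱动)均经过硬件接口完成,特性算法本身不感知具体芯片平台。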
在实际应用中,硬件接口可以为一个独立的接口实现调用功能,也可以为多个独立的接口实现上述功能。
作为一个示例,该硬件接口可以分为硬件接口1和硬件接口2,其中,硬件接口1用于与芯片平台上的硬件计算单元交互,硬件接口2用于与芯片平台上的传感器驱动交互。
由于电子设备的相机可能具有多个功能(例如,拍照、录像、美颜等),为了满足多场景要求,该流媒体特性架构还可以包括场景接口,该场景接口用于从芯片平台获取用户使用该相机的功能时的场景选择。
当然,实际应用中,该场景接口可以作为独立的接口存在,该独立的接口用于接收用户的场景选择。
该场景接口也可以和输入接口作为一个独立的接口存在,该独立的接口用于接收用户的场景选择和接收源数据。
该场景接口还可以和输入接口以及输出接口作为一个独立的接口存在,该独立的接口用于接收用户的场景选择、接收源数据以及输出处理结果。
当然,当一个接口具备多个功能时,需要支持多路数据。本申请实施例对接口的形式不做限制。
由于流媒体特性架构可以和芯片平台解耦,能够实现的大部分功能取决于内部的特性算法,所以,该流媒体特性架构可以根据不同的场景配置不同的特性算法或特性算法组(包括多个具有关联关系的特性算法)。
作为特性算法的一些示例,该特性算法可以包括数据转换算法、光学防抖算法、电子防抖算法、美颜算法等。
该数据转换算法在该流媒体特性架构内部将数据转换为芯片平台可以识别的数据格式,以与芯片平台上的硬件计算单元实现数据对接。
该光学防抖算法可以通过硬件接口从芯片平台的传感器驱动获取摄像头内部的陀螺仪采集的数据,在对数据进行处理后,将补偿信息通过硬件接口传送给传感器驱动,以使得摄像头采集稳定的帧数据,从而得到稳定的视频输出。
该电子防抖算法可以在该流媒体特性架构内部将源数据进行处理,得到稳定的视频输出。
该美颜算法可以在该流媒体特性架构内部对摄像头采集的预览图像进行美颜处理,若该美颜算法不需要芯片平台的硬件计算单元的支持,则可以在流媒体特性架构内部实现对预览图像的美颜处理,在处理后输出给芯片平台,从而得到美颜后的预览图像。
当然,实际应用中,流媒体特性架构内部可以设置其他更多的特性算法,以匹配该流媒体特性架构所在的电子设备提供的与图像处理相关的功能。
为了更清楚地了解本申请实施例提供的流媒体特性架构,首先介绍该流媒体特性架构应用的图像处理系统。
参见图2,为本申请实施例提供的设置了流媒体特性架构的图像处理***。该图像处理***包括场景选择模块、数据采集模块、流媒体特性架构、拍照输出模块、录像输出模块、硬件计算单元和传感器驱动模块。
其中,场景选择模块,用于接收用户在使用电子设备的相机功能时选择的场景,作为示例,拍照场景、录像场景、人像模式、夜景模式、普通光圈、大光圈等。当然,实际应用中,还可以包括其他场景示例,本申请不再一一举例。当然,用户可能一次选用多个场景,例如,用户选择拍照、且选择了夜景模式和大光圈。
数据采集模块,用于获取摄像头采集的图像帧,以及与该图像帧同步的元数据,并将图像帧和同步的元数据发送给流媒体特性架构。
拍照输出模块,用于将拍照场景下的流媒体特性架构的处理结果输出显示。
录像输出模块,用于将录像场景下的流媒体特性架构的处理结果输出显示。
硬件计算单元,用于为流媒体特性架构提供硬件计算支持。
传感器驱动,用于为流媒体特性架构提供传感器采集的传感器数据,或者接收流媒体特性架构的处理结果(例如,传感器的补偿信息)。
流媒体特性架构,通过通用的场景接口接收场景选择模块发送的场景信息;通过通用的输入接口接收帧数据以及与帧数据同步的元数据;然后通过与场景信息匹配的特性算法(或特性算法组)对帧数据进行处理,并将处理结果通过输出接口发送至相应的拍照输出模块或录像输出模块。在对数据进行处理的过程中,在需要芯片平台提供硬件计算支持的情况下,通过硬件接口调用芯片平台的硬件计算单元协助完成数据处理过程;在需要获取电子设备的传感器采集的数据的情况下,通过硬件接口调用芯片平台的传感器驱动获得传感器采集的数据;在需要对电子设备的传感器进行控制的情况下,通过硬件接口将控制信息传输给芯片平台上的传感器驱动。
虽然图2中,场景接口、输入接口和输出接口为三个独立的接口,但在实际应用中,上述三个接口可以合并为一个接口或两个接口。
在理解流媒体特性架构在图像处理***中的位置连接关系后,下面将介绍流媒体特性架构的示意图。
参见图3,为本申请实施例提供的流媒体特性架构的示意图。该流媒体特性架构可以应用在芯片平台上的图像处理系统中。
如前所述,该流媒体特性架构设置有输入接口、输出接口、场景接口和硬件接口(硬件接口1和硬件接口2)。另外,该流媒体特性架构还包括处理引擎。
其中,输入接口用于与芯片平台适配,以接收芯片平台上的采集模块输入的源数据。
处理引擎模块中设置有特性算法库,该特性算法库中设置有多个特性算法。处理引擎模块用于通过特性算法对接收到的图像数据(拍照输入)或视频数据(录像输入)进行处理。
作为该处理引擎的另一示例,该处理引擎中分为拍照通路和录像通路。该拍照通路通过输入接口接收拍照输入的图像帧和对应的元数据,并通过相关的特性算法对拍照输入的图像帧进行处理,获得图像处理结果,再将图像处理结果作为拍照输出通过输出接口发送给芯片平台。该录像通路通过输入接口接收录像输入的图像帧序列和对应的元数据,并通过相关的特性算法对录像输入的图像帧序列进行处理,获得录像处理结果,再将录像处理结果作为录像输出通过输出接口发送给芯片平台。
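拍照通路与录像通路的分工可以用如下示意代码表示(ProcessingEngine及其中的算法均为本文假设的草图,帧以任意可处理对象示意):

```python
class ProcessingEngine:
    """处理引擎示意:内部分为拍照通路与录像通路。"""

    def __init__(self, photo_algos, video_algos):
        self._photo_algos = photo_algos   # 拍照通路使用的特性算法列表
        self._video_algos = video_algos   # 录像通路使用的特性算法列表

    def photo_path(self, frame, meta):
        """拍照通路:对单帧图像依次应用特性算法。"""
        for algo in self._photo_algos:
            frame = algo(frame, meta)
        return frame

    def video_path(self, frames, metas):
        """录像通路:对图像帧序列逐帧应用特性算法。"""
        out = []
        for frame, meta in zip(frames, metas):
            for algo in self._video_algos:
                frame = algo(frame, meta)
            out.append(frame)
        return out
```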
输出接口用于与芯片平台适配,以将处理后的图像数据(拍照输出)或视频数据(录像输出)输出到芯片平台的图像处理系统中的上层应用。
硬件接口1,用于适配芯片平台上的硬件计算单元,以在处理引擎模块在对图像数据或视频数据进行处理时,调用芯片平台上的硬件计算单元协助完成数据处理过程。
硬件接口2,用于适配芯片平台上的传感器驱动,以获取传感器采集的信息协助完成相应的数据处理过程或协助实现某些功能。
场景接口可以确定电子设备的摄像头采集图像或视频时的场景,以确定采用哪个或哪些特性算法。
下面将通过一个场景示例介绍本申请实施例提供的流媒体特性架构应用在图像处理系统中时,如何实现本申请实施例提供的流媒体处理方法,例如,如何与芯片平台对接,如何实现录像功能等。
参见图4,以视频防抖场景为例,可以由用户设置本次录像过程是否开启视频防抖,也可以由系统内部设置为只要开启录像功能,就开启视频防抖功能;当然,还可以设置为在检测到录像中存在运动的物体的情况下,开启视频防抖功能。
以用户可以设置本次录像过程开启视频防抖为例,在用户开启该视频防抖功能的情况下,该场景选择模块将确定当前为视频防抖场景。场景选择模块将用于表示视频防抖场景的信息发送至流媒体特性架构的场景接口。
场景接口可以将表示视频防抖场景的信息直接发送给处理引擎或者将该表示视频防抖场景的信息进行格式转换后,发送给处理引擎。由处理引擎从特性算法库中选择与该表示视频防抖场景的信息相匹配的特性算法。
在具体实现时,表示视频防抖场景的信息可以为用于表示视频防抖场景的唯一标识。
作为示例,可以为本电子设备支持的多个场景分别设置对应的唯一标识。根据用户在拍照或录像时的场景选择,确定当前的场景唯一标识。当然,用户在拍照或者录像过程中可以选择多个场景(例如,大光圈、防抖、美颜等),该多个场景分别对应的唯一标识组成的集合会形成唯一标识组,该唯一标识组可以对应特性算法库中的一个特性算法或多个特性算法。
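唯一标识组与特性算法(组)的对应关系,可以用如下示意代码表示(表中的标识与算法名称均为本文假设):

```python
# 场景唯一标识组到特性算法(组)的映射(示意数据)
SCENE_ALGO_TABLE = {
    frozenset({"RECORD"}): ["basic_pipeline"],
    frozenset({"RECORD", "STABILIZE"}): ["ois", "eis"],
    frozenset({"PHOTO", "NIGHT", "LARGE_APERTURE"}): ["night_enhance", "bokeh"],
}


def select_algorithms(scene_ids):
    """根据用户选择的一个或多个场景的唯一标识组,查找匹配的特性算法组。"""
    key = frozenset(scene_ids)
    if key not in SCENE_ALGO_TABLE:
        raise KeyError("未配置的场景组合: %s" % sorted(scene_ids))
    return SCENE_ALGO_TABLE[key]
```

用frozenset作为键,使得用户选择多个场景时与选择顺序无关。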
在用户开始录像后,数据采集模块可以获取到摄像头采集的帧数据序列以及帧数据序列中每个帧数据同步的元数据。
数据采集模块将帧数据序列以及帧数据序列中每个帧数据同步的元数据发送至流媒体特性架构的输入接口。
输入接口将帧数据序列以及帧数据序列中每个帧数据同步的元数据送入录像输入通道。
在此需要说明,在录像场景中,获取到的数据可以包括预览流和录像流,预览流和录像流可以采用不同的特性算法,也可以采用完全相同的特性算法。
流媒体特性架构通过预先根据场景唯一标识(或场景唯一标识组)确定的特性算法(或特性算法组),对该录像输入通道接收到的帧数据序列进行处理。
本申请实施例中,若应用在高端电子设备(例如,电子设备A)中,则可以采用光学防抖和电子防抖相结合的方式,相应地,该电子设备采用高端的芯片平台(例如,芯片平台A)。
作为光学防抖的一种原理,将图像传感器安装在一个可自由浮动的支架上,同时电子设备上可以设置陀螺仪,该陀螺仪可以感应电子设备的抖动方向和幅度,本申请实施例提供的与光学防抖对应的特性算法可以根据陀螺仪采集的数据计算获得图像传感器的位移补偿,该位移补偿可以再次传送给图像传感器,以控制图像传感器基于该位移补偿浮动。
作为电子防抖的一种原理,通过软件算法对摄像头采集的图像进行分析和处理,当图像模糊时,可用边缘图像对中间模糊部分进行补偿,从而实现防抖,该防抖技术需要对帧数据进行处理。
通过上述示例可以理解,电子设备A中该特性算法需要对陀螺仪上传的数据进行处理,将处理结果传递给图像传感器,在实际应用中,该流媒体特性架构可以通过硬件接口2获得陀螺仪上传的数据,再通过特性算法对陀螺仪上传的数据进行处理获得位移补偿后,再通过硬件接口2将该位移补偿发送给图像传感器驱动。该图像传感器驱动可以控制图像传感器基于该位移补偿浮动。
为了进一步获得更好的防抖效果,电子设备A中的特性算法还包括电子防抖对应的特性算法,该特性算法可以对图像传感器采集的帧数据进行分析和处理,以进行补偿从而获得进一步补偿后的防抖图像。
在电子防抖对应的特性算法对帧数据进行分析和处理的过程中,若芯片平台在硬件上提供的AI算法可以识别图像的模糊度,则处理引擎接收到图像帧后,可以采用数据格式转换算法对图像帧进行处理,获得芯片平台上的硬件计算单元可以识别的图像格式,然后调用硬件接口1,使芯片平台上的硬件计算单元对图像进行模糊度识别;芯片平台上的硬件计算单元获得识别结果后,将识别结果返回给处理引擎,处理引擎通过相应的特性算法对获得识别结果后的图像帧进行处理。例如,在图像较模糊的情况下,通过特性算法对图像帧进行补偿,获得防抖图像,并将防抖图像发送给输出接口;在图像较清晰的情况下,将清晰的图像帧进行剪切处理后输出到输出接口。这样就可以获得稳定防抖的视频流。
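上述"格式转换、硬件模糊度识别、补偿或剪切"的流程可以概括为如下示意代码(其中的函数名与模糊度阈值均为本文假设,模糊度识别仅用桩代码模拟硬件计算单元的返回结果):

```python
def to_hw_format(frame):
    """数据格式转换算法(示意):转换为硬件计算单元可识别的格式。"""
    return {"hw_frame": frame}


def hw_blur_score(hw_frame):
    """桩代码:模拟硬件计算单元上的AI模糊度识别,返回模糊度得分。"""
    return hw_frame["hw_frame"].get("blur", 0.0)


def eis_process(frame, blur_threshold=0.5):
    """电子防抖的一帧处理:模糊则补偿,清晰则剪切后输出。"""
    score = hw_blur_score(to_hw_format(frame))
    out = dict(frame)
    if score > blur_threshold:
        out["action"] = "compensated"   # 用边缘图像补偿中间模糊部分(示意)
    else:
        out["action"] = "cropped"       # 清晰帧做剪切处理后输出
    return out
```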
该输出接口将图像帧序列和对应的元数据发送给录像输出,该录像输出可以将图像帧上传到上层应用,用户可以得到光学防抖和电子防抖后的视频文件。
若在应用的过程中,由于产品更新换代升级的需要,该款电子设备需要采用芯片平台B,或者电子设备厂商开发另一款高端的电子设备B,该高端电子设备B由于特殊需求需要采用芯片平台B,且该电子设备B的摄像头和电子设备A的摄像头采用同一厂家同一型号的摄像头,则电子设备B上的视频防抖算法可以复用电子设备A上的视频防抖算法。在这种情况下,可以将电子设备A中采用的流媒体特性架构复用到电子设备B中,因为该流媒体特性架构与芯片平台交互的各个接口均采用通用定义的数据结构,而不需要针对芯片平台B再开发适配的视频防抖功能,也不需要开发与芯片平台B适配的相机的其他功能。
若在应用的过程中,设备厂商为了满足不同群体的需求,需要开发一款较为低端的电子设备C,为了节省成本,采用芯片平台C,同时采用不能实现机身传感器防抖的摄像头。在这种情况下,开发人员只需要修改内部存储的场景唯一标识和特性算法的对应关系,例如,电子设备A中的视频防抖场景的唯一标识对应特性算法A(光学防抖算法)和特性算法B(电子防抖算法),电子设备C中的视频防抖场景的唯一标识对应的特性算法为特性算法B,甚至不需要删除该特性算法A。当然,实际应用中,开发人员也可以针对电子设备C删除流媒体特性架构中的特性算法A,本申请对具体的实现方式不做限定。由此可以理解,开发人员开发具有通用功能的电子设备时,开发效率大大提高。
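不同机型之间仅需修改场景唯一标识与特性算法的对应关系,这一点可以用如下示意代码表示(表中的算法名称均为本文假设):

```python
# 不同机型只需配置各自的"场景唯一标识 -> 特性算法"对应关系(示意数据)
DEVICE_A_TABLE = {"VIDEO_STABILIZE": ["algo_A_ois", "algo_B_eis"]}  # 电子设备A:光学防抖+电子防抖
DEVICE_C_TABLE = {"VIDEO_STABILIZE": ["algo_B_eis"]}                # 电子设备C:仅电子防抖


def algorithms_for(device_table, scene_id):
    """按机型配置表查找某场景唯一标识对应的特性算法组。"""
    return device_table.get(scene_id, [])
```

特性算法库本身不变,换机型时只改配置表,这正是前文所述开发效率提升的来源。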
本申请实施例以防抖算法作为示例,描述了本申请实施例提供的应用在图像处理系统中的流媒体特性架构可以在多个芯片平台上复用。实际应用中,实现电子设备的相机的各个功能的图像处理系统,可以将该相机的各个功能对应的特性算法均设置在处理引擎中的特性算法库中。
应理解,上述实施例中各步骤的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本申请实施例还提供了一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序被处理器执行时可实现上述各个方法实施例中的步骤。
本申请实施例还提供了一种计算机程序产品,当计算机程序产品在第一设备上运行时,使得第一设备可实现上述各个方法实施例中的步骤。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,可以通过计算机程序来指令相关的硬件完成,该计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,计算机程序包括计算机程序代码,计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。计算机可读介质至少可以包括:能够将计算机程序代码携带到第一设备的任何实体或装置、记录介质、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质。例如U盘、移动硬盘、磁碟或者光盘等。在某些司法管辖区,根据立法和专利实践,计算机可读介质不可以是电载波信号和电信信号。
本申请实施例还提供了一种芯片系统,芯片系统包括处理器,处理器与存储器耦合,处理器执行存储器中存储的计算机程序,以实现本申请任一方法实施例的步骤。芯片系统可以为单个芯片,或者多个芯片组成的芯片模组。
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及方法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。

Claims (13)

  1. 一种流媒体特性架构,其特征在于,应用于芯片平台,所述流媒体特性架构包括:
    对接模块,用于与所述芯片平台进行对接,以获取与所述芯片平台相关的流媒体数据以及向所述芯片平台发送处理后的流媒体数据;
    处理引擎,设置有与芯片平台无关的特性算法,所述特性算法用于对所述对接模块获取的流媒体数据进行处理,获得所述处理后的流媒体数据。
  2. 如权利要求1所述的流媒体特性架构,其特征在于,所述对接模块包括:
    输入接口,用于与所述芯片平台对接,以从所述芯片平台获取所述流媒体数据;
    场景接口,用于与所述芯片平台对接,以从所述芯片平台获取所述流媒体数据的采集场景;
    输出接口,用于与所述芯片平台对接,以向所述芯片平台发送所述处理后的流媒体数据。
  3. 如权利要求2所述的流媒体特性架构,其特征在于,所述处理引擎,设置有与各采集场景对应的特性算法,还用于通过与所述采集场景对应的特性算法对与所述采集场景对应的流媒体数据进行处理,获得与所述采集场景对应的处理后的流媒体数据。
  4. 如权利要求1至3任一项所述的流媒体特性架构,其特征在于,所述流媒体特性架构还包括:
    硬件接口,用于与所述芯片平台上的硬件资源对接;
    所述处理引擎,还用于通过所述硬件接口使用所述芯片平台上的硬件资源,完成对所述流媒体数据的处理。
  5. 如权利要求4所述的流媒体特性架构,其特征在于,所述硬件接口包括:
    第一硬件接口,用于与所述芯片平台上的硬件计算单元对接;
    所述处理引擎,还用于通过所述第一硬件接口使用所述芯片平台上的硬件计算单元,完成所述流媒体数据的处理。
  6. 如权利要求5所述的流媒体特性架构,其特征在于,所述硬件接口还包括:
    第二硬件接口,用于与所述芯片平台上的传感器驱动对接;
    所述处理引擎,还用于通过所述第二硬件接口获取所述传感器驱动对应的传感器采集的数据,并基于所述传感器采集的数据,对所述流媒体数据进行处理。
  7. 如权利要求6所述的流媒体特性架构,其特征在于,所述处理引擎还用于:
    通过所述第二硬件接口获取所述传感器驱动对应的传感器采集的数据,对所述传感器采集的数据进行处理获得控制信息,向所述芯片平台上的传感器驱动发送所述控制信息,所述控制信息用于控制所述传感器。
  8. 如权利要求1至7任一项所述的流媒体特性架构,其特征在于,所述对接模块适用于多类芯片平台。
  9. 一种流媒体处理方法,其特征在于,应用于电子设备,所述电子设备包括:与所述电子设备的芯片平台对接的对接模块和设置有与所述芯片平台无关的特性算法的处理引擎,所述方法包括:
    所述对接模块从所述电子设备的芯片平台获取所述电子设备的摄像头采集的流媒体数据;
    所述处理引擎通过所述特性算法对所述流媒体数据进行处理,获得处理后的流媒体数据;
    所述对接模块向所述电子设备的芯片平台发送所述处理后的流媒体数据。
  10. 如权利要求9所述的方法,其特征在于,所述方法还包括:
    所述对接模块从所述电子设备的芯片平台获取所述电子设备的摄像头采集所述流媒体数据时的采集场景;
    所述处理引擎确定与所述采集场景对应的特性算法;
    相应的,所述处理引擎通过所述特性算法对所述流媒体数据进行处理,获得处理后的流媒体数据,包括:
    所述处理引擎基于所述采集场景对应的特性算法对所述流媒体数据进行处理,获得所述处理后的流媒体数据。
  11. 如权利要求9或10所述的方法,其特征在于,所述电子设备还包括与所述芯片平台上的硬件资源对接的第一硬件接口和第二硬件接口;所述方法还包括:
    所述处理引擎通过所述第一硬件接口使用所述芯片平台上的硬件计算单元,对所述流媒体数据进行处理;
    和/或,所述处理引擎通过所述第二硬件接口获取所述芯片平台上的传感器驱动对应的传感器采集的数据,并基于所述传感器采集的数据,对所述流媒体数据进行处理;
    和/或,所述处理引擎通过所述第二硬件接口获取所述传感器驱动对应的传感器采集的数据,对所述传感器采集的数据进行处理获得控制信息,向所述芯片平台上的传感器驱动发送所述控制信息,所述控制信息用于控制所述传感器。
  12. 一种电子设备,其特征在于,所述电子设备包括芯片平台,所述芯片平台上设有如权利要求1至8任一项所述的流媒体特性架构。
  13. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储计算机程序,所述计算机程序在处理器上运行时实现如权利要求9至11任一项所述的方法。
PCT/CN2022/142605 2022-02-28 2022-12-28 流媒体特性架构、处理方法、电子设备及可读存储介质 WO2023160216A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210193100.5 2022-02-28
CN202210193100.5A CN116701288A (zh) 2022-02-28 2022-02-28 流媒体特性架构、处理方法、电子设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2023160216A1 true WO2023160216A1 (zh) 2023-08-31

Family

ID=87764656


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180131743A1 (en) * 2016-11-08 2018-05-10 Bevara Technologies, Llc Systems and methods for encoding and decoding
CN109101352A (zh) * 2018-08-30 2018-12-28 Oppo广东移动通信有限公司 算法架构、算法调用方法、装置、存储介质及移动终端
CN112153282A (zh) * 2020-09-18 2020-12-29 Oppo广东移动通信有限公司 图像处理架构、方法、存储介质及电子设备
CN112532859A (zh) * 2019-09-18 2021-03-19 华为技术有限公司 视频采集方法和电子设备
CN113014804A (zh) * 2021-02-04 2021-06-22 维沃移动通信有限公司 图像处理方法、装置、电子设备和可读存储介质
CN113852762A (zh) * 2021-09-27 2021-12-28 荣耀终端有限公司 算法调用方法与算法调用装置

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR20030060707A (ko) * 2002-01-11 2003-07-16 주식회사 팬택앤큐리텔 범용 디지털 신호 프로세서 칩을 이용한 멀티미디어단말기 및 그를 이용한 멀티미디어 데이터 부호화 및복호화 방법
CN102509254B (zh) * 2011-10-25 2013-10-30 河海大学 一种基于数字信号处理器的图像处理平台及方法


Also Published As

Publication number Publication date
CN116701288A (zh) 2023-09-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22928456

Country of ref document: EP

Kind code of ref document: A1