CN110958390B - Image processing method and related device - Google Patents

Image processing method and related device

Info

Publication number
CN110958390B
Authority
CN
China
Prior art keywords
image
frame
application
party camera
camera application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911253015.8A
Other languages
Chinese (zh)
Other versions
CN110958390A (en)
Inventor
方攀
陈岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911253015.8A
Publication of CN110958390A
Application granted
Publication of CN110958390B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/617 Upgrading or updating of programs or applications for camera control

Abstract

The embodiments of the present application disclose an image processing method and a related device. The method includes: a third-party camera application sends a photographing request to a hardware abstraction layer; the hardware abstraction layer obtains, according to the photographing request, the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object, invokes an algorithm implementing a depth-of-field effect to process the at least one frame of image to obtain a depth map and face information of a target user, and sends the depth map and the face information to the third-party camera application, where the algorithm implementing the depth-of-field effect is an algorithm that the third-party camera application has requested in advance, through a media service module, that the operating system open to it; and the third-party camera application processes the at least one frame of image according to the depth map and the face information to obtain a processed image. With the embodiments of the present application, the algorithm implementing the depth-of-field effect is opened to the third-party camera application, which enhances the functionality of the third-party camera application and improves the image processing effect.

Description

Image processing method and related device
Technical Field
The present application relates to the field of electronic device technologies, and in particular, to an image processing method and a related apparatus.
Background
With the development of electronic device technology, third-party camera applications on electronic devices have become involved in many aspects of daily life, and new types of application software are constantly emerging. Taking photographing software as an example, to meet users' ever-higher expectations for photos, third-party camera application software is used to take or process images. However, when third-party camera application software processes a picture, the processing effect suffers from the lack of depth information.
Disclosure of Invention
The embodiment of the application provides an image processing method and a related device.
In a first aspect, an embodiment of the present application provides an image processing method applied to an electronic device, where the electronic device includes an operating system and a media service module, the operating system includes an application layer and a hardware abstraction layer, and the application layer is provided with a third-party camera application. The method includes:

the third-party camera application sends a photographing request to the hardware abstraction layer;

the hardware abstraction layer receives the photographing request of the third-party camera application, obtains the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object, invokes an algorithm implementing a depth-of-field effect to process the at least one frame of image, obtaining a depth map and face information of the target user, and sends the depth map and the face information to the third-party camera application, wherein the algorithm implementing the depth-of-field effect is an algorithm that the third-party camera application has requested in advance, through the media service module, that the operating system open to it;

and the third-party camera application receives the depth map and the face information, and processes the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image.
In a second aspect, an embodiment of the present application provides an image processing apparatus applied to an electronic device, where the electronic device includes an operating system and a media service module, the operating system includes an application layer and a hardware abstraction layer, and the application layer is provided with a third-party camera application. The image processing apparatus includes a communication unit and a processing unit, where:

the processing unit is configured to control the third-party camera application to send a photographing request to the hardware abstraction layer through the communication unit; to control the hardware abstraction layer to receive the photographing request of the third-party camera application, obtain the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object, invoke an algorithm implementing a depth-of-field effect to process the at least one frame of image to obtain a depth map and face information of the target user, and send the depth map and the face information to the third-party camera application through the communication unit, wherein the algorithm implementing the depth-of-field effect is an algorithm that the third-party camera application has requested in advance, through the media service module, that the operating system open to it; and to control the third-party camera application to receive the depth map and the face information and process the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image.
In a third aspect, an embodiment of the present application provides an electronic device including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and include instructions for executing the steps of any method of the first aspect of the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a chip including a processor and a data interface, where the processor reads instructions stored in a memory through the data interface and performs the method according to the first aspect and any optional implementation manner described above.
In a fifth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the first aspect of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of embodiments of the present application. The computer program product may be a software installation package.
In the embodiments of the present application, a third-party camera application provided at the application layer of the operating system of an electronic device sends a photographing request to the hardware abstraction layer; the hardware abstraction layer receives the photographing request and obtains the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object; it processes the at least one frame of image with the algorithm implementing the depth-of-field effect, an algorithm the third-party camera application has requested in advance, through the media service module, that the operating system open to it, obtaining a depth map and face information of the target user, and sends the depth map and the face information to the third-party camera application; after receiving them, the third-party camera application processes the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image. Through the media service module, the third-party camera application gains access to deeper-level data and functions of the operating system, which improves the image processing effect.
Drawings
To describe the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of another image processing method provided in the embodiments of the present application;
FIG. 4 is a functional unit diagram of an image processing apparatus provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
When an existing third-party camera application processes an image, it can only identify the human body region through image segmentation, and there is a large deviation in identifying the edge between the human body region and the image background, which affects the final processing effect. Moreover, the third-party camera application cannot obtain a human body image based on depth information and therefore cannot improve the image processing effect.
The electronic device according to the embodiments of the present application may be an electronic device with communication capability, and may include various handheld devices with a wireless communication function, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.
Referring to fig. 1, fig. 1 is a schematic diagram of the software architecture of an electronic device according to an embodiment of the present application. As shown in fig. 1, the electronic device includes a media service module (OMedia Service) and an operating system (for example, the Android operating system; no limitation is intended here). The application layer of the operating system is provided with a third-party camera application and a media software development kit module (OMedia SDK), and the hardware abstraction layer of the operating system is provided with a media policy module (OMedia Strategy), an algorithm management module (Algo Manager), and a camera hardware abstraction module (Camera HAL). The third-party camera application is communicatively connected to the media software development kit module, the media software development kit module to the media service module, the media service module to the camera hardware abstraction module, the camera hardware abstraction module to the media policy module, and the media policy module to the algorithm management module. In addition, the media service module may also be communicatively connected to the media policy module and/or the algorithm management module.
The media software development kit module provides control interfaces for obtaining and configuring capability values and stores no static configuration information; it communicates with the media service module over binder IPC and passes the third-party camera application's configuration information to the media service module.
The media service module runs as a resident system service. After the electronic device boots, it authenticates and responds to configuration requests from the third-party camera application so that configuration information can reach the bottom layer. In the present application, the media service module obtains the third-party camera application's data processing request or photographing request, sets a data processing scheme or obtains the to-be-processed application data, and can have the platform process the to-be-processed application data according to the data processing scheme to obtain intermediate data, so that the third-party camera application can process the original data based on the intermediate data and present final processing effects such as portrait lighting effects, body beautification, and three-dimensional human body modeling.
The media policy module is a bottom-layer policy module. It sends the information configured by the media service module down to the bottom layer, converted into capabilities the bottom layer can recognize, which keeps the third-party camera application from being directly coupled to the bottom-layer capabilities and prevents bottom-layer data from leaking; it converts the configuration information into second function configuration information recognizable by the hardware abstraction layer and invokes the algorithm information.
The algorithm management module enables the capability configuration information issued by the upper layer and makes the corresponding algorithms available.
The third-party camera application may directly notify the media service module that to-be-processed application data needs to be acquired or that a photographing request has occurred.
The electronic device of this embodiment uses a media-platform (OMedia) based framework to obtain the to-be-processed application data (at least one frame of image), invokes the algorithm implementing the depth-of-field effect to process the at least one frame of image, obtaining a depth map and face information of the target user, and sends the depth map and the face information to the third-party camera application; after receiving them, the third-party camera application processes the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image, improving the image processing effect.
The technical solutions of the embodiments of the present application may be implemented based on the software architecture of the electronic device illustrated in fig. 1, or on a variant of that architecture.
Referring to fig. 2, fig. 2 is a schematic flow chart of a method for image processing according to an embodiment of the present application, where the method may include, but is not limited to, the following steps:
201. The third-party camera application sends a photographing request to the hardware abstraction layer.

Specifically, the third-party camera application is an app installed on the electronic device. When it wants to take a picture, it sends a photographing request to the hardware abstraction layer of the operating system, where the photographing request is used to ask the hardware abstraction layer to obtain the to-be-processed application data, the application data including at least one frame of image of a target object. The photographing request may ask for one or more frames of images that contain the target object together with depth information.
202. The hardware abstraction layer receives the photographing request of the third-party camera application, obtains the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object, invokes an algorithm implementing a depth-of-field effect to process the at least one frame of image, obtaining a depth map and face information of the target user, and sends the depth map and the face information to the third-party camera application, wherein the algorithm implementing the depth-of-field effect is an algorithm that the third-party camera application has requested in advance, through the media service module, that the operating system open to it.

Specifically, after receiving the photographing request from the third-party camera application, the hardware abstraction layer may process it to generate an image acquisition instruction and send that instruction to the driver layer of the operating system. According to the instruction, the driver layer drives the image sensor of the hardware layer to acquire at least one frame of image, either directly or through an image signal processor; the driver layer is communicatively connected to the hardware layer, and the image sensor returns the at least one frame of image through the image signal processor. In addition, before the photographing request is received, the electronic device may enable an image processing algorithm of the operating system itself through the media service module: the media service module requests in advance that the operating system open the algorithm to the third-party application, so that the third-party application can directly use the operating system's own algorithm capable of achieving the depth-of-field effect.
203. The third-party camera application receives the depth map and the face information, and processes the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image.

Specifically, after receiving the depth map and the face information, the third-party camera application processes the at least one frame of image according to them. The processing mode may be preset; alternatively, after receiving different processing instructions, the application processes the at least one frame of image according to each instruction, and the resulting processed image carries the effect corresponding to that instruction.
As can be seen, in the embodiment of the present application, the third-party camera application provided at the application layer of the operating system of the electronic device sends a photographing request to the hardware abstraction layer; the hardware abstraction layer receives the photographing request and obtains the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object; it processes the at least one frame of image with the algorithm implementing the depth-of-field effect, an algorithm the third-party camera application has requested in advance, through the media service module, that the operating system open to it, obtaining a depth map and face information of the target user, and sends the depth map and the face information to the third-party camera application; after receiving them, the third-party camera application processes the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image. Through the media service module, the third-party camera application gains access to deeper-level data and functions of the operating system, which improves the image processing effect.
In one possible example, invoking the algorithm implementing the depth-of-field effect to process the at least one frame of image to obtain the depth map and the face information of the target user includes: the hardware abstraction layer invokes a face detection algorithm to process the at least one frame of image to obtain the face information of the target object; and the hardware abstraction layer invokes a depth detection algorithm to process the at least one frame of image to obtain the depth information of the target object and obtains the depth map from the depth information.
Specifically, the face information is the face information corresponding to the target object, including face pose, expression, contour, local features, and the like. A depth map is an image or image channel containing information about the distance from the viewpoint to the surfaces of the target object; here, a stereo matching algorithm first computes a disparity map from the at least one frame of image, and the depth map is then derived from the disparity map. Using the principle of binocular ranging, the image sensor records the distance of objects in the scene from the sensor; a depth map is similar to a grayscale image, except that each of its pixel values is the actual distance from the sensor to the object. Usually the RGB image and the depth image are registered, so their pixel points correspond one to one.
The disparity in the disparity map is measured in pixels, and depth is often expressed in millimeters (mm). From the geometry of parallel binocular vision (the derivation is simple and not shown here), the following disparity-to-depth conversion formula is obtained:

Depth = (f * baseline) / disp

In the formula, Depth denotes the depth value; f denotes the normalized focal length, i.e., fx among the camera parameters, which reflects the projection from the camera coordinate system to the image coordinate system; baseline is the distance between the optical centers of the two cameras, called the baseline distance; and disp is the disparity value. From the formula, the depth value of each pixel can be calculated, yielding the depth information, i.e., the depth values computed from the main image and the auxiliary image of the target object. (The term "image depth" also refers to the number of bits used to store each pixel, a measure of an image's color resolution.)
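As an illustration of this formula, the following is a minimal Java sketch of the disparity-to-depth conversion; the class and parameter names, and the convention that a zero disparity marks an unknown pixel, are assumptions for the example rather than details from the patent:

    /** Minimal sketch: convert a disparity map to a depth map using
     *  depth = f * baseline / disparity. All names here are illustrative. */
    public final class DepthFromDisparity {
        /**
         * @param disparity  per-pixel disparity in pixels (0 means "unknown")
         * @param focalPx    normalized focal length fx, in pixels
         * @param baselineMm distance between the two optical centers, in millimeters
         * @return per-pixel depth in millimeters (0 where the disparity is unknown)
         */
        public static float[] toDepth(float[] disparity, float focalPx, float baselineMm) {
            float[] depth = new float[disparity.length];
            for (int i = 0; i < disparity.length; i++) {
                float d = disparity[i];
                // Apply depth = f * baseline / disp; skip pixels with no disparity.
                depth[i] = d > 0f ? (focalPx * baselineMm) / d : 0f;
            }
            return depth;
        }
    }

For example, with f = 1000 pixels, baseline = 50 mm, and a disparity of 20 pixels, the computed depth is 1000 * 50 / 20 = 2500 mm.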
Face detection algorithms include the AdaBoost framework, the VJ (Viola-Jones) framework, ACF (Aggregate Channel Features for multi-view face detection), the DPM (Deformable Part Model) model, and frameworks based on deep learning. Depth detection approaches include passive ranging sensing, active ranging sensing, structured light, Kinect, and the like.
Thus, the disparity map is obtained from the at least one frame of image, the depth information is obtained from the disparity map, and the depth map is obtained from the depth information, which can effectively improve the resolution of the depth map.
In one possible example, the third-party camera application receiving the depth map and the face information and processing the at least one frame of image according to them to obtain the at least one processed frame of image includes: the third-party camera application performs portrait lighting-effect processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of human body image with a special lighting effect; the third-party camera application performs three-dimensional modeling on the at least one frame of image according to the depth map and the face information to obtain at least one frame of three-dimensional human body image; and the third-party camera application performs body beautification on the at least one frame of image according to the depth map and the face information to obtain at least one frame of body-beautified image.
Specifically, the third-party camera application obtains the human body region in the at least one frame of image based on the face information and the depth map. Because different objects lie at different distances from the image sensor, their image depths differ and are represented by different values (shown as different colors in the depth map), so the human body region in the at least one frame of image can be readily identified. After the human body region is identified, the at least one frame of image is processed based on the human body region and the depth map to obtain a processed image, i.e., an image with the effect corresponding to the processing. Portrait lighting effects include natural light, studio lighting, contour light, stage light, monochromatic stage light, and the like; portrait lighting-effect processing includes lighting-effect processing, background blurring, background replacement, and the like. Three-dimensional modeling maps the human body region into a corresponding three-dimensional human body model, on which portrait lighting-effect processing can then be performed. Body beautification processing scales the human body region horizontally or vertically according to preset proportions, scales local regions of the human body, optimizes the lines of the human body region, and so on.
It can be seen that after the third-party camera application obtains the depth map and the face information, it extracts the human body region in the at least one frame of image according to them, so the human body region is effectively distinguished from the background in the at least one frame of image and the edge segmentation between the two is improved. The third-party camera application also performs diverse processing on the main image of the target object according to the face information and the depth map, widening the application scenarios of image processing.
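To illustrate how the depth map supports one of these effects (the background blurring mentioned above), the following is a hypothetical Java sketch; the threshold-based foreground test, the box blur, and all names are assumptions rather than the implementation prescribed by the patent:

    /** Hypothetical sketch of depth-based background blurring: pixels whose
     *  depth exceeds a threshold are treated as background and blurred. */
    public final class PortraitBokeh {
        public static int[] blurBackground(int[] argb, float[] depthMm,
                                           int width, int height, float foregroundMaxMm) {
            int[] out = new int[argb.length];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int i = y * width + x;
                    // Keep foreground pixels (e.g. the human body region) sharp ...
                    out[i] = depthMm[i] <= foregroundMaxMm ? argb[i]
                            // ... and replace background pixels with a local mean (box blur).
                            : boxMean(argb, width, height, x, y, 4);
                }
            }
            return out;
        }

        private static int boxMean(int[] argb, int w, int h, int cx, int cy, int r) {
            long a = 0, rr = 0, g = 0, b = 0, n = 0;
            for (int y = Math.max(0, cy - r); y <= Math.min(h - 1, cy + r); y++) {
                for (int x = Math.max(0, cx - r); x <= Math.min(w - 1, cx + r); x++) {
                    int p = argb[y * w + x];
                    a += (p >>> 24) & 0xFF; rr += (p >>> 16) & 0xFF;
                    g += (p >>> 8) & 0xFF;  b += p & 0xFF; n++;
                }
            }
            return (int) ((a / n) << 24 | (rr / n) << 16 | (g / n) << 8 | (b / n));
        }
    }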
In one possible example, the hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, the hardware abstraction module connected to the media policy module and the media policy module connected to the algorithm management module. That the algorithm implementing the depth-of-field effect is an algorithm the third-party camera application has requested in advance, through the media service module, that the operating system open to it includes: the media service module determines the depth-of-field effect the third-party camera application needs to use and generates first function configuration information accordingly; the media policy module receives the first function configuration information from the media service module, the first function configuration information including description information of the depth-of-field effect, converts it into second function configuration information recognizable by the algorithm management module, and sends the second function configuration information to the algorithm management module; and the algorithm management module receives the second function configuration information and, according to it, opens to the third-party camera application the right to use the algorithm of the depth-of-field effect.
Specifically, when the third-party camera application wants to use some deep-level function of the operating system, it sends a target function request message to the media service module; in this example, the message includes the depth-of-field effect the third-party camera application needs to use. The media service module generates the first function configuration information from the target function request message. The media policy module receives the first function configuration information from the media service module, converts it into second function configuration information recognizable by the algorithm management module, and sends that information on; after receiving the second function configuration information, the algorithm management module opens to the third-party camera application, according to that information, the right to use the algorithm of the depth-of-field effect.
In this example, after determining the depth-of-field effect the third-party camera application needs, the media service module generates the first configuration information accordingly and sends it to the media policy module; the media policy module converts it into second configuration information recognizable by the algorithm management module; and the algorithm management module opens the right to use the depth-of-field algorithm according to the second configuration information. By authorizing through the media platform (the media service module and the media policy module), the third-party application can use the special algorithm provided by the system to obtain image depth information.
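A minimal Java sketch of this permission-opening chain is given below; the patent names only the modules and the two configuration messages, so the classes, fields, and method names here are hypothetical:

    /** Hypothetical sketch of the permission-opening chain described above. */
    final class DepthEffectProvisioning {
        /** First function configuration information, generated by the media service module. */
        record FirstConfig(String depthEffectDescription) {}
        /** Second function configuration information, recognizable by the algorithm manager. */
        record SecondConfig(String algorithmId, boolean enable) {}

        /** Media policy module: converts the first configuration into the second. */
        interface MediaPolicy { SecondConfig convert(FirstConfig config); }
        /** Algorithm management module: opens the algorithm to the requesting application. */
        interface AlgoManager { void openPermission(String appId, SecondConfig config); }

        static void provision(String thirdPartyAppId, MediaPolicy policy, AlgoManager manager) {
            // 1. Media service module: describe the requested depth-of-field effect.
            FirstConfig first = new FirstConfig("depth-of-field");
            // 2. Media policy module: convert to a form the algorithm manager recognizes.
            SecondConfig second = policy.convert(first);
            // 3. Algorithm management module: open the usage right to the application.
            manager.openPermission(thirdPartyAppId, second);
        }
    }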
In one possible example, the hardware abstraction layer of the operating system is provided with a camera hardware abstraction module and an algorithm management module, the two being connected. The hardware abstraction layer receiving the photographing request of the third-party camera application, obtaining the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object, and invoking the algorithm implementing the depth-of-field effect to process the at least one frame of image to obtain the depth map and the face information of the target user includes: the camera hardware abstraction module receives the photographing request of the third-party camera application and obtains the to-be-processed application data requested by it, the application data including at least one frame of image of the target object; the camera hardware abstraction module sends the at least one frame of image to the algorithm management module; and the algorithm management module receives the at least one frame of image and processes the to-be-processed application data with the algorithm implementing the depth-of-field effect to obtain the depth map and the face information of the target user.
Specifically, within the hardware abstraction layer, the camera hardware abstraction module is communicatively connected to the algorithm management module. The camera hardware abstraction module receives the photographing request of the third-party camera application and obtains the to-be-processed application data it requests, including at least one frame of image of the target object; it then sends the at least one frame of image to the algorithm management module, which processes it with the algorithm implementing the depth-of-field effect to obtain the depth map and the face information of the target user. The algorithm management module may run on any one or more of a digital signal processor (DSP), an embedded neural-network processing unit (NPU), and a graphics processing unit (GPU).
Thus, with the communicatively connected camera hardware abstraction module and algorithm management module, the to-be-processed data can be conveniently received, forwarded, and processed, improving data processing efficiency.
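The following is a hypothetical Java sketch of this in-HAL hand-off; the interfaces and names are assumptions, since the patent specifies only the modules and the direction of the data flow:

    import java.util.List;

    /** Hypothetical sketch: the camera hardware abstraction module forwards captured
     *  frames to the algorithm management module, which returns the depth map and
     *  the face information. All names are assumed for illustration. */
    final class AlgoManagerSketch {
        record Frame(byte[] pixels, int width, int height) {}
        record DepthOfFieldResult(float[] depthMm, List<String> faceInfo) {}

        interface AlgoManager {
            /** May dispatch internally to a DSP, NPU, or GPU. */
            DepthOfFieldResult runDepthOfField(List<Frame> frames);
        }

        /** Camera hardware abstraction module side: hand the frames over and return the result. */
        static DepthOfFieldResult process(List<Frame> frames, AlgoManager algoManager) {
            return algoManager.runDepthOfField(frames);
        }
    }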
In one possible example, the hardware abstraction layer of the operating system is provided with a camera hardware abstraction module, a media policy module, and an algorithm management module, the camera hardware abstraction module connected to the media policy module and the media policy module connected to the algorithm management module. The hardware abstraction layer receiving the photographing request of the third-party camera application, obtaining the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object, and invoking the algorithm implementing the depth-of-field effect to process the at least one frame of image to obtain the depth map and the face information of the target user includes: the camera hardware abstraction module receives the photographing request of the third-party camera application and obtains the to-be-processed application data requested by it, the application data including at least one frame of image of the target object; the camera hardware abstraction module sends the at least one frame of image to the algorithm management module through the media policy module; and the algorithm management module receives the at least one frame of image and processes the to-be-processed application data with the algorithm implementing the depth-of-field effect to obtain the depth map and the face information of the target user.
Specifically, as shown in the software architecture of fig. 1, within the hardware abstraction layer the camera hardware abstraction module is communicatively connected to the media policy module, and the media policy module to the algorithm management module. After the camera hardware abstraction module obtains the to-be-processed application data requested by the third-party camera application, including at least one frame of image of the target object, it sends the at least one frame of image to the algorithm management module through the media policy module, and the algorithm management module processes it with the algorithm implementing the depth-of-field effect to obtain the depth map and the face information of the target user.
Thus, with the communicatively connected camera hardware abstraction module, media policy module, and algorithm management module, data transfer and algorithm functions are enabled through the media policy module; the deep-level functions of the operating system are opened up while the functionality of the third-party camera application is improved, and the security of the system's deep-level functions and data is guaranteed, preventing deep-level data or functions from leaking.
In one possible example, the application layer of the operating system is further provided with a media management module, and the media management module supports information interaction between the third-party camera application and the media service module through an SDK binder communication mechanism.
Therefore, through the media management module, the third-party camera application or other third-party applications can interact with the media service module, making it convenient for the operating system to open deep-level functions to the third-party camera application.
In one possible example, before the third-party camera application sends the photographing request to the hardware abstraction layer of the operating system, the method further includes: the third-party camera application sends the media service module a media-platform-version acquisition request carrying an authentication code; the media service module receives the request, verifies the authentication code and, when verification passes, sends the media platform version information to the third-party camera application; the third-party camera application receives the media platform version information and sends the media service module a capability acquisition request carrying that version information; the media service module receives the capability acquisition request, looks up the application capability list for that media platform version, and sends the list to the third-party camera application; and the third-party camera application receives the application capability list, queries it to learn which Android native functions the current media platform supports for the third-party camera application, and determines, among those functions, the algorithm implementing the depth-of-field effect that is selected to be opened. As can be seen, in this example the media platform version information is returned to the third-party camera application only after the authentication code has been verified, which makes the verification result explicit; the third-party camera application then requests the application capability list from the media service module and, once it has the list, selects the depth-of-field algorithm to be opened, which helps accurately select the specific opened algorithm for processing the to-be-processed application data.
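A hypothetical Java sketch of this handshake is given below; the interface, method names, and the capability string are assumptions, not details from the patent:

    import java.util.List;

    /** Hypothetical sketch of the version/capability handshake described above. */
    interface MediaService {
        /** Verifies the authentication code and, on success, returns the platform version. */
        String getPlatformVersion(String authCode);

        /** Returns the application capability list for the given platform version. */
        List<String> getCapabilities(String platformVersion);
    }

    final class CapabilityNegotiation {
        static final String DEPTH_OF_FIELD = "depth_of_field"; // assumed capability name

        /** Returns true if the depth-of-field algorithm can be selected for opening. */
        static boolean selectDepthAlgorithm(MediaService service, String authCode) {
            String version = service.getPlatformVersion(authCode);   // fails if authentication fails
            List<String> capabilities = service.getCapabilities(version);
            return capabilities.contains(DEPTH_OF_FIELD);
        }
    }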
Referring to fig. 3, fig. 3 is a schematic flowchart of another image processing method provided in the embodiment of the present application, applied to an electronic device, and including:
301. The third-party camera application sends a photographing request to the camera hardware abstraction module.

This step is the same as step 201 and is not repeated here.
302. The camera hardware abstraction module receives the photographing request of the third-party camera application and obtains the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object.
specifically, after receiving the photographing request applied by the third-party camera, the camera hardware abstraction layer may process the photographing request to generate an image acquisition instruction, and send the image acquisition instruction to a driver layer of the operating system, where the driver layer drives an image sensor arranged in the operating system hardware layer to acquire at least one frame of image according to the image acquisition instruction; or the driving layer drives the image sensor of the hardware layer to acquire at least one frame of image through an image signal processor according to the image acquisition instruction. The driver layer is in communication connection with the hardware layer. The image sensor sends the at least one frame of image to the camera hardware abstraction module through the image information processor.
303. The camera hardware abstraction module sends the at least one frame of image to the algorithm management module through the media policy module;
specifically, the camera hardware abstraction module is communicatively coupled to the media policy module, and the media policy module is communicatively coupled to the algorithm management module. The camera hardware abstraction module may send the at least one frame of image to the algorithm management module through the media policy module.
304. The algorithm management module receives the at least one frame of image, and processes the application data to be processed by using the algorithm for realizing the depth of field effect to obtain a depth map and face information of the target user;
specifically, before the camera hardware abstraction module receives the photographing request, or before or after the at least one frame of image is received, the electronic device may enable an image processing algorithm of an operating system of the electronic device through a media service module, where the media service module requests the operating system to open a deep function of a part of the operating system for the third-party application in advance, so that the third-party application may directly use an algorithm that can achieve a depth effect of the operating system itself. Therefore, after the algorithm management module receives the at least one frame of image, the to-be-processed application data is processed by using the algorithm for realizing the depth of field effect, and the depth map and the face information of the target user are obtained.
305. The algorithm management module sends the depth map and the face information to the third party camera application through the media policy module and the camera hardware abstraction module;
specifically, the camera hardware abstraction module is in communication connection with the media policy module, and the media policy module is in communication connection with the algorithm management module, so that the algorithm management module can send the depth map and the face information to the third-party camera application through the media policy module and the camera hardware abstraction module.
306. And the third-party camera application receives the depth map and the face information, and processes the at least one frame of image according to the face information and the depth map to obtain at least one frame of processed image.
Step 306 is the same as step 203 described above, and will not be described herein again.
As can be seen, in this embodiment, the to-be-processed application data is acquired through the camera hardware abstraction module, and the algorithm implementing the depth-of-field effect is opened to the third-party camera application by means of the media service module and the media policy module, so the third-party camera application can obtain the face information and the depth map and derive at least one processed frame of image from the at least one frame of image based on them. This improves both the functionality of the third-party camera application and the image processing effect.
Consistent with the embodiments shown in fig. 2 and fig. 3, please refer to fig. 4; fig. 4 is a schematic structural diagram of the functional units of an image processing apparatus 400 according to an embodiment of the present application. The image processing apparatus 400 includes a communication unit 410 and a processing unit 420, wherein:
the processing unit 420 is configured to control the third-party camera application to send a photographing request to the hardware abstraction layer through the communication unit; to control the hardware abstraction layer to receive the photographing request of the third-party camera application, obtain the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object, invoke an algorithm implementing a depth-of-field effect to process the at least one frame of image to obtain a depth map and face information of the target user, and send the depth map and the face information to the third-party camera application through the communication unit, wherein the algorithm implementing the depth-of-field effect is an algorithm that the third-party camera application has requested in advance, through the media service module, that the operating system open to it; and to control the third-party camera application to receive the depth map and the face information and process the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image.
It can be seen that, in the embodiments of the present application, the third-party camera application provided at the application layer of the operating system of the electronic device sends a photographing request to the hardware abstraction layer; the hardware abstraction layer receives the photographing request and obtains the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object; it processes the at least one frame of image with the algorithm implementing the depth-of-field effect, an algorithm the third-party camera application has requested in advance, through the media service module, that the operating system open to it, obtaining a depth map and face information of the target user, and sends the depth map and the face information to the third-party camera application; after receiving them, the third-party camera application processes the at least one frame of image according to the face information and the depth map to obtain at least one processed frame of image. Through the media service module, the third-party camera application gains access to deeper-level data and functions of the operating system, which improves the image processing effect.
In a possible example, in terms of processing the at least one frame of image by invoking the algorithm for implementing the depth-of-field effect to obtain the depth map and the face information of the target object, the processing unit 420 is specifically configured to control the hardware abstraction layer to invoke the face detection algorithm to process the at least one frame of image to obtain the face information of the target object; and controlling the hardware abstraction layer to call a depth detection algorithm to process the at least one frame of image to obtain the depth information of the target object, and obtaining a depth map according to the depth information.
In a possible example, in the aspect of the third-party camera application receiving the depth map and the face information and processing the at least one frame of image according to them to obtain the at least one processed frame of image, the processing unit 420 is specifically configured to control the third-party camera application to perform portrait lighting-effect processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of human body image with a special lighting effect; to control the third-party camera application to perform three-dimensional modeling on the at least one frame of image according to the depth map and the face information to obtain at least one frame of three-dimensional human body image; and to control the third-party camera application to perform body beautification on the at least one frame of image according to the depth map and the face information to obtain at least one frame of body-beautified image.
In one possible example, the hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, the hardware abstraction module connected to the media policy module and the media policy module connected to the algorithm management module. In the aspect that the algorithm implementing the depth-of-field effect is an algorithm the third-party camera application has requested in advance, through the media service module, that the operating system open to it, the processing unit 420 is specifically configured to control the media service module to determine the depth-of-field effect the third-party camera application needs to use and generate first function configuration information accordingly; to control the media policy module to receive the first function configuration information from the media service module, the first function configuration information including description information of the depth-of-field effect, convert it into second function configuration information recognizable by the algorithm management module, and send the second function configuration information to the algorithm management module; and to control the algorithm management module to receive the second function configuration information and open to the third-party camera application, according to it, the right to use the algorithm of the depth-of-field effect.
In one possible example, the hardware abstraction layer of the operating system is provided with a camera hardware abstraction module and an algorithm management module, the two being connected. In the aspect of the hardware abstraction layer receiving the photographing request of the third-party camera application, obtaining the to-be-processed application data requested by it (the application data including at least one frame of image of a target object), and invoking the algorithm implementing the depth-of-field effect to process the at least one frame of image to obtain the depth map and the face information of the target user, the processing unit 420 is specifically configured to control the camera hardware abstraction module to receive the photographing request of the third-party camera application and obtain the to-be-processed application data it requests, including at least one frame of image of the target object; to control the camera hardware abstraction module to send the at least one frame of image to the algorithm management module; and to control the algorithm management module to receive the at least one frame of image and process the to-be-processed application data with the algorithm implementing the depth-of-field effect to obtain the depth map and the face information of the target user.
In one possible example, the hardware abstraction layer of the operating system is provided with a camera hardware abstraction module, a media policy module, and an algorithm management module, the camera hardware abstraction module connected to the media policy module and the media policy module connected to the algorithm management module. In the aspect of the hardware abstraction layer receiving the photographing request of the third-party camera application, obtaining the to-be-processed application data requested by it (the application data including at least one frame of image of a target object), and invoking the algorithm implementing the depth-of-field effect to process the at least one frame of image to obtain the depth map and the face information of the target user, the processing unit 420 is specifically configured to control the camera hardware abstraction module to receive the photographing request of the third-party camera application and obtain the to-be-processed application data it requests, including at least one frame of image of the target object; to control the camera hardware abstraction module to send the at least one frame of image to the algorithm management module through the media policy module; and to control the algorithm management module to receive the at least one frame of image and process the to-be-processed application data with the algorithm implementing the depth-of-field effect to obtain the depth map and the face information of the target user.
In one possible example, the application layer of the operating system is further provided with a media management module, and the media management module supports information interaction between the third-party camera application and the media service module through an SDK binder communication mechanism.
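On Android, interaction of this kind is typically carried over binder transactions. The following is a minimal sketch; the interface descriptor, transaction code and method are assumptions, and a real SDK would normally generate this stub from an AIDL declaration rather than write it by hand:

```java
// Illustrative binder-side sketch; descriptor and transaction code are
// assumptions, not the patent's actual SDK surface.
import android.os.Binder;
import android.os.Parcel;
import android.os.RemoteException;

final class MediaManagerBinder extends Binder {
    static final String DESCRIPTOR = "com.example.IMediaManager"; // assumed
    static final int TRANSACTION_REQUEST_EFFECT = Binder.FIRST_CALL_TRANSACTION;

    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        if (code == TRANSACTION_REQUEST_EFFECT) {
            data.enforceInterface(DESCRIPTOR);
            String appId = data.readString();
            String effect = data.readString(); // e.g. "DEPTH_OF_FIELD"
            boolean granted = forwardToMediaService(appId, effect);
            reply.writeNoException();
            reply.writeInt(granted ? 1 : 0);
            return true;
        }
        return super.onTransact(code, data, reply, flags);
    }

    // Hands the request over to the media service module (omitted here).
    private boolean forwardToMediaService(String appId, String effect) {
        return true;
    }
}
```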
Fig. 5 is a schematic structural diagram of an electronic device 500 according to an embodiment of the present application. As shown in the figure, the electronic device 500 includes an application processor 510, a memory 520, a communication interface 530, and one or more programs 521, where the one or more programs 521 are stored in the memory 520 and configured to be executed by the application processor 510, and the one or more programs 521 include instructions for performing the following steps of the foregoing method embodiments:
the third party camera application sends a photographing request to the hardware abstraction layer;
the hardware abstraction layer receives the photographing request of the third-party camera application and acquires the to-be-processed application data requested by the third-party camera application, the application data including at least one frame of image of a target object; calls an algorithm for achieving a depth of field effect to process the at least one frame of image, obtaining a depth map and face information of the target user; and sends the depth map and the face information to the third-party camera application; wherein the algorithm for achieving the depth of field effect is an algorithm that the third-party camera application has requested the operating system, in advance and through the media service module, to open for the third-party camera application;
and the third-party camera application receives the depth map and the face information, and processes the at least one frame of image according to the face information and the depth map to obtain at least one frame of processed image.
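Seen from the application side, the three steps above amount to a simple request/response round trip. A minimal sketch follows, with HalFacade standing in for the hardware abstraction layer boundary and all names assumed:

```java
// Illustrative app-side sketch of the three steps above; all names assumed.
import java.util.List;

record FaceBox(int x, int y, int w, int h) {}
record HalResult(int[] frameArgb, float[] depthMap, List<FaceBox> faces) {}

interface HalFacade { HalResult photograph(); } // the HAL boundary, assumed

final class ThirdPartyCameraApp {
    private final HalFacade hal;
    ThirdPartyCameraApp(HalFacade hal) { this.hal = hal; }

    int[] takePhoto() {
        HalResult r = hal.photograph(); // steps 1-2: request, HAL-side work
        return postProcess(r);          // step 3: app-side processing
    }

    private int[] postProcess(HalResult r) {
        // Placeholder for depth- and face-guided effects (bokeh, relighting).
        return r.frameArgb().clone();
    }
}
```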
It can be seen that, in the embodiment of the present application, the third-party camera application arranged in the application layer of the operating system of the electronic device sends a photographing request to the hardware abstraction layer. The hardware abstraction layer receives the photographing request and acquires the to-be-processed application data requested by the third-party camera application, where the application data includes at least one frame of image of a target object; because the third-party camera application has requested the operating system in advance, through the media service module, to open the algorithm for achieving the depth of field effect, the hardware abstraction layer can call that algorithm to process the at least one frame of image, obtain a depth map and face information of the target user, and send the depth map and the face information to the third-party camera application. After receiving the depth map and the face information, the third-party camera application processes the at least one frame of image according to them to obtain at least one frame of processed image. In this way, the third-party camera application gains access to deeper data and capabilities of the operating system through the media service module, which improves the image processing effect.
In one possible example, in terms of processing the at least one frame of image by invoking the algorithm for implementing the depth of field effect to obtain the depth map and the face information of the target object, the one or more programs 521 specifically include instructions for controlling the hardware abstraction layer to invoke the face detection algorithm to process the at least one frame of image to obtain the face information of the target object; and controlling the hardware abstraction layer to call a depth detection algorithm to process the at least one frame of image to obtain the depth information of the target object, and obtaining a depth map according to the depth information.
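The two passes could be sketched as follows; FaceDetector and DepthEstimator are assumed stand-ins for whatever concrete algorithms the hardware abstraction layer actually invokes:

```java
// Illustrative sketch of the two analysis passes; interface names assumed.
import java.util.List;

record FaceRegion(int x, int y, int w, int h) {}
record AnalysisResult(float[] depthMap, List<FaceRegion> faceInfo) {}

interface FaceDetector { List<FaceRegion> detect(int[] argb, int w, int h); }
interface DepthEstimator { float[] estimate(int[] argb, int w, int h); }

final class HalAnalysis {
    private final FaceDetector faces;
    private final DepthEstimator depth;
    HalAnalysis(FaceDetector faces, DepthEstimator depth) {
        this.faces = faces;
        this.depth = depth;
    }
    // Runs both algorithms on one frame and bundles the results for the app.
    AnalysisResult analyze(int[] argb, int w, int h) {
        List<FaceRegion> faceInfo = faces.detect(argb, w, h); // face information
        float[] depthMap = depth.estimate(argb, w, h);        // depth map
        return new AnalysisResult(depthMap, faceInfo);
    }
}
```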
In a possible example, in the aspect that the third-party camera application receives the depth map and the face information and processes the at least one frame of image according to the face information and the depth map to obtain the processed at least one frame of image, the one or more programs 521 specifically include instructions for performing the following operations: controlling the third-party camera application to perform portrait lighting effect processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of human body image with a special lighting effect; controlling the third-party camera application to perform three-dimensional modeling processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of three-dimensional human body image; and controlling the third-party camera application to perform human body beautification processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of human body beautification image.
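Of the three effects, the portrait lighting branch is the easiest to sketch. The simple rule below (attenuating brightness with distance from the face plane) is an assumption for illustration; a production effect chain would be far more elaborate:

```java
// Illustrative depth-guided relighting; the luminance rule is an assumption.
final class PortraitLightEffect {
    /**
     * @param argb      packed ARGB pixels of one frame
     * @param depth     per-pixel depth in meters, same length as argb
     * @param faceDepth estimated depth of the face plane in meters
     */
    static int[] apply(int[] argb, float[] depth, float faceDepth) {
        int[] out = new int[argb.length];
        for (int i = 0; i < argb.length; i++) {
            // Darken pixels the further they sit from the face plane.
            float falloff = Math.max(0f, 1f - Math.abs(depth[i] - faceDepth));
            int a = (argb[i] >>> 24) & 0xFF;
            int r = (int) (((argb[i] >>> 16) & 0xFF) * falloff);
            int g = (int) (((argb[i] >>> 8) & 0xFF) * falloff);
            int b = (int) ((argb[i] & 0xFF) * falloff);
            out[i] = (a << 24) | (r << 16) | (g << 8) | b;
        }
        return out;
    }
}
```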
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module and an algorithm management module, wherein the hardware abstraction module is connected with the media policy module, and the media policy module is connected with the algorithm management module. In the aspect that the algorithm for implementing the depth of field effect is an algorithm that the third-party camera application requests the operating system, in advance and through the media service module, to open for the third-party camera application, the one or more programs 521 specifically include instructions for performing the following operations: controlling the media service module to determine the depth of field effect required to be used by the third-party camera application and generate first function configuration information according to the depth of field effect; controlling the media policy module to receive the first function configuration information from the media service module, where the first function configuration information includes description information of the depth of field effect, convert the first function configuration information into second function configuration information recognizable by the algorithm management module, and send the second function configuration information to the algorithm management module; and controlling the algorithm management module to receive the second function configuration information and open, for the third-party camera application, the use permission of the algorithm for the depth of field effect according to the second function configuration information.
In one possible example, a hardware abstraction layer of the operating system is provided with a camera hardware abstraction module and an algorithm management module, and the camera hardware abstraction module is connected with the algorithm management module. In the aspect of receiving the photographing request of the third-party camera application at the hardware abstraction layer, acquiring the to-be-processed application data requested by the third-party camera application, where the application data comprises at least one frame of image of a target object, and calling the algorithm for achieving the depth of field effect to process the at least one frame of image to obtain a depth map and face information of the target user, the one or more programs 521 specifically comprise instructions for performing the following operations: controlling the camera hardware abstraction module to receive the photographing request of the third-party camera application and acquire the to-be-processed application data requested by the third-party camera application, where the application data comprises at least one frame of image of the target object; controlling the camera hardware abstraction module to send the at least one frame of image to the algorithm management module; and controlling the algorithm management module to receive the at least one frame of image and process the to-be-processed application data with the algorithm for achieving the depth of field effect to obtain the depth map and the face information of the target user.
In one possible example, a hardware abstraction layer of the operating system is provided with a camera hardware abstraction module, a media policy module and an algorithm management module, wherein the camera hardware abstraction module is connected with the media policy module, and the media policy module is connected with the algorithm management module. In the same aspect, the one or more programs 521 specifically comprise instructions for performing the following operations: controlling the camera hardware abstraction module to receive the photographing request of the third-party camera application and acquire the to-be-processed application data requested by the third-party camera application, where the application data comprises at least one frame of image of the target object; controlling the camera hardware abstraction module to send the at least one frame of image to the algorithm management module through the media policy module; and controlling the algorithm management module to receive the at least one frame of image and process the to-be-processed application data with the algorithm for achieving the depth of field effect to obtain the depth map and the face information of the target user.
In one possible example, the application layer of the operating system is further provided with a media management module, and the media management module supports information interaction between the third-party camera application and the media service module through an SDK binder communication mechanism.

The application processor 510 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 510 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array) or a PLA (Programmable Logic Array). The processor 510 may also include a main processor and a coprocessor: the main processor, also called a central processing unit (CPU), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 510 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 520 may include one or more computer-readable storage media, which may be non-transitory. The memory 520 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 520 is at least used for storing a computer program which, after being loaded and executed by the processor 510, implements the relevant steps of the image processing method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 520 may also include an operating system, data and the like, stored transiently or permanently. The operating system may include Windows, Unix, Linux and the like. The data may include, but is not limited to, electronic device interaction data, electronic device signals and the like.
In some embodiments, the electronic device 500 may further include an input-output interface, a communication interface, a power source, and a communication bus.
Those skilled in the art will appreciate that the configuration disclosed in the present embodiment is not intended to be limiting, and the electronic device may include more or fewer components.
The above description has introduced the solutions of the embodiments of the present application mainly from the perspective of the method-side implementation process. It can be understood that, to realize the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
An embodiment of the present application provides a chip, where the chip includes a processor and a data interface, and the processor reads instructions stored on a memory through the data interface to perform a method according to the first aspect to the third aspect and any optional implementation manner described above.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, the computer including the electronic device described above.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods described in the above method embodiments. The computer program product may be a software installation package, and the computer includes the electronic device described above.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the electronic device are merely illustrative; for instance, the division of the above-described units is only one type of logical function division, and other divisions may be used in practice: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be performed by associated hardware as instructed by a program, which may be stored in a computer-readable memory, such as a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or a magnetic or optical disk.
While the present disclosure has been described with reference to particular embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure.

Claims (10)

1. An image processing method is applied to an electronic device, the electronic device comprises an operating system and a media service module, the operating system comprises an application layer and a hardware abstraction layer, the application layer is provided with a third-party camera application, and the method comprises the following steps:
the third party camera application sends a photographing request to the hardware abstraction layer;
the hardware abstraction layer receives the photographing request of the third-party camera application and acquires the to-be-processed application data requested by the third-party camera application, the application data comprising at least one frame of image of a target object; calls an algorithm for achieving a depth of field effect to process the at least one frame of image to obtain a depth map and face information of the target user; and sends the depth map and the face information to the third-party camera application; wherein the algorithm for achieving the depth of field effect is an algorithm that the third-party camera application requests the operating system, in advance and through the media service module, to open for the third-party camera application, and the media service module is configured to respond to the photographing request of the third-party camera application and issue configuration information of the third-party camera application to the hardware abstraction layer;
and the third-party camera application receives the depth map and the face information, and processes the at least one frame of image according to the face information and the depth map to obtain at least one frame of processed image.
2. The method according to claim 1, wherein the invoking the algorithm for implementing the depth-of-field effect to process the at least one frame of image to obtain the depth map and the face information of the target object comprises:
the hardware abstraction layer calls a face detection algorithm to process the at least one frame of image to obtain face information of the target object;
and the hardware abstraction layer calls a depth detection algorithm to process the at least one frame of image to obtain the depth information of the target object, and a depth map is obtained according to the depth information.
3. The method of claim 1, wherein the third-party camera application receives the depth map and the face information, processes the at least one frame of image according to the face information and the depth map, and obtains the processed at least one frame of image, and comprises:
the third-party camera application carries out portrait lighting effect processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of human body image with a special lighting effect;
the third-party camera application carries out three-dimensional modeling processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of three-dimensional human body image;
and the third-party camera application performs human body beautification processing on the at least one frame of image according to the depth map and the face information to obtain at least one frame of human body beautification image.
4. The method according to claim 1, wherein a hardware abstraction module, a media policy module and an algorithm management module are arranged on a hardware abstraction layer of the operating system, the hardware abstraction module is connected with the media policy module, and the media policy module is connected with the algorithm management module;
the requesting, by the third-party camera application in advance through the media service module, that the operating system open the algorithm for implementing the depth of field effect for the third-party camera application includes:
the media service module determines a depth-of-field effect required to be used by the third-party camera application, and generates first function configuration information according to the depth-of-field effect;
the media policy module receives the first function configuration information from the media service module, wherein the first function configuration information comprises description information of a depth effect; converting the first function configuration information into second function configuration information which can be identified by the algorithm management module, and sending the second function configuration information to the algorithm management module;
and the algorithm management module receives the second function configuration information and opens the use permission of the third-party camera application for the algorithm of the depth of field effect according to the second function configuration information.
5. The method according to claim 1, wherein a hardware abstraction layer of the operating system is provided with a camera hardware abstraction module and an algorithm management module, and the camera hardware abstraction module is connected with the algorithm management module;
the hardware abstraction layer receives the photographing request of the third-party camera application, acquires application data to be processed of the third-party camera application request, wherein the application data comprises at least one frame of image of a target object, and calls an algorithm for realizing a depth effect to process the at least one frame of image to obtain a depth map and face information of the target user, and the method comprises the following steps:
the camera hardware abstraction module receives the photographing request of the third-party camera application, and obtains application data to be processed requested by the third-party camera application, wherein the application data comprises at least one frame of image of a target object;
the camera hardware abstraction module sends the at least one frame of image to the algorithm management module;
and the algorithm management module receives the at least one frame of image, processes the application data to be processed by utilizing the algorithm for realizing the depth of field effect, and obtains a depth map and face information of the target user.
6. The method according to claim 1, wherein a hardware abstraction layer of the operating system is provided with a camera hardware abstraction module, a media policy module and an algorithm management module, wherein the camera hardware abstraction module is connected with the media policy module, and the media policy module is connected with the algorithm management module;
the hardware abstraction layer receives the photographing request of the third-party camera application, acquires application data to be processed of the third-party camera application request, wherein the application data comprises at least one frame of image of a target object, and calls an algorithm for realizing a depth effect to process the at least one frame of image to obtain a depth map and face information of the target user, and the method comprises the following steps:
the camera hardware abstraction module receives the photographing request of the third-party camera application, and obtains application data to be processed requested by the third-party camera application, wherein the application data comprises at least one frame of image of a target object;
the camera hardware abstraction module sends the at least one frame of image to the algorithm management module through the media policy module;
and the algorithm management module receives the at least one frame of image, processes the application data to be processed by utilizing the algorithm for realizing the depth of field effect, and obtains a depth map and face information of the target user.
7. The method according to any one of claims 1 to 6, wherein a media management module is further provided in the application layer of the operating system, and the media management module supports information interaction between the third-party camera application and the media service module through an SDK binder communication mechanism.
8. An image processing apparatus, applied to an electronic device, the electronic device including an operating system and a media service module, the operating system including an application layer and a hardware abstraction layer, the application layer being provided with a third-party camera application, the image processing apparatus including a communication unit and a processing unit, wherein:
the processing unit is configured to control the third-party camera application to send a photographing request to the hardware abstraction layer through the communication unit; control the hardware abstraction layer to receive the photographing request of the third-party camera application, acquire the to-be-processed application data requested by the third-party camera application, the application data comprising at least one frame of image of a target object, process the at least one frame of image by calling an algorithm for achieving a depth of field effect to obtain a depth map and face information of the target user, and send the depth map and the face information to the third-party camera application via the communication unit, wherein the algorithm for achieving the depth of field effect is an algorithm that the third-party camera application requests the operating system, in advance and through the media service module, to open for the third-party camera application, and the media service module is configured to respond to the photographing request of the third-party camera application and issue configuration information of the third-party camera application to the hardware abstraction layer; and control the third-party camera application to receive the depth map and the face information and process the at least one frame of image according to the face information and the depth map to obtain at least one frame of processed image.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-7.
CN201911253015.8A 2019-12-09 2019-12-09 Image processing method and related device Active CN110958390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911253015.8A CN110958390B (en) 2019-12-09 2019-12-09 Image processing method and related device

Publications (2)

Publication Number Publication Date
CN110958390A CN110958390A (en) 2020-04-03
CN110958390B true CN110958390B (en) 2021-07-20

Family

ID=69980459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911253015.8A Active CN110958390B (en) 2019-12-09 2019-12-09 Image processing method and related device

Country Status (1)

Country Link
CN (1) CN110958390B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111491102B (en) * 2020-04-22 2022-01-07 Oppo广东移动通信有限公司 Detection method and system for photographing scene, mobile terminal and storage medium
CN112165575B (en) * 2020-09-25 2022-03-18 Oppo(重庆)智能科技有限公司 Image blurring processing method and device, storage medium and electronic equipment
CN112311985B (en) * 2020-10-12 2021-12-14 珠海格力电器股份有限公司 Multi-shooting processing method and device and storage medium
CN115706849B (en) * 2021-08-05 2024-01-30 北京小米移动软件有限公司 Camera software architecture, platform and terminal equipment
CN116052236A (en) * 2022-08-04 2023-05-02 荣耀终端有限公司 Face detection processing engine, shooting method and equipment related to face detection

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150128256A (en) * 2014-05-09 2015-11-18 배재대학교 산학협력단 Server system having virtual android apparatus for interworking between application and real smart device
CN108876833A (en) * 2018-03-29 2018-11-23 北京旷视科技有限公司 Image processing method, image processing apparatus and computer readable storage medium
CN110086967A (en) * 2019-04-10 2019-08-02 Oppo广东移动通信有限公司 Image processing method, image processor, filming apparatus and electronic equipment
CN110177218A (en) * 2019-06-28 2019-08-27 广州鲁邦通物联网科技有限公司 A kind of image processing method of taking pictures of Android device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9118864B2 (en) * 2012-08-17 2015-08-25 Flextronics Ap, Llc Interactive channel navigation and switching
US10922148B2 (en) * 2015-04-26 2021-02-16 Intel Corporation Integrated android and windows device
KR20180023326A (en) * 2016-08-25 2018-03-07 삼성전자주식회사 Electronic device and method for providing image acquired by the image sensor to application
CN108012084A (en) * 2017-12-14 2018-05-08 维沃移动通信有限公司 A kind of image generating method, application processor AP and third party's picture processing chip


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant