CN110991368A - Camera scene recognition method and related device - Google Patents

Camera scene recognition method and related device

Info

Publication number
CN110991368A
CN110991368A (application CN201911252533.8A; granted as CN110991368B)
Authority
CN
China
Prior art keywords
module
algorithm
scene
scene detection
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911252533.8A
Other languages
Chinese (zh)
Other versions
CN110991368B (en)
Inventor
方攀 (Fang Pan)
陈岩 (Chen Yan)
Current Assignee
Shanghai Jinsheng Communication Technology Co ltd
Original Assignee
Shanghai Jinsheng Communication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jinsheng Communication Technology Co ltd filed Critical Shanghai Jinsheng Communication Technology Co ltd
Priority to CN201911252533.8A priority Critical patent/CN110991368B/en
Publication of CN110991368A publication Critical patent/CN110991368A/en
Application granted granted Critical
Publication of CN110991368B publication Critical patent/CN110991368B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/10: Terrestrial scenes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/80: Camera processing pipelines; components thereof
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of this application disclose a camera scene recognition method and a related device. The method comprises the following steps: a third-party application sends a scene detection request to the hardware abstraction layer of the operating system; the hardware abstraction layer receives the request, acquires a raw scene-identification data frame to be processed, invokes a scene recognition algorithm to process the frame to obtain a target scene detection result, and sends the result to the third-party application, where the scene recognition algorithm is one that the third-party application has previously requested the operating system, through the media service module, to open to it; and the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the result, and photographs with that algorithm. The embodiments of this application improve the efficiency and accuracy of camera scene recognition.

Description

Camera scene recognition method and related device
Technical Field
This application relates to the technical field of electronic devices, and in particular to a camera scene recognition method and a related device.
Background
Camera application software is increasingly widely used in electronic devices, and users' demands for camera data processing keep growing. However, there is currently no unified API through which a third-party application can access the underlying scene detection capability. Running deep learning on the CPU yields poor network inference performance, and running a scene recognition algorithm makes the preview picture stutter. As a result, third-party applications can only integrate scene recognition algorithms themselves to meet user requirements.
Disclosure of Invention
The embodiments of this application provide a camera scene recognition method and a related device, aiming to improve the efficiency and accuracy of camera scene recognition.
In a first aspect, an embodiment of this application provides a camera scene recognition method applied to an electronic device that includes a media service module and an operating system, where the application layer of the operating system is provided with a third-party application. The method includes: the third-party application sends a scene detection request to the hardware abstraction layer of the operating system;
the hardware abstraction layer receives the scene detection request, acquires a raw scene-identification data frame to be processed, invokes a scene recognition algorithm to process the frame to obtain a target scene detection result, and sends the result to the third-party application, where the scene recognition algorithm is one that the third-party application has previously requested the operating system, through the media service module, to open to it;
and the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the result, and photographs with that algorithm.
In a second aspect, an embodiment of the present application provides a camera scene recognition apparatus, which is applied to an electronic device, where the electronic device includes a media service module and an operating system, and an application layer of the operating system is provided with a third-party application; the apparatus comprises a processing unit and a communication unit, wherein,
the processing unit is configured to: send, by the third-party application, a scene detection request to the hardware abstraction layer of the operating system; receive the scene detection request at the hardware abstraction layer, acquire a raw scene-identification data frame to be processed, invoke a scene recognition algorithm to process the frame to obtain a target scene detection result, and send the result to the third-party application, where the scene recognition algorithm is one that the third-party application has previously requested the operating system, through the media service module, to open to it; and receive, at the third-party application, the target scene detection result, obtain a target photographing algorithm according to the result, and photograph with that algorithm.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of this application provides a chip, including a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed performs some or all of the steps described in any method of the first aspect of the embodiments of this application.
In a fifth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in any one of the methods of the first aspect of this application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that in the embodiments of this application, a third-party application in an electronic device sends a scene detection request to the hardware abstraction layer of the operating system. The hardware abstraction layer receives the request, acquires a raw scene-identification data frame to be processed, invokes a scene recognition algorithm to process the frame to obtain a target scene detection result, and sends the result to the third-party application; the scene recognition algorithm is one that the third-party application has previously requested the operating system, through the media service module, to open to it. Finally, the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the result, and photographs with that algorithm. This provides third-party applications with a more efficient, customizable scene recognition method: the third-party application customizes the target algorithm according to the scene, which improves the intelligence and accuracy of camera scene recognition.
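The three-step flow summarized above can be sketched in a few lines. This is an illustrative model only: the class and function names (`HardwareAbstractionLayer`, `ThirdPartyApp`, the scene-to-algorithm table) are hypothetical stand-ins for the described roles, not part of any real Android API.

```python
class HardwareAbstractionLayer:
    """Models the HAL side: receive a request, run the opened scene recognition algorithm."""

    def __init__(self, scene_algorithm):
        # The algorithm was opened to the app in advance via the media service module
        self.scene_algorithm = scene_algorithm

    def handle_detection_request(self, raw_frame):
        # Process the raw scene-identification data frame, return the detection result
        return self.scene_algorithm(raw_frame)


class ThirdPartyApp:
    """Models the app side: request detection, then pick a photographing algorithm."""

    # Hypothetical mapping from detected scene to photographing algorithm
    SCENE_TO_ALGORITHM = {"night": "night_mode", "backlight": "hdr"}

    def __init__(self, hal):
        self.hal = hal

    def take_photo(self, raw_frame):
        scene = self.hal.handle_detection_request(raw_frame)       # steps 1-2
        algorithm = self.SCENE_TO_ALGORITHM.get(scene, "default")  # step 3
        return f"photo taken with {algorithm}"


# Toy classifier standing in for the deep-learning network
hal = HardwareAbstractionLayer(lambda frame: "night" if frame["luma"] < 40 else "daylight")
app = ThirdPartyApp(hal)
result = app.take_photo({"luma": 25})
```

A scene with no customized mapping falls back to the app's default photographing algorithm, mirroring the "customize the scenes it needs" behavior described later.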
Drawings
To illustrate the technical solutions in the embodiments of this application more clearly, the drawings needed for the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of this application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a camera scene recognition method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of another camera scene recognition method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a block diagram illustrating functional units of a camera scene recognition apparatus according to an embodiment of the present disclosure.
Detailed Description
To make the technical solutions of this application better understood, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of this application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device in the embodiments of this application may be a device with communication capability, including various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.
Currently, on the Android platform, a third-party camera application can access underlying camera data through the standard Android application programming interface (API), but if it wants to process the underlying image with more enhanced functions or algorithms, there is no corresponding standard interface that exposes the underlying capability to third-party access. Moreover, security is critical once underlying core functions are opened up; current schemes grant authorization by means such as whitelists.
In the traditional approach, an Android third-party application performs scene detection after receiving YUV data and cannot obtain underlying hardware acceleration through Google's native APIs. Running deep learning on the CPU yields poor network inference performance, and running a scene recognition algorithm makes the preview stutter; the platform's GPU and DSP cannot be used to accelerate the computation. For example, with the scene recognition AI technique on the MTK MT6760 platform, the single-frame processing time of MobileNetV2 on the CPU is about 200 ms. When a third-party camera app runs the network model, the phone's performance is severely affected and the camera preview stalls.
In view of the foregoing problems, embodiments of the present application provide a camera scene recognition method and a related apparatus, and the following describes embodiments of the present application in detail with reference to the accompanying drawings.
As shown in fig. 1, an electronic device 100 according to an embodiment of this application includes a media service module and an operating system, which may be the Android system. The application layer of the operating system is provided with a third-party application and a media management module (also referred to as a media interface module). The hardware abstraction layer of the operating system is provided with a hardware abstraction module (an Android-native module, such as the native camera hardware abstraction module CameraHAL), a media policy module, and an algorithm management module. In addition, the operating system's native architecture further includes a framework layer and a driver layer. The framework layer includes the application interfaces of various native applications (such as the native camera application programming interface), application services (such as the native camera service), and a framework-layer interface (such as the Google HAL3 interface). The hardware abstraction layer includes a hardware abstraction layer interface (such as HAL3.0) and the hardware abstraction modules of various native applications (such as the camera hardware abstraction module). The driver layer includes various drivers (e.g., the screen display driver and the audio driver) for enabling the hardware of the electronic device, such as the image signal processor (ISP) and front-end image sensors.
The media service module is independent of the operating system. Third-party applications communicate with the media service module through the media management module. The media service module can communicate with the media policy module through the Android-native information link formed by the application interface, application service, framework-layer interface, hardware abstraction layer interface, and hardware abstraction module; the media policy module in turn communicates with the algorithm management module. The algorithm management module maintains the Android-native algorithm library, which contains the enhancement functions supported by the various native applications; for example, the native camera application supports enhancement functions such as dual-camera (binocular) shooting, beautification, sharpening, and night vision. In addition, the media service module can also communicate directly with the media policy module or the algorithm management module.
Based on the above framework, the media service module can enable an algorithm module in the algorithm library through the Android-native information link plus the media policy module and the algorithm management module, or directly through the media policy module and the algorithm management module, or directly through the algorithm management module alone, thereby opening enhancement functions associated with native applications to third-party applications.
Likewise, the media service module can invoke the relevant driver to enable certain hardware through the Android-native information link, through a first information link composed of the media policy module and the hardware abstraction module, or through a second information link composed of the media policy module, the algorithm management module, and the hardware abstraction module, thereby opening hardware associated with native applications to third-party applications.
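The enablement paths described above differ only in which modules the request traverses before reaching the algorithm management module. A minimal sketch, with hypothetical module names and link definitions (none of these identifiers come from the patent or from Android itself):

```python
# Three information links from the framework description; every link ends at the
# algorithm management module, which performs the actual enablement.
NATIVE_LINK = ["app_interface", "app_service", "framework_interface",
               "hal_interface", "hw_abstraction", "media_policy", "algorithm_mgmt"]
POLICY_LINK = ["media_policy", "algorithm_mgmt"]
DIRECT_LINK = ["algorithm_mgmt"]


class MediaServiceModule:
    """Models the media service module choosing one of the information links."""

    def enable_algorithm(self, algorithm, link):
        # The enable request travels module-by-module along the chosen link
        assert link[-1] == "algorithm_mgmt", "links end at the algorithm management module"
        return {"algorithm": algorithm, "hops": len(link), "enabled": True}


svc = MediaServiceModule()
via_native = svc.enable_algorithm("scene_recognition", NATIVE_LINK)
via_direct = svc.enable_algorithm("scene_recognition", DIRECT_LINK)
```

The shorter links model the "direct communication" option: same outcome, fewer hops through native-framework modules.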
Referring to fig. 2, fig. 2 is a flowchart illustrating a camera scene recognition method according to an embodiment of the present disclosure, where the camera scene recognition method may be applied to the electronic device shown in fig. 1.
As shown in the figure, the camera scene recognition method includes the following operations.
S201: The third-party application sends a scene detection request to the hardware abstraction layer of the operating system.
S202: The hardware abstraction layer receives the scene detection request, acquires a raw scene-identification data frame to be processed, invokes a scene recognition algorithm to process the frame to obtain a target scene detection result, and sends the result to the third-party application, where the scene recognition algorithm is one that the third-party application has previously requested the operating system, through the media service module, to open to it.
S203: The third-party application receives the target scene detection result, obtains a target photographing algorithm according to the result, and photographs with that algorithm.
Specifically, this scheme is based on the OMedia framework: the third-party application calls the underlying deep-learning network through the OMedia framework to perform scene recognition, and the recognized scene is returned to the third-party application. The recognized scenes can then be acted on by the third-party application, which can customize the scenes it needs (such as backlight, night scene, and blue sky).
It can be seen that in the embodiments of this application, a third-party application in an electronic device sends a scene detection request to the hardware abstraction layer of the operating system. The hardware abstraction layer receives the request, acquires a raw scene-identification data frame to be processed, invokes a scene recognition algorithm to process the frame to obtain a target scene detection result, and sends the result to the third-party application; the scene recognition algorithm is one that the third-party application has previously requested the operating system, through the media service module, to open to it. Finally, the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the result, and photographs with that algorithm. This provides third-party applications with a more efficient, customizable scene recognition method: the third-party application customizes the target algorithm according to the scene, which improves the intelligence and accuracy of camera scene recognition.
In one possible example, the hardware abstraction layer of the operating system is provided with a media policy module, and the third-party application sending a scene detection request to the hardware abstraction layer includes: the third-party application sends the scene detection request to the media service module; the media service module receives the request and issues it to the media policy module; and the media policy module receives the request and issues it to the underlying driver.
Here, the media service module is the OMedia platform service module, and the media policy module is the request invocation module.
The third-party application is in communication connection with the OMedia SDK interface, the OMedia SDK interface is in communication connection with the media service module, the media service module is in communication connection with the media strategy module, and the media strategy module is in communication connection with the algorithm management module.
In one possible example, the hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, with the hardware abstraction module connected to the algorithm management module through the media policy module. In this case, the hardware abstraction layer receiving the scene detection request, acquiring a raw scene-identification data frame to be processed, and invoking a scene recognition algorithm to obtain a target scene detection result includes: the media service module parses the scene detection request to obtain first information and sends the first information to the media policy module; the media policy module receives the first information and issues it to the underlying driver; the media policy module receives the raw scene-identification data frame reported by the underlying driver; the media policy module issues the raw frame to the algorithm management module; and the algorithm management module invokes the scene recognition algorithm, with acceleration, to detect the raw frame and obtain the target scene detection result.
The first information may be scene identification information, scene change information, and the like, which is not limited herein.
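The steps in this example can be modeled as a small pipeline. All names (`media_service_parse`, `FakeDriver`, the shape of the first information) are invented for illustration; the real request format is platform-internal:

```python
def media_service_parse(request):
    # Parse the scene detection request into "first information"
    return {"type": "scene_detection", "session": request["session"]}


def media_policy_issue(first_info, driver):
    # Issue the first information to the underlying driver and collect the
    # raw scene-identification data frame it reports back
    return driver.report_frame(first_info)


def algorithm_mgmt_detect(raw_frame):
    # Run the (accelerated) scene recognition algorithm on the raw frame;
    # toy classifier standing in for the deep-learning network
    return "night" if raw_frame["luma"] < 40 else "daylight"


class FakeDriver:
    """Stand-in for the underlying sensor/ISP driver."""

    def report_frame(self, first_info):
        return {"session": first_info["session"], "luma": 30}


first_info = media_service_parse({"session": 1})
raw_frame = media_policy_issue(first_info, FakeDriver())
target_result = algorithm_mgmt_detect(raw_frame)
```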
In one possible example, invoking the scene recognition algorithm, with acceleration, to detect the raw scene-identification data frame and obtain the target scene detection result includes: the algorithm management module calls a preset hardware module to accelerate the preset algorithm, where the preset hardware module includes a digital signal processor (DSP), a neural-network processing unit (NPU), or a graphics processing unit (GPU); and the algorithm management module invokes the accelerated preset algorithm to detect the scene-identification data frame and obtain the target scene detection result.
The preset algorithm may be a scene recognition algorithm; this is not intended to be limiting.
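One way to picture the acceleration step: wrap the preset algorithm so it runs on whichever accelerator is available, falling back to the CPU (the slow path the background section warns about). The dispatch order and availability flags here are assumptions for illustration, not platform behavior:

```python
def accelerate(algorithm, available_hw):
    """Wrap a scene recognition algorithm to run on the first available accelerator."""
    for hw in ("DSP", "NPU", "GPU"):  # assumed preference order
        if hw in available_hw:
            def accelerated(frame, _hw=hw):
                return {"scene": algorithm(frame), "ran_on": _hw}
            return accelerated
    # No accelerator available: fall back to the CPU
    return lambda frame: {"scene": algorithm(frame), "ran_on": "CPU"}


def detect(frame):
    # Hypothetical scene classifier stand-in
    return "backlight" if frame["contrast"] > 0.8 else "normal"


fast_detect = accelerate(detect, available_hw={"NPU", "GPU"})
result = fast_detect({"contrast": 0.9})
```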
As can be seen, in this example the scene detection result for the third-party application is determined by the preset algorithm, and based on that result the third-party application can set its own scene requirements, improving the intelligence and accuracy of the camera scene recognition method.
In one possible example, the third-party application receiving the target scene detection result, obtaining a target photographing algorithm according to the result, and photographing with that algorithm includes: querying a preset database to obtain the target algorithm corresponding to the target scene detection result, where the preset database contains the mapping between scene detection results and algorithms.
The mapping relationship may be one-to-one, one-to-many, and many-to-many, and is not limited herein.
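The preset database lookup might look like the following sketch; the scene names, photographing algorithm names, and the ordering of one-to-many candidates are invented for illustration:

```python
PRESET_DB = {
    # one-to-one mapping
    "night": ["night_mode"],
    # one-to-many mapping: several candidate algorithms for one detected scene
    "backlight": ["hdr", "exposure_fusion"],
}


def query_target_algorithm(scene, db=PRESET_DB):
    """Return the first (preferred) photographing algorithm mapped to the scene."""
    candidates = db.get(scene, ["default"])
    return candidates[0]


algo_night = query_target_algorithm("night")
algo_unknown = query_target_algorithm("blue_sky")  # not in the table
```

An unmapped scene falls through to a default algorithm, so the app always has something to photograph with.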
Thus, in this example different target algorithms are obtained for different scene detection results, improving the diversity and accuracy of the scene detection algorithms.
In one possible example, the hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, with the hardware abstraction module connected to the algorithm management module through the media policy module. In this case, the third-party application receiving the target scene detection result includes: the algorithm management module reports the target scene detection result to the media policy module; the media policy module reports it to the hardware abstraction module; the hardware abstraction module reports it to the information callback module; and the third-party application receives the target scene detection result reported by the information callback module.
The information callback module is in communication connection with the third-party application and the media service module.
Thus, in this example the scene detection result is fed back to the third-party application accurately and quickly through the information interaction between the modules, improving the intelligence and accuracy of the camera scene recognition method.
In one possible example, the media policy module receiving the scene-identification data frame reported by the underlying driver includes: the underlying driver reports the scene-identification data frame to the hardware abstraction module; and the hardware abstraction module reports the frame to the media policy module.
The underlying driver may include, but is not limited to, a sensor module and an image signal processing module; the sensor module is communicatively connected to the image signal processing module, the image signal processing module to the hardware abstraction module, and the hardware abstraction module to the media policy module.
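The upward reporting chain just described (sensor module, image signal processing module, hardware abstraction module, media policy module) can be sketched as modules that each forward the frame to their upstream neighbor; all class and module names here are hypothetical:

```python
class Module:
    """One hop in the reporting chain; forwards frames to its upstream module."""

    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream  # module this one reports to, or None at the top

    def report(self, frame, trail=None):
        trail = (trail or []) + [self.name]
        if self.upstream is None:
            # Top of the chain: deliver the frame with the path it traveled
            return {"frame": frame, "path": trail}
        return self.upstream.report(frame, trail)


# Wire the chain bottom-up: sensor -> isp -> hw_abstraction -> media_policy
media_policy = Module("media_policy")
hw_abstraction = Module("hw_abstraction", upstream=media_policy)
isp = Module("isp", upstream=hw_abstraction)
sensor = Module("sensor", upstream=isp)

delivery = sensor.report({"luma": 30})
```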
As can be seen, in this example, building this path makes it possible for a third-party application to use a computationally complex scene detection algorithm; it provides third-party applications with a more efficient, customizable scene recognition solution; and it offers a customizable scene detection interface, so a third-party application can use different algorithms depending on the scene, improving the intelligence of third-party cameras.
Referring to fig. 3, fig. 3 is a flowchart illustrating another camera scene recognition method according to an embodiment of the present disclosure, where the camera scene recognition method may be applied to the electronic device shown in fig. 1.
As shown in the figure, the camera scene recognition method includes the following operations:
S301: The third-party application sends a scene detection request to the media service module.
S302: The media service module receives the scene detection request and issues it to the media policy module.
S303: The media policy module receives the scene detection request and issues it to the underlying driver.
S304: The media service module parses the scene detection request to obtain first information and sends the first information to the media policy module.
S305: The media policy module receives the first information and issues it to the underlying driver.
S306: The media policy module receives the raw scene-identification data frame reported by the underlying driver.
S307: The media policy module issues the raw scene-identification data frame to the algorithm management module.
S308: The algorithm management module invokes the scene recognition algorithm, with acceleration, to detect the raw frame and obtain a target scene detection result.
S309: The third-party application receives the target scene detection result, obtains a target photographing algorithm according to the result, and photographs with that algorithm.
It can be seen that in the embodiments of this application, a third-party application in an electronic device sends a scene detection request to the hardware abstraction layer of the operating system. The hardware abstraction layer receives the request, acquires a raw scene-identification data frame to be processed, invokes a scene recognition algorithm to process the frame to obtain a target scene detection result, and sends the result to the third-party application; the scene recognition algorithm is one that the third-party application has previously requested the operating system, through the media service module, to open to it. Finally, the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the result, and photographs with that algorithm. This provides third-party applications with a more efficient, customizable scene recognition method: the third-party application customizes the target algorithm according to the scene, which improves the intelligence and accuracy of camera scene recognition.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device 400 according to an embodiment of the present disclosure, and as shown in the figure, the electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for executing any step in the foregoing method embodiment.
In one possible example, the program 421 includes instructions for performing the following steps: the third-party application sends a scene detection request to a hardware abstraction layer of the operating system; the hardware abstraction layer receives the scene detection request, acquires an original scene identification data frame to be processed, invokes a scene recognition algorithm to process the original scene identification data frame to obtain a target scene detection result, and sends the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested in advance, via the media service module, that the operating system open to it; and the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
In one possible example, a hardware abstraction layer of the operating system is provided with a media policy module; in terms of the third-party application sending a scene detection request to the hardware abstraction layer of the operating system, the instructions in the program 421 are specifically configured to perform the following operations: the third-party application sends a scene detection request to the media service module; the media service module receives the scene detection request and issues it to the media policy module; and the media policy module receives the scene detection request and issues it to the underlying driver.
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, where the hardware abstraction module is connected with the algorithm management module through the media policy module; in the aspect of the hardware abstraction layer receiving the scene detection request, acquiring an original scene identification data frame to be processed, and invoking a scene recognition algorithm to process the original scene identification data frame to obtain a target scene detection result, the instructions in the program 421 are specifically configured to perform the following operations: the media service module parses the scene detection request to obtain first information and issues the first information to the media policy module; the media policy module receives the first information and issues it to the underlying driver; the media policy module receives the original scene identification data frame reported by the underlying driver; the media policy module issues the original scene identification data frame to the algorithm management module; and the algorithm management module invokes the scene recognition algorithm to perform accelerated detection on the original scene identification data frame, obtaining a target scene detection result.
In one possible example, in terms of the algorithm management module invoking the scene recognition algorithm to perform accelerated detection on the original scene identification data frame to obtain the target scene detection result, the instructions in the program 421 are specifically configured to perform the following operations: the algorithm management module invokes a preset hardware module to accelerate the preset algorithm, obtaining an accelerated preset algorithm, where the preset hardware module includes a DSP (digital signal processor), an NPU (neural-network processing unit), and a GPU (graphics processing unit); and the algorithm management module invokes the accelerated preset algorithm to detect the scene identification data frame, obtaining a target scene detection result.
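The backend selection step could look like the following sketch. The backend names come from the paragraph above (DSP, NPU, GPU); the availability list, preference order, and function shapes are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch: an algorithm management module selecting a hardware
# backend to accelerate a preset algorithm. A real HAL would load a
# backend-specific kernel; here we only tag the algorithm with the choice.

AVAILABLE_BACKENDS = ["DSP", "NPU", "GPU"]  # assumed device capabilities

def accelerate(algorithm_name, preferred=("NPU", "GPU", "DSP")):
    """Return (backend, accelerated_fn) for the first available backend
    in the caller's preference order."""
    for backend in preferred:
        if backend in AVAILABLE_BACKENDS:
            def accelerated_fn(frame, _backend=backend):
                # Placeholder for the accelerated detection call.
                return {"algorithm": algorithm_name,
                        "backend": _backend,
                        "result": "detected"}
            return backend, accelerated_fn
    raise RuntimeError("no acceleration backend available")

backend, fn = accelerate("scene_recognition")
print(backend)  # NPU, since it is first in the default preference order
```

The preference order is a policy decision: an NPU is typically favored for neural-network scene classifiers, falling back to GPU or DSP when absent, but the patent itself does not prescribe an ordering.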
In one possible example, in the aspect of the third-party application receiving the target scene detection result, obtaining a target photographing algorithm according to the target scene detection result, and photographing according to the target photographing algorithm, the program 421 includes instructions for: querying a preset database to obtain the target algorithm corresponding to the target scene detection result, where the preset database includes mapping relationships between scene detection results and algorithms.
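The "preset database" can be as simple as a key-value mapping from scene labels to photographing algorithms. The concrete scene labels and algorithm names below are invented for illustration; the patent only specifies that such a mapping exists.

```python
# Minimal sketch of the preset scene-to-algorithm mapping described above.

PRESET_DATABASE = {
    "night": "long_exposure_multiframe",
    "backlight": "hdr_fusion",
    "portrait": "bokeh_portrait",
}

def target_algorithm(scene_result, default="standard_auto"):
    """Query the mapping; fall back to a default for unknown scenes."""
    return PRESET_DATABASE.get(scene_result, default)

print(target_algorithm("night"))  # long_exposure_multiframe
print(target_algorithm("snow"))   # standard_auto (no entry for this scene)
```

A default entry matters in practice: the scene detector can emit labels the photographing side has no tuned algorithm for, and falling back to a standard pipeline is safer than failing the capture.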
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, and the hardware abstraction module is connected with the algorithm management module through the media policy module; in terms of the third-party application receiving the target scene detection result, the program 421 includes instructions for: the algorithm management module reports the target scene detection result to the media policy module; the media policy module reports the target scene detection result to the hardware abstraction module; the hardware abstraction module reports the target scene detection result to an information callback module; and the third-party application receives the target scene detection result reported by the information callback module.
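The upward reporting chain (algorithm management → media policy → hardware abstraction → information callback → third-party application) is essentially a chain of callbacks. The module names below are the patent's; the callback mechanics and path annotation are illustrative assumptions.

```python
# Sketch of the result-reporting chain as chained callbacks.

class InformationCallbackModule:
    """Final hop: delivers the result to the registered app listener."""
    def __init__(self):
        self._listener = None

    def register(self, listener):
        self._listener = listener

    def report(self, result):
        if self._listener:
            self._listener(result)

def forward(next_hop, tag):
    """Wrap a hop so it annotates the path and passes the result upward."""
    def hop(result):
        result.setdefault("path", []).append(tag)
        next_hop(result)
    return hop

callback = InformationCallbackModule()
received = []
callback.register(received.append)  # third-party app registers a listener

# Build the chain from the callback module downward.
hardware_abstraction = forward(callback.report, "hardware_abstraction")
media_policy = forward(hardware_abstraction, "media_policy")
algorithm_management = forward(media_policy, "algorithm_management")

algorithm_management({"scene": "portrait"})  # algorithm module reports up
print(received[0]["path"])
```

Registering the listener once and pushing results through the chain avoids the application polling the HAL for each frame, which matches the report/callback wording in the text above.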
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It can be understood that, in order to realize the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments provided herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into functional units according to the above method example; for example, each functional unit may correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiment of the present application is schematic and is only a logical function division; there may be other division manners in actual implementation.
Fig. 5 is a block diagram of functional units of a camera scene recognition apparatus 500 according to an embodiment of the present application. The camera scene recognition apparatus 500 is applied to an electronic device, where the electronic device includes a media service module and an operating system, and a third-party application is arranged on an application layer of the operating system; the apparatus includes a processing unit 501 and a communication unit 502, where the processing unit 501 is configured to execute any step in the above method embodiments, and when data transmission such as sending is performed, the communication unit 502 is optionally invoked to complete the corresponding operation. The details are described below.
The processing unit 501 is configured to send a scene detection request to a hardware abstraction layer of the operating system by the third-party application;
the hardware abstraction layer receives the scene detection request, acquires an original scene identification data frame to be processed, invokes a scene recognition algorithm to process the original scene identification data frame to obtain a target scene detection result, and sends the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested in advance, via the media service module, that the operating system open to it;
and the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
In one possible example, a hardware abstraction layer of the operating system is provided with a media policy module; in the aspect of the third-party application sending a scene detection request to the hardware abstraction layer of the operating system, the processing unit 501 is specifically configured so that the third-party application sends the scene detection request to the media service module; the media service module receives the scene detection request and issues it to the media policy module; and the media policy module receives the scene detection request and issues it to the underlying driver.
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, and the hardware abstraction module is connected with the algorithm management module through the media policy module; in the aspect of the hardware abstraction layer receiving the scene detection request, acquiring an original scene identification data frame to be processed, and invoking a scene recognition algorithm to process the original scene identification data frame to obtain a target scene detection result, the processing unit 501 is specifically configured so that the media service module parses the scene detection request to obtain first information and issues the first information to the media policy module; the media policy module receives the first information and issues it to the underlying driver; the media policy module receives the original scene identification data frame reported by the underlying driver; the media policy module issues the original scene identification data frame to the algorithm management module; and the algorithm management module invokes the scene recognition algorithm to perform accelerated detection on the original scene identification data frame, obtaining a target scene detection result.
In one possible example, in terms of the algorithm management module invoking the scene recognition algorithm to perform accelerated detection on the original scene identification data frame to obtain a target scene detection result, the algorithm management module invokes a preset hardware module to accelerate the preset algorithm, obtaining an accelerated preset algorithm, where the preset hardware module includes a DSP, an NPU, and a GPU; and the algorithm management module invokes the accelerated preset algorithm to detect the scene identification data frame, obtaining a target scene detection result.
In a possible example, in the aspect of the third-party application receiving the target scene detection result, obtaining a target photographing algorithm according to the target scene detection result, and photographing according to the target photographing algorithm, the processing unit 501 is configured to query a preset database to obtain the target algorithm corresponding to the target scene detection result, where the preset database includes mapping relationships between scene detection results and algorithms.
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, and the hardware abstraction module is connected with the algorithm management module through the media policy module; in the aspect of the third-party application receiving the target scene detection result, the processing unit 501 is configured so that the algorithm management module reports the target scene detection result to the media policy module; the media policy module reports the target scene detection result to the hardware abstraction module; the hardware abstraction module reports the target scene detection result to an information callback module; and the third-party application receives the target scene detection result reported by the information callback module.
The camera scene recognition apparatus 500 may further include a storage unit 503 for storing program codes and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be a touch display screen or a transceiver, and the storage unit 503 may be a memory.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentation forms of the same technical concept, the content of the method embodiment portion in the present application should be synchronously adapted to the apparatus embodiment portion, and is not described herein again.
Embodiments of the present application further provide a chip, where the chip includes a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs some or all of the steps described in the electronic device in the above method embodiments.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; for instance, the above division of units is only a division of logical functions, and other divisions may be used in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware, where the program may be stored in a computer-readable memory, which may include: a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A camera scene recognition method, applied to an electronic device, wherein the electronic device comprises a media service module and an operating system, and an application layer of the operating system is provided with a third-party application; the method comprises the following steps:
the third-party application sends a scene detection request to a hardware abstraction layer of the operating system;
the hardware abstraction layer receives the scene detection request, acquires an original scene identification data frame to be processed, invokes a scene recognition algorithm to process the original scene identification data frame to obtain a target scene detection result, and sends the target scene detection result to the third party application, wherein the scene recognition algorithm is one that the third party application has requested in advance, via the media service module, that the operating system open to it;
and the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
2. The method of claim 1, wherein a hardware abstraction layer of the operating system is provided with a media policy module; the third party application sending a scene detection request to a hardware abstraction layer of the operating system, including:
the third-party application sends a scene detection request to the media service module;
the media service module receives the scene detection request and issues the scene detection request to the media policy module;
and the media policy module receives the scene detection request and issues the scene detection request to the underlying driver.
3. The method according to claim 1 or 2, wherein a hardware abstraction module, a media policy module and an algorithm management module are arranged on a hardware abstraction layer of the operating system, and the hardware abstraction module is connected with the algorithm management module through the media policy module; the hardware abstraction layer receives the scene detection request, acquires an original scene identification data frame to be processed, and calls an algorithm for realizing scene identification to process the original scene identification data frame to obtain a target scene detection result, and the method comprises the following steps:
the media service module parses the scene detection request to obtain first information and issues the first information to the media policy module;
the media policy module receives the first information and issues the first information to an underlying driver;
the media policy module receives an original scene identification data frame reported by the underlying driver;
the media policy module issues the original scene identification data frame to the algorithm management module;
and the algorithm management module invokes the scene recognition algorithm to perform accelerated detection on the original scene identification data frame, obtaining a target scene detection result.
4. The method of claim 3, wherein the algorithm management module invoking the scene recognition algorithm to perform accelerated detection on the original scene identification data frame to obtain the target scene detection result comprises:
the algorithm management module invokes a preset hardware module to accelerate the preset algorithm, obtaining an accelerated preset algorithm, wherein the preset hardware module comprises a DSP, an NPU, and a GPU;
and the algorithm management module invokes the accelerated preset algorithm to detect the scene identification data frame, obtaining a target scene detection result.
5. The method of claim 1, wherein the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm, and comprises:
querying a preset database to obtain the target algorithm corresponding to the target scene detection result, wherein the preset database comprises mapping relationships between scene detection results and algorithms.
6. The method according to claim 1, wherein a hardware abstraction module, a media policy module and an algorithm management module are arranged on a hardware abstraction layer of the operating system, and the hardware abstraction module is connected with the algorithm management module through the media policy module; the third-party application receiving the target scene detection result, including:
the algorithm management module reports the target scene detection result to the media policy module;
the media policy module reports the target scene detection result to the hardware abstraction module;
the hardware abstraction module reports the target scene detection result to an information callback module;
and the third-party application receives the target scene detection result reported by the information callback module.
7. A camera scene recognition apparatus, applied to an electronic device, wherein the electronic device comprises a media service module and an operating system, and an application layer of the operating system is provided with a third-party application; the apparatus comprises a processing unit and a communication unit, wherein,
the processing unit is used for sending a scene detection request to a hardware abstraction layer of the operating system by the third-party application; the hardware abstraction layer receives the scene detection request, acquires an original scene identification data frame to be processed, invokes a scene recognition algorithm to process the original scene identification data frame to obtain a target scene detection result, and sends the target scene detection result to the third party application, wherein the scene recognition algorithm is one that the third party application has requested in advance, via the media service module, that the operating system open to it; and the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
8. A chip, comprising: a processor for calling and running a computer program from a memory so that a device on which the chip is installed performs the method of any one of claims 1-6.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-6.
10. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
CN201911252533.8A 2019-12-09 2019-12-09 Camera scene recognition method and related device Active CN110991368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252533.8A CN110991368B (en) 2019-12-09 2019-12-09 Camera scene recognition method and related device

Publications (2)

Publication Number Publication Date
CN110991368A true CN110991368A (en) 2020-04-10
CN110991368B CN110991368B (en) 2023-06-02

Family

ID=70091486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911252533.8A Active CN110991368B (en) 2019-12-09 2019-12-09 Camera scene recognition method and related device

Country Status (1)

Country Link
CN (1) CN110991368B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111491102A (en) * 2020-04-22 2020-08-04 Oppo广东移动通信有限公司 Detection method and system for photographing scene, mobile terminal and storage medium
CN112162797A (en) * 2020-10-14 2021-01-01 珠海格力电器股份有限公司 Data processing method, system, storage medium and electronic device
WO2021115113A1 (en) * 2019-12-09 2021-06-17 Oppo广东移动通信有限公司 Data processing method and device, and storage medium
WO2021115038A1 (en) * 2019-12-09 2021-06-17 Oppo广东移动通信有限公司 Application data processing method and related apparatus
CN114727004A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Image acquisition method and device, electronic equipment and storage medium
CN116347009A (en) * 2023-02-24 2023-06-27 荣耀终端有限公司 Video generation method and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107483725A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Resource allocation method and Related product
US20190034234A1 (en) * 2017-07-31 2019-01-31 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method For Resource Allocation And Terminal Device
CN109831629A (en) * 2019-03-14 2019-05-31 Oppo广东移动通信有限公司 Method of adjustment, device, terminal and the storage medium of terminal photographing mode

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱玉鹏; 白海亮; 王宏强: "Design and implementation of a battlefield data fusion simulation ***"

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021115113A1 (en) * 2019-12-09 2021-06-17 Oppo广东移动通信有限公司 Data processing method and device, and storage medium
WO2021115038A1 (en) * 2019-12-09 2021-06-17 Oppo广东移动通信有限公司 Application data processing method and related apparatus
CN111491102A (en) * 2020-04-22 2020-08-04 Oppo广东移动通信有限公司 Detection method and system for photographing scene, mobile terminal and storage medium
CN111491102B (en) * 2020-04-22 2022-01-07 Oppo广东移动通信有限公司 Detection method and system for photographing scene, mobile terminal and storage medium
CN112162797A (en) * 2020-10-14 2021-01-01 珠海格力电器股份有限公司 Data processing method, system, storage medium and electronic device
CN112162797B (en) * 2020-10-14 2022-01-25 珠海格力电器股份有限公司 Data processing method, system, storage medium and electronic device
CN114727004A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Image acquisition method and device, electronic equipment and storage medium
CN116347009A (en) * 2023-02-24 2023-06-27 荣耀终端有限公司 Video generation method and electronic equipment
CN116347009B (en) * 2023-02-24 2023-12-15 荣耀终端有限公司 Video generation method and electronic equipment

Also Published As

Publication number Publication date
CN110991368B (en) 2023-06-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant