CN110602359B - Image processing method, image processor, photographing device and electronic equipment - Google Patents


Info

Publication number
CN110602359B
Authority
CN
China
Prior art keywords
module
request
execution state
image
hardware abstraction
Prior art date
Legal status
Active
Application number
CN201910822518.6A
Other languages
Chinese (zh)
Other versions
CN110602359A (en)
Inventor
***
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910822518.6A
Publication of CN110602359A
Application granted
Publication of CN110602359B
Legal status: Active


Classifications

  • H04N 23/54: Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
  • H04N 23/57: Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
  • H04N 23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
  • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
  • H04N 23/683: Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
  • H04N 23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processor, a photographing device, and an electronic device. The image processing method comprises the following steps: an application program module issues a data request to a hardware abstraction module, the data request requesting image data; after receiving the data request, the hardware abstraction module sets a request identifier corresponding to the data request to a first execution state; after the hardware abstraction module uploads the image data to the application program module, it sets the request identifier to a second execution state, which is different from the first execution state; and when the application program module receives an exit command, the application program module exits only if the request identifier is in the second execution state. In this way, by the time the application program module exits, the hardware abstraction module has completely uploaded the image data to the application program module, so image data loss and abnormal photographing are avoided.

Description

Image processing method, image processor, photographing device and electronic equipment
Technical Field
The present application relates to the field of imaging technologies, and in particular, to an image processing method, an image processor, a photographing apparatus, and an electronic device.
Background
With the development of electronic technology, cameras offer more and more functions. When a user presses the photographing key of a camera and then immediately exits the camera, the image data captured at that moment may not have been completely uploaded, resulting in image data loss and abnormal photographing. In particular, after a thumbnail is generated, the user expects to tap the thumbnail to open the album and view the corresponding full-size image, but the image data cannot be found, which is a functional defect.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processor, a shooting device and electronic equipment.
The image processing method of the embodiment of the application comprises the following steps: an application program module issues a data request to a hardware abstraction module, wherein the data request is used for requesting image data; after receiving the data request, the hardware abstraction module sets a request identifier corresponding to the data request to be in a first execution state; after the hardware abstraction module uploads the image data to the application program module, the hardware abstraction module sets the request identifier to be in a second execution state, and the second execution state is different from the first execution state; and when the application program module receives an exit command, if the request identifier is in the second execution state, the application program module exits from running.
The image processor of the embodiment of the application comprises an application program module and a hardware abstraction module; the application program module is used for issuing a data request to the hardware abstraction module, and the data request is used for requesting image data; the hardware abstraction module is used for setting a request identifier corresponding to the data request to be in a first execution state after receiving the data request; after the hardware abstraction module uploads the image data to the application program module, the hardware abstraction module sets the request identifier to be in a second execution state, and the second execution state is different from the first execution state; and when the application program module receives an exit command, if the request identifier is in the second execution state, the application program module exits from running.
The shooting device of the embodiment of the application comprises an image processor and an image sensor, wherein the image sensor is connected with the image processor; the image processor comprises an application program module and a hardware abstraction module; the application program module is used for issuing a data request to the hardware abstraction module, and the data request is used for requesting image data; the hardware abstraction module is used for setting a request identifier corresponding to the data request to be in a first execution state after receiving the data request; after the hardware abstraction module uploads the image data to the application program module, the hardware abstraction module sets the request identifier to be in a second execution state, and the second execution state is different from the first execution state; and when the application program module receives an exit command, if the request identifier is in the second execution state, the application program module exits from running.
The electronic equipment of the embodiment of the application comprises a shooting device and a shell, wherein the shooting device is combined with the shell; the shooting device comprises an image processor and an image sensor, and the image sensor is connected with the image processor; the image processor comprises an application program module and a hardware abstraction module; the application program module is used for issuing a data request to the hardware abstraction module, and the data request is used for requesting image data; the hardware abstraction module is used for setting a request identifier corresponding to the data request to be in a first execution state after receiving the data request; after the hardware abstraction module uploads the image data to the application program module, the hardware abstraction module sets the request identifier to be in a second execution state, and the second execution state is different from the first execution state; and when the application program module receives an exit command, if the request identifier is in the second execution state, the application program module exits from running.
In the image processing method, the image processor, the photographing device, and the electronic device of the embodiments of the application, the hardware abstraction module sets the request identifier to the first execution state after receiving the data request and sets it to the second execution state after uploading the image data; the application program module exits only when the request identifier is in the second execution state. This guarantees that, by the time the application program module exits, the hardware abstraction module has completely uploaded the image data to the application program module, so image data loss and abnormal photographing cannot occur.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic view of a camera according to some embodiments of the present application;
FIG. 3 is a schematic diagram of an algorithmic post-processing module in accordance with certain embodiments of the present application;
FIG. 4 is a schematic view of a camera according to some embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 7 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 8 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 9 is a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 10 is a schematic diagram of an electronic device according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and fig. 2, an embodiment of the present application provides an image processing method. The image processing method comprises the following steps:
01: an application program module (APP) 14 issues a data request to the hardware abstraction module 12, the data request being used to request image data;
02: after receiving the data request, the hardware abstraction module 12 sets a request identifier corresponding to the data request to a first execution state;
03: after the hardware abstraction module 12 uploads the image data to the application program module 14, the hardware abstraction module 12 sets the request identifier to a second execution state, which is different from the first execution state; and
04: when the application module 14 receives the exit command, if the request flag is in the second execution state, the application module 14 exits from running.
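The four steps above amount to a small state machine around the request identifier. A minimal Python sketch follows; the state values, class names, and byte payload are illustrative assumptions, not taken from the patent:

```python
# Illustrative execution states for a request identifier (names assumed).
FIRST_EXECUTION_STATE = "executing"   # set when the HAL receives the request
SECOND_EXECUTION_STATE = "done"       # set after the HAL uploads the image data

class HardwareAbstractionModule:
    """Tracks one request identifier per data request."""
    def __init__(self):
        self.request_states = {}

    def receive_request(self, request_id):
        # Step 02: mark the request as being executed.
        self.request_states[request_id] = FIRST_EXECUTION_STATE

    def upload_image_data(self, request_id, app_module):
        # Upload the image data, then step 03: mark the request as completed.
        app_module.received_data[request_id] = b"...image bytes..."
        self.request_states[request_id] = SECOND_EXECUTION_STATE

class ApplicationModule:
    def __init__(self, hal):
        self.hal = hal
        self.received_data = {}

    def issue_request(self, request_id):
        # Step 01: issue a data request to the HAL.
        self.hal.issue = self.hal.receive_request(request_id)

    def try_exit(self, request_id):
        # Step 04: exit is allowed only in the second execution state.
        return self.hal.request_states.get(request_id) == SECOND_EXECUTION_STATE

hal = HardwareAbstractionModule()
app = ApplicationModule(hal)
app.issue_request("req-1")
assert not app.try_exit("req-1")   # data not yet uploaded: must not exit
hal.upload_image_data("req-1", app)
assert app.try_exit("req-1")       # upload complete: safe to exit
```

The point of the two-state identifier is that the exit decision needs only a single lookup, with no callback from the HAL to the app at exit time.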
Referring to fig. 2, an image processor 10 is also provided in the present embodiment. The image processor 10 includes an application module 14 and a hardware abstraction module 12. The image processing method according to the embodiment of the present application is applicable to the image processor 10 according to the embodiment of the present application. For example, the application module 14 may be used to execute the methods in 01 and 04 and the hardware abstraction module 12 may be used to execute the methods in 02 and 03.
That is, the application module 14 may be configured to issue a data request to the hardware abstraction module 12, the data request requesting image data. The hardware abstraction module 12 may be configured to set a request identifier corresponding to the data request to a first execution state after receiving the data request. After the hardware abstraction module 12 uploads the image data to the application module 14, the hardware abstraction module 12 sets the request flag to a second execution state, which is different from the first execution state. When the application module 14 receives the exit command, if the request flag is in the second execution state, the application module 14 exits from running.
Referring to fig. 2, the present embodiment further provides a camera 100. The photographing apparatus 100 includes an image processor 10 and an image sensor 20 of an embodiment of the present application. The image sensor 20 is connected to the image processor 10.
In the image processing method, the image processor 10, and the photographing apparatus 100 of the embodiments of the application, the hardware abstraction module 12 sets the request identifier to the first execution state after receiving the data request and sets it to the second execution state after uploading the image data; the application module 14 exits only when the request identifier is in the second execution state. This ensures that the hardware abstraction module 12 has completely uploaded the image data to the application module 14 by the time the application module 14 exits, so image data loss and abnormal photographing cannot occur.
Specifically, the photographing apparatus 100 includes an image processor 10 and an image sensor 20 connected to each other. The image sensor 20 includes an image acquisition unit (sensor) 22 and a RAW image data unit (IFE) 24. The image acquisition unit 22 is configured to receive light to acquire a RAW image; the RAW image data unit 24 is configured to process the RAW image acquired by the image acquisition unit 22 and output the processed RAW image to the image processor 10.
The image processor 10 includes a hardware abstraction module 12, an application module 14, and an Algo Process Service (APS) module 16.
The hardware abstraction module 12 is configured to receive a RAW image, convert the RAW image into a YUV image, and transmit the RAW image and/or the YUV image. The hardware abstraction module 12 may be connected to the image sensor 20. Specifically, the hardware abstraction module 12 may include a buffer unit (buffer queue) 122 connected to the image sensor 20, a RAW-to-RGB processing unit (BPS) 124, and a denoising and YUV post-processing unit (image process engine, IPE) 126 connected to the application module 14. The buffer unit 122 is used to buffer the RAW image from the image sensor 20 and transmit the RAW image to the algorithm post-processing module 16 through the application module 14. The RAW-to-RGB processing unit 124 is configured to convert the RAW image from the buffer unit 122 into an RGB image. The denoising and YUV post-processing unit 126 is configured to process the RGB image to obtain a YUV image and transmit the YUV image to the algorithm post-processing module 16 through the application module 14. The hardware abstraction module 12 may also transmit metadata of the image data. The metadata includes 3A information (automatic exposure control AE, automatic focus control AF, automatic white balance control AWB), picture information (e.g., image width and height), exposure parameters (aperture size, shutter speed, and sensitivity), and so on; the metadata can assist post-photographing processing of the RAW image and/or the YUV image (e.g., at least one of beauty processing, filter processing, rotation processing, watermarking processing, blurring processing, HDR processing, and multi-frame processing). In one embodiment, the metadata includes sensitivity (ISO) information, according to which the brightness of the RAW image and/or the YUV image can be adjusted, thereby implementing brightness-related post-photographing processing.
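The flow through the buffer unit 122, the RAW-to-RGB unit 124, and the YUV post-processing unit 126 can be illustrated with a toy pipeline. This is a hedged sketch: the stage names follow the description above, but the pixel "processing" is placeholder string tagging and the metadata values are invented:

```python
def buffer_queue(raw_from_sensor):
    # Buffer unit 122: caches the RAW image coming from the image sensor.
    return raw_from_sensor

def bps_raw_to_rgb(raw_image):
    # RAW-to-RGB processing unit (BPS) 124.
    return raw_image.replace("RAW", "RGB")

def ipe_rgb_to_yuv(rgb_image):
    # Denoising and YUV post-processing unit (IPE) 126.
    return rgb_image.replace("RGB", "YUV")

def hal_pipeline(raw_from_sensor):
    raw = buffer_queue(raw_from_sensor)
    rgb = bps_raw_to_rgb(raw)
    yuv = ipe_rgb_to_yuv(rgb)
    # The HAL may transmit the RAW image and/or the YUV image upward,
    # together with metadata (3A info, width/height, exposure parameters).
    metadata = {"iso": 100, "width": 4000, "height": 3000}
    return raw, yuv, metadata

raw, yuv, meta = hal_pipeline("RAW frame")
```

Note that both the original RAW frame and the derived YUV frame remain available at the end, matching the "RAW image and/or YUV image" transmission options described above.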
Because the hardware abstraction module 12 does not perform post-photographing processing on the RAW image and/or the YUV image (it only receives the RAW image, converts it into a YUV image, and transmits the RAW image and/or the YUV image), the image processing algorithms for post-photographing processing do not require process truncation in the algorithm framework of the hardware abstraction module 12 itself and only need to be externally compatible, which reduces the design difficulty.
In the related art, an application program interface (API) establishes the hardware abstraction module as a pipeline. Because establishing a pipeline consumes considerable time and memory, all pipelines for the camera's working modes must be established when the camera starts, and implementing a variety of image processing algorithms generally requires establishing many pipelines (for example, more than three), so starting the camera consumes a lot of time and occupies a lot of memory. The hardware abstraction module 12 of the embodiments of the present application does not perform post-photographing processing on the RAW image and/or the YUV image, so it only needs to establish a small number of pipelines (for example, one or two) rather than a large number, which saves memory and speeds up camera startup.
The application module 14 is used to interface with the hardware abstraction module 12. The application module 14 may generate control commands according to user input and send them to the image sensor 20 through the hardware abstraction module 12 to control the image sensor 20 accordingly. The application module 14 can run as a 64-bit process, and the static link library (lib) of the post-photographing image processing algorithms can be built as 64-bit to improve execution speed. After receiving the RAW image and/or the YUV image transmitted by the hardware abstraction module 12, the application module 14 may perform post-photographing processing on it, or may transmit it to the algorithm post-processing module 16 for post-photographing processing. It is also possible for the application module 14 to perform some post-photographing processing (e.g., beauty processing, filter processing, rotation processing, watermarking processing, blurring processing) while the algorithm post-processing module 16 performs the rest (e.g., HDR processing, multi-frame processing). In the embodiments of the present application, the application module 14 transmits the RAW and/or YUV images to the algorithm post-processing module 16 for post-photographing processing. The application module 14 may also synchronize the image data collected by the main camera and the auxiliary camera, that is, temporally match the two streams; likewise, it may synchronize the image data and the metadata, temporally matching them.
The algorithm post-processing module 16 is connected to the hardware abstraction module 12 through the application module 14. At least one image processing algorithm (for example, at least one of a beauty processing algorithm, a filter processing algorithm, a rotation processing algorithm, a watermark processing algorithm, a blurring processing algorithm, an HDR processing algorithm, and a multi-frame processing algorithm) is stored in the algorithm post-processing module 16, and the algorithm post-processing module 16 is configured to process the RAW image and/or the YUV image using these algorithms to implement post-photographing processing. Because the post-photographing processing of the RAW image and/or the YUV image is realized by the algorithm post-processing module 16, no process truncation is required in the algorithm architecture of the hardware abstraction module 12; only external compatibility is needed, which reduces the design difficulty. And because the post-photographing processing is implemented by the algorithm post-processing module 16, its function is simpler and more focused, enabling fast porting and easy extension with new image processing algorithms.
Of course, if the application module 14 performs some post-photographing processing (e.g., beauty processing, filter processing, rotation processing, watermark processing, blurring processing, etc.), and the algorithm post-processing module 16 performs other post-photographing processing (e.g., HDR processing, multi-frame processing, etc.), at least one image processing algorithm (e.g., including at least one of beauty processing algorithm, filter processing algorithm, rotation processing algorithm, watermark processing algorithm, blurring processing algorithm, HDR processing algorithm, and multi-frame processing algorithm) may also be stored in the application module 14, and the application module 14 is further configured to process the RAW image and/or the YUV image using the image processing algorithm to implement the post-photographing processing. Since the post-photographing processing of the RAW image and/or the YUV image is realized by the application module 14 and the algorithm post-processing module 16, the process truncation is not required on the algorithm architecture of the hardware abstraction module 12, and only the external compatibility is required, so that the design difficulty is also greatly reduced.
When the algorithm post-processing module 16 processes only the RAW image (for example, the image processing algorithm operates on RAW images), the hardware abstraction module 12 may transmit only the RAW image (in this case it may not need to convert the RAW image into a YUV image); when the algorithm post-processing module 16 processes only YUV images (for example, the image processing algorithm operates on YUV images), the hardware abstraction module 12 may transmit only YUV images; and when the algorithm post-processing module 16 processes both the RAW image and the YUV image, the hardware abstraction module 12 may transmit both.
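This routing rule can be stated compactly. The sketch below is illustrative: the flag names and the string-tagged "conversion" are hypothetical stand-ins for the real RAW/YUV handling:

```python
def frames_to_transmit(needs_raw, needs_yuv, raw_image):
    """Decide which frames the HAL transmits, according to what the
    algorithm post-processing module consumes (both flags may be True)."""
    out = {}
    if needs_raw:
        out["raw"] = raw_image
    if needs_yuv:
        # The RAW -> YUV conversion is only performed when a YUV frame
        # is actually needed downstream.
        out["yuv"] = "YUV:" + raw_image
    return out

assert frames_to_transmit(True, False, "frame") == {"raw": "frame"}
assert frames_to_transmit(False, True, "frame") == {"yuv": "YUV:frame"}
assert frames_to_transmit(True, True, "frame") == {"raw": "frame", "yuv": "YUV:frame"}
```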
After the image sensor 20 performs one shot (exposure imaging), it transmits the image data (RAW image) to the hardware abstraction module 12. Once the algorithm post-processing module 16 has received the RAW image and/or YUV image corresponding to that image data, the image sensor 20 can perform the next shot or be turned off, and the application module 14 can exit the application interface. Because the post-photographing processing is implemented by the algorithm post-processing module 16, after the RAW image and/or YUV image has been transmitted to it, the algorithm post-processing module 16 alone completes the post-photographing processing; the image sensor 20 and the application module 14 need not participate, so the image sensor 20 can be turned off or take the next shot, and the application module 14 can be closed or exit the application interface. In this way, the photographing apparatus 100 supports snapshot, and while the algorithm post-processing module 16 performs the post-photographing processing, the user can carry out other operations on the electronic device (for example, operations unrelated to the photographing apparatus 100, such as browsing a web page, watching a video, or making a call) without spending a lot of time waiting for the post-photographing processing to finish.
The algorithm post-processing module 16 may include an encoding unit 162 configured to convert the YUV image into a JPG image (or a JPEG image, etc.). Specifically, when the algorithm post-processing module 16 processes a YUV image, the encoding unit 162 may directly encode the YUV image into a JPG image, increasing the output speed. When the algorithm post-processing module 16 processes a RAW image, it may transmit the processed RAW image back to the hardware abstraction module 12 through the application module 14, for example back to the RAW-to-RGB processing unit 124: the RAW-to-RGB processing unit 124 converts the returned RAW image into an RGB image, the denoising and YUV post-processing unit 126 converts the RGB image into a YUV image, and the YUV image is transmitted to the encoding unit 162 to be converted into a JPG image. In some embodiments, the algorithm post-processing module 16 may instead transmit the processed RAW image back to the buffer unit 122 through the application module 14; the returned RAW image then passes through the RAW-to-RGB processing unit 124 and the denoising and YUV post-processing unit 126 to form a YUV image, which is transmitted to the encoding unit 162 to form the JPG image. After the JPG image is formed, the algorithm post-processing module 16 may transfer it to memory for storage.
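The two encoding paths described above (direct YUV encode versus a RAW round trip through the HAL) can be sketched as follows. The function names are hypothetical and the real encoding is replaced by string tagging:

```python
def encoding_unit_162(yuv_image):
    # Encoding unit 162: converts a YUV image into a JPG image.
    return yuv_image.replace("YUV", "JPG")

def post_process_and_encode(image, hal_raw_to_yuv):
    if image.startswith("YUV"):
        # YUV path: encode directly, which increases output speed.
        return encoding_unit_162(image)
    # RAW path: send the processed RAW back through the HAL
    # (RAW-to-RGB unit 124, then YUV unit 126) to obtain a YUV image.
    yuv = hal_raw_to_yuv(image)
    return encoding_unit_162(yuv)

# Stand-in for the HAL round trip (BPS + IPE collapsed into one step).
hal_raw_to_yuv = lambda raw: raw.replace("RAW", "YUV")

assert post_process_and_encode("YUV img", hal_raw_to_yuv) == "JPG img"
assert post_process_and_encode("RAW img", hal_raw_to_yuv) == "JPG img"
```

Both paths converge on the encoding unit; the difference is only whether a HAL round trip is needed first.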
Referring to fig. 3, the algorithm post-processing module 16 includes a logic processing calling layer 164, an algorithm module interface layer 166 and an algorithm processing layer 168. The logic processing call layer 164 is used to communicate with the application module 14. The algorithm module interface layer 166 is used to maintain the algorithm interface. The algorithm processing layer 168 includes at least one image processing algorithm. The algorithm module interface layer 166 is used for performing at least one of registration, logout, call and callback on the image processing algorithm of the algorithm processing layer 168 through the algorithm interface.
The logic processing calling layer 164 may include a thread queue. After receiving a post-photographing processing task for a RAW image and/or a YUV image, the algorithm post-processing module 16 may cache the task in the thread queue for processing; the thread queue may cache a plurality of post-photographing processing tasks, so snapshot (i.e., a snapshot mechanism) can be implemented by the logic processing calling layer 164. The logic processing calling layer 164 may receive instructions such as initialization (init) or process (process) from the application module 14 and store the corresponding instructions and data in the thread queue. It then makes calls of specific logic (i.e., a specific logic call combination) according to the tasks in the queue. The logic processing calling layer 164 may also pass the thumbnail obtained by the processing back to the application module 14 for display (i.e., thumbnail display).
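The snapshot mechanism amounts to queuing post-photographing tasks so that new captures are never blocked by processing still in flight. A minimal sketch, in which a `deque` stands in for the thread queue and the actual algorithm call is a placeholder:

```python
from collections import deque

class LogicProcessingCallLayer:
    """Caches post-photographing tasks; several tasks may be pending at
    once, which is what permits back-to-back snapshots."""
    def __init__(self):
        self.thread_queue = deque()

    def submit(self, instruction, data):
        # e.g. instruction = "init" or "process", as sent by the app module.
        self.thread_queue.append((instruction, data))

    def drain(self):
        # Make the specific logic calls, in submission order.
        results = []
        while self.thread_queue:
            instruction, data = self.thread_queue.popleft()
            results.append(f"{instruction}:{data}")  # placeholder for the real call
        return results

layer = LogicProcessingCallLayer()
layer.submit("process", "shot1")  # first snapshot queued
layer.submit("process", "shot2")  # second snapshot queued before the first finishes
assert layer.drain() == ["process:shot1", "process:shot2"]
```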
The algorithm module interface layer 166 is used for calling the algorithm interface; the call command may also be stored in the thread queue, and when the algorithm processing layer 168 receives a call command from the thread queue, it can parse the command's parameters to determine which image processing algorithm to call. When the algorithm module interface layer 166 registers an image processing algorithm, a new image processing algorithm is added to the algorithm processing layer 168; when it unregisters an image processing algorithm, one of the image processing algorithms in the algorithm processing layer 168 is deleted; when it calls an image processing algorithm, one of the image processing algorithms in the algorithm processing layer 168 is invoked; and when it calls back, the data and status after algorithm processing are transmitted back to the application module 14. A unified interface can be adopted to implement registration, unregistration, calling, callback, and other operations on the image processing algorithms. Each image processing algorithm in the algorithm processing layer 168 is independent, so these operations can be implemented on it conveniently.
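Registration, unregistration, calling, and callback through one unified interface can be sketched as a simple registry. The class, algorithm names, and status string below are illustrative assumptions:

```python
class AlgorithmModuleInterfaceLayer:
    """Unified interface over independent algorithms in the processing layer."""
    def __init__(self, on_callback):
        self.algorithms = {}            # stands in for the algorithm processing layer
        self.on_callback = on_callback  # returns data and status to the app module

    def register(self, name, fn):
        # Registration: add a new image processing algorithm.
        self.algorithms[name] = fn

    def unregister(self, name):
        # Unregistration: delete one of the algorithms.
        del self.algorithms[name]

    def call(self, name, image):
        # Call: invoke one algorithm, then call back with data and status.
        result = self.algorithms[name](image)
        self.on_callback(result, "ok")
        return result

received = []
iface = AlgorithmModuleInterfaceLayer(lambda data, status: received.append((data, status)))
iface.register("watermark", lambda img: img + "+wm")
assert iface.call("watermark", "img") == "img+wm"
assert received == [("img+wm", "ok")]
iface.unregister("watermark")
assert "watermark" not in iface.algorithms
```

Because each algorithm is an independent entry, adding or removing one never touches the others, which is the independence property the description relies on.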
Referring to fig. 4, the image processor 10 may further include a camera service module 18. The hardware abstraction module 12 is connected to the application module 14 through the camera service module 18. The camera service module 18 encapsulates the RAW image and/or the YUV image and transmits the encapsulated image to the application module 14, and transmits the RAW image returned by the application module 14 to the hardware abstraction module 12. In this way, by having the camera service module 18 encapsulate the image, both the efficiency and the security of image transmission can be improved. When the image processor 10 includes the camera service module 18, the path of data (images, metadata, etc.) transmission in the image processor 10 is adapted accordingly, i.e., data transmitted between the hardware abstraction module 12 and the application module 14 needs to pass through the camera service module 18. For example, when the hardware abstraction module 12 transmits the RAW image and/or the YUV image to the application module 14, it first transmits the image to the camera service module 18, and the camera service module 18 encapsulates the image and transmits the encapsulated image to the application module 14. For another example, when the hardware abstraction module 12 transmits metadata to the application module 14, it first transmits the metadata to the camera service module 18, and the camera service module 18 encapsulates the metadata and transmits the encapsulated metadata to the application module 14.
For another example, when the hardware abstraction module 12 transmits the frame number suggestion to the application module 14, it first transmits the suggestion to the camera service module 18, and the camera service module 18 encapsulates it and transmits the encapsulated frame number suggestion to the application module 14. The same applies when the hardware abstraction module 12 transmits an algorithm suggestion to the application module 14: the suggestion first passes through the camera service module 18, which encapsulates it before forwarding it. Of course, in some embodiments, the hardware abstraction module 12 may instead transmit the sensitivity information, the jitter condition of the gyroscope, the AR scene detection result, and the like to the camera service module 18, and the camera service module 18 derives the frame number suggestion and/or the algorithm suggestion from this information and then transmits them to the application module 14.
In the embodiment of the present application, the application module 14 issues a data request to the hardware abstraction module 12, where the data request is used to request image data, and the image data may be a YUV image. The application module 14 may request one or more frames of image data from the hardware abstraction module 12. Before the application module 14 issues a data request, the hardware abstraction module 12 may send a frame number suggestion to the application module 14 according to the sensitivity information, the jitter condition of the gyroscope, the AR scene detection result (the detected scene type, such as people, animals, or scenery), and the like. For example, when the jitter detected by the gyroscope is large, the frame number suggestion sent by the hardware abstraction module 12 to the application module 14 may be to use more frames, to better realize post-photographing processing; when the jitter detected by the gyroscope is small, the suggestion may be to use fewer frames, to reduce the amount of data transmitted. That is, the number of frames that the hardware abstraction module 12 suggests to the application module 14 may be positively correlated with the degree of jitter detected by the gyroscope. The application module 14 issues a data request to the hardware abstraction module 12 according to the frame number suggestion sent by the hardware abstraction module 12.
For example, when the frame number suggestion sent by the hardware abstraction module 12 is 1 frame, the application module 14 requests 1 frame of image data from the hardware abstraction module 12; when the suggestion is 4 frames, the application module 14 requests 4 frames of image data; and when the suggestion is 6 frames, the application module 14 requests 6 frames of image data.
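The positive correlation between detected jitter and the suggested frame count can be sketched as below. The thresholds and the normalized jitter input are illustrative assumptions; the patent does not specify numeric values:

```python
def suggest_frame_count(jitter):
    """Map normalized gyroscope jitter in [0, 1] to a suggested frame count.
    More jitter -> more frames, for better post-photographing processing;
    less jitter -> fewer frames, to reduce the amount of data transmitted."""
    if jitter < 0.2:
        return 1
    if jitter < 0.6:
        return 4
    return 6

# the application module would then request exactly the suggested number of frames
requested_frames = suggest_frame_count(0.7)
```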
After receiving the data request, the hardware abstraction module 12 sets the request identifier corresponding to the data request to a first execution state. The number of request identifiers is consistent with the number of frames of image data that the application module 14 requests from the hardware abstraction module 12. For example, when the application module 14 requests 1 frame of image data from the hardware abstraction module 12, the number of request identifiers is 1, and the hardware abstraction module 12 sets that 1 request identifier to the first execution state; when the application module 14 requests 4 frames of image data, the number of request identifiers is 4, and the hardware abstraction module 12 sets all 4 request identifiers to the first execution state; when the application module 14 requests 6 frames of image data, the number of request identifiers is 6, and the hardware abstraction module 12 sets all 6 request identifiers to the first execution state. The first execution state may be a set state.
After the hardware abstraction module 12 uploads image data to the application module 14, the hardware abstraction module 12 sets the corresponding request identifier to a second execution state: for each frame of image data uploaded to the application module 14, one request identifier is set to the second execution state. Taking the case where the application module 14 requests 4 frames of image data from the hardware abstraction module 12 as an example: after the hardware abstraction module 12 uploads the 1st frame of image data to the application module 14, it sets the 1st request identifier to the second execution state; after it uploads the 2nd frame, it sets the 2nd request identifier to the second execution state; after it uploads the 3rd frame, it sets the 3rd request identifier to the second execution state; and after it uploads the 4th frame, it sets the 4th request identifier to the second execution state. After the hardware abstraction module 12 finishes uploading all the image data, all the request identifiers have been set to the second execution state. The second execution state may be a reset state.
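The bookkeeping of the two passages above can be condensed into a sketch. The names and representation are assumptions; the patent only requires one identifier per requested frame, a set state, and a reset state:

```python
SET_STATE = "first_execution_state"     # set when the data request is received
RESET_STATE = "second_execution_state"  # reset after the frame is uploaded

class RequestTracker:
    """Hypothetical view of the hardware abstraction module's request identifiers."""

    def __init__(self):
        self.identifiers = []

    def on_data_request(self, frame_count):
        # one request identifier per requested frame, all set to the first state
        self.identifiers = [SET_STATE] * frame_count

    def on_frame_uploaded(self, index):
        # each uploaded frame resets exactly one identifier
        self.identifiers[index] = RESET_STATE

    def all_uploaded(self):
        return all(s == RESET_STATE for s in self.identifiers)

tracker = RequestTracker()
tracker.on_data_request(4)
for i in range(3):
    tracker.on_frame_uploaded(i)
pending_after_three = tracker.all_uploaded()  # one frame still outstanding
tracker.on_frame_uploaded(3)
done_after_four = tracker.all_uploaded()
```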
When the application module 14 receives an exit command input by the user, if the request identifiers are in the second execution state, the application module 14 exits from running. It should be noted that "the request identifiers are in the second execution state" here means that all the request identifiers are in the second execution state: when the number of request identifiers is 1, that 1 request identifier is in the second execution state; when the number is 4, all 4 request identifiers are in the second execution state; when the number is 6, all 6 request identifiers are in the second execution state. All request identifiers being in the second execution state indicates that the hardware abstraction module 12 has completely uploaded the image data to the application module 14, so image data loss and abnormal photographing cannot occur. When the user clicks a thumbnail to enter the album and view the corresponding full-size image, the corresponding image data can also be seen. Therefore, the application module 14 can exit according to the exit command input by the user, meeting the user's requirement while ensuring the integrity of the image data.
Referring to fig. 5, in some embodiments, the image processing method further includes:
05: when the application module 14 receives the exit command, if the request identifier is in the first execution state, the application module 14 waits until the request identifier changes from the first execution state to the second execution state, and then exits from running.
Referring to FIG. 2, in some embodiments, the application module 14 may be configured to perform the method of 05.
That is, when the application module 14 receives the exit command, if the request identifier is in the first execution state, the application module 14 waits until the request identifier changes from the first execution state to the second execution state, and then exits from running.
Specifically, "the request identifier is in the first execution state" here refers to at least one request identifier being in the first execution state. Taking the case where the application module 14 requests 4 frames of image data from the hardware abstraction module 12 as an example: after the hardware abstraction module 12 has uploaded 3 frames of image data to the application module 14, it has set 3 request identifiers to the second execution state, and the remaining 1 request identifier is still in the first execution state; this is the situation of "the request identifier is in the first execution state" in the embodiment of the present application. Although the application module 14 has received the exit command input by the user at this time, since 1 request identifier is in the first execution state, indicating that the hardware abstraction module 12 has not yet finished uploading the image data to the application module 14, the application module 14 does not exit immediately; instead, it waits until the remaining request identifier changes from the first execution state to the second execution state, i.e., until the hardware abstraction module 12 has completely uploaded the image data to the application module 14, and then exits, so as to ensure the integrity of the image data.
Referring to fig. 6, in some embodiments, the image processing method further includes:
06: when the application module 14 receives the exit command, if the request identifier is in the first execution state and is still in the first execution state a predetermined time after the hardware abstraction module 12 set it to the first execution state, the application module 14 exits from running.
Referring to FIG. 2, in some embodiments, the application module 14 may be configured to perform the method of 06.
That is, when the application module 14 receives the exit command, if the request identifier is in the first execution state and is still in the first execution state a predetermined time after the hardware abstraction module 12 set it to the first execution state, the application module 14 exits from running.
Specifically, "the request identifier is in the first execution state" here refers to at least one request identifier being in the first execution state. Still taking the example in which the application module 14 requests 4 frames of image data from the hardware abstraction module 12: when the application module 14 receives the exit command, if the request identifiers are in the first execution state, for example all 4 request identifiers are in the first execution state, this in theory indicates that the hardware abstraction module 12 has not finished uploading the image data to the application module 14. However, if the request identifiers are still in the first execution state after the predetermined time, it indicates that the hardware abstraction module 12 may have a fault. The user cannot be left waiting indefinitely, unable to quit the application; therefore, the application module 14 may forcibly exit from running to meet the user's requirement, and delete the image data generated in the photographing database.
The predetermined time is counted from the moment the hardware abstraction module 12 sets the request identifier to the first execution state; for example, the predetermined time may be 2 seconds from that moment. When the application module 14 receives the exit command, if the request identifier is in the first execution state, and the request identifier is still in the first execution state 2 seconds after the hardware abstraction module 12 set it, the application module 14 exits from running. It can be understood that the hardware abstraction module 12 needs only about 30 ms to upload 1 frame of image data, so a wait of 2 s is ample for the hardware abstraction module 12 to finish uploading many frames when no fault occurs; if the request identifier is still in the first execution state after 2 s, it can essentially be determined that the hardware abstraction module 12 has a fault, and the application module 14 can exit directly so that the user does not keep waiting. If the request identifier changes from the first execution state to the second execution state within the predetermined time of 2 seconds, for example after waiting 1 second, the application module 14 may exit from running at the moment the request identifier changes state (i.e., 1 second after the hardware abstraction module 12 set the request identifier to the first execution state).
The image processing method according to the embodiment of the present application may be further understood as: when the application program module 14 receives the exit command, if the request identifier is in the first execution state, the hardware abstraction module 12 determines whether a time difference between the current time point and a time point at which the request identifier is set to the first execution state by the hardware abstraction module 12 reaches a predetermined time length, and if so, the application program module 14 exits from operation; if not, the hardware abstraction module 12 continuously detects whether the request identifier is in the first execution state until the request identifier changes from the first execution state to the second execution state or the time difference reaches a predetermined time. If the request identifier changes from the first execution state to the second execution state during the process that the hardware abstraction module 12 detects whether the request identifier is in the first execution state, the application module 14 exits from running when the request identifier changes from the first execution state to the second execution state; if the request flag is not changed from the first execution state to the second execution state but remains in the first execution state during the process of detecting whether the request flag is in the first execution state by the hardware abstraction module 12, the application module 14 exits from running when the time difference reaches a predetermined time.
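The exit logic of the last few paragraphs, covering both the normal wait and the timeout-based forced exit, could be sketched as follows. The 2-second figure comes from the example above; the polling approach and function names are assumptions:

```python
import time

def exit_when_possible(all_uploaded, set_time, predetermined=2.0, poll=0.01):
    """Return "clean" if every request identifier reaches the second execution
    state before `predetermined` seconds have passed since `set_time` (the
    moment the identifiers were set to the first execution state); otherwise
    return "forced" (the hardware abstraction module is presumed faulty)."""
    deadline = set_time + predetermined
    while time.monotonic() < deadline:
        if all_uploaded():
            return "clean"   # all frames uploaded: exit normally
        time.sleep(poll)
    return "forced"          # timeout: force exit and delete the generated image data

# simulate a faulty hardware abstraction module whose identifiers never reset,
# with a shortened predetermined time so the example runs quickly
outcome = exit_when_possible(lambda: False, time.monotonic(), predetermined=0.05)
```

Polling is only one way to realize the "continuously detects whether the request identifier is in the first execution state" behavior; an event or condition variable would serve equally well.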
Referring to fig. 7, in some embodiments, a data request is used to request multiple frames of image data, where the data request corresponds to multiple request identifiers. After receiving the data request, the hardware abstraction module 12 sets a request identifier corresponding to the data request to a first execution state (i.e. 02), including:
021: after receiving the data request, the hardware abstraction module 12 sets a plurality of request identifiers corresponding to the data request to a first execution state;
after the hardware abstraction module 12 uploads the image data to the application module 14, the hardware abstraction module 12 sets the request identifier to the second execution state (i.e. 03), including:
031: after the hardware abstraction module 12 uploads a frame of image data to the application module 14, the hardware abstraction module 12 correspondingly sets one request identifier to the second execution state;
when the application module 14 receives the exit command, if the request identifier is in the second execution state, the application module 14 exits from running (i.e. 04), including:
041: when the application module 14 receives the exit command, if all of the request identifiers are in the second execution state, the application module 14 exits from running.
Referring to fig. 2, in some embodiments, a data request is used to request multiple frames of image data, and the data request corresponds to multiple request identifiers. The hardware abstraction module 12 may be used to execute the methods in 021 and 031 and the application module 14 may be used to execute the methods in 041.
That is, the hardware abstraction module 12 may be configured to set each of the plurality of request identifiers corresponding to the data request to the first execution state after receiving the data request, and to set one request identifier to the second execution state after uploading each frame of image data to the application module 14. When the application module 14 receives the exit command, if all of the request identifiers are in the second execution state, the application module 14 exits from running.
It should be noted that, in the foregoing embodiment, the expanded description of the application program module 14 requesting the hardware abstraction module 12 for multiple frames of image data also applies to the image processing method according to the embodiment of the present application, and is not repeated herein.
In addition, after the hardware abstraction module 12 sets all the request identifiers to the second execution state, a notification command may be sent to the application module 14 to notify the application module 14, and when the application module 14 receives the exit command, if the application module 14 has received the notification command for indicating that all the request identifiers are in the second execution state, the application module 14 exits from running.
Referring to fig. 8, in some embodiments, a data request is used to request multiple frames of image data. The image processing method further includes:
07: after receiving the data request, the hardware abstraction module 12 sequentially sends the multi-frame image data to the application program module 14; and
08: after receiving the multiple frames of image data, the application module 14 packages them and sends the package to the algorithm post-processing module 16.
Referring to fig. 2, in some embodiments, the image processor 10 further includes an algorithm post-processing module 16. The data request is used for requesting multiple frames of image data. The hardware abstraction module 12 may be used to execute the method in 07, and the application module 14 may be used to execute the method in 08.
That is, the hardware abstraction module 12 may be configured to send the frames of image data to the application module 14 in sequence after receiving a data request, and the application module 14 may be configured to package the multiple frames of image data after receiving them and send the package to the algorithm post-processing module 16.
Specifically, in the embodiment of the present application, the application module 14 requests multiple frames of image data from the hardware abstraction module 12. After receiving the data request, the hardware abstraction module 12 sends the frames of image data to the application module 14 in sequence. After receiving the multiple frames of image data, the application module 14 packages them and sends the package to the algorithm post-processing module 16.
Referring to fig. 3, the embodiment of the present application is described taking as an example the case where the hardware abstraction module 12 is connected to the application module 14 through the camera service module 18; in this case, both the data request and the image data transmitted between the hardware abstraction module 12 and the application module 14 pass through the camera service module 18.
For example, the application module 14 issues a data request to the camera service module 18, where the data request is used to request 4 frames of image data; after receiving the data request, the camera service module 18 sends it to the hardware abstraction module 12. After the hardware abstraction module 12 receives the data request, it first sends the 1st frame of image data to the camera service module 18, and the camera service module 18 sends the 1st frame to the application module 14; the hardware abstraction module 12 then sends the 2nd frame to the camera service module 18, which forwards it to the application module 14; then the 3rd frame is sent and forwarded in the same way; and finally the 4th frame. In this process, after the hardware abstraction module 12 has sent one frame of image data to the camera service module 18, it can begin sending the next frame immediately, without waiting for the camera service module 18 to forward the previous frame to the application module 14. In this way, the total time required for sending the multi-frame image data to the application module 14 through the camera service module 18 is reduced, the user is spared a long wait, and the photographing experience is improved.
After receiving the 1st, 2nd, 3rd and 4th frames of image data, the application module 14 packages them and sends the package to the algorithm post-processing module 16. It can be understood that when the image data requested by the application module 14 from the hardware abstraction module 12 comprises multiple frames, the algorithm post-processing module 16 generally needs to perform post-photographing processing on all of those frames; if the application module 14 sent the frames to the algorithm post-processing module 16 one by one, the algorithm post-processing module 16 could not complete post-photographing processing while it had received only the 1st frame, only the 1st and 2nd frames, or only the 1st, 2nd and 3rd frames. In the embodiment of the present application, the application module 14 packages the multi-frame image data and sends the package to the algorithm post-processing module 16, which uses transmission resources efficiently; after the package arrives, the algorithm post-processing module 16 can perform post-photographing processing on the multi-frame image data and send the processed result data back to the application module 14.
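The buffer-then-package behavior can be sketched minimally. The callback shape is an assumption; the essential point from the passage is that partial frame sets are never forwarded to the algorithm post-processing module:

```python
def make_frame_collector(frames_expected):
    """Return a per-frame callback that buffers incoming frames and yields
    the full package only once all expected frames have arrived."""
    buffer = []

    def on_frame(frame):
        buffer.append(frame)
        if len(buffer) == frames_expected:
            return tuple(buffer)  # complete package, ready for post-processing
        return None               # partial set: keep waiting, send nothing

    return on_frame

on_frame = make_frame_collector(4)
partial = [on_frame(f) for f in ("frame_1", "frame_2", "frame_3")]
package = on_frame("frame_4")
```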
In addition, the hardware abstraction module 12 may also send an algorithm suggestion to the application module 14 according to the sensitivity information, the jitter condition of the gyroscope, the AR scene detection result, and the like. For example, when the jitter detected by the gyroscope is large, the algorithm suggestion sent by the hardware abstraction module 12 to the application module 14 may be multi-frame processing, to eliminate the jitter; when the scene type detected by the AR scene detection is a person, the algorithm suggestion may be beauty processing, to beautify the person; when the detected scene type is scenery, the algorithm suggestion may be HDR processing, to form a high dynamic range scenery image. After receiving the algorithm suggestion sent by the hardware abstraction module 12, the application module 14 sends it to the algorithm post-processing module 16, so that the algorithm post-processing module 16 performs post-photographing processing on the image data according to the algorithm suggestion. The application module 14 can close or exit the application interface while the algorithm post-processing module 16 is performing post-photographing processing on the image data. In the image processing method according to the embodiment of the present application, data distribution is achieved through the algorithm post-processing module 16, which reduces the load of the hardware abstraction module 12 so that the system runs smoothly without stalling; the post-photographing processing of the algorithm post-processing module 16 can run in the background, the user does not need to wait for a long time, and the photographing experience is greatly improved.
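The mapping from shooting conditions to algorithm suggestions can be sketched as a simple rule table. The condition encoding is an assumption for illustration:

```python
def suggest_algorithms(large_jitter, scene_type):
    """Map gyroscope jitter and the AR-detected scene type to algorithm
    suggestions, following the examples in the text above."""
    suggestions = []
    if large_jitter:
        suggestions.append("multi_frame")  # multi-frame processing eliminates jitter
    if scene_type == "person":
        suggestions.append("beauty")       # beautify detected people
    elif scene_type == "scenery":
        suggestions.append("hdr")          # high-dynamic-range scenery image
    return suggestions

combo = suggest_algorithms(True, "scenery")
```

Note that the suggestions are independent, so several can apply to one shot, as with a shaky landscape photo here.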
Referring to fig. 9 and 10, an electronic device 1000 is further provided in the present embodiment. The electronic device 1000 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (a smart watch, a smart band, a smart helmet, smart glasses, etc.), a virtual reality device, and the like. The electronic device 1000 includes the photographing device 100 and the housing 200 according to any of the above embodiments, and the photographing device 100 is combined with the housing 200. The housing 200 may serve as a mounting carrier for the functional elements of the electronic device 1000 and may provide protection against dust, falling, water, and the like for functional elements such as a display screen, the photographing device 100, and a receiver. In one embodiment, the housing 200 includes a main body 210 and a movable bracket 220; the movable bracket 220 can move relative to the main body 210 under the driving of a driving device, for example sliding into the main body 210 (as in the state of fig. 9) or sliding out of the main body 210 (as in the state of fig. 10). Some functional elements may be mounted on the main body 210, while others (e.g., the photographing device 100) may be mounted on the movable bracket 220, so that the movement of the movable bracket 220 retracts those elements into the main body 210 or extends them out of the main body 210. In another embodiment, the housing 200 has a collection window, and the photographing device 100 is installed in alignment with the collection window so that it can receive external light through the window to form an image; alternatively, the photographing device 100 is disposed under a display screen and receives external light passing through the display screen to form an image.
In the description herein, references to the description of the terms "certain embodiments," "one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. An image processing method applied to an image processor, wherein the image processor comprises an application program module and a hardware abstraction module, and the image processing method comprises the following steps:
the hardware abstraction module sends a frame number suggestion to the application program module according to the sensitivity information, the shaking condition of the gyroscope or the AR scene detection result;
the application program module issues a data request to the hardware abstraction module according to the frame number suggestion, wherein the data request is used for requesting image data;
after receiving the data request, the hardware abstraction module sets a request identifier corresponding to the data request to be in a first execution state, wherein the number of the request identifier is consistent with the number of frames of image data requested by the application program module to the hardware abstraction module;
after the hardware abstraction module uploads each frame of the image data to the application program module, the hardware abstraction module sets the request identifier corresponding to the frame to be in a second execution state, wherein the second execution state is different from the first execution state; and
when the application program module receives an exit command input by a user, if all of the request identifiers are in the second execution state, the application program module exits.
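The request-identifier bookkeeping of claim 1 can be illustrated with a minimal sketch. All class, method, and state names here are illustrative assumptions, not terminology from the patent itself:

```python
from enum import Enum

class State(Enum):
    FIRST = "set"     # request issued, frame not yet uploaded
    SECOND = "reset"  # frame uploaded to the application module

class HardwareAbstraction:
    def __init__(self):
        self.flags = []

    def receive_request(self, num_frames):
        # one request identifier per requested frame, all placed
        # in the first execution state on receipt of the request
        self.flags = [State.FIRST] * num_frames

    def upload_frame(self, index):
        # after uploading frame `index`, its identifier moves
        # to the second execution state
        self.flags[index] = State.SECOND

class Application:
    def __init__(self, hal):
        self.hal = hal

    def can_exit(self):
        # exit is allowed only when every identifier is in the second state
        return all(f is State.SECOND for f in self.hal.flags)

hal = HardwareAbstraction()
app = Application(hal)
hal.receive_request(3)
assert not app.can_exit()   # frames still pending
for i in range(3):
    hal.upload_frame(i)
assert app.can_exit()       # all identifiers reset, safe to exit
```

The point of the mechanism is that the application never tears down the pipeline while the hardware abstraction layer still owes it frames.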
2. The image processing method according to claim 1, characterized in that the image processing method further comprises:
when the application program module receives the exit command, if a request identifier is in the first execution state, the application program module waits until the request identifier changes from the first execution state to the second execution state, and then exits.
3. The image processing method according to claim 1, characterized in that the image processing method further comprises:
when the application program module receives the exit command, if a request identifier is in the first execution state and is still in the first execution state after a preset time has elapsed since the hardware abstraction module set it to the first execution state, the application program module exits.
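Claims 2 and 3 together describe a bounded wait on exit: block until every request identifier reaches the second state, but give up after a preset time. A sketch of that policy, assuming a simple polling loop (the function name, return values, and polling interval are illustrative):

```python
import time

def exit_when_safe(all_reset, preset_time_s, poll_s=0.01):
    """Return 'clean_exit' once all_reset() reports every request
    identifier in the second (reset) state, per claim 2; return
    'timed_out_exit' if the preset time elapses first, per claim 3."""
    deadline = time.monotonic() + preset_time_s
    while time.monotonic() < deadline:
        if all_reset():
            return "clean_exit"
        time.sleep(poll_s)
    return "timed_out_exit"

# identifiers already reset: the application exits immediately
assert exit_when_safe(lambda: True, preset_time_s=0.5) == "clean_exit"
# identifiers stuck in the first state: exit is forced after the preset time
assert exit_when_safe(lambda: False, preset_time_s=0.05) == "timed_out_exit"
```

The timeout in claim 3 prevents a lost or stalled frame from blocking application shutdown indefinitely.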
4. The image processing method according to claim 1, wherein the data request is for requesting multiple frames of the image data, the data request corresponds to multiple request identifiers, and the hardware abstraction module sets the request identifier corresponding to the data request to a first execution state after receiving the data request, and the method comprises:
after receiving the data request, the hardware abstraction module sets all of the request identifiers corresponding to the data request to the first execution state.
5. The image processing method according to claim 1, wherein the first execution state is a set state and the second execution state is a reset state.
6. The image processing method according to claim 1, wherein the data request is for requesting a plurality of frames of the image data, the image processing method further comprising:
after receiving the data request, the hardware abstraction module sequentially sends the multiple frames of image data to the application program module; and
after receiving the plurality of frames of image data, the application program module packs the plurality of frames of image data and sends them to an algorithm post-processing module.
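The pack-and-forward step of claim 6 might look like the following sketch; the payload layout and the post-processing callback are assumptions for illustration, not details from the patent:

```python
def pack_and_forward(frames, post_process):
    # bundle the sequentially received frames into one payload and hand
    # it to the algorithm post-processing module in a single call
    payload = {"frame_count": len(frames), "frames": list(frames)}
    return post_process(payload)

# e.g. a multi-frame noise-reduction stage would receive all frames at once
received = [b"frame0", b"frame1", b"frame2"]
result = pack_and_forward(received, lambda p: p["frame_count"])
assert result == 3
```

Delivering the frames as one packed payload lets a multi-frame algorithm (such as HDR fusion or noise reduction) operate on the whole burst instead of frame by frame.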
7. An image processor comprising an application module and a hardware abstraction module;
the hardware abstraction module sends a frame number suggestion to the application program module according to sensitivity information, a gyroscope shake status, or an AR scene detection result;
the application program module is used for issuing a data request to the hardware abstraction module according to the frame number suggestion, and the data request is used for requesting image data;
the hardware abstraction module is further configured to set a request identifier corresponding to the data request to be in a first execution state after receiving the data request, wherein the number of request identifiers is equal to the number of frames of image data requested by the application program module from the hardware abstraction module;
after the hardware abstraction module uploads each frame of the image data to the application program module, the hardware abstraction module sets the request identifier corresponding to the frame to be in a second execution state, wherein the second execution state is different from the first execution state;
and when the application program module receives an exit command input by a user, if all of the request identifiers are in the second execution state, the application program module exits.
8. The image processor of claim 7, wherein when the application program module receives the exit command, if the request identifier is in the first execution state, the application program module waits until the request identifier changes from the first execution state to the second execution state before exiting.
9. The image processor of claim 7, wherein when the application program module receives the exit command, if the request identifier is in the first execution state and remains in the first execution state for a preset time after the hardware abstraction module sets it to the first execution state, the application program module exits.
10. The image processor of claim 7, wherein the data request is for requesting a plurality of frames of the image data, the data request corresponding to a plurality of the request identifications;
the hardware abstraction module is configured to set, after receiving the data request, all the request identifiers corresponding to the data request to the first execution state.
11. The image processor of claim 7, wherein the first execution state is a set state and the second execution state is a reset state.
12. The image processor of claim 7, further comprising an algorithm post-processing module, wherein the data request is for requesting a plurality of frames of the image data;
the hardware abstraction module is used for sequentially sending the multiple frames of image data to the application program module after receiving the data request;
and the application program module is configured to, after receiving the plurality of frames of image data, pack the plurality of frames of image data and send them to the algorithm post-processing module.
13. A photographing apparatus, characterized by comprising:
the image processor of any one of claims 7 to 12; and
an image sensor connected with the image processor.
14. An electronic device, characterized in that the electronic device comprises:
the photographing apparatus of claim 13; and
a housing, the photographing apparatus being combined with the housing.
CN201910822518.6A 2019-09-02 2019-09-02 Image processing method, image processor, photographing device and electronic equipment Active CN110602359B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910822518.6A CN110602359B (en) 2019-09-02 2019-09-02 Image processing method, image processor, photographing device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910822518.6A CN110602359B (en) 2019-09-02 2019-09-02 Image processing method, image processor, photographing device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110602359A CN110602359A (en) 2019-12-20
CN110602359B true CN110602359B (en) 2022-01-18

Family

ID=68856851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910822518.6A Active CN110602359B (en) 2019-09-02 2019-09-02 Image processing method, image processor, photographing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110602359B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110062161B (en) * 2019-04-10 2021-06-25 Oppo广东移动通信有限公司 Image processor, image processing method, photographing device, and electronic apparatus

Citations (1)

Publication number Priority date Publication date Assignee Title
CN106254901A (en) * 2016-08-01 2016-12-21 天脉聚源(北京)教育科技有限公司 Live video upload method and device

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN100550848C (en) * 2006-12-31 2009-10-14 中国建设银行股份有限公司 Method and system for transferring a large amount of data
US9603094B2 (en) * 2013-06-09 2017-03-21 Apple Inc. Non-waking push notifications
CN103365399B (en) * 2013-06-26 2017-02-08 贝壳网际(北京)安全技术有限公司 Control method and device for application object of mobile terminal
CN107329559B (en) * 2017-06-30 2021-04-16 宇龙计算机通信科技(深圳)有限公司 Application program control method, device, terminal and storage medium
CN109144728B (en) * 2018-08-22 2020-11-06 Oppo广东移动通信有限公司 Occupancy control method and device for camera application
CN109963083B (en) * 2019-04-10 2021-09-24 Oppo广东移动通信有限公司 Image processor, image processing method, photographing device, and electronic apparatus
CN110062161B (en) * 2019-04-10 2021-06-25 Oppo广东移动通信有限公司 Image processor, image processing method, photographing device, and electronic apparatus
CN110121022A (en) * 2019-06-28 2019-08-13 Oppo广东移动通信有限公司 Control method of photographing device, photographing device, and electronic equipment

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN106254901A (en) * 2016-08-01 2016-12-21 天脉聚源(北京)教育科技有限公司 Live video upload method and device

Also Published As

Publication number Publication date
CN110602359A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110086967B (en) Image processing method, image processor, photographing device and electronic equipment
CN109963083B (en) Image processor, image processing method, photographing device, and electronic apparatus
CN110290288B (en) Image processor, image processing method, photographing device, and electronic apparatus
CN110062161B (en) Image processor, image processing method, photographing device, and electronic apparatus
CN109922322B (en) Photographing method, image processor, photographing device and electronic equipment
CN110996012B (en) Continuous shooting processing method, image processor, shooting device and electronic equipment
CN110300240B (en) Image processor, image processing method, photographing device and electronic equipment
CN110753187B (en) Camera control method and device
CN110177214B (en) Image processor, image processing method, photographing device and electronic equipment
CN112399087B (en) Image processing method, image processing apparatus, image capturing apparatus, electronic device, and storage medium
WO2020259250A1 (en) Image processing method, image processor, photographing apparatus, and electronic device
CN111212235A (en) Long-focus shooting method and electronic equipment
CN111147695B (en) Image processing method, image processor, shooting device and electronic equipment
CN111193866B (en) Image processing method, image processor, photographing device and electronic equipment
CN110121022A (en) Control method of photographing device, photographing device, and electronic equipment
CN116074634B (en) Exposure parameter determination method and device
CN110418061B (en) Image processing method, image processor, photographing device and electronic equipment
CN110602359B (en) Image processing method, image processor, photographing device and electronic equipment
CN111510629A (en) Data display method, image processor, photographing device and electronic equipment
CN110401800B (en) Image processing method, image processor, photographing device and electronic equipment
CN111193867B (en) Image processing method, image processor, photographing device and electronic equipment
WO2023160230A9 (en) Photographing method and related device
JP2013211724A (en) Imaging apparatus
CN111491101B (en) Image processor, image processing method, photographing device, and electronic apparatus
JP5660306B2 (en) Imaging apparatus, program, and imaging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant