CN115942113A - Image shooting method, application processing chip and electronic equipment - Google Patents

Image shooting method, application processing chip and electronic equipment

Info

Publication number
CN115942113A
Authority
CN
China
Prior art keywords
shooting mode
shooting
image
mode
image analysis
Prior art date
Legal status
Pending
Application number
CN202111166738.1A
Other languages
Chinese (zh)
Inventor
朱文波
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111166738.1A
Publication of CN115942113A

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses an image shooting method, an application processing chip, and an electronic device, relating to the technical field of image shooting. The method comprises: acquiring first image data based on a first shooting mode of the electronic device; performing image analysis on the first image data to obtain a first image analysis result; determining a second shooting mode based on at least the first image analysis result; configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode; and, when the first shooting mode is switched to the second shooting mode, configuring the electronic device to shoot images in the second shooting mode based on the shooting resources. Because the shooting resources required by the second shooting mode are configured in advance, the camera picture no longer blurs or freezes abruptly during the subsequent switch to the second shooting mode, which ultimately improves the user's shooting experience.

Description

Image shooting method, application processing chip and electronic equipment
Technical Field
The embodiment of the application relates to the technical field of image shooting, in particular to an image shooting method, an application processing chip and electronic equipment.
Background
Currently, with the continuous development of portable electronic devices (e.g., smart phones, tablet computers, etc.), users routinely use camera-equipped electronic devices to record their daily lives, for example by shooting food, pets, portraits, or landscapes. To obtain the best result for different content, the shooting mode needs to be switched according to the shooting scene, for example from a portrait mode to a landscape mode.
However, while switching between shooting modes, the camera image may suddenly blur or freeze, which degrades the user's shooting experience.
Disclosure of Invention
The embodiment of the application provides an image shooting method, an application processing chip and electronic equipment. The technical scheme is as follows:
according to an aspect of the embodiments of the present application, there is provided an image capturing method applied to an electronic device, the method including:
acquiring first image data based on a first shooting mode of the electronic equipment;
performing image analysis on the first image data to obtain a first image analysis result;
determining a second photographing mode based on at least the first image analysis result;
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
According to an aspect of an embodiment of the present application, there is provided an application processing chip applied to an electronic device, the application processing chip including:
a neural network processor configured to:
performing image analysis based on at least first image data acquired in a first shooting mode of the electronic device to determine a second shooting mode;
a central processor configured to:
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
According to an aspect of an embodiment of the present application, there is provided an electronic device including an image processing chip and an application processing chip, wherein,
the image processing chip is configured to:
performing image analysis based on at least first image data acquired in a first shooting mode of the electronic device to determine a second shooting mode;
the application processing chip is configured to:
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
According to an aspect of embodiments of the present application, there is provided a computer-readable storage medium storing at least one computer program for execution by a processor to implement the image capturing method described above.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the method comprises the steps of carrying out image analysis on first image data acquired in a first shooting mode of the electronic equipment to determine a second shooting mode, configuring shooting resources required by the second shooting mode different from the first shooting mode, and configuring the shooting resources required by the second shooting mode in advance, so that in the process of subsequently switching to the second shooting mode, the camera picture can not appear sudden blurring or blocking phenomenon any more, and finally the shooting experience of a user is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a shooting mode provided by an embodiment of the present application;
fig. 2 is a schematic diagram illustrating a configuration method of an initial shooting mode according to an embodiment of the present application;
FIG. 3 is a flowchart of an image capture method provided by an embodiment of the present application;
FIG. 4 is a flowchart of an image capture method provided by an embodiment of the present application;
FIG. 5 is a flowchart of an image capture method provided by an embodiment of the present application;
FIG. 6 is a flowchart of an image capture method provided by an embodiment of the present application;
FIG. 7 is a flowchart of an image capture method provided by an embodiment of the present application;
FIG. 8 is a block diagram of an application processing chip provided in one embodiment of the present application;
FIG. 9 is a block diagram of an electronic device provided by one embodiment of the present application;
FIG. 10 is a block diagram of an electronic device provided by one embodiment of the present application;
FIG. 11 is a block diagram of an electronic device provided by an embodiment of the application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Please refer to fig. 1, which illustrates a schematic diagram of a shooting mode according to an embodiment of the present application. In general, the electronic device 10 starts in a conventional photo and video mode, in which the camera is pre-adjusted in terms of aperture, shutter, focus, metering mode, flash, and so on, to ensure that even inexperienced users can take photos and videos of acceptable quality. For example, the electronic device 10 may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a multimedia player device, a wearable device, or the like. However, as demand for personalization grows, the conventional photo and video modes no longer produce photos and videos that satisfy users. Therefore, to make the device easier for novice users and to meet personalized requirements, electronic device manufacturers build in a variety of shooting modes. At present, the number of shooting modes in an electronic device ranges from as few as four or five to as many as twenty or thirty.
In addition to the conventional photo and video modes, there are a portrait mode, a night scene mode, and the many other modes shown in fig. 1. The portrait mode is a mode for photographing people. Specifically, for example, the camera may open the aperture to its maximum (i.e., the aperture blades are fully open and the amount of light passing through is maximized), producing a shallow depth of field and a strongly blurred background that makes the person stand out. The night scene mode, for example, continuously takes several pictures with different exposure parameters, registers and aligns them, analyzes each picture to keep the sharpest parts and discard the blurred parts, and finally fuses them into one high-quality picture; in this process the algorithm can also optimize the picture color, remove noise, and enhance detail.
Other shooting modes include a landscape mode, a macro mode, an anti-shake mode, a beauty mode, an extreme night mode, and so on. The landscape mode, for example, stops the camera aperture down to its minimum to increase the depth of field and sets the focus to infinity so that the resulting image is as sharp as possible; in terms of image quality it not only improves sharpness but also renders fine detail and enhances tones such as green, red, and blue, making the sky, trees, and so on more vivid. The macro mode, for example, photographs small objects such as flowers or insects only a few centimeters away from the lens. The anti-shake mode, for example, uses a gyroscope to acquire the motion attitude of the electronic device and then compensates for the motion by driving a single lens or the whole lens group with a motor. The beauty mode, for example, applies a beautification algorithm, such as skin smoothing and whitening, when photographing a person. The extreme night mode, for example, invokes related algorithms to capture a clear portrait or landscape in a night scene with an ambient illuminance of 5 lux or less. In fact, to meet increasingly personalized shooting requirements, these shooting modes can also be used in combination rather than only on their own, for example a portrait night scene mode, a portrait anti-shake mode, a landscape night scene mode, a landscape anti-shake mode, a portrait beauty mode, a landscape extreme night mode, and so on, which is not limited in this application.
Please refer to fig. 2, which illustrates a schematic diagram of a configuration method of an initial shooting mode according to an embodiment of the present application. Generally, the configuration of the initial shooting mode by the electronic device includes the following steps: step 210, receiving a camera opening operation; step 220, in response to the camera opening operation, configuring the relevant shooting resources according to a default shooting mode or the most recently used shooting mode; and step 230, acquiring image data based on the relevant shooting resources, performing image processing on the image data, and starting image shooting or preview. Image shooting may include photo shooting, video shooting, and the like.
After the configuration of the initial shooting mode is completed, a user often needs to switch between shooting modes according to different shooting requirements during actual use. As with the initial shooting mode, different shooting modes require different shooting resources to be configured, such as cache resources, algorithm model resources, and pipeline model resources. Illustratively, for the portrait mode, cache resources matching the image data volume and algorithm model resources such as image recognition need to be reallocated, and pipelines need to be reasonably allocated so that the data streams can be re-linked in series; for the landscape mode, besides cache resources and pipeline model resources, algorithm model resources such as a High Dynamic Range (HDR) algorithm and a 3A algorithm need to be configured. The 3A algorithm comprises an Automatic White Balance (AWB) algorithm, an Automatic Exposure (AE) algorithm, and an Automatic Focus (AF) algorithm. In one example, the shooting resources required by a shooting mode can be configured in a personalized way according to actual needs, and the user can configure them selectively; in another example, the shooting resources required by a shooting mode are a default configuration set by the system in advance, and configuring the shooting resources required by a shooting mode then means initializing the shooting resources required by that mode. Furthermore, the method of configuring the initial shooting mode, the shooting resources involved, and how the shooting resources are configured are not limited in this embodiment, and the same applies to each of the following embodiments. A sketch of such a per-mode resource configuration is given below.
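The Python sketch below is not part of the original disclosure; it only illustrates what a per-mode shooting-resource configuration (cache, algorithm models, pipeline layout) might look like. All mode names, resource names, and the configure_shooting_resources helper are hypothetical, and the values are illustrative.

# Hypothetical per-mode shooting-resource requirements, mirroring the portrait
# and landscape examples above. Names and numbers are illustrative only.
MODE_RESOURCES = {
    "portrait": {
        "cache_buffers": 4,                              # buffers sized to the image data volume
        "algorithm_models": ["image_recognition"],
        "pipeline": ["sensor", "recognition", "bokeh", "encoder"],
    },
    "landscape": {
        "cache_buffers": 6,
        "algorithm_models": ["hdr", "awb", "ae", "af"],  # HDR plus the 3A algorithms
        "pipeline": ["sensor", "hdr_merge", "tone_map", "encoder"],
    },
}

def configure_shooting_resources(mode: str) -> dict:
    """Pre-allocate the resources a target mode needs, before any switch happens."""
    spec = MODE_RESOURCES[mode]
    # In a real system these steps would allocate buffers, load models and
    # re-link the pipeline; here they are only represented by the returned dict.
    return {
        "buffers": [bytearray(1) for _ in range(spec["cache_buffers"])],
        "models": list(spec["algorithm_models"]),
        "pipeline": list(spec["pipeline"]),
    }

if __name__ == "__main__":
    resources = configure_shooting_resources("landscape")
    print(resources["models"])  # ['hdr', 'awb', 'ae', 'af']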
During a shooting mode switch, the camera preview can only produce a clear data frame after the shooting resource configuration is complete. However, because configuring the shooting resources takes a long time, data frames stall after the switching operation is issued and no new data frame is output for a short period, so the shooting effect appears with a certain lag relative to the moment of operation; the user perceives a sudden blur or freeze, which degrades the shooting experience. The switching operation may be a manual switch by the user or an automatic switch by the electronic device. Illustratively, configuring the shooting resources takes, for example, 100 ms; for a shooting mode running at 30 fps (frames per second), this spans at least 3 frame intervals, that is, there is a delay of about 3 data frames during which no new data frame is output, which manifests as a sudden blur or freeze of the camera image, as the short calculation below shows.
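The following small Python calculation simply reproduces the arithmetic of the example above (100 ms of configuration time at 30 fps); it is illustrative only and not part of the original disclosure.

import math

def frames_delayed(config_time_ms: float, fps: float) -> int:
    """Number of frame intervals consumed while shooting resources are configured."""
    frame_interval_ms = 1000.0 / fps          # about 33.3 ms per frame at 30 fps
    return math.ceil(config_time_ms / frame_interval_ms)

print(frames_delayed(100, 30))  # 3 -> roughly 3 data frames with no new output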
According to the technical solution of this application, image analysis is performed on the first image data acquired in the first shooting mode of the electronic device to determine a second shooting mode, and the shooting resources required by the second shooting mode, which is different from the first shooting mode, are configured. Because these shooting resources are configured in advance, the camera picture no longer blurs or freezes abruptly during the subsequent switch to the second shooting mode, which ultimately improves the user's shooting experience.
Referring to fig. 3, a flowchart of an image capturing method according to an embodiment of the present application is shown. The method can be applied to the electronic device 10 in the implementation environment shown in fig. 1-2, and may comprise the following steps (310-350):
step 310, acquiring first image data based on a first shooting mode of the electronic equipment.
The first image data is acquired based on the first shooting mode of the electronic device 10, that is, the shooting mode currently in use by the electronic device 10. In one example, the shooting mode in use may be the initial shooting mode, that is, a default shooting mode or the most recently used shooting mode. In another example, the shooting mode in use may be a shooting mode reached after at least one switch has been completed. The first shooting mode may be any one of the shooting modes described in the above embodiments.
The first image data may be the current image data frame, or may be a plurality of image data frames near the current image data frame. Illustratively, the first image data includes image data of a current image data frame and 1 frame before and after the current image data frame, that is, 3 frames in total.
Step 320, performing image analysis on the first image data to obtain a first image analysis result.
Image analysis means analyzing the content of image data. In one possible embodiment, the feature point change speed, the feature point change frequency, the color component proportion, and the like in the image data may be analyzed to obtain the first image analysis result. Illustratively, hand feature points are analyzed in the first image data to obtain information such as the speed and frequency of hand movement; the color component proportion is analyzed in the first image data to obtain the ratio of the different colors in the first image data.
In a possible embodiment, the first image data may further be subjected to image recognition using a solidified deep learning convolutional neural network algorithm to obtain an image recognition result, and the analysis is then performed on the basis of that recognition result to obtain the first image analysis result. Image recognition is a technique that uses deep learning algorithms to process, analyze, and understand image data in order to identify various objects, scenes, and concepts in it. A solidified deep learning convolutional neural network algorithm is one that has been fixed in the hardware bottom layer in advance using a Hardware Description Language (HDL), so that it runs faster and provides hardware acceleration. Using such a solidified algorithm for image recognition greatly increases the recognition speed for the first image data, which in turn speeds up the overall image analysis, so the image analysis result is obtained more quickly. Optionally, the solidified deep learning convolutional neural network algorithm may be a customized algorithm to provide more personalized recognition functions. Illustratively, the image recognition result may include men, women, children, pets, food, sea, mountains, water, woods, sunset, buildings, and so on, and the first image analysis result obtained on the basis of this recognition result includes the subject proportion (the ratio of the area of the photographed subject in the image to the area of the complete image), the color component proportion, and so on, which is not limited in this application. A sketch of deriving such an analysis result is given below.
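As a purely illustrative sketch, and assuming the recognition step already yields a subject mask (the patent does not prescribe this form), the Python code below derives a subject proportion and color component proportions from a frame; function and field names are hypothetical.

import numpy as np

def analyze_frame(rgb: np.ndarray, subject_mask: np.ndarray) -> dict:
    """Derive a first-image-analysis result from a recognition output.

    rgb          : H x W x 3 image data
    subject_mask : H x W boolean mask of the recognized subject (e.g. a person),
                   assumed to come from the solidified CNN mentioned above.
    """
    total_pixels = rgb.shape[0] * rgb.shape[1]
    subject_ratio = float(subject_mask.sum()) / total_pixels   # subject area / image area

    channel_sums = rgb.reshape(-1, 3).sum(axis=0).astype(float)
    color_proportion = channel_sums / channel_sums.sum()       # share of R, G, B

    return {
        "subject_ratio": subject_ratio,
        "color_proportion": {"r": color_proportion[0],
                             "g": color_proportion[1],
                             "b": color_proportion[2]},
    }

# Toy usage: a 4x4 frame whose left half is a bright-blue "subject".
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:, :2, 2] = 255
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
print(analyze_frame(frame, mask))   # subject_ratio == 0.5, blue dominates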
Step 330, determining a second shooting mode based on at least the first image analysis result.
In a possible embodiment, the second shooting mode is determined based on the first image analysis result.
In a possible embodiment, the second shooting mode is determined based on a comparison of the first image analysis result with a first threshold. There may be one or more first image analysis results and one or more first thresholds. Illustratively, the first image analysis result includes at least a subject proportion and/or a color component proportion, and the second shooting mode is determined based on the subject proportion and/or the color component proportion. The first threshold corresponds to the first image analysis result, that is, for each kind of first image analysis result there is a corresponding first threshold; therefore the first threshold likewise concerns at least a subject proportion and/or a color component proportion. Illustratively, the photographed subject may be a person, pet, food, insect, sea, mountain, sunset, building, and the like; the color component proportion refers to, for example, the ratio of red, green, and blue in RGB. Illustratively, the second shooting mode includes at least a portrait mode or a landscape mode, and may also be a pet mode, a food mode, a building mode, and the like.
In one example, the first threshold is a portrait proportion; if the portrait proportion in the first image analysis result is greater than or equal to the first threshold, the resulting second shooting mode is determined to be the portrait mode; conversely, if the portrait proportion in the first image analysis result is smaller than the first threshold, the resulting second shooting mode is determined to be the landscape mode. Illustratively, the first threshold may be a portrait proportion of 25%.
In one example, the first threshold is the proportion of blue among the color components; if the blue proportion in the first image analysis result is greater than or equal to the first threshold, the resulting second shooting mode is determined to be the landscape mode. Illustratively, the first threshold may be a blue proportion of 50%.
In one example, the first threshold includes both a subject proportion and a color component proportion, and the second shooting mode is determined from the comparison of the first image analysis result with the first threshold. A sketch of this single-threshold decision is given below.
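The Python sketch below combines the two single-threshold examples above (25% portrait proportion, 50% blue proportion); it is not part of the original disclosure, and the order in which the checks are applied is an assumption.

def determine_second_mode(analysis: dict,
                          portrait_threshold: float = 0.25,
                          blue_threshold: float = 0.50) -> str:
    """Pick a candidate second shooting mode from the first image analysis result.

    The 25% portrait proportion and 50% blue proportion come from the examples
    above; the check order is an assumption made for illustration.
    """
    if analysis.get("portrait_ratio", 0.0) >= portrait_threshold:
        return "portrait"
    if analysis.get("color_proportion", {}).get("b", 0.0) >= blue_threshold:
        return "landscape"
    # A portrait proportion below the threshold also maps to landscape in the example.
    return "landscape"

print(determine_second_mode({"portrait_ratio": 0.30}))           # portrait
print(determine_second_mode({"portrait_ratio": 0.10,
                             "color_proportion": {"b": 0.62}}))  # landscape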
In a possible embodiment, after comparing the first image analysis result with the first threshold, the method further includes: determining the second shooting mode based on a comparison of the first image analysis result with a first sub-threshold. Illustratively, the first image analysis result includes at least one or more of a subject proportion, a color component proportion, a feature point change speed, a feature point change frequency, and the like. The first threshold corresponds to at least a portion of the first image analysis result; illustratively, it includes at least a subject proportion and/or a color component proportion. The first sub-threshold also corresponds to at least a portion of the first image analysis result; illustratively, it includes at least one or more of a subject proportion, a feature point change speed, a feature point change frequency, and the like. Illustratively, the photographed subject may be a person, pet, food, insect, sea, mountain, sunset, building, and the like; the color component proportion refers to, for example, the ratio of red, green, and blue in RGB; the feature point change speed and/or frequency is, for example, information such as the speed and/or frequency of hand feature point changes obtained by analyzing several frames of image data. Illustratively, the second shooting mode includes at least any one of a portrait beauty mode, a child high frame rate mode, and a pet high frame rate mode. The high frame rate mode is a shooting mode intended for subjects that move frequently, such as children or pets, and yields more detail and smoother, clearer image quality.
In one example, the first threshold is a child subject proportion and the first sub-threshold is a hand feature point change speed; when the child subject proportion in the first image analysis result is greater than or equal to the first threshold and the hand feature point change speed in the first image analysis result is greater than or equal to the first sub-threshold, the resulting second shooting mode is determined to be the child high frame rate mode.
In one example, both the first threshold and the first sub-threshold are portrait proportions, with the first sub-threshold greater than the first threshold; when the portrait proportion in the first image analysis result is greater than or equal to the first threshold and is further greater than or equal to the first sub-threshold, the resulting second shooting mode is determined to be the portrait beauty mode. A sketch of this two-stage comparison follows.
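A minimal Python sketch of the two-stage (threshold plus sub-threshold) comparison described in the two examples above; it is not part of the original disclosure, and all threshold values here are assumed for illustration.

def refine_second_mode(analysis: dict,
                       child_ratio_threshold: float = 0.25,        # first threshold (assumed value)
                       hand_speed_sub_threshold: float = 1.0,      # first sub-threshold (assumed value)
                       portrait_ratio_threshold: float = 0.25,     # first threshold
                       portrait_ratio_sub_threshold: float = 0.50  # first sub-threshold, greater than the first threshold (assumed)
                       ):
    """Two-stage comparison: first threshold, then first sub-threshold."""
    if (analysis.get("child_ratio", 0.0) >= child_ratio_threshold
            and analysis.get("hand_feature_speed", 0.0) >= hand_speed_sub_threshold):
        return "child_high_frame_rate"
    portrait_ratio = analysis.get("portrait_ratio", 0.0)
    if portrait_ratio >= portrait_ratio_threshold and portrait_ratio >= portrait_ratio_sub_threshold:
        return "portrait_beauty"
    return None  # no refined mode; fall back to the single-threshold decision

print(refine_second_mode({"child_ratio": 0.4, "hand_feature_speed": 2.5}))  # child_high_frame_rate
print(refine_second_mode({"portrait_ratio": 0.6}))                          # portrait_beauty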
Illustratively, when the first image analysis result is far from the set threshold, the first image data may be recognized on more widely spaced image data frames, reducing the occupation of system resources and the impact on power consumption. When the result is close to the first threshold of the second shooting mode, the first image data is analyzed frame by frame or every other frame, so that the shooting resources required by the second shooting mode are configured in time and the user experience is guaranteed.
In summary, determining the second shooting mode from the first image analysis result obtained by analyzing the image recognition result ensures both the efficiency and the accuracy of the decision, and comparing the first image analysis result with a reasonably set threshold further improves its accuracy. The combined use of the first threshold and the first sub-threshold makes it possible to select more personalized shooting modes.
In a possible embodiment, the second shooting mode is determined based on the first image analysis result and a parameter acquired by a sensor of the electronic device. The details of this embodiment are set forth in the following embodiments of the present application and will not be repeated herein.
Step 340, configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode.
The second shooting mode is compared with the first shooting mode; when they differ, this indicates that the first shooting mode may be switched. Therefore, before the switching operation is actually carried out, the shooting resources required by the second shooting mode are configured in advance, so that no sudden blur or freeze occurs during the subsequent mode switch. For the configuration of the shooting resources required by a shooting mode, please refer to the related embodiments of fig. 1-2, which are not repeated here.
Step 350, when the first shooting mode is switched to the second shooting mode, configuring the electronic device to shoot images in the second shooting mode based on the shooting resources.
Because the shooting resources required by the second shooting mode have been configured in advance, when the first shooting mode is switched to the second shooting mode the electronic device is simply configured to shoot images in the second shooting mode based on those shooting resources, which avoids any sudden blur or freeze during the mode switch and improves the user's shooting experience. The condition for switching to the second shooting mode is not limited here: the switch may happen directly once the shooting resources required by the second shooting mode are configured, or a switching condition may be preset and the switch performed once that condition is satisfied.
In one possible embodiment, steps 310-350 are all performed on the same processor side.
In one possible embodiment, steps 310-330 are performed on the first processor side and steps 340-350 are performed on the second processor side. In one example, the first processor and the second processor may be processor units in the same chip. In one example, the first processor and the second processor may be processing units in different chips. In one example, the first processor and the second processor may be different chips.
According to the technical solution of this application, image analysis is performed on the first image data acquired in the first shooting mode of the electronic device to determine a second shooting mode, and the shooting resources required by the second shooting mode, which is different from the first shooting mode, are configured in advance, so that the camera picture no longer blurs or freezes abruptly during the subsequent switch to the second shooting mode, which ultimately improves the user's shooting experience. For video shooting in particular, avoiding sudden blur or freezes during mode switching further improves the quality of the recorded video.
Referring to fig. 4, a flowchart of an image capturing method according to an embodiment of the present application is shown. The method can be applied to the electronic device 10 in the implementation environment shown in fig. 1-2, and may comprise steps 410-460.
Steps 410, 440-460 are the same as steps 310, 330-350 in the embodiment of fig. 3, and for details, refer to the description of steps 310, 330-350 in the present application, and are not described herein again.
Step 420, preprocessing the first image data, wherein the preprocessing includes reducing the image data resolution and/or cropping the target area of the image data.
The first image data may be raw image data that is not processed. Raw data is raw image data obtained by converting a captured light source signal into a digital signal by an image sensor such as a Complementary Metal-Oxide-Semiconductor (CMOS) sensor or a Charge Coupled Device (CCD) sensor, and is not processed by any image algorithm; illustratively, the data format of the original image data is generally a RAW format.
In a possible embodiment, the first image data is preprocessed in order to reduce the amount of data for image analysis and increase the data processing speed without affecting the subsequent analysis. Illustratively, the preprocessing includes reducing the image data resolution and/or cropping the target region of the image data. The image data resolution is reduced, for example, from 4K (4096 × 2160 pixels) to 640 × 480 pixels. The target region is an arbitrary region in the image data; illustratively, it may be a region of interest (ROI), the central region of the image, a region the user considers important, or the like. The shape of the target region is arbitrary, for example rectangular, circular, or irregular. The target region is typically where the photographed subject is, for example the area occupied by a portrait, food, a pet, a mountain, or the sea. Cropping the target region of the image data means cutting out the set target region, so that the amount of data to be analyzed in subsequent image recognition is reduced and the data processing speed is increased. Illustratively, the preprocessing of the image data may also include dead pixel compensation, linearization, and the like. A minimal preprocessing sketch is given below.
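The following Python sketch illustrates the two preprocessing operations above (cropping a target region and downscaling to 640 × 480). It is not part of the original disclosure; the use of OpenCV and the assumption that the frame has already been demosaiced are simplifications for illustration.

import numpy as np
import cv2  # OpenCV; used here only for resizing

def preprocess(raw_rgb: np.ndarray,
               target_box=None,
               out_size=(640, 480)) -> np.ndarray:
    """Reduce resolution and/or crop the target region before image analysis.

    raw_rgb    : H x W x 3 frame (already demosaiced for simplicity; real RAW
                 data would first go through dead pixel compensation, etc.)
    target_box : optional (x, y, w, h) rectangle, e.g. a centred ROI
    out_size   : (width, height) to downscale to, e.g. 4K down to 640 x 480
    """
    if target_box is not None:
        x, y, w, h = target_box
        raw_rgb = raw_rgb[y:y + h, x:x + w]
    return cv2.resize(raw_rgb, out_size, interpolation=cv2.INTER_AREA)

frame_4k = np.zeros((2160, 4096, 3), dtype=np.uint8)
small = preprocess(frame_4k, target_box=(1024, 540, 2048, 1080))
print(small.shape)  # (480, 640, 3)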
Step 430, performing image analysis on the preprocessed first image data to obtain a first image analysis result.
Except that the object of the image analysis is the preprocessed first image data, the other contents of this step are the same as those of step 320 in the embodiment of fig. 3, and for details, refer to the description about step 320 in this application, and are not described herein again.
In addition, steps 420-430 are not limited to this embodiment, and steps 420-430 may be included in each embodiment of the present application before performing image analysis.
Referring to fig. 5, a flowchart of an image capturing method according to an embodiment of the present application is shown. The method can be applied to the electronic device 10 in the implementation environment shown in fig. 1-2, and may comprise steps 510-550.
Steps 510-520 and 540-550 are the same as steps 310-320 and 340-350 in fig. 3; for details, refer to their descriptions in this application, which are not repeated here. Step 530 details one embodiment of step 330.
Step 530, determining the second shooting mode based on the first image analysis result and the parameters acquired by the sensor of the electronic device.
The parameters acquired by the sensor are the parameters acquired by the sensor for the image data frame(s) corresponding to the first image data.
In a possible embodiment, the second shooting mode is determined based on a comparison of the first image analysis result with a first threshold and a comparison of the parameter acquired by the sensor with a second threshold. There may be one or more of each: first image analysis results, first thresholds, sensor parameters, and second thresholds. Illustratively, the first image analysis result includes at least a subject proportion and/or a color component proportion. The first threshold corresponds to the first image analysis result, that is, for each kind of first image analysis result there is a corresponding first threshold; therefore the first threshold likewise concerns at least a subject proportion and/or a color component proportion. Illustratively, the photographed subject may be a person, pet, food, insect, sea, mountain, sunset, building, and the like; the color component proportion refers to, for example, the ratio of red, green, and blue in RGB. Illustratively, the parameters acquired by the sensors include at least one or more of an ambient brightness value, angular velocity information, the subject distance, the power-consumption temperature, and the like, and the corresponding sensors include a light sensor, a gyroscope, a TOF sensor, a temperature sensor, and the like. The second threshold corresponds to the parameter acquired by the sensor, that is, for each kind of sensor parameter there is a corresponding second threshold; therefore the second threshold likewise concerns at least one or more of an ambient brightness value, angular velocity information, subject distance, power-consumption temperature, and the like. Illustratively, the second shooting mode includes at least any one of a portrait night scene mode, a portrait anti-shake mode, a landscape night scene mode, a landscape anti-shake mode, and a macro mode; it may also be a portrait night scene anti-shake mode, a landscape night scene anti-shake mode, or the like.
In one example, the first threshold is a portrait proportion and the second threshold is an ambient brightness value; if the portrait proportion in the first image analysis result is greater than or equal to the first threshold and the ambient brightness value acquired by the sensor is smaller than the second threshold, the resulting second shooting mode is determined to be the portrait night scene mode. Illustratively, the first threshold may be a portrait proportion of 25%, and the second threshold may be an ambient brightness value of 25 lux.
In one example, the first threshold is a portrait proportion and the second threshold concerns angular velocity information; if the portrait proportion in the first image analysis result is smaller than the first threshold and the angular velocity acquired by the sensor is greater than the second threshold, the resulting second shooting mode is determined to be the landscape anti-shake mode.
In one example, the first threshold is an insect proportion and the second threshold is a subject distance; if the insect proportion in the first image analysis result is greater than or equal to the first threshold and the subject distance acquired by the sensor is smaller than the second threshold, the resulting second shooting mode is determined to be the macro mode.
Illustratively, the second threshold may further include a power-consumption temperature; when the power-consumption temperature acquired by the sensor is additionally greater than this second threshold, the resulting second shooting mode is determined to be a low frame rate shooting mode, for example 30 fps instead of 60 fps. A sketch combining these sensor-based checks is given below.
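The Python sketch below fuses the image analysis result with sensor parameters along the lines of the examples above. It is not part of the original disclosure; the 25% portrait proportion and 25 lux thresholds come from the text, while the angular velocity, distance, and temperature values (and all names) are assumptions made for illustration.

def determine_mode_with_sensors(analysis: dict, sensors: dict) -> dict:
    """Fuse the first image analysis result with sensor parameters."""
    portrait_ratio = analysis.get("portrait_ratio", 0.0)
    insect_ratio = analysis.get("insect_ratio", 0.0)
    lux = sensors.get("ambient_lux", 1000.0)
    gyro = sensors.get("angular_velocity", 0.0)          # rad/s, assumed unit
    distance_cm = sensors.get("subject_distance_cm", 1000.0)
    temperature = sensors.get("temperature_c", 30.0)

    if portrait_ratio >= 0.25 and lux < 25.0:
        mode = "portrait_night"
    elif portrait_ratio < 0.25 and gyro > 0.5:
        mode = "landscape_anti_shake"
    elif insect_ratio >= 0.10 and distance_cm < 10.0:
        mode = "macro"
    else:
        mode = "landscape"

    # A high power-consumption temperature additionally forces a lower frame rate.
    frame_rate = 30 if temperature > 45.0 else 60
    return {"mode": mode, "fps": frame_rate}

print(determine_mode_with_sensors({"portrait_ratio": 0.4},
                                  {"ambient_lux": 10.0, "temperature_c": 50.0}))
# {'mode': 'portrait_night', 'fps': 30}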
Illustratively, when the first image analysis result is far from the set threshold, the first image data may be recognized on more widely spaced image data frames, reducing the occupation of system resources and the impact on power consumption. When the result is close to the switching threshold of the second shooting mode, the first image data is analyzed frame by frame or every other frame, so that the shooting resources required by the second shooting mode are configured in time and the user experience is guaranteed.
On the basis of the above embodiment, the second threshold may further include a second sub-threshold, which likewise concerns at least one or more of an ambient brightness value, angular velocity information, subject distance, power-consumption temperature, and the like. In a possible embodiment, the second shooting mode is determined based on a comparison of the first image analysis result with the first threshold and a comparison of the parameter acquired by the sensor with the second threshold and the second sub-threshold.
In one example, when the ambient brightness value acquired by the sensor is additionally smaller than the second sub-threshold, the resulting second shooting mode is determined to be a shooting mode for an extreme night environment. Illustratively, if the brightness parameter corresponding to a single first image data frame falls below the set second sub-threshold, or the brightness parameters of two thirds of multiple first image data frames fall below the set second sub-threshold, the resulting second shooting mode is determined to be the extreme night shooting mode. Illustratively, the second sub-threshold is 6 lux.
In a possible embodiment, after comparing the first image analysis result with the first threshold, the method may further include: determining the second shooting mode based on a comparison of the first image analysis result with a first sub-threshold and a comparison of the parameter acquired by the sensor with the second threshold. Illustratively, the first image analysis result includes at least one or more of a subject proportion, a color component proportion, a feature point change speed, a feature point change frequency, and the like. The first threshold corresponds to at least a portion of the first image analysis result; illustratively, it includes at least a subject proportion and/or a color component proportion. The first sub-threshold also corresponds to at least a portion of the first image analysis result; illustratively, it includes at least one or more of a subject proportion, a feature point change speed, a feature point change frequency, and the like. Illustratively, the photographed subject may be a person, pet, food, insect, sea, mountain, sunset, building, and the like; the color component proportion refers to, for example, the ratio of red, green, and blue in RGB; the feature point change speed and/or frequency is, for example, information such as the speed and/or frequency of hand feature point changes obtained by analyzing several frames of image data. Illustratively, the parameters acquired by the sensors include at least one or more of an ambient brightness value, angular velocity information, the subject distance, the power-consumption temperature, and the like, and the corresponding sensors include a light sensor, a gyroscope, a TOF sensor, a temperature sensor, and the like. The second threshold corresponds to the parameter acquired by the sensor, that is, for each kind of sensor parameter there is a corresponding second threshold; therefore the second threshold likewise concerns at least one or more of an ambient brightness value, angular velocity information, subject distance, power-consumption temperature, and the like. Illustratively, the second shooting mode includes at least any one of a portrait beauty night scene mode, a portrait beauty anti-shake mode, a child high frame rate night scene mode, and a child high frame rate anti-shake mode. The high frame rate mode is a shooting mode intended for subjects that move frequently, such as children or pets, and yields more detail and smoother, clearer image quality.
In one example, the first threshold is a child subject proportion, the first sub-threshold is a hand feature point change speed, and the second threshold is an ambient brightness value; if the child subject proportion in the first image analysis result is greater than or equal to the first threshold, the hand feature point change speed in the first image analysis result is greater than or equal to the first sub-threshold, and the ambient brightness value acquired by the sensor is smaller than the second threshold, the resulting second shooting mode is determined to be the child high frame rate night scene mode.
In one example, both the first threshold and the first sub-threshold are portrait proportions, with the first sub-threshold greater than the first threshold, and the second threshold is an ambient brightness value; if the portrait proportion in the first image analysis result is greater than or equal to the first threshold and further greater than or equal to the first sub-threshold, and the ambient brightness value acquired by the sensor is smaller than the second threshold, the resulting second shooting mode is determined to be the portrait beauty night scene mode.
Illustratively, the second threshold may further include a power-consumption temperature; when the power-consumption temperature acquired by the sensor is additionally greater than this second threshold, the resulting second shooting mode is determined to be a low frame rate shooting mode, for example 30 fps instead of 60 fps.
Illustratively, when the first image analysis result is far from the set threshold, the first image data may be recognized on more widely spaced image data frames, reducing the occupation of system resources and the impact on power consumption. When the result is close to the switching threshold of the second shooting mode, the first image data is analyzed frame by frame or every other frame, so that the shooting resources required by the second shooting mode are configured in time and the user experience is guaranteed.
On the basis of the above embodiment, the second threshold may further include a second sub-threshold, which likewise concerns at least one or more of an ambient brightness value, angular velocity information, subject distance, power-consumption temperature, and the like. In a possible embodiment, after comparing the first image analysis result with the first threshold, the method may further include: determining the second shooting mode based on a comparison of the first image analysis result with the first sub-threshold and a comparison of the parameter acquired by the sensor with the second threshold and the second sub-threshold.
In one example, when the ambient brightness value acquired by the sensor is additionally smaller than the second sub-threshold, the resulting second shooting mode is determined to be a shooting mode for an extreme night environment. Illustratively, if the brightness parameter corresponding to a single first image data frame falls below the set second sub-threshold, or the brightness parameters of two thirds of multiple first image data frames fall below the set second sub-threshold, the resulting second shooting mode is determined to be the extreme night shooting mode. Illustratively, the second sub-threshold is 6 lux. A sketch of this check is given below.
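The Python sketch below illustrates the single-frame / two-thirds-of-frames extreme night check with the 6 lux second sub-threshold mentioned above. It is not part of the original disclosure; reading "crossing the sub-threshold" as the brightness falling below it follows the "smaller than the second sub-threshold" condition stated above.

def is_extreme_night(frame_lux_values, sub_threshold_lux: float = 6.0) -> bool:
    """Extreme-night check: a single frame must fall below the second
    sub-threshold; for multiple frames, at least two thirds of them must."""
    dark = [lux < sub_threshold_lux for lux in frame_lux_values]
    if len(dark) == 1:
        return dark[0]
    return sum(dark) >= (2 / 3) * len(dark)

print(is_extreme_night([4.0]))            # True  (single frame below 6 lux)
print(is_extreme_night([4.0, 5.5, 7.0]))  # True  (2 of 3 frames below 6 lux)
print(is_extreme_night([7.0, 8.0, 5.0]))  # False (only 1 of 3)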
In summary, determining the second shooting mode from the first image analysis result obtained by analyzing the image recognition result ensures both the efficiency and the accuracy of the decision, and comparing the first image analysis result and the sensor parameters with reasonably set thresholds further improves the accuracy of the determination. The combined use of the first threshold, the first sub-threshold, the second threshold, and the second sub-threshold makes it possible to select more personalized shooting modes.
Referring to fig. 6, a flowchart of an image capturing method according to an embodiment of the present application is shown. The method can be applied to the electronic device 10 in the implementation environment shown in fig. 1-2, and may include steps 610-660.
Steps 610-640 and 660 are the same as steps 310-350 in fig. 3; for details, refer to the description of steps 310-350 in this application, which is not repeated here. Step 650 details the switching condition of the second shooting mode.
Step 650, when a preset switching condition is satisfied, switching the first shooting mode to the second shooting mode.
The preset switching condition is set according to actual needs. In a possible embodiment, the preset switching condition is that a shooting mode switching instruction from the user is received. Illustratively, after configuring the shooting resources required by the second shooting mode, the electronic device may prompt the user on the display screen to confirm whether to switch to the second shooting mode; if the user confirms the switch, the first shooting mode is switched to the second shooting mode.
In a possible embodiment, the preset switching condition is based on further analysis of second image data by the electronic device, which automatically switches the first shooting mode to the second shooting mode once the preset switching condition is met. Illustratively, image analysis is performed on the second image data to obtain a second image analysis result, where the second image data is image data acquired in the first shooting mode of the electronic device after the first image data. The details of this embodiment are set forth in the following embodiments and are not repeated here. Compared with manual switching, which requires the user's cooperation, automatic switching by the electronic device keeps the shooting quality stable. Especially for video shooting, automatic switching avoids the device shake caused by manual switching and thus improves shooting quality.
In some embodiments, the preset switching condition includes that the first image analysis result satisfies one or more first thresholds and/or the second image analysis result satisfies one or more second thresholds, where the first and second thresholds may be adjusted per image frame and may also be dynamically adjusted according to changes in the shooting scene, the photographed subject, and/or the color components, as in the sketch below.
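A minimal Python sketch of such a switching-condition check with dynamically adjustable thresholds; it is not part of the original disclosure, the adjustment rule is invented purely for illustration, and requiring both results to satisfy their thresholds is only one of the combinations the text allows.

def meets_switch_condition(first_result: dict, second_result: dict,
                           first_thresholds: dict, second_thresholds: dict) -> bool:
    """Preset switching condition: here both analysis results must satisfy their
    thresholds, although the text also allows either one alone."""
    first_ok = all(first_result.get(k, 0.0) >= v for k, v in first_thresholds.items())
    second_ok = all(second_result.get(k, 0.0) >= v for k, v in second_thresholds.items())
    return first_ok and second_ok

def adjust_thresholds(thresholds: dict, scene_change: float) -> dict:
    """Hypothetical per-frame adjustment: relax the thresholds slightly when the
    scene is changing quickly so the switch is not missed."""
    factor = 0.9 if scene_change > 0.5 else 1.0
    return {k: v * factor for k, v in thresholds.items()}

first_th = adjust_thresholds({"portrait_ratio": 0.25}, scene_change=0.7)
print(meets_switch_condition({"portrait_ratio": 0.24}, {"portrait_ratio": 0.30},
                             first_th, {"portrait_ratio": 0.25}))   # True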
Judging whether the first shooting mode should be switched to the second shooting mode by means of a switching condition makes the switching decision more accurate, and the switching condition can be chosen flexibly to fit personalized shooting requirements. Requiring the analysis result of the second image data to meet the preset switching condition before switching adds a dynamic judgment step and further ensures the accuracy of the switching decision.
Step 650 is not limited to this embodiment; in each embodiment of the present application, step 650 may be included to determine whether to switch to the second shooting mode.
Referring to fig. 7, a flowchart of an image capturing method according to an embodiment of the present application is shown. The method can be applied to the electronic device 10 in the implementation environment shown in fig. 1-2, and may comprise steps 710-780.
Steps 710-740 are the same as steps 310-340 in fig. 3; for details, refer to the description of steps 310-340 in this application, which is not repeated here. Steps 750-780 detail one embodiment of steps 650-660.
Step 750, performing image analysis on the second image data to obtain a second image analysis result; wherein the second image data is image data acquired in a first photographing mode of the electronic device after the first image data.
The second image data is image data acquired in the first shooting mode of the electronic device after the first image data, and it may consist of a single image data frame or of multiple frames. It may be the frame immediately following the first image data, a frame taken after a predetermined time interval, several frames taken at intervals, or the frames within a predetermined time interval. Illustratively, the second image data may be the 1 frame after the first image data, or 3 frames; it may be a data frame 2 frames after the first image data frame, or 3 such frames each spaced 2 frames apart; it may also be the data frames within the 10 frames after the first image data frame, of which all 10 may be analyzed, or every third frame, and so on.
Image analysis means analyzing the content of image data. In one possible embodiment, the feature point change speed, the feature point change frequency, the color component proportion, and the like in the image data may be analyzed to obtain the second image analysis result. Illustratively, hand feature points are analyzed in the second image data to obtain information such as the speed and frequency of hand movement; the color component proportion is analyzed in the second image data to obtain the ratio of the different colors in the second image data.
In a possible embodiment, the second image data may further be subjected to image recognition using a solidified deep learning convolutional neural network algorithm to obtain an image recognition result, and the second image data is then analyzed on the basis of that recognition result to obtain the second image analysis result. Image recognition is a technique that uses deep learning algorithms to process, analyze, and understand image data in order to identify various objects, scenes, and concepts in it. A solidified deep learning convolutional neural network algorithm is one that has been fixed in the hardware bottom layer in advance using a Hardware Description Language (HDL), so that it runs faster and provides hardware acceleration. Using such a solidified algorithm for image recognition greatly increases the recognition speed for the second image data, which in turn speeds up the overall image analysis, so the image analysis result is obtained more quickly. Optionally, the solidified deep learning convolutional neural network algorithm may be a customized algorithm to provide more personalized recognition functions. Illustratively, the image recognition result may include men, women, children, pets, food, sea, mountains, water, woods, sunset, buildings, and so on, and the second image analysis result obtained on the basis of this recognition result includes the subject proportion, the color component proportion, the trend of the subject proportion, and so on, which is not limited in this application.
Step 760, determining whether the second image analysis result reaches a switching threshold of the second shooting mode.
The switching threshold corresponds to the second image analysis result and includes at least a subject proportion and/or a color component proportion. The switching threshold may correspond to each image-analysis-related threshold used in step 730 to determine the second shooting mode, or it may be the variation trend of each such threshold.
Illustratively, if the second image analysis result is the portrait proportion or the trend of the portrait proportion and the second shooting mode is the portrait mode, the switching threshold may be a portrait proportion or a trend of the portrait proportion.
In a possible embodiment, step 760 may be followed by determining whether the parameter acquired by the sensor corresponding to the second image data reaches the switching threshold of the second shooting mode. This switching threshold may correspond to each sensor-parameter-related threshold used in step 730 to determine the second shooting mode, or it may be the variation trend of each such threshold.
Illustratively, if the second image analysis result is the portrait proportion, the parameter acquired by the sensor is the ambient brightness value, and the second shooting mode is the portrait night scene mode, the switching threshold may be a portrait proportion and an ambient brightness value, or the trend of the portrait proportion and the trend of the ambient brightness value.
Step 770, when the second image analysis result reaches the switching threshold of the second shooting mode, switching the first shooting mode to the second shooting mode; when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
The second image analysis result is compared with the switching threshold, and when it reaches the switching threshold, the first shooting mode is automatically switched to the second shooting mode. Compared with manual switching, which requires the user's cooperation, automatic switching by the electronic device keeps the shooting quality stable. For video shooting in particular, automatic switching avoids the shake caused by manual switching and thus improves shooting quality. In addition, since the shooting resources required by the second shooting mode are already configured before the switch, sudden blurring or stuttering during the mode switch is avoided.
In one possible embodiment, in consideration of resource utilization, a predetermined time may be set within which the switching threshold must be reached, so as to avoid the resource waste caused by keeping the shooting resources occupied indefinitely.
In a possible embodiment, after it is determined that the second image analysis result reaches the switching threshold of the second shooting mode, it is further determined whether the parameter acquired by the sensor and corresponding to the second image data also reaches the switching threshold of the second shooting mode; only then is the first shooting mode switched to the second shooting mode.
Illustratively, when the person proportion in the second image analysis result reaches the person-proportion switching threshold of the second shooting mode, and the ambient brightness value acquired by the sensor and corresponding to the second image data reaches the ambient-brightness switching threshold of the second shooting mode, the first shooting mode is switched to the second shooting mode, namely the portrait night-scene mode.
After the first shooting mode is switched to the second shooting mode, the preview or shooting of subsequent images is configured based on the second shooting mode, and the process returns to step 710.
Because the shooting resources required by the second shooting mode have been configured in advance, when the first shooting mode is switched to the second shooting mode the electronic device can immediately shoot images in the second shooting mode based on those resources; sudden blurring or stuttering during the mode switch is therefore avoided, and the user's shooting experience is improved.
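Illustratively, the effect of configuring the shooting resources ahead of the switch can be pictured with the following hypothetical Python sketch, in which the class, the method names, and the simulated configuration delay are all assumptions.

import time

class CameraController:
    def __init__(self):
        self.current_mode = "landscape"
        self.preconfigured = {}   # mode name -> resources prepared in advance

    def preconfigure(self, mode: str):
        # The slow part: load algorithm models, allocate caches, build the pipeline.
        time.sleep(0.2)           # stands in for the configuration cost
        self.preconfigured[mode] = {"models": mode + "-models",
                                    "cache": mode + "-cache",
                                    "pipeline": mode + "-pipeline"}

    def switch_to(self, mode: str):
        resources = self.preconfigured.get(mode)
        if resources is None:
            # Without preconfiguration the preview would stall right here.
            self.preconfigure(mode)
            resources = self.preconfigured[mode]
        self.current_mode = mode  # fast: only apply what is already prepared
        return resources

controller = CameraController()
controller.preconfigure("portrait")      # done ahead of time, while still in the first mode
print(controller.switch_to("portrait"))  # near-instant switch, no sudden blur or stutter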
Step 780, when the second image analysis result does not reach the switching threshold of the second shooting mode, releasing the shooting resources.
The second image analysis result is compared with the switching threshold, and the shooting resources are released when it does not reach the switching threshold.
In a possible embodiment, the second image analysis result is compared with the switching threshold, and the shooting resources are released when it does not reach the switching threshold within a predetermined time.
In a possible embodiment, the second image analysis result is compared with the switching threshold, and the shooting resources are released when the second image analysis result keeps decreasing and falls to a release threshold.
In the above embodiments, the conditions for releasing the shooting resources are refined, which avoids both releasing them prematurely and holding them so long that resources are wasted.
After the shooting resources are released, the process returns to step 710.
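Illustratively, the switch/release decision of steps 770 and 780, including the predetermined-time and release-threshold refinements above, can be sketched in Python as follows; the threshold values, the frame-based timeout, and the function name are assumptions made for illustration.

SWITCH_THRESHOLD = 0.30   # assumed switching threshold (person proportion)
RELEASE_THRESHOLD = 0.10  # assumed release threshold
MAX_WAIT_FRAMES = 90      # assumed predetermined time, expressed in frames

def decide(history, frames_waited):
    """Return 'switch', 'release', or 'keep_waiting' for the preconfigured mode.
    history holds the second image analysis results of recent frames, newest last."""
    current = history[-1]
    if current >= SWITCH_THRESHOLD:
        return "switch"
    falling = len(history) >= 2 and all(b <= a for a, b in zip(history, history[1:]))
    if falling and current <= RELEASE_THRESHOLD:
        return "release"          # the subject has clearly left the scene
    if frames_waited >= MAX_WAIT_FRAMES:
        return "release"          # do not hold the shooting resources indefinitely
    return "keep_waiting"         # go back to step 710 with the resources kept

print(decide([0.25, 0.20, 0.12, 0.08], frames_waited=40))  # release: fell to the release threshold
print(decide([0.20, 0.26, 0.33], frames_waited=10))        # switch
print(decide([0.20, 0.22, 0.21], frames_waited=120))       # release: predetermined time exceeded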
The following are examples of chips that may be used to implement embodiments of the methods of the present application. For details not disclosed in the chip embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 8, a block diagram of an application processing chip 800 according to an embodiment of the present application is shown. The application processing chip 800 has functions of implementing the above method examples, and the functions may be implemented by hardware, or by hardware executing corresponding software. The application processing chip 800 includes a neural network processor 810, a central processing unit 820, an image signal processor 830 and a system bus 840, wherein the neural network processor 810, the central processing unit 820 and the image signal processor 830 are respectively connected to the system bus 840 and communicate with each other via the system bus 840. It is to be appreciated that although not referred to in the figures, the application processing chip 800 also includes one or more memories configured to store image data, algorithm models, process data, and the like. Illustratively, the neural network processor 810, the central processor 820 and the image signal processor 830 may directly perform data interaction, or may store the process data in a memory and then read the process data from the memory for further processing.
A neural network processor 810 configured to:
performing image analysis based on at least first image data acquired in a first shooting mode of the electronic device to determine a second shooting mode;
the central processor 820 configured to:
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
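Illustratively, the division of work among the neural network processor 810, the central processing unit 820, and the image signal processor 830 can be pictured with the following hypothetical Python sketch, in which a shared dictionary stands in for the on-chip memory reachable over the system bus 840; all class and method names are assumptions, not actual chip firmware.

class ImageSignalProcessor:
    def preprocess(self, frame, memory):
        # Prepare a lighter copy of the frame for analysis.
        memory["preview_frame"] = {"data": frame, "downscaled": True}

class NeuralNetworkProcessor:
    def analyze(self, memory):
        # A real chip would run the hardware-cured CNN on the preview frame here.
        memory["analysis"] = {"person_proportion": 0.42}
        memory["candidate_mode"] = "portrait"

class CentralProcessor:
    def configure_resources(self, memory):
        mode = memory["candidate_mode"]
        memory["resources"] = {"mode": mode, "cache": "allocated", "models": "loaded"}

shared_memory = {}   # stands in for on-chip memory reachable over the system bus
ImageSignalProcessor().preprocess(frame="raw-bytes", memory=shared_memory)
NeuralNetworkProcessor().analyze(shared_memory)
CentralProcessor().configure_resources(shared_memory)
print(shared_memory["resources"])   # resources are ready before the actual switch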
In an exemplary embodiment, the neural network processor 810 is further configured to: and carrying out image recognition on the first image data by utilizing a solidified deep learning convolutional neural network algorithm to obtain an image recognition result, and analyzing based on the image recognition result to obtain the first image analysis result.
In an exemplary embodiment, the application processing chip 800 further comprises the image signal processor 830, configured to: preprocess the first image data, wherein the preprocessing comprises reducing the image data resolution and/or clipping a target area of the image data;
the neural network processor 810 configured to: and carrying out image analysis on the preprocessed first image data to obtain a first image analysis result.
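Illustratively, the preprocessing performed before the analysis, reducing the resolution and/or clipping a target area, can be sketched in Python on nested lists as follows; the function names, the downscale factor, and the crop coordinates are assumptions made purely for illustration.

def downscale(frame, factor=4):
    """Keep every `factor`-th pixel in both directions."""
    return [row[::factor] for row in frame[::factor]]

def crop(frame, top, left, height, width):
    """Cut out the target area, e.g. the region around a detected face."""
    return [row[left:left + width] for row in frame[top:top + height]]

frame = [[(r + c) % 256 for c in range(64)] for r in range(48)]      # toy 64x48 frame
small = downscale(frame, factor=4)                                   # 16x12 pixels
region = crop(small, top=2, left=4, height=8, width=8)               # 8x8 target area
print(len(frame[0]), "x", len(frame), "->", len(region[0]), "x", len(region))  # 64 x 48 -> 8 x 8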
In an exemplary embodiment, the neural network processor 810 is further configured to: determining the second photographing mode based on a comparison of the first image analysis result with a first threshold.
In an exemplary embodiment, the neural network processor 810 is further configured to: determining the second photographing mode based on the first image analysis result and a parameter acquired by a sensor of the electronic device.
In an exemplary embodiment, the neural network processor 810 is further configured to: determining the second photographing mode based on a comparison of the first image analysis result with a first threshold and a comparison of the parameter acquired by the sensor with a second threshold.
In the exemplary embodiment, central processor 820 is further configured to: and when a preset switching condition is met, switching the first shooting mode to the second shooting mode.
In an exemplary embodiment, the neural network processor 810 is further configured to: performing image analysis on the second image data to obtain a second image analysis result; the second image data is image data acquired in a first shooting mode of the electronic equipment after the first image data; a central processor 820 further configured to: and when the second image analysis result reaches a switching threshold value of the second shooting mode, switching the first shooting mode to the second shooting mode.
In the exemplary embodiment, central processor 820 is further configured to: and when the second image analysis result does not reach the switching threshold value of the second shooting mode, releasing the shooting resources.
In this embodiment, the second shooting mode is determined by processors already present in an existing application processing chip of the electronic device, and the related shooting resources are configured in advance; because this configuration is completed before the switch, no sudden blurring or stuttering occurs during the subsequent shooting mode switching. In other words, the solution is realized with the existing application processing chip and requires no additional hardware.
The following are embodiments of an electronic device of the present application that may be used to implement embodiments of the method of the present application. For details not disclosed in the embodiments of the electronic device of the present application, please refer to the embodiments of the methods of the present application.
Referring to fig. 9, a block diagram of an electronic device 900 provided by an embodiment of the application is shown. The electronic device 900 has functions of implementing the above method examples, and the functions may be implemented by hardware or by hardware executing corresponding software. The electronic device 900 comprises an image processing chip 910 and an application processing chip 920, wherein the image processing chip 910 and the application processing chip 920 are in communication connection with each other; it is to be appreciated that although not referenced in the figures, the electronic device 900, the image processing chip 910, and the application processing chip 920 each further include one or more memories therein configured to store image data, algorithm models, process data, and the like. Illustratively, the image processing chip 910 and the application processing chip 920 may directly perform data interaction, or may store the process data in a memory of the electronic device 900 and then read the process data from the memory for further processing.
An image processing chip 910 configured to:
performing image analysis based on at least first image data acquired in a first shooting mode of the electronic device to determine a second shooting mode;
an application processing chip 920 configured to:
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
In an exemplary embodiment, the image processing chip 910 is further configured to: and carrying out image recognition on the first image data by using a solidified deep learning convolutional neural network algorithm to obtain an image recognition result, and analyzing based on the image recognition result to obtain a first image analysis result.
In an exemplary embodiment, the image processing chip 910 is further configured to: preprocessing the first image data, wherein the preprocessing comprises reducing the image data resolution and/or cutting a target area of the image data;
and carrying out image analysis on the preprocessed first image data to obtain a first image analysis result.
In an exemplary embodiment, the image processing chip 910 is further configured to: determining the second photographing mode based on a comparison of the first image analysis result with a first threshold.
In an exemplary embodiment, the image processing chip 910 is further configured to: determining the second photographing mode based on the first image analysis result and a parameter acquired by a sensor of the electronic device.
In an exemplary embodiment, the image processing chip 910 is further configured to: determining the second photographing mode based on a comparison of the first image analysis result with a first threshold and a comparison of the parameter acquired by the sensor with a second threshold.
In the exemplary embodiment, the application processing chip 920 is further configured to: and when a preset switching condition is met, switching the first shooting mode to the second shooting mode.
In an exemplary embodiment, the image processing chip 910 is further configured to: performing image analysis on the second image data to obtain a second image analysis result; the second image data is image data acquired in a first shooting mode of the electronic equipment after the first image data; an application processing chip 920, further configured to: and when the second image analysis result reaches a switching threshold value of the second shooting mode, switching the first shooting mode to the second shooting mode.
In an exemplary embodiment, the application processing chip 920 is further configured to: and when the second image analysis result does not reach the switching threshold value of the second shooting mode, releasing the shooting resources.
In this embodiment, the image processing chip and the application processing chip in the electronic device cooperate to determine the second shooting mode and to configure the related shooting resources in advance; because the shooting resources required by the shooting mode are configured before the switch, no sudden blurring or stuttering occurs during the subsequent shooting mode switching. Since the image processing chip receives the image data from the image sensor, and/or the parameters collected by other relevant sensors, earlier than the application processing chip does, the image analysis can be completed earlier, which shortens the time needed to determine the second shooting mode. In addition, the image processing chip may be a customized chip in which the relevant algorithms, such as image recognition, are solidified in the hardware bottom layer, which greatly increases the image processing speed and also allows more personalized image processing algorithms to be customized.
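Illustratively, the cooperation between the image processing chip and the application processing chip can be pictured with the following hypothetical Python sketch, in which a queue stands in for the inter-chip link; all names and threshold values are assumptions made for illustration.

from queue import Queue

def image_processing_chip(frames, link: Queue):
    # Runs first, directly on the sensor output (e.g. the solidified CNN).
    for frame in frames:
        person_proportion = frame["person_area"] / frame["area"]
        link.put({"candidate_mode": "portrait" if person_proportion > 0.3 else None,
                  "person_proportion": person_proportion})

def application_processing_chip(link: Queue, configured: set):
    while not link.empty():
        result = link.get()
        mode = result["candidate_mode"]
        if mode and mode not in configured:
            configured.add(mode)   # configure the shooting resources ahead of the switch
            print("preconfigured resources for", mode,
                  "(person proportion %.2f)" % result["person_proportion"])

link = Queue()
frames = [{"person_area": 100_000, "area": 1_000_000},
          {"person_area": 420_000, "area": 1_000_000}]
image_processing_chip(frames, link)
application_processing_chip(link, configured=set())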
The following is another electronic device embodiment of the present application that may be used to implement a method embodiment of the present application. For details not disclosed in the embodiments of the electronic device of the present application, please refer to the embodiments of the methods of the present application.
Referring to fig. 10, a block diagram of an electronic device 100 provided by an embodiment of the application is shown. The electronic device 100 has the functions of implementing the above method examples, and the functions may be implemented by hardware or by hardware executing corresponding software. The electronic device 100 includes an image processing chip 110 and an application processing chip 120. The image processing chip 110 includes a neural network processor 111, an image signal processor 112, a first interface 113, and a system bus 114; the neural network processor 111, the image signal processor 112, and the first interface 113 are respectively connected to the system bus 114 and communicate with each other through the system bus 114. The application processing chip 120 includes a second interface 121, a central processing unit 122, and a system bus 123; the second interface 121 and the central processing unit 122 are respectively connected to the system bus 123 and communicate with each other through the system bus 123. The first interface 113 and the second interface 121 are communicatively connected to implement communication between the image processing chip 110 and the application processing chip 120. Illustratively, the first interface 113 and the second interface 121 may be Mobile Industry Processor Interfaces (MIPI), which can receive image data such as RAW data. It is to be appreciated that, although not shown in the figures, the electronic device 100, the image processing chip 110, and the application processing chip 120 each include one or more memories configured to store image data, algorithm models, process data, and the like. Illustratively, data interaction may be performed directly between the image processing chip 110 and the application processing chip 120, between the neural network processor 111 and the image signal processor 112, and between the neural network processor 111 and the central processing unit 122, or the process data may be stored in a memory of the image processing chip 110, the application processing chip 120, and/or the electronic device 100 and then read from that memory for further processing.
A neural network processor 111 in the image processing chip 110 configured to:
performing image analysis based on at least first image data acquired in a first shooting mode of the electronic device to determine a second shooting mode;
a central processor 122 in the application processing chip 120, configured to:
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
In an exemplary embodiment, the neural network processor 111 is further configured to: and carrying out image recognition on the first image data by utilizing a solidified deep learning convolutional neural network algorithm to obtain an image recognition result, and analyzing based on the image recognition result to obtain the first image analysis result.
In an exemplary embodiment, the image processing chip 110 further comprises the image signal processor 112, configured to: preprocess the first image data, wherein the preprocessing comprises reducing the image data resolution and/or clipping a target area of the image data;
the neural network processor 111 configured to: and carrying out image analysis on the preprocessed first image data to obtain a first image analysis result.
In an exemplary embodiment, the neural network processor 111 is further configured to: determining the second photographing mode based on a comparison of the first image analysis result with a first threshold.
In an exemplary embodiment, the neural network processor 111 is further configured to: determining the second photographing mode based on the first image analysis result and a parameter acquired by a sensor of the electronic device.
In an exemplary embodiment, the neural network processor 111 is further configured to: determining the second photographing mode based on a comparison of the first image analysis result with a first threshold and a comparison of the parameter acquired by the sensor with a second threshold.
In an exemplary embodiment, the central processor 122 is further configured to: and when a preset switching condition is met, switching the first shooting mode to the second shooting mode.
In an exemplary embodiment, the neural network processor 111 is further configured to: performing image analysis on the second image data to obtain a second image analysis result; the second image data is image data acquired in a first shooting mode of the electronic equipment after the first image data; a central processor 122, further configured to: and when the second image analysis result reaches a switching threshold value of the second shooting mode, switching the first shooting mode to the second shooting mode.
In an exemplary embodiment, the central processor 122 is further configured to: and when the second image analysis result does not reach the switching threshold value of the second shooting mode, releasing the shooting resources.
In this embodiment, the second shooting mode is determined through cooperation between the image processing chip in the electronic device and a specific processor in the application processing chip, and the related shooting resources are configured in advance. Since the image processing chip receives the image data from the image sensor, and/or the parameters collected by other relevant sensors, earlier than the application processing chip does, the image analysis can be completed earlier, which shortens the time needed to determine the shooting mode to be preconfigured. In addition, the image processing chip may be a customized chip in which the relevant algorithms, such as image recognition, are solidified in the hardware bottom layer of the neural network processor, which greatly increases the image processing speed and also allows more personalized image processing algorithms to be customized.
The chip, the electronic device and the method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Referring to fig. 11, a block diagram of an electronic device 130 provided in an embodiment of the present application is shown. The electronic device 130 can be used to implement the functions of the image shooting method described above. The electronic device 130 may include: a processor 131, a receiver 132, a transmitter 133, a memory 134, and a bus 135.
The processor 131 includes one or more processing cores, and performs various functional applications and information processing by running software programs and modules.
The receiver 132 and the transmitter 133 may be implemented as one communication component, which may be a communication chip.
The memory 134 is coupled to the processor 131 by a bus 135.
The memory 134 may be used for storing a computer program, which the processor 131 is used for executing to implement the respective steps in the above-described method embodiments.
Further, memory 134 may be implemented by any type or combination of volatile or non-volatile storage devices, including but not limited to: RAM (Random-Access Memory) and ROM (Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash Memory or other solid state storage technology, CD-ROM (Compact Disc Read-Only Memory), DVD (Digital Video Disc) or other optical storage, magnetic tape cartridge, magnetic tape, magnetic disk storage or other magnetic storage devices.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor of an electronic device, implements the above-described image shooting method. Optionally, the computer-readable storage medium may include: ROM (Read-Only Memory), RAM (Random-Access Memory), SSD (Solid State Drive), or an optical disk. The random access memory may include ReRAM (Resistive Random Access Memory) and DRAM (Dynamic Random Access Memory).
In an exemplary embodiment, a computer program product or computer program is also provided, comprising computer instructions stored in a computer-readable storage medium. The processor of the electronic device reads the computer instructions from the computer-readable storage medium and executes them, so that the electronic device performs the above image shooting method.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only show an exemplary possible execution sequence among the steps, and in some other embodiments, the steps may also be executed out of the numbering sequence, for example, two steps with different numbers are executed simultaneously, or two steps with different numbers are executed in a reverse order to the illustrated sequence, which is not limited in this application.
The above description is only exemplary of the application and should not be taken as limiting the application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the application should be included in the protection scope of the application.

Claims (15)

1. An image shooting method applied to electronic equipment is characterized by comprising the following steps:
acquiring first image data based on a first shooting mode of the electronic equipment;
performing image analysis on the first image data to obtain a first image analysis result;
determining a second photographing mode based on at least the first image analysis result;
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
2. The method of claim 1, wherein performing image analysis on the first image data to obtain a first image analysis result comprises:
and carrying out image recognition on the first image data by utilizing a solidified deep learning convolutional neural network algorithm to obtain an image recognition result, and analyzing based on the image recognition result to obtain the first image analysis result.
3. The method of claim 1, further comprising:
preprocessing the first image data, wherein the preprocessing comprises reducing the image data resolution and/or clipping a target area of the image data;
and carrying out image analysis on the preprocessed first image data to obtain a first image analysis result.
4. The method of claim 1, wherein determining a second capture mode based on at least the first image analysis result comprises:
determining the second photographing mode based on a comparison of the first image analysis result with a first threshold.
5. The method according to any one of claims 1 to 4, wherein the first image analysis result comprises a shooting subject proportion and/or a color component proportion.
6. The method of claim 1, wherein determining a second capture mode based on at least the first image analysis result comprises:
determining the second photographing mode based on the first image analysis result and a parameter acquired by a sensor of the electronic device.
7. The method of claim 6, comprising:
determining the second photographing mode based on a comparison of the first image analysis result with a first threshold and a comparison of the parameter acquired by the sensor with a second threshold.
8. The method according to claim 6 or 7, wherein the first image analysis result comprises a subject proportion and/or a color component proportion; the parameters collected by the sensor comprise one or more of environment brightness value, angular speed information and shooting subject distance.
9. The method of claim 1, wherein the camera resources comprise one or more of cache resources, algorithm model resources, and pipeline model resources.
10. The method of claim 1, further comprising:
and when a preset switching condition is met, switching the first shooting mode to the second shooting mode.
11. The method of claim 10, wherein switching the first photographing mode to the second photographing mode when a preset switching condition is satisfied comprises:
performing image analysis on the second image data to obtain a second image analysis result; the second image data is image data acquired in a first shooting mode of the electronic equipment after the first image data;
and when the second image analysis result reaches a switching threshold value of the second shooting mode, switching the first shooting mode to the second shooting mode.
12. The method of claim 10, wherein the second image analysis result comprises a subject proportion and/or a color component proportion.
13. The method according to any one of claims 1 to 12, wherein
the acquiring of first image data based on the first shooting mode of the electronic device, the performing of image analysis on the first image data to obtain the first image analysis result, and the determining of the second shooting mode based on at least the first image analysis result are executed on a first processor side;
and the configuring of the electronic device to shoot images in the second shooting mode based on the shooting resources when the first shooting mode is switched to the second shooting mode is executed on a second processor side.
14. An application processing chip applied to an electronic device, the application processing chip comprising:
a neural network processor configured to:
performing image analysis based on at least first image data acquired in a first shooting mode of the electronic device to determine a second shooting mode;
the central processor configured to:
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
15. An electronic device comprising an image processing chip and an application processing chip, wherein,
the image processing chip is configured to:
performing image analysis based on at least first image data acquired in a first shooting mode of the electronic device to determine a second shooting mode;
the application processing chip is configured to:
configuring shooting resources required by the second shooting mode, wherein the second shooting mode is different from the first shooting mode;
when the first shooting mode is switched to the second shooting mode, the electronic equipment is configured to shoot images in the second shooting mode based on the shooting resources.
CN202111166738.1A 2021-09-30 2021-09-30 Image shooting method, application processing chip and electronic equipment Pending CN115942113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111166738.1A CN115942113A (en) 2021-09-30 2021-09-30 Image shooting method, application processing chip and electronic equipment

Publications (1)

Publication Number Publication Date
CN115942113A 2023-04-07

Family

ID=86554502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111166738.1A Pending CN115942113A (en) 2021-09-30 2021-09-30 Image shooting method, application processing chip and electronic equipment

Country Status (1)

Country Link
CN (1) CN115942113A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination