WO2019184719A1 - A photographing method and apparatus - Google Patents

A photographing method and apparatus

Info

Publication number
WO2019184719A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
image data
preset
target
jitter
Prior art date
Application number
PCT/CN2019/078156
Other languages
English (en)
French (fr)
Inventor
徐晓 (Xu Xiao)
邱海 (Qiu Hai)
Original Assignee
青岛海信移动通信技术股份有限公司 (Qingdao Hisense Mobile Communications Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201810274308.3A (CN108322658B)
Priority claimed from CN201810274313.4A (CN108668075A)
Application filed by 青岛海信移动通信技术股份有限公司 (Qingdao Hisense Mobile Communications Technology Co., Ltd.)
Publication of WO2019184719A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules

Definitions

  • The present disclosure relates to the technical field of communications, and in particular to a photographing method and apparatus.
  • A mobile terminal is usually equipped with a camera for taking photos and videos. Since the camera device is small and its photosensitive area is small, the user's hand may shake during photographing, blurring the collected image data. This is especially true at night or in dimly lit environments, where insufficient light lengthens the camera's exposure time, so that even slight hand jitter can blur the collected image data.
  • DIS: Digital Image Stabilization
  • EIS: Electronic Image Stabilization
  • OIS: Optical Image Stabilization
  • Both DIS and EIS require a large amount of data frame cropping, which increases the processor load, and both are used only for video anti-shake.
  • OIS requires additional independent devices to detect the jitter of the mobile terminal and to adjust the lens optics to compensate, thereby offsetting the impact of jitter, but at high cost.
  • Embodiments of the present disclosure provide a method and apparatus for photographing to solve the problem of high cost of photographing anti-shake.
  • a photographing method, including:
  • collecting at least two frames of candidate image data when a photographing operation is performed; extracting, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match; and splicing the target region data into target image data.
  • In some embodiments, extracting, from the at least two frames of candidate image data, the at least two target region data whose blur degrees meet the preset blur condition, whose partial regions overlap, and whose regions match includes: dividing each frame of candidate image data into at least two candidate region data according to a preset segmentation manner; and calculating a blur degree for each candidate region data;
  • querying at least two candidate region data whose blur degrees meet the preset blur condition and whose regions match, as feature region data, includes: selecting at least two candidate region data whose sum of blur degrees is the smallest and whose regions match, as the feature region data.
  • the segmentation manner includes at least one of the following: dividing into a left half and a right half; and dividing into an upper half and a lower half.
  • In some embodiments, splicing the target region data into the target image data includes: extracting feature points from each target region data; matching the feature points of the target region data using a preset first matching manner to obtain successfully matched feature points; calculating a transformation between the successfully matched feature points; and splicing the target region data into the target image data according to the transformation.
  • Matching the feature points of the target region data using the preset first matching manner to obtain the successfully matched feature points includes: generating descriptors for the feature points of the target region data; calculating the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors; calculating the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance; and determining that a feature point is successfully matched when the ratio is less than a preset threshold.
  • Before the target region data are spliced into the target image data, the method further includes: removing wrongly matched feature points from the successfully matched feature points using a preset second matching manner.
  • In some embodiments, splicing the target region data into the target image data further includes: performing downsampling processing on the feature region data according to preset sampling parameters; and, before splicing the target region data into the target image data according to the transformation, converting the transformation according to the sampling parameters.
  • In some embodiments, the method further includes: calling a preset sensor to measure jitter data when the photographing operation is performed; and determining whether the jitter data meets a preset jitter condition. In this case, extracting the at least two target region data includes: if the jitter condition is met, extracting, from the at least two frames of candidate image data, the at least two target region data whose blur degrees meet the preset blur condition, whose partial regions overlap, and whose regions match.
  • In some embodiments, the method further includes: determining whether the splicing area in the target image data meets a preset splicing condition; and outputting the target image data if the splicing condition is met.
  • The method further includes: outputting candidate image data that meets a preset image condition if the jitter condition is not met or the splicing condition is not met.
  • Outputting the candidate image data that meets the preset image condition includes: outputting the candidate image data having the smallest blur degree.
  • In some embodiments, there are a plurality of jitter data, and determining whether the jitter data meets the preset jitter condition includes: calculating a plurality of unit jitter values using the plurality of jitter data; calculating the average of the plurality of unit jitter values as an overall jitter value; and determining that the preset jitter condition is met if the overall jitter value is within a preset jitter range, and that it is not met otherwise.
  • Determining whether the splicing area in the target image data meets the preset splicing condition includes: determining the splicing area in the target image data; calculating a first gray value of the pixels on one side of the splicing area and a second gray value of the pixels on the other side; calculating the gray difference between the first gray value and the second gray value; and determining that the preset splicing condition is met if the gray difference is less than a preset threshold, and that it is not met otherwise.
  • According to another aspect, a mobile device is provided, including a camera, a memory, and a processor, wherein:
  • the memory, in communication with the camera and the processor, is configured to store data collected by the camera and computer instructions; and the processor is configured to execute the computer instructions to: acquire at least two frames of candidate image data collected by the camera when a photographing operation is performed; extract, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match; and splice the target region data into target image data.
  • According to another aspect, a photographing apparatus is provided, including:
  • a candidate image data collection module configured to collect at least two frames of candidate image data when a photographing operation is performed;
  • a target region data extraction module configured to extract, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match; and
  • a target region data splicing module configured to splice the target region data into target image data.
  • In some embodiments, the target region data extraction module includes:
  • a candidate image data segmentation sub-module configured to divide each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
  • a blur degree calculation sub-module configured to calculate a blur degree for each candidate region data;
  • a feature region data query sub-module configured to query at least two candidate region data whose blur degrees meet the preset blur condition and whose regions match, as feature region data; and
  • a data extraction sub-module configured to, if the feature region data belong to at least two frames of candidate image data, extract target region data containing at least the feature region data from the candidate image data to which the feature region data belong.
  • In some embodiments, the feature region data query sub-module includes:
  • a candidate region data selection unit configured to select, for each segmentation manner, the candidate region data with the smallest blur degree from the candidate region data in the same region;
  • a sum calculation unit configured to calculate, for each segmentation manner, the sum of the blur degrees of the at least two candidate region data whose regions match;
  • a blur degree comparison unit configured to compare the sums of blur degrees of all segmentation manners; and
  • a sum selection unit configured to select the at least two candidate region data whose sum of blur degrees is the smallest and whose regions match, as the feature region data.
  • The segmentation manner includes at least one of the following: dividing into a left half and a right half; and dividing into an upper half and a lower half.
  • In some embodiments, the target region data splicing module includes:
  • a feature point extraction sub-module configured to extract feature points from each target region data;
  • a feature point matching sub-module configured to match the feature points of the target region data using a preset first matching manner to obtain successfully matched feature points;
  • a transformation calculation sub-module configured to calculate a transformation between the successfully matched feature points; and
  • a transformation splicing sub-module configured to splice the target region data into target image data according to the transformation.
  • In some embodiments, the feature point matching sub-module includes:
  • a descriptor generation unit configured to generate descriptors for the feature points of the target region data;
  • a distance calculation unit configured to calculate the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
  • a ratio calculation unit configured to calculate the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance; and
  • a matching determination unit configured to determine that a feature point is successfully matched when the ratio is less than a preset threshold.
  • In some embodiments, the target region data splicing module further includes:
  • an error point removal sub-module configured to remove wrongly matched feature points from the successfully matched feature points using a preset second matching manner.
  • In some embodiments, the target region data splicing module further includes:
  • a downsampling sub-module configured to perform downsampling processing on the feature region data according to preset sampling parameters; and
  • a transformation conversion sub-module configured to convert the transformation according to the sampling parameters.
  • FIG. 1 is a flow chart showing the steps of a photographing method according to an embodiment of the present disclosure
  • FIG. 2 is a flow chart showing the steps of another photographing method according to an embodiment of the present disclosure.
  • FIGS. 3A to 3H illustrate an example of a photographing method according to an embodiment of the present disclosure.
  • FIG. 4 is a flow chart showing the steps of a photographing method according to another embodiment of the present disclosure.
  • FIG. 5 is a flow chart showing the steps of another photographing method according to another embodiment of the present disclosure.
  • FIGS. 6A to 6G illustrate an example of a photographing method according to another embodiment of the present disclosure.
  • FIG. 7 is a structural block diagram of a photographing apparatus according to an embodiment of the present disclosure.
  • FIG. 8 is a structural block diagram of a photographing apparatus according to another embodiment of the present disclosure.
  • FIG. 9 is a block diagram showing the structure of a mobile device according to another embodiment of the present disclosure.
  • Referring to FIG. 1, a flow chart of the steps of a photographing method according to an embodiment of the present disclosure is shown, which may include the following steps:
  • Step 101: When performing a photographing operation, collect at least two frames of candidate image data.
  • Step 102: Extract, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match.
  • Step 103: Splice the target region data into target image data.
  • The embodiments of the present disclosure may be applied to a mobile terminal, for example, a mobile phone, a tablet computer, a wearable device (such as VR (Virtual Reality) glasses, a VR helmet, or a smart watch), and the like; the embodiments of the present disclosure do not limit this.
  • the mobile terminal is configured with one or more cameras for taking photos and recordings.
  • The camera may be disposed on the back of the mobile terminal (also referred to as a rear camera), or may be disposed on the front side of the mobile terminal (also referred to as a front camera); this is not limited in the embodiments of the present disclosure.
  • The mobile terminal's operating system may be Android, iOS, Windows Phone, Windows, etc., and can support a variety of applications that can call the camera, such as camera applications, shopping applications, instant messaging applications, and the like.
  • These applications can perform related business operations by calling a camera.
  • For example, a camera application can take photos for post-processing (such as filters, cropping, adding patterns, etc.) and store them in the gallery.
  • A shopping application can call the camera to take photos of products.
  • An instant messaging application can call the camera to take a picture and send the collected image data as an instant message, and so on.
  • In the embodiments of the present disclosure, the camera can start a mode such as ZSL (Zero Shutter Lag) and, when performing a photographing operation, perform operations such as exposure and focusing and acquire at least two frames of candidate image data.
  • Conditions are set in advance for the blur degree. By comparing the blur degrees of each frame of candidate image data, appropriate regions are extracted from different candidate image data as target region data, and the overlapping parts of the target region data are aligned and spliced into target image data, so that the image content across the target regions is coherent and complete. Other post-processing (such as cropping into a rectangle, adjusting to uniform contrast and brightness, etc.) may then be performed, or the result may be displayed to the user.
  • The embodiments of the present disclosure collect at least two frames of candidate image data; extract, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match; and splice the target region data into target image data. The overlapping of partial regions enables the target region data to be spliced, taking into account the correlation between the collected candidate image data; the matching of the regions in which the target region data are located ensures the content integrity of the spliced image; and filtering appropriate target region data through the blur condition ensures the sharpness of the spliced image data and reduces the impact of jitter. No additional independent devices are required, which reduces cost, and the splicing operation is simple, which increases processing speed and saves time.
  • Referring to FIG. 2, a flow chart of the steps of another photographing method according to an embodiment of the present disclosure is shown, which may include the following steps:
  • Step 201: When performing a photographing operation, collect at least two frames of candidate image data.
  • Step 202: Divide each frame of candidate image data into at least two candidate region data according to a preset segmentation manner.
  • In a specific implementation, one or more segmentation manners may be preset, and each frame of candidate image data is segmented according to these segmentation manners, thereby dividing each frame of candidate image data into at least two candidate region data.
  • The segmentation manner includes at least one of the following: dividing into a left half and a right half; and dividing into an upper half and a lower half.
  • In the former (left-right segmentation), the candidate image data is divided into left and right candidate region data along the center line in the vertical direction; in the latter (upper-lower segmentation), the candidate image data is divided into upper and lower candidate region data along the center line in the horizontal direction.
  • The above segmentation manners are only examples. When implementing the embodiments of the present disclosure, other segmentation manners may be set according to actual conditions, for example, dividing the candidate image data into upper, middle, and lower candidate region data, or into left, middle, and right candidate region data, and the like. Those skilled in the art may also adopt other segmentation manners according to actual needs; the embodiments of the present disclosure do not limit this.
  • Step 203: Calculate the blur degree for each candidate region data.
  • In a specific implementation, the blur degree may be calculated for each candidate region data of each frame of candidate image data.
  • The so-called blur degree refers to the degree of blurring of the candidate region data.
  • The blur degree can be measured by image grayscale variation, image gradient value, image entropy, and the like.
  • In general, the blur degree is negatively correlated with image grayscale variation, image gradient value, and image entropy: the larger the blur degree, the smaller the image grayscale variation, the smaller the image gradient value, and the smaller the image entropy; conversely, the smaller the blur degree, the larger the image grayscale variation, the larger the image gradient value, and the larger the image entropy.
  • A spectrum function, usually obtained based on the Fourier transform, may be used for the calculation. Gradient functions may also be used, such as the Tenengrad function, the energy gradient function, the Brenner function, the variance function, and so on. An entropy function may also be used, based on the premise that the entropy of image data at an appropriate focal distance is greater than the entropy of image data at an inappropriate focal distance (too short or too long).
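  • As an illustration of this step (not part of the original disclosure), the following sketch computes a blur degree with the Tenengrad gradient function named above, assuming OpenCV and NumPy; mapping sharpness to a blur degree via a reciprocal is an assumption chosen for the example, since the disclosure fixes no particular formula.

```python
import cv2
import numpy as np

def blur_degree(region: np.ndarray) -> float:
    """Blur degree of a grayscale region: larger means blurrier (assumed mapping)."""
    gx = cv2.Sobel(region, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(region, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
    sharpness = np.mean(gx ** 2 + gy ** 2)             # Tenengrad focus measure
    return 1.0 / (sharpness + 1e-9)                    # sharp image -> small blur degree
```

  • This matches the negative correlation described above: the larger the gradient values, the smaller the returned blur degree.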
  • Step 204: Query at least two candidate region data whose blur degrees meet the preset blur condition and whose regions match, as the feature region data.
  • In a specific implementation, at least two candidate region data whose blur degrees meet the preset blur condition and whose regions match are selected as the feature region data.
  • The matching of the regions may mean that the selected candidate region data cover all the regions into which the segmentation manner divides the image, so that the selected candidate region data can logically constitute complete image data.
  • For example, if the candidate image data is divided into upper and lower candidate region data along the center line in the horizontal direction, the selected candidate region data include candidate region data of the upper half and candidate region data of the lower half.
  • Similarly, if the candidate image data is divided into left and right candidate region data along the center line in the vertical direction, the selected candidate region data include candidate region data of the left half and candidate region data of the right half.
  • In an embodiment of the present disclosure, step 204 may include the following sub-steps:
  • Sub-step S11: For each segmentation manner, select the candidate region data with the smallest blur degree from the candidate region data in the same region.
  • Sub-step S12: For each segmentation manner, calculate the sum of the blur degrees of the at least two candidate region data whose regions match.
  • Sub-step S13: Compare the sums of blur degrees of all segmentation manners.
  • Sub-step S14: Select the at least two candidate region data whose sum of blur degrees is the smallest and whose regions match, as the feature region data.
  • That is, for each segmentation manner, the candidate region data with the smallest blur degree is selected from the candidate region data of each region segmented from all the candidate image data, and the sum of the blur degrees of the selected candidate region data of all the regions is calculated as the blur degree of that segmentation manner. The sums of blur degrees of all segmentation manners are then compared, and the candidate region data corresponding to the segmentation manner with the smallest sum of blur degrees are selected as the feature region data.
  • For example, suppose the candidate image data is divided into upper and lower candidate region data along the center line in the horizontal direction, and also into left and right candidate region data along the center line in the vertical direction. Then, the candidate region data with the smallest blur degree is selected from all the upper-half candidate region data, the candidate region data with the smallest blur degree is selected from all the lower-half candidate region data, and the sum of their blur degrees is calculated. Likewise, the candidate region data with the smallest blur degree is selected from all the left-half candidate region data, the candidate region data with the smallest blur degree is selected from all the right-half candidate region data, and the sum of their blur degrees is calculated. The two sums are compared, and the candidate region data corresponding to the smaller sum are used as the feature region data, as in the sketch below.
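  • A minimal sketch of sub-steps S11 to S14 follows, assuming the hypothetical blur_degree() helper above, grayscale frames of equal size, and the two example segmentation manners (upper/lower and left/right halves) from the text.

```python
def pick_feature_regions(frames):
    h, w = frames[0].shape[:2]
    segmentations = {
        "upper_lower": [lambda f: f[: h // 2], lambda f: f[h // 2 :]],
        "left_right":  [lambda f: f[:, : w // 2], lambda f: f[:, w // 2 :]],
    }
    best = None
    for name, crops in segmentations.items():
        total, picks = 0.0, []
        for crop in crops:                            # S11: per region, pick the
            scored = [(blur_degree(crop(f)), i)       # frame with the smallest
                      for i, f in enumerate(frames)]  # blur degree
            score, frame_idx = min(scored)
            total += score                            # S12: sum per segmentation
            picks.append((frame_idx, crop))
        if best is None or total < best[0]:           # S13/S14: keep smallest sum
            best = (total, name, picks)
    return best  # (sum of blur degrees, segmentation manner, chosen regions)
```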
  • Step 205: If the feature region data belong to at least two frames of candidate image data, extract target region data containing at least the feature region data from the candidate image data to which the feature region data belong.
  • For each feature region data, the candidate image data to which it belongs can be determined.
  • If the feature region data belong to at least two frames of candidate image data, target region data containing at least the feature region data may be extracted from the candidate image data to which the feature region data belong, so that the target region data have portions with repeated content and can therefore be spliced.
  • For example, suppose the selected feature region data A is the candidate region data located in the upper half of candidate image data A, and the selected feature region data B is the candidate region data located in the lower half of candidate image data B. Then, data covering two-thirds of the total area and containing feature region data A may be extracted from candidate image data A as one target region data, and data covering two-thirds of the total area and containing feature region data B may be extracted from candidate image data B as another target region data.
  • Conversely, if the feature region data all belong to the same frame of candidate image data, that candidate image data can be directly output.
  • Step 206: Perform downsampling processing on the feature region data according to preset sampling parameters.
  • Downsampling, also known as image reduction or subsampling, processes the feature region data according to preset sampling parameters (such as reduction, rotation, translation, etc.), which can reduce the amount of processing and improve the processing speed.
  • For an image I of size M×N, downsampling it by a factor of s yields an image of resolution (M/s)×(N/s), where s is a common divisor of M and N. Each s×s window of the original image becomes one pixel, whose value is the average of all the pixels in the window: P(i, j) = (1/s²) · Σ_{u=0}^{s-1} Σ_{v=0}^{s-1} I(s·i + u, s·j + v).
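  • A minimal sketch of the s-fold average downsampling just described, assuming a grayscale NumPy image; rows and columns not divisible by s are cropped for simplicity.

```python
import numpy as np

def downsample(img: np.ndarray, s: int) -> np.ndarray:
    m, n = img.shape[:2]
    cropped = img[: m - m % s, : n - n % s]         # make sides divisible by s
    blocks = cropped.reshape(m // s, s, n // s, s)  # one s*s window per block
    return blocks.mean(axis=(1, 3))                 # window average per output pixel
```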
  • Step 207: Extract feature points from each target region data.
  • In a specific implementation, color features, texture features, shape features, spatial relationship features, and the like may be extracted from the target region data as feature points.
  • Step 208: Match the feature points of the target region data using a preset first matching manner to obtain successfully matched feature points.
  • In a specific implementation, the feature points between the two target region data may be matched using the preset first matching manner.
  • For example, descriptors such as SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), or BRISK (Binary Robust Invariant Scalable Keypoints) may be generated for the feature points of the target region data; the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors are calculated, the ratio between them is computed, and a feature point is determined to be successfully matched when the ratio is less than a preset threshold.
  • The foregoing first matching manner is only an example. When implementing the embodiments of the present disclosure, other first matching manners may be set according to actual conditions, for example, matching by the nearest-neighbor distance between the descriptors alone, and so on. Those skilled in the art may also adopt other first matching manners according to actual needs; the embodiments of the present disclosure do not limit this.
  • Step 209: Remove wrongly matched feature points from the successfully matched feature points using a preset second matching manner.
  • In a specific implementation, a second matching manner such as the RANSAC (Random Sample Consensus) algorithm can be used to remove the wrongly matched feature points.
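  • A minimal sketch of steps 208 and 209 follows, assuming OpenCV's ORB descriptors (one of the options named above), the nearest-neighbor/second-nearest-neighbor ratio test as the first matching manner, and RANSAC (via cv2.findHomography) as the second; the 0.75 ratio threshold is an illustrative value, not one fixed by the disclosure.

```python
import cv2
import numpy as np

def match_regions(region_a, region_b, ratio_thresh=0.75):
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(region_a, None)
    kp_b, des_b = orb.detectAndCompute(region_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        # Step 208: ratio test on nearest vs. second-nearest neighbor distance.
        if len(pair) == 2 and pair[0].distance < ratio_thresh * pair[1].distance:
            good.append(pair[0])
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Step 209: RANSAC marks wrongly matched points as outliers; the returned
    # homography H serves as the transformation used in the following steps.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask
```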
  • Step 210: Calculate the transformation between the successfully matched feature points.
  • Since part of the content of the two target region data is the same, the overlapping content of one target region data can be mapped onto that of the other (e.g., into the same coordinate system). Therefore, after the feature points are successfully matched, the transformation between these feature points (such as a transformation matrix) can be calculated and used as the transformation between the target region data.
  • Step 211: Convert the transformation according to the sampling parameters.
  • Since the transformation is calculated after the downsampling process, the sampling parameters may be used to convert it, restoring it to the transformation that applies before the downsampling process.
  • Step 212: Splice the target region data into target image data according to the transformation.
  • In a specific implementation, one of the target region data is transformed according to the transformation so as to be aligned with the other target region data, thereby determining the positional relationship between the two target region data, and the two target region data are spliced into the target image data.
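  • A minimal sketch of steps 211 and 212 follows, continuing the sketches above; it assumes the homography was estimated on regions downsampled by a factor s, and that filling uncovered pixels by overwriting is an acceptable stand-in for the disclosure's splicing.

```python
import cv2
import numpy as np

def splice(region_a, region_b, H_small, s):
    # Step 211: restore the transformation to full resolution; scaling a
    # homography by s amounts to conjugating it with a scaling matrix.
    S = np.diag([float(s), float(s), 1.0])
    H = S @ H_small @ np.linalg.inv(S)
    h, w = region_b.shape[:2]
    # Step 212: warp region_a into region_b's coordinate system, then fill
    # the pixels it does not cover from region_b.
    canvas = cv2.warpPerspective(region_a, H, (w, h))
    uncovered = canvas == 0
    canvas[uncovered] = region_b[uncovered]
    return canvas
```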
  • In this way, the blur degree of the target image data is less than or equal to that of any frame of candidate image data.
  • For example, suppose the mobile terminal caches n frames of candidate image data at one time (n is a positive integer), denoted I_1, I_2, ..., I_n, including the two frames of candidate image data shown in FIG. 3A and FIG. 3B.
  • The shooting times of the n frames of candidate image data are close; although the photographed content differs somewhat, the mobile terminal generally does not move much in a short time, so the difference in content is generally small.
  • If the feature region data all belong to the same frame, that candidate image data can be output directly. Suppose instead that the selected feature region data are p_j1 and p_k2, where p_j1 belongs to the candidate image data shown in FIG. 3A and p_k2 belongs to the candidate image data shown in FIG. 3B; that is, p_j1 and p_k2 belong to different candidate image data (i.e., j ≠ k).
  • As shown in FIG. 3C, a partial image larger than one half of the image and containing p_j1 is cropped from I_j (i.e., the box section of FIG. 3A) as one target region data; as shown in FIG. 3D, a partial image larger than one half of the image and containing p_k2 is cropped from I_k (i.e., the box part in FIG. 3B) as the other target region data.
  • Feature points are extracted from the two target region data and described by descriptors D_j and D_k, the descriptors are matched, and wrongly matched pairs are removed from the successfully matched D_j and D_k. The transformation between the target region data is then calculated, and the target region data are spliced into the target image data.
  • Referring to FIG. 4, a flow chart of the steps of a photographing method according to another embodiment of the present disclosure is shown, which may include the following steps:
  • Step 401: When performing a photographing operation, collect at least two frames of candidate image data and call a preset sensor to measure jitter data.
  • The embodiments of the present disclosure may be applied to a mobile terminal, for example, a mobile phone, a tablet computer, a wearable device (such as VR (Virtual Reality) glasses, a VR helmet, or a smart watch), and the like; the embodiments of the present disclosure do not limit this.
  • In a specific implementation, the mobile terminal is configured with one or more cameras and one or more sensors.
  • The camera can be used for taking photos and videos, and may be disposed on the back of the mobile terminal (also referred to as a rear camera) or on the front side of the mobile terminal (also referred to as a front camera); the embodiments of the present disclosure do not limit this.
  • The sensor may include a gyroscope, an acceleration sensor, etc., and can measure data such as angular velocity and acceleration as jitter data indicating the jitter of the mobile terminal.
  • The mobile terminal's operating system may be Android, iOS, Windows Phone, Windows, etc., and can support a variety of applications that can call the camera, such as camera applications, shopping applications, instant messaging applications, and the like.
  • These applications can perform related business operations by calling a camera.
  • For example, a camera application can take photos for post-processing (such as filters, cropping, adding patterns, etc.) and store them in the gallery; a shopping application can call the camera to take photos of products; and an instant messaging application can call the camera to take a picture and send the collected image data as an instant message, and so on.
  • In the embodiments of the present disclosure, the camera may activate a mode such as ZSL and, when performing a photographing operation, perform operations such as exposure and focusing and acquire at least two frames of candidate image data.
  • At the same time, the sensor is called to measure jitter data, from which the degree of jitter of the mobile terminal while acquiring the image data can be calculated.
  • Step 402: Determine whether the jitter data meets a preset jitter condition.
  • In a specific implementation, the jitter condition may be preset, and the jitter data measured during photographing is used to determine whether the jitter condition is met.
  • Meeting the jitter condition indicates that the mobile terminal jittered little when taking the picture, within the allowed jitter range; not meeting the jitter condition indicates that the mobile terminal jittered a lot when taking the picture, exceeding the allowed jitter amplitude.
  • In an embodiment of the present disclosure, there are a plurality of jitter data, and step 402 may include the following sub-steps:
  • Sub-step S41: Calculate a plurality of unit jitter values using the plurality of jitter data.
  • Sub-step S42: Calculate the average of the plurality of unit jitter values as an overall jitter value.
  • Sub-step S43: Determine whether the overall jitter value is within a preset jitter range; if so, perform sub-step S44; if not, perform sub-step S45.
  • Sub-step S44: Determine that the preset jitter condition is met.
  • Sub-step S45: Determine that the preset jitter condition is not met.
  • Since photographing acquires the at least two frames of candidate image data in a relatively short period of time, the sensor may collect a plurality of (at least two) jitter data during the same period.
  • The jitter level of each jitter data during this period can be calculated as a unit jitter value, and the average of the unit jitter values is taken as the overall jitter value for the period.
  • For example, the modulus of each sensor reading can be used as its unit jitter value. Alternatively, weights may be configured for the unit jitter values at different times and the weighted sum of the unit jitter values used as the overall jitter value, and so on; the embodiments of the present disclosure do not limit this.
  • The jitter range can be set in a targeted manner. If the overall jitter value is within the jitter range, the mobile terminal jittered little when taking the picture and the jitter condition is met; if the overall jitter value is not within the jitter range, the mobile terminal jittered a lot when taking the picture and the jitter condition is not met.
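  • A minimal sketch of sub-steps S41 to S45 follows, assuming the jitter data are 3-axis accelerometer samples and, as suggested above, that the modulus of each sample serves as its unit jitter value; the jitter range bound is an illustrative parameter.

```python
import numpy as np

def jitter_condition_met(samples: np.ndarray, max_jitter: float = 0.5) -> bool:
    unit_values = np.linalg.norm(samples, axis=1)  # S41: modulus per sample
    overall = unit_values.mean()                   # S42: average as overall value
    return overall <= max_jitter                   # S43-S45: within jitter range?
```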
  • Step 403: If the jitter condition is met, extract, from the candidate image data, at least two target region data whose blur degrees meet the preset blur condition, whose partial regions overlap, and whose regions match.
  • If the jitter condition is met, the probability that the candidate image data are clear is high. Therefore, by comparing the blur degrees of each frame of candidate image data against the conditions set in advance for the blur degree (i.e., the blur condition), appropriate regions are extracted from different candidate image data as target region data.
  • If the jitter condition is not met, candidate image data that conforms to a preset image condition (such as focus, blur degree, contrast, brightness, etc.) is output for other post-processing, such as adding watermarks, filters, etc. For example, the blur degree of each frame of candidate image data can be calculated and the candidate image data with the smallest blur degree output.
  • Step 404: Splice the target region data into target image data.
  • In a specific implementation, the overlapping portions of the target region data are aligned, thereby splicing them into the target image data.
  • Step 405: Determine whether the splicing area in the target image data meets a preset splicing condition.
  • Step 406: If the splicing condition is met, output the target image data.
  • In a specific implementation, the splicing area can be analyzed to determine whether the splicing condition is met (i.e., a condition related to whether an obvious splicing seam exists).
  • If the splicing condition is met, the target image data is output, and other post-processing (such as cropping into a rectangle, adjusting to uniform contrast and brightness, etc.) is performed, or the result is displayed to the user.
  • If the splicing condition is not met, candidate image data that meets the preset image condition (such as focus, blur degree, contrast, brightness, etc.) is output for other post-processing (such as adding a watermark, a filter, etc.), or shown to the user. For example, the blur degree of each frame of candidate image data can be calculated and the candidate image data with the smallest blur degree output.
  • In an embodiment of the present disclosure, step 405 can include the following sub-steps:
  • Sub-step S51: Determine the splicing area in the target image data.
  • Sub-step S52: Calculate, in the target image data, a first gray value of the pixels located on one side of the splicing area and a second gray value of the pixels located on the other side of the splicing area.
  • Sub-step S53: Calculate the gray difference between the first gray value and the second gray value.
  • Sub-step S54: Determine whether the gray difference is less than a preset threshold; if so, perform sub-step S55; if not, perform sub-step S56.
  • Sub-step S55: Determine that the preset splicing condition is met.
  • Sub-step S56: Determine that the preset splicing condition is not met.
  • The splicing area refers to the area where the target region data are spliced together, generally the area where the edges of the target region data meet.
  • The pixels on one side of the splicing area belong to one target region data, and the pixels on the other side belong to the other target region data; the gray values of the two can be calculated separately to obtain the first gray value and the second gray value.
  • If the gray difference is greater than or equal to the preset threshold, the two target region data differ greatly at the splicing area, the splicing seam is obvious, and the splicing condition is not met.
  • If the gray difference is less than the preset threshold, the two target region data differ little at the splicing area, the splicing seam is relatively concealed, and the preset splicing condition is met.
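  • A minimal sketch of sub-steps S51 to S56 follows, for a horizontal splicing area at row seam_row of a grayscale target image; taking a one-pixel band per side and mean intensity as the gray value are assumptions made for the example, as is the threshold.

```python
import numpy as np

def splicing_condition_met(target: np.ndarray, seam_row: int,
                           threshold: float = 10.0) -> bool:
    first_gray = target[seam_row - 1, :].mean()  # S52: pixels above the seam
    second_gray = target[seam_row, :].mean()     # S52: pixels below the seam
    gray_diff = abs(first_gray - second_gray)    # S53: gray difference
    return gray_diff < threshold                 # S54-S56: seam concealed?
```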
  • In summary, the embodiments of the present disclosure collect at least two frames of candidate image data and call a preset sensor to measure jitter data. If the jitter data meet the jitter condition, at least two target region data whose blur degrees meet the preset blur condition, whose partial regions overlap, and whose regions match are extracted from the at least two frames of candidate image data and spliced into target image data. The overlapping of partial regions enables the target region data to be spliced, taking into account the correlation between the collected candidate image data; the matching of the regions in which the target region data are located ensures the content integrity of the spliced image; and filtering appropriate target region data through the blur condition ensures the sharpness of the spliced image data and reduces the impact of jitter. No additional independent devices are required, which reduces cost, and the splicing operation is simple, which increases processing speed and saves time. Furthermore, using the jitter condition as a precondition and the splicing condition as a postcondition for outputting the target image data guarantees the quality of the spliced image.
  • Referring to FIG. 5, a flow chart of the steps of another photographing method according to another embodiment of the present disclosure is shown, which may include the following steps:
  • Step 501: When performing a photographing operation, collect at least two frames of candidate image data and call a preset sensor to measure jitter data.
  • Step 502: Determine whether the jitter data meets a preset jitter condition.
  • Step 503: If the jitter condition is met, divide each frame of candidate image data into at least two candidate region data according to a preset segmentation manner.
  • For the implementation of this step, refer to the related description of step 202 in the foregoing embodiment, which is not repeated here.
  • Step 504: Calculate the blur degree for each candidate region data.
  • In a specific implementation, the blur degree may be calculated for each candidate region data of each frame of candidate image data. For details, refer to the related description of step 203 in the foregoing embodiment, which is not repeated here.
  • Step 505: Query at least two candidate region data whose blur degrees meet the preset blur condition and whose regions match, as the feature region data.
  • For the implementation of this step, refer to the related description of step 204 in the foregoing embodiment, which is not repeated here.
  • Step 506: If the feature region data belong to at least two frames of candidate image data, extract target region data containing at least the feature region data from the candidate image data to which the feature region data belong.
  • For the implementation of this step, refer to the related description of step 205 in the foregoing embodiment, which is not repeated here.
  • Step 507: Perform downsampling processing on the feature region data according to preset sampling parameters.
  • For the implementation of this step, refer to the related description of step 206 in the foregoing embodiment, which is not repeated here.
  • Step 508: Extract feature points from each target region data.
  • For the implementation of this step, refer to the related description of step 207 in the foregoing embodiment, which is not repeated here.
  • Step 509: Match the feature points of the target region data using a preset first matching manner to obtain successfully matched feature points.
  • For the implementation of this step, refer to the related description of step 208 in the foregoing embodiment, which is not repeated here.
  • Step 510: Remove wrongly matched feature points from the successfully matched feature points using a preset second matching manner.
  • For the implementation of this step, refer to the related description of step 209 in the foregoing embodiment, which is not repeated here.
  • Step 511: Calculate the transformation between the successfully matched feature points.
  • For the implementation of this step, refer to the related description of step 210 in the foregoing embodiment, which is not repeated here.
  • Step 512: Convert the transformation according to the sampling parameters.
  • For the implementation of this step, refer to the related description of step 211 in the foregoing embodiment, which is not repeated here.
  • Step 513: Splice the target region data into target image data according to the transformation.
  • For the implementation of this step, refer to the related description of step 212 in the foregoing embodiment, which is not repeated here.
  • Step 514: Determine whether the splicing area in the target image data meets a preset splicing condition.
  • Step 515: If the splicing condition is met, output the target image data.
  • In a specific implementation, the target region data transformed according to the transformation covers the other target region data, and the splicing area is the area where the edge of the transformed target region data lies.
  • For example, suppose the upper two-thirds of the data is extracted from candidate image data A as target region data A, and the lower two-thirds of the data is extracted from candidate image data B as target region data B. Target region data A is transformed according to the transformation so as to be aligned with target region data B, and the splicing area is at the upper edge of target region data B.
  • For example, suppose the mobile terminal caches n frames of candidate image data at one time (n is a positive integer), denoted I_1, I_2, ..., I_n.
  • The shooting times of the n frames of candidate image data are close; although the photographed content differs somewhat, the mobile terminal generally does not move much in a short time, so the difference in content is generally small.
  • While acquiring the candidate image data, the acceleration sensor is called to acquire m acceleration data.
  • For example, if the photographing time is 30 ms and the sampling rate is 200 Hz, there will be about 6 acceleration data during the exposure period.
  • If the jitter condition is not met, the frame of candidate image data having the smallest blur degree is output from the n frames of candidate image data. Likewise, if the feature region data all belong to the same frame, that candidate image data can be output directly.
  • Otherwise, suppose the selected feature region data are p_j1 and p_k2, where p_j1 belongs to one frame of candidate image data and p_k2 belongs to another frame of candidate image data; that is, p_j1 and p_k2 belong to different candidate image data (i.e., j ≠ k), as shown in FIG. 6A. Target region data containing p_j1 and p_k2 are cropped from the two frames, feature points are extracted and described by descriptors D_j and D_k, and wrongly matched pairs are removed from the successfully matched D_j and D_k.
  • After splicing, the splicing area is located on the upper edge, and the gray values on both sides of the splicing area and their gray difference are calculated.
  • If the splicing condition is not met, the frame of candidate image data having the smallest blur degree is output from the n frames of candidate image data; otherwise, the target image data shown in FIG. 6G is output.
  • Referring to FIG. 7, a structural block diagram of a photographing apparatus according to an embodiment of the present disclosure is shown, which may include the following modules:
  • a candidate image data collection module 701 configured to collect at least two frames of candidate image data when performing a photographing operation;
  • a target region data extraction module 702 configured to extract, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match; and
  • a target region data splicing module 703 configured to splice the target region data into target image data.
  • In an embodiment of the present disclosure, the target region data extraction module includes:
  • a candidate image data segmentation sub-module configured to divide each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
  • a blur degree calculation sub-module configured to calculate a blur degree for each candidate region data;
  • a feature region data query sub-module configured to query at least two candidate region data whose blur degrees meet the preset blur condition and whose regions match, as feature region data; and
  • a data extraction sub-module configured to, if the feature region data belong to at least two frames of candidate image data, extract target region data containing the feature region data from the candidate image data to which each feature region data belongs.
  • In an embodiment of the present disclosure, the feature region data query sub-module includes:
  • a candidate region data selection unit configured to select, for each segmentation manner, the candidate region data with the smallest blur degree from the candidate region data in the same region;
  • a sum calculation unit configured to calculate, for each segmentation manner, the sum of the blur degrees of the at least two candidate region data whose regions match;
  • a blur degree comparison unit configured to compare the sums of blur degrees of all segmentation manners; and
  • a sum selection unit configured to select the at least two candidate region data whose sum of blur degrees is the smallest and whose regions match, as the feature region data.
  • The segmentation manner includes at least one of the following: dividing into a left half and a right half; and dividing into an upper half and a lower half.
  • In an embodiment of the present disclosure, the target region data splicing module 703 includes:
  • a feature point extraction sub-module configured to extract feature points from each target region data;
  • a feature point matching sub-module configured to match the feature points of the target region data using a preset first matching manner to obtain successfully matched feature points;
  • a transformation calculation sub-module configured to calculate a transformation between the successfully matched feature points; and
  • a transformation splicing sub-module configured to splice the target region data into target image data according to the transformation.
  • In an embodiment of the present disclosure, the feature point matching sub-module includes:
  • a descriptor generation unit configured to generate descriptors for the feature points of the target region data;
  • a distance calculation unit configured to calculate the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
  • a ratio calculation unit configured to calculate the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance; and
  • a matching determination unit configured to determine that a feature point is successfully matched when the ratio is less than a preset threshold.
  • In an embodiment of the present disclosure, the target region data splicing module further includes:
  • an error point removal sub-module configured to remove wrongly matched feature points from the successfully matched feature points using a preset second matching manner.
  • In an embodiment of the present disclosure, the target region data splicing module further includes:
  • a downsampling sub-module configured to perform downsampling processing on the feature region data according to preset sampling parameters; and
  • a transformation conversion sub-module configured to convert the transformation according to the sampling parameters.
  • Referring to FIG. 8, a structural block diagram of a photographing apparatus according to another embodiment of the present disclosure is shown, which may include the following modules:
  • a data collection module 801 configured to, when performing a photographing operation, collect at least two frames of candidate image data and call a preset sensor to measure jitter data;
  • a jitter condition determination module 802 configured to determine whether the jitter data meets a preset jitter condition;
  • a target region data extraction module 803 configured to, if the jitter condition is met, extract, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match;
  • a target region data splicing module 804 configured to splice the target region data into target image data;
  • a splicing condition determination module 805 configured to determine whether the splicing area in the target image data meets a preset splicing condition; and
  • a target image output module 806 configured to output the target image data if the splicing condition is met.
  • In an embodiment of the present disclosure, the apparatus further includes: a candidate image data output module configured to output candidate image data that meets a preset image condition if the jitter condition is not met or the splicing condition is not met.
  • In an embodiment of the present disclosure, the candidate image data output module includes:
  • a blur degree calculation sub-module configured to calculate the blur degree of the candidate image data; and
  • a blur degree output sub-module configured to output the candidate image data with the smallest blur degree.
  • In an embodiment of the present disclosure, there are a plurality of jitter data, and the jitter condition determination module 802 includes:
  • a unit jitter value calculation sub-module configured to calculate a plurality of unit jitter values using the plurality of jitter data;
  • an overall jitter value calculation sub-module configured to calculate the average of the plurality of unit jitter values as an overall jitter value;
  • a jitter range determination sub-module configured to determine whether the overall jitter value is within a preset jitter range; if so, a first determination sub-module is invoked, and if not, a second determination sub-module is invoked;
  • the first determination sub-module configured to determine that the preset jitter condition is met; and
  • the second determination sub-module configured to determine that the preset jitter condition is not met.
  • In an embodiment of the present disclosure, the splicing condition determination module 805 includes:
  • a splicing area determination sub-module configured to determine the splicing area in the target image data;
  • a gray value calculation sub-module configured to calculate, in the target image data, a first gray value of the pixels located on one side of the splicing area and a second gray value of the pixels located on the other side of the splicing area;
  • a gray difference calculation sub-module configured to calculate the gray difference between the first gray value and the second gray value;
  • a gray difference determination sub-module configured to determine whether the gray difference is less than a preset threshold; if so, a third determination sub-module is invoked, and if not, a fourth determination sub-module is invoked;
  • the third determination sub-module configured to determine that the preset splicing condition is met; and
  • the fourth determination sub-module configured to determine that the preset splicing condition is not met.
  • In an embodiment of the present disclosure, the target region data extraction module 803 includes:
  • a candidate image data segmentation sub-module configured to divide each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
  • a blur degree calculation sub-module configured to calculate a blur degree for each candidate region data;
  • a feature region data query sub-module configured to query at least two candidate region data whose blur degrees meet the preset blur condition and whose regions match, as feature region data; and
  • a data extraction sub-module configured to, if the feature region data belong to at least two frames of candidate image data, extract target region data containing the feature region data from the candidate image data to which each feature region data belongs.
  • In an embodiment of the present disclosure, the feature region data query sub-module includes:
  • a candidate region data selection unit configured to select, for each segmentation manner, the candidate region data with the smallest blur degree from the candidate region data in the same region;
  • a sum calculation unit configured to calculate, for each segmentation manner, the sum of the blur degrees of the at least two candidate region data whose regions match;
  • a blur degree comparison unit configured to compare the sums of blur degrees of all segmentation manners; and
  • a sum selection unit configured to select the at least two candidate region data whose sum of blur degrees is the smallest and whose regions match, as the feature region data.
  • The segmentation manner includes at least one of the following: dividing into a left half and a right half; and dividing into an upper half and a lower half.
  • In an embodiment of the present disclosure, the target region data splicing module 804 includes:
  • a feature point extraction sub-module configured to extract feature points from each target region data;
  • a feature point matching sub-module configured to match the feature points of the target region data using a preset first matching manner to obtain successfully matched feature points;
  • a transformation calculation sub-module configured to calculate a transformation between the successfully matched feature points; and
  • a transformation splicing sub-module configured to splice the target region data into target image data according to the transformation.
  • In an embodiment of the present disclosure, the feature point matching sub-module includes:
  • a descriptor generation unit configured to generate descriptors for the feature points of the target region data;
  • a distance calculation unit configured to calculate the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
  • a ratio calculation unit configured to calculate the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance; and
  • a matching determination unit configured to determine that a feature point is successfully matched when the ratio is less than a preset threshold.
  • In an embodiment of the present disclosure, the target region data splicing module 804 further includes:
  • an error point removal sub-module configured to remove wrongly matched feature points from the successfully matched feature points using a preset second matching manner.
  • In an embodiment of the present disclosure, the target region data splicing module 804 further includes:
  • a downsampling sub-module configured to perform downsampling processing on the feature region data according to preset sampling parameters; and
  • a transformation conversion sub-module configured to convert the transformation according to the sampling parameters.
  • Since the apparatus embodiments are substantially similar to the method embodiments, the description is relatively simple, and for the relevant parts, reference may be made to the description of the method embodiments.
  • Referring to FIG. 9, a mobile device that can perform the above method is illustrated, including a processor 900, a memory 920, and a camera 910.
  • The memory 920 is in communication with the camera 910 and the processor 900 and is configured to store data collected by the camera 910 and computer instructions.
  • The processor 900 is configured to execute the computer instructions to: obtain at least two frames of candidate image data acquired by the camera 910 when performing a photographing operation; extract, from the at least two frames of candidate image data, at least two target region data whose blur degrees meet a preset blur condition, whose partial regions overlap, and whose regions match; and splice the target region data into target image data.
  • In FIG. 9, the bus architecture may include any number of interconnected buses and bridges, linking together various circuits including one or more processors represented by the processor 900 and memory represented by the memory 920.
  • The bus architecture can also link various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not further described herein.
  • The bus interface 930 provides an interface.
  • The processor 900 is responsible for managing the bus architecture and general processing, as well as providing various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions.
  • The memory 920 can store data used by the processor 900 when performing operations.
  • the processor 900 can be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a complex programmable logic device (CPLD).
  • the processor 900 reads the computer instructions in the memory 920 and performs the methods in the embodiments shown in FIG. 1, FIG. 2, FIG. 4, or FIG. 5; for details, refer to the related description in the foregoing embodiments, which is not repeated here.
  • the embodiments of the present disclosure also provide a computer-readable non-volatile storage medium having stored therein computer instructions that, when executed by a processor, implement the methods described in the preceding embodiments.
  • as the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
  • those skilled in the art will appreciate that the embodiments of the present disclosure can be provided as a method, an apparatus, or a computer program product.
  • embodiments of the present disclosure can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • Embodiments of the present disclosure are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to the embodiments of the present disclosure. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions.
  • these computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

Embodiments of the present disclosure provide a photographing method and apparatus. The method includes: collecting at least two frames of candidate image data when a photographing operation is performed; extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match; and splicing the target region data into target image data. The partial overlap of regions allows the target region data to be spliced, taking into account the correlation between the collected candidate image data; the matching of the regions in which the target region data are located ensures the content integrity of the spliced image; and screening suitable target region data by the blur condition ensures the sharpness of the spliced image data and reduces the impact of jitter, without requiring an additional independent device, thereby lowering cost. Moreover, the splicing operation is simple, which increases processing speed and saves time.

Description

Method and apparatus for photographing
The present disclosure claims priority to the Chinese patent application No. 201810274308.3, entitled "Method and apparatus for photographing", filed with the China Patent Office on March 29, 2018, and to the Chinese patent application No. 201810274313.4, entitled "Photographing method and apparatus", filed with the China Patent Office on March 29, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of communication, and in particular to a method and apparatus for photographing.
Background
With the development of science and technology, mobile terminals such as mobile phones and tablet computers are used more and more in people's work, study, and daily communication.
A mobile terminal is usually equipped with a camera and has photographing and video functions. Since the camera device is small and its photosensitive area is small, the user's hand may shake during photographing, blurring the collected image data; especially in dim environments such as at night, the exposure time of the camera increases due to insufficient light, and even slight hand jitter may blur the collected image data.
At present, Digital Image Stabilization (DIS), Electronic Image Stabilization (EIS), and Optical Image Stabilization (OIS) are commonly used to eliminate the impact of jitter.
Both DIS and EIS require a large amount of data frame cropping, which increases the load on the processor, and they are only used for video anti-shake.
OIS requires an additional independent device to detect the jitter of the mobile terminal and adjust the lens optics for compensation, thereby offsetting the impact of jitter, at a relatively high cost.
Summary
The embodiments of the present disclosure provide a method and apparatus for photographing, to solve the problem of the high cost of photographing anti-shake.
According to one aspect of the present disclosure, a photographing method is provided, including:
collecting at least two frames of candidate image data when a photographing operation is performed; extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match; and splicing the target region data into target image data.
In some implementations, the extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match includes: segmenting each frame of candidate image data into at least two candidate region data according to a preset segmentation manner; and calculating an ambiguity for each candidate region data;
querying at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data; and, if the feature region data belong to at least two frames of candidate image data, extracting, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data.
Optionally, the querying at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data, includes:
for each segmentation manner, selecting, from the candidate region data located in the same region, the candidate region data with the smallest ambiguity;
for each segmentation manner, calculating the sum of the ambiguities of the at least two candidate region data whose regions match;
comparing all the sums of ambiguities across all segmentation manners;
selecting, as the feature region data, the at least two candidate region data with the smallest sum of ambiguities and whose regions match.
Optionally, the segmentation manner includes at least one of the following: segmentation into a left half and a right half; segmentation into an upper half and a lower half.
Optionally, the splicing the target region data into target image data includes: extracting feature points from each target region data; matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points; calculating a transformation manner between the successfully matched feature points; and splicing the target region data into the target image data according to the transformation manner.
Optionally, the matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points includes: generating descriptors for the feature points of the target region data; calculating the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors; calculating the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance; and determining that the feature points are successfully matched when the ratio is less than a preset threshold.
Optionally, after the matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points, the splicing the target region data into target image data further includes: removing mismatched feature points from the successfully matched feature points by using a preset second matching manner.
Optionally, before the extracting feature points from each target region data, the splicing the target region data into target image data further includes: performing downsampling processing on the feature region data according to preset sampling parameters. Before the splicing the target region data into the target image data according to the transformation manner, the splicing the target region data into target image data further includes: converting the transformation manner according to the sampling parameters.
Optionally, the method further includes: when the photographing operation is performed, invoking a preset sensor to measure jitter data; and judging whether the jitter data meets a preset jitter condition. The extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match includes: if the jitter condition is met, extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet the preset blur condition, whose partial regions overlap, and whose regions match. After the splicing the target region data into target image data, the method further includes: judging whether the splicing region in the target image data meets a preset splicing condition; and, if the splicing condition is met, outputting the target image data.
Optionally, the method further includes: if the jitter condition is not met or the splicing condition is not met, outputting candidate image data meeting a preset image condition.
Optionally, the outputting candidate image data meeting a preset image condition includes:
calculating the ambiguities of the candidate image data;
outputting the candidate image data with the smallest ambiguity.
Optionally, there are multiple pieces of jitter data, and the judging whether the jitter data meets a preset jitter condition includes:
calculating multiple individual jitter values using the multiple pieces of jitter data;
calculating the average of the multiple individual jitter values as an overall jitter value;
judging whether the overall jitter value is within a preset jitter range;
if so, determining that the preset jitter condition is met;
if not, determining that the preset jitter condition is not met.
Optionally, the judging whether the splicing region in the target image data meets a preset splicing condition includes:
determining the splicing region in the target image data;
calculating, in the target image data, a first gray value of the pixels located on one side of the splicing region and a second gray value of the pixels located on the other side of the splicing region;
calculating the gray difference between the first gray value and the second gray value;
judging whether the gray difference is less than a preset threshold;
if so, determining that the preset splicing condition is met;
if not, determining that the preset splicing condition is not met.
According to another aspect of the present disclosure, a mobile device is provided, including a camera, a memory, and a processor, wherein:
the memory, in communication with the camera and the processor, is configured to store the data collected by the camera as well as computer instructions; and the processor is configured to execute the computer instructions to:
obtain at least two frames of candidate image data collected by the camera when a photographing operation is performed;
extract, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match;
splice the target region data into target image data.
According to another aspect of the present disclosure, a photographing apparatus is provided, including:
a candidate image data collection module, configured to collect at least two frames of candidate image data when a photographing operation is performed;
a target region data extraction module, configured to extract, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match;
a target region data splicing module, configured to splice the target region data into target image data.
Optionally, the target region data extraction module includes:
a candidate image data segmentation sub-module, configured to segment each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
an ambiguity calculation sub-module, configured to calculate an ambiguity for each candidate region data;
a feature region data query sub-module, configured to query at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data;
a distinct extraction sub-module, configured to, if the feature region data belong to at least two frames of candidate image data, extract, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data.
Optionally, the feature region data query sub-module includes:
a candidate region data selection unit, configured to, for each segmentation manner, select, from the candidate region data located in the same region, the candidate region data with the smallest ambiguity;
a sum calculation unit, configured to, for each segmentation manner, calculate the sum of the ambiguities of the at least two candidate region data whose regions match;
an ambiguity comparison unit, configured to compare all the sums of ambiguities across all segmentation manners;
a sum selection unit, configured to select, as the feature region data, the at least two candidate region data with the smallest sum of ambiguities and whose regions match.
Optionally, the segmentation manner includes at least one of the following:
segmentation into a left half and a right half;
segmentation into an upper half and a lower half.
Optionally, the target region data splicing module includes:
a feature point extraction sub-module, configured to extract feature points from each target region data;
a feature point matching sub-module, configured to match the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points;
a transformation manner calculation sub-module, configured to calculate the transformation manner between the successfully matched feature points;
a transformation manner splicing sub-module, configured to splice the target region data into the target image data according to the transformation manner.
Optionally, the feature point matching sub-module includes:
a descriptor generation unit, configured to generate descriptors for the feature points of the target region data;
a distance calculation unit, configured to calculate the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
a ratio calculation unit, configured to calculate the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance;
a matching determination unit, configured to determine that the feature points are successfully matched when the ratio is less than a preset threshold.
Optionally, the target region data splicing module further includes:
a mismatched point removal sub-module, configured to remove mismatched feature points from the successfully matched feature points by using a preset second matching manner.
Optionally, the target region data splicing module further includes:
a downsampling sub-module, configured to perform downsampling processing on the feature region data according to preset sampling parameters;
a transformation manner conversion sub-module, configured to convert the transformation manner according to the sampling parameters.
Brief Description of the Drawings
FIG. 1 shows a flowchart of the steps of a photographing method according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of the steps of another photographing method according to an embodiment of the present disclosure;
FIG. 3A to FIG. 3H show an example of a photographing method according to an embodiment of the present disclosure;
FIG. 4 shows a flowchart of the steps of a photographing method according to another embodiment of the present disclosure;
FIG. 5 shows a flowchart of the steps of another photographing method according to another embodiment of the present disclosure;
FIG. 6A to FIG. 6G show an example of a photographing method according to another embodiment of the present disclosure;
FIG. 7 shows a structural block diagram of a photographing apparatus according to an embodiment of the present disclosure;
FIG. 8 shows a structural block diagram of a photographing apparatus according to another embodiment of the present disclosure;
FIG. 9 shows a structural block diagram of a mobile device according to another embodiment of the present disclosure.
Detailed Description
To make the above objects, features, and advantages of the present disclosure more apparent and understandable, the present disclosure is further described in detail below with reference to the accompanying drawings and specific implementations.
Referring to FIG. 1, a flowchart of the steps of a photographing method according to an embodiment of the present disclosure is shown; the method may specifically include the following steps:
Step 101: when a photographing operation is performed, collect at least two frames of candidate image data.
Step 102: extract, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match.
Step 103: splice the target region data into target image data.
In specific implementations, the embodiments of the present disclosure can be applied to mobile terminals, for example, mobile phones, tablet computers, and wearable devices (such as VR (Virtual Reality) glasses, VR helmets, and smart watches), which is not limited by the embodiments of the present disclosure.
In the embodiments of the present disclosure, the mobile terminal is equipped with one or more cameras for photographing and video recording; a camera may be arranged on the back of the mobile terminal (a rear camera) or on the front of the mobile terminal (a front camera), which is not limited by the embodiments of the present disclosure either.
The operating systems of the mobile terminal include Android, iOS, Windows Phone, Windows, and so on, and can support running various applications that can invoke the camera, for example, camera applications, shopping applications, and instant messaging applications.
These applications can perform related business operations by invoking the camera; for example, a camera application can take a photo, perform post-processing (such as filters, cropping, and adding patterns), and store it in the gallery; a shopping application can invoke the camera to photograph goods or scan QR codes; an instant messaging application can invoke the camera to take a photo and send the collected image data as an instant message; and so on.
In the embodiments of the present disclosure, the camera can enable a mode such as ZSL (Zero Second Later, zero-delay shooting) and, when the photographing operation (take photo) is performed, carry out operations such as exposure and focusing to collect at least two frames of candidate image data.
Some conditions are set for the ambiguity in advance (i.e., the preset blur condition); by comparing the ambiguities of the frames of candidate image data, suitable regions are extracted from different candidate image data as target region data, and the overlapping parts of the target region data are traversed so as to splice them into target image data, so that the image content between the target regions is coherent and complete, ready for other post-processing (such as cropping into a rectangle and adjusting to uniform contrast and brightness) or for presentation to the user.
In the embodiments of the present disclosure, when a photographing operation is performed, at least two frames of candidate image data are collected; at least two target region data whose ambiguities meet the preset blur condition, whose partial regions overlap, and whose regions match are extracted from the at least two frames of candidate image data and spliced into target image data. The partial overlap of regions allows the target region data to be spliced, taking into account the correlation between the collected candidate image data; the matching of the regions in which the target region data are located ensures the content integrity of the spliced image; screening suitable target region data by the blur condition ensures the sharpness of the spliced image data and reduces the impact of jitter, without requiring an additional independent device, thereby lowering cost; moreover, the splicing operation is simple, which increases processing speed and saves time.
Referring to FIG. 2, a flowchart of the steps of another photographing method according to an embodiment of the present disclosure is shown; the method may specifically include the following steps:
Step 201: when a photographing operation is performed, collect at least two frames of candidate image data.
Step 202: segment each frame of candidate image data into at least two candidate region data according to a preset segmentation manner.
In the embodiments of the present disclosure, one or more segmentation manners can be set in advance, and each frame of candidate image data is segmented according to these manners, so that each frame of candidate image data is segmented into at least two candidate region data.
In one example, the segmentation manner includes at least one of the following:
segmentation into a left half and a right half;
segmentation into an upper half and a lower half.
In this example, the former manner is left–right segmentation: the candidate image data can be segmented along the vertical midline into left and right candidate region data.
The latter manner is top–bottom segmentation: the candidate image data can be segmented along the horizontal midline into upper and lower candidate region data.
Of course, the above segmentation manners are merely examples; when implementing the embodiments of the present disclosure, other segmentation manners can be set according to actual situations, for example, segmenting the candidate image data into upper, middle, and lower candidate region data, or into left, middle, and right candidate region data, which is not limited by the embodiments of the present disclosure. In addition, besides the above segmentation manners, those skilled in the art may adopt other segmentation manners according to actual needs, which is not limited by the embodiments of the present disclosure either.
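As a minimal sketch of the two example segmentation manners (assuming image frames held as NumPy arrays; the function name `segment` and the dictionary layout are illustrative, not part of the patent):

```python
import numpy as np

def segment(image: np.ndarray) -> dict:
    """Segment a frame along the horizontal and vertical midlines,
    yielding the four candidate region data named in the text."""
    h, w = image.shape[:2]
    return {
        "top": image[: h // 2],
        "bottom": image[h // 2 :],
        "left": image[:, : w // 2],
        "right": image[:, w // 2 :],
    }
```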
Step 203: calculate an ambiguity for each candidate region data.
In the embodiments of the present disclosure, the ambiguity can be calculated for each candidate region data of each frame of candidate image data.
The so-called ambiguity refers to the degree of blur of the candidate region data.
In general, the smaller the ambiguity, the clearer the candidate region data; conversely, the larger the ambiguity, the more blurred the candidate region data.
In specific implementations, the ambiguity can be measured by means such as image gray-level variation, image gradient values, and image entropy.
The ambiguity is negatively correlated with image gray-level variation, image gradient values, and image entropy: the larger the ambiguity, the smaller the gray-level variation, the gradient values, and the entropy; conversely, the smaller the ambiguity, the larger the gray-level variation, the gradient values, and the entropy.
Among them, the gray-level variation of an image can be computed via a spectrum function, which is usually obtained on the basis of the Fourier transform.
Image data with proper feature distance contains more information, and people can better distinguish the details in it; details mean that the image data has discernible edges with strong local gray-level variation, and the gray-level transitions are sharper.
For gradient values, gradient functions can be used for the calculation, such as the Tenengrad function, the energy gradient function, the Brenner function, and the variance function.
In image processing, gradient functions are often used to extract edge information. For image data with proper feature distance, an image with sharper edges should have a larger gradient function value.
Image entropy can be obtained through an entropy function. The entropy function can be based on the premise that the entropy of image data with proper feature distance is greater than that of image data with improper feature distance (too short or too long).
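As a minimal sketch of one such measure (assuming OpenCV and NumPy; choosing the Tenengrad gradient function, and returning the negated sharpness so that a larger value means a blurrier region, are illustrative assumptions consistent with the negative correlation described above):

```python
import cv2
import numpy as np

def ambiguity(region: np.ndarray) -> float:
    """Tenengrad-style ambiguity: negated mean squared Sobel gradient,
    so that blurrier regions (weaker edges) score higher."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY) if region.ndim == 3 else region
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return -float(np.mean(gx * gx + gy * gy))
```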
Step 204: query at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data.
In specific implementations, from all the candidate region data segmented from all the candidate image data, at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match are selected as feature region data.
"Regions matching" means that the selected candidate region data cover all the regions produced by the segmentation manner to which they belong, so that the selected candidate region data can logically compose complete image data.
For example, if the candidate image data is segmented along the horizontal midline into upper and lower candidate region data, the selected candidate region data include the candidate region data of the upper half and the candidate region data of the lower half.
For example, if the candidate image data is segmented along the vertical midline into left and right candidate region data, the selected candidate region data include the candidate region data of the left half and the candidate region data of the right half.
In an embodiment of the present disclosure, step 204 may include the following sub-steps:
Sub-step S11: for each segmentation manner, select, from the candidate region data located in the same region, the candidate region data with the smallest ambiguity.
Sub-step S12: for each segmentation manner, calculate the sum of the ambiguities of the at least two candidate region data whose regions match.
Sub-step S13: compare all the sums of ambiguities across all segmentation manners.
Sub-step S14: select, as the feature region data, the at least two candidate region data with the smallest sum of ambiguities and whose regions match.
In the embodiments of the present disclosure, if multiple segmentation manners are applied to segment the candidate image data, then for each segmentation manner, the candidate region data with the smallest ambiguity can be selected from the candidate region data of each region segmented from all the candidate image data, and the sum of the ambiguities of the candidate region data of all regions is calculated as the ambiguity of that segmentation manner.
The sums of ambiguities of all segmentation manners are compared, and the candidate region data corresponding to the segmentation manner with the smallest sum of ambiguities are selected as the feature region data.
For example, the candidate image data are segmented along the horizontal midline into upper and lower candidate region data, and along the vertical midline into left and right candidate region data. The candidate region data with the smallest ambiguity is selected from all the upper-half candidate region data and from all the lower-half candidate region data, and the sum of their ambiguities is calculated; then the candidate region data with the smallest ambiguity is selected from all the left-half candidate region data and from all the right-half candidate region data, and the sum of their ambiguities is calculated; the two sums are compared, and the candidate region data corresponding to the smaller sum are taken as the feature region data.
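Sub-steps S11 to S14 can be sketched as follows (reusing the illustrative `segment` and `ambiguity` helpers from the sketches above; the two-manner setup mirrors the example in the text):

```python
def pick_feature_regions(frames):
    """Per segmentation manner, pick the least-blurred candidate of each
    region across all frames; keep the manner with the smallest sum."""
    manners = {"top-bottom": ("top", "bottom"), "left-right": ("left", "right")}
    best = None
    for manner, names in manners.items():
        picked, total = {}, 0.0
        for name in names:
            candidates = [(ambiguity(segment(f)[name]), i) for i, f in enumerate(frames)]
            score, frame_idx = min(candidates)
            picked[name] = frame_idx
            total += score
        if best is None or total < best[0]:
            best = (total, manner, picked)
    return best  # (smallest ambiguity sum, manner, {region name: frame index})
```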
Step 205: if the feature region data belong to at least two frames of candidate image data, extract, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data.
For the selected feature region data, the candidate image data to which they belong can be determined.
If the selected feature region data belong to different candidate image data, target region data at least containing the feature region data can be extracted from the candidate image data to which each feature region data belongs, so that the target region data share overlapping content and can be spliced.
For example, if the selected feature region data A is the candidate region data located in the upper half of candidate image data A, and the selected feature region data B is the candidate region data located in the lower half of candidate image data B, then data occupying the upper two-thirds of candidate image data A and containing feature region data A can be extracted as target region data, and data occupying the lower two-thirds of candidate image data B and containing feature region data B can be extracted as target region data.
If the selected feature region data belong to the same candidate image data, that candidate image data can be output directly.
Step 206: perform downsampling processing on the feature region data according to preset sampling parameters.
Downsampling (subsampling), also called image reduction, processes the feature region data according to preset sampling parameters (such as reduction, rotation, translation), which can reduce the processing load and increase the processing speed.
For example, for an image I of size M*N, downsampling it by a factor of s yields a lower-resolution image of size (M/s)*(N/s), where s is a common divisor of M and N.
If the image is considered in matrix form, the image within each s*s window of the original image becomes one pixel, and the value of this pixel is the mean of all the pixels within the window:

    P(x, y) = (1 / s^2) * Σ_{i=0}^{s-1} Σ_{j=0}^{s-1} I(s*x + i, s*y + j)
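A minimal sketch of this s-fold mean downsampling (NumPy only; trimming M and N to multiples of s rather than assuming divisibility is an illustrative convenience):

```python
import numpy as np

def downsample(image: np.ndarray, s: int) -> np.ndarray:
    """Each s x s window of the original becomes one pixel holding the
    window mean, yielding an (M/s) x (N/s) image."""
    m, n = image.shape[:2]
    t = image[: m - m % s, : n - n % s]
    return t.reshape(m // s, s, n // s, s, *t.shape[2:]).mean(axis=(1, 3))
```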
Step 207: extract feature points from each target region data.
In specific implementations, color features, texture features, shape features, spatial relationship features, and the like can be extracted from the target region data as feature points.
Step 208: match the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points.
After the feature points are extracted, the feature points between each pair of target region data can be matched using the preset first matching manner.
In one example, descriptors can be generated for the feature points of the target region data, such as SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), and BRISK (Binary Robust Invariant Scalable Keypoints). The nearest-neighbor distance and the second-nearest-neighbor distance between descriptors are calculated, as well as the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance. When the ratio is less than a preset threshold, the feature points are determined to be successfully matched.
Of course, the above first matching manner for feature points is merely an example; when implementing the embodiments of the present disclosure, other first matching manners can be set according to actual situations, for example, matching by the nearest-neighbor distance between descriptors, which is not limited by the embodiments of the present disclosure. In addition, besides the above first matching manner, those skilled in the art may adopt other first matching manners according to actual needs, which is not limited by the embodiments of the present disclosure either.
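A minimal sketch of this first matching manner (assuming OpenCV; ORB descriptors and a ratio threshold of 0.75 are illustrative choices, the patent leaving both the descriptor and the threshold open):

```python
import cv2

def match_features(region_a, region_b, ratio: float = 0.75):
    """ORB descriptors + ratio test between the nearest-neighbor and
    second-nearest-neighbor distances."""
    orb = cv2.ORB_create()
    gray_a = cv2.cvtColor(region_a, cv2.COLOR_BGR2GRAY) if region_a.ndim == 3 else region_a
    gray_b = cv2.cvtColor(region_b, cv2.COLOR_BGR2GRAY) if region_b.ndim == 3 else region_b
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    pairs = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return kp_a, kp_b, good
```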
Step 209: remove mismatched feature points from the successfully matched feature points by using a preset second matching manner.
In general, the successfully matched feature points will contain erroneous matches; therefore, a second matching manner, such as the RANSAC (Random Sample Consensus) algorithm, can be used to remove the mismatched feature points.
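A minimal sketch of this second matching manner (assuming OpenCV's RANSAC-based homography estimation and reusing the outputs of the matching sketch above; the reprojection threshold of 5.0 pixels is an illustrative value):

```python
import cv2
import numpy as np

def ransac_filter(kp_a, kp_b, good):
    """Estimate a homography with RANSAC and keep only inlier matches,
    discarding the erroneously matched feature points."""
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return H, inliers
```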
Step 210: calculate the transformation manner between the successfully matched feature points.
The contents of two target region data are partially the same; after the content of one target region data is transformed, it coincides with the content of the other (for example, in the same coordinate system). Therefore, after the feature points are successfully matched, the transformation manner between these feature points (such as a transformation matrix) can be calculated and taken as the transformation manner (such as a transformation matrix) between the target region data.
Step 211: convert the transformation manner according to the sampling parameters.
If the feature region data has previously been downsampled according to the sampling parameters, the calculated transformation manner is the one after the downsampling; in order to transform and splice the target region data before the downsampling, the transformation manner can be converted using the sampling parameters so as to restore it to the transformation manner before the downsampling.
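The conversion of the transformation manner can be sketched as a change of coordinates (assuming a single uniform downsampling factor s and a 3 x 3 homography H, as in the examples below; other sampling parameters would need their own conversion):

```python
import numpy as np

def upscale_homography(H: np.ndarray, s: float) -> np.ndarray:
    """If x_small = x_full / s, a homography H on downsampled coordinates
    corresponds to H' = S @ H @ inv(S) on full-resolution coordinates,
    with S scaling x and y by s."""
    S = np.diag([s, s, 1.0])
    return S @ H @ np.linalg.inv(S)
```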
Step 212: splice the target region data into target image data according to the transformation manner.
For the two target region data to be spliced, one of them is transformed according to the transformation manner so that it aligns with the other, whereby the positional relationship between the two target region data is determined and they are spliced together; after all the target region data are finally spliced together, the result serves as the target image data, whose degree of blur is less than or equal to that of any single frame of candidate image data.
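A deliberately simplified splicing sketch (assuming OpenCV; it warps one target region with the converted transform and pastes the other over the overlap, and assumes the warped region lands inside the other's frame — real canvas sizing and blending are left out):

```python
import cv2
import numpy as np

def stitch(region_a: np.ndarray, region_b: np.ndarray, H_full: np.ndarray) -> np.ndarray:
    """Align region_a to region_b via H' and overlay region_b, so the
    overlap fixes the relative position of the two target regions."""
    h, w = region_b.shape[:2]
    canvas = cv2.warpPerspective(region_a, H_full, (w, h))
    mask = region_b.sum(axis=-1) > 0 if region_b.ndim == 3 else region_b > 0
    canvas[mask] = region_b[mask]
    return canvas
```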
To enable those skilled in the art to better understand the embodiments of the present disclosure, the photographing method in the embodiments is described below through a specific example.
Suppose the mobile terminal buffers n (n is a positive integer) frames of candidate image data in one photographing, denoted I_1, I_2, ..., I_n, among which are the two frames of candidate image data shown in FIG. 3A and FIG. 3B.
The shooting times of these n frames of candidate image data are close; although the captured contents differ, the mobile terminal generally does not move significantly within a short time, so the differences in content are generally small.
Each frame of candidate image data I_i (i = 1, 2, ..., n) is segmented into an upper half I_i1 and a lower half I_i2, and into a left half I_i3 and a right half I_i4; that is, I_i1, I_i2, I_i3, and I_i4 are all candidate region data.
From the I_i1 of all I_i, the one with the smallest ambiguity, I_i1', is selected; from the I_i2 of all I_i, the one with the smallest ambiguity, I_i2', is selected.
From the I_i3 of all I_i, the one with the smallest ambiguity, I_i3', is selected; from the I_i4 of all I_i, the one with the smallest ambiguity, I_i4', is selected.
The sum of the ambiguities of I_i1' and I_i2' is calculated, as well as the sum of the ambiguities of I_i3' and I_i4'.
If the sum of the ambiguities of I_i1' and I_i2' is smaller than that of I_i3' and I_i4', I_i1' is set as feature region data p_j1 (j = 1, 2, ..., n) and I_i2' is set as feature region data p_k2 (k = 1, 2, ..., n).
p_j1 belongs to candidate image data I_j (j = 1, 2, ..., n), and p_k2 belongs to candidate image data I_k (k = 1, 2, ..., n).
If p_j1 and p_k2 belong to the same frame of candidate image data (i.e., j = k), that candidate image data can be output.
In this example, p_j1 belongs to the candidate image data shown in FIG. 3A and p_k2 belongs to the candidate image data shown in FIG. 3B; that is, p_j1 and p_k2 belong to different candidate image data (j ≠ k). Then, as shown in FIG. 3C, a partial image covering more than the upper half of I_j, denoted here T_j, is cropped from I_j (the boxed part in FIG. 3A) such that T_j contains p_j1, and T_j serves as target region data; as shown in FIG. 3D, a partial image covering more than the lower half of I_k, denoted T_k, is cropped from I_k (the boxed part in FIG. 3B) such that T_k contains p_k2, and T_k serves as target region data.
T_j is downsampled to obtain t_j, and T_k is downsampled to obtain t_k.
As shown in FIG. 3E, feature points D_j are extracted from t_j; as shown in FIG. 3F, feature points D_k are extracted from t_k.
As shown in FIG. 3G, D_j and D_k are matched.
As shown in FIG. 3H, the erroneously matched D_j and D_k are removed from the successfully matched D_j and D_k.
Let X = [x, y]^T denote the set of matched feature points D_j in t_j, and X' = [x', y']^T the set of matched feature points D_k in t_k; the transformation matrix H is solved from X' = HX.
By computing the relationship between t_j and T_j and between t_k and T_k (that is, the downsampling factor), and with reference to the solution process of H, the transformation matrix H is converted into the transformation matrix H' between T_j and T_k before the downsampling.
T_j and T_k are spliced through the transformation matrix H' to obtain target image data I'; the ambiguity of I' is less than or equal to that of any buffered frame of candidate image data.
It should be noted that, for brevity of description, the method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the embodiments of the present disclosure are not limited by the described order of actions, because according to the embodiments of the present disclosure, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present disclosure.
Referring to FIG. 4, a flowchart of the steps of a photographing method according to another embodiment of the present disclosure is shown; the method may specifically include the following steps:
Step 401: when a photographing operation is performed, collect at least two frames of candidate image data and invoke a preset sensor to measure jitter data.
In specific implementations, the embodiments of the present disclosure can be applied to mobile terminals, for example, mobile phones, tablet computers, and wearable devices (such as VR (Virtual Reality) glasses, VR helmets, and smart watches), which is not limited by the embodiments of the present disclosure.
In this embodiment of the present disclosure, the mobile terminal is equipped with one or more cameras and one or more sensors.
A camera can be used for photographing and video recording and may be arranged on the back of the mobile terminal (a rear camera) or on the front of the mobile terminal (a front camera), which is not limited by the embodiments of the present disclosure either.
The sensor may include a gyroscope, an acceleration sensor, and the like, and can measure data such as angular velocity and acceleration as jitter data representing the jitter of the mobile terminal.
The operating systems of the mobile terminal include Android, iOS, Windows Phone, Windows, and so on, and can support running various applications that can invoke the camera, for example, camera applications, shopping applications, and instant messaging applications.
These applications can perform related business operations by invoking the camera; for example, a camera application can take a photo, perform post-processing (such as filters, cropping, and adding patterns), and store it in the gallery; a shopping application can invoke the camera to photograph goods or scan QR codes; an instant messaging application can invoke the camera to take a photo and send the collected image data as an instant message; and so on.
In this embodiment of the present disclosure, the camera can enable a mode such as ZSL and, when the photographing operation (take photo) is performed, carry out operations such as exposure and focusing to collect at least two frames of candidate image data.
At the same time (especially during exposure), the sensor is invoked to measure jitter data, from which the degree of jitter of the mobile terminal while collecting the preview image data is calculated.
Step 402: judge whether the jitter data meets a preset jitter condition.
In specific implementations, a jitter condition can be set in advance, and the jitter data measured during photographing is used to judge whether the jitter condition is met.
If the jitter condition is met, the mobile terminal jitters only slightly during photographing, within the allowed jitter amplitude; if the jitter condition is not met, the mobile terminal jitters significantly during photographing, exceeding the allowed jitter amplitude.
In an embodiment of the present disclosure, there are multiple pieces of jitter data, and step 402 may include the following sub-steps:
Sub-step S41: calculate multiple individual jitter values using the multiple pieces of jitter data.
Sub-step S42: calculate the average of the multiple individual jitter values as the overall jitter value.
Sub-step S43: judge whether the overall jitter value is within a preset jitter range; if so, perform sub-step S44; if not, perform sub-step S45.
Sub-step S44: determine that the preset jitter condition is met.
Sub-step S45: determine that the preset jitter condition is not met.
In this embodiment of the present disclosure, since photographing collects at least two frames of candidate image data within a relatively short period of time, the sensor may collect multiple (at least two) pieces of jitter data during this period.
Therefore, the degree of jitter of each piece of jitter data within this period can be calculated as an individual jitter value, and the average of the individual jitter values is calculated as the overall jitter value for this period.
For example, for a gyroscope, the modulus of its data can be calculated as the individual jitter value.
Of course, besides the average, other ways of calculating the overall jitter value can be set; for example, corresponding weights can be assigned to the individual jitter values at different times (the early, middle, and late stages of photographing, etc.), and the weighted sum of the individual jitter values is calculated as the overall jitter value, which is not limited by the embodiments of the present disclosure.
For the jitter data collected by different sensors, a jitter range can be set accordingly: if the overall jitter value is within the jitter range, the mobile terminal jitters only slightly during photographing and the jitter condition is met; if the overall jitter value is not within the jitter range, the mobile terminal jitters significantly during photographing and the jitter condition is not met.
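Sub-steps S41 to S45 can be sketched as follows (assuming gyroscope or accelerometer samples gathered during exposure as an array of 3-axis readings; treating the jitter range as [0, max_jitter] is an illustrative simplification):

```python
import numpy as np

def jitter_ok(samples: np.ndarray, max_jitter: float) -> bool:
    """samples: (k, 3) sensor readings. Modulus of each reading is an
    individual jitter value; their average is the overall jitter value,
    checked against the preset jitter range."""
    individual = np.linalg.norm(samples, axis=1)  # one value per reading
    overall = float(individual.mean())
    return overall <= max_jitter
```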
Step 403: if the jitter condition is met, extract, from the candidate image data, at least two target region data whose ambiguities meet the preset blur condition, whose partial regions overlap, and whose regions match.
If the jitter during photographing is small, the probability that the candidate image data is clear is high; therefore, by comparing the ambiguities of the frames of candidate image data, it is judged whether the conditions set in advance for the ambiguity (i.e., the blur condition) are satisfied, and suitable regions are extracted from different candidate image data as target region data.
In addition, if the jitter condition is not met, that is, the degree of jitter during photographing is large and the probability that the candidate image data is clear is low, candidate image data meeting a preset image condition (such as whether it is in focus, ambiguity, contrast, brightness, etc.) is output, for other post-processing (such as adding watermarks, filters, etc.) or for presentation to the user.
Taking ambiguity as an example, the ambiguities of the candidate image data can be calculated, and the candidate image data with the smallest ambiguity is output.
Step 404: splice the target region data into target image data.
For the extracted target region data, the overlapping parts of the target region data are traversed so as to splice them into target image data.
Step 405: judge whether the splicing region in the target image data meets a preset splicing condition.
Step 406: if the splicing condition is met, output the target image data.
Since splicing may produce a visible seam, and a seam appearing in the target image data degrades the visual effect, the splicing region can be analyzed to judge whether it meets the splicing condition (i.e., a condition concerning the presence of a seam).
If the splicing condition is met (no seam exists), the target image data is output for other post-processing (such as cropping into a rectangle and adjusting to uniform contrast and brightness) or for presentation to the user.
If the splicing condition is not met (a seam exists), candidate image data meeting the preset image condition (such as whether it is in focus, ambiguity, contrast, brightness, etc.) is output, for other post-processing (such as adding watermarks, filters, etc.) or for presentation to the user.
Taking ambiguity as an example of the image condition, the ambiguities of the candidate image data can be calculated, and the candidate image data with the smallest ambiguity is output.
In an embodiment of the present disclosure, step 405 may include the following sub-steps:
Sub-step S51: determine the splicing region in the target image data.
Sub-step S52: calculate, in the target image data, the first gray value of the pixels located on one side of the splicing region and the second gray value of the pixels located on the other side of the splicing region.
Sub-step S53: calculate the gray difference between the first gray value and the second gray value.
Sub-step S54: judge whether the gray difference is less than a preset threshold; if so, perform sub-step S55; if not, perform sub-step S56.
Sub-step S55: determine that the preset splicing condition is met.
Sub-step S56: determine that the preset splicing condition is not met.
The so-called splicing region refers to the region where the target region data are spliced, generally the region where the edge of the target region data lies.
The pixels on one side of the splicing region belong to one of the target region data, and the pixels on the other side belong to the other; the gray values of these two target region data can be calculated respectively to obtain the first gray value and the second gray value, and then the gray difference between the two is calculated.
If the gray difference is greater than or equal to the preset threshold, the difference between the two target region data in the splicing region is large, the seam is obvious, and the splicing condition is not met.
If the gray difference is less than the preset threshold, the difference between the two target region data in the splicing region is small, the seam is inconspicuous, and the preset splicing condition is met.
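Sub-steps S51 to S56 can be sketched as follows for a horizontal seam (assuming the splicing region is a known row of the spliced image, as in the examples; the band width and threshold are illustrative parameters):

```python
import cv2
import numpy as np

def seam_ok(stitched: np.ndarray, seam_row: int, band: int, thresh: float) -> bool:
    """Compare the mean gray value of a band of pixels above the seam
    (first gray value) with the band below it (second gray value)."""
    gray = cv2.cvtColor(stitched, cv2.COLOR_BGR2GRAY) if stitched.ndim == 3 else stitched
    first = float(gray[max(seam_row - band, 0) : seam_row].mean())
    second = float(gray[seam_row : seam_row + band].mean())
    return abs(first - second) < thresh
```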
In the embodiments of the present disclosure, when a photographing operation is performed, at least two frames of candidate image data are collected and a preset sensor is invoked to measure jitter data; if the jitter data meets the jitter condition, at least two target region data whose ambiguities meet the preset blur condition, whose partial regions overlap, and whose regions match are extracted from the at least two frames of candidate image data and spliced into target image data; if the splicing region in the target image data meets the splicing condition, the target image data is output. On the one hand, the partial overlap of regions allows the target region data to be spliced, taking into account the correlation between the collected candidate image data; the matching of the regions in which the target region data are located ensures the content integrity of the spliced image; screening suitable target region data by the blur condition ensures the sharpness of the spliced image data and reduces the impact of jitter, without requiring an additional independent device, thereby lowering cost; moreover, the splicing operation is simple, which increases processing speed and saves time. On the other hand, using the jitter condition and the splicing condition as the conditions for splicing and for outputting the target image data guarantees the quality of the spliced image.
Referring to FIG. 5, a flowchart of the steps of another photographing method according to an embodiment of the present disclosure is shown; the method may specifically include the following steps:
Step 501: when a photographing operation is performed, collect at least two frames of candidate image data and invoke a preset sensor to measure jitter data.
Step 502: judge whether the jitter data meets a preset jitter condition.
Step 503: if the jitter condition is met, segment each frame of candidate image data into at least two candidate region data according to a preset segmentation manner. For the implementation of this step, refer to the related description of step 202 in the foregoing embodiment, which is not repeated here.
Step 504: calculate an ambiguity for each candidate region data. In this embodiment of the present disclosure, the ambiguity can be calculated for each candidate region data of each frame of candidate image data; for the implementation of this step, refer to the related description of step 203 in the foregoing embodiment, which is not repeated here.
Step 505: query at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data. For the implementation of this step, refer to the related description of step 204 in the foregoing embodiment, which is not repeated here.
Step 506: if the feature region data belong to at least two frames of candidate image data, extract, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data. For the implementation of this step, refer to the related description of step 205 in the foregoing embodiment, which is not repeated here.
Step 507: perform downsampling processing on the feature region data according to preset sampling parameters. For the implementation of this step, refer to the related description of step 206 in the foregoing embodiment, which is not repeated here.
Step 508: extract feature points from each target region data. For the implementation of this step, refer to the related description of step 207 in the foregoing embodiment, which is not repeated here.
Step 509: match the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points. For the implementation of this step, refer to the related description of step 208 in the foregoing embodiment, which is not repeated here.
Step 510: remove mismatched feature points from the successfully matched feature points by using a preset second matching manner. For the implementation of this step, refer to the related description of step 209 in the foregoing embodiment, which is not repeated here.
Step 511: calculate the transformation manner between the successfully matched feature points. For the implementation of this step, refer to the related description of step 210 in the foregoing embodiment, which is not repeated here.
Step 512: convert the transformation manner according to the sampling parameters. For the implementation of this step, refer to the related description of step 211 in the foregoing embodiment, which is not repeated here.
Step 513: splice the target region data into target image data according to the transformation manner. For the implementation of this step, refer to the related description of step 212 in the foregoing embodiment, which is not repeated here.
Step 514: judge whether the splicing region in the target image data meets a preset splicing condition.
Step 515: if the splicing condition is met, output the target image data. In general, for two spliced target region data, the target region data transformed according to the transformation manner covers the other target region data, and the splicing region is the region where the cropped edge of the transformed target region data lies.
For example, the upper two-thirds of candidate image data A is extracted as target region data A, and the lower two-thirds of candidate image data B is extracted as target region data B; target region data A is transformed according to the transformation manner and aligned with target region data B, and the splicing region is the upper edge of target region data B.
To enable those skilled in the art to better understand the embodiments of the present disclosure, the photographing method in the embodiments is described below through a specific example.
Suppose the mobile terminal buffers n (n is a positive integer) frames of candidate image data in one photographing, denoted I_1, I_2, ..., I_n.
The shooting times of these n frames of candidate image data are close; although the captured contents differ, the mobile terminal generally does not move significantly within a short time, so the differences in content are generally small.
While the candidate image data are collected, the acceleration sensor is invoked and collects m pieces of acceleration data.
If the photographing time is 30 ms, then taking a sampling rate of 200 Hz as an example, there will be about 6 pieces of acceleration data within the exposure period.
For these m pieces of acceleration data, the modulus of each acceleration datum is taken first, and then the average of the m modulus values is calculated as the overall degree of jitter.
If this degree of jitter is outside the jitter range, the frame of candidate image data with the smallest ambiguity is output from the n frames of candidate image data.
If this degree of jitter is within the jitter range, each frame of candidate image data I_i (i = 1, 2, ..., n) is segmented into an upper half I_i1 and a lower half I_i2, and into a left half I_i3 and a right half I_i4; that is, I_i1, I_i2, I_i3, and I_i4 are all candidate region data.
From the I_i1 of all I_i, the one with the smallest ambiguity, I_i1', is selected; from the I_i2 of all I_i, the one with the smallest ambiguity, I_i2', is selected.
From the I_i3 of all I_i, the one with the smallest ambiguity, I_i3', is selected; from the I_i4 of all I_i, the one with the smallest ambiguity, I_i4', is selected.
The sum of the ambiguities of I_i1' and I_i2' is calculated, as well as the sum of the ambiguities of I_i3' and I_i4'.
If the sum of the ambiguities of I_i1' and I_i2' is smaller than that of I_i3' and I_i4', I_i1' is set as feature region data p_j1 (j = 1, 2, ..., n) and I_i2' is set as feature region data p_k2 (k = 1, 2, ..., n).
p_j1 belongs to candidate image data I_j (j = 1, 2, ..., n), and p_k2 belongs to candidate image data I_k (k = 1, 2, ..., n).
If p_j1 and p_k2 belong to the same frame of candidate image data (i.e., j = k), that candidate image data can be output.
In this example, p_j1 belongs to one frame of candidate image data and p_k2 belongs to another; that is, p_j1 and p_k2 belong to different candidate image data (j ≠ k). Then, as shown in FIG. 6A, a partial image covering more than the upper half of I_j, denoted here T_j, is cropped from I_j such that T_j contains p_j1, and T_j serves as target region data; as shown in FIG. 6B, a partial image covering more than the lower half of I_k, denoted T_k, is cropped from I_k such that T_k contains p_k2, and T_k serves as target region data.
T_j is downsampled to obtain t_j, and T_k is downsampled to obtain t_k.
As shown in FIG. 6C, feature points D_j are extracted from t_j; as shown in FIG. 6D, feature points D_k are extracted from t_k.
As shown in FIG. 6E, D_j and D_k are matched.
As shown in FIG. 6F, the erroneously matched D_j and D_k are removed from the successfully matched D_j and D_k.
Let X = [x, y]^T denote the set of matched feature points D_j in t_j, and X' = [x', y']^T the set of matched feature points D_k in t_k; the transformation matrix H is solved from X' = HX.
By computing the relationship between t_j and T_j and between t_k and T_k (that is, the downsampling factor), and with reference to the solution process of H, the transformation matrix H is converted into the transformation matrix H' between T_j and T_k before the downsampling.
As shown in FIG. 6G, T_j and T_k are spliced through the transformation matrix H' to obtain target image data I'; the ambiguity of I' is less than or equal to that of any buffered frame of candidate image data.
The splicing region lies at the upper edge of T_k; the gray values on the two sides of this splicing region and their gray difference are calculated.
If the gray difference exceeds the threshold, the frame of candidate image data with the smallest ambiguity is output from the n frames of candidate image data.
If the gray difference does not exceed the preset threshold, the target image data shown in FIG. 6G is output.
It should be noted that, for brevity of description, the method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the embodiments of the present disclosure are not limited by the described order of actions, because according to the embodiments of the present disclosure, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present disclosure.
Referring to FIG. 7, a structural block diagram of a photographing apparatus according to an embodiment of the present disclosure is shown; the apparatus may specifically include the following modules:
a candidate image data collection module 701, configured to collect at least two frames of candidate image data when a photographing operation is performed;
a target region data extraction module 702, configured to extract, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match;
a target region data splicing module 703, configured to splice the target region data into target image data.
In an embodiment of the present disclosure, the target region data extraction module includes:
a candidate image data segmentation sub-module, configured to segment each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
an ambiguity calculation sub-module, configured to calculate an ambiguity for each candidate region data;
a feature region data query sub-module, configured to query at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data;
a distinct extraction sub-module, configured to, if the feature region data belong to at least two frames of candidate image data, extract, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data.
In an embodiment of the present disclosure, the feature region data query sub-module includes:
a candidate region data selection unit, configured to, for each segmentation manner, select, from the candidate region data located in the same region, the candidate region data with the smallest ambiguity;
a sum calculation unit, configured to, for each segmentation manner, calculate the sum of the ambiguities of the at least two candidate region data whose regions match;
an ambiguity comparison unit, configured to compare all the sums of ambiguities across all segmentation manners;
a sum selection unit, configured to select, as the feature region data, the at least two candidate region data with the smallest sum of ambiguities and whose regions match.
In an example of the embodiments of the present disclosure, the segmentation manner includes at least one of the following:
segmentation into a left half and a right half;
segmentation into an upper half and a lower half.
In an embodiment of the present disclosure, the target region data splicing module 703 includes:
a feature point extraction sub-module, configured to extract feature points from each target region data;
a feature point matching sub-module, configured to match the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points;
a transformation manner calculation sub-module, configured to calculate the transformation manner between the successfully matched feature points;
a transformation manner splicing sub-module, configured to splice the target region data into the target image data according to the transformation manner.
In an embodiment of the present disclosure, the feature point matching sub-module includes:
a descriptor generation unit, configured to generate descriptors for the feature points of the target region data;
a distance calculation unit, configured to calculate the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
a ratio calculation unit, configured to calculate the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance;
a matching determination unit, configured to determine that the feature points are successfully matched when the ratio is less than a preset threshold.
In an embodiment of the present disclosure, the target region data splicing module further includes:
a mismatched point removal sub-module, configured to remove mismatched feature points from the successfully matched feature points by using a preset second matching manner.
In an embodiment of the present disclosure, the target region data splicing module further includes:
a downsampling sub-module, configured to perform downsampling processing on the feature region data according to preset sampling parameters;
a transformation manner conversion sub-module, configured to convert the transformation manner according to the sampling parameters.
Referring to FIG. 8, a structural block diagram of a photographing apparatus according to another embodiment of the present disclosure is shown; the apparatus may specifically include the following modules:
a data collection module 801, configured to collect at least two frames of candidate image data and invoke a preset sensor to measure jitter data when a photographing operation is performed;
a jitter condition judgment module 802, configured to judge whether the jitter data meets a preset jitter condition;
a target region data extraction module 803, configured to, if the jitter condition is met, extract, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match;
a target region data splicing module 804, configured to splice the target region data into target image data;
a splicing condition judgment module 805, configured to judge whether the splicing region in the target image data meets a preset splicing condition;
a target image output module 806, configured to output the target image data if the splicing condition is met.
In an embodiment of the present disclosure, the apparatus further includes: a candidate image data output module, configured to output candidate image data meeting a preset image condition if the jitter condition is not met or the splicing condition is not met.
In an embodiment of the present disclosure, the candidate image data output module includes:
an ambiguity calculation sub-module, configured to calculate the ambiguities of the candidate image data;
an ambiguity output sub-module, configured to output the candidate image data with the smallest ambiguity.
In an embodiment of the present disclosure, there are multiple pieces of jitter data, and the jitter condition judgment module 802 includes:
an individual jitter value calculation sub-module, configured to calculate multiple individual jitter values using the multiple pieces of jitter data;
an overall jitter value calculation sub-module, configured to calculate the average of the multiple individual jitter values as the overall jitter value;
a jitter range judgment sub-module, configured to judge whether the overall jitter value is within a preset jitter range; if so, invoke a first determination sub-module, and if not, invoke a second determination sub-module;
the first determination sub-module, configured to determine that the preset jitter condition is met;
the second determination sub-module, configured to determine that the preset jitter condition is not met.
In an embodiment of the present disclosure, the splicing condition judgment module 805 includes:
a splicing region determination sub-module, configured to determine the splicing region in the target image data;
a gray value calculation sub-module, configured to calculate, in the target image data, the first gray value of the pixels located on one side of the splicing region and the second gray value of the pixels located on the other side of the splicing region;
a gray difference calculation sub-module, configured to calculate the gray difference between the first gray value and the second gray value;
a gray judgment sub-module, configured to judge whether the gray difference is less than a preset threshold; if so, invoke a third determination sub-module, and if not, invoke a fourth determination sub-module;
the third determination sub-module, configured to determine that the preset splicing condition is met;
the fourth determination sub-module, configured to determine that the preset splicing condition is not met.
In an embodiment of the present disclosure, the target region data extraction module 803 includes:
a candidate image data segmentation sub-module, configured to segment each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
an ambiguity calculation sub-module, configured to calculate an ambiguity for each candidate region data;
a feature region data query sub-module, configured to query at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data;
a distinct extraction sub-module, configured to, if the feature region data belong to at least two frames of candidate image data, extract, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data.
In an embodiment of the present disclosure, the feature region data query sub-module includes:
a candidate region data selection unit, configured to, for each segmentation manner, select, from the candidate region data located in the same region, the candidate region data with the smallest ambiguity;
a sum calculation unit, configured to, for each segmentation manner, calculate the sum of the ambiguities of the at least two candidate region data whose regions match;
an ambiguity comparison unit, configured to compare all the sums of ambiguities across all segmentation manners;
a sum selection unit, configured to select, as the feature region data, the at least two candidate region data with the smallest sum of ambiguities and whose regions match.
In an example of the embodiments of the present disclosure, the segmentation manner includes at least one of the following:
segmentation into a left half and a right half;
segmentation into an upper half and a lower half.
In an embodiment of the present disclosure, the target region data splicing module 804 includes:
a feature point extraction sub-module, configured to extract feature points from each target region data;
a feature point matching sub-module, configured to match the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points;
a transformation manner calculation sub-module, configured to calculate the transformation manner between the successfully matched feature points;
a transformation manner splicing sub-module, configured to splice the target region data into the target image data according to the transformation manner.
In an embodiment of the present disclosure, the feature point matching sub-module includes:
a descriptor generation unit, configured to generate descriptors for the feature points of the target region data;
a distance calculation unit, configured to calculate the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
a ratio calculation unit, configured to calculate the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance;
a matching determination unit, configured to determine that the feature points are successfully matched when the ratio is less than a preset threshold.
In an embodiment of the present disclosure, the target region data splicing module 804 further includes:
a mismatched point removal sub-module, configured to remove mismatched feature points from the successfully matched feature points by using a preset second matching manner.
In an embodiment of the present disclosure, the target region data splicing module 804 further includes:
a downsampling sub-module, configured to perform downsampling processing on the feature region data according to preset sampling parameters;
a transformation manner conversion sub-module, configured to convert the transformation manner according to the sampling parameters.
As the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
Referring to FIG. 9, a mobile device capable of performing the above method according to an embodiment of the present disclosure is shown, including a processor 900 and a memory 920, and further including a camera 910.
The memory 920 is in communication with the camera 910 and the processor 900 and is configured to store the data collected by the camera 910 as well as computer instructions.
The processor 900 is configured to execute the computer instructions to: obtain at least two frames of candidate image data collected by the camera 910 when a photographing operation is performed; extract, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match; and splice the target region data into target image data.
In FIG. 9, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors represented by the processor 900 and various memory circuits represented by the memory 920. The bus architecture can also link various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not further described herein. A bus interface 930 provides an interface. The processor 900 is responsible for managing the bus architecture and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 920 can store the data used by the processor 900 when performing operations.
Optionally, the processor 900 can be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a complex programmable logic device (CPLD).
In the embodiments of the present disclosure, the processor 900 reads the computer instructions in the memory 920 and performs the methods in the embodiments shown in FIG. 1, FIG. 2, FIG. 4, or FIG. 5; for details, refer to the related description in the foregoing embodiments, which is not repeated here.
The embodiments of the present disclosure further provide a computer-readable non-volatile storage medium storing computer instructions that, when executed by a processor, implement the methods described in the foregoing embodiments.
As the apparatus embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the corresponding parts of the description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts between the embodiments, reference can be made to one another.
Those skilled in the art should understand that the embodiments of the present disclosure can be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The embodiments of the present disclosure are described with reference to flowcharts and/or block diagrams of the methods, terminal devices (systems), and computer program products according to the embodiments of the present disclosure. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device create means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions can also be loaded onto a computer or other programmable data processing terminal device, so that a series of operation steps are executed on the computer or other programmable terminal device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the embodiments of the present disclosure have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present disclosure.
Finally, it should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the statement "including a ..." does not exclude the existence of additional identical elements in the process, method, article, or terminal device including the element.
The photographing method and photographing apparatus provided by the present disclosure have been introduced in detail above. Specific examples are used herein to illustrate the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of the present disclosure. In summary, the contents of this specification should not be construed as limiting the present disclosure.

Claims (27)

  1. A photographing method, comprising:
    collecting at least two frames of candidate image data when a photographing operation is performed;
    extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match; and
    splicing the target region data into target image data.
  2. The method according to claim 1, wherein the extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match comprises:
    segmenting each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
    calculating an ambiguity for each candidate region data;
    querying at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data; and
    in response to the feature region data belonging to at least two frames of candidate image data, extracting, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data.
  3. The method according to claim 2, wherein the querying at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data, comprises:
    for each segmentation manner, selecting, from the candidate region data located in the same region, the candidate region data with the smallest ambiguity;
    for each segmentation manner, calculating the sum of the ambiguities of the at least two candidate region data whose regions match;
    comparing all the sums of ambiguities across all segmentation manners; and
    selecting, as the feature region data, the at least two candidate region data with the smallest sum of ambiguities and whose regions match.
  4. The method according to claim 2, wherein the segmentation manner comprises at least one of the following:
    segmentation into a left half and a right half;
    segmentation into an upper half and a lower half.
  5. The method according to any one of claims 1 to 4, wherein the splicing the target region data into target image data comprises:
    extracting feature points from each target region data;
    matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points;
    calculating a transformation manner between the successfully matched feature points; and
    splicing the target region data into the target image data according to the transformation manner.
  6. The method according to claim 5, wherein the matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points comprises:
    generating descriptors for the feature points of the target region data;
    calculating the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
    calculating the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance; and
    in response to the ratio being less than a preset threshold, determining that the feature points are successfully matched.
  7. The method according to claim 5, wherein after the matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points, the splicing the target region data into target image data further comprises:
    removing mismatched feature points from the successfully matched feature points by using a preset second matching manner.
  8. The method according to claim 5, wherein before the extracting feature points from each target region data, the splicing the target region data into target image data further comprises:
    performing downsampling processing on the feature region data according to preset sampling parameters;
    and before the splicing the target region data into the target image data according to the transformation manner, the splicing the target region data into target image data further comprises:
    converting the transformation manner according to the sampling parameters.
  9. The method according to any one of claims 1 to 8, further comprising:
    when the photographing operation is performed, invoking a preset sensor to measure jitter data; and
    judging whether the jitter data meets a preset jitter condition;
    wherein the extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match comprises:
    in response to the jitter condition being met, extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet the preset blur condition, whose partial regions overlap, and whose regions match;
    and wherein after the splicing the target region data into target image data, the method further comprises:
    judging whether the splicing region in the target image data meets a preset splicing condition; and
    in response to the splicing condition being met, outputting the target image data.
  10. The method according to claim 9, further comprising:
    in response to the jitter condition not being met or the splicing condition not being met, outputting candidate image data meeting a preset image condition.
  11. The method according to claim 10, wherein the outputting candidate image data meeting a preset image condition comprises:
    calculating the ambiguities of the candidate image data; and
    outputting the candidate image data with the smallest ambiguity.
  12. The method according to any one of claims 9 to 11, wherein there are multiple pieces of jitter data, and the judging whether the jitter data meets a preset jitter condition comprises:
    calculating multiple individual jitter values using the multiple pieces of jitter data;
    calculating the average of the multiple individual jitter values as an overall jitter value;
    judging whether the overall jitter value is within a preset jitter range;
    in response to the overall jitter value being within the preset jitter range, determining that the preset jitter condition is met; and
    in response to the overall jitter value not being within the preset jitter range, determining that the preset jitter condition is not met.
  13. The method according to claim 9, wherein the judging whether the splicing region in the target image data meets a preset splicing condition comprises:
    determining the splicing region in the target image data;
    calculating, in the target image data, a first gray value of pixels located on one side of the splicing region and a second gray value of pixels located on the other side of the splicing region;
    calculating the gray difference between the first gray value and the second gray value;
    judging whether the gray difference is less than a preset threshold;
    in response to the gray difference being less than the preset threshold, determining that the preset splicing condition is met; and
    in response to the gray difference being not less than the preset threshold, determining that the preset splicing condition is not met.
  14. A mobile device, comprising a camera, a memory, and a processor, wherein:
    the memory, in communication with the camera and the processor, is configured to store data collected by the camera and computer instructions; and
    the processor is configured to execute the computer instructions to:
    obtain at least two frames of candidate image data collected by the camera when a photographing operation is performed;
    extract, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match; and
    splice the target region data into target image data.
  15. The mobile device according to claim 14, wherein the extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match comprises:
    segmenting each frame of candidate image data into at least two candidate region data according to a preset segmentation manner;
    calculating an ambiguity for each candidate region data;
    querying at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data; and
    in response to the feature region data belonging to at least two frames of candidate image data, extracting, from the candidate image data to which each feature region data belongs, target region data at least containing the feature region data.
  16. The mobile device according to claim 15, wherein the querying at least two candidate region data whose ambiguities meet the preset blur condition and whose regions match, as feature region data, comprises:
    for each segmentation manner, selecting, from the candidate region data located in the same region, the candidate region data with the smallest ambiguity;
    for each segmentation manner, calculating the sum of the ambiguities of the at least two candidate region data whose regions match;
    comparing all the sums of ambiguities across all segmentation manners; and
    selecting, as the feature region data, the at least two candidate region data with the smallest sum of ambiguities and whose regions match.
  17. The mobile device according to claim 15, wherein the segmentation manner comprises at least one of the following:
    segmentation into a left half and a right half;
    segmentation into an upper half and a lower half.
  18. The mobile device according to any one of claims 14 to 17, wherein the splicing the target region data into target image data comprises:
    extracting feature points from each target region data;
    matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points;
    calculating a transformation manner between the successfully matched feature points; and
    splicing the target region data into the target image data according to the transformation manner.
  19. The mobile device according to claim 18, wherein the matching the feature points of the target image data by using a preset first matching manner to obtain successfully matched feature points comprises:
    generating descriptors for the feature points of the target region data;
    calculating the nearest-neighbor distance and the second-nearest-neighbor distance between the descriptors;
    calculating the ratio between the nearest-neighbor distance and the second-nearest-neighbor distance; and
    in response to the ratio being less than a preset threshold, determining that the feature points are successfully matched.
  20. The mobile device according to claim 18, wherein the processor is further configured to execute the computer instructions to:
    after the feature points of the target image data are matched by using the preset first matching manner to obtain the successfully matched feature points, remove mismatched feature points from the successfully matched feature points by using a preset second matching manner.
  21. The mobile device according to claim 18, wherein the processor is further configured to execute the computer instructions to:
    before the feature points are extracted from each target region data, perform downsampling processing on the feature region data according to preset sampling parameters; and
    before the target region data is spliced into the target image data according to the transformation manner, convert the transformation manner according to the sampling parameters.
  22. The mobile device according to any one of claims 14 to 21, wherein the processor is further configured to execute the computer instructions to:
    when the camera performs the photographing operation, invoke a preset sensor to measure jitter data; and
    judge whether the jitter data meets a preset jitter condition;
    wherein the extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match comprises:
    in response to the jitter condition being met, extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet the preset blur condition, whose partial regions overlap, and whose regions match;
    and after the target region data is spliced into the target image data, judge whether the splicing region in the target image data meets a preset splicing condition, and output the target image data in response to the splicing condition being met.
  23. The mobile device according to claim 22, wherein the processor is further configured to execute the computer instructions to:
    in response to the jitter condition not being met or the splicing condition not being met, output candidate image data meeting a preset image condition.
  24. The mobile device according to claim 23, wherein the outputting candidate image data meeting a preset image condition comprises:
    calculating the ambiguities of the candidate image data; and
    outputting the candidate image data with the smallest ambiguity.
  25. The mobile device according to any one of claims 22 to 24, wherein there are multiple pieces of jitter data, and the judging whether the jitter data meets a preset jitter condition comprises:
    calculating multiple individual jitter values using the multiple pieces of jitter data;
    calculating the average of the multiple individual jitter values as an overall jitter value;
    judging whether the overall jitter value is within a preset jitter range;
    in response to the overall jitter value being within the preset jitter range, determining that the preset jitter condition is met; and
    in response to the overall jitter value not being within the preset jitter range, determining that the preset jitter condition is not met.
  26. The mobile device according to claim 22, wherein the judging whether the splicing region in the target image data meets a preset splicing condition comprises:
    determining the splicing region in the target image data;
    calculating, in the target image data, a first gray value of pixels located on one side of the splicing region and a second gray value of pixels located on the other side of the splicing region;
    calculating the gray difference between the first gray value and the second gray value;
    judging whether the gray difference is less than a preset threshold;
    in response to the gray difference being less than the preset threshold, determining that the preset splicing condition is met; and
    in response to the gray difference being not less than the preset threshold, determining that the preset splicing condition is not met.
  27. A machine-readable non-volatile storage medium having stored thereon computer instructions that, when executed by a processor, implement:
    obtaining at least two frames of candidate image data collected when a photographing operation is performed;
    extracting, from the at least two frames of candidate image data, at least two target region data whose ambiguities meet a preset blur condition, whose partial regions overlap, and whose regions match; and
    splicing the target region data into target image data.

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201810274308.3 2018-03-29
CN201810274313.4 2018-03-29
CN201810274308.3A CN108322658B (zh) 2018-03-29 2018-03-29 Method and apparatus for photographing
CN201810274313.4A CN108668075A (zh) 2018-03-29 2018-03-29 Photographing method and apparatus



