CN105827963A - Scene change detection method during shooting process and mobile terminal - Google Patents

Scene change detection method during shooting process and mobile terminal

Info

Publication number
CN105827963A
Authority
CN
China
Prior art keywords
local image
pixels
block
image
preview image
Prior art date
Legal status
Granted
Application number
CN201610169640.4A
Other languages
Chinese (zh)
Other versions
CN105827963B (en)
Inventor
黄创杰
万美君
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201610169640.4A priority Critical patent/CN105827963B/en
Publication of CN105827963A publication Critical patent/CN105827963A/en
Application granted granted Critical
Publication of CN105827963B publication Critical patent/CN105827963B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/68 Noise processing, e.g. detecting, correcting, reducing or removing noise applied to defects
    • H04N 25/683 Noise processing applied to defects by defect estimation performed on the scene signal, e.g. real time or on the fly detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a scene change detection method for use during shooting, and belongs to the field of communication technology. The method comprises the steps of: obtaining a first preview image and a second preview image acquired by a camera; determining a first local image and a second local image from the first and second preview images; and performing gray-value matching on the first and second local images to determine whether the scene has changed, wherein the first and second preview images are both Y-channel images. Because the method only needs to perform the matching operation on local regions of the preview images, the amount of image-matching computation is reduced and the efficiency of scene change detection is improved. Moreover, using Y-channel data for the gray-value matching further reduces the complexity of the matching operation, so that scene change detection can be performed quickly and in real time during shooting, solving the problem that existing image scene change detection is too inefficient to run in real time while shooting.

Description

Scene change detection method during shooting and mobile terminal
Technical field
The present invention relates to the field of communication technology, and in particular to a scene change detection method used during shooting and a mobile terminal.
Background technology
While a mobile terminal is taking pictures or recording video, it is necessary to detect whether the scene in the images acquired by its camera has changed. For example, during image acquisition for a photo or a video, the focal length, light sensitivity and other parameters frequently need to be adjusted because the scene in the images captured by the camera has changed.
At present, image scene change is detected either by extracting motion vector information for corresponding positions of partial images from a continuous image sequence and detecting the change of the image scene from the change of the motion vectors, or by extracting basic attribute information of partial images from a continuous image sequence, such as gradient histograms and hue histograms, and detecting whether the image scene has changed from the matching degree of each attribute between the acquired images. These existing image scene detection methods are computationally intensive, so scene detection is inefficient and cannot meet the demand of mobile phone users for fast, real-time scene change detection while taking pictures.
Summary of the invention
The embodiments of the present invention provide a scene change detection method for use during shooting and a mobile terminal, to solve the problem in the prior art that computing motion vectors, gradient histograms, hue histograms and similar information is computationally intensive, making detection inefficient and preventing fast, real-time scene change detection while shooting.
In a first aspect, an embodiment of the present invention provides a scene change detection method for use during shooting, applied to a mobile terminal having a camera, the method comprising:
obtaining a first preview image and a second preview image acquired by the camera;
determining a first local image and a second local image from the first preview image and the second preview image;
performing gray-value matching on the first local image and the second local image to determine whether the scene has changed;
wherein the first preview image and the second preview image are both Y-channel images.
In a second aspect, an embodiment of the present invention further provides a mobile terminal comprising a camera, the mobile terminal further comprising:
an image acquisition module, configured to obtain a first preview image and a second preview image acquired by the camera;
an image determination module, configured to determine a first local image and a second local image from the first and second preview images obtained by the image acquisition module;
a scene change determination module, configured to perform gray-value matching on the first and second local images determined by the image determination module, to determine whether the scene has changed;
wherein the first preview image and the second preview image are both Y-channel images.
Thus, in the embodiments of the present invention, a first preview image and a second preview image acquired by the camera are obtained; a first local image and a second local image are determined from them; gray-value matching is then performed on the two local images to determine whether the scene has changed, the two preview images both being Y-channel images. Compared with the prior art, the embodiments only need to perform the matching operation on local regions of the preview images, which reduces the amount of image-matching computation and improves the efficiency of scene change detection; furthermore, performing the gray-value matching on Y-channel data further reduces the complexity of the matching operation, so that scene change detection can be carried out quickly and in real time during the shooting preview.
Accompanying drawing explanation
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the scene change detection method of Embodiment 1 of the present invention;
Fig. 2 is a schematic diagram of pixel-block division in the scene change detection method of Embodiment 1;
Fig. 3 is a flowchart of the gray-value matching performed on the local images in the scene change detection method of Embodiment 1;
Fig. 4 is a first structural diagram of the mobile terminal of Embodiment 2 of the present invention;
Fig. 5 is a second structural diagram of the mobile terminal of Embodiment 2;
Fig. 6 is a structural diagram of the mobile terminal of Embodiment 3 of the present invention;
Fig. 7 is a structural diagram of the mobile terminal of Embodiment 4 of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment one:
This embodiment provides a scene change detection method for use during shooting, applied to a mobile terminal having a camera. As shown in Fig. 1, the method comprises steps 101 to 103.
Step 101: obtain a first preview image and a second preview image acquired by the camera.
In the embodiments of the present invention, the image data used is YUV-format image data obtained after the raw image captured by the smartphone has undergone image processing such as dead-pixel removal, gamma correction, color correction, color enhancement and denoising; the Y-channel image data is then extracted for matching. The first preview image and the second preview image are both Y-channel images.
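As a minimal Python sketch of this step, assuming the preview frames are delivered as planar YUV420 byte buffers (e.g. I420 or NV21) whose first width x height bytes form the Y plane; the function name and the use of NumPy are illustrative assumptions, not part of the patent:

    import numpy as np

    def extract_y_channel(yuv_frame: bytes, width: int, height: int) -> np.ndarray:
        """Return the Y (luma) plane of a planar YUV420 preview frame.

        Assumes a layout (I420/NV21) whose first width*height bytes are
        the Y plane; the chroma data that follows is ignored for matching.
        """
        y = np.frombuffer(yuv_frame, dtype=np.uint8, count=width * height)
        return y.reshape(height, width)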
The technical solution is described in detail below using, as an example, an application scenario in which the present invention is applied to focusing during photographing on a smartphone.
After the smartphone starts its camera application, it enters preview mode to capture and display images, and then auto-focuses on the objects in the captured images, or focuses manually when the user taps a person or object in the image. During the preview, the photographed subject often moves out of the focus range, so scene changes in the captured images must be detected in real time during the preview before the focal length can be adjusted in time.
In a specific implementation, the first preview image and the second preview image to be detected are first selected from the image sequence acquired by the camera, the frame number of the first preview image being smaller than that of the second preview image. For still photography, the first preview frame after focusing can be selected as the first preview image used to judge scene change, and the current frame acquired later in the preview is selected in real time as the second preview image; by comparing the content of the second preview image with that of the first, it is judged whether the image scene has changed, so that the camera application can adjust the focal length in time according to the result. Generally, the first preview frame after focusing starts represents the scene the user wants to shoot, so this embodiment matches the first preview frame against the current preview frame. For application scenarios whose content changes faster than ordinary photographing, such as video capture, the number of frames between the selected second preview image and the first preview image is smaller than a predetermined number; in a specific implementation this interval may be 0 frames, and the predetermined number is set according to the detection requirements of the scenario. Matching two preview frames separated by fewer than the predetermined number of frames makes it easier to detect scene changes in dynamic scenes in time.
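A rough Python sketch of this frame-selection logic follows; the function name, the fast_scene flag and the max_gap parameter are assumptions for illustration, not terms from the patent:

    def select_preview_pair(frames, fast_scene=False, max_gap=0):
        """Pick the first/second preview images to compare.

        For still photography the reference is the first preview frame
        after focusing and it is compared against the current frame.
        For fast-changing content (e.g. video) the two frames compared
        are at most max_gap frames apart (0 = consecutive frames).
        """
        if fast_scene:
            return frames[-(max_gap + 2)], frames[-1]
        return frames[0], frames[-1]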
Step 102: determine a first local image and a second local image from the first preview image and the second preview image.
In a specific implementation, using the full preview image data would make the matching operation too slow. To reduce the amount of image-matching computation and improve scene-detection efficiency, it is preferable to match only part of the image data of the first and second preview images; a first local image and a second local image are therefore determined from the two preview images. The first local image is an image region of a preset size centred on the centre pixel of the first preview image, and the second local image is an image region of the same preset size centred on the centre pixel of the second preview image, the preset size being c x d pixels, where c and d are positive integers. As shown in Fig. 2, taking the preview image 201 captured by a smartphone as an example, with the upper-left corner of the image as the coordinate origin, a rectangular region 202 is determined centred on the centre pixel p of the image; its width is d pixels and its height is c pixels, where c and d are positive integers, d is smaller than the pixel width of the preview image and c is smaller than its pixel height. Preferably, given the resolution of images captured by current smartphones and the real-time requirement of scene detection, d is set to 480 and c to 640, i.e. the image data in the central 640 x 480 region of the first and second preview images is selected for scene change detection; this suits most application scenarios while speeding up the scene-detection computation.
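A minimal Python sketch of this centred crop, using the example values c = 640 and d = 480 from the text (the function name and the requirement that the crop fit inside the preview are assumptions):

    def center_crop(y_plane, c=640, d=480):
        """Return the c x d (height x width) region centred on the centre
        pixel of the preview image; c and d must not exceed the preview's
        height and width."""
        h, w = y_plane.shape
        top = h // 2 - c // 2
        left = w // 2 - d // 2
        return y_plane[top:top + c, left:left + d]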
In other embodiments of the invention, the first local image and the second local image may instead be determined from a region of the first preview image set manually by the user: the first local image is the region of the first preview image specified by the user, and the second local image is the image at the corresponding position in the second preview image. For example, a touch gesture by the user on the first preview image is detected and the user-specified region is determined from the gesture; the first local image is then the image of the first preview image inside that region, whose position and size are determined by the gesture (in a specific implementation, a two-finger or three-finger swipe, for instance). As another example, a tap by the user on the first preview image is detected and a region of preset size centred on the tap position is determined; the first local image is the image of the first preview image inside that region. A configuration interface may also be provided to guide the user to set the region for scene change detection, and the image of the first preview image inside that region is used as the first local image. The present invention does not limit the specific method used to set the specified region. Matching a region specified by the user is more targeted and can improve the accuracy of scene change detection.
After the first local image is determined from the user-specified region, the second local image is the image in the second preview image at the position corresponding to the first local image.
In a specific implementation, the specified region may be rectangular or another regular shape such as a circle. For ease of calculation, the following embodiments describe the technical solution using a rectangular specified region.
Step 103: perform gray-value matching on the first local image and the second local image to determine whether the scene has changed.
In a specific implementation, the present invention compares the images using pixel blocks as the basic unit. As shown in Fig. 3, performing gray-value matching on the first and second local images to determine whether the scene has changed comprises steps 1031 to 1037.
Step 1031: divide the first local image and the second local image into a x b pixel blocks respectively.
The a x b pixel blocks used for matching in the first and second local images are spaced several a x b pixel blocks apart, and a and b are positive integers; a and b may be equal or different.
Image-processing pipelines usually rely on hardware acceleration, and hardware registers are read and written in units of bytes, one byte corresponding to 8 bits. To further improve computation speed, and weighing the amount of computation against subsequent optimisation, embodiments of the invention choose a and b to be positive integers divisible by 8; for example, with a = b = 8 the size of a pixel block is a x b = 64 pixels.
One way of dividing the first and second local images into a x b pixel blocks, with several a x b pixel blocks between the blocks used for matching, is as follows: divide the first local image and the second local image each into a second number of adjacently arranged a x b pixel blocks; then, according to a preset position-distribution rule, down-sample the second number of a x b pixel blocks of the first local image to obtain a first number of a x b pixel blocks, and down-sample the second number of a x b pixel blocks of the second local image according to the same rule to obtain the first number of a x b pixel blocks, where the first number is smaller than the second number. The following preferred example, with a 640 x 480 specified region and 8 x 8 pixel blocks, further illustrates how the reference pixel blocks and the pixel blocks to be compared are determined.
As shown in Fig. 2, the first local image 202 of the first preview image 201 and the second local image 204 of the second preview image 203 are first each divided into the second number of adjacently arranged pixel blocks of the preset size. The first local image 202 is divided from left to right and top to bottom into adjacent 8 x 8 pixel blocks such as blocks 2021, 2022, 2023 and 2024, giving 80 rows of blocks with 60 adjacent blocks per row, i.e. 4800 blocks in total (the second number is 4800). The second local image 204 is divided in the same way into 4800 blocks such as blocks 2041, 2042, 2043 and 2044. The blocks obtained from the first local image correspond one-to-one in position with the blocks obtained from the second local image 204; for example, block 2021 corresponds in position to block 2041, and block 2023 to block 2043.
Then, according to the preset position-distribution rule, the second number of blocks obtained from the first local image 202 are down-sampled to obtain the first number of blocks to be matched, such as blocks 2021, 2023 and 2024 in Fig. 2, and the second number of blocks obtained from the second local image 204 are down-sampled according to the same rule to obtain the first number of blocks to be compared, such as blocks 2041, 2043 and 2044 in Fig. 2. The preset position-distribution rule includes: selecting one block out of every E + 1 blocks in each row of blocks, or one block out of every F + 1 blocks in each column of blocks, or both. Taking as an example the rule of selecting one block out of every E + 1 blocks in each row and one block out of every F + 1 blocks in each column, the first number M is computed as
M = (W x L) / ((E + 1) x a x (F + 1) x b),
where M is the number of pixel blocks to be matched, W is the width of the first local image, L is the length of the first local image, E is the number of blocks skipped between selected blocks in the width direction, F is the number of blocks skipped between selected blocks in the length direction, a is the width of an a x b block and b is its length; E and F may be equal or different. In a specific implementation, if E and F both take the value 3 and the rule selects one block out of every E + 1 blocks per row and one out of every F + 1 blocks per column, down-sampling the 4800 8 x 8 blocks of the first local image according to this rule yields 300 blocks to be matched; likewise, down-sampling the 4800 blocks of the second local image according to the same rule yields 300 blocks to be matched. To reduce the amount of matching computation without affecting the accuracy of scene detection, E and F are preferably both smaller than 10.
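The following Python sketch shows one way to implement this division and down-sampling rule (keeping one a x b block out of every E + 1 along the width and every F + 1 along the height); the function name and the return format are assumptions for illustration:

    def select_match_blocks(local_img, a=8, b=8, E=3, F=3):
        """Divide the local image into adjacent a x b pixel blocks
        (a = b = 8 in the text's example) and keep one block out of every
        (E + 1) along the width and (F + 1) along the height, per the
        preset position-distribution rule with E = F = 3.
        Returns the (row, col) top-left pixel coordinates of the kept blocks."""
        h, w = local_img.shape
        coords = []
        for top in range(0, h - a + 1, a * (F + 1)):       # skip F block rows
            for left in range(0, w - b + 1, b * (E + 1)):  # skip E block columns
                coords.append((top, left))
        return coords

With a 640 x 480 local image and E = F = 3, this yields 20 x 15 = 300 blocks, matching the count worked out in the text.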
Another way of determining the blocks used for matching is to divide the first and second local images into a x b pixel blocks spaced several pixels apart. Specifically: set up a plane rectangular coordinate system with the upper-left corner of the first local image as the origin, down-sample the pixels of the first local image at an interval of a fifth number of pixels, and take each sampled pixel as the upper-left vertex (or the centre) of an a x b pixel block, giving M blocks; the fifth number is larger than a and b, and is preferably a positive integer divisible by 8. Taking the fifth number as 32 with a = b = 8, dividing a 640 x 480 first local image in this way yields 300 8 x 8 pixel blocks, i.e. M = 300. The second local image is divided into M blocks to be matched using the same method as for the first local image.
Only two ways of determining the a x b pixel blocks to be matched are listed above; the blocks whose data are extracted for scene change detection may also be determined by other methods, which are not enumerated here. It should be understood that the above examples are only intended to make the solution easier to understand and implement, and should not be taken as limiting the invention.
In the embodiments of the invention, a first preview image and a second preview image acquired by the camera are obtained; a first local image and a second local image are determined from them; the two local images are divided into a x b pixel blocks and only some of those blocks are selected for gray-value matching to determine whether the scene has changed, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection.
Step 1032: calculate an image match threshold from the first preview image.
Before the pixel blocks are matched, an image match threshold needs to be calculated from the first preview image. In a specific implementation there are two ways of calculating it: first, calculating the match threshold from the average gray value of the first preview image and the size of the pixel blocks; second, calculating the match threshold from the average gray value of the first preview image alone.
In the first way, the image match threshold is calculated from the first preview image by: averaging the gray values of all pixels of the first preview image to obtain an average gray value; obtaining the pixel size value of the a x b pixel block; and computing the image match threshold y from the block size value size and the average gray value avg according to a preset formula, where y is the image match threshold, size is the pixel size value of the a x b block and avg is the average gray value.
In the second way, the image match threshold is calculated from the first preview image by: averaging the gray values of all pixels of the first preview image to obtain an average gray value; and computing the image match threshold y from the average gray value avg according to a preset formula, where y is the image match threshold and avg is the average gray value. The average gray value is computed as
avg = (1 / (W' x L')) x sum of p(j) over all pixels j of the first preview image,
where W' is the width of the first preview image, L' is its length, p(j) is the gray value of the pixel at position j in the first preview image, and avg is the average gray value of the first preview image.
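A Python sketch of the threshold computation. The average gray value avg follows the formula above; the exact expression for y is given only as an image in the original publication and is not reproduced here, so the scaling factor k below is purely an assumption used to show the shape of the computation:

    def image_match_threshold(first_preview, block_size=None, k=0.1):
        """Adaptive match threshold computed from the first preview image.

        avg = (1 / (W' * L')) * sum of all pixel gray values, per the text.
        The combination of avg and block_size into y is an assumed form
        (k is hypothetical); the patent's own expression is not given here.
        """
        avg = float(first_preview.mean())
        if block_size is None:           # second formulation: y from avg only
            return k * avg
        return k * block_size * avg      # first formulation: y from size and avg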
Step 1033: calculate the matching count M from the first local image.
Before the pixel blocks are matched, the matching count M also needs to be calculated from the first local image. In a specific implementation, the length L and width W of the first local image are first obtained; the matching count M equals the number of a x b pixel blocks obtained by dividing the first local image, and is computed as
M = (W x L) / ((E + 1) x a x (F + 1) x b),
where M is the matching count, W is the width of the first local image, L is its length, E is the number of blocks skipped between selected blocks in the width direction, F is the number of blocks skipped between selected blocks in the length direction, a is the width of an a x b block and b is its length; E and F may be equal or different.
Step 1034: determine the number of non-matching pixel blocks from the a x b pixel blocks of the first local image and the second local image.
In step 1034, if the match threshold was calculated in the first way above, determining the number of non-matching pixel blocks from the blocks of the two local images comprises: calculating the total gray value of each a x b pixel block in the first local image and in the second local image; subtracting the total gray values of the blocks at corresponding positions in the first and second local images to obtain total gray value differences; comparing the absolute value of each total gray value difference with the image match threshold; and, whenever the absolute value of a total gray value difference exceeds the image match threshold, incrementing the count of non-matching blocks by one.
If step 1031 determined 300 blocks to be matched in the first local image and 300 in the second, the matching count calculated in step 1033 is 300, and step 1034 computes the total gray values of the 300 blocks of the first local image and of the 300 blocks of the second local image; the differences between the total gray values of the blocks at corresponding positions in the two local images are then computed to obtain the total gray value differences.
Let G1(i) denote the total gray value of block i in the first local image and G2(i) the total gray value of block i in the second local image, where i identifies the block position, so that G1(i) and G2(i) are the total gray values of corresponding blocks in the first and second local images. Subtracting the total gray values of the corresponding a x b blocks means computing D(i) = G1(i) - G2(i) for each pair of corresponding blocks. If step 1031 determined that the first local image has 300 a x b blocks, i.e. the matching count determined in step 1033 is M = 300, then 300 total gray value differences D(i), 0 < i <= 300, are obtained. The absolute value of each of the M total gray value differences is then compared with the image match threshold; whenever it exceeds the threshold, the count of non-matching blocks is incremented by one, and finally the number of non-matching blocks is obtained.
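A Python sketch of this matching step using per-block total gray values G1(i) and G2(i); the block coordinates are assumed to come from the block-selection sketch given earlier:

    def count_unmatched_blocks(local1, local2, coords, a=8, b=8, threshold=0.0):
        """Count the blocks whose total gray values differ by more than the
        image match threshold: |D(i)| = |G1(i) - G2(i)| > y."""
        unmatched = 0
        for top, left in coords:
            g1 = int(local1[top:top + a, left:left + b].sum())   # G1(i)
            g2 = int(local2[top:top + a, left:left + b].sum())   # G2(i)
            if abs(g1 - g2) > threshold:
                unmatched += 1
        return unmatched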
In step 1034, if the match threshold was calculated in the second way above, determining the number of non-matching pixel blocks from the blocks of the two local images comprises: calculating the average gray value of each a x b pixel block in the first local image and in the second local image; subtracting the average gray values of the blocks at corresponding positions in the two local images to obtain average gray value differences; comparing the absolute value of each average gray value difference with the image match threshold; and, whenever the absolute value of an average gray value difference exceeds the image match threshold, incrementing the count of non-matching blocks by one. If step 1031 determined 300 blocks to be matched in each local image, the matching count calculated in step 1033 is 300, and step 1034 computes the average gray values of the 300 blocks of the first local image and of the 300 blocks of the second local image, then computes the differences between the average gray values of the corresponding blocks to obtain the average gray value differences.
Let avg1(i) denote the average gray value of block i in the first local image and avg2(i) the average gray value of block i in the second local image, where i identifies the block position, so that avg1(i) and avg2(i) are the average gray values of corresponding blocks in the two local images. Subtracting the average gray values of corresponding a x b blocks means computing D(i) = avg1(i) - avg2(i) for each pair of corresponding blocks. With 300 a x b blocks in the first local image, i.e. M = 300 as determined in step 1033, 300 average gray value differences D(i), 0 < i <= 300, are obtained. The absolute value of each of the M average gray value differences is then compared with the image match threshold; whenever it exceeds the threshold, the count of non-matching blocks is incremented by one, and finally the number of non-matching blocks is obtained.
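The same step with the second threshold formulation, comparing per-block average gray values instead of totals (again a sketch with assumed names):

    def count_unmatched_blocks_mean(local1, local2, coords, a=8, b=8, threshold=0.0):
        """Variant using per-block average gray values: a block is
        non-matching when |avg1(i) - avg2(i)| exceeds the threshold."""
        unmatched = 0
        for top, left in coords:
            m1 = float(local1[top:top + a, left:left + b].mean())
            m2 = float(local2[top:top + a, left:left + b].mean())
            if abs(m1 - m2) > threshold:
                unmatched += 1
        return unmatched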
Step 1035: judge whether the number of non-matching pixel blocks is greater than M/3. If so, perform step 1036; if not, perform step 1037.
Finally, the matching result is judged: if the number of non-matching pixel blocks exceeds a preset proportion of the matching count, it is determined that the scene of the second preview image has changed relative to the first preview image and step 1036 is performed; otherwise it is determined that the scenes of the second and first preview images are consistent and step 1037 is performed. The preset proportion lies between 0 and 1; the closer it is to 1, the larger the scene change must be before it can be detected. To ensure the sensitivity of scene detection, the preset proportion used in this embodiment is preferably 1/3.
Step 1036: when the number of non-matching pixel blocks is greater than M/3, determine that the scene has changed.
When the number of non-matching blocks exceeds M/3, the non-matching blocks already exceed the preset proportion of the total number of blocks, so it is determined that the scene of the first and second preview images has changed. Taking a matching count M of 300 as an example, i.e. 300 pixel blocks of the second preview image need to be matched, if the number of non-matching blocks exceeds 100, it is determined that the scene of the first and second preview images has changed.
Step 1037: when the number of non-matching pixel blocks is less than or equal to M/3, determine that the scene has not changed.
When the number of non-matching blocks is less than or equal to M/3, the non-matching blocks remain below the preset proportion of the total number of blocks, so it is determined that the scenes of the first and second preview images are consistent. Again taking a matching count M of 300 as an example, i.e. 300 pixel blocks of the second preview image need to be matched, if the number of non-matching blocks is less than or equal to 100, it is determined that the scene of the first and second preview images has not changed.
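Finally, the decision itself as a short sketch (the M/3 threshold reflects the preset ratio 1/3 used in this embodiment; the function name is an assumption):

    def scene_changed(unmatched, match_count):
        """Scene has changed when the number of non-matching blocks
        exceeds one third of the M blocks compared."""
        return unmatched > match_count / 3

    # Example from the text: M = 300 blocks compared; more than 100
    # non-matching blocks means the scene has changed.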
With the scene change detection method during shooting of the embodiments of the present invention, a first preview image and a second preview image acquired by the camera are obtained; a first local image and a second local image are determined from them; gray-value matching is then performed on the two local images to determine whether the scene has changed, the two preview images both being Y-channel images. Compared with the prior art, the embodiments only need to perform the matching operation on local regions of the preview images, which reduces the amount of image-matching computation and improves the efficiency of scene change detection; using Y-channel data for the gray-value matching further reduces the complexity of the matching operation, so that scene change detection can be carried out quickly and in real time during the shooting preview; and by dynamically calculating the gray threshold, the match threshold for scene detection is adapted automatically, which can improve the accuracy of scene change detection.
Embodiment two:
Correspondingly, another embodiment of the present invention discloses a mobile terminal 400. The mobile terminal 400 includes a camera (not shown) and, as shown in Fig. 4, further includes:
an image acquisition module 410, configured to obtain a first preview image and a second preview image acquired by the camera;
an image determination module 420, configured to determine a first local image and a second local image from the first and second preview images obtained by the image acquisition module 410;
a scene change determination module 430, configured to perform gray-value matching on the first and second local images determined by the image determination module 420, to determine whether the scene has changed.
The image data used in the embodiments of the present invention is YUV-format image data obtained after the raw image captured by the smartphone has undergone image processing such as dead-pixel removal, gamma correction, color correction, color enhancement and denoising; the Y-channel image data is extracted for matching. The first preview image and the second preview image are both Y-channel images.
The mobile terminal 400 of the embodiment of the present invention obtains a first preview image and a second preview image acquired by the camera; determines a first local image and a second local image from them; and then performs gray-value matching on the two local images to determine whether the scene has changed, the two preview images both being Y-channel images. Compared with the prior art, the embodiment only needs to perform the matching operation on local regions of the preview images, which reduces the amount of image-matching computation and improves the efficiency of scene change detection; using Y-channel data for the gray-value matching further reduces the complexity of the matching operation, so that scene change detection can be carried out quickly and in real time during the shooting preview.
In a specific implementation, using the full preview image data would make the matching operation too slow, so to reduce the amount of image-matching computation and improve scene-detection efficiency it is preferable to match only part of the image data of the first and second preview images. One way of selecting the local images is: the first local image is an image region of a preset size centred on the centre pixel of the first preview image, and the second local image is an image region of the same preset size centred on the centre pixel of the second preview image, the preset size being c x d pixels with c and d positive integers. Selecting the image data of the central region for scene change detection suits most application scenarios while speeding up the scene-detection computation.
Another way of selecting the local images is: the first local image is the region of the first preview image specified by the user, and the second local image is the image at the corresponding position in the second preview image. Matching a region specified by the user is more targeted and can improve the accuracy of scene change detection.
In a specific implementation, as shown in Fig. 5, the scene change determination module 430 further comprises:
a pixel block division unit 4301, configured to divide the first local image and the second local image determined by the image determination module 420 into a x b pixel blocks respectively;
a match threshold calculation unit 4302, configured to calculate an image match threshold from the first preview image obtained by the image acquisition module 410;
a matching count calculation unit 4303, configured to calculate a matching count M from the first local image determined by the image determination module 420;
a matching unit 4304, configured to determine the number of non-matching pixel blocks from the a x b pixel blocks of the first local image and the second local image;
a judging and determining unit 4305, configured to determine that the scene has changed when the number of non-matching blocks determined by the matching unit 4304 is greater than M/3; and
to determine that the scene has not changed when the number of non-matching blocks determined by the matching unit 4304 is less than or equal to M/3;
wherein the a x b pixel blocks used for matching in the first and second local images are spaced several a x b pixel blocks apart, and a and b are positive integers.
The specific implementation of the scene change determination module 430 and of each unit it comprises can be found in the method embodiment and is not repeated here.
By selecting only some of the pixel blocks of the first and second local images for matching in order to judge scene changes, the amount of image-matching computation is effectively reduced and scene-detection efficiency is improved.
In a specific embodiment, the match threshold calculation unit 4302 is further configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; obtain the pixel size value of the a x b pixel block; and compute the image match threshold y from the block size value size and the average gray value avg according to a preset formula, where y is the image match threshold, size is the pixel size value of the a x b block and avg is the average gray value.
Correspondingly, the matching unit 4304 is further configured to: calculate the total gray value of each a x b pixel block in the first local image and in the second local image; subtract the total gray values of the blocks at corresponding positions in the two local images to obtain total gray value differences; compare the absolute value of each total gray value difference with the image match threshold; and increment the count of non-matching blocks by one whenever the absolute value of a total gray value difference exceeds the image match threshold.
In another specific embodiment of the present invention, the match threshold calculation unit 4302 is further configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; and compute the image match threshold y from the average gray value avg according to a preset formula, where y is the image match threshold and avg is the average gray value. Correspondingly, the matching unit 4304 is further configured to: calculate the average gray value of each a x b pixel block in the first local image and in the second local image; subtract the average gray values of the blocks at corresponding positions in the two local images to obtain average gray value differences; compare the absolute value of each average gray value difference with the image match threshold; and increment the count of non-matching blocks by one whenever the absolute value of an average gray value difference exceeds the image match threshold.
In the embodiments of the invention, by dynamically calculating the gray threshold, the match threshold for scene detection is adapted automatically, which can improve the accuracy of scene change detection.
In a specific implementation, the pixel block division unit 4301 may divide the first and second local images determined by the image determination module 420 into a x b pixel blocks as follows: divide the first local image and the second local image each into a second number of adjacently arranged a x b pixel blocks; then, according to a preset position-distribution rule, down-sample the second number of blocks of the first local image to obtain a first number of a x b pixel blocks, and down-sample the second number of blocks of the second local image according to the same rule to obtain the first number of a x b pixel blocks, where the first number is smaller than the second number. The matching count equals the first number. The matching count calculation unit 4303 is further configured to:
obtain the length and width of the first local image; and
compute the matching count M as M = (W x L) / ((E + 1) x a x (F + 1) x b), where M is the matching count, W is the width of the first local image, L is its length, E is the number of blocks skipped between selected blocks in the width direction, F is the number of blocks skipped between selected blocks in the length direction, a is the width of an a x b block and b is its length.
For different pixel-block division methods the specific way of calculating the matching count may differ; this is not repeated here.
The mobile terminal of the embodiment of the present invention uses the above modules to obtain a first preview image and a second preview image acquired by the camera; determines a first local image and a second local image from them; divides the two local images into a x b pixel blocks; and selects only some of those blocks for gray-value matching to determine whether the scene has changed, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection.
Embodiment three:
Fig. 6 is a block diagram of the mobile terminal of another embodiment of the present invention. The mobile terminal 600 shown in Fig. 6 includes at least one processor 601, a memory 602, at least one network interface 604, a user interface 603 and a camera assembly 606 that includes the camera. The components of the mobile terminal 600 are coupled together by a bus system 605. It can be understood that the bus system 605 is used to implement connection and communication between these components; besides a data bus, it also includes a power bus, a control bus and a status signal bus. For clarity of description, however, the various buses are all labelled as the bus system 605 in Fig. 6.
The user interface 603 may include a display, a keyboard or a pointing device (for example a mouse, a trackball, a touch-sensitive pad, a touch screen or a trackpad).
It can be understood that the memory 602 in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct Rambus RAM (DRRAM). The memory 602 of the systems and methods described herein is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 602 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 6021 and application programs 6022.
The operating system 6021 contains various system programs, such as a framework layer, a core library layer and a driver layer, for implementing various basic services and processing hardware-based tasks. The application programs 6022 contain various applications, such as a media player, a browser and an input method, for implementing various application services. A program implementing the method of the embodiments of the present invention may be contained in the application programs 6022.
In the embodiment of the present invention, the processor 601 works by calling the programs or instructions stored in the memory 602, specifically programs or instructions stored in the application programs 6022. The touch screen of the user interface 603 detects the user's operations on the application, for example the touch gesture with which the user sets the specified region. The processor 601 is configured to obtain a first preview image and a second preview image acquired by the camera; determine a first local image and a second local image from them; and perform gray-value matching on the first and second local images to determine whether the scene has changed, the first and second preview images both being Y-channel images.
The methods disclosed in the above embodiments of the present invention may be applied in, or implemented by, the processor 601. The processor 601 may be an integrated circuit chip with signal processing capability. During implementation, each step of the above methods may be completed by an integrated logic circuit of hardware in the processor 601 or by instructions in software form. The processor 601 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may reside in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory 602; the processor 601 reads the information in the memory 602 and completes the steps of the above methods in combination with its hardware.
It can be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by modules (for example procedures and functions) that perform the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be implemented inside or outside the processor.
The processor 601 is further configured to: divide the first local image and the second local image into a x b pixel blocks respectively; calculate an image match threshold from the first preview image; calculate a matching count M from the first local image; determine the number of non-matching pixel blocks from the a x b pixel blocks of the first and second local images; determine that the scene has changed when the number of non-matching blocks is greater than M/3; and determine that the scene has not changed when the number of non-matching blocks is less than or equal to M/3; wherein the a x b pixel blocks used for matching in the first and second local images are spaced several a x b pixel blocks apart, and a and b are positive integers.
Optionally, the processor 601 is configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; obtain the pixel-size value of the a×b pixel block; and calculate the image matching threshold according to a formula in which y is the image matching threshold, size is the pixel-size value of the a×b pixel block, and avg is the average gray value.
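The sketch below computes the two inputs this paragraph names, namely the average gray value of the whole first preview image and the pixel-size value of an a×b block. The way these are combined into y is not reproduced in this text, so the scaled product used here is purely an assumed placeholder for the patent's formula.

    import numpy as np

    def image_match_threshold(first_preview, a=8, b=8, k=0.1):
        # avg: average gray value over every pixel of the first preview image.
        avg = float(first_preview.mean())
        # size: pixel-size value of one a x b block.
        size = a * b
        # Assumed placeholder for the patent's y = f(size, avg); k is an arbitrary scale.
        return k * size * avg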
Correspondingly, the processor 601 is further configured to: calculate the total gray value of each a×b pixel block in the first local image and the second local image respectively; take the difference between the total gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain a total gray value difference; compare the absolute value of the total gray value difference with the image matching threshold; and increase the number of non-matching pixel blocks by one when the absolute value of the total gray value difference is greater than the image matching threshold.
Optionally, the processor 601 is configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; and calculate the image matching threshold according to a formula in which y is the image matching threshold and avg is the average gray value.
Correspondingly, the processor 601 is further configured to: calculate the average gray value of each a×b pixel block in the first local image and the second local image respectively; take the difference between the average gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain an average gray value difference; compare the absolute value of the average gray value difference with the image matching threshold; and increase the number of non-matching pixel blocks by one when the absolute value of the average gray value difference is greater than the image matching threshold.
Optionally, the processor 601 is configured to: obtain the length value and the width value of the first local image; and calculate the matching count M according to a formula in which M is the matching count, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks spaced between compared a×b pixel blocks in the width direction, F is the number of pixel blocks spaced between compared a×b pixel blocks in the length direction, a is the width of the a×b pixel block, and b is the length of the a×b pixel block.
Optionally, the first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, wherein the preset size is c×d pixel blocks, and c and d are positive integers.
Optionally, the first local image is a region image specified by the user in the first preview image, and the second local image is the image of the region at the corresponding position in the second preview image.
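A minimal sketch of this second way of choosing the local images, assuming the user-specified rectangle is available as plain pixel offsets obtained from the touch gesture:

    def user_region_pair(first_preview, second_preview, top, left, height, width):
        # The user marks a rectangle on the first preview; the region at the same
        # position is taken from the second preview (both are Y-channel arrays).
        first_local = first_preview[top:top + height, left:left + width]
        second_local = second_preview[top:top + height, left:left + width]
        return first_local, second_local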
The mobile terminal 600 is capable of implementing each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, the details are not repeated here.
Through the above modules, the mobile terminal 600 of the embodiment of the present invention only needs to perform the matching operation on a local image of the preview image, which reduces the computation load of image matching and improves the efficiency of scene change detection. Furthermore, gray value matching is performed on Y-channel data, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection, so that scene change detection can be performed quickly and in real time during the photo preview. By dynamically calculating the gray threshold, the matching threshold for scene detection is adaptively adjusted, which can improve the accuracy of scene change detection.
Embodiment four:
Fig. 7 is a structural schematic diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal in Fig. 7 may be a smart phone, a tablet computer, a personal digital assistant (PDA), a vehicle-mounted computer, or the like.
The mobile terminal in Fig. 7 includes a radio frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a processor 760, a camera assembly 750, an audio circuit 770, a WiFi (Wireless Fidelity) module 780 and a power supply 790, where the camera assembly 750 includes a camera.
The input unit 730 may be used to receive numeric or character information input by the user and to generate signal inputs related to the user settings and function control of the mobile terminal. Specifically, in the embodiment of the present invention, the input unit 730 may include a touch panel 731. The touch panel 731, also called a touch screen, can collect touch operations by the user on or near it (for example, operations by the user on the touch panel 731 with a finger, a stylus or any other suitable object or accessory) and drive a corresponding connecting device according to a preset program. Optionally, the touch panel 731 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 760, and can receive and execute commands sent by the processor 760. In addition, the touch panel 731 may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch panel 731, the input unit 730 may also include other input devices 732, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 740 may be used to display information input by the user or information provided to the user, and the various menu interfaces of the mobile terminal 700. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of an LCD, an organic light-emitting diode (OLED), or the like.
It should be noted that the touch panel 731 may cover the display panel 741 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 760 to determine the type of the touch event, and then the processor 760 provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of the application interface display area and the common control display area is not limited; they may be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application interface display area may be used to display the interface of an application. Each interface may contain interface elements such as an icon and/or a widget of at least one application. The application interface display area may also be an empty interface containing no content. The common control display area is used to display controls with a high usage rate, for example, application icons such as a settings button, an interface number, a scroll bar and a contacts icon.
The processor 760 is the control center of the mobile terminal 700. It connects the various parts of the whole smart phone through various interfaces and lines, and performs the various functions of the mobile terminal 700 and processes data by running or executing software programs and/or modules stored in the first memory 721 and calling data stored in the second memory 722, thereby monitoring the mobile terminal 700 as a whole. Optionally, the processor 760 may include one or more processing units.
In the embodiment of the present invention, by calling the software programs and/or modules stored in the first memory 721 and/or the data stored in the second memory 722, the processor 760 is configured to: obtain a first preview image and a second preview image collected by the camera; determine a first local image and a second local image according to the first preview image and the second preview image; and perform gray value matching on the first local image and the second local image to determine whether the scene has changed; wherein the first preview image and the second preview image are Y-channel images.
The processor 760 is further configured to: divide the first local image and the second local image into a×b pixel blocks respectively; calculate an image matching threshold according to the first preview image; calculate a matching count M according to the first local image; determine the number of non-matching pixel blocks according to each a×b pixel block in the first local image and the second local image; determine that the scene has changed when the number of non-matching pixel blocks is greater than a decision threshold; and determine that the scene has not changed when the number of non-matching pixel blocks is less than or equal to the decision threshold; wherein a plurality of a×b pixel blocks are spaced between the compared a×b pixel blocks in the first local image and the second local image, and a and b are positive integers.
Optionally, the processor 760 is configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; obtain the pixel-size value of the a×b pixel block; and calculate the image matching threshold according to a formula in which y is the image matching threshold, size is the pixel-size value of the a×b pixel block, and avg is the average gray value.
Correspondingly, the processor 760 is further configured to: calculate the total gray value of each a×b pixel block in the first local image and the second local image respectively; take the difference between the total gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain a total gray value difference; compare the absolute value of the total gray value difference with the image matching threshold; and increase the number of non-matching pixel blocks by one when the absolute value of the total gray value difference is greater than the image matching threshold.
Optionally, the processor 760 is configured to: average the gray values of all pixels of the first preview image to obtain an average gray value; and calculate the image matching threshold according to a formula in which y is the image matching threshold and avg is the average gray value.
Correspondingly, the processor 760 is further configured to: calculate the average gray value of each a×b pixel block in the first local image and the second local image respectively; take the difference between the average gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain an average gray value difference; compare the absolute value of the average gray value difference with the image matching threshold; and increase the number of non-matching pixel blocks by one when the absolute value of the average gray value difference is greater than the image matching threshold.
Optionally, the processor 760 is configured to: obtain the length value and the width value of the first local image; and calculate the matching count M according to a formula in which M is the matching count, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks spaced between compared a×b pixel blocks in the width direction, F is the number of pixel blocks spaced between compared a×b pixel blocks in the length direction, a is the width of the a×b pixel block, and b is the length of the a×b pixel block.
Optionally, the first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, wherein the preset size is c×d pixel blocks, and c and d are positive integers.
Optionally, the first local image is a region image specified by the user in the first preview image, and the second local image is the image of the region at the corresponding position in the second preview image.
It can be seen that, through the above modules, the mobile terminal of this embodiment only needs to perform the matching operation on a local image of the preview image, which reduces the computation load of image matching and improves the efficiency of scene change detection. Furthermore, gray value matching is performed on Y-channel data, which further reduces the complexity of the matching operation and improves the efficiency of scene change detection, so that scene change detection can be performed quickly and in real time during the photo preview. By dynamically calculating the gray threshold, the matching threshold for scene detection is adaptively adjusted, which can improve the accuracy of scene change detection.
A person of ordinary skill in the art may appreciate that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or by software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered as going beyond the scope of the present invention.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process of the mobile terminal described above, reference may be made to the corresponding process in the foregoing method embodiments, and the details are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a logical function division, and in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all of these shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be referred to each other. As for the mobile terminal embodiments, since they are basically similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the description of the method embodiments.

Claims (18)

1. A scene change detection method during photographing, applied to a mobile terminal with a camera, characterized in that the scene change detection method during photographing comprises:
obtaining a first preview image and a second preview image collected by the camera;
determining a first local image and a second local image according to the first preview image and the second preview image;
performing gray value matching on the first local image and the second local image to determine whether the scene has changed;
wherein the first preview image and the second preview image are Y-channel images.
2. The method according to claim 1, characterized in that the step of performing gray value matching on the first local image and the second local image to determine whether the scene has changed comprises:
dividing the first local image and the second local image into a×b pixel blocks respectively;
calculating an image matching threshold according to the first preview image;
calculating a matching count M according to the first local image;
determining the number of non-matching pixel blocks according to each a×b pixel block in the first local image and the second local image;
determining that the scene has changed when the number of non-matching pixel blocks is greater than a decision threshold;
determining that the scene has not changed when the number of non-matching pixel blocks is less than or equal to the decision threshold;
wherein a plurality of a×b pixel blocks are spaced between the compared a×b pixel blocks in the first local image and the second local image, and a and b are positive integers.
3. The method according to claim 2, characterized in that the step of calculating the image matching threshold according to the first preview image comprises:
averaging the gray values of all pixels of the first preview image to obtain an average gray value;
obtaining the pixel-size value of the a×b pixel block;
calculating the image matching threshold according to a formula in which y is the image matching threshold, size is the pixel-size value of the a×b pixel block, and avg is the average gray value.
4. The method according to claim 3, characterized in that the step of determining the number of non-matching pixel blocks according to each a×b pixel block in the first local image and the second local image comprises:
calculating the total gray value of each a×b pixel block in the first local image and the second local image respectively;
taking the difference between the total gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain a total gray value difference;
comparing the absolute value of the total gray value difference with the image matching threshold;
increasing the number of non-matching pixel blocks by one when the absolute value of the total gray value difference is greater than the image matching threshold.
5. The method according to claim 2, characterized in that the step of calculating the image matching threshold according to the first preview image comprises:
averaging the gray values of all pixels of the first preview image to obtain an average gray value;
calculating the image matching threshold according to a formula in which y is the image matching threshold and avg is the average gray value.
6. The method according to claim 5, characterized in that the step of determining the number of non-matching pixel blocks according to each a×b pixel block in the first local image and the second local image comprises:
calculating the average gray value of each a×b pixel block in the first local image and the second local image respectively;
taking the difference between the average gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain an average gray value difference;
comparing the absolute value of the average gray value difference with the image matching threshold;
increasing the number of non-matching pixel blocks by one when the absolute value of the average gray value difference is greater than the image matching threshold.
7. The method according to claim 2, characterized in that the step of calculating the matching count M according to the first local image comprises:
obtaining the length value and the width value of the first local image;
calculating the matching count M according to a formula in which M is the matching count, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks spaced between compared a×b pixel blocks in the width direction, F is the number of pixel blocks spaced between compared a×b pixel blocks in the length direction, a is the width of the a×b pixel block, and b is the length of the a×b pixel block.
8. The method according to claim 1, characterized in that the first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, wherein the preset size is c×d pixel blocks, and c and d are positive integers.
9. The method according to claim 1, characterized in that the first local image is a region image specified by the user in the first preview image, and the second local image is the image of the region at the corresponding position in the second preview image.
10. A mobile terminal, including a camera, characterized in that the mobile terminal further includes:
an image acquisition module, configured to obtain a first preview image and a second preview image collected by the camera;
an image determination module, configured to determine a first local image and a second local image according to the first preview image and the second preview image obtained by the image acquisition module;
a scene change determination module, configured to perform gray value matching on the first local image and the second local image determined by the image determination module, to determine whether the scene has changed;
wherein the first preview image and the second preview image are Y-channel images.
11. The mobile terminal according to claim 10, characterized in that the scene change determination module includes:
a pixel block division unit, configured to divide the first local image and the second local image determined by the image determination module into a×b pixel blocks respectively;
a matching threshold calculation unit, configured to calculate an image matching threshold according to the first preview image obtained by the image acquisition module;
a matching count calculation unit, configured to calculate a matching count M according to the first local image determined by the image determination module;
a matching unit, configured to determine the number of non-matching pixel blocks according to each a×b pixel block in the first local image and the second local image;
a judgment unit, configured to determine that the scene has changed when the number of non-matching pixel blocks determined by the matching unit is greater than a decision threshold, and to determine that the scene has not changed when the number of non-matching pixel blocks determined by the matching unit is less than or equal to the decision threshold;
wherein a plurality of a×b pixel blocks are spaced between the compared a×b pixel blocks in the first local image and the second local image, and a and b are positive integers.
12. The mobile terminal according to claim 11, characterized in that the matching threshold calculation unit is specifically configured to:
average the gray values of all pixels of the first preview image to obtain an average gray value;
obtain the pixel-size value of the a×b pixel block;
calculate the image matching threshold according to a formula in which y is the image matching threshold, size is the pixel-size value of the a×b pixel block, and avg is the average gray value.
13. The mobile terminal according to claim 12, characterized in that the matching unit is specifically configured to:
calculate the total gray value of each a×b pixel block in the first local image and the second local image respectively;
take the difference between the total gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain a total gray value difference;
compare the absolute value of the total gray value difference with the image matching threshold;
increase the number of non-matching pixel blocks by one when the absolute value of the total gray value difference is greater than the image matching threshold.
14. The mobile terminal according to claim 11, characterized in that the matching threshold calculation unit is specifically configured to:
average the gray values of all pixels of the first preview image to obtain an average gray value;
calculate the image matching threshold according to a formula in which y is the image matching threshold and avg is the average gray value.
15. The mobile terminal according to claim 14, characterized in that the matching unit is specifically configured to:
calculate the average gray value of each a×b pixel block in the first local image and the second local image respectively;
take the difference between the average gray values of the a×b pixel blocks at corresponding positions in the first local image and the second local image to obtain an average gray value difference;
compare the absolute value of the average gray value difference with the image matching threshold;
increase the number of non-matching pixel blocks by one when the absolute value of the average gray value difference is greater than the image matching threshold.
16. The mobile terminal according to claim 11, characterized in that the matching count calculation unit is specifically configured to:
obtain the length value and the width value of the first local image;
calculate the matching count M according to a formula in which M is the matching count, W is the width value of the first local image, L is the length value of the first local image, E is the number of pixel blocks spaced between compared a×b pixel blocks in the width direction, F is the number of pixel blocks spaced between compared a×b pixel blocks in the length direction, a is the width of the a×b pixel block, and b is the length of the a×b pixel block.
17. The mobile terminal according to claim 10, characterized in that the first local image is an image region of a preset size centered on the center pixel of the first preview image, and the second local image is an image region of the preset size centered on the center pixel of the second preview image, wherein the preset size is c×d pixel blocks, and c and d are positive integers.
18. The mobile terminal according to claim 10, characterized in that the first local image is a region image specified by the user in the first preview image, and the second local image is the image of the region at the corresponding position in the second preview image.
CN201610169640.4A 2016-03-22 2016-03-22 Scene-change detecting method and mobile terminal during one kind is taken pictures Active CN105827963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610169640.4A CN105827963B (en) 2016-03-22 2016-03-22 Scene-change detecting method and mobile terminal during one kind is taken pictures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610169640.4A CN105827963B (en) 2016-03-22 2016-03-22 Scene-change detecting method and mobile terminal during one kind is taken pictures

Publications (2)

Publication Number Publication Date
CN105827963A true CN105827963A (en) 2016-08-03
CN105827963B CN105827963B (en) 2019-05-17

Family

ID=56524926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610169640.4A Active CN105827963B (en) 2016-03-22 2016-03-22 Scene-change detecting method and mobile terminal during one kind is taken pictures

Country Status (1)

Country Link
CN (1) CN105827963B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686452A (en) * 2016-12-29 2017-05-17 北京奇艺世纪科技有限公司 Dynamic picture generation method and device
CN108030452A (en) * 2017-11-30 2018-05-15 深圳市沃特沃德股份有限公司 Vision sweeping robot and the method for establishing scene map
CN109670492A (en) * 2017-10-13 2019-04-23 深圳芯启航科技有限公司 Collecting biological feature information method, collecting biological feature information device and terminal
CN110880003A (en) * 2019-10-12 2020-03-13 中国第一汽车股份有限公司 Image matching method and device, storage medium and automobile
CN111654637A (en) * 2020-07-14 2020-09-11 RealMe重庆移动通信有限公司 Focusing method, focusing device and terminal equipment
CN113438480A (en) * 2021-07-07 2021-09-24 北京小米移动软件有限公司 Method, device and storage medium for judging video scene switching

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440612B (en) * 2013-08-27 2016-12-28 华为技术有限公司 Image processing method and device in a kind of GPU vitualization
CN104519263B (en) * 2013-09-27 2018-07-06 联想(北京)有限公司 The method and electronic equipment of a kind of image acquisition
CN103777865A (en) * 2014-02-21 2014-05-07 联想(北京)有限公司 Method, device, processor and electronic device for displaying information

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106686452A (en) * 2016-12-29 2017-05-17 北京奇艺世纪科技有限公司 Dynamic picture generation method and device
CN106686452B (en) * 2016-12-29 2020-03-27 北京奇艺世纪科技有限公司 Method and device for generating dynamic picture
CN109670492A (en) * 2017-10-13 2019-04-23 深圳芯启航科技有限公司 Collecting biological feature information method, collecting biological feature information device and terminal
CN108030452A (en) * 2017-11-30 2018-05-15 深圳市沃特沃德股份有限公司 Vision sweeping robot and the method for establishing scene map
CN110880003A (en) * 2019-10-12 2020-03-13 中国第一汽车股份有限公司 Image matching method and device, storage medium and automobile
CN111654637A (en) * 2020-07-14 2020-09-11 RealMe重庆移动通信有限公司 Focusing method, focusing device and terminal equipment
CN111654637B (en) * 2020-07-14 2021-10-22 RealMe重庆移动通信有限公司 Focusing method, focusing device and terminal equipment
CN113438480A (en) * 2021-07-07 2021-09-24 北京小米移动软件有限公司 Method, device and storage medium for judging video scene switching
CN113438480B (en) * 2021-07-07 2022-11-11 北京小米移动软件有限公司 Method, device and storage medium for judging video scene switching

Also Published As

Publication number Publication date
CN105827963B (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN105827963A (en) Scene changing detection method during shooting process and mobile terminal
CN107197169B (en) high dynamic range image shooting method and mobile terminal
CN105827965A (en) Image processing method based on mobile terminal and mobile terminal
CN105827971A (en) Image processing method and mobile terminal
CN105847674A (en) Preview image processing method based on mobile terminal, and mobile terminal therein
CN106126108B (en) A kind of generation method and mobile terminal of thumbnail
CN105827754B (en) A kind of generation method and mobile terminal of high dynamic range images
CN105872148A (en) Method and mobile terminal for generating high dynamic range images
CN105827952A (en) Photographing method for removing specified object and mobile terminal
CN105898143A (en) Moving object snapshotting method and mobile terminal
CN106101545A (en) A kind of image processing method and mobile terminal
WO2018072271A1 (en) Image display optimization method and device
CN112954210B (en) Photographing method and device, electronic equipment and medium
CN109691080B (en) Image shooting method and device and terminal
CN106937055A (en) A kind of image processing method and mobile terminal
CN105827951A (en) Moving object photographing method and mobile terminal
CN101918912A (en) Apparatus and methods for a touch user interface using an image sensor
CN106454085B (en) A kind of image processing method and mobile terminal
CN106488133A (en) A kind of detection method of Moving Objects and mobile terminal
CN106454086A (en) Image processing method and mobile terminal
CN107172346A (en) A kind of weakening method and mobile terminal
CN106210512A (en) A kind of photographic head changing method and mobile terminal
CN106097398B (en) A kind of detection method and mobile terminal of Moving Objects
CN113126862A (en) Screen capture method and device, electronic equipment and readable storage medium
CN107566723A (en) A kind of image pickup method, mobile terminal and computer-readable recording medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant