CN107509031A - Image processing method, device, mobile terminal and computer-readable recording medium - Google Patents


Info

Publication number
CN107509031A
Authority
CN
China
Prior art keywords: depth of field, preview image, portrait area, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710775174.9A
Other languages
Chinese (zh)
Other versions
CN107509031B (en)
Inventor
丁佳铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710775174.9A
Publication of CN107509031A
Application granted
Publication of CN107509031B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention relates to an image processing method, an image processing device, a mobile terminal and a computer-readable recording medium. The method includes: performing face recognition on a preview image to obtain a face region; determining a portrait region in the preview image according to the face region; and blurring the regions other than the portrait region while reducing their brightness. The method, device, mobile terminal and recording medium can make the subject of the preview image stand out and improve the blurring effect, giving the blurred preview image a better visual display effect.

Description

Image processing method, device, mobile terminal and computer-readable recording medium
Technical field
The present application relates to the field of computer technology, and in particular to an image processing method, an image processing device, a mobile terminal and a computer-readable storage medium.
Background art
Blurring is a digital photography technique that keeps the subject sharp while blurring the background, so that the subject stands out. When a user blurs a captured image on a mobile terminal, the preview image may be blurred so that the blurring effect can be checked before shooting. Limited by processing speed and power consumption, traditional preview-image blurring often suffers from blur leakage: the subject fails to stand out, the blurring effect is poor, and the visual display effect of the picture is degraded.
Summary of the invention
The embodiments of the present application provide an image processing method, an image processing device, a mobile terminal and a computer-readable recording medium that can make the subject of a preview image stand out and improve the blurring effect, giving the blurred preview image a better visual display effect.
An image processing method, including:
performing face recognition on a preview image to obtain a face region;
determining a portrait region in the preview image according to the face region; and
blurring the regions other than the portrait region, and reducing the brightness of those regions.
An image processing apparatus, including:
a face recognition module for performing face recognition on a preview image to obtain a face region;
a region determination module for determining the portrait region in the preview image according to the face region; and
a blurring module for blurring the regions other than the portrait region and reducing the brightness of those regions.
A mobile terminal, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the method described above.
A computer-readable recording medium on which a computer program is stored, the computer program implementing the method described above when executed by a processor.
With the above image processing method, device, mobile terminal and computer-readable recording medium, face recognition is performed on a preview image to obtain a face region, the portrait region in the preview image is determined according to the face region, the regions other than the portrait region are blurred, and their brightness is reduced. The subject of the preview image can thus stand out, the blurring effect is improved, and the blurred preview image has a better visual display effect.
Brief description of the drawings
Fig. 1 is a block diagram of a mobile terminal in one embodiment;
Fig. 2 is a schematic flowchart of an image processing method in one embodiment;
Fig. 3 is a schematic flowchart of determining a portrait region in one embodiment;
Fig. 4 is a schematic diagram of calculating depth-of-field information in one embodiment;
Fig. 5 is a schematic flowchart of blurring the regions of a preview image other than the portrait region in one embodiment;
Fig. 6 is a schematic flowchart of determining the first depth-of-field range corresponding to the portrait region in one embodiment;
Fig. 7(a) is a depth-of-field histogram generated from the depth-of-field information of a preview image in one embodiment;
Fig. 7(b) is a schematic diagram of drawing normal distribution curves fitting the corresponding crests according to their peak values in one embodiment;
Fig. 8 is a schematic flowchart of selecting the normal distribution range corresponding to the second average depth of field of the portrait region in one embodiment;
Fig. 9(a) is a schematic diagram of the normal distribution curve on which the second average depth of field of the portrait region lies in one embodiment;
Fig. 9(b) is a schematic diagram of determining the normal distribution range corresponding to the second average depth of field of the portrait region in one embodiment;
Fig. 10 is a sharpness variation diagram generated in one embodiment;
Fig. 11 is a block diagram of an image processing apparatus in one embodiment;
Fig. 12 is a block diagram of a region determination module in one embodiment;
Fig. 13 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present application clearer, the application is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the application, not to limit it.
It will be appreciated that the terms "first", "second" and so on used in this application may describe various elements, but the elements are not limited by these terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of this application, a first client could be referred to as a second client, and similarly a second client could be referred to as a first client. The first client and the second client are both clients, but they are not the same client.
Fig. 1 is a block diagram of a mobile terminal in one embodiment. As shown in Fig. 1, the mobile terminal includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen and an input device connected through a system bus. The non-volatile storage medium of the mobile terminal stores an operating system and a computer program which, when executed by the processor, implements an image processing method provided in the embodiments of the present application. The processor provides computing and control capability and supports the operation of the entire mobile terminal. The internal memory provides an environment for running the computer-readable instructions stored in the non-volatile storage medium. The network interface is used for network communication with a server. The display screen of the mobile terminal may be a liquid crystal display screen, an electronic ink display screen or the like, and the input device may be a touch layer covering the display screen, a button, trackball or touchpad provided on the housing of the mobile terminal, or an external keyboard, touchpad or mouse. The mobile terminal may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device or the like. Those skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the part of the structure related to the solution of this application and does not limit the mobile terminal to which the solution is applied; a specific mobile terminal may include more or fewer components than shown in the figure, combine some components, or arrange the components differently.
As shown in Fig. 2, in one embodiment, an image processing method is provided, including the following steps:
Step 210: perform face recognition on a preview image to obtain a face region.
Specifically, the mobile terminal may capture, through a camera, a preview image that can be previewed on the display screen, perform face recognition on the preview image, and obtain the face region in the preview image. The mobile terminal may extract image features of the preview image and analyse them with a preset face recognition model to judge whether the preview image contains a face and, if so, determine the corresponding face region. The image features may include shape features, spatial features, edge features and so on, where shape features refer to local shapes in the image, spatial features refer to the mutual spatial positions or relative direction relationships between the regions segmented from the image, and edge features refer to the pixels forming the boundary between two regions in the image.
In one embodiment, the face recognition model may be a decision model built in advance through machine learning. To build the face recognition model, a large number of sample images may be obtained, including both face images and images without people. Each sample image may be labelled according to whether it contains a face, and the labelled sample images used as the input of the face recognition model, which is trained through machine learning to obtain the face recognition model.
Step 220: determine the portrait region in the preview image according to the face region.
Specifically, after the mobile terminal determines the face region of the preview image, the portrait region in the preview image may be determined according to the face region, where the portrait region may include, besides the face region, body parts such as the limbs and trunk.
In one embodiment, the mobile terminal may obtain the depth-of-field information, colour information and the like of the face region, and determine the portrait region in the preview image accordingly. The depth of field refers to the longitudinal range of subject distances over which the imaging in front of a camera lens or other imaging device yields a sharp image; in this embodiment, the depth-of-field information can be understood as the distance from each object in the preview image to the lens of the mobile terminal, namely object-distance information. The mobile terminal may extract the pixels in the preview image whose depth-of-field information and colour information are similar to those of the face region. Pixels with similar depth-of-field information are pixels whose depth-of-field difference from the face region is less than a first value; pixels with similar colour information are pixels whose RGB (red, green, blue colour space) values belong to the same RGB range as the face region. The mobile terminal may first select the corresponding RGB range according to the RGB values of the face region and take the pixels belonging to that range as pixels with similar colour information. The mobile terminal may then extract the pixels whose depth-of-field difference from the face region is less than the first value and whose RGB values belong to the selected RGB range, and determine the portrait contour from the extracted pixels. Among the selected and extracted pixels, those whose depth-of-field difference from an adjacent pixel exceeds a preset second value form the portrait contour: a depth-of-field difference between two adjacent pixels greater than the preset second value indicates an abrupt change in depth, which can be used to separate the portrait region from the background region.
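The depth-and-colour pixel selection described above can be sketched roughly as follows. This is an illustrative sketch, not code from the patent: the function name, the `depth_tol` threshold (standing in for the "first value") and the `rgb_tol` band (standing in for the selected RGB range) are all assumptions.

```python
import numpy as np

def portrait_mask(depth, rgb, face_mask, depth_tol=0.5, rgb_tol=40.0):
    """Select pixels whose depth and colour resemble the face region."""
    face_depth = depth[face_mask].mean()                 # average depth of the face region
    face_rgb = rgb[face_mask].reshape(-1, 3).mean(axis=0)
    depth_ok = np.abs(depth - face_depth) < depth_tol    # depth similar to the face
    rgb_ok = np.all(np.abs(rgb.astype(float) - face_rgb) < rgb_tol, axis=-1)
    return depth_ok & rgb_ok

# tiny worked example: a 2x2 image whose bottom-left pixel is far away
depth = np.array([[1.0, 1.1], [5.0, 1.05]])
rgb = np.full((2, 2, 3), 200, dtype=np.uint8)            # uniform colour
face = np.zeros((2, 2), dtype=bool)
face[0, 0] = True                                        # detected face pixel
mask = portrait_mask(depth, rgb, face)
```

The far pixel is excluded by the depth test even though its colour matches, which is exactly the separation of portrait and background the text describes.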
Step 230: blur the regions other than the portrait region, and reduce the brightness of those regions.
Specifically, the mobile terminal may blur the regions other than the portrait region with a smoothing filter. In one embodiment, a Gaussian filter may be selected for the blurring. Gaussian filtering is a linear smoothing filter: it is a weighted-averaging process over the image, in which the value of each pixel is obtained by a weighted average of the pixel itself and the other pixel values in its neighbourhood. For the regions other than the portrait region, the window size of the Gaussian filter may be chosen according to the desired degree of blurring, a larger window giving a stronger blur; the weight of each pixel in the window is assigned according to the weight distribution pattern of the normal distribution, and the weighted average of each pixel is then recalculated.
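The Gaussian weighting the paragraph describes can be illustrated with a minimal sketch; the 3x3 window and sigma of 1.0 are arbitrary choices for demonstration, not values from the patent.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normal-distribution weights for a size x size blur window;
    a larger window gives a stronger blur, as the text describes."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                      # weights sum to 1

k = gaussian_kernel(3, 1.0)
patch = np.full((3, 3), 10.0)               # uniform patch: blurring must not change it
blurred_value = float((patch * k).sum())    # weighted average at the centre pixel
```

Because the weights are normalised, a uniform neighbourhood averages back to itself; only regions with variation are smoothed.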
After blurring the regions other than the portrait region, the mobile terminal may reduce the brightness of those regions. A brightness empirical value may be set in advance based on experience, and the brightness of the regions other than the portrait region reduced to that value. In one embodiment, a corresponding brightness-adjustment ratio may also be chosen according to the brightness of the preview image: for example, if the preview image is bright, a larger ratio such as 30% may be chosen, and the brightness of the pixels in the regions other than the portrait region reduced by 30%; if the preview image is dim, a smaller ratio such as 5% may be chosen, and the brightness of those pixels reduced by 5%; but the ratios are not limited to these. The brightness adjustment may also be configured manually, the brightness of the regions other than the portrait region being reduced according to the ratio set by the user. Reducing the brightness of the regions other than the portrait region mitigates blur leakage in the preview image and makes the subject of the preview image more prominent.
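A minimal sketch of the brightness-ratio rule above, using the 30%/5% example ratios mentioned in the text; the threshold of 128 separating "bright" from "dim" previews is an assumption, as is the function name.

```python
import numpy as np

def dim_background(img, portrait_mask, bright_ratio=0.30, dark_ratio=0.05, thresh=128):
    """Reduce brightness outside the portrait mask by a ratio chosen
    from the overall preview brightness."""
    ratio = bright_ratio if img.mean() > thresh else dark_ratio
    out = img.astype(float)
    out[~portrait_mask] *= (1.0 - ratio)     # darken only the background
    return out.astype(np.uint8), ratio

img = np.full((2, 2), 200, dtype=np.uint8)   # a bright preview
portrait = np.zeros((2, 2), dtype=bool)
portrait[0, 0] = True                        # one portrait pixel
dimmed, used_ratio = dim_background(img, portrait)
```

On this bright example the 30% ratio applies: the portrait pixel keeps its value of 200 while the background drops to 140.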
It will be appreciated that the mobile terminal may also first reduce the brightness of the regions other than the portrait region and then blur them; the steps are not limited to the order above.
With the above image processing method, face recognition is performed on a preview image to obtain a face region, the portrait region in the preview image is determined according to the face region, the regions other than the portrait region are blurred, and their brightness is reduced. The subject of the preview image can thus stand out, the blurring effect is improved, and the blurred preview image has a better visual display effect.
As shown in Fig. 3, in one embodiment, step 220 of determining the portrait region in the preview image according to the face region includes the following steps:
Step 302: obtain the depth-of-field information of the preview image.
Specifically, the mobile terminal may obtain the depth-of-field information of each pixel in the preview image. In one embodiment, the mobile terminal may be provided with two rear cameras, a first camera and a second camera, which may be arranged side by side on the same horizontal line or one above the other on the same vertical line. In this embodiment, the first camera and the second camera may have different pixel counts: the first camera may be the higher-resolution camera, used mainly for imaging, while the second camera may be a lower-resolution auxiliary depth-of-field camera, used to obtain the depth-of-field information of the captured image.
Further, the mobile terminal may capture a first image of a scene through the first camera and, at the same time, a second image of the same scene through the second camera. The first image and the second image may first be corrected and calibrated, and the corrected and calibrated first and second images synthesised to obtain the preview image. The mobile terminal may generate a disparity map from the corrected and calibrated first and second images, and then generate the depth map of the preview image from the disparity map. The depth map may contain the depth-of-field information of each pixel in the preview image; in the depth map, regions with similar depth-of-field information may be filled with the same colour, and colour changes reflect changes in the depth of field. In one embodiment, the mobile terminal may calculate calibration parameters according to the distance between the optical centres of the first and second cameras, the height difference between the optical centres on the horizontal line, the lens height difference between the two cameras and so on, and correct and calibrate the first and second images according to the calibration parameters.
The mobile terminal calculates the parallax of the same object between the first image and the second image, and obtains the depth of field of the object in the preview image from the parallax, where parallax refers to the difference in direction when the same target is observed from two points. Fig. 4 is a schematic diagram of calculating depth-of-field information in one embodiment. As shown in Fig. 4, the first camera and the second camera are arranged left and right on the same horizontal line, and the primary optical axes of the two cameras are parallel. OL and OR are the optical centres of the first camera and the second camera respectively, and the shortest distance from each optical centre to its image plane is the focal length f. If P is a point in the world coordinate system, its imaging points on the left and right image planes are PL and PR, and the distances from PL and PR to the left edge of their respective image planes are XL and XR; the parallax of P is XL − XR or XR − XL. The distance between the optical centre OL of the first camera and the optical centre OR of the second camera is b. From the distance b, the focal length f and the parallax XL − XR or XR − XL, the depth of field Z of the point P can be calculated, as shown in formula (1):

Z = (b × f) / (XL − XR), or Z = (b × f) / (XR − XL)    (1)
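Under the usual stereo-triangulation reading of the passage, the depth of P follows from the baseline, the focal length and the parallax as Z = b * f / (XL - XR). A small numerical sketch (all values are made up, in consistent units):

```python
def depth_from_disparity(b, f, xl, xr):
    """Depth of a world point from the baseline b, focal length f and
    the x-coordinates XL, XR of its matched imaging points."""
    return b * f / (xl - xr)

# baseline 0.10 m, focal length 0.004 m, disparity 0.00004 m -> Z = 10 m
z = depth_from_disparity(0.10, 0.004, 0.00104, 0.00100)
```

Note the inverse relationship: halving the disparity doubles the computed depth, which is why distant objects (tiny disparity) have noisier depth estimates.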
The mobile terminal may perform feature-point matching on the first image and the second image, extracting the feature points of the first image and finding the best match in the corresponding row of the second image. A feature point of the first image and its best match are taken to be the imaging points of the same point in the first and second images respectively, so the parallax between them can be calculated and the disparity map generated; the depth-of-field information of each pixel in the preview image is then calculated with formula (1).
In other embodiments, the depth-of-field information of the preview image may also be obtained in other ways, for example with structured light or TOF (time of flight) ranging; the methods are not limited to the above.
Step 304: calculate the first average depth of field of the face region according to the depth-of-field information.
Specifically, the mobile terminal may obtain the depth-of-field information of each pixel in the face region of the preview image and calculate the first average depth of field of the face region.
Step 306: obtain the colour information of the face region.
Specifically, the colour information of the face region may include the RGB value of each pixel in the face region. The skin colour of the face region is detected from the RGB values of its pixels, the corresponding RGB range is selected according to the skin colour, and the pixels with colour information similar to the face region are selected according to the RGB range.
Step 308: determine the portrait region in the preview image according to the first average depth of field and the colour information.
Specifically, the mobile terminal may extract the pixels whose depth-of-field difference from the first average depth of field of the face region is less than the first value and whose RGB values belong to the selected RGB range, and determine the portrait contour from the extracted pixels, thereby determining the portrait region in the preview image.
In this embodiment, the portrait region in the preview image is determined according to the first average depth of field and the colour information of the face region, which makes the determined portrait region more accurate and gives the blurred preview image a better visual display effect.
As shown in Fig. 5, in one embodiment, the step of blurring the regions other than the portrait region includes the following steps:
Step 502: select the first depth-of-field range corresponding to the portrait region according to the depth-of-field information.
Specifically, the mobile terminal may select, according to the depth-of-field information of the pixels contained in the portrait region, the first depth-of-field range corresponding to the portrait region. The first depth-of-field range is the range that is not blurred: no pixel of the preview image belonging to the first depth-of-field range is blurred.
Step 504: determine the second depth-of-field range of the region to be blurred according to the first depth-of-field range.
Specifically, from the selected first depth-of-field range, which is not blurred, the mobile terminal may determine the second depth-of-field range that needs to be blurred. The pixels belonging to the second depth-of-field range form the region of the preview image to be blurred, which generally belongs to the regions other than the portrait region. The region to be blurred may be blurred according to the second depth-of-field range, blurring the pixels that belong to it. In one embodiment, the degree of blurring may be adjusted according to the depth-of-field information of each pixel: the further a depth of field in the second range lies from the first range, the stronger the blurring may be, but it is not limited to this.
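One possible reading of "the further from the first depth-of-field range, the stronger the blur" is a clipped linear ramp; the `falloff` scale and the cap are assumptions, since the text does not fix a particular mapping.

```python
def blur_strength(depth, keep_lo, keep_hi, max_strength=1.0, falloff=5.0):
    """0 inside the protected first depth-of-field range, growing with
    the distance from that range, capped at max_strength."""
    if keep_lo <= depth <= keep_hi:
        return 0.0                                    # inside the first range: no blur
    dist = keep_lo - depth if depth < keep_lo else depth - keep_hi
    return min(max_strength, dist / falloff)
```

The returned value could then scale, for example, the Gaussian window size per pixel.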
Step 506: blur the region to be blurred according to the second depth-of-field range.
In this embodiment, the depth-of-field range of the preview image that needs to be blurred can be selected precisely according to the depth of field of the portrait region, which improves the blurring effect and gives the blurred image a better visual display effect.
As shown in Fig. 6, in one embodiment, step 502 of selecting the first depth-of-field range corresponding to the portrait region according to the depth-of-field information includes the following steps:
Step 602: generate a depth-of-field histogram according to the depth-of-field information.
Specifically, a depth-of-field histogram can be used to represent the number of pixels in an image that have a given depth of field; it describes the distribution of the image's pixels over the depths of field. The mobile terminal obtains the depth-of-field information of each pixel in the preview image, counts the number of pixels for each depth-of-field value, and generates the depth-of-field histogram of the preview image. Fig. 7(a) is a depth-of-field histogram generated from the depth-of-field information of a preview image in one embodiment. As shown in Fig. 7(a), the horizontal axis of the histogram represents the depth of field and the vertical axis represents the number of pixels; the histogram describes the distribution of the pixels of the preview image over the depths of field.
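Counting pixels per depth value, as described, is an ordinary histogram; a toy sketch with assumed depth values and bin edges:

```python
import numpy as np

# per-pixel depths of a tiny preview: a near cluster, a mid cluster, one far pixel
depths = np.array([1.0, 1.1, 1.2, 5.0, 5.05, 5.1, 5.2, 9.0])

# horizontal axis: depth bins; counts: number of pixels per bin
counts, edges = np.histogram(depths, bins=[0, 2, 4, 6, 8, 10])
```

The two clusters produce two crests (3 pixels near, 4 in the middle), which is the shape the peak-fitting steps below operate on.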
Step 604: obtain each crest of the depth-of-field histogram and the corresponding peak value.
Specifically, the mobile terminal may determine each crest of the depth-of-field histogram and the peak value corresponding to each crest. A crest is the maximum of the wave amplitude in a section of the wave formed by the depth-of-field histogram and can be determined by taking the first-order difference of the points of the histogram; the peak value is the maximum on the crest.
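Crest detection via the first-order difference can be sketched as a sign-change test on a toy histogram; the exact tie-breaking rule at plateaus is an assumption.

```python
def find_crests(hist):
    """Positions where the first-order difference turns from positive
    to non-positive, i.e. local maxima of the histogram."""
    crests = []
    for i in range(1, len(hist) - 1):
        if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]:
            crests.append((i, hist[i]))   # (position, peak value)
    return crests

crests = find_crests([0, 3, 1, 0, 5, 2, 0])
```

Each `(position, peak value)` pair then seeds the normal-curve fitting of step 606.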
Step 606: draw normal distribution curves fitting the corresponding crests according to the peak values.
Specifically, the mobile terminal may draw, according to the peak value of each crest, a normal distribution curve fitting the crest. A normal distribution is determined mainly by two values, the mathematical expectation μ and the variance σ. The expectation μ is the location parameter of the normal distribution and describes its central tendency: the distribution is essentially symmetric about the axis X = μ, and its expectation, mean, median and mode are all equal to μ. The variance σ describes how dispersed the data of the distribution are: the larger σ, the more scattered the data; the smaller σ, the more concentrated. σ can also be called the shape parameter of the normal distribution: the larger σ, the flatter the curve; the smaller σ, the taller and thinner the curve. After the mobile terminal obtains each crest of the depth-of-field histogram and its peak value, a normal distribution curve can be fitted to each crest: the range of depths of field that each crest occupies on the horizontal axis is determined, the expectation and variance of the fitted normal distribution curve are calculated, and the normal distribution curve fitting the crest is drawn.
Fig. 7(b) is a schematic diagram of drawing the normal distribution curves fitting the corresponding crests according to the peak values in one embodiment. As shown in Fig. 7(b), each crest of the depth-of-field histogram and its peak value are obtained, a normal distribution curve fitting each crest is drawn according to its peak value, and curve 720 is finally obtained, which combines the normal distribution curves fitted to the crests of the depth-of-field histogram.
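The fitted curve for one crest can be written as a scaled normal density whose maximum equals the crest's peak value; the μ, σ and peak below are assumed example numbers, not values from the patent.

```python
import math

def fitted_crest(x, mu, sigma, peak):
    """Normal-distribution curve scaled so that its maximum (at x = mu)
    equals the histogram peak value."""
    return peak * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# a crest centred at depth 85 with peak value 40 pixels and sigma 2
top = fitted_crest(85.0, 85.0, 2.0, 40.0)
```

Summing such curves over all crests yields a combined curve like curve 720 in Fig. 7(b).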
Step 608: determine the normal distribution range corresponding to the portrait region according to the normal distribution curves, and take the normal distribution range as the first depth-of-field range corresponding to the portrait region.
As shown in Fig. 8, in one embodiment, the step of determining the normal distribution range corresponding to the portrait region according to the normal distribution curves includes the following steps:
Step 802: calculate the second average depth of field of the portrait region.
Specifically, after the mobile terminal determines the portrait region of the preview image, the depth-of-field information of each pixel in the portrait region can be obtained from the depth map, and the second average depth of field of the portrait region calculated.
Step 804: look up the normal distribution curve in the depth-of-field histogram on which the second average depth of field lies.
Specifically, after the mobile terminal calculates the second average depth of field of the portrait area of the preview image, it can look up the position of the second average depth of field in the depth-of-field histogram and determine the crest corresponding to the second average depth of field, thereby determining the fitted normal distribution curve on which the second average depth of field lies. Fig. 9(a) is a schematic diagram, in one embodiment, of the normal distribution curve on which the second average depth of field of the portrait area lies. As shown in Fig. 9(a), the mobile terminal calculates the second average depth of field of the portrait area as 85 meters; the position of the second average depth of field in the depth-of-field histogram is then found at the position indicated by the arrow, and it can be determined that the second average depth of field lies on the normal distribution curve corresponding to the second crest of the depth-of-field histogram.
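One simple way to locate the curve on which the second average depth of field lies (an assumption for illustration; the patent only says the corresponding crest is determined) is to pick the fitted curve whose mean is nearest:

```python
def locate_curve(avg_depth, curves):
    """Return the fitted (mu, sigma) pair whose mean is nearest to
    avg_depth, i.e. the crest the portrait's average depth falls on."""
    return min(curves, key=lambda c: abs(c[0] - avg_depth))

# Illustrative fits for three crests of a depth-of-field histogram
curves = [(30.0, 4.0), (85.0, 3.0), (110.0, 6.0)]
chosen = locate_curve(85.0, curves)
```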
Step 806: obtain the variance of the normal distribution curve on which the second average depth of field lies.
Step 808: determine the normal distribution range corresponding to the second average depth of field according to the variance.
Specifically, the mobile terminal can obtain the variance σ and mathematical expectation μ of the normal distribution curve in the depth-of-field histogram on which the second average depth of field of the portrait area lies, and determine the normal distribution range corresponding to the second average depth of field according to the 3σ rule of the normal distribution. In a normal distribution, the probability that any point falls within μ ± σ is P(μ − σ < X < μ + σ) = 68.26%, the probability that it falls within μ ± 2σ is P(μ − 2σ < X < μ + 2σ) = 95.45%, and the probability that it falls within μ ± 3σ is P(μ − 3σ < X < μ + 3σ) = 99.73%; it follows that in a normal distribution the data fall essentially within the range μ ± 3σ. After obtaining the variance σ and mathematical expectation μ of the normal distribution curve on which the second average depth of field of the portrait area lies in the depth-of-field histogram, the mobile terminal can take the depths within μ ± 3σ on that curve as the normal distribution range, and take this normal distribution range as the first field depth range corresponding to the portrait area, that is, the field depth range that is not blurred.
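The 3σ rule above translates directly into code (μ and σ values are illustrative):

```python
mu, sigma = 85.0, 3.0   # expectation and sigma of the curve the portrait lies on

# First field depth range: [mu - 3*sigma, mu + 3*sigma] stays sharp
first_range = (mu - 3 * sigma, mu + 3 * sigma)

def needs_blur(depth):
    """A pixel is blurred only when its depth falls outside the first range."""
    return not (first_range[0] <= depth <= first_range[1])
```

With these values the no-blur interval is [76, 94], so a pixel at the portrait's depth of 85 escapes blurring while background pixels well outside the interval do not.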
Fig. 9(b) is a schematic diagram, in one embodiment, of determining the normal distribution range corresponding to the second average depth of field of the portrait area. As shown in Fig. 9(b), the mobile terminal calculates the second average depth of field of the portrait area as 85 meters; its position in the depth-of-field histogram is found at the position indicated by the arrow, and the second average depth of field is determined to lie on the normal distribution curve corresponding to the second crest of the depth-of-field histogram. The variance and mathematical expectation of that curve can be obtained, and the depths within μ ± 3σ on the curve are chosen as normal distribution range 902, which is the first field depth range corresponding to the portrait area, that is, the field depth range that is not blurred.
In this embodiment, a depth-of-field histogram is generated according to the depth-of-field information of the preview image, the closest normal distribution curve is fitted according to the peak value of each crest of the histogram, and the curve on which the portrait area lies and the corresponding normal distribution range are then looked up according to the average depth of field of the portrait area. This ensures that regions whose depth-of-field information is close to that of the portrait area are not blurred, allows the field depth range that needs blurring to be determined precisely, improves the blurring effect, and makes the visual display effect of the blurred image better.
In one embodiment, step 506, blurring the region to be blurred according to the second field depth range, includes: generating a sharpness variation map according to the second field depth range, and blurring the region to be blurred according to the sharpness variation map.
Specifically, after the mobile terminal determines the first field depth range, which is not blurred, and the second field depth range of the region to be blurred, it can generate a sharpness variation map. The second field depth range may include a first part smaller than the first field depth range and a second part larger than the first field depth range. In the sharpness variation map, when the depth of field is smaller than the first field depth range, sharpness is positively correlated with depth and increases as the depth increases; when the depth of field is larger than the first field depth range, sharpness is negatively correlated with depth and decreases as the depth increases. Thus in the first part of the second field depth range sharpness increases with depth, in the second part it decreases with depth, and over the first field depth range sharpness reaches its peak. According to the sharpness variation map, the sharpness corresponding to each depth can be determined, so that the blurring degree is adjusted according to the depth-of-field information of each pixel in the preview image: the smaller the sharpness, the higher the blurring degree.
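A piecewise-linear sharpness curve matching this description might look like the following sketch (the linear ramps are an assumption; the patent only fixes the monotonic behaviour on either side and the peak over the first range):

```python
def sharpness(depth, first_lo, first_hi, second_lo, second_hi):
    """Sharpness in [0, 1] as a function of depth: rises over the first
    part of the second range, peaks over the first range, falls over the
    second part, and is 0 outside the second range."""
    if depth < second_lo or depth > second_hi:
        return 0.0
    if depth < first_lo:     # first part: positively correlated with depth
        return (depth - second_lo) / (first_lo - second_lo)
    if depth <= first_hi:    # first field depth range: peak sharpness
        return 1.0
    # second part: negatively correlated with depth
    return (second_hi - depth) / (second_hi - first_hi)
```

The slopes of the two ramps correspond to the rates of sharpness change that, as described below, may themselves be chosen according to where the first field depth range falls.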
In one embodiment, the window size for Gaussian filtering can be chosen according to the sharpness variation map: a smaller window can be chosen for Gaussian filtering of regions to be blurred with higher sharpness, and a larger window for regions to be blurred with lower sharpness.
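The window-size choice can be sketched as a mapping from sharpness to an odd Gaussian kernel size (the linear mapping and the maximum size of 15 are assumptions made for illustration):

```python
def gaussian_window(sharp, max_window=15):
    """Map sharpness in [0, 1] to an odd Gaussian kernel size: high
    sharpness gives a small window, low sharpness a large one."""
    size = int(round((1.0 - sharp) * (max_window - 1))) + 1
    return size if size % 2 == 1 else size + 1  # Gaussian kernels are odd-sized
```

A fully sharp pixel maps to a 1×1 window (no blur), while a fully blurred one maps to the largest window.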
Figure 10 is a sharpness variation map generated in one embodiment. As shown in Figure 10, the mobile terminal chooses the first field depth range 1006 corresponding to the portrait area and determines the second field depth range of the region to be blurred; the second field depth range may include a first part 1002 smaller than the first field depth range 1006 and a second part 1004 larger than the first field depth range 1006. In the sharpness variation map, in the first part 1002 of the second field depth range, sharpness is positively correlated with depth and increases as the depth increases; over the first field depth range 1006 sharpness reaches its peak; in the second part 1004 of the second field depth range, sharpness is negatively correlated with depth and decreases as the depth increases. In one embodiment, the rates of sharpness change of the first part 1002 and the second part 1004 of the second field depth range can also be chosen according to the first field depth range 1006 of the portrait area: when the first field depth range 1006 is small, the rate of sharpness change of the first part 1002 is large and that of the second part 1004 is small; when the first field depth range 1006 is large, the rate of sharpness change of the first part 1002 is small and that of the second part 1004 can be large; when the first field depth range 1006 lies in the middle of the depth-of-field histogram, the rates of sharpness change of the first part 1002 and the second part 1004 can be close; but this is not limiting.
In this embodiment, a sharpness variation map can be generated and the region of the preview image to be blurred can be blurred according to it; because sharpness changes with depth, the field depth range that needs blurring and the corresponding blurring degree can be determined precisely, which improves the blurring effect and makes the visual display effect of the blurred image better.
In one embodiment, an image processing method is provided, including the following steps:
Step (1): perform face recognition on the preview image to obtain a face region.
Step (2): obtain the depth-of-field information of the preview image, calculate the first average depth of field of the face region according to the depth-of-field information, obtain the color information of the face region, and determine the portrait area in the preview image according to the first average depth of field and the color information.
Step (3): generate a depth-of-field histogram according to the depth-of-field information, obtain each crest of the histogram and its corresponding peak value, and draw the normal distribution curve fitting each crest according to the peak value.
Step (4): calculate the second average depth of field of the portrait area, look up the normal distribution curve in the depth-of-field histogram on which the second average depth of field lies, obtain the variance of that curve, determine the normal distribution range corresponding to the second average depth of field according to the variance, and take the normal distribution range as the first field depth range corresponding to the portrait area.
Step (5): determine the second field depth range of the region to be blurred according to the first field depth range.
Step (6): generate a sharpness variation map according to the second field depth range, and blur the region to be blurred according to the sharpness variation map.
In this embodiment, the brightness of regions other than the portrait area can be reduced, which mitigates blur leakage in the preview image and makes the subject of the preview image more prominent; the field depth range in the preview image that needs blurring is precisely selected according to the depth of field of the portrait area, which improves the blurring effect and makes the visual display effect of the blurred image better.
As shown in Figure 11, in one embodiment, an image processing apparatus 1100 is provided, including a face recognition module 1110, an area determination module 1120, and a blurring module 1130.
The face recognition module 1110 is used to perform face recognition on a preview image and obtain a face region.
The area determination module 1120 is used to determine the portrait area in the preview image according to the face region.
The blurring module 1130 is used to blur the regions other than the portrait area and reduce the brightness of those regions.
The image processing apparatus above performs face recognition on the preview image to obtain a face region, determines the portrait area in the preview image according to the face region, blurs the regions other than the portrait area, and reduces their brightness; this makes the subject of the preview image prominent, improves the blurring effect, and makes the visual display effect of the blurred preview image better.
As shown in Figure 12, in one embodiment, the area determination module 1120 includes a depth-of-field acquiring unit 1122, a first computing unit 1124, a color acquiring unit 1126, and an area determination unit 1128.
The depth-of-field acquiring unit 1122 is used to obtain the depth-of-field information of the preview image.
The first computing unit 1124 is used to calculate the first average depth of field of the face region according to the depth-of-field information.
The color acquiring unit 1126 is used to obtain the color information of the face region.
The area determination unit 1128 is used to determine the portrait area in the preview image according to the first average depth of field and the color information.
In this embodiment, the portrait area in the preview image is determined according to the first average depth of field and the color information of the face region, which makes the determined portrait area more accurate and the visual display effect of the blurred preview image better.
In one embodiment, the blurring module 1130 includes a choosing unit, a depth-of-field determining unit, and a blurring unit.
The choosing unit is used to choose the first field depth range corresponding to the portrait area according to the depth-of-field information.
The depth-of-field determining unit is used to determine the second field depth range of the region to be blurred according to the first field depth range.
The blurring unit is used to blur the region to be blurred according to the second field depth range.
In this embodiment, the field depth range in the preview image that needs blurring can be precisely selected according to the depth of field of the portrait area, which improves the blurring effect and makes the visual display effect of the blurred image better.
In one embodiment, the choosing unit includes a generating subunit, a crest acquiring subunit, a drawing subunit, and a determining subunit.
The generating subunit is used to generate a depth-of-field histogram according to the depth-of-field information.
The crest acquiring subunit is used to obtain each crest of the depth-of-field histogram and its corresponding peak value.
The drawing subunit is used to draw the normal distribution curve fitting each crest according to the peak value.
The determining subunit is used to determine the normal distribution range corresponding to the portrait area according to the normal distribution curve, and to take the normal distribution range as the first field depth range corresponding to the portrait area.
In one embodiment, the determining subunit is further used to calculate the second average depth of field of the portrait area, look up the normal distribution curve in the depth-of-field histogram on which the second average depth of field lies, obtain the variance of that curve, and determine the normal distribution range corresponding to the second average depth of field according to the variance.
In this embodiment, a depth-of-field histogram is generated according to the depth-of-field information of the preview image, the closest normal distribution curve is fitted according to the peak value of each crest of the histogram, and the curve on which the portrait area lies and the corresponding normal distribution range are then looked up according to the average depth of field of the portrait area; this ensures that regions whose depth-of-field information is close to that of the portrait area are not blurred, allows the field depth range that needs blurring to be determined precisely, improves the blurring effect, and makes the visual display effect of the blurred image better.
In one embodiment, the blurring unit is further used to generate a sharpness variation map according to the second field depth range, and to blur the region to be blurred according to the sharpness variation map.
In this embodiment, a sharpness variation map can be generated and the region of the preview image to be blurred can be blurred according to it; because sharpness changes with depth, the field depth range that needs blurring and the corresponding blurring degree can be determined precisely, which improves the blurring effect and makes the visual display effect of the blurred image better.
An embodiment of the present application also provides a mobile terminal. The mobile terminal includes an image processing circuit, which can be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. Figure 13 is a schematic diagram of the image processing circuit in one embodiment. As shown in Figure 13, for ease of illustration, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in Figure 13, the image processing circuit includes an ISP processor 1340 and a control logic device 1350. Image data captured by the imaging device 1310 is first processed by the ISP processor 1340, which analyzes the captured image data to collect image statistics usable for determining one or more control parameters of the imaging device 1310. The imaging device 1310 may include a camera with one or more lenses 1312 and an image sensor 1314. The image sensor 1314 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel of the image sensor 1314 and provide a set of raw image data that can be processed by the ISP processor 1340. The sensor 1320 (such as a gyroscope) can supply parameters collected for image processing (such as stabilization parameters) to the ISP processor 1340 based on the interface type of the sensor 1320. The interface of the sensor 1320 may use an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
In addition, the image sensor 1314 can also send the raw image data to the sensor 1320; the sensor 1320 can supply the raw image data to the ISP processor 1340 based on the interface type of the sensor 1320, or store the raw image data in the image memory 1330.
The ISP processor 1340 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 1340 can perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be carried out at identical or different bit-depth precisions.
The ISP processor 1340 can also receive image data from the image memory 1330. For example, the interface of the sensor 1320 sends raw image data to the image memory 1330, and the raw image data in the image memory 1330 is provided to the ISP processor 1340 for processing. The image memory 1330 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 1314, from the interface of the sensor 1320, or from the image memory 1330, the ISP processor 1340 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1330 for further processing before being displayed. The ISP processor 1340 can also receive processed data from the image memory 1330 and perform image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1380 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1340 can also be sent to the image memory 1330, and the display 1380 can read image data from the image memory 1330. In one embodiment, the image memory 1330 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 1340 can be sent to the encoder/decoder 1370 to encode/decode the image data; the encoded image data can be saved and decompressed before being displayed on the display 1380.
The steps by which the ISP processor 1340 processes image data include performing VFE (Video Front End) processing and CPP (Camera Post Processing) on the image data. VFE processing of the image data may include correcting the contrast or brightness of the image data, modifying digitally recorded illumination-condition data, compensating the image data (such as white balance, automatic gain control, γ correction, etc.), filtering the image data, and so on. CPP processing of the image data may include scaling the image and providing a preview frame and a record frame to each path; CPP may use different codecs to process the preview frame and the record frame.
The image data processed by the ISP processor 1340 can be sent to the blurring module 1360 to blur the image before it is displayed. The blurring module 1360 can blur regions of the preview image other than the portrait area and reduce the brightness of those regions, among other operations. The blurring module 1360 may be the CPU (Central Processing Unit), GPU, or a coprocessor of the mobile terminal. After the blurring module 1360 blurs the image data, the blurred image data can be sent to the encoder/decoder 1370 to encode/decode the image data; the encoded image data can be saved and decompressed before being displayed on the display 1380. The blurring module 1360 may also be located between the encoder/decoder 1370 and the display 1380, that is, the blurring module blurs the already-imaged image. The encoder/decoder above may be the CPU, GPU, or a coprocessor of the mobile terminal.
The statistics determined by the ISP processor 1340 can be sent to the control logic device 1350. For example, the statistics may include image sensor 1314 statistics such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and shading correction of the lens 1312. The control logic device 1350 may include a processor and/or microcontroller that executes one or more routines (such as firmware); the one or more routines can determine the control parameters of the imaging device 1310 and of the ISP processor 1340 according to the received statistics. For example, the control parameters of the imaging device 1310 may include control parameters of the sensor 1320 (such as gain and the integration time for exposure control), camera flash control parameters, control parameters of the lens 1312 (such as focus or zoom focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), as well as shading correction parameters of the lens 1312.
In this embodiment, the image processing method above can be implemented with the image processing technique in Figure 13.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the image processing method above is implemented.
A person of ordinary skill in the art will appreciate that all or part of the flows in the methods of the embodiments above can be accomplished by instructing related hardware through a computer program. The program can be stored in a non-volatile computer-readable storage medium, and when executed may include the flows of the embodiments of each of the methods above. The storage medium may be a magnetic disc, an optical disc, a read-only memory (Read-Only Memory, ROM), or the like.
The technical features of the embodiments above can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the embodiments above have been described; however, as long as the combinations of these technical features contain no contradiction, they shall all be considered within the scope recorded in this specification.
The embodiments above express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be pointed out that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all belong to the protection scope of the present application. Therefore, the protection scope of the present patent application shall be determined by the appended claims.

Claims (10)

  1. An image processing method, characterized by including:
    performing face recognition on a preview image to obtain a face region;
    determining a portrait area in the preview image according to the face region;
    blurring regions other than the portrait area, and reducing the brightness of the other regions.
  2. The method according to claim 1, characterized in that determining the portrait area in the preview image according to the face region includes:
    obtaining depth-of-field information of the preview image;
    calculating a first average depth of field of the face region according to the depth-of-field information;
    obtaining color information of the face region;
    determining the portrait area in the preview image according to the first average depth of field and the color information.
  3. The method according to claim 2, characterized in that blurring the regions other than the portrait area includes:
    choosing a first field depth range corresponding to the portrait area according to the depth-of-field information;
    determining a second field depth range of a region to be blurred according to the first field depth range;
    blurring the region to be blurred according to the second field depth range.
  4. The method according to claim 3, characterized in that choosing the first field depth range corresponding to the portrait area according to the depth-of-field information includes:
    generating a depth-of-field histogram according to the depth-of-field information;
    obtaining each crest of the depth-of-field histogram and its corresponding peak value;
    drawing a normal distribution curve fitting the corresponding crest according to the peak value;
    determining a normal distribution range corresponding to the portrait area according to the normal distribution curve, and taking the normal distribution range as the first field depth range corresponding to the portrait area.
  5. The method according to claim 4, characterized in that determining the normal distribution range corresponding to the portrait area according to the normal distribution curve includes:
    calculating a second average depth of field of the portrait area;
    looking up the normal distribution curve in the depth-of-field histogram on which the second average depth of field lies;
    obtaining the variance of the normal distribution curve on which it lies;
    determining the normal distribution range corresponding to the second average depth of field according to the variance.
  6. The method according to any one of claims 3 to 5, characterized in that blurring the region to be blurred according to the second field depth range includes:
    generating a sharpness variation map according to the second field depth range;
    blurring the region to be blurred according to the sharpness variation map.
  7. An image processing apparatus, characterized by including:
    a face recognition module, used to perform face recognition on a preview image to obtain a face region;
    an area determination module, used to determine a portrait area in the preview image according to the face region;
    a blurring module, used to blur regions other than the portrait area and to reduce the brightness of the other regions.
  8. The apparatus according to claim 7, characterized in that the area determination module includes:
    a depth-of-field acquiring unit, used to obtain depth-of-field information of the preview image;
    a first computing unit, used to calculate a first average depth of field of the face region according to the depth-of-field information;
    a color acquiring unit, used to obtain color information of the face region;
    an area determination unit, used to determine the portrait area in the preview image according to the first average depth of field and the color information.
  9. A mobile terminal, including a memory and a processor, wherein a computer program is stored in the memory; when the computer program is executed by the processor, the processor implements the method according to any one of claims 1 to 6.
  10. A computer-readable storage medium, on which a computer program is stored, characterized in that the method according to any one of claims 1 to 6 is implemented when the computer program is executed by a processor.
CN201710775174.9A 2017-08-31 2017-08-31 Image processing method, image processing device, mobile terminal and computer readable storage medium Active CN107509031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710775174.9A CN107509031B (en) 2017-08-31 2017-08-31 Image processing method, image processing device, mobile terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN107509031A true CN107509031A (en) 2017-12-22
CN107509031B CN107509031B (en) 2019-12-27

Family

ID=60694622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710775174.9A Active CN107509031B (en) 2017-08-31 2017-08-31 Image processing method, image processing device, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107509031B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009539A * 2017-12-26 2018-05-08 Sun Yat-sen University Novel text recognition method based on counting focusing model
CN108830892A * 2018-06-13 2018-11-16 Beijing Microlive Vision Technology Co., Ltd. Face image processing method and apparatus, electronic device and computer-readable storage medium
CN108848367A * 2018-07-26 2018-11-20 Ningbo Shiruidi Photoelectric Co., Ltd. Image processing method and apparatus, and mobile terminal
CN108900790A * 2018-06-26 2018-11-27 Nubia Technology Co., Ltd. Video image processing method, mobile terminal and computer-readable storage medium
CN109068063A * 2018-09-20 2018-12-21 Vivo Mobile Communication Co., Ltd. Three-dimensional image data processing and display method, device and mobile terminal
CN109379531A * 2018-09-29 2019-02-22 Vivo Mobile Communication Co., Ltd. Shooting method and mobile terminal
CN110991298A * 2019-11-26 2020-04-10 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device, storage medium and electronic device
CN111161136A * 2019-12-30 2020-05-15 Shenzhen SenseTime Technology Co., Ltd. Image blurring method, device, equipment and storage device
WO2020147790A1 * 2019-01-18 2020-07-23 Shenzhen Kandao Technology Co., Ltd. Picture focusing method, apparatus and device, and corresponding storage medium
CN111586348A * 2020-04-15 2020-08-25 Fujian Star-net eVideo Information Systems Co., Ltd. Video background image acquisition method, storage medium, video matting method and storage device
CN111614888A * 2019-02-26 2020-09-01 Wistron Corporation Image blurring processing method and system
CN113313646A * 2021-05-27 2021-08-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic device and computer-readable storage medium
CN114125286A * 2021-11-18 2022-03-01 Vivo Mobile Communication Co., Ltd. Shooting method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973977A * 2014-04-15 2014-08-06 Lenovo (Beijing) Co., Ltd. Blurring processing method and device for preview interface, and electronic equipment
CN104092955A * 2014-07-31 2014-10-08 Beijing Zhigu Rui Tuo Tech Co., Ltd. Flash control method and device, and image acquisition method and equipment
CN105979165A * 2016-06-02 2016-09-28 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Blurred photo generation method and device, and mobile terminal
CN106937054A * 2017-03-30 2017-07-07 Vivo Mobile Communication Co., Ltd. Photographing blurring method for a mobile terminal, and mobile terminal
CN106991379A * 2017-03-09 2017-07-28 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Human skin recognition method and device combined with depth information, and electronic device
CN107016639A * 2017-03-30 2017-08-04 Nubia Technology Co., Ltd. Image processing method and device
CN107111749A * 2014-12-22 2017-08-29 NovaSight Ltd. System and method for improved display

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973977A * 2014-04-15 2014-08-06 Lenovo (Beijing) Co., Ltd. Blurring processing method and device for preview interface, and electronic equipment
CN104092955A * 2014-07-31 2014-10-08 Beijing Zhigu Rui Tuo Tech Co., Ltd. Flash control method and device, and image acquisition method and equipment
CN107111749A * 2014-12-22 2017-08-29 NovaSight Ltd. System and method for improved display
CN105979165A * 2016-06-02 2016-09-28 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Blurred photo generation method and device, and mobile terminal
CN106991379A * 2017-03-09 2017-07-28 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Human skin recognition method and device combined with depth information, and electronic device
CN106937054A * 2017-03-30 2017-07-07 Vivo Mobile Communication Co., Ltd. Photographing blurring method for a mobile terminal, and mobile terminal
CN107016639A * 2017-03-30 2017-08-04 Nubia Technology Co., Ltd. Image processing method and device

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009539B * 2017-12-26 2021-11-02 Sun Yat-sen University Novel text recognition method based on counting focusing model
CN108009539A * 2017-12-26 2018-05-08 Sun Yat-sen University Novel text recognition method based on counting focusing model
US11176355B2 2018-06-13 2021-11-16 Beijing Microlive Vision Technology Co., Ltd. Facial image processing method and apparatus, electronic device and computer-readable storage medium
CN108830892A * 2018-06-13 2018-11-16 Beijing Microlive Vision Technology Co., Ltd. Face image processing method and apparatus, electronic device and computer-readable storage medium
WO2019237745A1 * 2018-06-13 2019-12-19 Beijing Microlive Vision Technology Co., Ltd. Facial image processing method and apparatus, electronic device and computer-readable storage medium
CN108830892B * 2018-06-13 2020-03-06 Beijing Microlive Vision Technology Co., Ltd. Face image processing method and device, electronic equipment and computer-readable storage medium
CN108900790B * 2018-06-26 2021-01-01 Nubia Technology Co., Ltd. Video image processing method, mobile terminal and computer-readable storage medium
CN108900790A * 2018-06-26 2018-11-27 Nubia Technology Co., Ltd. Video image processing method, mobile terminal and computer-readable storage medium
CN108848367B * 2018-07-26 2020-08-07 Ningbo Shiruidi Photoelectric Co., Ltd. Image processing method and device, and mobile terminal
CN108848367A * 2018-07-26 2018-11-20 Ningbo Shiruidi Photoelectric Co., Ltd. Image processing method and device, and mobile terminal
CN109068063A * 2018-09-20 2018-12-21 Vivo Mobile Communication Co., Ltd. Three-dimensional image data processing and display method, device and mobile terminal
CN109068063B * 2018-09-20 2021-01-15 Vivo Mobile Communication Co., Ltd. Three-dimensional image data processing and display method, device and mobile terminal
CN109379531A * 2018-09-29 2019-02-22 Vivo Mobile Communication Co., Ltd. Shooting method and mobile terminal
CN109379531B * 2018-09-29 2021-07-16 Vivo Mobile Communication Co., Ltd. Shooting method and mobile terminal
US11683583B2 2019-01-18 2023-06-20 Kandao Technology Co., Ltd. Picture focusing method, apparatus, terminal, and corresponding storage medium
WO2020147790A1 * 2019-01-18 2020-07-23 Shenzhen Kandao Technology Co., Ltd. Picture focusing method, apparatus and device, and corresponding storage medium
CN111614888B * 2019-02-26 2022-03-18 Wistron Corporation Image blurring processing method and system
US11164290B2 2019-02-26 2021-11-02 Wistron Corporation Method and system for image blurring processing
CN111614888A * 2019-02-26 2020-09-01 Wistron Corporation Image blurring processing method and system
CN110991298A * 2019-11-26 2020-04-10 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and device, storage medium and electronic device
WO2021103474A1 * 2019-11-26 2021-06-03 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, storage medium and electronic apparatus
CN111161136A * 2019-12-30 2020-05-15 Shenzhen SenseTime Technology Co., Ltd. Image blurring method, device, equipment and storage device
CN111161136B * 2019-12-30 2023-11-03 Shenzhen SenseTime Technology Co., Ltd. Image blurring method, device, equipment and storage device
CN111586348B * 2020-04-15 2022-04-12 Fujian Star-net eVideo Information Systems Co., Ltd. Video background image acquisition method, storage medium, video matting method and storage device
CN111586348A * 2020-04-15 2020-08-25 Fujian Star-net eVideo Information Systems Co., Ltd. Video background image acquisition method, storage medium, video matting method and storage device
CN113313646A * 2021-05-27 2021-08-27 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic device and computer-readable storage medium
CN113313646B * 2021-05-27 2024-04-16 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and device, electronic device and computer-readable storage medium
CN114125286A * 2021-11-18 2022-03-01 Vivo Mobile Communication Co., Ltd. Shooting method and device

Also Published As

Publication number Publication date
CN107509031B (en) 2019-12-27

Similar Documents

Publication Publication Date Title
CN107509031A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN107493432A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN108537749B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108537155B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108055452A (en) Image processing method, device and equipment
CN108111749B (en) Image processing method and device
CN107959778B Imaging method and device based on dual cameras
CN110149482A (en) Focusing method, device, electronic equipment and computer readable storage medium
CN107808137A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107730446B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107948519A (en) Image processing method, device and equipment
CN107451969A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN107680128A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107862657A (en) Image processing method, device, computer equipment and computer-readable recording medium
CN107977940A Background blurring processing method, device and equipment
CN108154514B (en) Image processing method, device and equipment
CN108024054A (en) Image processing method, device and equipment
CN107993209B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107800965B (en) Image processing method, device, computer readable storage medium and computer equipment
CN108024058B Image blurring processing method, device, mobile terminal and storage medium
CN108734676A (en) Image processing method and device, electronic equipment, computer readable storage medium
CN107911625A Light metering method, device, readable storage medium and computer equipment
CN107396079B (en) White balance adjustment method and device
CN108053363A (en) Background blurring processing method, device and equipment
CN109242794B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant after: OPPO Guangdong Mobile Communications Co., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan, Guangdong 523860

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant