CN108810407A - Image processing method, mobile terminal and computer-readable storage medium - Google Patents
Image processing method, mobile terminal and computer-readable storage medium
- Publication number
- CN108810407A (application number CN201810536275.5A)
- Authority
- CN
- China
- Prior art keywords
- preview screen
- foreground
- foreground target
- type
- background
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The application is applicable to the technical field of image processing, and provides an image processing method, a mobile terminal and a computer-readable storage medium. The image processing method includes: after the camera of the mobile terminal starts, determining whether the currently started camera is the front camera; if the currently started camera is the front camera, detecting through a first detection model whether a foreground target exists in the preview screen of the camera; if a foreground target is detected in the preview screen by the first detection model, acquiring the foreground type of the foreground target and estimating the background type of the preview screen; and performing image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen. The application can solve the problem that currently shot photos have a single, uniform effect, so that the shot photos present diversified effects.
Description
Technical field
The application belongs to the technical field of image processing, and in particular relates to an image processing method, a mobile terminal and a computer-readable storage medium.
Background art
With the development of intelligent mobile terminals, people use the photographing functions of mobile terminals such as mobile phones more and more frequently. The camera function of most existing mobile terminals supports image processing, for example filter, skin-smoothing and whitening functions for faces.
However, current image processing modes are relatively limited; for example, the processing applied to a face is only beautification-related processing. As a result, the effect of photos shot at present is relatively uniform, and the user experience is poor.
Summary of the invention
In view of this, the embodiments of the present application provide an image processing method, a mobile terminal and a computer-readable storage medium, to solve the problems that photos currently shot have a single effect and that the user experience is poor.
A first aspect of the embodiments of the present application provides an image processing method applied to a mobile terminal, the method including:
after the camera of the mobile terminal starts, determining whether the currently started camera is the front camera;
if the currently started camera is the front camera, detecting through a first detection model whether a foreground target exists in the preview screen of the camera;
if a foreground target is detected in the preview screen by the first detection model, acquiring the foreground type of the foreground target, and estimating the background type of the preview screen;
performing image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
A second aspect of the embodiments of the present application provides a mobile terminal, including:
a determining module, configured to determine, after the camera of the mobile terminal starts, whether the currently started camera is the front camera;
a first foreground identification module, configured to detect, through a first detection model, whether a foreground target exists in the preview screen of the camera if the currently started camera is the front camera;
a first background recognition module, configured to acquire the foreground type of the foreground target and estimate the background type of the preview screen if a foreground target is detected in the preview screen by the first detection model;
a first image processing module, configured to perform image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
A third aspect of the embodiments of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the method provided in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by one or more processors, implements the steps of the method provided in the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product including a computer program, where the computer program, when executed by one or more processors, implements the steps of the method provided in the first aspect of the embodiments of the present application.
In the embodiments of the present application, when the front camera is used for shooting, the foreground target of the preview screen is detected through the first detection model, the foreground type of the foreground target is acquired, and the background type of the preview screen is estimated; image processing is then performed on the foreground target according to the foreground type and the estimated background type. Since shooting with the front camera is typically self-portrait shooting, the probability that the foreground target is a face is high, and the object of image processing is therefore concentrated on the foreground target; the background type can be obtained by estimation from the background image, which saves memory usage. Processing the foreground target according to the foreground type and the background type thus both saves memory usage and diversifies the effects presented by the shot photos.
Description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labour.
Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
Fig. 2 is a schematic flowchart of another image processing method provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present application;
Fig. 4 is a schematic block diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 5 is a schematic block diagram of another mobile terminal provided by an embodiment of the present application.
Detailed description of the embodiments
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terms used in this specification are merely for the purpose of describing specific embodiments and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In order to illustrate the technical solutions described herein, specific embodiments are used for description below.
Fig. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present application. The method is applied to a mobile terminal and, as shown in the figure, may include the following steps:
Step S101: after the camera of the mobile terminal starts, determine whether the currently started camera is the front camera.
In the embodiments of the present application, mobile terminals such as mobile phones are usually provided with both a front camera and a rear camera. Because the front camera and the rear camera are mounted at different positions, their application scenarios also differ: the front camera is usually used for self-portraits of the photographer, while the rear camera is usually used for shooting other people or objects. For the front camera, the face needs to be processed during self-portrait shooting to achieve a good shooting effect; for the rear camera, the entire picture usually needs to be processed to achieve a good shooting effect. Therefore, after the camera of the mobile terminal starts, it is first necessary to determine whether the currently started camera is the front camera or the rear camera before performing the corresponding image processing on the preview screen.
Step S102: if the currently started camera is the front camera, detect through a first detection model whether a foreground target exists in the preview screen of the camera.
In the embodiments of the present application, the first detection model is a target detection model, which is used to detect the foreground target in the preview screen.
In practical applications, a scene detection model may also be used as the first detection model to detect whether a foreground target exists in the preview screen of the camera. A scene detection model typically analyses the content of a single image and outputs multiple scenes together with the position of each scene in the picture, and is therefore suitable for outputting the foreground information of an image.
For ease of distinction, a scene classification model is also introduced here: a scene classification model typically analyses the content of a single image and outputs a single scene, and is therefore suitable for outputting the background information of a picture. Accordingly, when acquiring the foreground target of the preview screen, a scene detection model is used; when acquiring the background type of the preview screen, a scene classification model is used. Of course, in practical applications, other models may also be used to acquire the foreground target of the preview screen, the foreground type corresponding to the foreground target, and the background type.
Step S103: if a foreground target is detected in the preview screen by the first detection model, acquire the foreground type of the foreground target, and estimate the background type of the preview screen.
In the embodiments of the present application, when a foreground target is detected by the first detection model, the first detection model can output the foreground type of the foreground target and the position information of the foreground target. The region of the preview screen outside the foreground target is taken as the background image, and the background type of the preview screen is estimated from it. The background type of the preview screen can be estimated by the method of the embodiment shown in Fig. 3, which is not detailed here.
Step S104: perform image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
In the embodiments of the present application, since the currently started camera is the front camera, the main object being shot is the foreground target, so image processing of the preview screen can focus on the foreground target; the processing mode can be selected according to the foreground type of the foreground target and the estimated background type of the preview screen. It should be noted that image processing may be performed only on the foreground target, or global processing may first be performed on the preview screen followed by local processing on the foreground target.
As an example, when the foreground target is a face, the foreground type can be determined to be a face. If the estimated background type is a night scene, then when performing image processing on the foreground target, denoising can first be applied to the foreground target to obtain a denoised face image; image enhancement is then applied to the denoised face image; finally, beautification is applied to the enhanced face image. In this way a face image that is clear and has a beautification effect can be obtained.
In the embodiments of the present application, when the front camera is used for shooting, the foreground target of the preview screen is detected through the first detection model, the foreground type of the foreground target is acquired, and the background type of the preview screen is estimated; image processing is then performed on the foreground target according to the foreground type and the estimated background type. Since shooting with the front camera is typically self-portrait shooting, the probability that the foreground target is a face is high, so the object of processing is concentrated on the foreground target; the background type can be obtained by estimation from the background image, which saves memory usage. Processing the foreground target according to the foreground type and the background type thus both saves memory usage and diversifies the effects presented by the shot photos.
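The face/night-scene example above (denoise, then enhance, then beautify) can be sketched with elementary NumPy operations. The concrete filters chosen here — a 3x3 mean filter for denoising, a linear gain for enhancement, a blend with the smoothed copy for beautification — are assumptions for illustration only; the patent does not prescribe particular algorithms.

```python
import numpy as np

def box_denoise(img):
    """3x3 mean filter as a stand-in for the denoising step (assumed)."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    # Average the nine shifted 3x3 neighbourhood windows.
    return sum(padded[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def enhance(img, gain=1.3):
    """Simple brightness/contrast gain as the enhancement step (assumed)."""
    return np.clip(img * gain, 0, 255)

def beautify(img, smooth_weight=0.5):
    """Blend with a denoised copy: a crude skin-smoothing effect (assumed)."""
    return (1 - smooth_weight) * img + smooth_weight * box_denoise(img)

def process_night_face(face_region):
    # Order from the patent's example: denoise -> enhance -> beautify.
    return beautify(enhance(box_denoise(face_region)))
```

A uniform gray patch passes through unchanged apart from the enhancement gain, which makes the ordering of the three stages easy to verify in isolation.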
Fig. 2 is a schematic flowchart of another image processing method provided by an embodiment of the present application. As shown in the figure, the method may include the following steps:
Step S201: after the camera of the mobile terminal starts, determine whether the currently started camera is the front camera.
Step S202: if the currently started camera is the front camera, detect through a first detection model whether a foreground target exists in the preview screen of the camera.
For the content of steps S201 to S202, reference may be made to the description of steps S101 to S102, which is not repeated here.
Step S203: if a foreground target is detected in the preview screen by the first detection model, acquire the foreground type of the foreground target, and judge whether the foreground target meets a preset condition.
In the embodiments of the present application, after the foreground type of the foreground target is acquired, it is also necessary to judge whether the foreground target meets a preset condition.
Judging whether the foreground target meets the preset condition includes:
judging whether the proportion of the foreground target in the preview screen is greater than a preset value;
if the proportion of the foreground target in the preview screen is greater than the preset value, determining that the foreground target meets the preset condition;
if the proportion of the foreground target in the preview screen is less than or equal to the preset value, determining that the foreground target does not meet the preset condition.
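The preset-condition judgment of step S203, using the detection-box area as the proportion, can be sketched as follows. The concrete preset value of 0.5 is an assumption; the patent only speaks of "a preset value".

```python
def meets_preset_condition(box, frame_size, preset_value=0.5):
    """Judge step S203's preset condition: is the proportion of the
    foreground target's detection box in the preview screen greater
    than the preset value?  box is (x, y, w, h) in pixels."""
    frame_w, frame_h = frame_size
    x, y, w, h = box
    ratio = (w * h) / float(frame_w * frame_h)
    return ratio > preset_value
```

Using the box area is the cheaper of the two options the description mentions; the alternative is the area of the segmented target region itself.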
In the embodiments of the present application, whether the foreground target meets the preset condition is determined from the proportion of the foreground target in the preview screen. If the proportion of the foreground target in the preview screen is small, the foreground target does not meet the preset condition, indicating that the object currently being shot includes not only the foreground target but also the background; if the proportion of the foreground target in the preview screen is large, the foreground target meets the preset condition, indicating that the object currently being shot is mainly the foreground target. Different image processing modes can be set for different shooting objects.
To calculate the proportion of the foreground target in the preview screen, the proportion of the detection box containing the foreground target may be calculated, or the proportion of the actual region of the foreground target in the preview screen may be calculated. If the actual region of the foreground target is used, the foreground target needs to be segmented from the preview screen; the segmentation method may refer to the embodiment shown in Fig. 3.
Step S204: if the foreground target meets the preset condition, estimate the background type of the preview screen.
Step S204 is consistent with the content of step S103; reference may be made to the description of step S103, which is not repeated here.
Step S205: perform image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
Step S205 is consistent with the content of step S104; reference may be made to the description of step S104, which is not repeated here.
Step S206: if the foreground target does not meet the preset condition, detect the background type of the preview screen through a second detection model.
In the embodiments of the present application, if the foreground target does not meet the preset condition, the object currently being shot includes the background in addition to the foreground target, and the background type of the background image needs to be obtained accurately, so that image processing can be performed on the preview screen according to the background type and the foreground type of the foreground target.
As can be seen from the description of step S204 and the embodiment shown in Fig. 1, when the proportion of the foreground target in the preview screen is large, the current shooting object is the foreground target, the object of image processing is also the foreground target, and the region occupied by the background is small, so image processing need not be performed on the background region. However, when the proportion of the foreground target in the preview screen is small, the shooting object includes the background region in addition to the foreground target, so when processing the preview screen, global processing is needed, or local processing of the foreground target is performed after global processing. Global processing of the preview screen requires the background type of the preview screen to be obtained accurately; it cannot be obtained by estimation. Therefore, in the embodiments of the present application, when the foreground target does not meet the preset condition, the background type of the preview screen is detected through the second detection model. The second detection model may be the scene classification model described in the embodiment shown in Fig. 1, or another convolutional neural network model, which is not limited here.
Step S207: perform image processing on the preview screen according to the foreground type of the foreground target and the detected background type of the preview screen.
In the embodiments of the present application, if the foreground type is a face and the background type is a night scene, global denoising can be applied to the preview screen to obtain a denoised image, and global image enhancement can then be applied to the denoised image; of course, after the global image processing of the preview screen, beautification may also be performed on the face.
Step S208: if the currently started camera is not the front camera, detect through the first detection model whether a foreground target exists in the preview screen.
Step S209: if a foreground target is detected in the preview screen by the first detection model, acquire the foreground type of the foreground target, and detect the background type of the preview screen through the second detection model.
Step S210: perform image processing on the preview screen according to the foreground type of the foreground target and the detected background type of the preview screen.
In the embodiments of the present application, if the currently started camera is not the front camera, the currently started camera is the rear camera, which indicates that the object currently being shot may include not only the foreground target but also the background region. In this case, both the foreground type of the foreground target and the background type of the preview screen need to be obtained accurately, so that image processing can be performed on the preview screen according to the foreground type of the foreground target and the detected background type of the preview screen.
It should be noted that if the currently started camera is the rear camera, the image processing mode may refer to the processing mode used when the front camera is started and the proportion of the foreground target in the preview screen is less than or equal to the preset value, which is not repeated here.
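The branching of Fig. 2 — front camera with a dominant foreground target versus every other case — can be summarised as a small dispatcher. The string labels and the preset value of 0.5 are illustrative assumptions, not names from the patent.

```python
def choose_processing_path(is_front_camera, foreground_detected,
                           foreground_ratio, preset_value=0.5):
    """Dispatch mirroring the branches of the Fig. 2 embodiment."""
    if not foreground_detected:
        return "no foreground processing"
    if is_front_camera and foreground_ratio > preset_value:
        # Steps S204-S205: estimate the background type cheaply and
        # process only the foreground target.
        return "estimate background, process foreground target"
    # Steps S206-S207 (front camera, small foreground) and S208-S210
    # (rear camera): detect the background type with the second
    # detection model and process the whole preview screen.
    return "detect background, process preview screen"
```

The dispatcher makes explicit that the rear-camera path and the "foreground too small" front-camera path share the same accurate-detection processing mode, as the note above states.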
Fig. 3 is a schematic flowchart of another image processing method provided by an embodiment of the present application. On the basis of the embodiment shown in Fig. 1 or Fig. 2, this method describes how to estimate the background type of the preview screen, and may specifically include the following steps:
Step S301: according to the position information of the foreground target in the preview screen, remove the foreground target from the preview screen to obtain the background image of the preview screen.
In the embodiments of the present application, the position information of the foreground target in the preview screen may be the position information of the detection box corresponding to the foreground target in the preview screen. The step of removing the foreground target from the preview screen to obtain the background image of the preview screen may be:
based on the position information of the detection box, obtaining the image within the detection box from the preview screen; performing segmentation processing on the image within the detection box to obtain the foreground target; and removing the foreground target from the preview screen to obtain the background image.
In the embodiments of the present application, since the detection box is usually a rectangular window containing the foreground target, the image within the detection box is not entirely the foreground target; especially when the shape of the foreground target is irregular, the image within the detection box may also contain background. Since in the embodiments of the present application the background type is inherently estimated from the background image, a precise region of the background image needs to be obtained, which reduces the error when estimating the background type. The image within the detection box can be obtained from the preview screen according to the position information of the detection box; the image within the detection box is then segmented to obtain the foreground target, and the image remaining in the preview screen after the foreground target is removed is the background image.
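A coarse version of step S301 can be sketched as follows: crop the detection-box image from the preview frame and collect the pixels outside the box as a first approximation of the background image. This is an illustrative simplification — as described above, a full implementation would remove the segmented foreground target itself, not the whole rectangular box.

```python
import numpy as np

def split_box_and_background(frame, box):
    """Crop the detection-box region from the preview frame and return
    (box_image, background_pixels), where background_pixels is an
    (N, channels) array of all pixels outside the box."""
    x, y, w, h = box
    box_image = frame[y:y + h, x:x + w].copy()
    mask = np.ones(frame.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = False          # exclude the box region
    background_pixels = frame[mask]          # flattened background pixels
    return box_image, background_pixels
```

The flattened background pixels are exactly what the mean-coordinate estimation of steps S302 to S303 consumes.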
As another embodiment of the present application, performing segmentation processing on the image within the detection box to obtain the foreground target includes:
acquiring a gray-threshold sequence, and binarising the image within the detection box with each gray threshold in the gray-threshold sequence to obtain a binary-image sequence;
identifying, based on the gray gradients of the image within the detection box, the boundary of the foreground target in the image within the detection box to obtain a foreground-target contour line;
obtaining, from the binary-image sequence, the binary image with the highest degree of matching with the shape of the foreground target;
fusing the binary image with the highest degree of matching with the foreground-target contour line to generate a continuous foreground-target region, where the image within the foreground-target region of the preview screen is the foreground target.
In the embodiments of the present application, binarisation can be applied to the image within the detection box; if the threshold is set properly, the foreground-target region can be obtained. However, in practical applications it is difficult to choose a threshold that accurately separates the foreground target from the background, and even with a suitable threshold there will inevitably be pixels in the background whose gray values are identical to those of pixels in the foreground target. Therefore, in the embodiments of the present application, the binarisation method can be combined with the gray-gradient method to identify the foreground target within the detection box.
First, gray-scale processing is performed on the image within the detection box to obtain a gray image; a gray-threshold sequence is then acquired, and the image within the detection box is binarised with each gray threshold in the sequence, yielding a binary-image sequence. Among these binary images there will be one gray image that can roughly express the foreground-target region.
Since the detection box comes from the position information of the foreground target identified by the first detection model, it can be assumed that there is one foreground target in the detection box and that the foreground target occupies most of the region of the detection box. In practical applications, the boundary of a target is an important basis for distinguishing the target from the background, and the rate of gray-value change around a boundary point is usually high; therefore the boundary of the target can be identified through the gray gradients of the image, yielding the target contour line. However, the contour line can have the following problems: a real boundary in the image may not generate a contour line because the gradient change there is not obvious; or a contour may be generated inside the target where there is no real boundary, because the gray-value change inside the foreground target is relatively obvious.
From this analysis it can be seen that both the binarisation method and the gray-gradient method have certain defects, and the results obtained by either alone are not highly precise. In order to obtain an accurate result, the embodiments of the present application combine the binarisation method and the gray-gradient method.
In the binary-image sequence, there is one image whose target region is closest to the true region of the foreground target. The image closest to the true region of the foreground target can be obtained from the binary-image sequence as the binary image with the highest degree of matching with the target contour line. The degree of matching can be measured by the degree of coincidence between the target regions obtained respectively by the binarisation method and the gray-gradient method: the target region in each binary image can provisionally be regarded as the foreground-target region, and the region within the contour line obtained by the gray-gradient method can also provisionally be regarded as the foreground-target region; the binary image whose target region has the highest coincidence with the region obtained by the gray-gradient method is then found. The target region in this binary image best represents the real foreground-target region.
In fact, neither the target region in the binary image with the highest degree of matching nor the region indicated by the contour line obtained by the gray-gradient method can accurately describe the foreground-target region on its own. However, fusing the binary image with the highest degree of matching with the foreground-target contour line generates a continuous foreground-target region: the binary image is used to discard the inaccurate parts of the contour line obtained from the gray gradients, and the contour line obtained by the gray-gradient method is used to discard the inaccurate parts of the binary image. After fusion, a continuous foreground-target region is obtained. The foreground-target region is not the real foreground-target image itself, because it is fused from the binarised gray image and the contour line; what the obtained foreground-target region expresses is the coordinates of the foreground target in the preview screen. The image within the foreground-target region of the preview screen is the foreground target.
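The two ingredients of the fusion described above — a binary-image sequence over a gray-threshold sequence, and a gradient-based contour — can be sketched as follows. Using intersection-over-union as the "degree of coincidence" is an assumption for the sketch, and the final fusion step (discarding inaccurate parts of each result against the other) is not reproduced here.

```python
import numpy as np

def gradient_contour(gray, grad_threshold):
    """Mark boundary pixels where the gray-gradient magnitude is high."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > grad_threshold

def best_matching_binary(gray, thresholds, contour):
    """From the binary-image sequence built over `thresholds`, pick the
    mask whose foreground region coincides best with the contour region
    (IoU used as the matching degree, which is an assumption)."""
    best_mask, best_score = None, -1.0
    for t in thresholds:
        mask = gray > t                      # one binarisation of the box image
        inter = np.logical_and(mask, contour).sum()
        union = np.logical_or(mask, contour).sum()
        score = inter / union if union else 0.0
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask
```

On a synthetic gray patch with one bright square, thresholds below the square's gray value recover the square, while a threshold above it yields an empty mask that scores zero coincidence.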
The embodiment of the present application combines the binarization method and the gray gradient method to obtain the foreground target, so that the foreground target can be segmented from the preview screen more accurately. After the foreground target is obtained, it can be removed from the preview screen to obtain a background image, and the background type of the background image can then be estimated according to the data feature values of the pixels in the background image.
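The fusion of the binarization method and the gray gradient method described above can be sketched as follows. This is a minimal NumPy illustration under assumptions of our own: the set of candidate thresholds, the gradient-magnitude threshold, the intersection-over-union form of the matching degree, and the union-style fusion step are all hypothetical simplifications and are not details fixed by the application.

```python
import numpy as np

def gradient_contour_mask(gray, grad_thresh=30.0):
    """Gray-gradient 'contour': pixels whose gradient magnitude is large.
    grad_thresh is an assumed value for illustration only."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > grad_thresh

def matching_degree(binary_mask, contour_mask):
    """Registration (overlap) between the two candidate target areas,
    scored here as intersection-over-union (an assumed choice)."""
    inter = np.logical_and(binary_mask, contour_mask).sum()
    union = np.logical_or(binary_mask, contour_mask).sum()
    return inter / union if union else 0.0

def extract_foreground_mask(gray, thresholds=(64, 96, 128, 160, 192)):
    """Pick the binary image whose target area best matches the gradient
    contour, then fuse the two into one continuous foreground region."""
    contour = gradient_contour_mask(gray)
    candidates = [gray > t for t in thresholds]
    best = max(candidates, key=lambda m: matching_degree(m, contour))
    # Crude stand-in for the discard-and-fuse step of the application:
    # keep every pixel supported by either the best binary mask or the
    # contour mask, yielding a continuous region of coordinates.
    return np.logical_or(best, contour)
```

The returned mask gives the coordinates of the foreground target in the preview screen, matching the description above that the fused region is a coordinate region rather than the foreground image itself.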
Step S302: obtain the coordinates of the pixels in the background image in the RGB color gamut space.
Step S303: calculate the mean coordinate of all pixels in the RGB color gamut space, and determine the background type of the background image according to the color corresponding to the mean coordinate in the color gamut space.
In the embodiment of the present application, in order to reduce memory usage, the background type can be estimated directly from the background image. For example, the coordinates of the pixels in the background image in the RGB color gamut space are obtained; the mean coordinate of all pixels in the RGB color gamut space is calculated; and the background type of the background image is determined according to the color corresponding to the mean coordinate in the color gamut space.
In practical applications, the color gamut space can be divided into different regions according to color gamut, and a different background type can be set for each region. After the mean coordinate of all pixels of the background image in the RGB color gamut space is calculated, the background type is determined according to the color gamut region corresponding to the mean coordinate. As an example, if the color corresponding to the mean coordinate of all pixels of the background image in the RGB color gamut space is green, the background type of the background image can be estimated to be grass; if the corresponding color is black, the background type can be estimated to be a night scene; and if the corresponding color is blue, the background type can be estimated to be blue sky.
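The mean-coordinate mapping just described can be sketched as follows. The numeric thresholds that delimit the color gamut regions and the "unknown" fallback are assumptions of our own; the application only names the green/grass, black/night scene, and blue/blue sky examples.

```python
import numpy as np

def estimate_background_type(background_rgb):
    """Map the mean RGB coordinate of all background pixels (foreground
    pixels already removed) to a background type. The region boundaries
    below are assumed values for illustration only."""
    r, g, b = background_rgb.reshape(-1, 3).mean(axis=0)
    if max(r, g, b) < 50:        # near-black region -> night scene
        return "night scene"
    if g > r and g > b:          # green-dominant region -> grass
        return "grass"
    if b > r and b > g:          # blue-dominant region -> blue sky
        return "blue sky"
    return "unknown"             # assumed fallback, not in the source
```

Because only one mean coordinate is computed for the whole background image, this check is cheap in memory, consistent with the stated motivation of reducing memory usage.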
It should be noted that the RGB color gamut space is only an example; in practical applications, the embodiment of the present application may also use other color gamut spaces.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 4 is a schematic block diagram of a mobile terminal provided by an embodiment of the present application. For convenience of description, only the parts relevant to the embodiment of the present application are shown.
The mobile terminal 4 can be a software unit, a hardware unit, or a unit combining software and hardware built into a mobile terminal such as a mobile phone, tablet computer, or notebook, or it can be integrated as an independent component into such a mobile terminal.
The mobile terminal 4 includes:
a determining module 41, configured to determine, after the camera of the mobile terminal starts, whether the currently started camera is a front camera;
a first foreground identification module 42, configured to detect, through a first detection model, whether a foreground target exists in the preview screen of the camera if the currently started camera is a front camera;
a first background recognition module 43, configured to obtain the foreground type of the foreground target and estimate the background type of the preview screen if a foreground target is detected in the preview screen through the first detection model;
a first image processing module 44, configured to perform image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
Optionally, the first background recognition module 43 further includes:
a judging unit 431, configured to judge whether the foreground target meets a preset condition before the background type of the preview screen is estimated;
the first background recognition unit 432 is further configured to:
estimate the background type of the preview screen if the foreground target meets the preset condition.
Optionally, the first background recognition module 43 further includes:
a second background recognition unit 433, configured to detect, through a second detection model, the background type of the preview screen if the foreground target does not meet the preset condition, after judging whether the foreground target meets the preset condition.
The mobile terminal 4 further includes:
a second image processing module 45, configured to perform image processing on the preview screen according to the foreground type of the foreground target and the detected background type of the preview screen.
Optionally, the judging unit 431 is further configured to:
judge whether the proportion of the foreground target in the preview screen is greater than a preset value;
determine that the foreground target meets the preset condition if the proportion of the foreground target in the preview screen is greater than the preset value; and
determine that the foreground target does not meet the preset condition if the proportion of the foreground target in the preview screen is less than or equal to the preset value.
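The proportion test performed by the judging unit can be sketched as follows. The preset value of 0.3 and the boolean-mask representation of the foreground target are assumptions for illustration only; the application does not fix either.

```python
import numpy as np

def meets_preset_condition(foreground_mask, preset_value=0.3):
    """True when the foreground target's share of the preview screen
    exceeds the preset value (0.3 here is an assumed value)."""
    ratio = foreground_mask.sum() / foreground_mask.size
    return bool(ratio > preset_value)
```

When the condition holds, the background type is estimated from the remaining background pixels; otherwise the second detection model is used, as described above.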
Optionally, the first background recognition unit 432 includes:
a background image acquisition subunit, configured to remove the foreground target from the preview screen according to the location information of the foreground target in the preview screen, to obtain the background image of the preview screen;
a background type identification subunit, configured to estimate the background type of the background image according to the data feature values of the pixels in the background image.
Optionally, the background type identification subunit includes:
a coordinate acquisition subunit, configured to obtain the coordinates of the pixels in the background image in the RGB color gamut space;
a background type identification subunit, configured to calculate the mean coordinate of all pixels in the RGB color gamut space, and determine the background type of the background image according to the color corresponding to the mean coordinate in the color gamut space.
Optionally, the mobile terminal 4 further includes:
a second foreground identification module 46, configured to detect, through the first detection model, whether a foreground target exists in the preview screen if the currently started camera is not a front camera, after determining whether the currently started camera is a front camera;
a second background recognition module 47, configured to obtain the foreground type of the foreground target and detect the background type of the preview screen through the second detection model if a foreground target is detected in the preview screen through the first detection model;
a third image processing module 48, configured to perform image processing on the preview screen according to the foreground type of the foreground target and the detected background type of the preview screen.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example. In practical applications, the above functions can be allocated to different functional units and modules as needed; that is, the internal structure of the mobile terminal can be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments can be integrated into one processing unit, each unit can exist alone physically, or two or more units can be integrated into one unit; the integrated unit can be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Fig. 5 is a schematic block diagram of a mobile terminal provided by another embodiment of the present application. As shown in Fig. 5, the mobile terminal 5 of this embodiment includes one or more processors 50, a memory 51, and a computer program 52 stored in the memory 51 and executable on the processor 50. When executing the computer program 52, the processor 50 implements the steps in the above image processing method embodiments, such as steps S101 to S104 shown in Fig. 1. Alternatively, when executing the computer program 52, the processor 50 implements the functions of the modules/units in the above mobile terminal embodiments, such as the functions of modules 41 to 44 shown in Fig. 4.
Illustratively, the computer program 52 can be divided into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to complete the present application. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 52 in the mobile terminal 5. For example, the computer program 52 can be divided into a determining module, a first foreground identification module, a first background recognition module, and a first image processing module:
the determining module, configured to determine, after the camera of the mobile terminal starts, whether the currently started camera is a front camera;
the first foreground identification module, configured to detect, through the first detection model, whether a foreground target exists in the preview screen of the camera if the currently started camera is a front camera;
the first background recognition module, configured to obtain the foreground type of the foreground target and estimate the background type of the preview screen if a foreground target is detected in the preview screen through the first detection model;
the first image processing module, configured to perform image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
Other modules or unit can refer to the description in embodiment shown in Fig. 4, and details are not described herein.
The mobile terminal includes, but is not limited to, the processor 50 and the memory 51. Those skilled in the art will understand that Fig. 5 is only an example of the mobile terminal 5 and does not constitute a limitation on it; the mobile terminal may include more or fewer components than illustrated, combine certain components, or use different components. For example, the mobile terminal can also include input devices, output devices, network access devices, buses, and the like.
The processor 50 can be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor can be a microprocessor, or the processor can be any conventional processor.
The memory 51 can be an internal storage unit of the mobile terminal 5, such as a hard disk or internal memory of the mobile terminal 5. The memory 51 can also be an external storage device attached to the mobile terminal 5, such as a plug-in hard disk, smart media card (Smart Media Card, SMC), secure digital (Secure Digital, SD) card, or flash card (Flash Card). Further, the memory 51 can include both an internal storage unit of the mobile terminal 5 and an external storage device. The memory 51 is used to store the computer program and other programs and data required by the mobile terminal, and can also be used to temporarily store data that has been output or is about to be output.
In the above embodiments, each embodiment is described with its own emphasis. For parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed mobile terminal and method can be implemented in other ways. For example, the mobile terminal embodiments described above are only schematic: the division of the modules or units is only a division of logical functions, and in actual implementation there can be other division manners; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed can be an indirect coupling or communication connection through some interfaces, devices, or units, and can be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present application can be integrated into one processing unit, each unit can exist alone physically, or two or more units can be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the flows in the methods of the above embodiments of the present application can also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which can be in source code form, object code form, an executable file, certain intermediate forms, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The embodiments described above are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and should all fall within the protection scope of the present application.
Claims (10)
1. An image processing method, applied to a mobile terminal, the method comprising:
after a camera of the mobile terminal starts, determining whether the currently started camera is a front camera;
if the currently started camera is a front camera, detecting, through a first detection model, whether a foreground target exists in the preview screen of the camera;
if a foreground target is detected in the preview screen through the first detection model, obtaining the foreground type of the foreground target, and estimating the background type of the preview screen; and
performing image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
2. The image processing method according to claim 1, wherein before estimating the background type of the preview screen, the method further comprises:
judging whether the foreground target meets a preset condition; and
if the foreground target meets the preset condition, estimating the background type of the preview screen.
3. The image processing method according to claim 2, wherein after judging whether the foreground target meets the preset condition, the method further comprises:
if the foreground target does not meet the preset condition, detecting the background type of the preview screen through a second detection model; and
performing image processing on the preview screen according to the foreground type of the foreground target and the detected background type of the preview screen.
4. The image processing method according to claim 2, wherein judging whether the foreground target meets the preset condition comprises:
judging whether the proportion of the foreground target in the preview screen is greater than a preset value;
if the proportion of the foreground target in the preview screen is greater than the preset value, determining that the foreground target meets the preset condition; and
if the proportion of the foreground target in the preview screen is less than or equal to the preset value, determining that the foreground target does not meet the preset condition.
5. The image processing method according to claim 1, wherein estimating the background type of the preview screen comprises:
removing the foreground target from the preview screen according to the location information of the foreground target in the preview screen, to obtain the background image of the preview screen; and
estimating the background type of the background image according to the data feature values of the pixels in the background image.
6. The image processing method according to claim 5, wherein estimating the background type of the background image according to the data feature values of the pixels in the background image comprises:
obtaining the coordinates of the pixels in the background image in the RGB color gamut space; and
calculating the mean coordinate of all pixels in the RGB color gamut space, and determining the background type of the background image according to the color corresponding to the mean coordinate in the color gamut space.
7. The image processing method according to any one of claims 1 to 6, wherein after determining whether the currently started camera is a front camera, the method further comprises:
if the currently started camera is not a front camera, detecting, through the first detection model, whether a foreground target exists in the preview screen;
if a foreground target is detected in the preview screen through the first detection model, obtaining the foreground type of the foreground target, and detecting the background type of the preview screen through a second detection model; and
performing image processing on the preview screen according to the foreground type of the foreground target and the detected background type of the preview screen.
8. A mobile terminal, comprising:
a determining module, configured to determine, after the camera of the mobile terminal starts, whether the currently started camera is a front camera;
a first foreground identification module, configured to detect, through a first detection model, whether a foreground target exists in the preview screen of the camera if the currently started camera is a front camera;
a first background recognition module, configured to obtain the foreground type of the foreground target and estimate the background type of the preview screen if a foreground target is detected in the preview screen through the first detection model; and
a first image processing module, configured to perform image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview screen.
9. A mobile terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by one or more processors, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810536275.5A CN108810407B (en) | 2018-05-30 | 2018-05-30 | Image processing method, mobile terminal and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810536275.5A CN108810407B (en) | 2018-05-30 | 2018-05-30 | Image processing method, mobile terminal and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108810407A true CN108810407A (en) | 2018-11-13 |
CN108810407B CN108810407B (en) | 2021-03-02 |
Family
ID=64089258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810536275.5A Active CN108810407B (en) | 2018-05-30 | 2018-05-30 | Image processing method, mobile terminal and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108810407B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109669264A (en) * | 2019-01-08 | 2019-04-23 | 哈尔滨理工大学 | Self-adapting automatic focus method based on shade of gray value |
CN112329616A (en) * | 2020-11-04 | 2021-02-05 | 北京百度网讯科技有限公司 | Target detection method, device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004304526A (en) * | 2003-03-31 | 2004-10-28 | Konica Minolta Holdings Inc | Digital camera |
US20080100724A1 (en) * | 2006-10-25 | 2008-05-01 | Toshinobu Hatano | Image processing device and imaging device |
CN103024165A (en) * | 2012-12-04 | 2013-04-03 | 华为终端有限公司 | Method and device for automatically setting shooting mode |
CN106101536A (en) * | 2016-06-22 | 2016-11-09 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN106131418A (en) * | 2016-07-19 | 2016-11-16 | 腾讯科技(深圳)有限公司 | A kind of composition control method, device and photographing device |
CN107454315A (en) * | 2017-07-10 | 2017-12-08 | 广东欧珀移动通信有限公司 | The human face region treating method and apparatus of backlight scene |
CN107742274A (en) * | 2017-10-31 | 2018-02-27 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
-
2018
- 2018-05-30 CN CN201810536275.5A patent/CN108810407B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004304526A (en) * | 2003-03-31 | 2004-10-28 | Konica Minolta Holdings Inc | Digital camera |
US20080100724A1 (en) * | 2006-10-25 | 2008-05-01 | Toshinobu Hatano | Image processing device and imaging device |
CN103024165A (en) * | 2012-12-04 | 2013-04-03 | 华为终端有限公司 | Method and device for automatically setting shooting mode |
CN106101536A (en) * | 2016-06-22 | 2016-11-09 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN106131418A (en) * | 2016-07-19 | 2016-11-16 | 腾讯科技(深圳)有限公司 | A kind of composition control method, device and photographing device |
CN107454315A (en) * | 2017-07-10 | 2017-12-08 | 广东欧珀移动通信有限公司 | The human face region treating method and apparatus of backlight scene |
CN107742274A (en) * | 2017-10-31 | 2018-02-27 | 广东欧珀移动通信有限公司 | Image processing method, device, computer-readable recording medium and electronic equipment |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109669264A (en) * | 2019-01-08 | 2019-04-23 | 哈尔滨理工大学 | Self-adapting automatic focus method based on shade of gray value |
CN112329616A (en) * | 2020-11-04 | 2021-02-05 | 北京百度网讯科技有限公司 | Target detection method, device, equipment and storage medium |
CN112329616B (en) * | 2020-11-04 | 2023-08-11 | 北京百度网讯科技有限公司 | Target detection method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN108810407B (en) | 2021-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108765278A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN108776819A (en) | A kind of target identification method, mobile terminal and computer readable storage medium | |
CN110210474B (en) | Target detection method and device, equipment and storage medium | |
CN110009556A (en) | Image background weakening method, device, storage medium and electronic equipment | |
Chuang et al. | Automatic fish segmentation via double local thresholding for trawl-based underwater camera systems | |
CN108805838A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN105243371A (en) | Human face beauty degree detection method and system and shooting terminal | |
CN104486552A (en) | Method and electronic device for obtaining images | |
CN110796041B (en) | Principal identification method and apparatus, electronic device, and computer-readable storage medium | |
CN111222506B (en) | Color recognition method, apparatus, and computer-readable storage medium | |
CN109214996A (en) | A kind of image processing method and device | |
CN111368587A (en) | Scene detection method and device, terminal equipment and computer readable storage medium | |
Kaur et al. | Single image dehazing with dark channel prior | |
CN111192205A (en) | Image defogging method and system and computer readable storage medium | |
CN109286758A (en) | A kind of generation method of high dynamic range images, mobile terminal and storage medium | |
CN108776800A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN108810407A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN108769521A (en) | A kind of photographic method, mobile terminal and computer readable storage medium | |
CN110188640B (en) | Face recognition method, face recognition device, server and computer readable medium | |
CN103313068A (en) | White balance corrected image processing method and device based on gray edge constraint gray world | |
CN106920266B (en) | The Background Generation Method and device of identifying code | |
CN109255311A (en) | A kind of information identifying method and system based on image | |
EP2930687B1 (en) | Image segmentation using blur and color | |
CN111105369A (en) | Image processing method, image processing apparatus, electronic device, and readable storage medium | |
CN112329572B (en) | Rapid static living body detection method and device based on frame and flash point |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |