CN108712606A - Reminding method, device, storage medium and mobile terminal - Google Patents


Info

Publication number
CN108712606A
Authority
CN
China
Prior art keywords
occlusion
preview image
shooting preview
occlusion area
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810457182.3A
Other languages
Chinese (zh)
Other versions
CN108712606B (en)
Inventor
王宇鹭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810457182.3A
Publication of CN108712606A
Application granted
Publication of CN108712606B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose a reminding method, a device, a storage medium and a mobile terminal. The method includes: when an occlusion detection event is triggered, acquiring a shooting preview image; inputting the shooting preview image into a pre-trained occlusion detection model; determining, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; and, if it is determined that a first occlusion area exists in the shooting preview image, prompting the user to remove the obstruction, where the obstruction includes the object that causes the first occlusion area in the shooting preview image. With this technical solution, occlusion detection can be performed on the shooting preview image by a pre-built occlusion detection model, so that the presence of an occlusion area in the shooting preview image can be judged accurately and quickly; when an occlusion area is determined to exist, the user is promptly reminded to remove the obstruction, which effectively improves the quality of the captured image.

Description

Reminding method, device, storage medium and mobile terminal
Technical field
Embodiments of the present application relate to the technical field of image processing, and in particular to a reminding method, a device, a storage medium and a mobile terminal.
Background
With the rapid development of electronic technology and the continuous improvement of living standards, terminals have become an indispensable part of people's lives. Most terminals now provide photo and video capture functions, which are deeply liked by users and ever more widely used. Through the camera function of a terminal, users record the details of daily life and save them on the terminal for later recall, appreciation and review.
However, in some cases, while a user is shooting a photo or video, an obstruction partially covers the camera, which degrades the captured picture and affects the appearance of the captured image. Improving the quality of captured images is therefore important.
Summary of the invention
Embodiments of the present application provide a reminding method, a device, a storage medium and a mobile terminal, which can effectively improve the quality of captured images.
In a first aspect, an embodiment of the present application provides a reminding method, including:
when an occlusion detection event is triggered, acquiring a shooting preview image;
inputting the shooting preview image into a pre-trained occlusion detection model; determining, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
if it is determined that a first occlusion area exists in the shooting preview image, prompting a user to remove an obstruction, where the obstruction includes an object that causes the first occlusion area in the shooting preview image.
In a second aspect, an embodiment of the present application provides a reminding device, including:
a shooting preview image acquisition module, configured to acquire a shooting preview image when an occlusion detection event is triggered;
a shooting preview image input module, configured to input the shooting preview image into a pre-trained occlusion detection model;
an occlusion area judgment module, configured to determine, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
a user prompt module, configured to prompt a user to remove an obstruction if it is determined that a first occlusion area exists in the shooting preview image, where the obstruction includes an object that causes the first occlusion area in the shooting preview image.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the reminding method described in the embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the reminding method described in the embodiments of the present application when executing the computer program.
According to the prompt scheme provided in the embodiments of the present application, when an occlusion detection event is triggered, a shooting preview image is acquired and input into a pre-trained occlusion detection model, and whether a first occlusion area exists in the shooting preview image is determined according to the output result of the occlusion detection model; if a first occlusion area is determined to exist, the user is prompted to remove the obstruction, where the obstruction includes the object that causes the first occlusion area in the shooting preview image. With this technical solution, occlusion detection can be performed on the shooting preview image by a pre-built occlusion detection model, so that the presence of an occlusion area can be judged accurately and quickly; when an occlusion area is determined to exist, the user is promptly reminded to remove the obstruction, which effectively improves the quality of the captured image.
Description of the drawings
Fig. 1 is a flowchart of a reminding method provided by an embodiment of the present application;
Fig. 2 is a flowchart of another reminding method provided by an embodiment of the present application;
Fig. 3 is a flowchart of another reminding method provided by an embodiment of the present application;
Fig. 4 is a structural block diagram of a reminding device provided by an embodiment of the present application;
Fig. 5 is a structural schematic diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 6 is a structural schematic diagram of another mobile terminal provided by an embodiment of the present application.
Detailed description of embodiments
The technical solution of the present application is further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are only used to explain the present application and do not limit it. It should also be noted that, for ease of description, the accompanying drawings show only the parts related to the present application rather than the entire structure.
It should be mentioned that, before the exemplary embodiments are discussed in greater detail, some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as a sequential process, many of the steps may be performed in parallel, concurrently or simultaneously, and the order of the steps may be rearranged. A process may be terminated when its operations are completed, but it may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Fig. 1 is a flowchart of a reminding method provided by an embodiment of the present application. This embodiment is applicable to image occlusion detection. The method may be executed by a reminding device, which may be implemented by software and/or hardware and is typically integrated in a mobile terminal. As shown in Fig. 1, the method includes:
Step 101: when an occlusion detection event is triggered, acquire a shooting preview image.
Illustratively, the mobile terminal in the embodiments of the present application may include mobile devices such as mobile phones and tablet computers.
When the occlusion detection event is triggered, the shooting preview image is acquired through the camera of the mobile terminal, so as to start occlusion detection.
Illustratively, in order to perform occlusion detection at a suitable time, trigger conditions for the occlusion detection event may be preset. Optionally, to confirm that the user actually wants occlusion detection, the occlusion detection event may be triggered when it is detected that the current user has actively enabled the occlusion detection permission. Optionally, in order to apply occlusion detection to more valuable time windows and save the extra power it consumes, the time windows and application scenarios of occlusion detection may be analyzed or surveyed so that preset scenes are configured reasonably, and the occlusion detection event is triggered when the mobile terminal is detected to be in a preset scene. For example, the occlusion detection event may be triggered when the ambient light brightness at the position of the mobile terminal exceeds a preset brightness threshold. It can be understood that when the ambient light is bright, the captured image is prone to overexposure, and users often use clothing or a hand to block the bright ambient light so as to reduce the chance of overexposure; in doing so, however, it is easy to inadvertently cause partial occlusion of the camera. It should be noted that the embodiments of the present application do not limit the specific form in which the occlusion detection event is triggered. A sketch of this trigger logic is given below.
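Purely as an illustrative sketch (not part of the claimed method), the trigger check described above might look as follows; the brightness threshold value and the way sensor readings reach the function are assumptions of this example, not details given by the application.

```python
LUX_THRESHOLD = 5000.0  # assumed preset brightness threshold (lux); not specified by the application

def occlusion_detection_triggered(user_enabled_detection: bool, ambient_lux: float) -> bool:
    """Return True when the occlusion detection event should fire.

    Two example trigger conditions from the description:
    - the user has actively enabled occlusion detection, or
    - the terminal is in a preset scene, e.g. very bright ambient light,
      where users tend to shade the lens and may occlude it by accident.
    """
    if user_enabled_detection:
        return True
    if ambient_lux > LUX_THRESHOLD:
        return True
    return False
```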
In the embodiments of the present application, the shooting preview image is acquired when the occlusion detection event is triggered. It can be understood that when the user wants to take a picture, the shooting function of the terminal is opened, for example by opening the camera application on the terminal; that is, the camera of the terminal is turned on and enters the shooting preview interface, and the image presented in the shooting preview interface, i.e. the shooting preview image, is acquired. The shooting preview image may include the content that the user wants to shoot (such as a person or a landscape) as presented in the shooting preview interface. The camera may be a 2D camera or a 3D camera; a 3D camera may also be called a 3D sensor. The difference between a 3D camera and an ordinary camera (i.e. a 2D camera) is that a 3D camera can obtain not only a flat image but also depth information of the shot object, that is, three-dimensional position and size information. When the camera is a 2D camera, the acquired shooting preview image is a 2D shooting preview image; when the camera is a 3D camera, the acquired shooting preview image is a 3D shooting preview image.
Step 102: input the shooting preview image into a pre-trained occlusion detection model.
The occlusion detection model can be understood as a learning model that, after a shooting preview image is input, quickly judges whether the shooting preview image contains an occlusion area. The occlusion detection model may include any one of machine learning models such as a neural network model, a decision tree model and a random forest model. The occlusion detection model may be generated by training on images in a sample database together with judgment results indicating whether each image contains an occlusion area. Illustratively, the occlusion detection model is generated based on the different characteristic rules exhibited by images that contain an occlusion area and images that do not. It can be understood that images with and without occlusion areas exhibit different features; therefore, these different characteristic rules can be learned to generate the occlusion detection model. The different features exhibited by images with and without occlusion areas may include at least one of the brightness of the image, the exposure of the image, the blurriness of the image and the texture of the image. When the occlusion detection event is triggered, the shooting preview image is acquired and input into the occlusion detection model, which facilitates the subsequent determination, according to the analysis result of the occlusion detection model, of whether the shooting preview image contains an occlusion area.
Step 103: determine, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image.
In the embodiments of the present application, after the shooting preview image acquired in step 101 is input into the pre-trained occlusion detection model, the occlusion detection model can analyze the feature information of the shooting preview image and determine, according to the analysis result, whether an occlusion area exists in the shooting preview image. Illustratively, when the output result of the occlusion detection model is "0", it is determined that no first occlusion area exists in the shooting preview image, and when the output result is "1", it is determined that a first occlusion area exists. Alternatively, an output of "1" may indicate that no first occlusion area exists and an output of "0" that one does. The output may also be "no" to indicate that no first occlusion area exists and "yes" to indicate that one does. The embodiments of the present application do not limit this.
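A minimal sketch of steps 102 and 103, assuming a PyTorch binary classifier; the preprocessing, input size and 0.5 decision threshold are illustrative assumptions rather than details specified by the application.

```python
import torch
import torchvision.transforms as T

# Assumed preprocessing; the input size and scaling are illustrative only.
preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224))])

def has_occlusion(preview_image, occlusion_model: torch.nn.Module) -> bool:
    """Steps 102-103: feed the preview frame to the pre-trained occlusion
    detection model and map its output to a yes/no decision."""
    x = preprocess(preview_image).unsqueeze(0)            # shape [1, 3, 224, 224]
    with torch.no_grad():
        score = torch.sigmoid(occlusion_model(x)).item()  # probability of occlusion
    # Here a score above 0.5 (output "1") means a first occlusion area exists;
    # the description notes that the opposite convention is equally possible.
    return score > 0.5
```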
Step 104: if it is determined that a first occlusion area exists in the shooting preview image, prompt the user to remove the obstruction.
The obstruction includes an object that causes the first occlusion area in the shooting preview image.
In the embodiments of the present application, when it is determined that a first occlusion area exists in the shooting preview image, i.e. an occlusion area is present, there is an obstruction in front of the camera that affects the appearance of the captured image, and the user can be prompted to remove it. The obstruction may include a finger, clothing, a foreign object on the camera, or any other object that is unrelated to the shooting target and affects the quality of the captured image. Illustratively, when a first occlusion area is determined to exist in the shooting preview image, a prompt message may be issued: "There is an obstruction in front of the camera that causes an occlusion area in the shooting preview image; please remove it promptly." It should be noted that the user may be prompted to remove the obstruction in text form or by voice broadcast; the embodiments of the present application do not specifically limit the form of the prompt.
According to the reminding method provided by the embodiments of the present application, when an occlusion detection event is triggered, a shooting preview image is acquired and input into a pre-trained occlusion detection model, and whether a first occlusion area exists in the shooting preview image is determined according to the output result of the occlusion detection model; if a first occlusion area is determined to exist, the user is prompted to remove the obstruction, where the obstruction includes the object that causes the first occlusion area in the shooting preview image. With this technical solution, occlusion detection can be performed on the shooting preview image by a pre-built occlusion detection model, the presence of an occlusion area can be judged accurately and quickly, and when an occlusion area is determined to exist the user is promptly reminded to remove the obstruction, which effectively improves the quality of the captured image.
In some embodiments, before the occlusion detection event is triggered, the method includes: acquiring first sample images, where the first sample images include images that contain an occlusion area; recording the occlusion presence result of each first sample image as the sample label of that first sample image, where the occlusion presence result is either that an occlusion area exists or that no occlusion area exists; and training a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain the occlusion detection model. The advantage of this arrangement is that labeling each sample image with its occlusion presence result, i.e. using the occlusion presence result as the sample label of the corresponding sample image, can greatly improve the accuracy of training the occlusion detection model.
In the embodiments of the present application, first sample images are acquired, where the first sample images include images that contain an occlusion area and images that do not; that is, some of the first sample images contain occlusion areas and some do not. Illustratively, 5000 first sample images may be acquired, of which 3000 contain an occlusion area and 2000 do not; the numbers of images with and without occlusion areas among the first sample images are not limited. In addition, the first sample images may include one or both of images from a network platform image library and captured images from a local gallery. After the first sample images are acquired, the occlusion presence result of each first sample image is recorded as its sample label. For example, when an occlusion area exists in a first sample image, it is denoted by 1 and the current sample image is labeled 1, i.e. 1 serves as the sample label of the current sample image; when no occlusion area exists, it is denoted by 0 and the current sample image is labeled 0, i.e. 0 serves as the sample label. The first preset machine learning model is trained according to the first sample images and the corresponding sample labels to obtain the occlusion detection model. It can be understood that the first sample images and the corresponding sample labels form a training sample set, and the first preset machine learning model is trained on this set to generate the occlusion detection model. The first preset machine learning model may include any one of a neural network model, a decision tree model, a random forest model and a naive Bayes model; the embodiments of the present application do not limit it.
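As a sketch of this training step only: a small convolutional network trained on the labeled first sample images, assuming PyTorch; the architecture and hyperparameters are assumptions, and the application equally allows a decision tree, random forest or naive Bayes model instead.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_occlusion_detector(train_set, epochs: int = 10) -> nn.Module:
    """Train the occlusion detection model on (image_tensor, label) pairs,
    where label 1 means "occlusion area exists" and 0 means it does not."""
    model = nn.Sequential(                           # assumed small CNN
        nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),                            # single occlusion logit
    )
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            logits = model(images).squeeze(1)
            loss = loss_fn(logits, labels.float())
            loss.backward()
            optimizer.step()
    return model
```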
Optionally, the occlusion presence result of a first sample image may be determined according to input from a user; for example, when a user can quickly and intuitively judge by eye whether an occlusion area exists in a first sample image, the occlusion presence result of the corresponding image can be determined according to the user's input. Alternatively, in order to improve the accuracy of determining the occlusion presence result of the first sample images, and thereby further improve the accuracy of training the occlusion detection model, image analysis may be performed on the first sample images, for example analyzing the color distribution features, texture distribution features, blurriness features and sharpness of each first sample image, and the occlusion presence result of each first sample image is determined according to the image analysis result. It should be noted that the embodiments of the present application do not limit the method of determining the occlusion presence result of the first sample images.
The occlusion detection model is obtained before the occlusion detection event is triggered. It should be noted that the mobile terminal may acquire the above first sample images and corresponding sample labels, train the preset machine learning model with them, and directly generate the occlusion detection model. Alternatively, the mobile terminal may directly call an occlusion detection model generated by training on another mobile terminal; for example, before manufacture, one mobile terminal is used to acquire the training sample set and generate the occlusion detection model, which is then stored on other mobile terminals for direct use. Alternatively, a server may acquire a large number of sample images, label them according to their occlusion presence results to obtain a training sample set, and train a preset machine learning model on the training sample set to obtain the occlusion detection model; when the mobile terminal needs to perform occlusion detection, i.e. when the occlusion detection event is triggered, it calls the trained occlusion detection model from the server.
In some embodiments, before the shooting preview image is input into the pre-trained occlusion detection model, the method further includes: acquiring the blurriness of the shooting preview image; and inputting the shooting preview image into the pre-trained occlusion detection model includes: inputting the shooting preview image into the pre-trained occlusion detection model when the blurriness exceeds a preset threshold. The advantage of this arrangement is that the further judgment of whether the shooting preview image contains an occlusion area is made only when the blurriness of the shooting preview image is detected to be high, which effectively avoids performing occlusion detection unnecessarily and further reduces the power consumption of the mobile terminal.
Illustratively, before the shooting preview image is input into the pre-trained occlusion detection model, image analysis is performed on the shooting preview image to determine its blurriness. The blurriness of the shooting preview image may be evaluated based on the concentration of the image histogram, or measured based on step edge width; the embodiments of the present application do not limit the method of determining the blurriness. The blurriness of the shooting preview image reflects its image quality: the higher the blurriness, the worse the image quality, and conversely, the lower the blurriness, the higher the image quality. It can be understood that when an occlusion area exists in the shooting preview image, the obstruction causing it is usually outside the focal range of the camera; that is, when the camera images the obstruction, it usually cannot be brought into focus, so the image region corresponding to the obstruction has higher blurriness. In other words, the occlusion area is strongly blurred and lacks obvious texture features or sharp edge features, which further increases the blurriness of the entire shooting preview image. Therefore, when the blurriness of the shooting preview image is detected to exceed the preset threshold, the image quality of the shooting preview is poor and an occlusion area may exist; at this point, the shooting preview image can be input into the pre-trained occlusion detection model to further and accurately judge whether an occlusion area exists.
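A sketch of this blurriness gate, assuming OpenCV and using the variance of the Laplacian as a simple sharpness proxy; the application itself mentions histogram concentration or step edge width, so this particular metric and its threshold are stand-in assumptions.

```python
import cv2

BLUR_THRESHOLD = 100.0  # assumed preset threshold; lower Laplacian variance means blurrier

def preview_is_blurry(preview_bgr) -> bool:
    """Gate of the optional embodiment: run the occlusion detection model only
    when the preview frame looks blurry enough to suggest a possible occlusion."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < BLUR_THRESHOLD
```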
In some embodiments, before the shooting preview image is input into the pre-trained occlusion detection model, the method further includes: detecting whether an object is present within a preset distance range of the camera; and inputting the shooting preview image into the pre-trained occlusion detection model includes: inputting the shooting preview image into the pre-trained occlusion detection model when an object is detected within the preset distance range of the camera. The advantage of this arrangement is that it can first be roughly detected whether an occlusion area may exist in the shooting preview image; when an object is detected within the preset range of the camera, an occlusion area may exist in the shooting preview image, and the occlusion detection model is then used to further judge whether the shooting preview image contains one. This effectively avoids unnecessary occlusion area detection and further reduces the power consumption of the mobile terminal.
Illustratively, before the shooting preview image is input into the pre-trained occlusion detection model, it is detected whether an object is present within the preset distance range of the camera. For example, a detection device, such as an infrared detector, may be arranged around the camera to detect whether an object is present within the preset distance range of the camera, for example within 10 cm. It can be understood that when shooting with the camera, the subject is usually at a longer distance; that is, the obstruction causing the occlusion area is usually much closer to the camera than the actual subject. Therefore, when an object is detected within the preset range of the camera, an obstruction may be present near the camera, and the object within the preset range may cause an occlusion area in the shooting preview image. At this point, the shooting preview image can be input into the pre-trained occlusion detection model to further and accurately judge whether an occlusion area exists.
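For illustration only, this proximity gate might be expressed as below; the helper name and the idea that the infrared detector reports a distance in centimeters are assumptions, while the 10 cm figure comes from the example above.

```python
from typing import Optional

PROXIMITY_RANGE_CM = 10.0  # preset distance range used in the example above

def object_near_camera(proximity_distance_cm: Optional[float]) -> bool:
    """Gate of the optional embodiment: only run the occlusion detection model
    when the infrared/proximity detector reports an object close to the lens."""
    if proximity_distance_cm is None:          # no reading available
        return False
    return proximity_distance_cm <= PROXIMITY_RANGE_CM
```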
In some embodiments, prompting the user to remove the obstruction if a first occlusion area is determined to exist in the shooting preview image includes: if it is determined that a first occlusion area exists in the shooting preview image, determining the position of the first occlusion area in the shooting preview image; and prompting the user to remove the obstruction according to that position. The advantage of this arrangement is that the approximate location of the obstruction can be determined from the position of the occlusion area, so that the user can remove the obstruction accurately according to that position, effectively avoiding the case where the user removes the wrong object.
In the embodiments of the present application, in order to make the user remove exactly the obstruction that causes the first occlusion area in the shooting preview image, when a first occlusion area is determined to exist, the position of the first occlusion area in the shooting preview image is further determined, and the user is prompted to remove the obstruction according to that position. It can be understood that there may be multiple objects around the camera: some may be the actual subject, some are neither the subject nor a cause of occlusion, and some are obstructions that actually cause an occlusion area in the captured image. If the user, acting only on a prompt that an occlusion area exists, blindly removes the objects around the camera one by one, the actual subject might be removed, and removing multiple objects takes a long time, making image capture inefficient and degrading the user experience. After the position of the first occlusion area in the shooting preview image is determined, the approximate direction in front of the camera in which the obstruction lies can be roughly judged, so the user can quickly and accurately locate the obstruction according to that direction and remove it. Illustratively, when a first occlusion area is determined to exist in the upper-left corner of the shooting preview image, i.e. the first occlusion area is at the upper-left position of the shooting preview image, it can be roughly inferred that the obstruction causing it lies within the preset distance range to the front-left of the camera, and the user can remove the obstruction within that range according to the prompt.
In some embodiments, determining the position of the first occlusion area in the shooting preview image includes: inputting the shooting preview image into a pre-trained occlusion area determination model, where the occlusion area determination model is generated based on the characteristic rules that occlusion areas exhibit in images; and determining the position of the first occlusion area in the shooting preview image according to the output result of the occlusion area determination model. The advantage of this arrangement is that the specific position of the occlusion area in the shooting preview image can be determined quickly and accurately by the pre-built occlusion area determination model.
In the embodiments of the present application, the occlusion area determination model can be understood as a learning model that, after a shooting preview image is input, quickly determines the specific position of the occlusion area in the shooting preview image. The occlusion area determination model may include any one of machine learning models such as a neural network model, a decision tree model and a random forest model. The occlusion area determination model may be generated by training on a sample training set consisting of sample images in a sample database that contain an occlusion area, with the position of the occlusion area labeled in each sample image. Illustratively, the occlusion area determination model is generated based on the characteristic rules that occlusion areas exhibit in images. It can be understood that the features exhibited by occluded and non-occluded regions in an image are different; therefore, the characteristic rules exhibited by occlusion areas in images can be learned to generate the occlusion area determination model. The features that an occlusion area exhibits in an image may include at least one of the size of the occlusion area in the image, the position of the occlusion area in the image, the shape of the occlusion area in the image, the brightness of the occlusion area, the color of the occlusion area, the blurriness of the occlusion area and the texture of the occlusion area.
When the occlusion detection model determines that a first occlusion area exists in the shooting preview image, the shooting preview image is input into the pre-trained occlusion area determination model. The occlusion area determination model can analyze the feature information of the shooting preview image and determine, according to the analysis result, the position of the first occlusion area in the shooting preview image, that is, which specific partial image region of the shooting preview image is the first occlusion area.
It can be understood that the occlusion area determination model and the occlusion detection model are two different learning models. The occlusion detection model is mainly used to judge whether the shooting preview image contains an occlusion area; that is, it can only produce a judgment of whether an occlusion area is present and cannot determine the specific position of the occlusion area in the shooting preview image. The occlusion area determination model, on the other hand, is mainly used to accurately determine the specific position of the occlusion area in the shooting preview image, i.e. which specific image region of the shooting preview image is the occlusion area. The shape of the occlusion area may be regular or irregular. In addition, since the occlusion detection model only has to judge whether an occlusion area exists in the shooting preview image, its processing speed is typically higher than that of the occlusion area determination model.
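A sketch of how the output of the occlusion area determination model could be turned into a directional prompt, assuming the model produces a per-pixel occlusion mask; the mask format and the coarse region names are assumptions of this example.

```python
import numpy as np

def locate_occlusion(mask: np.ndarray) -> str:
    """Map the occlusion mask produced by the occlusion area determination model
    (1 = occluded pixel, 0 = clear) to a coarse direction used in the prompt,
    e.g. 'upper-left' suggests an obstruction to the front-left of the camera."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return "none"
    h, w = mask.shape
    cy, cx = ys.mean() / h, xs.mean() / w       # normalized centroid of the occlusion area
    vertical = "upper" if cy < 0.5 else "lower"
    horizontal = "left" if cx < 0.5 else "right"
    return f"{vertical}-{horizontal}"
```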
In some embodiments, before the shooting preview image is input into the pre-trained occlusion area determination model, the method further includes: acquiring second sample images, where each second sample image is an image that contains a second occlusion area; labeling the position of the second occlusion area in each second sample image, and using the second sample images with the second occlusion area positions labeled as a training sample set; and training a second preset machine learning model on the training sample set so as to learn the characteristic rules of the second occlusion areas, obtaining the occlusion area determination model. The advantage of this arrangement is that using second sample images containing second occlusion areas as the sample source of the occlusion area determination model can greatly improve the accuracy of training the occlusion area determination model.
In the embodiments of the present application, second sample images are acquired, where each second sample image is an image that contains an occlusion area. The second occlusion area in a second sample image may be determined based on image processing techniques, or according to a user's circle-selection operation. The second occlusion area is labeled in the second sample image, i.e. the specific position of the second occlusion area is annotated in the corresponding second sample image. The second sample images with the second occlusion area positions labeled are used as the training sample set, and the second preset machine learning model is trained on this training sample set to obtain the occlusion area determination model. The second preset machine learning model may include any one of a neural network model, a decision tree model, a random forest model and a naive Bayes model; the embodiments of the present application do not limit it. In addition, the second preset machine learning model may be the same as or different from the first preset machine learning model mentioned above; the embodiments of the present application do not limit this either.
The occlusion area determination model is obtained before the shooting preview image is input into it. It should be noted that the mobile terminal may acquire the above second sample images, use the second sample images with the second occlusion area positions labeled as the training sample set, train the second preset machine learning model on it, and directly generate the occlusion area determination model. Alternatively, the mobile terminal may directly call an occlusion area determination model generated by training on another mobile terminal. The training sample set may of course also be trained on a server based on a preset machine learning model to obtain the occlusion area determination model; when the mobile terminal needs to further determine the specific position of the occlusion area in the shooting preview image, it calls the trained occlusion area determination model from the server.
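Sketching only the assembly of the second training sample set, assuming each annotated position is a rectangle converted to a binary mask; the application leaves the labeling format open (image processing or a user's circle selection), so this representation is an assumption.

```python
import numpy as np

def build_second_sample(image: np.ndarray, occlusion_box: tuple) -> tuple:
    """Turn one second sample image plus its annotated occlusion position
    (here a rectangle x0, y0, x1, y1) into an (image, mask) training pair
    for the occlusion area determination model."""
    x0, y0, x1, y1 = occlusion_box
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    mask[y0:y1, x0:x1] = 1                      # 1 marks the labeled occlusion area
    return image, mask

# training_set = [build_second_sample(img, box) for img, box in annotated_samples]
```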
In some embodiments, after prompting the user to remove the obstruction, the method further includes: judging whether feedback information indicating that the obstruction has been removed is received; and shooting the shooting preview image when the feedback information is received. It can be understood that when the feedback information indicating the obstruction has been removed is received, no occlusion area remains in the shooting preview image, i.e. there is no longer an obstruction in front of the camera causing an occlusion area, and at this point the shooting preview image can be captured directly. In this way the quality of the captured image can be effectively guaranteed and the captured image is free of occlusion areas.
The feedback information can be understood as confirmation that the obstruction has been removed. Illustratively, a confirmation option on whether the obstruction has been removed may be provided in the human-computer interaction interface of the terminal device. The confirmation option may include the two choices "yes" and "no": when the option is "yes", the user has removed the obstruction; when the option is "no", the user has not removed the obstruction.
Optionally, within a preset period after the user is prompted to remove the obstruction, the shooting preview image is input into the pre-trained occlusion detection model again to judge once more whether the currently acquired shooting preview image contains an occlusion area; if not, the user has removed the obstruction and the shooting preview image can be captured directly. Optionally, within a preset period after the user is prompted to remove the obstruction, it is detected whether an object is present within the preset distance range of the camera; when no object is detected within that range, the user has removed the obstruction and the shooting preview image can be captured directly. The advantage of this arrangement is that the quality of the captured image can be effectively guaranteed and the captured image is free of occlusion areas.
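A sketch of the optional re-check, assuming a 5-second window and reusing the hypothetical has_occlusion helper from the earlier sketch; both the timeout and the helper names are assumptions, not values given by the application.

```python
import time

RECHECK_WINDOW_S = 5.0   # assumed preset period after the prompt

def wait_for_obstruction_removed(get_preview, occlusion_model) -> bool:
    """After prompting the user, re-run occlusion detection on fresh preview
    frames for a preset period; return True as soon as the obstruction is gone."""
    deadline = time.monotonic() + RECHECK_WINDOW_S
    while time.monotonic() < deadline:
        if not has_occlusion(get_preview(), occlusion_model):
            return True          # no occlusion area any more, safe to shoot
        time.sleep(0.2)          # poll the preview a few times per second
    return False
```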
Fig. 2 is a flowchart of a reminding method provided by an embodiment of the present application. As shown in Fig. 2, the method includes:
Step 201: acquire first sample images.
The first sample images include images that contain an occlusion area.
Step 202: record the occlusion presence result of each first sample image as its sample label.
The occlusion presence result is either that an occlusion area exists or that no occlusion area exists.
Step 203: train a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain an occlusion detection model.
Step 204: when an occlusion detection event is triggered, acquire a shooting preview image.
Step 205: acquire the blurriness of the shooting preview image.
Step 206: judge whether the blurriness exceeds a preset threshold; if so, perform step 207; otherwise, perform step 212.
Step 207: input the shooting preview image into the occlusion detection model.
Step 208: determine, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; if so, perform step 209; otherwise, perform step 212.
Step 209: prompt the user to remove the obstruction.
The obstruction includes an object that causes the first occlusion area in the shooting preview image.
Step 210: judge whether feedback information indicating that the obstruction has been removed is received; if so, perform step 211; otherwise, return to step 209.
Step 211: shoot the shooting preview image.
Step 212: determine that no occlusion area exists in the shooting preview image and shoot the shooting preview image directly.
According to the reminding method provided in this embodiment of the present application, when the blurriness of the shooting preview image is detected to be high, the shooting preview image is input into the pre-trained occlusion detection model; when a first occlusion area is determined to exist in the shooting preview image according to the output result of the occlusion detection model, the user is prompted to remove the obstruction; and when the feedback information indicating that the obstruction has been removed is received, the shooting preview image is captured, where the occlusion detection model is generated by training on the first sample images and the corresponding sample labels. With this technical solution, unnecessary occlusion detection is effectively avoided and the power consumption of the mobile terminal is reduced, while the presence of an occlusion area in the shooting preview image can still be judged accurately and quickly; when an occlusion area is determined to exist, the user is promptly reminded to remove the obstruction, and the shooting preview image is captured once the removal feedback is received, which effectively guarantees the quality of the captured image.
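Under the same assumptions as the earlier sketches, the Fig. 2 flow can be tied together as follows; preview_is_blurry and has_occlusion are the hypothetical helpers introduced above, and the remaining callbacks are placeholders rather than APIs defined by the application.

```python
import time

def reminding_flow_fig2(get_preview, occlusion_model, prompt_user, feedback_received, shoot):
    """Steps 204-212 of Fig. 2: gate on blurriness, run occlusion detection,
    prompt until removal feedback arrives, then shoot."""
    preview = get_preview()                              # step 204
    if not preview_is_blurry(preview):                   # steps 205-206
        shoot(preview)                                   # step 212: no occlusion assumed
        return
    if not has_occlusion(preview, occlusion_model):      # steps 207-208
        shoot(preview)                                   # step 212
        return
    while True:                                          # steps 209-210
        prompt_user("Please remove the obstruction in front of the camera")
        if feedback_received():
            break
        time.sleep(1.0)                                  # re-prompt until removal is confirmed
    shoot(get_preview())                                 # step 211
```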
Fig. 3 is a flowchart of a reminding method provided by an embodiment of the present application. As shown in Fig. 3, the method includes:
Step 301: acquire first sample images.
The first sample images include images that contain an occlusion area.
Step 302: record the occlusion presence result of each first sample image as its sample label.
The occlusion presence result is either that an occlusion area exists or that no occlusion area exists.
Step 303: train a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain an occlusion detection model.
Step 304: when an occlusion detection event is triggered, acquire a shooting preview image.
Step 305: detect whether an object is present within the preset distance range of the camera; if so, perform step 306; otherwise, perform step 316.
Step 306: input the shooting preview image into the occlusion detection model.
Step 307: determine, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image; if so, perform step 308; otherwise, perform step 316.
Step 308: acquire second sample images.
Each second sample image is an image that contains a second occlusion area.
Step 309: label the position of the second occlusion area in each second sample image, and use the second sample images with the second occlusion area positions labeled as a training sample set.
Step 310: train a second preset machine learning model on the training sample set so as to learn the characteristic rules of the second occlusion areas, obtaining an occlusion area determination model.
Step 311: input the shooting preview image into the pre-trained occlusion area determination model.
Step 312: determine the position of the first occlusion area in the shooting preview image according to the output result of the occlusion area determination model.
Step 313: prompt the user to remove the obstruction according to the position.
The obstruction includes an object that causes the first occlusion area in the shooting preview image.
Step 314: judge whether feedback information indicating that the obstruction has been removed is received; if so, perform step 315; otherwise, return to step 313.
Step 315: shoot the shooting preview image.
Step 316: determine that no occlusion area exists in the shooting preview image and shoot the shooting preview image directly.
It should be noted that steps 308-310 may also be performed before step 304. When steps 308-310 are performed before step 304, steps 301-303 may be performed first and steps 308-310 afterwards, or steps 308-310 may be performed first and steps 301-303 afterwards; the embodiments of the present application do not limit this.
According to the reminding method provided in this embodiment of the present application, when an object is detected within the preset distance range of the camera, the shooting preview image is input into the pre-trained occlusion detection model. In this way it can first be roughly detected whether an occlusion area may exist; when an object is detected within the preset range of the camera, it is then further judged whether the shooting preview image contains an occlusion area, which effectively avoids unnecessary occlusion area detection and further reduces the power consumption of the mobile terminal. When a first occlusion area is determined to exist in the shooting preview image according to the output result of the occlusion detection model, the position of the first occlusion area in the shooting preview image is further determined according to the occlusion area determination model, and the user is prompted to remove the obstruction according to that position, so that the user can remove the obstruction accurately and the case where the user removes the wrong object is effectively avoided.
Fig. 4 is a structural block diagram of a reminding device provided by an embodiment of the present application. The device may be implemented by software and/or hardware, is typically integrated in a mobile terminal, and can improve the quality of captured images by executing the reminding method. As shown in Fig. 4, the device includes:
a shooting preview image acquisition module 401, configured to acquire a shooting preview image when an occlusion detection event is triggered;
a shooting preview image input module 402, configured to input the shooting preview image into a pre-trained occlusion detection model;
an occlusion area judgment module 403, configured to determine, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
a user prompt module 404, configured to prompt a user to remove an obstruction if it is determined that a first occlusion area exists in the shooting preview image, where the obstruction includes an object that causes the first occlusion area in the shooting preview image.
According to the reminding device provided by the embodiments of the present application, when an occlusion detection event is triggered, a shooting preview image is acquired and input into a pre-trained occlusion detection model, and whether a first occlusion area exists in the shooting preview image is determined according to the output result of the occlusion detection model; if a first occlusion area is determined to exist, the user is prompted to remove the obstruction, where the obstruction includes the object that causes the first occlusion area in the shooting preview image. With this technical solution, occlusion detection can be performed on the shooting preview image by a pre-built occlusion detection model, the presence of an occlusion area can be judged accurately and quickly, and when an occlusion area is determined to exist the user is promptly reminded to remove the obstruction, which effectively improves the quality of the captured image. Optionally, the device further includes:
a first sample image acquisition module, configured to acquire first sample images before the occlusion detection event is triggered, where the first sample images include images that contain an occlusion area;
an occlusion result labeling module, configured to record the occlusion presence result of each first sample image as the sample label of that first sample image, where the occlusion presence result is either that an occlusion area exists or that no occlusion area exists;
an occlusion detection model training module, configured to train a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain the occlusion detection model.
Optionally, the device further includes:
a blurriness acquisition module, configured to acquire the blurriness of the shooting preview image before the shooting preview image is input into the pre-trained occlusion detection model;
and the shooting preview image input module is configured to:
input the shooting preview image into the pre-trained occlusion detection model when the blurriness exceeds a preset threshold.
Optionally, the device further includes:
an object detection module, configured to detect whether an object is present within the preset distance range of the camera before the shooting preview image is input into the pre-trained occlusion detection model;
and the shooting preview image input module is configured to:
input the shooting preview image into the pre-trained occlusion detection model when an object is detected within the preset distance range of the camera.
Optionally, the user prompt module includes:
an occlusion position determination unit, configured to determine the position of the first occlusion area in the shooting preview image if it is determined that a first occlusion area exists in the shooting preview image;
a user prompt unit, configured to prompt the user to remove the obstruction according to the position.
Optionally, the occlusion position determination unit is configured to:
input the shooting preview image into a pre-trained occlusion area determination model, where the occlusion area determination model is generated based on the characteristic rules that occlusion areas exhibit in images;
determine the position of the first occlusion area in the shooting preview image according to the output result of the occlusion area determination model.
Optionally, before the shooting preview image is input into the pre-trained occlusion area determination model, the following are further included:
acquiring second sample images, where each second sample image is an image that contains a second occlusion area;
labeling the position of the second occlusion area in each second sample image, and using the second sample images with the second occlusion area positions labeled as a training sample set;
training a second preset machine learning model on the training sample set so as to learn the characteristic rules of the second occlusion areas, obtaining the occlusion area determination model.
Optionally, the device further includes:
a feedback information judgment module, configured to judge, after the user is prompted to remove the obstruction, whether feedback information indicating that the obstruction has been removed is received;
an image shooting module, configured to shoot the shooting preview image when the feedback information is received.
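As a sketch only, the four core modules of Fig. 4 could be mirrored by a small class; the method names simply restate the module functions, has_occlusion is the hypothetical helper from the earlier sketch, and none of this is defined by the application.

```python
class RemindingDevice:
    """Minimal mirror of the Fig. 4 modules (401-404)."""

    def __init__(self, camera, occlusion_model, notifier):
        self.camera = camera                    # provides preview frames
        self.occlusion_model = occlusion_model  # pre-trained occlusion detection model
        self.notifier = notifier                # shows text or voice prompts

    def acquire_preview(self):                  # module 401
        return self.camera.get_preview()

    def detect(self, preview) -> bool:          # modules 402-403
        return has_occlusion(preview, self.occlusion_model)

    def prompt_removal(self):                   # module 404
        self.notifier.show("An obstruction is blocking the camera; please remove it.")

    def run_once(self):
        preview = self.acquire_preview()
        if self.detect(preview):
            self.prompt_removal()
```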
An embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute a reminding method, the method including:
when an occlusion detection event is triggered, acquiring a shooting preview image;
inputting the shooting preview image into a pre-trained occlusion detection model;
determining, according to the output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
if it is determined that a first occlusion area exists in the shooting preview image, prompting a user to remove an obstruction, where the obstruction includes an object that causes the first occlusion area in the shooting preview image.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory, such as flash memory or magnetic media (e.g. hard disks or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet); the second computer system may provide the program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (for example, in different computer systems connected through a network). The storage medium may store program instructions executable by one or more processors (for example, implemented as a computer program).
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the prompt operations described above, and can also perform related operations in the reminding methods provided by any embodiment of the present application.
An embodiment of the present application provides a mobile terminal in which the reminding device provided by the embodiments of the present application can be integrated. Fig. 5 is a structural schematic diagram of a mobile terminal provided by an embodiment of the present application. The mobile terminal 500 may include a memory 501, a processor 502, and a computer program stored in the memory and executable on the processor, where the processor 502 implements the reminding method described in the embodiments of the present application when executing the computer program.
Mobile terminal provided by the embodiments of the present application, can be by the occlusion detection model that builds in advance to shooting preview figure As carrying out occlusion detection, and accurately and rapidly judge to whether there is occlusion area in shooting preview image, and is determining to clap It takes the photograph in preview image there are when occlusion area, prompts user's occlusion removal object in time, can effectively improve the quality of shooting image.
Fig. 6 is a structural schematic diagram of another mobile terminal provided by an embodiment of the present application. The mobile terminal may include a housing (not shown), a memory 601, a central processing unit (CPU) 602 (also called a processor, hereinafter referred to as the CPU), a circuit board (not shown), and a power circuit (not shown). The circuit board is disposed inside the space enclosed by the housing; the CPU 602 and the memory 601 are arranged on the circuit board; the power circuit is used to supply power to each circuit or device of the mobile terminal; the memory 601 is used to store executable program code; and the CPU 602 runs a computer program corresponding to the executable program code by reading the executable program code stored in the memory 601, so as to realize the following steps:
obtaining a shooting preview image when an occlusion detection event is triggered;
inputting the shooting preview image into a pre-trained occlusion detection model;
determining, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
if it is determined that a first occlusion area exists in the shooting preview image, prompting the user to remove an obstruction, wherein the obstruction comprises an object that causes the first occlusion area in the shooting preview image.
The mobile terminal further includes a peripheral interface 603, an RF (Radio Frequency) circuit 605, an audio circuit 606, a loudspeaker 611, a power management chip 608, an input/output (I/O) subsystem 609, other input/control devices 610, a touch screen 612, and an external port 604; these components communicate through one or more communication buses or signal lines 607.
It should be understood that the illustrated mobile terminal 600 is merely one example of a mobile terminal, and the mobile terminal 600 may have more or fewer components than shown in the drawings, may combine two or more components, or may have a different configuration of components. The various components shown in the drawings may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The mobile terminal for prompting provided in this embodiment is described in detail below, taking a mobile phone as an example.
The memory 601 may be accessed by the CPU 602, the peripheral interface 603, and the like. The memory 601 may include a high-speed random access memory and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The peripheral interface 603 may connect the input and output peripherals of the device to the CPU 602 and the memory 601.
The I/O subsystem 609 may connect the input/output peripherals of the device, such as the touch screen 612 and the other input/control devices 610, to the peripheral interface 603. The I/O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling the other input/control devices 610. The one or more input controllers 6092 receive electrical signals from, or send electrical signals to, the other input/control devices 610, which may include physical buttons (press buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 6092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
The touch screen 612 is the input interface and output interface between the mobile terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and the like.
The display controller 6091 in the I/O subsystem 609 receives electrical signals from, or sends electrical signals to, the touch screen 612. The touch screen 612 detects contact on the touch screen, and the display controller 6091 converts the detected contact into interaction with the user interface objects displayed on the touch screen 612, thereby realizing human-computer interaction; the user interface objects displayed on the touch screen 612 may be icons for running games, icons for connecting to corresponding networks, and the like. It is worth noting that the device may also include an optical mouse, which is either a touch-sensitive surface that does not display visual output or an extension of the touch-sensitive surface formed by the touch screen.
The RF circuit 605 is mainly used to establish communication between the mobile phone and the wireless network (i.e., the network side) and to receive and send data between the mobile phone and the wireless network, for example sending and receiving short messages and e-mails. Specifically, the RF circuit 605 receives and sends RF signals, which are also called electromagnetic signals; the RF circuit 605 converts electrical signals into electromagnetic signals or converts electromagnetic signals into electrical signals, and communicates with the mobile communication network and other devices through the electromagnetic signals. The RF circuit 605 may include known circuits for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (coder-decoder) chipset, a subscriber identity module (SIM), and so on.
The audio circuit 606 is mainly used to receive audio data from the peripheral interface 603, convert the audio data into an electrical signal, and send the electrical signal to the loudspeaker 611.
The loudspeaker 611 is used to restore the voice signal received by the mobile phone from the wireless network through the RF circuit 605 to sound and play the sound to the user.
The power management chip 608 is used to supply power to, and perform power management for, the hardware connected to the CPU 602, the I/O subsystem, and the peripheral interface.
The suggestion device, storage medium, and mobile terminal provided in the above embodiments can execute the reminding method provided by any embodiment of the present application, and have the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in the above embodiments, reference may be made to the reminding method provided by any embodiment of the present application.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions may be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments, and may also include other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. A reminding method, characterized by comprising:
obtaining a shooting preview image when an occlusion detection event is triggered;
inputting the shooting preview image into a pre-trained occlusion detection model;
determining, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
if it is determined that a first occlusion area exists in the shooting preview image, prompting a user to remove an obstruction, wherein the obstruction comprises an object that causes the first occlusion area in the shooting preview image.
2. The method according to claim 1, characterized in that, before the occlusion detection event is triggered, the method comprises:
obtaining first sample images, wherein the first sample images comprise images in which an occlusion area exists;
marking the occlusion-area presence result of each first sample image as the sample label of that first sample image, wherein the occlusion-area presence result comprises the presence of an occlusion area or the absence of an occlusion area;
training a first preset machine learning model according to the first sample images and the corresponding sample labels to obtain the occlusion detection model.
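The training step of claim 2 amounts to fitting a binary classifier on labelled preview images. The Python sketch below shows one plausible realisation with a small convolutional network; the tf.keras architecture, the 128x128 input size and the training hyper-parameters are assumptions, not the model prescribed by the patent.

import tensorflow as tf

def build_occlusion_detector(input_shape=(128, 128, 3)):
    # Small CNN that predicts the probability that an occlusion area is present.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = occlusion present
    ])

def train_occlusion_detector(images, labels):
    # images: (N, 128, 128, 3) float array; labels: 0/1 sample markings as in claim 2.
    model = build_occlusion_detector()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(images, labels, epochs=10, batch_size=32, validation_split=0.1)
    return model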
3. The method according to claim 1, characterized in that, before inputting the shooting preview image into the pre-trained occlusion detection model, the method further comprises:
obtaining a blur degree of the shooting preview image;
and inputting the shooting preview image into the pre-trained occlusion detection model comprises:
inputting the shooting preview image into the pre-trained occlusion detection model when the blur degree is greater than a preset threshold.
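One common way to estimate a blur degree, offered here only as an illustration of the gate in claim 3, is the variance of the Laplacian (low variance suggests a blurry, possibly occluded preview); the inversion into a "blur degree" and the threshold value below are assumptions, not values from the patent.

import cv2

def blur_degree(preview_bgr):
    # Convert to grayscale and measure sharpness as the variance of the Laplacian.
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Map sharpness to a blur degree: the blurrier the preview, the larger the value.
    return 1.0 / (1.0 + sharpness)

def should_run_occlusion_detection(preview_bgr, threshold=0.01):
    # Only feed the preview to the occlusion detection model when it is blurry enough.
    return blur_degree(preview_bgr) > threshold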
4. The method according to claim 1, characterized in that, before inputting the shooting preview image into the pre-trained occlusion detection model, the method further comprises:
detecting whether an object is present within a preset distance range of the camera;
and inputting the shooting preview image into the pre-trained occlusion detection model comprises:
inputting the shooting preview image into the pre-trained occlusion detection model when an object is detected within the preset distance range of the camera.
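A minimal sketch of this proximity gate follows, assuming a hypothetical read_proximity_cm() callback that wraps whatever proximity or time-of-flight sensor the terminal exposes; the 5 cm preset distance is an illustrative value only.

PRESET_DISTANCE_CM = 5.0  # illustrative preset distance range

def object_near_camera(read_proximity_cm):
    # read_proximity_cm() should return the measured distance in cm, or None.
    distance = read_proximity_cm()
    return distance is not None and distance <= PRESET_DISTANCE_CM

def maybe_run_detection(preview, occlusion_model, read_proximity_cm):
    # The preview is passed to the model only when something sits close to the lens.
    if object_near_camera(read_proximity_cm):
        return occlusion_model.predict(preview) > 0.5
    return False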
5. The method according to claim 1, characterized in that, if it is determined that a first occlusion area exists in the shooting preview image, prompting the user to remove the obstruction comprises:
if it is determined that a first occlusion area exists in the shooting preview image, determining a position of the first occlusion area in the shooting preview image;
prompting the user to remove the obstruction according to the position.
6. The method according to claim 5, characterized in that determining the position of the first occlusion area in the shooting preview image comprises:
inputting the shooting preview image into a pre-trained occlusion area determination model, wherein the occlusion area determination model is generated based on the characteristic rules that occlusion areas present in images;
determining the position of the first occlusion area in the shooting preview image according to an output result of the occlusion area determination model.
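To illustrate claims 5 and 6 together, the sketch below assumes a hypothetical region model whose predict_box method returns a pixel bounding box (x, y, w, h), or None, for the first occlusion area, and turns that position into a directional prompt; none of these names come from the patent, and the preview is assumed to be a HxWxC numpy array.

def describe_position(box, image_w, image_h):
    # Translate a bounding-box centre into a coarse "top left"-style description.
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    horiz = "left" if cx < image_w / 3 else "right" if cx > 2 * image_w / 3 else "centre"
    vert = "top" if cy < image_h / 3 else "bottom" if cy > 2 * image_h / 3 else "middle"
    return f"{vert} {horiz}"

def prompt_with_position(preview, region_model, prompt_user):
    # Occlusion area determination model: returns where the occlusion sits, if anywhere.
    box = region_model.predict_box(preview)
    if box is not None:
        h, w = preview.shape[:2]
        prompt_user(f"The {describe_position(box, w, h)} of the frame is blocked; "
                    "please remove the obstruction.")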
7. The method according to claim 6, characterized in that, before inputting the shooting preview image into the pre-trained occlusion area determination model, the method further comprises:
obtaining second sample images, wherein the second sample images are images in which a second occlusion area exists;
marking the position of the second occlusion area in each second sample image, and using the second sample images with the marked second occlusion area positions as a training sample set;
training a second preset machine learning model with the training sample set so as to learn the characteristic rules of the second occlusion area, thereby obtaining the occlusion area determination model.
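One plausible reading of this training step, offered purely as a sketch, treats the marked position as a normalised bounding box and fits a small regression network; the tf.keras layers, the (x, y, w, h) target encoding and the hyper-parameters are all assumptions rather than the patented model.

import tensorflow as tf

def build_region_model(input_shape=(128, 128, 3)):
    # Small CNN that regresses a normalised bounding box for the occlusion area.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(4, activation="sigmoid"),  # x, y, w, h scaled to [0, 1]
    ])

def train_region_model(images, boxes):
    # images: (N, 128, 128, 3) float array; boxes: (N, 4) marked occlusion positions.
    model = build_region_model()
    model.compile(optimizer="adam", loss="mse")
    model.fit(images, boxes, epochs=10, batch_size=32)
    return model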
8. The method according to any one of claims 1-7, characterized in that, after prompting the user to remove the obstruction, the method further comprises:
judging whether feedback information indicating that the obstruction has been removed is received;
shooting the shooting preview image when the feedback information is received.
9. A suggestion device, characterized by comprising:
a shooting preview image acquisition module, configured to obtain a shooting preview image when an occlusion detection event is triggered;
a shooting preview image input module, configured to input the shooting preview image into a pre-trained occlusion detection model;
an occlusion area judgment module, configured to determine, according to an output result of the occlusion detection model, whether a first occlusion area exists in the shooting preview image;
a user prompt module, configured to prompt a user to remove an obstruction if it is determined that a first occlusion area exists in the shooting preview image, wherein the obstruction comprises an object that causes the first occlusion area in the shooting preview image.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the program is executed by a processor, the reminding method according to any one of claims 1-8 is implemented.
11. A mobile terminal, characterized by comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the reminding method according to any one of claims 1-8 when executing the computer program.
CN201810457182.3A 2018-05-14 2018-05-14 Reminding method, device, storage medium and mobile terminal Active CN108712606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810457182.3A CN108712606B (en) 2018-05-14 2018-05-14 Reminding method, device, storage medium and mobile terminal

Publications (2)

Publication Number Publication Date
CN108712606A true CN108712606A (en) 2018-10-26
CN108712606B CN108712606B (en) 2019-10-29

Family

ID=63869013

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810457182.3A Active CN108712606B (en) 2018-05-14 2018-05-14 Reminding method, device, storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN108712606B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006217655A (en) * 2006-04-03 2006-08-17 Fujitsu Ltd Photographing apparatus
CN105933607A (en) * 2016-05-26 2016-09-07 维沃移动通信有限公司 Photographing effect adjusting method of mobile terminal and mobile terminal
CN107909065A (en) * 2017-12-29 2018-04-13 百度在线网络技术(北京)有限公司 The method and device blocked for detecting face

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109361874A (en) * 2018-12-19 2019-02-19 维沃移动通信有限公司 A kind of photographic method and terminal
CN109951635A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 It takes pictures processing method, device, mobile terminal and storage medium
CN109951636A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 It takes pictures processing method, device, mobile terminal and storage medium
CN110321819B (en) * 2019-06-21 2021-09-14 浙江大华技术股份有限公司 Shielding detection method and device of camera equipment and storage device
CN110321819A (en) * 2019-06-21 2019-10-11 浙江大华技术股份有限公司 The occlusion detection method, apparatus and storage device of picture pick-up device
CN111476123A (en) * 2020-03-26 2020-07-31 杭州鸿泉物联网技术股份有限公司 Vehicle state identification method and device, electronic equipment and storage medium
CN114079766A (en) * 2020-08-10 2022-02-22 珠海格力电器股份有限公司 Method for prompting shielding of camera under screen, storage medium and terminal equipment
CN114079766B (en) * 2020-08-10 2023-08-11 珠海格力电器股份有限公司 Under-screen camera shielding prompting method, storage medium and terminal equipment
CN111932481A (en) * 2020-09-11 2020-11-13 广州汽车集团股份有限公司 Fuzzy optimization method and device for automobile reversing image
CN111932481B (en) * 2020-09-11 2021-02-05 广州汽车集团股份有限公司 Fuzzy optimization method and device for automobile reversing image
CN112381054A (en) * 2020-12-02 2021-02-19 东方网力科技股份有限公司 Method for detecting working state of camera and related equipment and system
CN113301250A (en) * 2021-05-13 2021-08-24 Oppo广东移动通信有限公司 Image recognition method and device, computer readable medium and electronic equipment
CN114333345A (en) * 2021-12-31 2022-04-12 北京精英路通科技有限公司 Early warning method, device, storage medium and program product when parking space is blocked
CN114333345B (en) * 2021-12-31 2023-05-30 北京精英路通科技有限公司 Early warning method, device, storage medium and program product for shielding parking space
CN115311589A (en) * 2022-10-12 2022-11-08 山东乾元泽孚科技股份有限公司 Hidden danger processing method and equipment for lighting building

Also Published As

Publication number Publication date
CN108712606B (en) 2019-10-29

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant