CN108921084A - Image classification processing method, mobile terminal and computer-readable storage medium - Google Patents

Image classification processing method, mobile terminal and computer-readable storage medium

Info

Publication number
CN108921084A
Authority
CN
China
Prior art keywords
image
facial
block
processed
facial feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810693897.9A
Other languages
Chinese (zh)
Inventor
徐爱辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201810693897.9A priority Critical patent/CN108921084A/en
Publication of CN108921084A publication Critical patent/CN108921084A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image classification processing method, including: performing block processing on an image to be processed according to a predefined rule; extracting facial features block by block from the block-processed image to be processed; matching the facial features against a pre-stored image facial feature database; and classifying the image to be processed according to the matching result. Embodiments of the invention also disclose a mobile terminal and a computer-readable storage medium. By recognizing the face in an image block by block, the accuracy of face recognition is improved, which in turn improves the accuracy of image classification and the user experience, solving the problem in the related art that low automatic image classification accuracy leads to a poor user experience.

Description

Image classification processing method, mobile terminal and computer-readable storage medium
Technical field
The present invention relates to the field of mobile communication technology, and in particular to an image classification processing method, a mobile terminal, and a computer-readable storage medium.
Background art
In daily life, the facial image is the most important and most direct carrier of human emotional expression and communication; it can reflect a person's race, age, personality, emotional state, health status, and even identity and social standing. As mobile phone functions continue to improve, users record their lives through mobile phone photography more and more frequently, and the number of images saved on a phone keeps growing. As the images accumulate, manually sorting them becomes increasingly laborious. The related art proposes automatic classification based on face detection technology, but the accuracy of the classification is not high, resulting in a poor user experience.
No effective solution has yet been proposed for the problem in the related art that low automatic image classification accuracy leads to a poor user experience.
Summary of the invention
The main object of the present invention is to provide an image classification processing method, a mobile terminal, and a computer-readable storage medium, aiming to solve the problem in the related art that low automatic image classification accuracy leads to a poor user experience.
To achieve the above object, an embodiment of the present invention provides an image classification processing method, including:
performing block processing on an image to be processed according to a predefined rule;
extracting facial features block by block from the block-processed image to be processed;
matching the facial features against a pre-stored image facial feature database;
classifying the image to be processed according to the matching result.
Optionally, before performing block processing on the image to be processed according to the predefined rule, the method further includes:
performing face recognition on the image to be processed; and
determining the region in which the face is located in the image to be processed.
Optionally, performing block processing on the image to be processed according to the predefined rule includes:
dividing the region in which the face is located in the image to be processed into an upper region and a lower region according to the predefined rule.
Optionally, extracting facial features block by block from the block-processed image to be processed includes:
separately extracting a first facial feature of the region in which the face is located, a second facial feature of the upper sub-region of the face region, and a third facial feature of the lower sub-region of the face region.
Optionally, matching the facial features against the pre-stored image facial feature database includes:
matching the first facial feature against the pre-stored image facial feature database;
if the match fails, matching the second facial feature against the pre-stored image facial feature database;
if the match fails, matching the third facial feature against the pre-stored image facial feature database.
Optionally, classifying the image to be processed according to the matching result includes:
when the first facial feature, the second facial feature, or the third facial feature is matched successfully, determining the image corresponding to the image facial feature that was successfully matched with the first facial feature, the second facial feature, or the third facial feature;
determining the album to which the image corresponding to that image facial feature belongs; and
saving the image to be processed into the album to which the image corresponding to that image facial feature belongs, thereby completing the classification.
Optionally, before performing block processing on the image to be processed according to the predefined rule, the method further includes:
performing block processing on multiple images according to the predefined rule;
extracting facial features block by block from the block-processed multiple images; and
storing the extracted facial features of the multiple images, together with the correspondence between the facial features of the multiple images and the albums to which the multiple images belong, into the image facial feature database.
According to another aspect of the embodiments of the present invention, a mobile terminal is further provided. The mobile terminal includes a processor, a memory, and a communication bus, wherein:
the communication bus is configured to implement connection and communication between the processor and the memory; and
the processor is configured to execute an image classification processing program stored in the memory to implement the following steps:
performing block processing on an image to be processed according to a predefined rule;
extracting facial features block by block from the block-processed image to be processed;
matching the facial features against a pre-stored image facial feature database;
classifying the image to be processed according to the matching result.
Optionally, the processor is further configured to execute the image classification processing program to implement the following steps:
before performing block processing on the image to be processed according to the predefined rule, performing face recognition on the image to be processed; and
determining the region in which the face is located in the image to be processed.
Optionally, the processor is further configured to execute the image classification processing program to implement the following step:
dividing the region in which the face is located in the image to be processed into an upper region and a lower region according to the predefined rule.
Optionally, the processor is further configured to execute the image classification processing program to implement the following step:
separately extracting a first facial feature of the region in which the face is located, a second facial feature of the upper sub-region of the face region, and a third facial feature of the lower sub-region of the face region.
Optionally, the processor is further configured to execute the image classification processing program to implement the following steps:
matching the first facial feature against the pre-stored image facial feature database;
if the match fails, matching the second facial feature against the pre-stored image facial feature database;
if the match fails, matching the third facial feature against the pre-stored image facial feature database.
Optionally, the processor is further configured to execute the image classification processing program to implement the following steps:
when the first facial feature, the second facial feature, or the third facial feature is matched successfully, determining the image corresponding to the image facial feature that was successfully matched;
determining the album to which the image corresponding to that image facial feature belongs; and
saving the image to be processed into the album to which the image corresponding to that image facial feature belongs, thereby completing the classification.
Optionally, the processor is further configured to execute the image classification processing program to implement the following steps:
before performing block processing on the image to be processed according to the predefined rule, performing block processing on multiple images according to the predefined rule;
extracting facial features block by block from the block-processed multiple images; and
storing the extracted facial features of the multiple images, together with the correspondence between the facial features of the multiple images and the albums to which the multiple images belong, into the image facial feature database.
According to another aspect of the embodiments of the present invention, a computer-readable storage medium is further provided. The computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the image classification processing method described above.
Through the present invention, block processing is performed on an image to be processed according to a predefined rule; facial features are extracted from each block of the image to be processed; the facial features extracted block by block are matched against a pre-stored image facial feature database; and the image to be processed is classified according to the matching result. This solves the problem in the related art that low automatic image classification accuracy leads to a poor user experience: by recognizing the face in an image block by block, the accuracy of face recognition is improved, which in turn improves the accuracy of image classification and the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal for implementing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a flowchart of an image classification processing method according to an embodiment of the present invention;
Fig. 4 is a flowchart of album classification according to a preferred embodiment of the present invention;
Fig. 5 is a first schematic diagram of face image processing according to an embodiment of the present invention;
Fig. 6 is a second schematic diagram of face image processing according to an embodiment of the present invention;
Fig. 7 is a third schematic diagram of face image processing according to an embodiment of the present invention;
Fig. 8 is a fourth schematic diagram of face image processing according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a mobile terminal according to an embodiment of the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Therefore, "module", "component", and "unit" may be used interchangeably.
A terminal may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, tablet computers, laptop computers, palmtop computers, personal digital assistants (PDA), portable media players (PMP), navigation devices, wearable devices, smart wristbands, and pedometers, as well as fixed terminals such as digital TVs and desktop computers.
The following description takes a mobile terminal as an example. Those skilled in the art will understand that, apart from elements specifically intended for mobile purposes, the constructions according to embodiments of the present invention can also be applied to fixed-type terminals.
Referring to Fig. 1, which is a schematic diagram of the hardware structure of a mobile terminal for implementing embodiments of the present invention, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will understand that the mobile terminal structure shown in Fig. 1 does not constitute a limitation on the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
The components of the mobile terminal are described in detail below with reference to Fig. 1:
The radio frequency unit 101 may be used to receive and send signals during message transmission and reception or during a call. Specifically, after receiving downlink information from a base station, it forwards the information to the processor 110 for processing; in addition, it sends uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), and TDD-LTE (Time Division Duplexing-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user send and receive e-mail, browse web pages, and access streaming media, providing the user with wireless broadband Internet access. Although Fig. 1 shows the WiFi module 102, it will be understood that it is not an essential component of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102, or stored in the memory 109, into audio signals and output them as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a speech recognition mode, a broadcast reception mode, or the like. Moreover, the audio output unit 103 may also provide audio output related to specific functions performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output unit 103 may include a loudspeaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or another storage medium), or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 can receive sound (audio data) in a phone call mode, a recording mode, a speech recognition mode, or other operating modes, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to eliminate (or suppress) noise or interference generated while receiving and sending audio signals.
The mobile terminal 100 further includes at least one sensor 105, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved near the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in each direction (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and vibration-recognition related functions (such as a pedometer or tap detection). The phone may also be configured with other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touch screen, collects the user's touch operations on or near it (for example, operations by the user on or near the touch panel 1071 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected devices according to a preset program. The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends it to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and power keys), a trackball, a mouse, and a joystick, which are not limited here.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of touch event. Although in Fig. 1 the touch panel 1071 and the display panel 1061 are shown as two independent components to implement the input and output functions of the mobile terminal, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, which is not specifically limited here.
The interface unit 108 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The interface unit 108 may be used to receive input (for example, data information or power) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or may be used to transfer data between the mobile terminal 100 and an external device.
The memory 109 may be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function and an image playback function), and the data storage area may store data created according to the use of the phone (such as audio data and a phone book). In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 110 is the control center of the mobile terminal. It connects all parts of the entire mobile terminal through various interfaces and lines, and executes the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 109 and invoking the data stored in the memory 109, thereby monitoring the mobile terminal as a whole. The processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, and application programs, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 110.
The mobile terminal 100 may also include a power supply 111 (such as a battery) that supplies power to all components. Preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
Although not shown in Fig. 1, the mobile terminal 100 may also include a Bluetooth module and the like, which are not described in detail here.
To facilitate understanding of the embodiments of the present invention, the communication network system on which the mobile terminal of the present invention is based is described below.
Referring to Fig. 2, Fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of universal mobile communication technology, and includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator IP service 204, which are connected in communication in sequence.
Specifically, the UE 201 may be the terminal 100 described above, which is not repeated here.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022. The eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (for example, an X2 interface), the eNodeB 2021 is connected to the EPC 203, and the eNodeB 2021 can provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, and a PCRF (Policy and Charging Rules Function) 2036. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 provides registers to manage functions such as the home location register (not shown) and stores user-specific information about service characteristics, data rates, and so on. All user data can be sent through the SGW 2034; the PGW 2035 can provide IP address allocation for the UE 201 and other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for the policy and charging enforcement function unit (not shown).
The IP service 204 may include the Internet, an intranet, an IMS (IP Multimedia Subsystem), or other IP services.
Although the above description takes the LTE system as an example, those skilled in the art should know that the present invention is not only applicable to the LTE system, but also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems, which are not limited here.
Based on the above mobile terminal hardware structure and communication network system, the embodiments of the method of the present invention are proposed.
Embodiment 1
Based on the above mobile terminal, an embodiment of the present invention provides an image classification processing method. Fig. 3 is a flowchart of the image classification processing method according to an embodiment of the present invention. As shown in Fig. 3, the method includes the following steps:
Step S301: performing block processing on an image to be processed according to a predefined rule;
Step S302: extracting facial features block by block from the block-processed image to be processed;
Step S303: matching the facial features against a pre-stored image facial feature database;
Step S304: classifying the image to be processed according to the matching result.
Through the above steps, block processing is performed on an image to be processed according to a predefined rule; facial features are extracted block by block from the block-processed image; the facial features extracted block by block are matched against a pre-stored image facial feature database; and the image to be processed is classified according to the matching result. This solves the problem in the related art that low automatic image classification accuracy leads to a poor user experience: by recognizing the face in an image block by block, the accuracy of face recognition is improved, which in turn improves the accuracy of image classification and the user experience.
The predefined rule in step S301 may be configured in advance. One specific rule is to divide the image into upper and lower regions, the upper region being the part of the face at the nose and above, and the other region being the part from below the forehead to above the lower lip; the division may also be made across the middle of the nose.
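A minimal sketch of this block-processing step is given below, under the assumption that the face bounding box has already been located and that the dividing line sits at a fixed fraction of the face height (the patent only says the split is made around the nose); the function name and the ratio are illustrative, not part of the patent.

    import numpy as np

    def split_face_blocks(image: np.ndarray, face_box, nose_ratio: float = 0.55):
        """Return (whole_face, upper_block, lower_block) crops of `image`.

        face_box: (x, y, w, h) of the detected face region.
        nose_ratio: vertical position of the dividing line, as a fraction of h (an assumption).
        """
        x, y, w, h = face_box
        face = image[y:y + h, x:x + w]
        split = int(h * nose_ratio)          # approximate nose line
        upper = face[:split]                 # nose and above
        lower = face[split:]                 # below the dividing line
        return face, upper, lower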
In an optional embodiment, before performing block processing on the image to be processed according to the predefined rule, the method further includes: performing block processing on multiple images according to the predefined rule; extracting facial features block by block from the block-processed multiple images; and storing the extracted facial features of the multiple images, together with the correspondence between the facial features of the multiple images and the albums to which the multiple images belong, into the image facial feature database, in preparation for the subsequent feature matching. The correspondence between the facial features of the multiple images and the albums to which they belong means that the facial features of each image correspond to one album; for example, the facial features of image 1 correspond to the album of a certain Zhang, and the facial features of image 2 correspond to the album of a certain Li.
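The following sketch shows one way such a feature database could be built, reusing the split_face_blocks helper from the previous sketch and assuming an extract_feature function that maps a crop to a fixed-length vector (the patent does not fix the feature extractor); FaceFeatureDB and the entry layout are illustrative assumptions.

    class FaceFeatureDB:
        def __init__(self):
            # each entry: (feature_vector, album_name, block_kind)
            self.entries = []

        def enroll(self, image, face_box, album, extract_feature):
            """Store whole-face, upper-block and lower-block features for one image."""
            face, upper, lower = split_face_blocks(image, face_box)
            for kind, crop in (("whole", face), ("upper", upper), ("lower", lower)):
                self.entries.append((extract_feature(crop), album, kind))

        def features(self, kind):
            """Return (feature, album) pairs for one block kind."""
            return [(f, a) for f, a, k in self.entries if k == kind]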
For the case where the image to be processed does not contain only a face, before performing block processing on the image to be processed according to the predefined rule, the method further includes: performing face recognition on the image to be processed and determining the region in which the face is located in the image to be processed. By performing face recognition on the image in advance and determining the position of the face, the accuracy of the comparison can be improved.
Optionally, performing block processing on the image to be processed according to the predefined rule may specifically include: dividing the region in which the face is located in the image to be processed into an upper region and a lower region according to the predefined rule. The image to be processed may of course also be divided into three or four regions; considering the efficiency of the comparison, the embodiment of the present invention preferably divides it into upper and lower parts.
For the case where the block processing produces upper and lower parts, extracting facial features block by block from the image to be processed may specifically include: separately extracting a first facial feature of the region in which the face is located, a second facial feature of the upper sub-region of the face region, and a third facial feature of the lower sub-region of the face region.
Optionally, matching the facial features extracted block by block against the pre-stored image facial feature database may specifically include: matching the first facial feature against the pre-stored image facial feature database; if the match fails, matching the second facial feature against the pre-stored image facial feature database; and if the match fails again, matching the third facial feature against the pre-stored image facial feature database. When the first facial feature, the second facial feature, or the third facial feature is matched successfully, the image corresponding to the successfully matched image facial feature is determined, the album to which that image belongs is determined, and the image to be processed is saved into that album to complete the classification. That is, the facial feature of the complete face region of the image to be processed is matched first; if the match succeeds, the image to be processed is saved directly into the corresponding album and no further matching is needed. If the match fails, the facial feature of the upper sub-region of the image to be processed is matched; if that match succeeds, the image to be processed is saved into the corresponding album; if it fails, the facial feature of the lower sub-region of the image to be processed is matched, and if that match succeeds, the image to be processed is saved into the corresponding album. Since, for one face, the facial feature of the complete face, the facial feature of the upper half, and the facial feature of the lower half all correspond to the same album, a successful match of any part classifies the image to be processed into the same album. In this way, for images of the same person with different expressions, the accuracy of classification is improved.
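A sketch of this cascaded matching and classification (steps S303/S304) is shown below; it assumes nearest-neighbour matching with one threshold per block kind and uses a nearest_album helper sketched after the threshold discussion in the preferred embodiment below. All names are illustrative.

    def classify_image(image, face_box, db, extract_feature, thresholds):
        """Return the album for the image to be processed, or None if matching fails."""
        face, upper, lower = split_face_blocks(image, face_box)
        # order matters: whole face first, then upper block, then lower block
        for kind, crop in (("whole", face), ("upper", upper), ("lower", lower)):
            album = nearest_album(extract_feature(crop),
                                  db.features(kind), thresholds[kind])
            if album is not None:
                return album          # save the image into this album
        return None                   # classification failed; leave unsorted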
The embodiment of the present invention uses face key-point detection and block-based processing to improve the stability of face classification and reduce the rejection rate of face classification, thereby improving the effect of album classification. It is a process innovation applicable to improving the effect of any face-album classification, and can be combined with single-class-center and non-single-class-center strategies to improve album classification performance. Fig. 4 is a flowchart of album classification according to a preferred embodiment of the present invention. As shown in Fig. 4, the method includes:
Step S401: the captured face image is divided into an upper region and a lower region. At the very beginning, the face database is empty; when a new image arrives, i.e., the user has taken a face image with the mobile phone, the newly captured face image is divided into upper and lower regions. Fig. 5 is the first schematic diagram of face image processing according to an embodiment of the present invention. As shown in Fig. 5, the user has taken a face image and triggers automatic classification of the face image through a predetermined operation instruction. Of course, classification can also be triggered automatically without manual triggering: when the mobile terminal detects a face image, it automatically triggers face image classification.
Step S402: the facial features of the upper region, the lower region, and the entire face image after blocking are extracted respectively. Fig. 6 is the second schematic diagram of face image processing according to an embodiment of the present invention. As shown in Fig. 6, the face is extracted and key-point detection is performed, and the face is divided into three parts, 1A, 1B, and 1C (taking the amount of computation and the speed into account).
Step S403: the facial features of the three parts are saved into the database. Since this face image is an image of a newly captured face, in other words the database is empty, the facial features of the three parts are saved directly into the database. As shown in Fig. 6, features are obtained for 1A, 1B, and 1C respectively and denoted F_1A, F_1B, and F_1C: the upper-half face feature is F_1A, the whole-face feature is F_1B, and the middle face feature is F_1C. By repeating steps S401-S403, the facial features of a large number of face images are collected, and the facial features of the upper region, the lower region, and the entire face image of the collected face images are saved into the database as the basis for subsequent face image classification.
Step S404: the mobile terminal detects that the user has taken a face image, and classification of the face image is triggered. Fig. 7 is the third schematic diagram of face image processing according to an embodiment of the present invention. As shown in Fig. 7, automatic classification of the captured face image can be triggered by a touch instruction, i.e., an instruction triggered by the user performing the corresponding photographing operation, for example by pressing the virtual shutter key on the display screen of the mobile terminal or a physical button on the side of the mobile terminal.
Step S405: the captured face image is divided into an upper region and a lower region. Fig. 8 is the fourth schematic diagram of face image processing according to an embodiment of the present invention. As shown in Fig. 8, the face is extracted and key-point detection is performed, and the face is divided into three parts, 2A, 2B, and 2C.
Step S406: the facial features of the three parts of the blocked face image are extracted respectively; features are obtained for 2A, 2B, and 2C and denoted F_2A, F_2B, and F_2C: the upper-half face feature is F_2A, the whole-face feature is F_2B, and the middle face feature is F_2C.
Step S407: the newly obtained F_2A, F_2B, and F_2C are each compared with all face features in the database, and the image to be processed is classified according to the comparison result. F_2B is matched against the pre-stored image facial feature database first; if the match fails, F_2A is matched against the pre-stored image facial feature database; and if that match fails, F_2C is matched against the pre-stored image facial feature database. If F_2B, F_2A, or F_2C is matched successfully, the image to be processed is saved into the album to which the image corresponding to F_2B, F_2A, or F_2C belongs.
Specifically, the nearest-neighbour Euclidean distances DIST_A, DIST_B, and DIST_C corresponding to F_2A, F_2B, and F_2C are calculated, and these Euclidean distances are compared with preset thresholds T_A, T_B, and T_C respectively (if the nearest-neighbour distance of a block is less than the corresponding threshold, the image is considered to belong to that class; otherwise the classification fails). In practice, DIST_B is compared with T_B first; if the classification succeeds, no further comparison is made, the data is saved, and the next image is awaited. If it fails, DIST_A is compared with T_A; if that succeeds, the database is updated; if it fails, DIST_C is compared with T_C.
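The nearest_album helper used in the earlier classification sketch can be written directly from this description, assuming Euclidean distance and per-block thresholds such as T_B = 0.76 for the whole face and T_A = 0.7 for the upper block (values taken from the example below); the feature extractor itself is left unspecified.

    import numpy as np

    def nearest_album(feature, db_features, threshold):
        """Return the album of the closest stored feature, or None on failure.

        db_features: list of (feature_vector, album) pairs for one block kind.
        """
        best_album, best_dist = None, float("inf")
        for stored, album in db_features:
            dist = float(np.linalg.norm(feature - stored))  # Euclidean distance
            if dist < best_dist:
                best_album, best_dist = album, dist
        # classification succeeds only if the nearest neighbour is close enough
        return best_album if best_dist < threshold else None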
For example, take T_B = 0.76 and T_A = 0.7, where T_B is the threshold for the complete-face comparison and T_A is the threshold for the upper-part face comparison.
According to an actual test, DIST_B(1B, 2B) = 0.79 for the image to be processed, which is greater than T_B, so this classification fails.
DIST_A(1A, 2A) = 0.61 is less than T_A, so the secondary classification succeeds. This reduces the rejection rate of face albums without sacrificing face classification precision, and improves the accuracy of classification.
In a preferred embodiment, the embodiment of the present invention may also down-sample the original face image to be recognized to obtain down-sampled images of different sizes from the original face image, and perform multi-scale face image feature extraction, improving the descriptive power of the face image features for the face image. Block processing is performed on the down-sampled images and the original face image, and the recognition result of the face image is obtained according to the similarities of all the image blocks obtained after block processing, so that classification of the face is achieved and the accuracy of face image classification is improved. The executing subject of each step in the following method embodiment may specifically be any device with a face recognition function, such as a mobile phone or a PC. The method may include:
Obtaining an original face image to be recognized, and performing histogram equalization preprocessing on the original face image. Specifically, histogram equalization is an image enhancement method used to enhance the brightness and contrast of an image, improve image quality, increase the contrast of the middle part of the image, and improve the effect of image interpretation and recognition.
Performing n down-samplings on the original face image to obtain n sampled images of different sizes, the sizes of the n down-sampled images being respectively fractions of the size of the original face image, where n is a natural number; in the embodiment of the present invention, n = 2 is taken as an example for detailed description.
Specifically, image down-sampling is a technique for reducing the image resolution for display, storage, and/or transmission of the image. In this embodiment, various existing down-sampling methods, such as nearest-neighbour interpolation and bilinear interpolation, may be used to down-sample the original face image twice. The sizes of the two resulting down-sampled images are respectively 1/2 and 2/3 of the size of the original face image.
Block processing is performed on each down-sampled image and on the original face image according to 1/2 and 2/3 of the size of the original face image.
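A sketch of the preprocessing and multi-scale step, assuming OpenCV: histogram equalization on the greyscale face, followed by down-sampling to 1/2 and 2/3 of the original size with bilinear interpolation (one of the interpolation choices mentioned above); the function name is illustrative.

    import cv2

    def multi_scale_faces(gray_face):
        """Return [original, 1/2-scale, 2/3-scale] equalized face images."""
        equalized = cv2.equalizeHist(gray_face)          # histogram equalization
        h, w = equalized.shape[:2]
        scales = [1.0, 0.5, 2.0 / 3.0]
        return [cv2.resize(equalized, (int(w * s), int(h * s)),
                           interpolation=cv2.INTER_LINEAR)
                for s in scales]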
Local binary pattern (LBP) descriptor extraction is performed on each block-processed image block to obtain an LBP histogram of each image block; according to the LBP histogram of each image block, a feature vector is extracted for each block-processed image block.
Specifically, several neighbouring pixels (for example, 8) are chosen around each pixel in the image block. Taking the gray value of the central pixel as the reference (for example, 4), neighbouring pixels whose gray value is less than the gray value of the central pixel are quantized to 0, and neighbouring pixels whose gray value is greater than or equal to the gray value of the central pixel are quantized to 1. The quantized values of the neighbouring pixels are then concatenated in a fixed order (for example, clockwise) to obtain an 8-bit binary number (for example, 11010011), which is further converted into a decimal number (211) and assigned to the central pixel. The above operations are performed on all pixels in the image block in turn to obtain the LBP map of the image block, in which each pixel corresponds to a decimal number (0-255), completing the LBP descriptor extraction for the image block.
The LBP histogram of the image block is further obtained from the LBP map obtained above, and the facial feature vector of the image block is extracted from this LBP histogram. Feature vectors are extracted in this way from each of the block-processed image blocks.
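A minimal sketch of the 8-neighbour LBP descriptor and its histogram, written directly from the description above (pure NumPy, no dedicated LBP library assumed); border pixels are skipped for simplicity.

    import numpy as np

    def lbp_histogram(block: np.ndarray) -> np.ndarray:
        """Return a normalized 256-bin LBP histogram for a grayscale block."""
        h, w = block.shape
        # clockwise offsets of the 8 neighbours, starting at the top-left
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        lbp = np.zeros((h - 2, w - 2), dtype=np.uint8)
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                center = block[i, j]
                code = 0
                for bit, (di, dj) in enumerate(offsets):
                    # neighbour >= centre quantizes to 1, otherwise 0
                    if block[i + di, j + dj] >= center:
                        code |= 1 << (7 - bit)
                lbp[i - 1, j - 1] = code
        hist, _ = np.histogram(lbp, bins=256, range=(0, 256))
        return hist.astype(np.float32) / max(hist.sum(), 1)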
The feature vector of each extracted image block is matched for similarity with the feature vector of the corresponding image block of a pre-registered face image, and the similarity of each image block is obtained.
Specifically, the pre-registered face image is also obtained and stored according to the above steps. The original face image and the images obtained by the two down-samplings are divided into 3 image blocks, so the pre-registered face image also corresponds to 3 image blocks, with each image block corresponding to one group of feature vectors.
The feature vectors of the 3 image blocks of the face image to be recognized can be matched for similarity with the feature vectors of the corresponding image blocks of the pre-registered face image, so as to obtain the similarity of each image block in the face image to be recognized.
The recognition result of the face image is obtained according to the similarities of all the obtained image blocks. The similarities of all the obtained image blocks can be fused by weighting to obtain a final similarity, and the recognition result of the face image is obtained according to this final similarity. Specifically, the similarities of the image blocks in the face image to be recognized are fused by weighting: a weight is assigned to each image block, and the products of the similarity of each image block and the corresponding weight are summed to obtain the final similarity between the face image to be recognized and the pre-registered face image. The weight distribution may specifically be as follows: image blocks containing the middle part of the face (such as the eyes, nose, and mouth) are assigned larger weights, and image blocks containing the non-middle parts of the face (such as the cheeks) are assigned smaller weights, with the weights of all image blocks summing to 1.
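A sketch of this weighted fusion is given below; histogram intersection is used as an assumed per-block similarity (the patent does not name one), the example weights favour the central block and sum to 1, and the decision threshold is an illustrative value.

    import numpy as np

    def block_similarity(hist_a: np.ndarray, hist_b: np.ndarray) -> float:
        # histogram intersection in [0, 1]; an assumed similarity measure
        return float(np.minimum(hist_a, hist_b).sum())

    def fused_similarity(query_hists, registered_hists,
                         weights=(0.5, 0.25, 0.25)):
        """Weighted fusion of per-block similarities; weights must sum to 1."""
        sims = [block_similarity(q, r)
                for q, r in zip(query_hists, registered_hists)]
        return sum(w * s for w, s in zip(weights, sims))

    def is_same_person(query_hists, registered_hists, threshold=0.6):
        # decision: same person if the fused similarity exceeds the set threshold
        return fused_similarity(query_hists, registered_hists) > threshold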
According to whether the obtained final similarity exceeds a set threshold, it can be determined whether the face image to be recognized and the pre-registered face image correspond to the same person. If so, the face image is saved into the corresponding album to complete the image classification.
Optionally, the method may also include post-processing the features of each extracted image block (for example, by principal component analysis or linear discriminant analysis) to reduce the dimensionality of the features of each image block and enhance the discriminability of the subsequent similarity matching.
In the embodiment of the present invention, preprocessing the original face image to be recognized improves the effect of image interpretation and recognition; down-sampling the original face image to obtain down-sampled images of different sizes from the original face image and extracting multi-scale face image features improves the descriptive power of the face image features for the face image; and performing block processing on the down-sampled images and the preprocessed original face image, obtaining the recognition result of the face image according to the similarities of all the image blocks obtained after block processing, and then saving the recognized face image into the corresponding album further improves the precision of face image classification.
Embodiment 2
According to another aspect of the embodiments of the present invention, a mobile terminal is further provided. Fig. 9 is a schematic diagram of the mobile terminal according to an embodiment of the present invention. As shown in Fig. 9, the mobile terminal includes a processor 110, a memory 109, and a communication bus, wherein:
the communication bus is configured to implement connection and communication between the processor 110 and the memory 109; and
the processor 110 is configured to execute an image classification processing program stored in the memory 109 to implement the following steps:
performing block processing on an image to be processed according to a predefined rule;
extracting facial features block by block from the block-processed image to be processed;
matching the facial features against a pre-stored image facial feature database;
classifying the image to be processed according to the matching result.
Optionally, the processor 110 is further configured to execute the image classification processing program to implement the following steps:
before performing block processing on the image to be processed according to the predefined rule, performing face recognition on the image to be processed; and
determining the region in which the face is located in the image to be processed.
Optionally, the processor 110 is further configured to execute the image classification processing program to implement the following step:
dividing the region in which the face is located in the image to be processed into an upper region and a lower region according to the predefined rule.
Optionally, the processor 110 is further configured to execute the image classification processing program to implement the following step:
separately extracting a first facial feature of the region in which the face is located, a second facial feature of the upper sub-region of the face region, and a third facial feature of the lower sub-region of the face region.
Optionally, the processor 110 is further configured to execute the image classification processing program to implement the following steps:
matching the first facial feature against the pre-stored image facial feature database;
if the match fails, matching the second facial feature against the pre-stored image facial feature database;
if the match fails, matching the third facial feature against the pre-stored image facial feature database.
Optionally, the processor 110 is further configured to execute the image classification processing program to implement the following steps:
when the first facial feature, the second facial feature, or the third facial feature is matched successfully, determining the image corresponding to the image facial feature that was successfully matched;
determining the album to which the image corresponding to that image facial feature belongs; and
saving the image to be processed into the album to which the image corresponding to that image facial feature belongs, thereby completing the classification.
Optionally, the processor 110 is further configured to execute the image classification processing program to implement the following steps:
before performing block processing on the image to be processed according to the predefined rule, performing block processing on multiple images according to the predefined rule;
extracting facial features block by block from the block-processed multiple images; and
storing the extracted facial features of the multiple images, together with the correspondence between the facial features of the multiple images and the albums to which the multiple images belong, into the image facial feature database.
Embodiment 3
According to another aspect of the embodiments of the present invention, a computer-readable storage medium is further provided. The computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the following steps of the image classification processing method described above:
S11: performing block processing on an image to be processed according to a predefined rule;
S12: extracting facial features block by block from the block-processed image to be processed;
S13: matching the facial features against a pre-stored image facial feature database;
S14: classifying the image to be processed according to the matching result.
In the embodiment of the present invention, block processing is performed on an image to be processed according to a predefined rule; facial features are extracted block by block from the block-processed image; the facial features extracted block by block are matched against a pre-stored image facial feature database; and the image to be processed is classified according to the matching result. This solves the problem in the related art that low automatic image classification accuracy leads to a poor user experience: by recognizing the face in an image block by block, the accuracy of face recognition is improved, which in turn improves the accuracy of image classification and the user experience.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device or the like) to execute the methods described in the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the specific embodiments described above. The above embodiments are merely illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art can make many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.

Claims (10)

1. An image classification processing method, characterized by comprising:
performing block processing on an image to be processed according to a predefined rule;
extracting, block by block, facial features of the block-processed image to be processed;
matching the facial features against a pre-stored image facial feature database;
classifying the image to be processed according to the matching result.
2. The method according to claim 1, characterized in that before the block processing is performed on the image to be processed according to the predefined rule, the method further comprises:
performing face recognition on the image to be processed;
determining the face region in the image to be processed.
3. The method according to claim 2, characterized in that performing block processing on the image to be processed according to the predefined rule comprises:
dividing the face region in the image to be processed into an upper region and a lower region according to the predefined rule.
4. The method according to claim 3, characterized in that extracting, block by block, the facial features of the block-processed image to be processed comprises:
respectively extracting a first facial feature of the face region, a second facial feature of the upper sub-region of the face region, and a third facial feature of the lower sub-region of the face region.
5. The method according to claim 4, characterized in that matching the facial features against the pre-stored image facial feature database comprises:
matching the first facial feature against the pre-stored image facial feature database;
in the case that the matching fails, matching the second facial feature against the pre-stored image facial feature database;
in the case that the matching fails, matching the third facial feature against the pre-stored image facial feature database.
6. The method according to claim 5, characterized in that classifying the image to be processed according to the matching result comprises:
in the case that matching of the first facial feature succeeds, matching of the second facial feature succeeds or matching of the third facial feature succeeds, determining the image corresponding to the image facial feature that was successfully matched with the first facial feature, the second facial feature or the third facial feature;
determining the album to which the image corresponding to the image facial feature belongs;
saving the image to be processed into the album to which the image corresponding to the image facial feature belongs, thereby completing the classification processing.
7. The method according to any one of claims 1 to 6, characterized in that before the block processing is performed on the image to be processed according to the predefined rule, the method further comprises:
performing block processing on multiple images according to the predefined rule;
extracting, block by block, facial features of the block-processed multiple images;
storing the extracted facial features of the multiple images, together with the correspondence between the facial features of the multiple images and the albums to which the multiple images belong, into the image facial feature database.
8. A mobile terminal, characterized in that the mobile terminal comprises a processor, a memory and a communication bus, wherein:
the communication bus is configured to implement connection and communication between the processor and the memory;
the processor is configured to execute an image classification processing program stored in the memory to implement the following steps:
performing block processing on an image to be processed according to a predefined rule;
extracting, block by block, facial features of the block-processed image to be processed;
matching the facial features against a pre-stored image facial feature database;
classifying the image to be processed according to the matching result.
9. The mobile terminal according to claim 8, characterized in that the processor is further configured to execute the image classification processing program to implement the following steps:
performing face recognition on the image to be processed before the block processing is performed on the image to be processed according to the predefined rule;
determining the face region in the image to be processed.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the steps of the image classification processing method according to any one of claims 1 to 7.
CN201810693897.9A 2018-06-29 2018-06-29 A kind of image classification processing method, mobile terminal and computer readable storage medium Pending CN108921084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810693897.9A CN108921084A (en) 2018-06-29 2018-06-29 A kind of image classification processing method, mobile terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810693897.9A CN108921084A (en) 2018-06-29 2018-06-29 A kind of image classification processing method, mobile terminal and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN108921084A true CN108921084A (en) 2018-11-30

Family

ID=64423583

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810693897.9A Pending CN108921084A (en) 2018-06-29 2018-06-29 A kind of image classification processing method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108921084A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829448A (en) * 2019-03-07 2019-05-31 苏州市科远软件技术开发有限公司 Face identification method, device and storage medium
CN112446816A (en) * 2021-02-01 2021-03-05 成都点泽智能科技有限公司 Video memory dynamic data storage method and device and server
CN113178164A (en) * 2020-10-12 2021-07-27 浙江山泓科技有限公司 Intelligent image processing device for LED display screen

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101162500A (en) * 2006-10-13 2008-04-16 上海银晨智能识别科技有限公司 Sectorization type human face recognition method
CN103150561A (en) * 2013-03-19 2013-06-12 华为技术有限公司 Face recognition method and equipment
CN103942531A (en) * 2014-03-06 2014-07-23 中南民族大学 Human face identification system and method thereof

Similar Documents

Publication Publication Date Title
CN108830062B (en) Face recognition method, mobile terminal and computer readable storage medium
CN109167910A (en) focusing method, mobile terminal and computer readable storage medium
CN109063558A (en) A kind of image classification processing method, mobile terminal and computer readable storage medium
CN109711226A (en) Two-dimensional code identification method, device, mobile terminal and readable storage medium storing program for executing
CN109036420A (en) A kind of voice identification control method, terminal and computer readable storage medium
CN109144440A (en) A kind of display refresh control method, terminal and computer readable storage medium
CN108549853A (en) A kind of image processing method, mobile terminal and computer readable storage medium
CN109584897B (en) Video noise reduction method, mobile terminal and computer readable storage medium
CN108230270A (en) A kind of noise-reduction method, terminal and computer readable storage medium
CN108965710A (en) Method, photo taking, device and computer readable storage medium
CN108961489A (en) A kind of equipment wearing control method, terminal and computer readable storage medium
CN108921084A (en) A kind of image classification processing method, mobile terminal and computer readable storage medium
CN110086993A (en) Image processing method, device, mobile terminal and computer readable storage medium
CN109816619A (en) Image interfusion method, device, terminal and computer readable storage medium
CN109407927A (en) Processing method, mobile terminal and the readable storage medium storing program for executing of electronic card
CN108900779A (en) Initial automatic exposure convergence method, mobile terminal and computer readable storage medium
CN108280334A (en) A kind of unlocked by fingerprint method, mobile terminal and computer readable storage medium
CN109005354A (en) Image pickup method, mobile terminal and computer readable storage medium
CN110401806A (en) A kind of video call method of mobile terminal, mobile terminal and storage medium
CN109065065A (en) Call method, mobile terminal and computer readable storage medium
CN108229389A (en) Facial image processing method, apparatus and computer readable storage medium
CN107613284B (en) A kind of image processing method, terminal and computer readable storage medium
CN110443238A (en) A kind of display interface scene recognition method, terminal and computer readable storage medium
CN109785254A (en) Picture noise-reduction method, picture noise reduction model generating method, terminal and storage medium
CN109766211A (en) Method, terminal and the storage medium of shuangping san false-touch prevention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20181130