CN108875518A - Image processing and image classification method, device and system, and storage medium - Google Patents

Image processing and image classification method, device and system, and storage medium Download PDF

Info

Publication number
CN108875518A
CN108875518A (Application CN201711350087.5A)
Authority
CN
China
Prior art keywords
image
object images
result
classification
classification results
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711350087.5A
Other languages
Chinese (zh)
Inventor
梁喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201711350087.5A
Publication of CN108875518A
Legal status: Pending (current)

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present invention provide an image processing method, device and system, an image classification method, device and system, and a storage medium. The image processing method includes: obtaining image parameters of the target object contained in each of a plurality of object images; determining a classification result for each of the plurality of object images based on user input information, where the classification results include at least two of a keep result, a discard result and a neutral result, the keep result, the discard result and the neutral result respectively indicating that the corresponding image is to be kept, discarded or treated as neutral; and computing an image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images, the image classification model being used to classify any image to obtain a classification result of that image. The above method can compute, according to the user's selections, a filtering model that matches the user's preferences, so that the classification performed with the model can genuinely meet the on-site conditions and the user's application requirements.

Description

Image processing and image classification method, device and system, and storage medium
Technical field
The present invention relates to the field of image processing, and more particularly to an image processing method, device and system, an image classification method, device and system, and a storage medium.
Background art
In the field of image processing, captured images do not necessarily satisfy the user's requirements, and the user needs to decide which images to keep and which to discard. Taking a face-capture camera as an example, different scenes and different needs mean that different users have different requirements for the face-capture results. Many parameters influence the capture results, such as face size. These parameters can serve as criteria for filtering captured face images. To obtain the face-capture results they need, users generally have to keep adjusting the thresholds of the above parameters to change the filtering conditions for the captured face images. The user then checks whether the captured face images selected after the threshold adjustment meet the requirements. This process usually has to be repeated, and the user must continually tune the threshold of each parameter by trial and error. The process is often complicated and lengthy, and for inexperienced novice users it may well be an impossible task.
Summary of the invention
The present invention has been proposed in view of the above problem. The present invention provides an image processing method, device and system, an image classification method, device and system, and a storage medium.
According to an aspect of the present invention, an image processing method is provided. The method includes: obtaining image parameters of the target object contained in each of a plurality of object images; determining a classification result for each of the plurality of object images based on user input information, where the classification results include at least two of a keep result, a discard result and a neutral result, the keep result, the discard result and the neutral result respectively indicating that the corresponding image is to be kept, discarded or treated as neutral; and computing an image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images, the image classification model being used to classify any image to obtain a classification result of that image.
Exemplarily, determining the classification result of each of the plurality of object images based on user input information includes: receiving user input information related to at least some of the object images in the plurality of object images; determining the classification results of the at least some object images according to the user input information related to each of them; and for each remaining object image in the plurality of object images other than the at least some object images, determining the classification result of that remaining object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, determining the classification result of each of the plurality of object images based on user input information includes: for each of the object images, in the case where user input information related to that object image is received before instruction information indicating that image classification is complete is received, receiving the user input information related to that object image in real time, determining an initial result of that object image in real time according to the user input information related to that object image, where the initial result belongs to the same result types as the classification results, and, when the instruction information is received, determining the initial result of that object image to be the classification result of that object image; and/or, in the case where no user input information related to that object image is received before the instruction information is received, determining the classification result of that object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, the image processing method further includes: for each of the object images, in the case where user input information related to that object image is received before the instruction information is received, if the initial result of that object image belongs to a predetermined type of result, outputting a corresponding icon in real time according to the initial result of that object image, for display in association with that object image.
Exemplarily, obtaining the image parameters of the target object contained in each of the plurality of object images includes: obtaining at least one initial image; for each of the at least one initial image, performing object detection on that initial image to obtain position information of each target object in that initial image; for each of the at least one initial image, extracting from that initial image, based on the position information of each target object in that initial image, a sub-image containing each target object; determining at least some of the sub-images extracted from the at least one initial image to be the plurality of object images; and calculating the image parameters of the target object contained in each of the plurality of object images.
Exemplarily, the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
Exemplarily, the image classification model is implemented using a linear function or a nonlinear function.
Exemplarily, computing the image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images includes: for each of the plurality of object images, substituting the classification result and the image parameters of that object image into the function f(x) = k1*x1 + k2*x2 + ... + kn*xn, where the classification result Y = f(x), xi is the i-th image parameter, ki is the coefficient of the i-th image parameter, and i = 1, 2, ..., n; and solving the system of equations formed by the substitution results of the plurality of object images to calculate k1, k2, ..., kn, so as to obtain the expression of the function f(x) as the image classification model.
According to another aspect of the present invention, an image classification method is provided, including: obtaining image parameters of the target object contained in an image to be processed; and processing the image parameters of the image to be processed using the image classification model computed by the above image processing method, to obtain a classification result of the image to be processed.
According to another aspect of the present invention, an image processing apparatus is provided, including: a parameter obtaining module, configured to obtain image parameters of the target object contained in each of a plurality of object images; a classification result determining module, configured to determine a classification result for each of the plurality of object images based on user input information, where the classification results include at least two of a keep result, a discard result and a neutral result, the keep result, the discard result and the neutral result respectively indicating that the corresponding image is to be kept, discarded or treated as neutral; and a model computing module, configured to compute an image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images, the image classification model being used to classify any image to obtain a classification result of that image.
Exemplarily, the classification result determining module includes: a first receiving sub-module, configured to receive user input information related to at least some of the object images in the plurality of object images; a first determining sub-module, configured to determine the classification results of the at least some object images according to the user input information related to each of them; and a second determining sub-module, configured to, for each remaining object image in the plurality of object images other than the at least some object images, determine the classification result of that remaining object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, the classification result determining module includes: a second receiving sub-module, configured to, for each of the object images, in the case where user input information related to that object image is received before instruction information indicating that image classification is complete is received, receive the user input information related to that object image in real time; an initial result determining sub-module, configured to, for each of the object images, determine an initial result of that object image in real time according to the user input information related to that object image, where the initial result belongs to the same result types as the classification results; a first classification result determining sub-module, configured to, for each of the object images, when the instruction information is received, determine the initial result of that object image to be the classification result of that object image; and/or a second classification result determining sub-module, configured to, for each of the object images, in the case where no user input information related to that object image is received before the instruction information is received, determine the classification result of that object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, the image processing apparatus further includes: an icon output module, configured to, for each of the object images, in the case where user input information related to that object image is received before the instruction information is received, if the initial result of that object image belongs to a predetermined type of result, output a corresponding icon in real time according to the initial result of that object image, for display in association with that object image.
Exemplarily, the parameter obtaining module includes: an initial image obtaining sub-module, configured to obtain at least one initial image; an object detection sub-module, configured to, for each of the at least one initial image, perform object detection on that initial image to obtain position information of each target object in that initial image; an image extraction sub-module, configured to, for each of the at least one initial image, extract from that initial image, based on the position information of each target object in that initial image, a sub-image containing each target object; an image determining sub-module, configured to determine at least some of the sub-images extracted from the at least one initial image to be the plurality of object images; and a first parameter calculation sub-module, configured to calculate the image parameters of the target object contained in each of the plurality of object images.
Exemplarily, the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
Exemplarily, the image classification model is implemented using a linear function or a nonlinear function.
Exemplarily, the model computing module includes: a function substitution sub-module, configured to, for each of the plurality of object images, substitute the classification result and the image parameters of that object image into the function f(x) = k1*x1 + k2*x2 + ... + kn*xn, where the classification result Y = f(x), xi is the i-th image parameter, ki is the coefficient of the i-th image parameter, and i = 1, 2, ..., n; and a coefficient calculation sub-module, configured to solve the system of equations formed by the substitution results of the plurality of object images to calculate k1, k2, ..., kn, so as to obtain the expression of the function f(x) as the image classification model.
According to another aspect of the present invention, an image classification apparatus is provided, including: a parameter obtaining module, configured to obtain image parameters of the target object contained in an image to be processed; and a processing module, configured to process the image parameters of the image to be processed using the image classification model computed by the above image processing method, to obtain a classification result of the image to be processed.
According to another aspect of the present invention, an image processing system is provided, including a processor and a memory, where computer program instructions are stored in the memory, and the computer program instructions, when run by the processor, are used to execute the following steps: obtaining image parameters of the target object contained in each of a plurality of object images; determining a classification result for each of the plurality of object images based on user input information, where the classification results include at least two of a keep result, a discard result and a neutral result, the keep result, the discard result and the neutral result respectively indicating that the corresponding image is to be kept, discarded or treated as neutral; and computing an image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images, the image classification model being used to classify any image to obtain a classification result of that image.
Exemplarily, the image processing system further includes an interactive device and/or an image acquisition device; the interactive device is configured to receive the user input information, and the image acquisition device is configured to acquire the plurality of object images or at least one initial image, the plurality of object images being generated based on the at least one initial image.
Exemplarily, the step of determining a classification result for each of the plurality of object images based on user input information, executed when the computer program instructions are run by the processor, includes: receiving user input information related to at least some of the object images in the plurality of object images; determining the classification results of the at least some object images according to the user input information related to each of them; and for each remaining object image in the plurality of object images other than the at least some object images, determining the classification result of that remaining object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, the step of determining a classification result for each of the plurality of object images based on user input information, executed when the computer program instructions are run by the processor, includes: for each of the object images, in the case where user input information related to that object image is received before instruction information indicating that image classification is complete is received, receiving the user input information related to that object image in real time; determining an initial result of that object image in real time according to the user input information related to that object image, where the initial result belongs to the same result types as the classification results; when the instruction information is received, determining the initial result of that object image to be the classification result of that object image; and/or, in the case where no user input information related to that object image is received before the instruction information is received, determining the classification result of that object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, the computer program instructions, when run by the processor, are also used to execute the following steps: for each of the object images, in the case where user input information related to that object image is received before the instruction information is received, if the initial result of that object image belongs to a predetermined type of result, outputting a corresponding icon in real time according to the initial result of that object image, for display in association with that object image.
Exemplarily, the step of obtaining the image parameters of the target object contained in each of the plurality of object images, executed when the computer program instructions are run by the processor, includes: obtaining at least one initial image; for each of the at least one initial image, performing object detection on that initial image to obtain position information of each target object in that initial image; for each of the at least one initial image, extracting from that initial image, based on the position information of each target object in that initial image, a sub-image containing each target object; determining at least some of the sub-images extracted from the at least one initial image to be the plurality of object images; and calculating the image parameters of the target object contained in each of the plurality of object images.
Exemplarily, the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
Exemplarily, the image classification model is implemented using a linear function or a nonlinear function.
Exemplarily, the step of computing the image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images, executed when the computer program instructions are run by the processor, includes: for each of the plurality of object images, substituting the classification result and the image parameters of that object image into the function f(x) = k1*x1 + k2*x2 + ... + kn*xn, where the classification result Y = f(x), xi is the i-th image parameter, ki is the coefficient of the i-th image parameter, and i = 1, 2, ..., n; and solving the system of equations formed by the substitution results of the plurality of object images to calculate k1, k2, ..., kn, so as to obtain the expression of the function f(x) as the image classification model.
According to another aspect of the present invention, an image classification system is provided, including a processor and a memory, where computer program instructions are stored in the memory, and the computer program instructions, when run by the processor, are used to execute the following steps: obtaining image parameters of the target object contained in an image to be processed; and processing the image parameters of the image to be processed using the image classification model computed by the above image processing method, to obtain a classification result of the image to be processed.
According to another aspect of the present invention, a storage medium is provided, on which program instructions are stored, and the program instructions, when run, are used to execute the following steps: obtaining image parameters of the target object contained in each of a plurality of object images; determining a classification result for each of the plurality of object images based on user input information, where the classification results include at least two of a keep result, a discard result and a neutral result, the keep result, the discard result and the neutral result respectively indicating that the corresponding image is to be kept, discarded or treated as neutral; and computing an image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images, the image classification model being used to classify any image to obtain a classification result of that image.
Exemplarily, the step of determining a classification result for each of the plurality of object images based on user input information, executed when the program instructions are run, includes: receiving user input information related to at least some of the object images in the plurality of object images; determining the classification results of the at least some object images according to the user input information related to each of them; and for each remaining object image in the plurality of object images other than the at least some object images, determining the classification result of that remaining object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, the step of determining a classification result for each of the plurality of object images based on user input information, executed when the program instructions are run, includes: for each of the object images, in the case where user input information related to that object image is received before instruction information indicating that image classification is complete is received, receiving the user input information related to that object image in real time; determining an initial result of that object image in real time according to the user input information related to that object image, where the initial result belongs to the same result types as the classification results; when the instruction information is received, determining the initial result of that object image to be the classification result of that object image; and/or, in the case where no user input information related to that object image is received before the instruction information is received, determining the classification result of that object image to be one of the keep result, the discard result and the neutral result.
Exemplarily, the program instructions, when run, are also used to execute the following steps: for each of the object images, in the case where user input information related to that object image is received before the instruction information is received, if the initial result of that object image belongs to a predetermined type of result, outputting a corresponding icon in real time according to the initial result of that object image, for display in association with that object image.
Exemplarily, the step of obtaining the image parameters of the target object contained in each of the plurality of object images, executed when the program instructions are run, includes: obtaining at least one initial image; for each of the at least one initial image, performing object detection on that initial image to obtain position information of each target object in that initial image; for each of the at least one initial image, extracting from that initial image, based on the position information of each target object in that initial image, a sub-image containing each target object; determining at least some of the sub-images extracted from the at least one initial image to be the plurality of object images; and calculating the image parameters of the target object contained in each of the plurality of object images.
Exemplarily, the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
Exemplarily, the image classification model is implemented using a linear function or a nonlinear function.
Exemplarily, the step of computing the image classification model from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images, executed when the program instructions are run, includes: for each of the plurality of object images, substituting the classification result and the image parameters of that object image into the function f(x) = k1*x1 + k2*x2 + ... + kn*xn, where the classification result Y = f(x), xi is the i-th image parameter, ki is the coefficient of the i-th image parameter, and i = 1, 2, ..., n; and solving the system of equations formed by the substitution results of the plurality of object images to calculate k1, k2, ..., kn, so as to obtain the expression of the function f(x) as the image classification model.
According to another aspect of the present invention, a storage medium is provided, on which program instructions are stored, and the program instructions, when run, are used to execute the following steps: obtaining image parameters of the target object contained in an image to be processed; and processing the image parameters of the image to be processed using the image classification model computed by the above image processing method, to obtain a classification result of the image to be processed.
With the image processing method, device and system, the image classification method, device and system, and the storage medium according to the embodiments of the present invention, a filtering model that matches the user's preferences can be computed according to the user's selections. The above image processing method is guided by the object images actually obtained, so that the classification results obtained when the image classification model is subsequently used for classification can genuinely meet the on-site conditions and the user's application requirements. In addition, the above image processing method interacts with the user, which greatly reduces the difficulty of use for the user.
Brief description of the drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following detailed description of the embodiments of the present invention with reference to the accompanying drawings. The drawings are provided for a further understanding of the embodiments of the present invention, constitute a part of the specification, and serve, together with the embodiments of the present invention, to explain the present invention; they do not limit the present invention. In the drawings, the same reference numerals generally denote the same components or steps.
Fig. 1 shows a schematic block diagram of an exemplary electronic device for implementing an image processing method and apparatus or an image classification method and apparatus according to an embodiment of the present invention;
Fig. 2 shows a schematic flowchart of an image processing method according to an embodiment of the present invention;
Fig. 3 shows a schematic flowchart of an image classification method according to an embodiment of the present invention;
Fig. 4 shows a schematic block diagram of an image processing apparatus according to an embodiment of the present invention;
Fig. 5 shows a schematic block diagram of an image classification apparatus according to an embodiment of the present invention;
Fig. 6 shows a schematic block diagram of an image processing system according to an embodiment of the present invention; and
Fig. 7 shows a schematic block diagram of an image classification system according to an embodiment of the present invention.
Detailed description of embodiments
In order to make the objects, technical solutions and advantages of the present invention more apparent, example embodiments of the present invention are described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described herein. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention described herein without creative effort shall fall within the protection scope of the present invention.
To solve the above problems, embodiments of the present invention provide an image processing method, device and system, and a storage medium. The main feature of the method is that the user classifies object images according to preference, and the device that performs the image processing computes, according to the user's selections, a filtering model that matches the user's preferences (implemented as an image classification model) for classifying subsequent images to be processed. The image processing method and the image classification method according to the embodiments of the present invention can be applied to any field related to object detection, such as security surveillance, Internet finance and banking.
First, an exemplary electronic device 100 for implementing an image processing method and apparatus or an image classification method and apparatus according to an embodiment of the present invention is described with reference to Fig. 1.
As shown in Fig. 1, the electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108 and an image acquisition device 110, which are interconnected via a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structure of the electronic device 100 shown in Fig. 1 are exemplary rather than limiting, and the electronic device may have other components and structures as needed.
The processor 102 may be a central processing unit (CPU) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 100 to perform desired functions.
The storage device 104 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and so on. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 102 may run the program instructions to implement the client functions (implemented by the processor) in the embodiments of the present invention described below and/or other desired functions. Various application programs and various data, such as various data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by the user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various kinds of information (such as images and/or sound) to the outside (for example, to the user), and may include one or more of a display, a speaker, and the like.
The image acquisition device 110 may acquire images (including video frames) and store the acquired images in the storage device 104 for use by other components. The image acquisition device 110 may be an image sensor in a camera. It should be understood that the image acquisition device 110 is only an example, and the electronic device 100 may not include the image acquisition device 110. In that case, another device having image acquisition capability may be used to acquire the images to be processed and send the acquired images to the electronic device 100.
Exemplarily, the exemplary electronic device for implementing the image processing method and apparatus according to an embodiment of the present invention may be implemented on a device such as a personal computer or a remote server.
In the following, an image processing method according to an embodiment of the present invention will be described with reference to Fig. 2. Fig. 2 shows a schematic flowchart of an image processing method 200 according to an embodiment of the present invention. As shown in Fig. 2, the image processing method 200 includes the following steps.
In step S210, image parameters of the target object contained in each of a plurality of object images are obtained.
An object image is an image containing a target object, and the target object may be any object, including but not limited to: a person or a part of a human body (such as a face), an animal, a vehicle, a building, and so on. The plurality of object images may be still images or video frames in a video.
The plurality of object images may be original images acquired by an image acquisition device (such as an image sensor in a camera), or may be images obtained after the original images are preprocessed (digitization, normalization, smoothing, etc.). It should be noted that the preprocessing of the original images may include an operation of extracting, from the original images acquired by the image acquisition device, sub-images containing the target objects so as to obtain the object images.
It can be understood that the plurality of object images are images acquired in real time after object detection starts, or images generated based on images acquired in real time after object detection starts; the plurality of object images can reflect what target objects can be captured under the current acquisition conditions (including the state of the target objects, the state of the environment, the state of the image acquisition device, and so on).
Exemplarily, the image parameters may include one or more of the following: object size, object blur degree, object pose data (the orientation angles in three dimensions), object brightness, and object occlusion degree. In the case where the target object is a face, the following parameters may be obtained: face size, face blur degree, face pose data, face brightness, face occlusion degree, and so on. Of course, the above image parameters are merely exemplary and do not limit the present invention; the image parameters may also include other parameters related to the target object.
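As a minimal illustration of how such parameters might be organized per object image (the field names, the fixed parameter order and the example values are assumptions for this sketch, not part of the disclosure), they could be collected into a record and flattened into the feature vector used in step S230 below:

```python
from dataclasses import dataclass, astuple

@dataclass
class FaceParameters:
    """Image parameters of one target object (here: a face). Names are illustrative."""
    size: float        # object size, e.g. area of the face frame in pixels
    blur: float        # object blur degree, larger = blurrier
    yaw: float         # object pose data: orientation angles in degrees
    pitch: float
    roll: float
    brightness: float  # object brightness, e.g. mean luminance of the face region
    occlusion: float   # object occlusion degree in [0, 1]

def to_feature_vector(p: FaceParameters):
    """Flatten the parameters into the fixed-order vector X = (x1, ..., xn)."""
    return list(astuple(p))

# Example: one captured face described by its parameters.
x = to_feature_vector(FaceParameters(size=9216, blur=0.2, yaw=5.0,
                                     pitch=-3.0, roll=1.5,
                                     brightness=0.6, occlusion=0.0))
```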
In step S220, a classification result for each of the plurality of object images is determined based on user input information, where the classification results include at least two of a keep result, a discard result and a neutral result, the keep result, the discard result and the neutral result respectively indicating that the corresponding image is to be kept, discarded or treated as neutral.
For example, in a face-capture application, a plurality of captured face images may first be acquired by a face-capture camera (each captured face image containing at least one face), so that a certain number of face-capture results are obtained, and the parameters of each captured face image (face size, face blur degree, face pose data, face brightness, face occlusion degree, etc.) may be recorded. Meanwhile, the user may be allowed to divide the captured face images into three categories through autonomous selection: to be kept, to be removed, and indifferent. The user's autonomous selection may be implemented through the input device 106. For example, the user may interact with the device that performs the image processing (such as the above electronic device 100) through a touch screen, to input the user's selections into the device.
In one example, the classification results may only belong to the keep result and the discard result; that is, after the plurality of object images are classified there are only two kinds of results, and an image is either kept or discarded. In another example, the classification results may belong to all three of the keep result, the discard result and the neutral result; that is, three kinds of results are available after the plurality of object images are classified: some object images need to be kept, some object images need to be discarded, and the remaining object images are 'indifferent' or 'optional'. It can be understood that each object image has only one classification result.
In one example, the user input information may include information, directly input by the user, that indicates the classification result of each object image. For example, the user may click controls such as 'keep' and 'discard' on the touch screen, and the classification result of each object image can be determined directly from the user's interaction with the touch screen. In another example, the user input information may include information about a preset classification rule. For example, a variety of predefined filtering decision functions may be provided to the user through the touch screen, such as a blur priority function, a face orientation priority function and a face brightness priority function, and the user may select one or more filtering decision functions as the preset classification rule by clicking on the touch screen. Each object image is then processed based on the preset classification rule selected by the user, and the classification result of each object image can thereby be obtained.
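A small sketch of how either kind of user input information might be turned into labels; the 1/0/-1 encoding follows the description of step S230 below, while the function names, the example rule and the fallback behaviour are assumptions made for illustration:

```python
from typing import Callable, Optional

# Numeric labels matching the 1 / 0 / -1 encoding used in step S230 below.
LABELS = {"keep": 1, "neutral": 0, "discard": -1}

def blur_priority_rule(params: dict) -> str:
    """Hypothetical preset filtering decision function: discard very blurry objects."""
    return "discard" if params.get("blur", 0.0) > 0.7 else "keep"

def label_from_user_input(selection: Optional[str], params: dict,
                          preset_rule: Optional[Callable[[dict], str]] = None,
                          default: str = "neutral") -> int:
    """Turn either kind of user input information into a classification label."""
    if selection in LABELS:          # the user clicked "keep" / "discard" / "neutral"
        return LABELS[selection]
    if preset_rule is not None:      # the user chose a preset classification rule
        return LABELS[preset_rule(params)]
    return LABELS[default]           # no input: fall back to the preset default result
```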
In step S230, an image classification model is computed from the correspondence between the classification results of the plurality of object images and the image parameters of the plurality of object images; the image classification model is used to classify any image to obtain a classification result of that image.
Herein, the image parameters of an image (including the object images and the image to be processed described below) refer to the image parameters of the target object contained in that image. Exemplarily, after the various image parameters of each object image are determined, these parameter values may be used as an input vector X = (x1, x2, ..., xn), each dimension of X being one image parameter. After the user's interactive selection and labelling, the classification result of each object image is obtained; exemplarily, 1 may be used to indicate that the image should be kept, 0 to indicate neutral, and -1 to indicate that the image should be discarded, and values such as 1, 0 and -1 can serve as the output Y. The whole problem can therefore be converted into finding a function f(x) such that f(x) = Y for each object image. This is a classification problem and can be solved by a variety of methods, including linear optimization, support vector machines (SVM), and even neural network training.
In one example, the image classification model may be implemented using a linear function. In this example, a linear optimization method may be used to compute the expression of the linear function, thereby determining the image classification model. For example, step S230 may include: for each of the plurality of object images, substituting the classification result and the image parameters of that object image into the function f(x) = k1*x1 + k2*x2 + ... + kn*xn, where the classification result Y = f(x), xi is the i-th image parameter, ki is the coefficient of the i-th image parameter, and i = 1, 2, ..., n; and solving the system of equations formed by the substitution results of the plurality of object images to calculate k1, k2, ..., kn, so as to obtain the expression of the function f(x) as the image classification model.
The X inputs and Y outputs of all the object images can be substituted into the expression of f(x). Assuming there are m object images in total, m linear equations are obtained, which can be written in the matrix form X*K = Y. When m <= n, K generally has a solution; when m > n, K can be obtained as the optimal solution in the least-squares sense. Once the solution of K is obtained, the expression of f(x) is determined.
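A minimal sketch of the least-squares fit just described, assuming NumPy and the 1/0/-1 label encoding introduced above; np.linalg.lstsq is one possible solver for X*K = Y, and the decision thresholds in classify() are an assumption rather than something the disclosure specifies:

```python
import numpy as np

def fit_linear_model(features, labels):
    """Solve X*K = Y in the least-squares sense.

    features: m parameter vectors of length n, one per classified object image
    labels:   m classification results encoded as 1 (keep), 0 (neutral), -1 (discard)
    Returns K = (k1, ..., kn), the coefficients of f(x) = k1*x1 + ... + kn*xn.
    """
    X = np.asarray(features, dtype=float)        # shape (m, n)
    Y = np.asarray(labels, dtype=float)          # shape (m,)
    K, *_ = np.linalg.lstsq(X, Y, rcond=None)    # least-squares solution of X*K = Y
    return K

def classify(K, x, keep_threshold=0.5, discard_threshold=-0.5):
    """Map the score f(x) back to a keep / neutral / discard decision."""
    score = float(np.dot(K, x))
    if score >= keep_threshold:
        return "keep"
    if score <= discard_threshold:
        return "discard"
    return "neutral"
```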
In another example, the image classification model may be implemented using a nonlinear function, and the nonlinear function may include a neural network or other deep learning algorithms. In this example, an SVM may be used to compute the expression of the nonlinear function, thereby determining the image classification model. Those skilled in the art are familiar with the SVM algorithm, which is not described in detail here.
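For the SVM option, a short sketch using scikit-learn's SVC as one off-the-shelf implementation; the patent names SVM as a solution method but does not prescribe a library, kernel or hyperparameters, so those choices are assumptions:

```python
from sklearn.svm import SVC

def fit_svm_model(features, labels):
    """Train a nonlinear (RBF-kernel) SVM on parameter vectors and user-derived labels."""
    model = SVC(kernel="rbf", C=1.0, gamma="scale")
    model.fit(features, labels)
    return model

# For a new image to be processed, described by its parameter vector x_new:
# result = fit_svm_model(features, labels).predict([x_new])[0]   # returns 1, 0 or -1
```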
Exemplarily, the image classification model may be implemented using a neural network, for example a conventional convolutional neural network. The image parameters of each object image are used as the input of the convolutional neural network, the corresponding classification result is used as the target output of the convolutional neural network, and the convolutional neural network is trained with these image parameters and classification results as samples. The trained convolutional neural network can automatically classify new images and obtain the corresponding classification results.
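Because the model input described here is a flat vector of image parameters rather than raw pixels, the sketch below uses a small fully connected network instead of the convolutional network the paragraph mentions; that substitution, and the layer sizes, are assumptions made purely for illustration:

```python
from sklearn.neural_network import MLPClassifier

def fit_neural_model(features, labels):
    """Train a small neural network mapping parameter vectors to keep/neutral/discard labels."""
    net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000)
    net.fit(features, labels)
    return net
```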
The overall flow of the image processing is described below, taking face capture as an example. When face capture is just started, some initial images may be acquired by the camera (in a place with heavy foot traffic, each acquired initial image may contain many faces), and many captured face images can then be obtained from the initial images. Then, the classification result of each captured face image is obtained through interaction between the user and the device that performs the image processing (which may be the face-capture camera itself or another device communicatively connected to the face-capture camera). These captured face images can then be used to compute the image classification model. For captured face images acquired afterwards, the image classification model can be used to classify them automatically and determine whether they should be kept, discarded or treated as neutral; at this point the user no longer needs to spend time and effort, and the device that performs the image processing can complete the classification of the captured face images entirely on its own.
The execution order of the steps of the image processing method 200 shown in Fig. 2 is only exemplary and does not limit the present invention; the image processing method 200 may be executed in various suitable ways. For example, step S220 may be performed before step S210 or simultaneously with step S210.
With the image processing method according to the embodiment of the present invention, the user can classify the object images according to preference, and the device that performs the image processing can compute, according to the user's selections, a filtering model that matches the user's preferences (implemented as an image classification model). The above image processing method is guided by the object images actually obtained, so that the classification results obtained when the image classification model is subsequently used for classification can genuinely meet the on-site conditions and the user's application requirements. In addition, the above image processing method interacts with the user, which greatly reduces the difficulty of use for the user.
Exemplarily, the image processing method according to the embodiment of the present invention may be implemented in a device, apparatus or system having a memory and a processor.
The image processing method according to the embodiment of the present invention may be deployed at an image acquisition end; for example, in the field of security applications it may be deployed at the image acquisition end of an access control system, and in the field of financial applications it may be deployed at a personal terminal, such as a smartphone, a tablet computer or a personal computer.
Alternatively, the image processing method according to the embodiment of the present invention may also be deployed in a distributed manner at a server end and a personal terminal. For example, in the field of security applications, initial images (the object images being generated based on the initial images) or object images may be acquired at an image acquisition end, the image acquisition end sends the acquired initial images or object images to a server end (or the cloud), and the server end (or the cloud) performs the image processing.
According to an embodiment of the present invention, step S220 may include: receiving user input information related to at least some of the object images in the plurality of object images; determining the classification results of the at least some object images according to the user input information related to each of them; and for each remaining object image in the plurality of object images other than the at least some object images, determining the classification result of that remaining object image to be one of the keep result, the discard result and the neutral result.
The user's selection of classification results is determined by the user in the actual situation, by observing the effect of each object image and taking the various on-site requirements into account. The interaction between the user and the device may be as follows. The device that performs the image processing may include a display screen, on which a display interface may be shown. Thumbnails may be shown neatly arranged on the display interface, with the object images displayed in thumbnail form, for example n object images per row. The user may click with the mouse to select any object image, and right-click to select any one of the 'keep', 'discard' and 'neutral' options. The 'neutral' classification result may be chosen explicitly by the user, but this is not necessary; for example, object images for which the user makes no selection may be directly labelled as 'neutral'. Of course, object images for which the user makes no selection may also be labelled as 'keep' or 'discard'. Which of the keep result, the discard result and the neutral result is used as the classification result of an object image for which the user makes no selection may be set in advance by the user or by the device that performs the image processing.
The above way in which the user interacts with the device that performs the image processing is only exemplary and not limiting. For example, the user may input data related to each object image into the device that performs the image processing via a device such as a keyboard or a touch screen; for instance, the user may directly input data such as '1', '0' and '-1' to indicate keep, neutral and discard respectively.
The classification result of each object image may be determined directly from the information the user inputs the first time; in the case where the user has already decided the classification result of each object image, this can improve computational efficiency.
According to an embodiment of the present invention, step S220 may include: for each of the object images, in the case where user input information related to that object image is received before instruction information indicating that image classification is complete is received, receiving the user input information related to that object image in real time; determining an initial result of that object image in real time according to the user input information related to that object image, where the initial result belongs to the same result types as the classification results; when the instruction information is received, determining the initial result of that object image to be the classification result of that object image; and/or, in the case where no user input information related to that object image is received before the instruction information is received, determining the classification result of that object image to be one of the keep result, the discard result and the neutral result.
For example, for a captured face image I1, if the user selects the 'discard' option the first time and then wants to change the choice, the user can select again, for example choosing the 'keep' option. In this way, the initial result of the captured face image I1 is the discard result when the user selects for the first time, and is updated to the keep result when the user selects for the second time. Thus, the initial result of each captured face image can be continuously updated as the user input information related to that captured face image changes, until the user inputs the instruction information indicating that image classification is complete. The user may input the instruction information by, for example, clicking a control labelled 'OK'. The initial result of the captured face image I1 at the time the instruction information is received is the classification result of the captured face image I1.
If the user has not made a classification selection for a captured face image I2 before clicking the 'OK' control, the classification result of the captured face image I2 may be directly determined to be one of the keep result, the discard result and the neutral result. As described above, which of the keep result, the discard result and the neutral result is used as the classification result of an object image for which the user makes no selection may be set in advance by the user or by the device that performs the image processing.
Receiving the user input information in real time and determining the initial results in real time allows the user to modify the selections as needed, and also improves the interactive experience between the user and the device that performs the image processing.
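A sketch of the bookkeeping behind this interaction, assuming a simple in-memory store of initial results that is frozen when the 'OK' instruction information arrives (the class and method names are hypothetical):

```python
class InteractiveLabeler:
    """Tracks per-image initial results in real time and finalizes them on 'OK'."""

    def __init__(self, image_ids, default_result="neutral"):
        self.initial = {image_id: None for image_id in image_ids}
        self.default = default_result   # preset result for images the user never touched

    def on_user_input(self, image_id, selection):
        # Each new input overwrites the previous initial result (e.g. "discard" -> "keep").
        self.initial[image_id] = selection

    def on_done(self):
        # The instruction information turns the initial results into classification results.
        return {image_id: (sel if sel is not None else self.default)
                for image_id, sel in self.initial.items()}
```

The dictionary returned by on_done() plays the role of the classification results that feed step S230.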
According to embodiments of the present invention, image processing method 200 can also include:It is right for each of object images In before receiving instruction information, the case where user relevant to the object images inputs information is received, if the object The initial results of image belong to predefined type as a result, then exporting corresponding icon in real time according to the initial results of the object images For being displayed in association with the object images.
It displays in association with and refers to when icon is shown in the display interface, allow users to identify which icon corresponds to Which object images.
In one example, icon corresponding from different types of initial results can be the side with different colors Frame.With reference to example above, object images can be shown with breviary diagram form on the interface that a thumbnail proper alignment is shown It shows and.It, can be plus the box of green outside corresponding thumbnail after object images are chosen as " needing to retain ";Work as object It, can be plus red box outside corresponding thumbnail after image is chosen as " needing to remove ".It does not make a choice for user Object images, " it doesn't matter " or " not essential " can be defaulted as, and can not Framed.
The form of expression of above-mentioned icon is only exemplary rather than limitation of the present invention.Icon can have other suitable shapes Shape (circle, triangle etc.), it is possible to have other suitable colors.In addition, icon can also be the icon of written form, example Such as directly it can indicate that it is " reservation ", " discarding " or " not essential " with text above each object images.
Assuming that initial results belong to N (N can be 2 or 3) kind result type altogether, then it is all different types of in order to distinguish As a result, the icon at least needing N-1 kind different.It can provide that the result of which type can be identified using icon in advance, such as As described above, retaining result and abandoning result to be identified with the box of green and red box respectively, neutrality knot Fruit can not Framed.In this case, the result of predefined type includes retaining result and discarding both results of result.? In another example, retaining result, discarding result and neutral result can be identified with box, such as respectively with the side of green The box of frame, red box and yellow is identified.In this case, the result of predefined type includes retaining result, losing Abandon result and these three results of neutral result.
Outputting the corresponding icon according to the initial result lets the user see at a glance which classification selections have currently been made, which in turn helps the user correct classification mistakes in time.
According to embodiments of the present invention, before step S220, image processing method 200 may further include: obtaining the multiple object images; and outputting the multiple object images for display in thumbnail form.
The embodiment in which the object images are displayed as thumbnails has been described above and is not repeated here. It should be understood that this embodiment does not limit the present invention; the object images may also be displayed in other ways (for example as original images).
Illustratively, after the multiple object images are collected with an image acquisition device, or obtained from images collected with an image acquisition device, an image classification model may be used to classify the multiple object images (which can be understood as a filtering process). Initially, the image classification model is designed to classify all object images as retained; that is, the weights of all image parameters are fully relaxed and no object image is filtered out. The model parameters of the image classification model are then computed using these object images. When a new image to be processed (an image containing a target object) is subsequently received, the image classification model can again be used to classify it; since the model parameters have by then been updated, the model can make a selection among the images to be processed.
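The computation of the model parameters can be sketched as follows for the linear form f(x) = k1*x1 + ... + kn*xn used elsewhere in this disclosure. Mapping the retain/discard/neutral results to the numeric targets +1/-1/0 and solving the resulting equations by least squares are assumptions made for this example; the disclosure only requires that the coefficients be computed from the labeled object images.

import numpy as np

LABEL_TO_TARGET = {"retain": 1.0, "neutral": 0.0, "discard": -1.0}   # assumed numeric encoding

def fit_image_classification_model(image_params, classification_results):
    # image_params: one n-dimensional parameter vector per object image.
    # classification_results: the user's retain/discard/neutral choice per object image.
    X = np.asarray(image_params, dtype=float)
    y = np.array([LABEL_TO_TARGET[r] for r in classification_results])
    k, *_ = np.linalg.lstsq(X, y, rcond=None)    # coefficients k1..kn of f(x)
    return k

# Example with two image parameters (e.g. normalized size and sharpness).
k = fit_image_classification_model(
    [[0.9, 0.8], [0.2, 0.1], [0.5, 0.5]],
    ["retain", "discard", "neutral"],
)
print(k)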
According to embodiments of the present invention, step S210 may include: obtaining at least one initial image; for each of the at least one initial image, performing object detection on the initial image to obtain position information of each target object in the initial image; for each of the at least one initial image, extracting from the initial image, based on the position information of each target object in the initial image, a sub-image containing each target object; determining at least part of the sub-images extracted from the at least one initial image to be the multiple object images; and calculating the image parameters of the target object contained in each of the multiple object images.
Object detection may be implemented with any existing object detection algorithm or any object detection algorithm that may appear in the future (for example, when the target object is a human face, the object detection algorithm is a face detection algorithm).
The following takes face snapshots as an example. When monitoring and capturing images in crowded places such as railway stations and shopping malls, each image collected by a face snapshot camera (i.e., each initial image) generally contains a large number of faces. First, a number of initial images may be acquired (the number is set as needed). Then, face detection is performed on each initial image, and a face snapshot image of each face in each initial image is extracted based on the face detection results. A large number of face snapshot images may thus be obtained from each initial image. In addition, after face detection is performed, the parameters of each face (face size, face blur degree, face pose data, face brightness, face occlusion degree, etc.) can be determined based on the face detection results. Those skilled in the art will understand that a face detection result may include the coordinate data of a face box indicating the position of the face (i.e., the position information of the face) and the confidence that the face box contains a face. The face size can be calculated from the position information of the face. The face pose data can be obtained by performing face pose estimation on the face. The face blur degree can be obtained by performing a sharpness assessment on the face region. The face brightness can be obtained by performing brightness detection on the face region. The face occlusion degree can be obtained by performing face occlusion detection on the face region.
According to this embodiment, an initial image contains multiple target objects. In this case, the target objects can be separated by object detection, and a sub-image containing each target object is extracted as a required object image. The object images need not all be of the same size.
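The sub-image extraction and the parameter computation can be sketched as follows with OpenCV. The concrete estimators (Laplacian variance for the blur degree, mean gray value for the brightness) are conventional choices assumed here for illustration; face pose and occlusion would require dedicated estimators and are omitted.

import cv2

def extract_face_and_parameters(initial_image, face_box):
    # face_box: (x, y, w, h) reported by a face detector for one face.
    x, y, w, h = face_box
    face = initial_image[y:y + h, x:x + w]                   # sub-image containing the face
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    params = {
        "face_size": w * h,                                  # from the position information
        "face_blur": cv2.Laplacian(gray, cv2.CV_64F).var(),  # sharpness assessment of the region
        "face_brightness": float(gray.mean()),               # brightness detection of the region
    }
    return face, params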
According to embodiments of the present invention, step S210 may include: obtaining multiple object images; and, for each of the multiple object images, calculating the image parameters of the target object contained in the object image.
In some applications, the image collected by the image acquisition device contains a single target object. In this case, the image collected by the image acquisition device can be used directly as an object image for subsequent processing. For example, when the user runs face recognition software on a mobile phone, the face image of the user collected by the phone camera contains only the user's own face, and no sub-image needs to be extracted. With the method of this embodiment, the amount of computation can be reduced and image processing can be accelerated.
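This shortcut can be expressed as a small sketch; the detector call and the helper names are placeholders assumed for illustration.

def to_object_images(acquired_image, detect_targets):
    # detect_targets: e.g. a face detector returning a list of (x, y, w, h) boxes.
    boxes = detect_targets(acquired_image)
    if len(boxes) == 1:
        return [acquired_image]                  # single target: use the image directly
    return [crop(acquired_image, box) for box in boxes]

def crop(image, box):
    x, y, w, h = box
    return image[y:y + h, x:x + w]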
According to another aspect of the present invention, an image classification method is provided. Fig. 3 shows a schematic flowchart of an image classification method 300 according to an embodiment of the present invention. As shown in Fig. 3, image classification method 300 includes the following steps.
In step S310, the image parameters of the target object contained in an image to be processed are obtained.
Similarly to the object images described above, the image to be processed is an image containing a target object. The image to be processed may be a still image or a video frame in a video.
The image to be processed may be an original image collected by an image acquisition device (such as the image sensor in a camera), or an image obtained after preprocessing the original image (digitization, normalization, smoothing, etc.). It should be noted that the preprocessing of the original image may include extracting, from the original image collected by the image acquisition device, a sub-image containing the target object so as to obtain the image to be processed.
For the image parameters of the target object contained in the image to be processed, reference may be made to the description of image parameters above, which is not repeated here.
In step S320, the image parameters of the image to be processed are processed using the image classification model obtained by the image processing method 200 described above, so as to obtain the classification result of the image to be processed.
Illustratively, the image parameters of the image to be processed may be input directly into the image classification model, which automatically computes and outputs the classification result of the image to be processed. As before, the classification result of the image to be processed may belong to at least two of the retain result, the discard result and the neutral result. It will be appreciated that the result types to which the classification result of the image to be processed may belong are consistent with the result types to which the classification results of the multiple object images in image processing method 200 belong.
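Applying the model to a new image can be sketched as follows. The thresholds used to map the score back to retain/discard/neutral are assumptions made for this example; the disclosure only requires that the model output a classification result of the same types as before.

import numpy as np

def classify(image_params, k, hi=0.5, lo=-0.5):
    score = float(np.dot(k, image_params))   # f(x) = k1*x1 + ... + kn*xn
    if score >= hi:
        return "retain"
    if score <= lo:
        return "discard"
    return "neutral"

print(classify([0.8, 0.7], k=[0.6, 0.9]))    # -> 'retain'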
According to the image classification method of the embodiments of the present invention, since the image classification model is a filtering model computed according to the preferences of the user, classifying the image to be processed with this model yields classification results that match the actual scene and the actual needs of the user. In addition, the image classification method according to the embodiments of the present invention can filter images to be processed automatically, without user participation, saving the user time and effort.
According to another aspect of the present invention, an image processing apparatus is provided. Fig. 4 shows a schematic block diagram of an image processing apparatus 400 according to an embodiment of the present invention.
As shown in Fig. 4, the image processing apparatus 400 according to an embodiment of the present invention includes a parameter acquisition module 410, a classification result determination module 420 and a model computation module 430. These modules can respectively perform the steps/functions of the image processing method described above in conjunction with Fig. 2. Only the main functions of the components of the image processing apparatus 400 are described below; details that have already been given are omitted.
The parameter acquisition module 410 is configured to obtain the image parameters of the target object contained in each of multiple object images. The parameter acquisition module 410 can be implemented by the processor 102 of the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
The classification result determination module 420 is configured to determine the respective classification results of the multiple object images based on user input information; the classification results include at least two of the retain result, the discard result and the neutral result, which respectively indicate that the corresponding image is to be retained, discarded or treated as neutral. The classification result determination module 420 can be implemented by the processor 102 of the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
The model computation module 430 is configured to compute the image classification model according to the correspondence between the classification results of the multiple object images and the image parameters of the multiple object images; the image classification model is used to classify any image to obtain a classification result of that image. The model computation module 430 can be implemented by the processor 102 of the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
Illustratively, the classification result determination module 420 includes: a first receiving submodule, configured to receive user input information respectively related to at least part of the multiple object images; a first determination submodule, configured to determine the respective classification results of the at least part of the object images according to the user input information respectively related to the at least part of the object images; and a second determination submodule, configured to determine, for each remaining object image of the multiple object images other than the at least part of the object images, that the classification result of the remaining object image is one of the retain result, the discard result and the neutral result.
Illustratively, the classification result determination module 420 includes: a second receiving submodule, configured to, for each of the object images, in the case where user input information related to the object image is received before the instruction information indicating that the image classification is complete is received, receive the user input information related to the object image in real time; an initial result determination submodule, configured to, for each of the object images, determine the initial result of the object image in real time according to the user input information related to the object image, the initial results belonging to the same result types as the classification results; a first classification result determination submodule, configured to, for each of the object images, determine, when the instruction information is received, the initial result of the object image to be the classification result of the object image; and/or a second classification result determination submodule, configured to, for each of the object images, in the case where no user input information related to the object image is received before the instruction information is received, determine the classification result of the object image to be one of the retain result, the discard result and the neutral result.
Illustratively, the image processing apparatus 400 further includes: an icon output module, configured to, for each of the object images, in the case where user input information related to the object image is received before the instruction information is received, if the initial result of the object image belongs to a result of a predetermined type, output a corresponding icon in real time according to the initial result of the object image for display in association with the object image.
Illustratively, the image processing apparatus 400 further includes: an image acquisition module, configured to obtain the multiple object images before the classification result determination module 420 determines the respective classification results of the multiple object images based on the user input information; and an image output module, configured to output the multiple object images for display in thumbnail form.
Illustratively, the parameter acquisition module 410 includes: an initial image acquisition submodule, configured to obtain at least one initial image; an object detection submodule, configured to, for each of the at least one initial image, perform object detection on the initial image to obtain position information of each target object in the initial image; an image extraction submodule, configured to, for each of the at least one initial image, extract from the initial image, based on the position information of each target object in the initial image, a sub-image containing each target object; an image determination submodule, configured to determine at least part of the sub-images extracted from the at least one initial image to be the multiple object images; and a first parameter computation submodule, configured to calculate the image parameters of the target object contained in each of the multiple object images.
Illustratively, the parameter acquisition module 410 includes: an object image acquisition submodule, configured to obtain multiple object images; and a second parameter computation submodule, configured to, for each of the multiple object images, calculate the image parameters of the target object contained in the object image.
Illustratively, the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
Illustratively, the image classification model is implemented with a linear function or a nonlinear function.
According to another aspect of the present invention, an image classification apparatus is provided. Fig. 5 shows a schematic block diagram of an image classification apparatus 500 according to an embodiment of the present invention.
As shown in Fig. 5, the image classification apparatus 500 according to an embodiment of the present invention includes a parameter acquisition module 510 and a processing module 520. These modules can respectively perform the steps/functions of the image classification method described above in conjunction with Fig. 3. Only the main functions of the components of the image classification apparatus 500 are described below; details that have already been given are omitted.
The parameter acquisition module 510 is configured to obtain the image parameters of the target object contained in an image to be processed. The parameter acquisition module 510 can be implemented by the processor 102 of the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
The processing module 520 is configured to process the image parameters of the image to be processed using the image classification model obtained by the image processing method 200 described above, so as to obtain the classification result of the image to be processed. The processing module 520 can be implemented by the processor 102 of the electronic device shown in Fig. 1 running program instructions stored in the storage device 104.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or in software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered to go beyond the scope of the present invention.
Fig. 6 shows a schematic block diagram of an image processing system 600 according to an embodiment of the present invention. The image processing system 600 includes an image acquisition device 610, a storage device 620, a processor 630 and an interaction device 640.
The image acquisition device 610 is configured to collect images (the initial images or object images described above). The image acquisition device 610 is optional, and the image processing system 600 may omit it. In that case, images may be collected with another image acquisition device and the collected images sent to the image processing system 600.
The storage device 620 stores computer program instructions for implementing the corresponding steps of the image processing method according to the embodiments of the present invention.
The processor 630 is configured to run the computer program instructions stored in the storage device 620 to execute the corresponding steps of the image processing method according to the embodiments of the present invention, and to implement the parameter acquisition module 410, the classification result determination module 420 and the model computation module 430 of the image processing apparatus 400 according to the embodiments of the present invention.
The interaction device 640 is configured to interact with the user and to receive the user input information. In one example, the interaction device 640 includes input devices such as a mouse and a keyboard. In one example, the interaction device 640 may also include output devices such as a display screen. In another example, the interaction device 640 may include a bidirectional device such as a touch screen, which can serve both as an input device receiving the user input information and as an output device displaying images. The interaction device 640 is optional, and the image processing system 600 may omit it. In that case, the image processing system 600 may include a communication interface and receive, through the communication interface, user input information sent by another device.
In one embodiment, the computer program instructions, when run by the processor 630, are used to execute the following steps: obtaining the image parameters of the target object contained in each of multiple object images; determining the respective classification results of the multiple object images based on user input information, the classification results including at least two of the retain result, the discard result and the neutral result, which respectively indicate that the corresponding image is to be retained, discarded or treated as neutral; and computing an image classification model according to the correspondence between the classification results of the multiple object images and the image parameters of the multiple object images, the image classification model being used to classify any image to obtain a classification result of that image.
Illustratively, the image processing system 600 further includes an interaction device and/or an image acquisition device; the interaction device is configured to receive the user input information, and the image acquisition device is configured to collect the multiple object images or at least one initial image, the multiple object images being generated based on the at least one initial image.
Illustratively, the step of determining the respective classification results of the multiple object images based on user input information, executed when the computer program instructions are run by the processor 630, includes: receiving user input information respectively related to at least part of the multiple object images; determining the respective classification results of the at least part of the object images according to the user input information respectively related to the at least part of the object images; and, for each remaining object image of the multiple object images other than the at least part of the object images, determining the classification result of the remaining object image to be one of the retain result, the discard result and the neutral result.
Illustratively, the step of determining the respective classification results of the multiple object images based on user input information, executed when the computer program instructions are run by the processor 630, includes: for each of the object images, in the case where user input information related to the object image is received before the instruction information indicating that the image classification is complete is received, receiving the user input information related to the object image in real time; determining the initial result of the object image in real time according to the user input information related to the object image, the initial results belonging to the same result types as the classification results; determining, when the instruction information is received, the initial result of the object image to be the classification result of the object image; and/or, in the case where no user input information related to the object image is received before the instruction information is received, determining the classification result of the object image to be one of the retain result, the discard result and the neutral result.
Illustratively, the computer program instructions, when run by the processor 630, are also used to execute the following step: for each of the object images, in the case where user input information related to the object image is received before the instruction information is received, if the initial result of the object image belongs to a result of a predetermined type, outputting a corresponding icon in real time according to the initial result of the object image for display in association with the object image.
Illustratively, before the step of determining the respective classification results of the multiple object images based on user input information is executed, the computer program instructions, when run by the processor 630, are also used to execute the following steps: obtaining the multiple object images; and outputting the multiple object images for display in thumbnail form.
Illustratively, the step of obtaining the image parameters of the target object contained in each of multiple object images, executed when the computer program instructions are run by the processor 630, includes: obtaining at least one initial image; for each of the at least one initial image, performing object detection on the initial image to obtain position information of each target object in the initial image; for each of the at least one initial image, extracting from the initial image, based on the position information of each target object in the initial image, a sub-image containing each target object; determining at least part of the sub-images extracted from the at least one initial image to be the multiple object images; and calculating the image parameters of the target object contained in each of the multiple object images.
Illustratively, the step of obtaining the image parameters of the target object contained in each of multiple object images, executed when the computer program instructions are run by the processor 630, includes: obtaining multiple object images; and, for each of the multiple object images, calculating the image parameters of the target object contained in the object image.
Illustratively, the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
Illustratively, the image classification model is implemented with a linear function or a nonlinear function.
Fig. 7 shows a schematic block diagram of an image classification system 700 according to an embodiment of the present invention. The image classification system 700 includes an image acquisition device 710, a storage device 720 and a processor 730.
The image acquisition device 710 is configured to collect images (the images to be processed). The image acquisition device 710 is optional, and the image classification system 700 may omit it. In that case, images may be collected with another image acquisition device and the collected images sent to the image classification system 700.
The storage device 720 stores computer program instructions for implementing the corresponding steps of the image classification method according to the embodiments of the present invention.
The processor 730 is configured to run the computer program instructions stored in the storage device 720 to execute the corresponding steps of the image classification method according to the embodiments of the present invention, and to implement the parameter acquisition module 510 and the processing module 520 of the image classification apparatus 500 according to the embodiments of the present invention.
In addition, according to the embodiments of the present invention, a storage medium is further provided, on which program instructions are stored. The program instructions, when run by a computer or processor, are used to execute the corresponding steps of the image processing method of the embodiments of the present invention and to implement the corresponding modules of the image processing apparatus according to the embodiments of the present invention. The storage medium may include, for example, the memory card of a smartphone, the storage unit of a tablet computer, the hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the program instructions, when run by a computer or processor, may cause the computer or processor to implement the functional modules of the image processing apparatus according to the embodiments of the present invention and/or to execute the image processing method according to the embodiments of the present invention.
In one embodiment, the program instructions are used, at runtime, to execute the following steps: obtaining the image parameters of the target object contained in each of multiple object images; determining the respective classification results of the multiple object images based on user input information, the classification results including at least two of the retain result, the discard result and the neutral result, which respectively indicate that the corresponding image is to be retained, discarded or treated as neutral; and computing an image classification model according to the correspondence between the classification results of the multiple object images and the image parameters of the multiple object images, the image classification model being used to classify any image to obtain a classification result of that image.
Illustratively, the step of determining the respective classification results of the multiple object images based on user input information, executed by the program instructions at runtime, includes: receiving user input information respectively related to at least part of the multiple object images; determining the respective classification results of the at least part of the object images according to the user input information respectively related to the at least part of the object images; and, for each remaining object image of the multiple object images other than the at least part of the object images, determining the classification result of the remaining object image to be one of the retain result, the discard result and the neutral result.
Illustratively, the step of determining the respective classification results of the multiple object images based on user input information, executed by the program instructions at runtime, includes: for each of the object images, in the case where user input information related to the object image is received before the instruction information indicating that the image classification is complete is received, receiving the user input information related to the object image in real time; determining the initial result of the object image in real time according to the user input information related to the object image, the initial results belonging to the same result types as the classification results; determining, when the instruction information is received, the initial result of the object image to be the classification result of the object image; and/or, in the case where no user input information related to the object image is received before the instruction information is received, determining the classification result of the object image to be one of the retain result, the discard result and the neutral result.
Illustratively, the program instructions are also used, at runtime, to execute the following step: for each of the object images, in the case where user input information related to the object image is received before the instruction information is received, if the initial result of the object image belongs to a result of a predetermined type, outputting a corresponding icon in real time according to the initial result of the object image for display in association with the object image.
Illustratively, before the step of determining the respective classification results of the multiple object images based on user input information is executed, the program instructions are also used, at runtime, to execute the following steps: obtaining the multiple object images; and outputting the multiple object images for display in thumbnail form.
Illustratively, the step of obtaining the image parameters of the target object contained in each of multiple object images, executed by the program instructions at runtime, includes: obtaining at least one initial image; for each of the at least one initial image, performing object detection on the initial image to obtain position information of each target object in the initial image; for each of the at least one initial image, extracting from the initial image, based on the position information of each target object in the initial image, a sub-image containing each target object; determining at least part of the sub-images extracted from the at least one initial image to be the multiple object images; and calculating the image parameters of the target object contained in each of the multiple object images.
Illustratively, the step of obtaining the image parameters of the target object contained in each of multiple object images, executed by the program instructions at runtime, includes: obtaining multiple object images; and, for each of the multiple object images, calculating the image parameters of the target object contained in the object image.
Illustratively, the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
Illustratively, the image classification model is implemented with a linear function or a nonlinear function.
The modules of the image processing system according to the embodiments of the present invention may be implemented by the processor of the electronic device for image processing according to the embodiments of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in the computer-readable storage medium of a computer program product according to the embodiments of the present invention are run by a computer.
In addition, according to the embodiments of the present invention, a storage medium is further provided, on which program instructions are stored. The program instructions, when run by a computer or processor, are used to execute the corresponding steps of the image classification method of the embodiments of the present invention and to implement the corresponding modules of the image classification apparatus according to the embodiments of the present invention. The storage medium may include, for example, the memory card of a smartphone, the storage unit of a tablet computer, the hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, or any combination of the above storage media.
In one embodiment, the program instructions, when run by a computer or processor, may cause the computer or processor to implement the functional modules of the image classification apparatus according to the embodiments of the present invention and/or to execute the image classification method according to the embodiments of the present invention.
In one embodiment, the program instructions are used, at runtime, to execute the following steps: obtaining the image parameters of the target object contained in an image to be processed; and processing the image parameters of the image to be processed using the image classification model obtained by the image processing method described above, so as to obtain the classification result of the image to be processed.
The modules of the image classification system according to the embodiments of the present invention may be implemented by the processor of the electronic device for image classification according to the embodiments of the present invention running computer program instructions stored in a memory, or may be implemented when computer instructions stored in the computer-readable storage medium of a computer program product according to the embodiments of the present invention are run by a computer.
Although the example embodiments have been described here with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art can make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to fall within the scope of the present invention as claimed in the appended claims.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a division by logical function, and other divisions are possible in actual implementations. For example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not executed.
Numerous specific details are set forth in the description provided here. It should be understood, however, that the embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and aid the understanding of one or more of the various inventive aspects, the features of the invention are sometimes grouped together in a single embodiment, figure or description thereof in the description of the exemplary embodiments of the present invention. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that a given technical problem can be solved with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will understand that, except where features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
In addition, those skilled in the art will understand that, although some embodiments described herein include certain features that are included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor, a digital signal processor (DSP), a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC) or the like may be used in practice to implement some or all of the functions of some modules of the image processing apparatus or image classification apparatus according to the embodiments of the present invention. The present invention may also be implemented as programs (for example, computer programs and computer program products) of a device for executing part or all of the methods described herein. Such programs implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference sign placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, third and the like does not indicate any ordering; these words may be interpreted as names.
The above is only a description of specific embodiments of the present invention, and the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (16)

1. An image processing method, comprising:
obtaining image parameters of the target object contained in each of multiple object images;
determining respective classification results of the multiple object images based on user input information, the classification results comprising at least two of a retain result, a discard result and a neutral result, the retain result, the discard result and the neutral result respectively indicating that the corresponding image is to be retained, discarded or treated as neutral; and
computing an image classification model according to the correspondence between the classification results of the multiple object images and the image parameters of the multiple object images, the image classification model being used to classify any image to obtain a classification result of that image.
2. The method of claim 1, wherein determining the respective classification results of the multiple object images based on user input information comprises:
receiving user input information respectively related to at least part of the multiple object images;
determining the respective classification results of the at least part of the object images according to the user input information respectively related to the at least part of the object images; and
for each remaining object image of the multiple object images other than the at least part of the object images, determining the classification result of the remaining object image to be one of the retain result, the discard result and the neutral result.
3. The method of claim 1, wherein determining the respective classification results of the multiple object images based on user input information comprises:
for each of the object images,
in the case where user input information related to the object image is received before instruction information indicating that the image classification is complete is received,
receiving the user input information related to the object image in real time;
determining the initial result of the object image in real time according to the user input information related to the object image, wherein the initial results belong to the same result types as the classification results; and
determining, when the instruction information is received, the initial result of the object image to be the classification result of the object image; and/or
in the case where no user input information related to the object image is received before the instruction information is received,
determining the classification result of the object image to be one of the retain result, the discard result and the neutral result.
4. The method of claim 3, wherein the image processing method further comprises:
for each of the object images, in the case where user input information related to the object image is received before the instruction information is received, if the initial result of the object image belongs to a result of a predetermined type, outputting a corresponding icon in real time according to the initial result of the object image for display in association with the object image.
5. The method of claim 1, wherein obtaining the image parameters of the target object contained in each of multiple object images comprises:
obtaining at least one initial image;
for each of the at least one initial image,
performing object detection on the initial image to obtain position information of each target object in the initial image; and
extracting from the initial image, based on the position information of each target object in the initial image, a sub-image containing each target object;
determining at least part of the sub-images extracted from the at least one initial image to be the multiple object images; and
calculating the image parameters of the target object contained in each of the multiple object images.
6. The method of claim 1, wherein the image parameters include one or more of the following: object size, object blur degree, object pose data, object brightness, and object occlusion degree.
7. The method of claim 1, wherein the image classification model is implemented with a linear function or a nonlinear function.
8. The method of claim 1, wherein computing the image classification model according to the correspondence between the classification results of the multiple object images and the image parameters of the multiple object images comprises:
for each of the multiple object images, substituting the classification result and the image parameters of the object image into the function f(x) = k1*x1 + k2*x2 + ... + kn*xn, where the classification result Y = f(x), xi is the i-th image parameter, ki is the coefficient of the i-th image parameter, and i = 1, 2, ..., n; and
solving the system of equations formed by the substitutions for the multiple object images to obtain k1, k2, ..., kn, thereby obtaining the expression of the function f(x) as the image classification model.
9. An image classification method, comprising:
obtaining image parameters of the target object contained in an image to be processed; and
processing the image parameters of the image to be processed using the image classification model obtained by the image processing method of any one of claims 1-8, so as to obtain the classification result of the image to be processed.
10. An image processing apparatus, comprising:
a parameter acquisition module, configured to obtain image parameters of the target object contained in each of multiple object images;
a classification result determination module, configured to determine respective classification results of the multiple object images based on user input information, the classification results belonging to at least two of a retain result, a discard result and a neutral result, the retain result, the discard result and the neutral result respectively indicating that the corresponding image is to be retained, discarded or treated as neutral; and
a model computation module, configured to compute an image classification model according to the correspondence between the classification results of the multiple object images and the image parameters of the multiple object images, the image classification model being used to classify any image to obtain a classification result of that image.
11. An image classification apparatus, comprising:
a parameter acquisition module, configured to obtain image parameters of the target object contained in an image to be processed; and
a processing module, configured to process the image parameters of the image to be processed using the image classification model obtained by the image processing method of any one of claims 1-8, so as to obtain the classification result of the image to be processed.
12. An image processing system, comprising a processor and a memory, wherein the memory stores computer program instructions which, when run by the processor, are used to execute the image processing method of any one of claims 1-8.
13. The image processing system of claim 12, wherein the image processing system further comprises an interaction device and/or an image acquisition device,
the interaction device being configured to receive the user input information; and
the image acquisition device being configured to collect the multiple object images or at least one initial image, the multiple object images being generated based on the at least one initial image.
14. An image classification system, comprising a processor and a memory, wherein the memory stores computer program instructions which, when run by the processor, are used to execute the following steps:
obtaining image parameters of the target object contained in an image to be processed; and
processing the image parameters of the image to be processed using the image classification model obtained by the image processing method of any one of claims 1-8, so as to obtain the classification result of the image to be processed.
15. A storage medium, on which program instructions are stored, the program instructions being used, at runtime, to execute the image processing method of any one of claims 1-8.
16. A storage medium, on which program instructions are stored, the program instructions being used, at runtime, to execute the following steps:
obtaining image parameters of the target object contained in an image to be processed; and
processing the image parameters of the image to be processed using the image classification model obtained by the image processing method of any one of claims 1-8, so as to obtain the classification result of the image to be processed.




Application publication date: 20181123