CN105487774A - Image grouping method and device - Google Patents

Image grouping method and device

Info

Publication number
CN105487774A
CN105487774A (application CN201510848347.6A)
Authority
CN
China
Prior art keywords
image
target area
sliding path
face
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510848347.6A
Other languages
Chinese (zh)
Other versions
CN105487774B (en)
Inventor
刘洁
吴小勇
茹忆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510848347.6A priority Critical patent/CN105487774B/en
Publication of CN105487774A publication Critical patent/CN105487774A/en
Application granted granted Critical
Publication of CN105487774B publication Critical patent/CN105487774B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an image grouping method and device, belonging to the technical field of image processing. The method comprises: generating an area selection instruction according to a user's selection of a target area in a reference image of an image set, wherein the area selection instruction is used to indicate the target area; acquiring image information in the target area according to the area selection instruction; determining similar images in the image set according to the image information in the target area, wherein the similarity between a feature of the image information of the similar images and a feature of the image information in the target area is greater than a preset similarity threshold; and dividing the reference image and the similar images into one group. The method and device solve the problem of low image grouping accuracy, improve the accuracy of image grouping, and are used for grouping images.

Description

Image grouping method and device
Technical field
The present disclosure relates to the technical field of image processing, and in particular to an image grouping method and device.
Background technology
With the development of electronic technology, terminals offer increasingly rich functions. A user can use a terminal (such as a mobile phone) to photograph faces or landscapes, obtain multiple photos containing face images or landscapes, and store these photos in the terminal.
In the related art, face recognition technology can be used to group the multiple photos stored in the terminal so that the user can manage the grouped photos. For example, the photos containing face features among the multiple photos can be identified by face recognition technology, the identified photos containing face features can be divided into one group, and the unrecognized photos containing no face features can be divided into another group.
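For context only, a minimal sketch of this related-art grouping (not part of the present disclosure) might look as follows, using OpenCV's stock Haar-cascade frontal-face detector; the file paths and detector parameters are assumptions of this illustration:

```python
import cv2

# Related-art grouping: split photos by whether a face is detected.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def group_by_face(photo_paths):
    with_faces, without_faces = [], []
    for path in photo_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue  # unreadable file, skip
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        (with_faces if len(faces) > 0 else without_faces).append(path)
    return with_faces, without_faces
```

As the next paragraph explains, a blurred face may fail this detection step entirely, which is the weakness the disclosure addresses.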
Summary of the invention
The present disclosure provides an image grouping method and device. The technical solution is as follows:
According to a first aspect of the present disclosure, an image grouping method is provided, the method comprising:
generating an area selection instruction according to a user's selection of a target area on a reference image in an image set, wherein the area selection instruction is used to indicate the target area, and the image set comprises at least two images;
acquiring image information in the target area according to the area selection instruction;
determining similar images in the image set according to the image information in the target area, wherein the similarity between a feature of the image information of a similar image and a feature of the image information in the target area is greater than a preset similarity threshold; and
dividing the reference image and the similar images into one group.
Optionally, generating the area selection instruction according to the user's selection of the target area on the reference image in the image set comprises:
detecting whether a gesture operation performed by the user on an operating area is identical to a preset gesture operation, wherein the operating area is the region of a user interface of a terminal in which the reference image is displayed;
if the gesture operation performed by the user on the operating area is identical to the preset gesture operation, determining the region corresponding to the user's gesture operation on the operating area as the target area; and
generating the area selection instruction according to the target area.
Optionally, determining the similar images in the image set according to the image information in the target area comprises:
acquiring a reference Haar feature of the image information in the target area;
acquiring a Haar feature of the image information of each image in the image set other than the reference image;
determining a matching confidence between the Haar feature of the image information of each image and the reference Haar feature; and
determining, among the images other than the reference image, an image whose Haar feature of the image information has a matching confidence with the reference Haar feature greater than a preset matching confidence threshold as a similar image.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a first sliding touch operation on the operating area, wherein a first sliding path of the first sliding touch operation is a closed path and coincides with the edge of the face image;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
acquiring the first sliding path;
determining, according to the first sliding path, a first rectangular area each side of which is tangent to the first sliding path; and
determining the first rectangular area as the target area,
or determining, according to the first rectangular area, a first circular area circumscribing the first rectangular area, and determining the first circular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a click operation on any two points in the operating area, wherein the face image is located in a second rectangular area that has the two points as two vertices and the line segment connecting the two points as a diagonal;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the second rectangular area as the target area,
or determining, according to the second rectangular area, a second circular area circumscribing the second rectangular area, and determining the second circular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a second sliding touch operation on the operating area, wherein a second sliding path of the second sliding touch operation is a line segment, the starting point of the second sliding path coincides with the center of the face image, the end point of the second sliding path is located at the edge of the face image, and the face image is located in a third circular area whose center is the starting point of the second sliding path and whose radius is the length of the second sliding path;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the third circular area as the target area,
or determining, according to the third circular area, a third rectangular area each side of which is tangent to the third circular area, and determining the third rectangular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a third sliding touch operation on the operating area, wherein the sliding paths of the third sliding touch operation comprise a third sliding path and a fourth sliding path, the third sliding path and the fourth sliding path are line segments and are collinear, the midpoint of the line segment connecting the starting point of the third sliding path and the starting point of the fourth sliding path coincides with the center of the face image, the end point of the third sliding path and the end point of the fourth sliding path are both located at the edge of the face image, and the face image is located in a fourth circular area whose center is the midpoint and whose diameter is the line segment connecting the end point of the third sliding path and the end point of the fourth sliding path;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the fourth circular area as the target area,
or determining, according to the fourth circular area, a fourth rectangular area each side of which is tangent to the fourth circular area, and determining the fourth rectangular area as the target area.
Optionally, the method further comprises:
judging whether the feature of the image information in the target area comprises a face feature;
if the feature of the image information in the target area comprises a face feature, recognizing the image set by using face recognition technology, and determining the images in the image set that contain face features; and
dividing the images containing face features into one group.
Optionally, the method further comprises:
recognizing the image set by using face recognition technology, and determining the images in the image set that contain face features; and
dividing the images containing face features and the reference image into the same group.
According to a second aspect of the present disclosure, an image grouping device is provided, the image grouping device comprising:
a generation module configured to generate an area selection instruction according to a user's selection of a target area on a reference image in an image set, wherein the area selection instruction is used to indicate the target area, and the image set comprises at least two images;
an acquisition module configured to acquire image information in the target area according to the area selection instruction;
a determination module configured to determine similar images in the image set according to the image information in the target area, wherein the similarity between a feature of the image information of a similar image and a feature of the image information in the target area is greater than a preset similarity threshold; and
a first grouping module configured to divide the reference image and the similar images into one group.
Optionally, the generation module comprises:
a detection submodule configured to detect whether a gesture operation performed by the user on an operating area is identical to a preset gesture operation, wherein the operating area is the region of a user interface of a terminal in which the reference image is displayed;
a determination submodule configured to, when the gesture operation performed by the user on the operating area is identical to the preset gesture operation, determine the region corresponding to the user's gesture operation on the operating area as the target area; and
a generation submodule configured to generate the area selection instruction according to the target area.
Optionally, the determination module is configured to:
acquire a reference Haar feature of the image information in the target area;
acquire a Haar feature of the image information of each image in the image set other than the reference image;
determine a matching confidence between the Haar feature of the image information of each image and the reference Haar feature; and
determine, among the images other than the reference image, an image whose Haar feature of the image information has a matching confidence with the reference Haar feature greater than a preset matching confidence threshold as a similar image.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a first sliding touch operation on the operating area, wherein a first sliding path of the first sliding touch operation is a closed path and coincides with the edge of the face image;
the determination submodule is configured to:
acquire the first sliding path;
determine, according to the first sliding path, a first rectangular area each side of which is tangent to the first sliding path; and
determine the first rectangular area as the target area,
or determine, according to the first rectangular area, a first circular area circumscribing the first rectangular area, and determine the first circular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a click operation on any two points in the operating area, wherein the face image is located in a second rectangular area that has the two points as two vertices and the line segment connecting the two points as a diagonal;
the determination submodule is configured to:
determine the second rectangular area as the target area,
or determine, according to the second rectangular area, a second circular area circumscribing the second rectangular area, and determine the second circular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a second sliding touch operation on the operating area, wherein a second sliding path of the second sliding touch operation is a line segment, the starting point of the second sliding path coincides with the center of the face image, the end point of the second sliding path is located at the edge of the face image, and the face image is located in a third circular area whose center is the starting point of the second sliding path and whose radius is the length of the second sliding path;
the determination submodule is configured to:
determine the third circular area as the target area,
or determine, according to the third circular area, a third rectangular area each side of which is tangent to the third circular area, and determine the third rectangular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a third sliding touch operation on the operating area, wherein the sliding paths of the third sliding touch operation comprise a third sliding path and a fourth sliding path, the third sliding path and the fourth sliding path are line segments and are collinear, the midpoint of the line segment connecting the starting point of the third sliding path and the starting point of the fourth sliding path coincides with the center of the face image, the end point of the third sliding path and the end point of the fourth sliding path are both located at the edge of the face image, and the face image is located in a fourth circular area whose center is the midpoint and whose diameter is the line segment connecting the end point of the third sliding path and the end point of the fourth sliding path;
the determination submodule is configured to:
determine the fourth circular area as the target area,
or determine, according to the fourth circular area, a fourth rectangular area each side of which is tangent to the fourth circular area, and determine the fourth rectangular area as the target area.
Optionally, the image grouping device further comprises:
a judgment module configured to judge whether the feature of the image information in the target area comprises a face feature;
a first recognition module configured to, when the feature of the image information in the target area comprises a face feature, recognize the image set by using face recognition technology and determine the images in the image set that contain face features; and
a second grouping module configured to divide the images containing face features into one group.
Optionally, the image grouping device further comprises:
a second recognition module configured to recognize the image set by using face recognition technology and determine the images in the image set that contain face features; and
a third grouping module configured to divide the images containing face features and the reference image into the same group.
According to a third aspect of the present disclosure, an image grouping device is provided, the image grouping device comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
generate an area selection instruction according to a user's selection of a target area on a reference image in an image set, wherein the area selection instruction is used to indicate the target area, and the image set comprises at least two images;
acquire image information in the target area according to the area selection instruction;
determine similar images in the image set according to the image information in the target area, wherein the similarity between a feature of the image information of a similar image and a feature of the image information in the target area is greater than a preset similarity threshold; and
divide the reference image and the similar images into one group.
The present disclosure provides an image grouping method and device. An area selection instruction is generated according to a user's selection of a target area on a reference image in an image set, the image information in the target area is acquired, similar images in the image set are determined according to the image information in the target area, and the reference image and the similar images are divided into one group. Because the similar images whose similarity to the reference image is greater than the preset similarity threshold are selected from the image set according to the user's selection on the reference image, images whose face features are relatively blurred can be accurately divided into the group containing face images, which improves the accuracy of image grouping.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the present disclosure.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present disclosure more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of image grouping according to the related art;
Fig. 2 is a flowchart of an image grouping method according to an exemplary embodiment;
Fig. 3-1 is a flowchart of an image grouping method according to an exemplary embodiment;
Fig. 3-2 is a flowchart of generating an area selection instruction according to an exemplary embodiment;
Fig. 3-3 is a schematic diagram of a gesture operation according to an exemplary embodiment;
Fig. 3-4 is a schematic diagram of another gesture operation according to an exemplary embodiment;
Fig. 3-5 is a schematic diagram of yet another gesture operation according to an exemplary embodiment;
Fig. 3-6 is a schematic diagram of yet another gesture operation according to an exemplary embodiment;
Fig. 3-7 is a flowchart of determining similar images according to an exemplary embodiment;
Fig. 4-1 is a schematic structural diagram of an image grouping device according to an exemplary embodiment;
Fig. 4-2 is a schematic structural diagram of a generation module according to an exemplary embodiment;
Fig. 4-3 is a schematic structural diagram of another image grouping device according to an exemplary embodiment;
Fig. 4-4 is a schematic structural diagram of yet another image grouping device according to an exemplary embodiment;
Fig. 5 is a structural block diagram of yet another image grouping device according to an exemplary embodiment.
The above drawings illustrate specific embodiments of the present disclosure, which are described in more detail below. The drawings and the text description are not intended to limit the scope of the disclosed concept in any way, but to explain the concept of the present disclosure to those skilled in the art by reference to specific embodiments.
Detailed description
To make the objects, technical solutions and advantages of the present disclosure clearer, the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of image grouping according to the related art. As shown in Fig. 1, multiple photos can be stored in a terminal, and the multiple photos may include a photo A1 containing a face image and a photo B containing no face image. In the related art, face recognition technology can be used to group the multiple photos stored in the terminal so that the user can manage the grouped photos. For example, the photo A1 containing face features among the multiple photos can be identified by face recognition technology, the identified photo A1 containing face features can be divided into a first group, and the unrecognized photo can be divided into a second group on the assumption that the unrecognized photo is a photo B containing no face features.
When a user uses the terminal in a dark environment to photograph a photo A2 that contains a face image but whose face features are relatively blurred, the face features in the photo A2 obtained by the terminal are blurred because of the darkness. When grouping the multiple photos, face recognition technology cannot identify the face features in the relatively blurred photo A2, and the relatively blurred photo A2 is therefore divided into the second group together with the photo B containing no face image. Consequently, the accuracy of photo grouping is poor.
Fig. 2 is a flowchart of an image grouping method according to an exemplary embodiment. As shown in Fig. 2, the image grouping method may comprise the following steps.
In step 201, an area selection instruction is generated according to a user's selection of a target area on a reference image in an image set, wherein the area selection instruction is used to indicate the target area, and the image set comprises at least two images.
In step 202, image information in the target area is acquired according to the area selection instruction.
In step 203, similar images in the image set are determined according to the image information in the target area, wherein the similarity between a feature of the image information of a similar image and a feature of the image information in the target area is greater than a preset similarity threshold.
In step 204, the reference image and the similar images are divided into one group.
In summary, in the image grouping method provided by the embodiments of the present disclosure, an area selection instruction is generated according to a user's selection of a target area on a reference image in an image set, the image information in the target area is acquired, similar images in the image set are determined according to the image information in the target area, and the reference image and the similar images are divided into one group. Because the similar images whose similarity to the reference image is greater than the preset similarity threshold are selected from the image set according to the user's selection on the reference image, images whose face features are relatively blurred can be accurately divided into the group containing face images, which improves the accuracy of image grouping.
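The four steps of Fig. 2 can be pictured with a minimal sketch such as the one below; it is an illustration written for this description, not the claimed implementation, and the feature extractor, similarity function and threshold value are placeholders:

```python
from typing import Callable, Iterable, List, Tuple

def group_by_target_area(reference, ref_area_feature,
                         image_set: Iterable,
                         extract_feature: Callable,
                         similarity: Callable,
                         threshold: float) -> Tuple[List, List]:
    """Fig. 2, steps 203-204: compare each image's feature against the
    feature extracted from the user-selected target area (steps 201-202)
    and group the reference image with every sufficiently similar image."""
    similar_group, remaining = [reference], []
    for image in image_set:
        if image is reference:
            continue
        score = similarity(extract_feature(image), ref_area_feature)
        (similar_group if score > threshold else remaining).append(image)
    return similar_group, remaining
```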
Optionally, step 201 may comprise:
detecting whether a gesture operation performed by the user on an operating area is identical to a preset gesture operation, wherein the operating area is the region of a user interface of a terminal in which the reference image is displayed;
if the gesture operation performed by the user on the operating area is identical to the preset gesture operation, determining the region corresponding to the user's gesture operation on the operating area as the target area; and
generating the area selection instruction according to the target area.
Optionally, step 203 may comprise:
acquiring a reference Haar feature of the image information in the target area;
acquiring a Haar feature of the image information of each image in the image set other than the reference image;
determining a matching confidence between the Haar feature of the image information of each image and the reference Haar feature; and
determining, among the images other than the reference image, an image whose Haar feature of the image information has a matching confidence with the reference Haar feature greater than a preset matching confidence threshold as a similar image.
Optionally, the reference image comprises a face image,
the preset gesture operation may comprise: a first sliding touch operation on the operating area, wherein a first sliding path of the first sliding touch operation is a closed path and coincides with the edge of the face image;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
acquiring the first sliding path;
determining, according to the first sliding path, a first rectangular area each side of which is tangent to the first sliding path; and
determining the first rectangular area as the target area,
or determining, according to the first rectangular area, a first circular area circumscribing the first rectangular area, and determining the first circular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a click operation on any two points in the operating area, wherein the face image is located in a second rectangular area that has the two points as two vertices and the line segment connecting the two points as a diagonal;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the second rectangular area as the target area,
or determining, according to the second rectangular area, a second circular area circumscribing the second rectangular area, and determining the second circular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a second sliding touch operation on the operating area, wherein a second sliding path of the second sliding touch operation is a line segment, the starting point of the second sliding path coincides with the center of the face image, the end point of the second sliding path is located at the edge of the face image, and the face image is located in a third circular area whose center is the starting point of the second sliding path and whose radius is the length of the second sliding path;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the third circular area as the target area,
or determining, according to the third circular area, a third rectangular area each side of which is tangent to the third circular area, and determining the third rectangular area as the target area.
Optionally, the reference image comprises a face image,
the preset gesture operation comprises: a third sliding touch operation on the operating area, wherein the sliding paths of the third sliding touch operation comprise a third sliding path and a fourth sliding path, the third sliding path and the fourth sliding path are line segments and are collinear, the midpoint of the line segment connecting the starting point of the third sliding path and the starting point of the fourth sliding path coincides with the center of the face image, the end point of the third sliding path and the end point of the fourth sliding path are both located at the edge of the face image, and the face image is located in a fourth circular area whose center is the midpoint and whose diameter is the line segment connecting the end point of the third sliding path and the end point of the fourth sliding path;
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the fourth circular area as the target area,
or determining, according to the fourth circular area, a fourth rectangular area each side of which is tangent to the fourth circular area, and determining the fourth rectangular area as the target area.
Optionally, the image grouping method may further comprise:
judging whether the feature of the image information in the target area comprises a face feature;
if the feature of the image information in the target area comprises a face feature, recognizing the image set by using face recognition technology, and determining the images in the image set that contain face features; and
dividing the images containing face features into one group.
Optionally, the image grouping method may further comprise:
recognizing the image set by using face recognition technology, and determining the images in the image set that contain face features; and
dividing the images containing face features and the reference image into the same group.
In summary, in the image grouping method provided by the embodiments of the present disclosure, an area selection instruction is generated according to a user's selection of a target area on a reference image in an image set, the image information in the target area is acquired, similar images in the image set are determined according to the image information in the target area, and the reference image and the similar images are divided into one group. Because the similar images whose similarity to the reference image is greater than the preset similarity threshold are selected from the image set according to the user's selection on the reference image, images whose face features are relatively blurred can be accurately divided into the group containing face images, which improves the accuracy of image grouping.
Fig. 3-1 is a flowchart of an image grouping method according to an exemplary embodiment. As shown in Fig. 3-1, the image grouping method may comprise the following steps.
In step 301, an area selection instruction is generated according to a user's selection of a target area on a reference image in an image set.
For example, the area selection instruction may be used to indicate the target area, and the image set may comprise at least two images. Optionally, as shown in Fig. 3-2, step 301 may comprise the following sub-steps.
In sub-step 3011, it is detected whether a gesture operation performed by the user on an operating area is identical to a preset gesture operation. If the gesture operation performed by the user on the operating area is identical to the preset gesture operation, sub-step 3012 is performed; if the gesture operation performed by the user on the operating area is different from the preset gesture operation, sub-step 3011 is performed again.
For example, the operating area may be the region of the user interface of the terminal in which the reference image is displayed. The reference image may comprise a face image. It should be noted that the reference image may also comprise other figures, which is not limited by the embodiments of the present disclosure.
In sub-step 3012, the region corresponding to the user's gesture operation on the operating area is determined as the target area. Sub-step 3013 is then performed.
Fig. 3-3 is a schematic diagram of a gesture operation according to an exemplary embodiment. As shown in Fig. 3-3, in a first implementation, the preset gesture operation may comprise a first sliding touch operation on the operating area X, wherein the first sliding path H1 of the first sliding touch operation is a closed path and coincides with the edge of the face image. When determining the target area, the first sliding path H1 may first be acquired, and, according to the position of the first sliding path H1 on the operating area X, a first rectangular area J1 each side of which is tangent to the first sliding path H1 is determined. Then the first rectangular area J1 is determined as the target area, or, according to the first rectangular area J1, a first circular area Y1 circumscribing the first rectangular area J1 is determined and the first circular area Y1 is determined as the target area.
When selecting the reference image, the user may touch a point on the edge of the face image with one finger and then slide on the operating area X along the first sliding path H1 until the end point of the slide coincides with its starting point. The terminal can then determine, according to the first sliding path H1, the first rectangular area J1 each side of which is tangent to the first sliding path H1, and, further, determine, according to the first rectangular area J1, the first circular area Y1 circumscribing the first rectangular area J1.
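As a concrete illustration of this first implementation (an assumption of this description, not code from the patent), the sketch below computes the rectangle whose sides are tangent to a closed sliding path, i.e. its axis-aligned bounding box, and the circle circumscribing that rectangle:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def rect_tangent_to_path(path: List[Point]) -> Tuple[float, float, float, float]:
    """First rectangular area J1: the axis-aligned rectangle each side of
    which touches the closed sliding path H1 (its bounding box)."""
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    return min(xs), min(ys), max(xs), max(ys)  # (left, top, right, bottom)

def circle_circumscribing_rect(rect: Tuple[float, float, float, float]) -> Tuple[Point, float]:
    """First circular area Y1: the circle passing through the four corners of J1."""
    left, top, right, bottom = rect
    center = ((left + right) / 2.0, (top + bottom) / 2.0)
    radius = math.hypot(right - left, bottom - top) / 2.0
    return center, radius
```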
Fig. 3-4 is a schematic diagram of another gesture operation according to an exemplary embodiment. As shown in Fig. 3-4, in a second implementation, the preset gesture operation may comprise a click operation on any two points (point D1 and point D2) in the operating area X, wherein the face image is located in a second rectangular area J2 that has the two points (point D1 and point D2) as two vertices and the line segment connecting them as a diagonal. When determining the target area, the second rectangular area J2 may be determined as the target area, or, according to the second rectangular area J2, a second circular area Y2 circumscribing the second rectangular area J2 may be determined and the second circular area Y2 is determined as the target area.
When selecting the reference image, the user may touch any two points of the operating area with two fingers such that the face image is located in the second rectangular area J2 that has the two points as two vertices and the line segment connecting them as a diagonal, and the second rectangular area J2 is determined from the two points. Further, according to the second rectangular area J2, the second circular area Y2 circumscribing the second rectangular area J2 can also be determined.
Fig. 3-5 is a schematic diagram of yet another gesture operation according to an exemplary embodiment. As shown in Fig. 3-5, in a third implementation, the preset gesture operation may comprise a second sliding touch operation on the operating area X, wherein the second sliding path H2 of the second sliding touch operation is a line segment, the starting point Q2 of the second sliding path H2 coincides with the center of the face image, the end point Z2 of the second sliding path H2 is located at the edge of the face image, and the face image is located in a third circular area Y3 whose center is the starting point Q2 of the second sliding path H2 and whose radius is the length of the second sliding path H2. When determining the target area, the third circular area Y3 may be determined as the target area, or, according to the third circular area Y3, a third rectangular area J3 each side of which is tangent to the third circular area Y3 may be determined and the third rectangular area J3 is determined as the target area.
When selecting the reference image, the user may touch the center of the face image with one finger and slide from the center of the face image to its edge, such that the face image is located in the third circular area Y3 whose center is the starting point Q2 of the second sliding path H2 and whose radius is the length of the second sliding path H2, and the third circular area Y3 is determined from the second sliding path H2. Further, according to the third circular area Y3, the third rectangular area J3 each side of which is tangent to the third circular area Y3 can also be determined.
Fig. 3-6 is a schematic diagram of yet another gesture operation according to an exemplary embodiment. As shown in Fig. 3-6, in a fourth implementation, the preset gesture operation may comprise a third sliding touch operation on the operating area X, wherein the sliding paths of the third sliding touch operation comprise a third sliding path H3 and a fourth sliding path H4; the third sliding path H3 and the fourth sliding path H4 are line segments and are collinear; the midpoint M of the line segment connecting the starting point Q3 of the third sliding path H3 and the starting point Q4 of the fourth sliding path H4 coincides with the center of the face image; the end point Z3 of the third sliding path H3 and the end point Z4 of the fourth sliding path H4 are both located at the edge of the face image; and the face image is located in a fourth circular area Y4 whose center is the midpoint M and whose diameter is the line segment connecting the end point Z3 of the third sliding path H3 and the end point Z4 of the fourth sliding path H4. When determining the target area, the fourth circular area Y4 may be determined as the target area, or, according to the fourth circular area Y4, a fourth rectangular area J4 each side of which is tangent to the fourth circular area Y4 may be determined and the fourth rectangular area J4 is determined as the target area.
When selecting the reference image, the user may touch two points in the face image with two fingers such that the line connecting the two points passes through the center of the face image, and slide the two fingers away from the center of the face image until they reach its edge, such that the face image is located in the fourth circular area Y4 whose center is the midpoint M and whose diameter is the line segment connecting the end point Z3 of the third sliding path H3 and the end point Z4 of the fourth sliding path H4. Further, according to the fourth circular area Y4, the fourth rectangular area J4 each side of which is tangent to the fourth circular area Y4 can also be determined.
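The circular areas of Fig. 3-5 and Fig. 3-6 and their tangent rectangles follow from elementary geometry. The sketch below (an illustrative assumption, not the patent's code) builds the fourth circular area Y4 from the two slide end points and the third/fourth rectangular area as the square whose sides are tangent to a circle; for Fig. 3-5 the circle is simply centered at the starting point Q2 with the length of H2 as its radius:

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def circle_from_diameter(end_a: Point, end_b: Point) -> Tuple[Point, float]:
    """Fourth circular area Y4: centered at the midpoint M of the two slide
    end points Z3 and Z4, with the segment Z3-Z4 as its diameter (Fig. 3-6)."""
    center = ((end_a[0] + end_b[0]) / 2.0, (end_a[1] + end_b[1]) / 2.0)
    radius = math.hypot(end_b[0] - end_a[0], end_b[1] - end_a[1]) / 2.0
    return center, radius

def rect_tangent_to_circle(center: Point, radius: float) -> Tuple[float, float, float, float]:
    """Third/fourth rectangular area: the axis-aligned square whose four
    sides are tangent to the circular area."""
    cx, cy = center
    return cx - radius, cy - radius, cx + radius, cy + radius
```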
In sub-step 3013, the area selection instruction is generated according to the target area.
After the target area on the reference image is determined, the area selection instruction can be generated according to the position of the target area on the reference image, so that the area selection instruction indicates the target area, that is, indicates the position of the target area on the reference image.
In step 302, image information in the target area is acquired according to the area selection instruction.
After the target area on the reference image is determined according to the area selection instruction, the image information in the target area on the reference image can be acquired. For the specific steps of acquiring the image information in the target area on the reference image, reference may be made to the steps of acquiring the image information of a certain region of an image in the related art, which are not repeated here.
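For illustration only, if the reference image is held as a NumPy array and the target area is an axis-aligned rectangle in pixel coordinates, the acquisition of step 302 may be as simple as a slice (the coordinate convention is an assumption of this sketch):

```python
import numpy as np

def image_info_in_target_area(reference: np.ndarray, rect) -> np.ndarray:
    """Step 302: return the pixels of the reference image inside the target
    area, where rect = (left, top, right, bottom) in pixel coordinates."""
    left, top, right, bottom = (int(round(v)) for v in rect)
    return reference[top:bottom, left:right]
```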
In step 303, similar images in the image set are determined according to the image information in the target area.
For example, a similar image may be an image for which the similarity between a feature of its image information and a feature of the image information in the target area is greater than the preset similarity threshold.
As shown in Fig. 3-7, step 303 may comprise the following sub-steps.
In sub-step 3031, a reference Haar feature of the image information in the target area is acquired.
The Haar feature (also known as a Haar-like feature) of the image information in the target area acquired in step 302 can be computed. For the specific steps of acquiring the Haar feature of the image information in the target area, reference may be made to the steps of acquiring Haar features in the related art.
In sub-step 3032, a Haar feature of the image information of each image in the image set other than the reference image is acquired.
After the reference Haar feature of the image information in the target area is acquired, the Haar feature of the image information of each image in the image set can be acquired with reference to the steps of acquiring Haar features in the related art. For example, when acquiring the Haar feature of the image information of each image in the image set other than the reference image, Haar features of the image information at multiple scales can be acquired, that is, the Haar features of the image information in regions of multiple different sizes in the image.
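Haar-like features are differences of pixel sums over adjacent rectangles and are usually computed from an integral image. The following sketch (an illustration under assumptions, not the patent's implementation) computes a basic two-rectangle Haar feature at a given window using OpenCV's integral image:

```python
import cv2
import numpy as np

def rect_sum(ii: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    # Sum of pixels in the w-by-h rectangle at (x, y), read off the integral image.
    return float(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def haar_two_rect(gray: np.ndarray, x: int, y: int, w: int, h: int) -> float:
    """A simple two-rectangle Haar-like feature: left half minus right half
    of the w-by-h window at (x, y)."""
    ii = cv2.integral(gray)  # (rows+1, cols+1) integral image
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

In practice a bank of such features at multiple positions and scales would be collected into a feature vector for the target area and for each candidate image.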
In sub-step 3033, a matching confidence between the Haar feature of the image information of each image and the reference Haar feature is determined.
The matching confidence between the Haar feature of the image information of each image and the reference Haar feature can be determined according to the Haar feature of the image information of each image, the reference Haar feature of the image information in the target area, and a formula for computing the matching confidence.
Optionally, the matching confidence between the Haar feature of the image information of each image in the image set and the reference Haar feature can be computed, or the matching confidence between the Haar feature of the image information of each image in the image set other than the reference image and the reference Haar feature can be computed, which is not limited by the embodiments of the present disclosure.
In sub-step 3034, among the images other than the reference image, an image whose Haar feature of the image information has a matching confidence with the reference Haar feature greater than a preset matching confidence threshold is determined as a similar image.
After the matching confidence between the Haar feature of the image information of each image and the reference Haar feature is determined, the matching confidence can be compared with the preset matching confidence threshold, and, among the images other than the reference image, an image whose Haar feature of the image information has a matching confidence with the reference Haar feature greater than the preset matching confidence threshold is determined as a similar image.
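The disclosure does not fix a particular matching-confidence formula. As an assumption for illustration, the sketch below uses cosine similarity between Haar feature vectors as the matching confidence and keeps the images above a preset threshold (the 0.8 value is also assumed):

```python
import numpy as np
from typing import Dict, List

def matching_confidence(feat: np.ndarray, ref_feat: np.ndarray) -> float:
    # Cosine similarity as a stand-in for the matching-confidence formula.
    denom = np.linalg.norm(feat) * np.linalg.norm(ref_feat)
    return float(np.dot(feat, ref_feat) / denom) if denom else 0.0

def select_similar(features: Dict[str, np.ndarray], ref_feat: np.ndarray,
                   threshold: float = 0.8) -> List[str]:
    """Sub-steps 3033-3034: keep the images whose matching confidence with
    the reference Haar feature exceeds the preset threshold."""
    return [name for name, feat in features.items()
            if matching_confidence(feat, ref_feat) > threshold]
```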
It should be noted that the embodiments of the present disclosure explain step 303 by taking the feature of the image information as the Haar feature of the image information, the similarity between features of the image information as the matching confidence between Haar features, and the preset similarity threshold as the preset matching confidence threshold. In practical applications, the feature of an image may also be another feature, the similarity between features may also be another similarity, and the preset similarity threshold may also be another threshold, which is not limited by the embodiments of the present disclosure.
In step 304, the reference image and the similar images are divided into one group.
Based on the user's selection of the target area on the reference image, the reference image in the image set can be determined; based on the comparison of the matching confidences in step 303, the similar images in the image set can be determined. The reference image and the similar images in the image set can then be divided into one group, and, further, the other images in the image set except the reference image and the similar images can be divided into another group.
It should be noted that, after step 304, face recognition technology can also be used to recognize the image set again to determine the images in the image set that contain face features, then detect whether the other images in the image set except the reference image and the similar images contain face features, and divide the images containing face features, the reference image and the similar images into the same group.
Optionally, after step 302, it can also be judged whether the feature of the image information in the target area comprises a face feature. Specifically, face recognition technology can be used to recognize the image information in the target area and judge whether the feature of the image information in the target area comprises a face feature. If the feature of the image information in the target area comprises a face feature, face recognition technology can be used to recognize the image set, determine the images in the image set that contain face features, divide the images containing face features into one group, and divide the images in the image set containing no face features into another group.
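As an illustration of this optional check (again a sketch under assumptions, using OpenCV's stock frontal-face cascade rather than any specific recognizer named by the disclosure):

```python
import cv2
import numpy as np

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def target_area_contains_face(target_pixels: np.ndarray) -> bool:
    """Judge whether the image information in the target area comprises a
    face feature; if so, the whole image set can additionally be grouped
    by face detection as described above."""
    gray = (cv2.cvtColor(target_pixels, cv2.COLOR_BGR2GRAY)
            if target_pixels.ndim == 3 else target_pixels)
    return len(_face_cascade.detectMultiScale(gray, 1.1, 5)) > 0
```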
For example, before step 301, face recognition technology can be used to perform face recognition on the images in the image set so as to identify the images in the image set that contain face features. In step 301, the images not identified by face recognition technology are taken as the image set of step 301, and the image grouping method provided by the embodiments of the present disclosure is used to identify, among these images, the images containing face features that face recognition technology failed to identify. The images identified by the image grouping method provided by the embodiments of the present disclosure (the reference image and the similar images) and the images identified by face recognition technology are then divided into one group.
In summary, in the image grouping method provided by the embodiments of the present disclosure, an area selection instruction is generated according to a user's selection of a target area on a reference image in an image set, the image information in the target area is acquired, similar images in the image set are determined according to the image information in the target area, and the reference image and the similar images are divided into one group. Because the similar images whose similarity to the reference image is greater than the preset similarity threshold are selected from the image set according to the user's selection on the reference image, images whose face features are relatively blurred can be accurately divided into the group containing face images, which improves the accuracy of image grouping.
Fig. 4-1 is a schematic structural diagram of an image grouping device 40 according to an exemplary embodiment. As shown in Fig. 4-1, the image grouping device 40 may comprise:
a generation module 401 configured to generate an area selection instruction according to a user's selection of a target area on a reference image in an image set, wherein the area selection instruction is used to indicate the target area, and the image set comprises at least two images;
an acquisition module 402 configured to acquire image information in the target area according to the area selection instruction;
a determination module 403 configured to determine similar images in the image set according to the image information in the target area, wherein the similarity between a feature of the image information of a similar image and a feature of the image information in the target area is greater than a preset similarity threshold; and
a first grouping module 404 configured to divide the reference image and the similar images into one group.
In summary, in the image grouping device provided by the embodiments of the present disclosure, the generation module generates an area selection instruction according to a user's selection of a target area on a reference image in an image set, the acquisition module acquires the image information in the target area, the determination module determines similar images in the image set according to the image information in the target area, and the first grouping module divides the reference image and the similar images into one group. Because the similar images whose similarity to the reference image is greater than the preset similarity threshold are selected from the image set according to the user's selection on the reference image, images that contain face features but whose face features are relatively blurred can be accurately divided into the group containing face images, which improves the accuracy of image grouping.
Optionally, as shown in Fig. 4-2, the generation module 401 may comprise:
a detection submodule 4011 configured to detect whether a gesture operation performed by the user on an operating area is identical to a preset gesture operation, wherein the operating area is the region of a user interface of a terminal in which the reference image is displayed;
a determination submodule 4012 configured to, when the gesture operation performed by the user on the operating area is identical to the preset gesture operation, determine the region corresponding to the user's gesture operation on the operating area as the target area; and
a generation submodule 4013 configured to generate the area selection instruction according to the target area.
Optionally, the determination module 403 may be configured to:
acquire a reference Haar feature of the image information in the target area;
acquire a Haar feature of the image information of each image in the image set other than the reference image;
determine a matching confidence between the Haar feature of the image information of each image and the reference Haar feature; and
determine, among the images other than the reference image, an image whose Haar feature of the image information has a matching confidence with the reference Haar feature greater than a preset matching confidence threshold as a similar image.
Optionally, the reference image may comprise a face image.
In a first aspect, the preset gesture operation may comprise: a first sliding touch operation on the operating area, wherein a first sliding path of the first sliding touch operation is a closed path and coincides with the edge of the face image. The determination submodule 4012 may be configured to: acquire the first sliding path; determine, according to the first sliding path, a first rectangular area each side of which is tangent to the first sliding path; and determine the first rectangular area as the target area, or determine, according to the first rectangular area, a first circular area circumscribing the first rectangular area, and determine the first circular area as the target area.
In a second aspect, the preset gesture operation may comprise: a click operation on any two points in the operating area, wherein the face image is located in a second rectangular area that has the two points as two vertices and the line segment connecting the two points as a diagonal. The determination submodule 4012 may be configured to: determine the second rectangular area as the target area, or determine, according to the second rectangular area, a second circular area circumscribing the second rectangular area, and determine the second circular area as the target area.
In a third aspect, the preset gesture operation comprises: a second sliding touch operation on the operating area, wherein a second sliding path of the second sliding touch operation is a line segment, the starting point of the second sliding path coincides with the center of the face image, the end point of the second sliding path is located at the edge of the face image, and the face image is located in a third circular area whose center is the starting point of the second sliding path and whose radius is the length of the second sliding path. The determination submodule 4012 may be configured to: determine the third circular area as the target area, or determine, according to the third circular area, a third rectangular area each side of which is tangent to the third circular area, and determine the third rectangular area as the target area.
Fourth aspect, this default gesture operation comprises: for the 3rd slip touch operation of operating area, and the sliding path of the 3rd slip touch operation comprises: the 3rd sliding path and the 4th sliding path, 3rd sliding path and the 4th sliding path are line segment, and the 3rd sliding path and the 4th sliding path conllinear, connect the mid point of the line segment of the 3rd sliding path starting point and the 4th sliding path starting point and the center superposition of face figure, the terminal of the 3rd sliding path and the terminal of the 4th sliding path are all positioned at the edge of face figure, and face figure to be positioned at mid point be the center of circle, with in the 4th border circular areas that to connect the line segment of the 3rd sliding path terminal and the 4th sliding path terminal be diameter, determine that submodule 4012 can be configured to: the 4th border circular areas is defined as target area, or, according to the 4th border circular areas, determine every all outer the 4th rectangular area being cut in the 4th border circular areas, the 4th rectangular area is defined as target area.
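The four preset gestures above all reduce to deriving a rectangle or circle around the face image. The following Python sketch shows the corresponding geometry; it is illustrative only, coordinates are assumed to be screen pixels, and the function names are not from the disclosure.

import math

def bounding_rect(path):
    # Smallest axis-aligned rectangle containing every point of the closed
    # sliding path; each of its sides touches the path (first rectangular area).
    xs, ys = zip(*path)
    x, y = min(xs), min(ys)
    return (x, y, max(xs) - x, max(ys) - y)            # (x, y, width, height)

def circumscribed_circle(rect):
    # Circle through the four corners of a rectangle (first circular area).
    x, y, w, h = rect
    return ((x + w / 2.0, y + h / 2.0), math.hypot(w, h) / 2.0)

def rect_from_two_taps(p1, p2):
    # Second gesture: the two tapped points are opposite corners of the region.
    (x1, y1), (x2, y2) = p1, p2
    return (min(x1, x2), min(y1, y2), abs(x2 - x1), abs(y2 - y1))

def circle_from_radius_swipe(start, end):
    # Third gesture: a swipe from the face centre to its edge; the path is the radius.
    return (start, math.hypot(end[0] - start[0], end[1] - start[1]))

def circle_from_diameter_swipes(end_a, end_b):
    # Fourth gesture: the segment joining the two swipe end points is a diameter.
    center = ((end_a[0] + end_b[0]) / 2.0, (end_a[1] + end_b[1]) / 2.0)
    return (center, math.hypot(end_b[0] - end_a[0], end_b[1] - end_a[1]) / 2.0)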
In summary, in the image grouping device provided by the embodiments of the disclosure, the generation module generates a region selection instruction according to the user's selection of a target area on a reference image in an image collection, the acquisition module obtains the image information in the target area, the determination module determines the similar images in the image collection according to the image information in the target area, and the first grouping module groups the reference image and the similar images together. Because the similar images whose similarity to the reference image is greater than a preset similarity threshold are selected from the image collection according to the user's selection on the reference image, images with clear face features and images with blurred face features can both be placed in the group that contains the face image, which improves the accuracy of image grouping.
Fig. 4-3 is a schematic structural diagram of another image grouping device 40 according to an exemplary embodiment. As shown in Fig. 4-3, the image grouping device 40 may include:
a generation module 401, configured to generate a region selection instruction according to the user's selection of a target area on a reference image in an image collection, where the region selection instruction indicates the target area and the image collection includes at least two images;
an acquisition module 402, configured to obtain the image information in the target area according to the region selection instruction;
a determination module 403, configured to determine, according to the image information in the target area, the similar images in the image collection, where the similarity between the features of the image information of a similar image and the features of the image information in the target area is greater than a preset similarity threshold;
a first grouping module 404, configured to group the reference image and the similar images together;
a judgment module 405, configured to judge whether the features of the image information in the target area include a face feature;
a first identification module 406, configured to, when the features of the image information in the target area include a face feature, identify the image collection using face recognition technology and determine the images in the image collection that include a face feature; and
a second grouping module 407, configured to group the images that include a face feature together (a face-detection sketch is given below).
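As a rough stand-in for the judgment, identification, and grouping modules, the following Python sketch uses OpenCV's pretrained frontal-face Haar cascade to split an image collection into a face group and the remaining images. The cascade file, the file-path handling, and the detection parameters are assumptions; the patent does not specify which face recognition technology is used.

import cv2

# OpenCV's bundled frontal-face Haar cascade stands in for the unspecified
# "face recognition technology"; the cascade path is an assumption.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_face(image_path):
    # True if at least one face is detected in the image.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def group_face_images(image_paths):
    # Split the image collection into a group that contains faces and the rest.
    face_group = [p for p in image_paths if contains_face(p)]
    face_set = set(face_group)
    return face_group, [p for p in image_paths if p not in face_set]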
In summary, in the image grouping device provided by the embodiments of the disclosure, the generation module generates a region selection instruction according to the user's selection of a target area on a reference image in an image collection, the acquisition module obtains the image information in the target area, the determination module determines the similar images in the image collection according to the image information in the target area, and the first grouping module groups the reference image and the similar images together. Because the similar images whose similarity to the reference image is greater than a preset similarity threshold are selected from the image collection according to the user's selection on the reference image, images with clear face features and images with blurred face features can both be placed in the group that contains the face image, which improves the accuracy of image grouping.
Fig. 4-4 is a schematic structural diagram of yet another image grouping device 40 according to an exemplary embodiment. As shown in Fig. 4-4, the image grouping device 40 may include:
a generation module 401, configured to generate a region selection instruction according to the user's selection of a target area on a reference image in an image collection, where the region selection instruction indicates the target area and the image collection includes at least two images;
an acquisition module 402, configured to obtain the image information in the target area according to the region selection instruction;
a determination module 403, configured to determine, according to the image information in the target area, the similar images in the image collection, where the similarity between the features of the image information of a similar image and the features of the image information in the target area is greater than a preset similarity threshold;
a first grouping module 404, configured to group the reference image and the similar images together;
a judgment module 405, configured to judge whether the features of the image information in the target area include a face feature;
a first identification module 406, configured to, when the features of the image information in the target area include a face feature, identify the image collection using face recognition technology and determine the images in the image collection that include a face feature;
a second grouping module 407, configured to group the images that include a face feature together;
a second identification module 408, configured to identify the image collection using face recognition technology and determine the images in the image collection that include a face feature; and
a third grouping module 409, configured to place the images that include a face feature and the reference image into the same group.
In summary, in the image grouping device provided by the embodiments of the disclosure, the generation module generates a region selection instruction according to the user's selection of a target area on a reference image in an image collection, the acquisition module obtains the image information in the target area, the determination module determines the similar images in the image collection according to the image information in the target area, and the first grouping module groups the reference image and the similar images together. Because the similar images whose similarity to the reference image is greater than a preset similarity threshold are selected from the image collection according to the user's selection on the reference image, images with clear face features and images with blurred face features can both be placed in the group that contains the face image, which improves the accuracy of image grouping.
Fig. 5 is a structural block diagram of another image grouping device 500 according to an exemplary embodiment. For example, the image grouping device 500 may be a mobile phone, a computer, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 5, the device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 typically controls the overall operation of the device 500, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 502 may include one or more processors 520 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation of the device 500. Examples of such data include instructions for any application or method operated on the device 500, contact data, phonebook data, messages, pictures, videos, and so on. The memory 504 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 506 provides power to the various components of the device 500. The power component 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the device 500 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC), which is configured to receive external audio signals when the device 500 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 also includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.
The sensor component 514 includes one or more sensors for providing status assessments of various aspects of the device 500. For example, the sensor component 514 may detect the open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the device 500; the sensor component 514 may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 516 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 504 including instructions executable by the processor 520 of the device 500 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when the instructions in the storage medium are executed by the processor of the image grouping device 500, the image grouping device 500 is enabled to perform an image grouping method. The image grouping method may include:
generating a region selection instruction according to the user's selection of a target area on a reference image in an image collection, where the region selection instruction indicates the target area and the image collection includes at least two images;
obtaining the image information in the target area according to the region selection instruction;
determining, according to the image information in the target area, the similar images in the image collection, where the similarity between the features of the image information of a similar image and the features of the image information in the target area is greater than a preset similarity threshold; and
grouping the reference image and the similar images together (an end-to-end sketch of these steps follows).
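The four steps above can be tied together in a single Python sketch. This fragment is illustrative only: the feature and similarity functions, the 0.9 threshold, and the grayscale handling are assumptions standing in for implementation details the disclosure does not fix.

import cv2

def group_by_selected_region(reference_path, image_paths, region,
                             feature_fn, similarity_fn, threshold=0.9):
    # `region` is (x, y, w, h) in reference-image coordinates; `feature_fn` and
    # `similarity_fn` stand in for the unspecified feature extraction and
    # similarity measure, and `threshold` for the preset similarity threshold.
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    x, y, w, h = region
    ref_feat = feature_fn(ref[y:y + h, x:x + w])       # image information in the target area

    group = [reference_path]                           # the reference image starts the group
    for path in image_paths:
        if path == reference_path:
            continue
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        if similarity_fn(ref_feat, feature_fn(img)) > threshold:
            group.append(path)                         # similarity above the preset threshold
    return group

# Example wiring with the earlier sketches (all names are illustrative):
# group = group_by_selected_region("reference.jpg", all_paths, target_rect,
#                                  haar_features, match_confidence)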
Optionally, generating the region selection instruction according to the user's selection of the target area on the reference image in the image collection includes:
detecting whether the user's gesture operation on an operating area is identical to a preset gesture operation, where the operating area is the region of the terminal's user interface in which the reference image is displayed;
if the user's gesture operation on the operating area is identical to the preset gesture operation, determining the region corresponding to the user's gesture operation on the operating area as the target area; and
generating the region selection instruction according to the target area.
Optionally, determining, according to the image information in the target area, the similar images in the image collection includes:
obtaining a reference Haar feature of the image information in the target area;
obtaining a Haar feature of the image information of each image in the image collection other than the reference image;
determining a matching confidence between the Haar feature of the image information of each image and the reference Haar feature; and
determining, among the images other than the reference image, those images whose matching confidence between the Haar feature of their image information and the reference Haar feature is greater than a preset matching confidence threshold as the similar images.
Optionally, the reference image includes a face image, and the preset gesture operation includes a first sliding touch operation on the operating area, where the first sliding path of the first sliding touch operation is a closed path that coincides with the edge of the face image.
Determining the region corresponding to the user's gesture operation on the operating area as the target area includes:
obtaining the first sliding path;
determining, according to the first sliding path, a first rectangular area each side of which is circumscribed about the first sliding path; and
determining the first rectangular area as the target area,
or determining, according to the first rectangular area, a first circular area circumscribed about the first rectangular area and determining the first circular area as the target area.
Optionally, the reference image includes a face image, and the preset gesture operation includes a click operation on any two points in the operating area, where the face image is located within a second rectangular area that takes the two points as two vertices and the line segment connecting the two points as a diagonal.
Determining the region corresponding to the user's gesture operation on the operating area as the target area includes:
determining the second rectangular area as the target area,
or determining, according to the second rectangular area, a second circular area circumscribed about the second rectangular area and determining the second circular area as the target area.
Optionally, the reference image includes a face image, and the preset gesture operation includes a second sliding touch operation on the operating area, where the second sliding path of the second sliding touch operation is a line segment, the starting point of the second sliding path coincides with the center of the face image, the end point of the second sliding path is located at the edge of the face image, and the face image is located within a third circular area whose center is the starting point of the second sliding path and whose radius is the second sliding path.
Determining the region corresponding to the user's gesture operation on the operating area as the target area includes:
determining the third circular area as the target area,
or determining, according to the third circular area, a third rectangular area each side of which is circumscribed about the third circular area and determining the third rectangular area as the target area.
Optionally, the reference image includes a face image, and the preset gesture operation includes a third sliding touch operation on the operating area, where the sliding path of the third sliding touch operation includes a third sliding path and a fourth sliding path, both of which are line segments and are collinear; the midpoint of the line segment connecting the starting point of the third sliding path and the starting point of the fourth sliding path coincides with the center of the face image; the end point of the third sliding path and the end point of the fourth sliding path are both located at the edge of the face image; and the face image is located within a fourth circular area whose center is that midpoint and whose diameter is the line segment connecting the end point of the third sliding path and the end point of the fourth sliding path.
Determining the region corresponding to the user's gesture operation on the operating area as the target area includes:
determining the fourth circular area as the target area,
or determining, according to the fourth circular area, a fourth rectangular area each side of which is circumscribed about the fourth circular area and determining the fourth rectangular area as the target area.
Optionally, the method further includes:
judging whether the features of the image information in the target area include a face feature;
if the features of the image information in the target area include a face feature, identifying the image collection using face recognition technology and determining the images in the image collection that include a face feature; and
grouping the images that include a face feature together.
Optionally, the method further includes:
identifying the image collection using face recognition technology and determining the images in the image collection that include a face feature; and
placing the images that include a face feature and the reference image into the same group.
In summary, the image grouping device provided by the embodiments of the disclosure generates a region selection instruction according to the user's selection of a target area on a reference image in an image collection, obtains the image information in the target area, determines the similar images in the image collection according to the image information in the target area, and groups the reference image and the similar images together. Because the similar images whose similarity to the reference image is greater than a preset similarity threshold are selected from the image collection according to the user's selection on the reference image, images with clear face features and images with blurred face features can both be placed in the group that contains the face image, which improves the accuracy of image grouping.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the image grouping device described above may refer to the corresponding process in the foregoing image grouping method embodiments, and is not repeated here.
Other embodiments of the disclosure will be readily apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the claims.
It should be understood that the disclosure is not limited to the precise structure described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (19)

1. An image grouping method, characterized in that the method comprises:
generating a region selection instruction according to a user's selection of a target area on a reference image in an image collection, wherein the region selection instruction indicates the target area, and the image collection comprises at least two images;
obtaining image information in the target area according to the region selection instruction;
determining, according to the image information in the target area, similar images in the image collection, wherein the similarity between features of the image information of a similar image and features of the image information in the target area is greater than a preset similarity threshold; and
grouping the reference image and the similar images together.
2. The method according to claim 1, characterized in that generating the region selection instruction according to the user's selection of the target area on the reference image in the image collection comprises:
detecting whether the user's gesture operation on an operating area is identical to a preset gesture operation, wherein the operating area is the region of a terminal's user interface in which the reference image is displayed;
if the user's gesture operation on the operating area is identical to the preset gesture operation, determining the region corresponding to the user's gesture operation on the operating area as the target area; and
generating the region selection instruction according to the target area.
3. The method according to claim 1, characterized in that determining, according to the image information in the target area, the similar images in the image collection comprises:
obtaining a reference Haar feature of the image information in the target area;
obtaining a Haar feature of the image information of each image in the image collection other than the reference image;
determining a matching confidence between the Haar feature of the image information of each image and the reference Haar feature; and
determining, among the images other than the reference image, those images whose matching confidence between the Haar feature of their image information and the reference Haar feature is greater than a preset matching confidence threshold as the similar images.
4. The method according to claim 2, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a first sliding touch operation on the operating area, wherein the first sliding path of the first sliding touch operation is a closed path that coincides with the edge of the face image, and
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
obtaining the first sliding path;
determining, according to the first sliding path, a first rectangular area each side of which is circumscribed about the first sliding path; and
determining the first rectangular area as the target area,
or determining, according to the first rectangular area, a first circular area circumscribed about the first rectangular area and determining the first circular area as the target area.
5. The method according to claim 2, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a click operation on any two points in the operating area, wherein the face image is located within a second rectangular area that takes the two points as two vertices and the line segment connecting the two points as a diagonal, and
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the second rectangular area as the target area,
or determining, according to the second rectangular area, a second circular area circumscribed about the second rectangular area and determining the second circular area as the target area.
6. The method according to claim 2, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a second sliding touch operation on the operating area, wherein the second sliding path of the second sliding touch operation is a line segment, the starting point of the second sliding path coincides with the center of the face image, the end point of the second sliding path is located at the edge of the face image, and the face image is located within a third circular area whose center is the starting point of the second sliding path and whose radius is the second sliding path, and
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the third circular area as the target area,
or determining, according to the third circular area, a third rectangular area each side of which is circumscribed about the third circular area and determining the third rectangular area as the target area.
7. The method according to claim 2, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a third sliding touch operation on the operating area, wherein the sliding path of the third sliding touch operation comprises a third sliding path and a fourth sliding path, both of which are line segments and are collinear; the midpoint of the line segment connecting the starting point of the third sliding path and the starting point of the fourth sliding path coincides with the center of the face image; the end point of the third sliding path and the end point of the fourth sliding path are both located at the edge of the face image; and the face image is located within a fourth circular area whose center is the midpoint and whose diameter is the line segment connecting the end point of the third sliding path and the end point of the fourth sliding path, and
determining the region corresponding to the user's gesture operation on the operating area as the target area comprises:
determining the fourth circular area as the target area,
or determining, according to the fourth circular area, a fourth rectangular area each side of which is circumscribed about the fourth circular area and determining the fourth rectangular area as the target area.
8. The method according to claim 1, characterized in that the method further comprises:
judging whether the features of the image information in the target area comprise a face feature;
if the features of the image information in the target area comprise a face feature, identifying the image collection using face recognition technology and determining the images in the image collection that comprise a face feature; and
grouping the images that comprise a face feature together.
9. The method according to claim 1, characterized in that the method further comprises:
identifying the image collection using face recognition technology and determining the images in the image collection that comprise a face feature; and
placing the images that comprise a face feature and the reference image into the same group.
10. An image grouping device, characterized in that the image grouping device comprises:
a generation module, configured to generate a region selection instruction according to a user's selection of a target area on a reference image in an image collection, wherein the region selection instruction indicates the target area, and the image collection comprises at least two images;
an acquisition module, configured to obtain image information in the target area according to the region selection instruction;
a determination module, configured to determine, according to the image information in the target area, similar images in the image collection, wherein the similarity between features of the image information of a similar image and features of the image information in the target area is greater than a preset similarity threshold; and
a first grouping module, configured to group the reference image and the similar images together.
11. The image grouping device according to claim 10, characterized in that the generation module comprises:
a detection submodule, configured to detect whether the user's gesture operation on an operating area is identical to a preset gesture operation, wherein the operating area is the region of a terminal's user interface in which the reference image is displayed;
a determination submodule, configured to, when the user's gesture operation on the operating area is identical to the preset gesture operation, determine the region corresponding to the user's gesture operation on the operating area as the target area; and
a generation submodule, configured to generate the region selection instruction according to the target area.
12. The image grouping device according to claim 10, characterized in that the determination module is configured to:
obtain a reference Haar feature of the image information in the target area;
obtain a Haar feature of the image information of each image in the image collection other than the reference image;
determine a matching confidence between the Haar feature of the image information of each image and the reference Haar feature; and
determine, among the images other than the reference image, those images whose matching confidence between the Haar feature of their image information and the reference Haar feature is greater than a preset matching confidence threshold as the similar images.
13. The image grouping device according to claim 11, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a first sliding touch operation on the operating area, wherein the first sliding path of the first sliding touch operation is a closed path that coincides with the edge of the face image, and
the determination submodule is configured to:
obtain the first sliding path;
determine, according to the first sliding path, a first rectangular area each side of which is circumscribed about the first sliding path; and
determine the first rectangular area as the target area,
or determine, according to the first rectangular area, a first circular area circumscribed about the first rectangular area and determine the first circular area as the target area.
14. The image grouping device according to claim 11, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a click operation on any two points in the operating area, wherein the face image is located within a second rectangular area that takes the two points as two vertices and the line segment connecting the two points as a diagonal, and
the determination submodule is configured to:
determine the second rectangular area as the target area,
or determine, according to the second rectangular area, a second circular area circumscribed about the second rectangular area and determine the second circular area as the target area.
15. The image grouping device according to claim 11, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a second sliding touch operation on the operating area, wherein the second sliding path of the second sliding touch operation is a line segment, the starting point of the second sliding path coincides with the center of the face image, the end point of the second sliding path is located at the edge of the face image, and the face image is located within a third circular area whose center is the starting point of the second sliding path and whose radius is the second sliding path, and
the determination submodule is configured to:
determine the third circular area as the target area,
or determine, according to the third circular area, a third rectangular area each side of which is circumscribed about the third circular area and determine the third rectangular area as the target area.
16. The image grouping device according to claim 11, characterized in that the reference image comprises a face image,
the preset gesture operation comprises a third sliding touch operation on the operating area, wherein the sliding path of the third sliding touch operation comprises a third sliding path and a fourth sliding path, both of which are line segments and are collinear; the midpoint of the line segment connecting the starting point of the third sliding path and the starting point of the fourth sliding path coincides with the center of the face image; the end point of the third sliding path and the end point of the fourth sliding path are both located at the edge of the face image; and the face image is located within a fourth circular area whose center is the midpoint and whose diameter is the line segment connecting the end point of the third sliding path and the end point of the fourth sliding path, and
the determination submodule is configured to:
determine the fourth circular area as the target area,
or determine, according to the fourth circular area, a fourth rectangular area each side of which is circumscribed about the fourth circular area and determine the fourth rectangular area as the target area.
17. The image grouping device according to claim 10, characterized in that the image grouping device further comprises:
a judgment module, configured to judge whether the features of the image information in the target area comprise a face feature;
a first identification module, configured to, when the features of the image information in the target area comprise a face feature, identify the image collection using face recognition technology and determine the images in the image collection that comprise a face feature; and
a second grouping module, configured to group the images that comprise a face feature together.
18. The image grouping device according to claim 10, characterized in that the image grouping device further comprises:
a second identification module, configured to identify the image collection using face recognition technology and determine the images in the image collection that comprise a face feature; and
a third grouping module, configured to place the images that comprise a face feature and the reference image into the same group.
19. An image grouping device, characterized in that the image grouping device comprises:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
generate a region selection instruction according to a user's selection of a target area on a reference image in an image collection, wherein the region selection instruction indicates the target area, and the image collection comprises at least two images;
obtain image information in the target area according to the region selection instruction;
determine, according to the image information in the target area, similar images in the image collection, wherein the similarity between features of the image information of a similar image and features of the image information in the target area is greater than a preset similarity threshold; and
group the reference image and the similar images together.
CN201510848347.6A 2015-11-27 2015-11-27 Image grouping method and device Active CN105487774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510848347.6A CN105487774B (en) 2015-11-27 2015-11-27 Image grouping method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510848347.6A CN105487774B (en) 2015-11-27 2015-11-27 Image grouping method and device

Publications (2)

Publication Number Publication Date
CN105487774A true CN105487774A (en) 2016-04-13
CN105487774B CN105487774B (en) 2019-04-19

Family

ID=55674785

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510848347.6A Active CN105487774B (en) Image grouping method and device

Country Status (1)

Country Link
CN (1) CN105487774B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100083117A1 (en) * 2008-09-30 2010-04-01 Casio Computer Co., Ltd. Image processing apparatus for performing a designated process on images
CN102404494A (en) * 2010-09-08 2012-04-04 联想(北京)有限公司 Electronic equipment and method for acquiring image in determined area
CN103297699A (en) * 2013-05-31 2013-09-11 北京小米科技有限责任公司 Method and terminal for shooting images
CN104185981A (en) * 2013-10-23 2014-12-03 华为终端有限公司 Method and terminal selecting image from continuous captured image
CN104967786A (en) * 2015-07-10 2015-10-07 广州三星通信技术研究有限公司 Image selection method and device
CN105069426A (en) * 2015-07-31 2015-11-18 小米科技有限责任公司 Similar picture determining method and apparatus

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407441A (en) * 2016-09-28 2017-02-15 北京小米移动软件有限公司 Mistaken photo identification method and device
CN106650599A (en) * 2016-10-14 2017-05-10 北京智眸科技有限公司 A method for setting sparse sampling frequency regionally and selecting sampling points in stereo matching
WO2019106505A1 (en) * 2017-12-01 2019-06-06 International Business Machines Corporation Cognitive document image digitization
US10592738B2 (en) 2017-12-01 2020-03-17 International Business Machines Corporation Cognitive document image digitalization
CN111406262A (en) * 2017-12-01 2020-07-10 国际商业机器公司 Cognitive document image digitization
GB2582722A (en) * 2017-12-01 2020-09-30 Ibm Cognitive document image digitization
GB2582722B (en) * 2017-12-01 2021-03-03 Ibm Cognitive document image digitization
CN111406262B (en) * 2017-12-01 2023-09-22 国际商业机器公司 Cognition document image digitization
CN113780164A (en) * 2021-09-09 2021-12-10 福建天泉教育科技有限公司 Head posture recognition method and terminal
CN113780164B (en) * 2021-09-09 2023-04-28 福建天泉教育科技有限公司 Head gesture recognition method and terminal

Also Published As

Publication number Publication date
CN105487774B (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN104156149B (en) Acquisition parameters control method and device
CN104243819A (en) Photo acquiring method and device
CN105159559A (en) Mobile terminal control method and mobile terminal
CN105469056A (en) Face image processing method and device
CN105117033A (en) Connection method and device of external equipment
CN104469167A (en) Automatic focusing method and device
CN103995666A (en) Method and device for setting work mode
CN104933419A (en) Method and device for obtaining iris images and iris identification equipment
CN105426878A (en) Method and device for face clustering
CN104243829A (en) Self-shooting method and self-shooting device
CN105159496A (en) Touch event response method and mobile terminal
CN104216525A (en) Method and device for mode control of camera application
CN105224171A (en) icon display method, device and terminal
CN105630239B (en) Operate detection method and device
CN104156695A (en) Method and device for aligning face image
CN105487774A (en) Image grouping method and device
CN105208284A (en) Photographing reminding method and device
CN105323152A (en) Message processing method, device and equipment
CN105426042A (en) Icon position exchange method and apparatus
CN105204713A (en) Incoming call responding method and device
CN105335061A (en) Information display method and apparatus and terminal
CN103914151A (en) Information display method and device
CN105187671A (en) Recording method and device
CN104573642A (en) Face recognition method and device
CN104501790A (en) Calibration method and device of electronic compass

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant