WO2014002554A1 - Image processing apparatus, image processing method, and program - Google Patents
Image processing apparatus, image processing method, and program
- Publication number
- WO2014002554A1 (PCT/JP2013/058796)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature
- coordinate position
- group
- local feature
- Prior art date
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/464—Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
Definitions
- Some aspects according to the present invention relate to an image processing apparatus, an image processing method, and a program.
- Patent Document 1 discloses an apparatus that uses SIFT (Scale Invariant Feature Transform) features.
- In such an apparatus, a large number of feature points are detected from one image (referred to as a first image), and local feature values are generated from the coordinate positions, scales (sizes), and angles of those feature points.
- By collating the generated local feature quantity group, composed of a large number of local feature quantities, for the first image against the local feature quantity group for a second image, the same or a similar subject can be identified between the two images.
- Here, “similar” means that a part of the subject differs, that only a part of the subject is captured, or that the subject appears different because the shooting angle differs between the images.
- An object of the present invention is to provide an image processing apparatus, an image processing method, and a program that can collate accurately even when a large number of identical or similar subjects are included in an image.
- The first image processing apparatus includes: first feature quantity generating means for generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities (the feature quantities of local regions containing each feature point) and a first coordinate position information group including coordinate position information; region dividing means for clustering the feature points of the first image based on the first coordinate position information group; and collating means for collating, in cluster units, the first local feature quantity group with a second local feature quantity group consisting of the local feature quantities of feature points detected from a second image.
- The first image processing method includes: generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities of the local regions containing each feature point, together with a first coordinate position information group; clustering the feature points of the first image based on the first coordinate position information group; and collating, in cluster units, the first local feature quantity group with a second local feature quantity group.
- One program according to the present invention causes an image processing apparatus to execute: a step of generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities of the local regions containing each feature point, and a first coordinate position information group including coordinate position information; a step of clustering the feature points of the first image based on the first coordinate position information group; and a step of collating, in cluster units, the first local feature quantity group with a second local feature quantity group consisting of the local feature quantities of feature points detected from a second image.
- In the present invention, “part”, “means”, and “apparatus” do not simply mean physical means; they also include cases where the functions of the “part”, “means”, or “apparatus” are realized by software. Furthermore, the function of one “part”, “means”, or “apparatus” may be realized by two or more physical means or apparatuses, and the functions of two or more “parts”, “means”, or “apparatuses” may be realized by one physical means or apparatus.
- According to the present invention, it is possible to provide an image processing apparatus, an image processing method, and a program that can collate accurately even when a large number of identical or similar subjects are included in an image.
- FIG. 1 is a diagram illustrating a configuration of an image processing apparatus according to a first embodiment.
- FIG. 10 is a flowchart illustrating the flow of processing of the image processing apparatus according to the fourth embodiment.
- A diagram showing the configuration of the image processing apparatus according to the fifth embodiment.
- A diagram showing the configuration of the region dividing unit according to the fifth embodiment.
- FIG. 14 is a flowchart illustrating the flow of processing of the image processing apparatus according to the sixth embodiment.
- A diagram showing the configuration of the image processing apparatus according to the seventh embodiment.
- A diagram showing the configuration of the region dividing unit according to the seventh embodiment.
- A flowchart illustrating the flow of processing of the image processing apparatus according to the seventh embodiment.
- A diagram showing the configuration of the image processing apparatus according to the eighth embodiment.
- A diagram showing the configuration of the region dividing unit according to the eighth embodiment.
- A flowchart showing the flow of processing of the image processing apparatus according to the eighth embodiment.
- FIG. 1 is a functional block diagram showing a functional configuration of an image processing apparatus 10 according to the present embodiment.
- Each functional configuration of the image processing apparatus 10 may be realized as a program that is temporarily stored in a memory and that runs on a CPU (central processing unit).
- the image processing apparatus 10 includes a first local feature quantity generation unit 101, a second local feature quantity generation unit 103, an area division unit 105, and a collation unit 107.
- In the following, it is assumed that the first image includes many identical or similar subjects, and that the second image includes only one subject. The same applies to the second and subsequent embodiments.
- The first local feature quantity generation unit 101 detects a large number of feature points from the first image and outputs a first coordinate position information group, composed of the coordinate positions of those feature points, to the region dividing unit 105. In addition, the first local feature quantity generation unit 101 generates, from the coordinate position of each feature point, a first local feature quantity group consisting of the local feature quantities of the peripheral region (neighboring region) containing each feature point, and outputs it to the collation unit 107.
- The second local feature quantity generation unit 103 detects a large number of feature points in the second image by the same operation as the first local feature quantity generation unit 101, calculates a local feature quantity for each feature point, and outputs the second local feature quantity group generated from those local feature quantities to the collation unit 107.
- The region dividing unit 105 clusters the feature points of the first image using the first coordinate position information group output from the first local feature quantity generation unit 101, forming a plurality of clusters each containing one or more feature points, and outputs a cluster information group composed of the corresponding pieces of cluster information to the collation unit 107.
- The collation unit 107 receives the first local feature quantity group output from the first local feature quantity generation unit 101, the second local feature quantity group output from the second local feature quantity generation unit 103, and the cluster information group output from the region dividing unit 105, and judges identity or similarity for each feature point.
- the collation unit 107 identifies the same or similar subject between the first image and the second image, and outputs the identification result (collation result).
- Further, for the feature points determined to be the same or similar, the collation unit 107 may output information on the region of the first image determined to be the same or similar, based on the coordinate position information of the feature points belonging to the cluster. Details of the operation of each component of the image processing apparatus 10 are described below.
- The first local feature quantity generation unit 101 detects a large number of feature points from the first image and outputs the first coordinate position information group, composed of the coordinate positions of the detected feature points, to the region dividing unit 105. In addition, the first local feature quantity generation unit 101 generates a local feature quantity from the coordinate position of each detected feature point and outputs the first local feature quantity group, consisting of the generated local feature quantities, to the collation unit 107.
- The first local feature quantity generation unit 101 may generate the local feature quantity using, in addition to the coordinate position of each feature point, for example the scale and angle information of the region from which the local feature quantity is generated.
- the local feature amount may be a SIFT (Scale Invariant Feature Transform) feature amount or other local feature amount.
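As a concrete illustration of feature-point detection and local-feature generation, the following is a minimal, self-contained sketch. It is not SIFT (in practice a SIFT implementation such as OpenCV's would be used); the Harris-style corner response, the orientation-histogram descriptor, and all parameter values are illustrative stand-ins, not taken from the publication.

```python
import numpy as np

def detect_feature_points(img, k=0.05, rel_thresh=0.1):
    """Toy corner detector standing in for a SIFT-style detector.

    Returns an (N, 2) array of (row, col) positions: the "coordinate
    position information group" of the text."""
    dy, dx = np.gradient(img.astype(float))

    def box3(a):  # crude 3x3 box smoothing of the structure tensor
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    ixx, iyy, ixy = box3(dx * dx), box3(dy * dy), box3(dx * dy)
    r = ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2  # Harris response
    ys, xs = np.where(r > rel_thresh * r.max())
    return np.stack([ys, xs], axis=1)

def local_feature(img, y, x, radius=4, bins=8):
    """Gradient-orientation histogram over the neighboring region of a
    feature point: a stand-in for a SIFT local feature value."""
    patch = img[max(0, y - radius):y + radius + 1,
                max(0, x - radius):x + radius + 1].astype(float)
    dy, dx = np.gradient(patch)
    ang = np.arctan2(dy, dx) % (2 * np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi),
                           weights=np.hypot(dx, dy))
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist
```

The set of positions plays the role of the first coordinate position information group, and the descriptors computed at those positions play the role of the first local feature quantity group.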
- The second local feature quantity generation unit 103 generates the local feature quantities of each feature point of the second image by the same operation as the first local feature quantity generation unit 101, and outputs the second local feature quantity group, composed of the many generated local feature quantities, to the collation unit 107.
- Instead of the second local feature quantity generation unit 103, a database in which the second local feature quantity group of the second image is stored in advance may output the second local feature quantity group. The same applies to the second and subsequent embodiments.
- The region dividing unit 105 clusters the feature points of the first image using the first coordinate position information group output from the first local feature quantity generation unit 101, forming clusters of one or more feature points, and outputs a cluster information group, i.e., the cluster information of each cluster, to the collation unit 107.
- For the clustering of feature points, for example, a method may be used that classifies feature points that are close to each other, i.e., whose inter-point coordinate distance is small, into the same cluster.
- For the distance between two feature points, for example, the Euclidean distance, the Mahalanobis distance, or the city block (Manhattan) distance may be used.
- Alternatively, a method may be used that calculates the distances between all pairs of feature points and clusters the feature points based on the calculated distances using a graph cut.
- In that case, a graph is generated in which the feature points are nodes and the distance between two feature points gives the edge between the corresponding nodes.
- For the graph cut, for example, a normalized cut may be used, or the Markov Cluster algorithm may be used.
- For the normalized cut, the method described in Non-Patent Document 1 can be used.
- For the Markov Cluster algorithm, the method described in Non-Patent Document 2 can be used.
- Alternatively, the k-means method, the LBG method, or the LBQ method may be used for clustering the feature points. Specifically, the methods disclosed in Non-Patent Document 3 (k-means), Non-Patent Document 4 (LBG), and Non-Patent Document 5 (LBQ) can be used.
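As an illustration of clustering feature points by their coordinate positions, the following sketches the k-means option mentioned above (Lloyd's algorithm). The function name, iteration count, and seeding strategy are illustrative choices, not the publication's prescription.

```python
import numpy as np

def kmeans_cluster_points(coords, k, iters=20, seed=0):
    """Minimal k-means (Lloyd's algorithm) over feature-point coordinates.

    coords: (N, 2) array of feature-point positions.
    Returns an (N,) array of cluster labels, one per feature point."""
    rng = np.random.default_rng(seed)
    coords = np.asarray(coords, float)
    # Initialise centroids from k distinct feature points.
    centroids = coords[rng.choice(len(coords), size=k, replace=False)].copy()
    labels = np.zeros(len(coords), dtype=int)
    for _ in range(iters):
        # Assign each feature point to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(coords[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = coords[labels == j].mean(axis=0)
    return labels
```

The resulting label array corresponds to the cluster information group handed to the collation unit.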
- Alternatively, for each analysis region of arbitrary size, the feature points included in the region may be counted, and if the count value is equal to or greater than a predetermined threshold, the feature points included in that region may be classified into the same cluster.
- a method may be used in which the first image is divided into grids of an arbitrary size and each grid is used as the analysis area.
- the analysis areas may or may not overlap, for example.
- The size of the analysis region may be fixed or variable. In the variable case, for example, the analysis region size may be made smaller as the distance between the center of the analysis region and the image center becomes smaller, and larger as that distance becomes larger.
- A method may be used that classifies the feature points included in an analysis region whose count value is equal to or greater than a predetermined threshold into the same cluster, or a method that also classifies the feature points included in that region and its surrounding analysis regions into the same cluster.
- When analysis regions whose count values are equal to or greater than the predetermined threshold are adjacent or overlap, for example, the feature points included in those analysis regions may be classified into the same cluster, or a method that classifies them into different clusters may be used.
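The analysis-region approach described above can be sketched as follows, assuming fixed-size, non-overlapping grid cells and treating each qualifying cell as its own cluster (the variant that merges adjacent qualifying cells, also contemplated above, is omitted). The cell size and threshold are illustrative values.

```python
import numpy as np

def grid_cluster(coords, cell=16, min_count=3):
    """Analysis-region clustering sketch: split the image into fixed-size
    grid cells, count the feature points in each cell, and give every cell
    whose count reaches min_count its own cluster label (-1 = unclustered)."""
    coords = np.asarray(coords)
    cells = coords // cell                      # grid cell of each feature point
    uniq, inverse, counts = np.unique(cells, axis=0,
                                      return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    labels = -np.ones(len(coords), dtype=int)
    next_label = 0
    for idx in range(len(uniq)):
        if counts[idx] >= min_count:            # threshold on the count value
            labels[inverse == idx] = next_label
            next_label += 1
    return labels
```

Counting per cell avoids all-pairs distance computations, which is the speed advantage this family of methods is credited with later in the text.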
- The collation unit 107 uses the cluster information group output from the region dividing unit 105 to collate the first local feature quantity group and the second local feature quantity group in units of clusters, and determines identity or similarity between the feature quantities. Thereby, the same or similar subject is identified between the images.
- Specifically, for a target cluster, the distances between the local feature quantities belonging to that cluster of the first local feature quantity group and the local feature quantities of the second local feature quantity group are calculated, and from these distances a correspondence relationship between the feature points of the first image and the feature points of the second image (i.e., which feature point of the second image corresponds to which feature point of the first image) is determined.
- For the distance between local feature quantities, for example, the Euclidean distance may be used.
- the feature point having the smallest distance value may be used as the corresponding feature point.
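The per-cluster nearest-neighbor correspondence search just described can be sketched as follows; the Euclidean distance and the cut-off value are illustrative choices.

```python
import numpy as np

def match_cluster(cluster_feats, second_feats, max_dist=0.8):
    """For each local feature value in one cluster of the first image, find
    the nearest (Euclidean) local feature value in the second image, and
    keep the pair if the distance is below max_dist (a hypothetical cut-off)."""
    matches = []
    second_feats = np.asarray(second_feats, float)
    for i, f in enumerate(np.asarray(cluster_feats, float)):
        dists = np.linalg.norm(second_feats - f, axis=1)
        j = int(dists.argmin())
        if dists[j] < max_dist:
            matches.append((i, j))              # (cluster index, second-image index)
    return matches
```

Each returned pair is one entry of the correspondence relationship between the target cluster and the second image.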
- Based on the obtained correspondences, the target cluster and the second image may be determined to be the same (or similar).
- Alternatively, identity or similarity may be determined by performing geometric verification using the obtained correspondences. For example, assuming that the geometric relationship between the two images is a projective transformation (homography), the projective transformation parameters are estimated using a robust estimation method, the input correspondences that deviate from the estimated parameters are treated as outliers, and identity or similarity is determined based on the number of outliers.
- As the robust estimation method, for example, RANSAC (Random Sample Consensus) or the least squares method may be used.
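The geometric verification step can be sketched in the spirit of RANSAC: a homography is repeatedly fitted (here by the direct linear transform) to four randomly sampled correspondences, and the estimate with the most inliers is kept. The iteration count, inlier tolerance, and function names are illustrative, not the publication's parameters.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform from 4+ point pairs (src -> dst)."""
    a = []
    for (x, y), (u, v) in zip(src, dst):
        a.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        a.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(a, float))
    h = vt[-1].reshape(3, 3)
    denom = h[2, 2] if abs(h[2, 2]) > 1e-12 else 1.0
    return h / denom

def ransac_homography(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC sketch: fit to 4 random correspondences per iteration and
    keep the homography with the most inliers (reprojection error < tol)."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    best_h, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        h = fit_homography(src[idx], dst[idx])
        proj = src_h @ h.T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = int(np.sum(err < tol))
        if inliers > best_inliers:
            best_h, best_inliers = h, inliers
    return best_h, best_inliers
```

Correspondences that exceed the tolerance under the best estimate are the outliers on which the identity/similarity decision above would be based.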
- FIG. 2 is a flowchart showing the flow of processing of the image processing apparatus 10 according to the present embodiment.
- Each processing step described below can be executed in an arbitrary order, or in parallel, as long as no contradiction arises in the processing contents, and other steps may be added between the processing steps. Further, a step described as a single step for convenience may be executed as a plurality of steps, and steps described as a plurality of steps for convenience may be executed as a single step. The same applies to the second and subsequent embodiments.
- First, the first local feature quantity generation unit 101 detects a large number of feature points from the first image, and the second local feature quantity generation unit 103 detects a large number of feature points from the second image (S201).
- Next, the first local feature quantity generation unit 101 and the second local feature quantity generation unit 103 generate a local feature quantity from the coordinate position of each feature point (using the scale and angle as necessary, as described above) (S203).
- Then, the region dividing unit 105 clusters the feature points of the first image using the first coordinate position information group, i.e., the coordinate positions of the feature points of the first image (S205).
- Finally, the collation unit 107 identifies the same or similar subject between the images by collating the first local feature quantity group and the second local feature quantity group in units of the clusters of the first local feature quantities (S207).
- As described above, the image processing apparatus 10 according to the present embodiment clusters the large number of feature points detected from the first image on the basis of their coordinate positions, and collates the first local feature quantity group and the second local feature quantity group in cluster units. By collating the local feature quantities in units of clusters in this way, a large number of identical or similar subjects in an image can be identified with high accuracy.
- FIG. 3 is a diagram illustrating a functional configuration of the image processing apparatus 10 according to the second embodiment.
- the image processing apparatus 10 includes a first local feature value generation unit 101, a second local feature value generation unit 103, a region division unit 105, and a collation unit 107.
- the operations of the second local feature quantity generation unit 103 and the collation unit 107 are the same as those in the first embodiment, and a description thereof will be omitted.
- the first local feature quantity generation unit 101 detects a large number of feature points of the first image and outputs the first coordinate position information group to the region division unit 105 as in the first embodiment.
- Further, the first local feature quantity generation unit 101 generates the first local feature quantity group, i.e., the local feature quantities of the feature points of the first image, by the same operation as in the first embodiment, and outputs it to both the region dividing unit 105 and the collation unit 107.
- The region dividing unit 105 clusters the feature points of the first image using the first local feature quantity group and the first coordinate position information group output from the first local feature quantity generation unit 101, and outputs a cluster information group indicating the clustering result to the collation unit 107.
- FIG. 4 shows a detailed functional configuration of the area dividing unit 105 according to the present embodiment.
- the region dividing unit 105 includes a similarity calculation unit 401 and a feature point clustering unit 403.
- The similarity calculation unit 401 calculates the similarity between any two local feature quantities in the first local feature quantity group output from the first local feature quantity generation unit 101, and outputs the calculated similarities to the feature point clustering unit 403 as a similarity information group.
- As the similarity calculation method, for example, the distance between two arbitrary local feature quantities (e.g., the Euclidean distance) may be calculated and the similarity derived from that distance. In that case, for example, the similarity may be made higher when the distance value is small and lower when the distance value is large. It is also conceivable to normalize the distance between the feature quantities by a predetermined value and calculate the similarity from the normalized value.
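One possible distance-to-similarity mapping of the kind described above; the exponential form and the scale parameter are illustrative choices, not the publication's formula.

```python
import numpy as np

def descriptor_similarity(a, b, scale=1.0):
    """Map the Euclidean distance between two local feature values to a
    similarity in (0, 1]: small distance -> high similarity, large
    distance -> low similarity."""
    dist = np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return float(np.exp(-dist / scale))
```

Here `scale` plays the role of the predetermined normalization value: it controls how quickly similarity decays as the feature distance grows.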
- The feature point clustering unit 403 clusters the feature points of the first image using the first coordinate position information group output from the first local feature quantity generation unit 101 and the similarity information group output from the similarity calculation unit 401, and outputs a cluster information group indicating the clustering result to the collation unit 107.
- For example, the feature point clustering unit 403 may perform the clustering so that local feature quantities with a high similarity (a small distance value between them) are not classified into the same cluster.
- For example, a method is conceivable in which the distance between each feature point of the first image and the centroid of each cluster is calculated, and the feature point is classified into the cluster with the smallest calculated distance.
- At that time, a feature point whose similarity to a feature point already included in a given cluster is equal to or higher than a threshold (for example, one that is also far from the cluster centroid) may be excluded from that cluster and classified into another cluster.
- For the distance, for example, the Euclidean distance, the Mahalanobis distance, or the city block (Manhattan) distance may be used.
- Alternatively, the clustering may be performed using a graph cut. For example, with the feature points as nodes, edge values may be calculated based on the distance between the feature points and the similarity between their local feature quantities (for example, making the edge value between two nodes larger as the distance between the feature points becomes smaller), and a graph cut applied to the resulting graph. For the graph cut, for example, a normalized cut may be used, or the Markov cluster algorithm may be used.
- FIG. 5 is a flowchart showing a processing flow of the image processing apparatus 10.
- First, the first local feature quantity generation unit 101 detects many feature points from the first image, and the second local feature quantity generation unit 103 detects many feature points from the second image (S501).
- Next, the first local feature quantity generation unit 101 and the second local feature quantity generation unit 103 generate, from the coordinate position of each feature point, the local feature quantity groups consisting of the feature quantities of the feature points (the first local feature quantity group and the second local feature quantity group) (S503).
- the region dividing unit 105 clusters the feature points of the first image using the first coordinate position information group and the first local feature amount group (S505).
- the collating unit 107 identifies the same or similar subject between images by collating the first local feature group and the second local feature group in cluster units (S507).
- In the present embodiment, clustering is performed so that feature points having similar local feature quantities are less likely to be included in the same cluster. Therefore, even when identical or similar subjects are close to each other, the subjects can be identified with higher accuracy than in the first embodiment.
- FIG. 6 is a diagram illustrating a functional configuration of the image processing apparatus 10 according to the present embodiment.
- the image processing apparatus 10 includes a first local feature quantity generation unit 101, a second local feature quantity generation unit 103, a region division unit 105, and a collation unit 107.
- the operation of the first local feature quantity generation unit 101 is the same as that of the second embodiment, and the operation of the collation unit 107 is the same as that of the first embodiment, and thus the description thereof is omitted here.
- the operations of the second local feature quantity generation unit 103 and the region division unit 105 will be mainly described.
- The second local feature quantity generation unit 103 generates the local feature quantities of each feature point of the second image by the same operation as in the first embodiment, and outputs the second local feature quantity group, which is the set of those local feature quantities, to the region dividing unit 105 and the collation unit 107.
- The region dividing unit 105 clusters the feature points of the first image using the first local feature quantity group and the first coordinate position information group output by the first local feature quantity generation unit 101 and the second local feature quantity group output by the second local feature quantity generation unit 103, and outputs a cluster information group reflecting the clustering result to the collation unit 107.
- the functional configuration and operation of the area dividing unit 105 will be described with reference to FIG.
- FIG. 7 is a diagram illustrating a configuration of the area dividing unit 105 according to the present embodiment. As illustrated in FIG. 7, the region dividing unit 105 includes a corresponding point searching unit 405 and a feature point clustering unit 403.
- The corresponding point search unit 405 uses the first local feature quantity group output from the first local feature quantity generation unit 101 and the second local feature quantity group output from the second local feature quantity generation unit 103 to generate correspondence information, i.e., information on which local feature quantity in the first local feature quantity group matches which local feature quantity in the second local feature quantity group (in other words, which feature point of the first image corresponds to which feature point of the second image). The corresponding point search unit 405 then outputs the many pieces of generated correspondence information to the feature point clustering unit 403 as a correspondence information group.
- In the correspondence relationship, one feature point of the second image may correspond to a plurality of feature points of the first image, or the feature points of the first image may correspond to the feature points of the second image on a one-to-one basis.
- The feature point clustering unit 403 uses the coordinate position information group output from the first local feature quantity generation unit 101 and the correspondence information group output from the corresponding point search unit 405: after selecting, from the feature points of the first image, those that correspond to feature points of the second image, it clusters the selected feature points of the first image based on their coordinate positions. The feature point clustering unit 403 then outputs a cluster information group indicating the clustering result to the collation unit 107. For the clustering of feature points, for example, the method described in any of Non-Patent Documents 3 to 5 may be used.
- When one feature point of the second image corresponds to a plurality of feature points of the first image, the feature point clustering unit 403 may cluster those feature points of the first image into different clusters.
- Alternatively, the feature point clustering unit 403 may perform the clustering by graph cut. For example, it is conceivable to generate a graph in which the plurality of feature points of the first image corresponding to the same feature point of the second image are nodes with small edge values between them, and to apply a graph cut so as to divide between nodes having small edge values.
- For the graph cut, a normalized cut or the Markov cluster algorithm may be used.
- The feature point clustering unit 403 may classify two feature points of the first image into the same cluster when the distance between them is short (for example, when the distance value is below a certain threshold), and into different clusters when the distance between them is long (for example, when the distance value exceeds another threshold). For this purpose as well, clustering by graph cut as described above may be used.
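The two ideas of this embodiment, keeping only the first-image feature points that have a correspondence and then grouping them by coordinate position, can be sketched as follows. The single-linkage grouping with a distance cut is one illustrative realization of the "short distance, same cluster" rule above; the function name and threshold are assumptions.

```python
import numpy as np

def select_and_cluster(coords, matched_indices, near=5.0):
    """Keep only the first-image feature points listed in matched_indices
    (those with a correspondence in the second image), then cluster them
    so that points closer than `near` end up in the same cluster
    (single-linkage via a small union-find)."""
    pts = np.asarray(coords, float)[list(matched_indices)]
    n = len(pts)
    parent = list(range(n))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):                 # link every near pair
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) < near:
                parent[find(i)] = find(j)

    roots, labels = {}, []
    for i in range(n):                 # relabel components 0, 1, 2, ...
        labels.append(roots.setdefault(find(i), len(roots)))
    return pts, np.array(labels)
```

The returned labels are the cluster information group for the selected feature points; unmatched feature points never enter the clustering, mirroring the selection step above.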
- Alternatively, as in the first embodiment, the feature point clustering unit 403 may count the feature points included in each analysis region of arbitrary size and, if the count value is equal to or greater than a predetermined threshold, classify the feature points included in that region into the same cluster. Clustering the feature points in this way has the effect that processing can be performed faster than with the methods of Non-Patent Documents 3 to 5. Further, the third embodiment may be combined with the second embodiment.
- FIG. 8 is a flowchart showing a processing flow of the image processing apparatus 10.
- the first local feature quantity generation unit 101 detects a large number of feature points from the first image. Further, the second local feature quantity generation unit 103 detects a large number of feature points from the second image (S801). Next, the first local feature value generation unit 101 and the second local feature value generation unit 103 generate a local feature value from the coordinate position of each feature point (S803).
- Then, the region dividing unit 105 obtains the correspondences of the first local feature quantities, i.e., the correspondences between the feature points of the two images (S805). Next, using the first coordinate position information group and the correspondence information group, the region dividing unit 105 selects, from among the feature points of the first image, those that have a correspondence relationship with feature points of the second image, and clusters the selected feature points of the first image based on their coordinate positions (S807).
- the collation unit 107 collates the first local feature quantity group and the second local feature quantity group in units of clusters, and identifies the same or similar subject between images (S809).
- As described above, the image processing apparatus 10 according to the present embodiment extracts, from the large number of feature points detected from the first image, those that match feature points of the second image, clusters them based on their coordinate positions, and identifies the same or similar subject between the images by collating the first local feature quantity group and the second local feature quantity group in cluster units. Thereby, effects similar to those of the first embodiment can be obtained.
- moreover, even when many feature points of the first image are detected from regions other than the subject, the same or similar subjects can be identified more accurately than in the first embodiment.
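The correspondence search and coordinate-position clustering described above (S805 and S807 in FIG. 8) can be sketched as follows. This is a minimal illustration, not the publication's implementation: the nearest-neighbour distance threshold, the grid-cell clustering, and all function names and parameters are illustrative assumptions.

```python
import numpy as np

def match_feature_points(desc1, kp1, desc2, max_dist=0.5):
    """Select feature points of the first image that have a corresponding
    feature point in the second image (cf. S805).
    desc1/desc2: (N, D) local feature arrays; kp1: (N, 2) coordinates."""
    matched = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        if dists.min() <= max_dist:          # a correspondence was found
            matched.append(kp1[i])
    return np.array(matched)

def grid_cluster(points, cell=50.0, min_count=3):
    """Cluster the matched coordinates by counting points per grid cell
    and keeping cells whose count reaches the threshold (cf. S807)."""
    clusters = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        clusters.setdefault(key, []).append(p)
    return [np.array(v) for v in clusters.values() if len(v) >= min_count]
```

In practice the descriptors would come from a detector such as SIFT; here they are plain vectors so the flow can be followed end to end.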
- FIG. 9 is a diagram illustrating a functional configuration of the image processing apparatus 10 according to the present embodiment.
- the image processing apparatus 10 includes a first local feature quantity generation unit 101, a second local feature quantity generation unit 103, an area division unit 105, and a collation unit 107.
- the operation of the first local feature quantity generation unit 101 is the same as that of the second embodiment, and the operation of the collation unit 107 is the same as that of the first embodiment, and thus the description thereof is omitted.
- the operations of the second local feature quantity generation unit 103 and the region division unit 105 will be mainly described.
- the second local feature quantity generation unit 103 detects many feature points of the second image by the same operation as in the first embodiment and outputs the second coordinate position information group to the region division unit 105. In addition, the second local feature quantity generation unit 103 generates the local feature quantities of the feature points of the second image by the same operation as in the first embodiment, and outputs a second local feature quantity group containing those local feature quantities to the region dividing unit 105 and the collating unit 107.
- the region dividing unit 105 clusters the feature points of the first image using the first local feature quantity group and the first coordinate position information group output by the first local feature quantity generation unit 101, and the second local feature quantity group and the second coordinate position information group output by the second local feature quantity generation unit 103, and outputs a cluster information group indicating the clustering result to the matching unit 107.
- the configuration and operation of the region dividing unit 105 will be described with reference to FIG.
- FIG. 10 is a diagram illustrating a configuration of the area dividing unit 105 of the present embodiment.
- the region dividing unit 105 includes a corresponding point searching unit 405, a ratio calculating unit 407, and a feature point clustering unit 403.
- Corresponding point search unit 405 generates a corresponding information group by the same operation as in the third embodiment, and outputs the generated corresponding information group to ratio calculation unit 407 and feature point clustering unit 403.
- using the first local feature quantity group output by the first local feature quantity generation unit 101, the second local feature quantity group output by the second local feature quantity generation unit 103, and the correspondence information group output by the corresponding point search unit 405, the ratio calculation unit 407 calculates, for any two feature points of the first image, the ratio between their distance (hereinafter referred to as the distance between feature points) and the distance between the corresponding feature points of the second image, and outputs the many calculated ratios to the feature point clustering unit 403 as a ratio information group.
- the feature point clustering unit 403 clusters the feature points of the first image using the first coordinate position information group output by the first local feature quantity generation unit 101, the correspondence information group output by the corresponding point search unit 405, and the ratio information group output by the ratio calculation unit 407, and outputs a cluster information group indicating the result to the matching unit 107.
- clustering may be performed using a graph cut.
- for example, a graph cut may be performed on a graph obtained by using the feature points as nodes and increasing the edge value between two nodes the smaller the distance between the corresponding feature points and the smaller the difference in their ratios.
- a normalized cut may be used, or a Markov cluster algorithm may be used.
- the feature point clustering unit 403 may also cluster the feature points of the first image as follows, using the coordinate position information group, the correspondence information group, and the ratio information group. In this case, the belonging probability that a feature point belongs to an arbitrary cluster is calculated using the ratio information group of that feature point and of a plurality of surrounding feature points, and the feature point clustering unit 403 performs clustering based on the calculated belonging probability and the coordinate position information of the feature point. For example, a feature point of the first image corresponding to an arbitrary feature point of the second image is selected based on the correspondence information group, the distance between that feature point and each cluster centroid is calculated based on the coordinate position information and the belonging probability, and the feature point is classified into the cluster with the smallest calculated distance.
- the following equation can be used to calculate the distance between an arbitrary feature point and the cluster centroid.
- in Equation 1, G_i is the distance between an arbitrary feature point of the first image and the i-th cluster centroid, p_i is the intra-cluster probability density function of the i-th cluster, f_i is the occurrence probability of the i-th cluster, and s_i is the belonging probability that the feature point belongs to the i-th cluster.
- the probability density distribution p_i is as shown in Equation 2, which with the symbols below is the standard D-dimensional Gaussian density, p_i(v) = exp(−(v − r)ᵀ Σ_i⁻¹ (v − r) / 2) / ((2π)^(D/2) |Σ_i|^(1/2)), where D is the number of dimensions of the input data, v is the input data, r is the cluster centroid of the i-th cluster, and Σ_i is the intra-cluster covariance matrix of the i-th cluster.
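With the symbols defined above (D, v, r, Σ_i), Equation 2 reads as the standard D-dimensional Gaussian density; the following is a sketch under that assumption, since the publication's equation image is not reproduced here.

```python
import numpy as np

def intra_cluster_density(v, r, cov):
    """D-dimensional Gaussian density p_i(v) for input data v, cluster
    centroid r, and intra-cluster covariance matrix cov (Equation 2,
    read as the conventional multivariate normal form)."""
    v, r = np.asarray(v, float), np.asarray(r, float)
    D = len(v)
    diff = v - r
    norm = (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(cov))
    return float(np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm)
```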
- the cluster occurrence probability f_i is a value greater than 0 and less than or equal to 1; as its update method, for example, the method described in Non-Patent Document 5 may be used.
- s i can be calculated as follows using the distance between feature points of the first image and the distance between feature points of the second image.
- the ratio ratio nk of the distance between feature points in the first image and the distance between feature points in the second image is calculated by the following equation.
- here, v_n is the coordinate position of the n-th feature point of the first image, u_n' is the coordinate position of the n'-th feature point of the second image corresponding to the feature point at v_n, v_k is the coordinate position of the k-th closest feature point to v_n among the feature points in the vicinity of v_n, and u_k' is the coordinate position of the k'-th feature point of the second image corresponding to the feature point at v_k; that is, ratio_nk = ‖v_n − v_k‖ / ‖u_n' − u_k'‖. The ranges of k and k' are 0 < k, k' ≤ K (0 < K).
- the ratio ratio_nk calculated in this way takes different values depending on whether the two feature points selected from the first image are both feature points of the same subject or are feature points of different subjects.
- the median median_n of the K calculated distance ratios is then obtained, and the belonging probability s_i is calculated by Equations 4, 5, and 6. Here, label_nk is the cluster number to which the k-th of the K feature points in the vicinity of the n-th feature point of the first image belongs, and N_i is the number of feature points among those K nearby feature points that belong to the i-th cluster.
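Equations 3 to 6 are not reproduced in this text. The ratio and median steps that the symbols above describe can nevertheless be sketched as follows; the mapping from median_n and label_nk to the belonging probability s_i is elided in the source and is therefore not shown.

```python
import numpy as np

def distance_ratios(v_n, u_n, neighbors):
    """ratio_nk: distance between feature points of the first image
    divided by the distance between the corresponding feature points of
    the second image, for K nearby feature points.
    neighbors: list of (v_k, u_k') coordinate pairs."""
    ratios = []
    for v_k, u_k in neighbors:
        d1 = np.linalg.norm(np.asarray(v_n, float) - np.asarray(v_k, float))
        d2 = np.linalg.norm(np.asarray(u_n, float) - np.asarray(u_k, float))
        ratios.append(d1 / d2)
    return ratios

def median_ratio(ratios):
    """median_n of the K computed ratios; the median is robust against
    neighbors that actually belong to a different subject."""
    return float(np.median(ratios))
```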
- FIG. 11 is a flowchart showing a processing flow of the image processing apparatus 10.
- first, the first local feature quantity generation unit 101 detects a large number of feature points from the first image, and the second local feature quantity generation unit 103 detects a large number of feature points from the second image (S1101). Next, the first local feature quantity generation unit 101 and the second local feature quantity generation unit 103 generate the local feature quantities (the first local feature quantity group and the second local feature quantity group) from the coordinate position of each feature point (S1103).
- the region dividing unit 105 obtains the correspondence between the local feature quantities, that is, the correspondence between the feature points of the two images, based on the distance between each local feature quantity of the first local feature quantity group and each local feature quantity of the second local feature quantity group (S1105). Next, the area dividing unit 105 uses the first coordinate position information group, the second coordinate position information group, and the correspondence information group to calculate the ratio between the distance between two feature points of the first image and the distance between the corresponding two feature points of the second image (S1107). Further, the region dividing unit 105 clusters the feature points of the first image using the first coordinate position information group, the correspondence information group, and the ratio information group (S1109). The collation unit 107 collates the first local feature quantity group and the second local feature quantity group in units of clusters to identify the same or similar subjects between the images (S1111).
- as described above, the image processing apparatus 10 according to the present embodiment clusters the feature points of the first image that correspond to feature points of the second image, based on their coordinate positions and the ratios of the distances between feature points, and identifies the same or similar subjects between the images by collating the first local feature quantity group and the second local feature quantity group in units of clusters. Thereby, the same effect as in the third embodiment can be obtained. Moreover, since clustering is performed based on both the coordinate positions and the distance ratios, the feature points can be clustered with higher accuracy than in the third embodiment even when identical or similar subjects in the first image are close to each other. Therefore, a large number of identical or similar subjects in the image can be identified more accurately than in the third embodiment.
- FIG. 12 is a diagram illustrating a functional configuration of the image processing apparatus 10 according to the fifth embodiment. As shown in FIG. 12, the configuration of the image processing apparatus 10 is the same as that of the fourth embodiment. However, the functional configuration and operation of the area dividing unit 105 are different. Hereinafter, the configuration and operation of the region dividing unit 105 will be described with reference to FIG.
- FIG. 13 is a diagram illustrating a functional configuration of the area dividing unit 105 according to the present embodiment.
- the region dividing unit 105 includes a corresponding point searching unit 405, a ratio calculating unit 407, a rotation amount calculating unit 409, a relative coordinate position database 411, and a feature point clustering unit 403.
- since the operation of the ratio calculation unit 407 is the same as that of the fourth embodiment, the description thereof is omitted.
- the operations of the corresponding point search unit 405, the rotation amount calculation unit 409, the relative coordinate position database 411, and the feature point clustering unit 403 will be mainly described.
- the corresponding point search unit 405 generates a correspondence information group by the same operation as in the third embodiment, and outputs the generated correspondence information group to the ratio calculation unit 407, the rotation amount calculation unit 409, and the feature point clustering unit 403.
- using the first coordinate position information group output by the first local feature quantity generation unit 101, the correspondence information group output by the corresponding point search unit 405, and the second coordinate position information group output by the second local feature quantity generation unit 103, the rotation amount calculation unit 409 calculates the direction of a vector formed by two feature points of the first image and the direction of a vector formed by the corresponding two feature points of the second image. Further, the rotation amount calculation unit 409 calculates the rotation amount of the subject of the first image from the calculated vector directions, and outputs the many calculated rotation amounts to the feature point clustering unit 403 as a rotation amount information group. For example, the following equation may be used to calculate the direction θ_ij^1 of the vector formed by two feature points of the first image.
- x is the x coordinate value of the feature point
- y is the y coordinate value of the feature point
- i and j are feature point numbers.
- the following equation may be used to calculate the direction θ_nm^2 of the vector formed by the two feature points of the second image.
- n is the feature point number of the second image corresponding to the i-th feature point of the first image, and m is the feature point number of the second image corresponding to the j-th feature point of the first image.
- the rotation amount may be calculated according to the following equation using the vector direction calculated based on, for example, Equation 7 or Equation 8.
- ρ_ij indicates the rotation amount of the vector formed by the i-th and j-th feature points of the first image.
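Equations 7 to 9 are likewise not reproduced in this text. Under the usual reading of the symbols above, the vector direction is the atan2 of the coordinate difference and the rotation amount is the difference of the two directions; the following sketch assumes exactly that form.

```python
import math

def vector_direction(p_i, p_j):
    """Direction of the vector from the i-th to the j-th feature point
    (Equations 7 and 8, read as the usual atan2 form)."""
    return math.atan2(p_j[1] - p_i[1], p_j[0] - p_i[0])

def rotation_amount(v_i, v_j, u_n, u_m):
    """rho_ij: rotation of the subject of the first image, taken as the
    difference between the two vector directions (assumed Equation 9),
    wrapped into the interval (-pi, pi]."""
    theta1 = vector_direction(v_i, v_j)   # direction in the first image
    theta2 = vector_direction(u_n, u_m)   # direction in the second image
    return (theta1 - theta2 + math.pi) % (2 * math.pi) - math.pi
```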
- the relative coordinate position database 411 includes a table indicating the relative coordinate position between the reference point (for example, the subject center) of the second image and each feature point of the second image.
- the reference point is a predetermined coordinate position in the second image.
- the reference point may be the center of the subject or the upper left coordinate position of the second image. In the following description, it is assumed that the reference point indicates the subject center.
- the relative coordinate position database 411 according to the present embodiment will be described with reference to FIG.
- FIG. 14 is a diagram illustrating a specific example of the relative coordinate position database 411.
- the relative coordinate position database 411 includes, for example, feature point numbers and relative coordinate positions as data items.
- in the example of FIG. 14, the relative coordinate position of the first feature point with respect to the coordinate position of the subject center is (100, 100), and the relative coordinate position of the second feature point with respect to the coordinate position of the subject center is (50, −10).
- the relative coordinate position u n ′ may be calculated as follows.
- here, n is the feature point number, x_n is the x coordinate value of the n-th feature point, y_n is the y coordinate value of the n-th feature point, x_c is the x coordinate value of the subject center, and y_c is the y coordinate value of the subject center; that is, u_n' = (x_n − x_c, y_n − y_c).
- the feature point clustering unit 403 clusters the feature points of the first image using the first coordinate position information group output by the first local feature quantity generation unit 101, the correspondence information group output by the corresponding point search unit 405, the ratio information group output by the ratio calculation unit 407, the rotation amount information group output by the rotation amount calculation unit 409, and the relative coordinate positions stored in the relative coordinate position database 411. Further, the feature point clustering unit 403 outputs a cluster information group indicating the result of the clustering to the matching unit 107.
- for clustering of the feature points, for example, a method can be used in which the many feature points of the first image that correspond to arbitrary feature points of the second image are selected based on the correspondence information group, the subject center point of the first image is estimated based on the coordinate positions of the selected feature points, and the estimated subject center points are clustered based on their coordinate positions. For example, the following equation can be used to calculate the coordinate position of the subject center point.
- i and j are feature point numbers
- v i is the coordinate position of the i-th feature point of the first image
- c ij is the coordinate position of the subject center point
- n is the feature point number of the second image corresponding to the i-th feature point of the first image.
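Equations 10 and 11 are not reproduced in this text. A common form of this center-voting step, consistent with the quantities named above (the coordinate position v_i, the ratio, the rotation amount, and the stored relative coordinate position u_n'), is c = v_i − ratio · R(rotation) · u_n'; the sketch below assumes that form and is not taken from the publication.

```python
import math

def estimate_center(v_i, rel_u, scale, rot):
    """Estimate the subject center point of the first image from one
    matched feature point: its coordinate v_i, the relative coordinate
    position rel_u stored for the corresponding second-image feature
    point, the ratio (scale) and the rotation amount (rot).
    Assumed form: c = v_i - scale * R(rot) @ rel_u."""
    c, s = math.cos(rot), math.sin(rot)
    rx = c * rel_u[0] - s * rel_u[1]   # rotate the stored offset
    ry = s * rel_u[0] + c * rel_u[1]
    return (v_i[0] - scale * rx, v_i[1] - scale * ry)
```

Each matched feature point casts one such vote; when the votes come from the same subject instance they concentrate around that subject's center.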
- any of the methods described in Non-Patent Documents 3 to 5 can be used.
- for example, the subject center points included in each analysis region of an arbitrary size are counted, and if the count value is equal to or greater than a predetermined threshold, a method of classifying the subject center points included in that region into the same cluster may be used.
- a method of dividing the first image into grids of an arbitrary size and using each grid as an analysis area may be used.
- the analysis areas may or may not overlap, for example.
- the size of the analysis region may be fixed or variable. When it is variable, for example, a method may be used in which the analysis region size is made smaller the smaller the distance between the center of the analysis region and the image center, and larger the larger that distance.
- a method of classifying the subject center points included in an analysis region whose count value is equal to or greater than a predetermined threshold into the same cluster may be used, or a method of classifying the subject center points included in that region and in the surrounding analysis regions into the same cluster may be used.
- when adjacent analysis regions each have a count value equal to or greater than the threshold, a method of classifying the subject center points included in those analysis regions into the same cluster may be used, or a method of classifying them into different clusters may be used.
- the feature point clustering unit 403 may, for example, use the cluster information of c_ij as the cluster information of v_i.
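The analysis-region counting above, together with the note that the cluster information of c_ij may serve as the cluster information of v_i, can be sketched as a grid histogram; the cell size, the threshold, and the non-overlapping grid are illustrative choices.

```python
def cluster_feature_points(centers, feature_points, cell=40.0, min_count=2):
    """Cluster estimated subject center points c_ij on a grid and reuse
    each center's cluster as the cluster of its feature point v_i.
    centers[i] is assumed to have been estimated from feature_points[i]."""
    cells = {}
    for idx, c in enumerate(centers):
        key = (int(c[0] // cell), int(c[1] // cell))
        cells.setdefault(key, []).append(idx)
    clusters = []
    for idxs in cells.values():
        if len(idxs) >= min_count:           # analysis region passes threshold
            clusters.append([feature_points[i] for i in idxs])
    return clusters
```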
- FIG. 15 is a flowchart showing a processing flow of the image processing apparatus 10.
- the first local feature quantity generation unit 101 detects a large number of feature points from the first image, and the second local feature quantity generation unit 103 detects a large number of feature points from the second image (S1501). Next, the first local feature quantity generation unit 101 and the second local feature quantity generation unit 103 generate a local feature quantity from the coordinate position of each feature point (S1503). Based on the distance between each local feature quantity of the first local feature quantity group and each local feature quantity of the second local feature quantity group, the region dividing unit 105 obtains the correspondence between the local feature quantities, that is, the correspondence between the feature points of the two images (S1505).
- the region dividing unit 105 uses the first coordinate position information group, the second coordinate position information group, and the correspondence information group to calculate the ratio between the distance between two feature points of the first image and the distance between the corresponding two feature points of the second image (S1507).
- the area dividing unit 105 calculates the rotation amount of the subject of the first image using the first coordinate position information group, the second coordinate position information group, and the correspondence information group (S1509).
- the area dividing unit 105 estimates the subject center points of the first image using the first coordinate position information group, the correspondence information group, the ratio information group, and the rotation amount information group, and clusters the estimated subject center points based on their coordinate positions (S1511).
- the collation unit 107 collates the first local feature quantity group and the second local feature quantity group in units of clusters to identify the same or similar subject between images (S1513).
- as described above, the image processing apparatus 10 according to the present embodiment estimates the subject center points using the coordinate position of each feature point of the first image, the coordinate position of each feature point of the second image, the correspondence between the feature points of the two images, and the relative coordinate positions generated in advance. Then, the estimated subject center points are clustered based on their coordinate positions, and the first local feature quantity group and the second local feature quantity group are collated in units of clusters to identify the same or similar subjects between the images. As a result, the image processing apparatus 10 according to the present embodiment can obtain the same effects as the fourth embodiment.
- the image processing apparatus 10 collects the feature points of the first image at the subject center and then clusters them, the feature points can be clustered more accurately than in the fourth embodiment. Therefore, a large number of identical or similar subjects in the image can be identified with higher accuracy than in the fourth embodiment.
- FIG. 16 is a block diagram illustrating a functional configuration of the image processing apparatus 10 according to the sixth embodiment.
- the image processing apparatus 10 has the same configuration as that of the third embodiment. However, the configuration and operation of the region dividing unit 105 are different from those in the third embodiment. Hereinafter, the configuration and operation of the region dividing unit 105 will be described with reference to FIG.
- FIG. 17 is a diagram illustrating a configuration of the area dividing unit 105.
- the region dividing unit 105 includes a corresponding point searching unit 405, a relative coordinate position database 411, and a feature point clustering unit 403. Since the operation of the corresponding point search unit 405 is the same as that of the third embodiment and the configuration of the relative coordinate position database 411 is the same as that of the fifth embodiment, description thereof is omitted here.
- the operation of the feature point clustering unit 403 will be mainly described.
- the feature point clustering unit 403 clusters the feature points of the first image using the first coordinate position information group output by the first local feature quantity generation unit 101, the correspondence information group output by the corresponding point search unit 405, and the relative coordinate positions stored in the relative coordinate position database 411, and outputs the resulting cluster information group to the matching unit 107.
- for clustering of the feature points, for example, it is conceivable to select, based on the correspondence information group, the many feature points of the first image that correspond to arbitrary feature points of the second image, estimate the subject center points of the first image based on the coordinate positions of the selected feature points, and cluster the estimated subject center points based on their coordinate positions by the same method as in the fifth embodiment. For example, the subject center may be estimated using Equations 10 and 12.
- c i is the coordinate position of the subject center point
- v i is the coordinate position of the i-th feature point of the first image
- n is the feature point number of the second image corresponding to the i-th feature point of the first image.
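Since Equations 10 and 12 are not reproduced in this text, the sketch below assumes the simplest form consistent with the symbols above: without the ratio and rotation amount of the fifth embodiment, the center estimate reduces to subtracting the stored relative coordinate position.

```python
def estimate_center_simple(v_i, rel_u):
    """Estimate the subject center c_i from the i-th feature point of the
    first image and the relative coordinate position u_n' stored for the
    corresponding second-image feature point.
    Assumed form: c_i = v_i - u_n' (no scale or rotation correction)."""
    return (v_i[0] - rel_u[0], v_i[1] - rel_u[1])
```

This is why, as noted below, the method works when a congruent transformation holds between the two appearances of the subject.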
- FIG. 18 is a flowchart illustrating a processing flow of the image processing apparatus 10 according to the present embodiment. Hereinafter, the processing flow of the image processing apparatus 10 will be described with reference to FIG.
- the first local feature generating unit 101 detects a large number of feature points from the first image. Further, the second local feature quantity generation unit 103 detects a large number of feature points from the second image (S1801). Next, the first local feature value generation unit 101 and the second local feature value generation unit 103 generate local feature values related to the feature points from the coordinate positions of the feature points (S1803).
- the region dividing unit 105 obtains the correspondence between the local feature quantities, that is, the correspondence between the feature points of the two images, based on the distance between each local feature quantity of the first local feature quantity group and each local feature quantity of the second local feature quantity group (S1805).
- the area dividing unit 105 uses the first coordinate position information group and the correspondence information group to select the many feature points of the first image that correspond to arbitrary feature points of the second image, estimates the subject center points of the first image based on the coordinate positions of the selected feature points and the relative coordinate positions stored in the relative coordinate position database 411, and clusters the estimated subject center points based on their coordinate positions (S1807).
- the collating unit 107 identifies the same or similar subject between images by collating the first local feature group and the second local feature group in cluster units (S1809).
- as described above, the image processing apparatus 10 according to the present embodiment estimates the subject center points using the coordinate position of each feature point of the first image, the correspondence between the feature points of the two images, and the relative coordinate positions generated in advance. Further, the estimated subject center points are clustered based on their coordinate positions, and the first local feature quantity group and the second local feature quantity group are collated in units of clusters to identify the same or similar subjects between the images. Therefore, when a congruent transformation holds between the subject as captured in the first image and the subject as captured in the second image, the same effect as in the first embodiment can be obtained.
- FIG. 19 is a diagram illustrating a functional configuration of the image processing apparatus 10 according to the present embodiment.
- the image processing apparatus 10 according to the present embodiment has the same configuration as that of the fifth embodiment, but the configuration and operation of the area dividing unit 105 are different.
- the configuration and operation of the region dividing unit 105 will be described with reference to FIG.
- FIG. 20 is a diagram illustrating a configuration of the area dividing unit 105 according to the present embodiment.
- the region dividing unit 105 according to the present embodiment includes a corresponding point searching unit 405, a ratio calculating unit 407, a rotation amount calculating unit 409, and a feature point clustering unit 403.
- the operation of the ratio calculation unit 407 is the same as that of the fourth embodiment, and the operations of the corresponding point search unit 405 and the rotation amount calculation unit 409 are the same as those of the fifth embodiment.
- the operation of the feature point clustering unit 403 will be described.
- the feature point clustering unit 403 clusters the feature points of the first image using the first coordinate position information group output by the first local feature quantity generation unit 101, the correspondence information group output by the corresponding point search unit 405, the ratio information group output by the ratio calculation unit 407, and the rotation amount information group output by the rotation amount calculation unit 409. Then, the feature point clustering unit 403 outputs a cluster information group, which is information on each cluster obtained as a result of the clustering, to the matching unit 107.
- in the clustering, feature points whose calculated ratios and rotation amounts differ little are classified into the same cluster, and feature points whose ratios and rotation amounts differ greatly are classified into different clusters.
- as a clustering method, clustering using a graph cut can be considered. For example, using the feature points as nodes, edge values are calculated based on the distance between feature points, the difference in ratio, and the difference in rotation amount (for example, the smaller the distance between two feature points and the smaller the differences in their ratios and rotation amounts, the larger the edge value between the two nodes), and a graph cut may be applied to the resulting graph.
- a normalized cut or a Markov cluster algorithm may be used for the graph cut.
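The following is a simplified stand-in for the graph construction described above: edge weights grow as the spatial distance, ratio difference, and rotation-amount difference shrink, weak edges are cut, and the remaining connected components are taken as clusters. A normalized cut or Markov cluster algorithm would operate on the same weight matrix; the weight function and thresholds here are illustrative assumptions.

```python
import math

def cluster_by_graph(points, ratios, rotations, sigma=1.0, min_w=0.5):
    """Build a weighted graph over feature points: the edge weight between
    two nodes grows as their spatial distance, ratio difference, and
    rotation-amount difference shrink. Weak edges are cut and the
    connected components of the remaining graph are the clusters."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(points[i], points[j])
            dr = abs(ratios[i] - ratios[j])
            da = abs(rotations[i] - rotations[j])
            w = math.exp(-(d + dr + da) / sigma)
            if w >= min_w:                  # keep only strong edges
                adj[i].append(j)
                adj[j].append(i)
    # connected components of the thresholded graph
    seen, clusters = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = [], [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        clusters.append(comp)
    return clusters
```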
- FIG. 21 is a flowchart showing the flow of processing of the image processing apparatus 10 according to the present embodiment.
- the processing of the image processing apparatus 10 according to the present embodiment will be described with reference to FIG.
- the first local feature quantity generation unit 101 detects a large number of feature points from the first image. Further, the second local feature quantity generation unit 103 detects a large number of feature points from the second image (S2101). Next, the first local feature value generation unit 101 and the second local feature value generation unit 103 generate a local feature value from the coordinate position of each feature point (S2103).
- the region dividing unit 105 obtains the correspondence between the local feature quantities, that is, the correspondence between the feature points of the two images, based on the distance between each local feature quantity of the first local feature quantity group and each local feature quantity of the second local feature quantity group (S2105). Next, the area dividing unit 105 uses the first coordinate position information group, the second coordinate position information group, and the correspondence information group to calculate the ratio between the distance between two feature points of the first image and the distance between the corresponding two feature points of the second image (S2107). Then, the area dividing unit 105 calculates the rotation amount of the subject of the first image using the first coordinate position information group, the second coordinate position information group, and the correspondence information group (S2109).
- the area dividing unit 105 clusters the feature points of the first image using the first coordinate position information group, the correspondence information group, the ratio information group, and the rotation amount information group (S2111).
- the collation unit 107 collates the first local feature quantity group and the second local feature quantity group in cluster units, and identifies the same or similar subject between images (S2113).
- as described above, the image processing apparatus 10 according to the present embodiment estimates the rotation amount and the ratio of the subject of the first image using the coordinate position of each feature point of the first image, the coordinate position of each feature point of the second image, and the correspondence between the feature points of the two images. Then, the feature points of the first image are clustered based on these, and the first local feature quantity group and the second local feature quantity group are collated in units of clusters to identify the same or similar subjects between the images. Therefore, even when adjacent identical or similar subjects in the first image differ in size and rotation amount, the same effect as in the first embodiment can be obtained.
- FIG. 22 is a functional block diagram showing a functional configuration of the image processing apparatus 10 according to the present embodiment.
- the functional configuration of the image processing apparatus 10 according to the present embodiment is the same as that of the fifth embodiment, but the configuration and operation of the region dividing unit 105 are different.
- the configuration and operation of the region dividing unit 105 will be described with reference to FIG.
- FIG. 23 is a diagram illustrating a configuration of the area dividing unit 105 according to the present embodiment.
- the region dividing unit 105 includes a corresponding point searching unit 405, a rotation amount calculating unit 409, and a feature point clustering unit 403. Since the operations of the corresponding point search unit 405 and the rotation amount calculation unit 409 are the same as those in the fifth embodiment, description thereof is omitted here.
- the operation of the feature point clustering unit 403 will be mainly described.
- The feature point clustering unit 403 clusters the feature points of the first image using the first coordinate position information group output from the first local feature quantity generation unit 101, the correspondence information group output from the corresponding point search unit 405, and the rotation amount information group output from the rotation amount calculation unit 409, and outputs a cluster information group indicating the result to the collation unit 107.
- For clustering, for example, feature points with a small difference in the calculated rotation amount may be classified into the same cluster (and feature points with a large difference in rotation amount into different clusters). Clustering may also be performed using a graph cut: for example, a graph may be constructed whose edge values are calculated from the distance between feature points and the difference in rotation amount (the smaller the distance between feature points and the smaller the difference in rotation amount, the larger the edge value between the nodes), and a graph cut applied to that graph.
- a normalized cut may be used, or a Markov cluster algorithm may be used.
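As a lightweight stand-in for the graph-cut formulation above, clustering by spatial proximity and rotation-amount similarity can be sketched as single-linkage grouping with thresholds. All names and thresholds here are illustrative, not taken from the patent:

```python
import math

def cluster_feature_points(points, rotations, d_max, theta_max):
    """Single-linkage clustering: two feature points join the same cluster
    when they are spatially close AND their estimated rotation amounts are
    similar.  This is a simplification of the graph-cut idea: edges with
    large weight (small distance, small rotation difference) are kept,
    all others are effectively cut."""
    n = len(points)
    parent = list(range(n))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            dist = math.hypot(points[i][0] - points[j][0],
                              points[i][1] - points[j][1])
            # rotation difference wrapped into [0, pi]
            dtheta = abs(rotations[i] - rotations[j]) % (2 * math.pi)
            dtheta = min(dtheta, 2 * math.pi - dtheta)
            if dist <= d_max and dtheta <= theta_max:
                union(i, j)

    # one representative label per cluster
    return [find(i) for i in range(n)]
```

For example, two nearby points with similar rotation amounts end up in one cluster, while a distant pair with a different rotation forms another.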
- FIG. 24 is a flowchart illustrating a processing flow of the image processing apparatus 10 according to the present embodiment. Hereinafter, the processing flow of the image processing apparatus 10 will be described with reference to FIG.
- First, the first local feature quantity generation unit 101 detects many feature points from the first image, and the second local feature quantity generation unit 103 detects many feature points from the second image (S2401). Next, the first local feature quantity generation unit 101 and the second local feature quantity generation unit 103 each generate a local feature quantity from the coordinate position of each feature point (S2403).
- Based on the distances between each local feature quantity of the first local feature quantity group and each local feature quantity of the second local feature quantity group, the region dividing unit 105 obtains the correspondence between local feature quantities, that is, the correspondence between the feature points of the two images (S2405). Next, the region dividing unit 105 calculates the rotation amount of the subject of the first image using the first coordinate position information group, the second coordinate position information group, and the correspondence information group (S2407). The region dividing unit 105 then clusters the feature points of the first image using the first coordinate position information group, the correspondence information group, and the rotation information group (S2409). Finally, the collation unit 107 collates the first local feature quantity group with the second local feature quantity group per cluster to identify the same or similar subjects between the images (S2411).
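The rotation-amount estimation step (S2407) could, under the vector-direction formulation used in the fifth embodiment, be sketched as below. The function name, the use of a median, and the input layout are assumptions for illustration, not the patent's exact equations:

```python
import math

def estimate_rotation(pts1, pts2, matches):
    """Estimate the rotation amount of the subject in image 1 relative to
    image 2.  For every pair of matched feature points, the direction of
    the vector joining them is computed in each image (via atan2), and
    the rotation is taken as the median of the direction differences,
    which is robust to a few mismatched correspondences."""
    diffs = []
    for a in range(len(matches)):
        for b in range(a + 1, len(matches)):
            i1, j1 = matches[a]
            i2, j2 = matches[b]
            t1 = math.atan2(pts1[i2][1] - pts1[i1][1],
                            pts1[i2][0] - pts1[i1][0])
            t2 = math.atan2(pts2[j2][1] - pts2[j1][1],
                            pts2[j2][0] - pts2[j1][0])
            # wrap the angle difference into (-pi, pi]
            d = (t1 - t2 + math.pi) % (2 * math.pi) - math.pi
            diffs.append(d)
    diffs.sort()
    return diffs[len(diffs) // 2]
```

Using the median rather than the mean keeps a single wrong correspondence from skewing the estimate.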
- As described above, the image processing apparatus 10 according to the eighth embodiment estimates the rotation amount of the subject of the first image using the coordinate position of each feature point of the first image, the coordinate position of each feature point of the second image, and the correspondence between the feature points of the two images. The feature points of the first image are then clustered based on the estimated rotation amount and the first coordinate position information group, and the first local feature quantity group and the second local feature quantity group are collated per cluster to identify the same or similar subjects between the images. Therefore, even when the rotation amount of the subjects in the first image differs between adjacent identical or similar subjects, the same effect as in the first embodiment can be obtained.
- An image processing apparatus comprising: first feature quantity generation means for generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information; region dividing means for clustering the feature points of the first image based on the first coordinate position information group; and collation means for collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from a second image.
- the region dividing means clusters the feature points of the first image using the similarity between the local feature amounts of the first local feature amount group and the first coordinate position information group.
- The image processing apparatus according to any one of appendix 1 to appendix 3, wherein the region dividing means calculates, based on the inter-feature-quantity distances between the first local feature quantity group and the second local feature quantity group, a correspondence information group indicating the correspondence between the feature points of the first image and the second image, and clusters the feature points of the first image using the correspondence information group and the first coordinate position information group.
- The image processing apparatus according to appendix 4, wherein the region dividing means clusters the feature points of the first image based on the coordinate position, in the first image, of a predetermined reference point of the second image, the coordinate position being estimated based on the relative coordinate positions between each feature point of the second image and the reference point, the correspondence information group, and the first coordinate position information group.
- The image processing apparatus according to appendix 5, wherein the region dividing means calculates, using the first coordinate position information group, a second coordinate position information group that is coordinate position information of feature points detected from the second image, and the correspondence information group, the ratio between the distance between any two feature points of the first image and the distance between the two corresponding feature points of the second image, and calculates the rotation amount of the subject of the first image using the first coordinate position information group, the second coordinate position information group, and the correspondence information group.
- (Appendix 7) The image processing apparatus according to appendix 6, wherein the region dividing means clusters the feature points of the first image using at least one of the ratio and the rotation amount together with the first coordinate position information group.
- The image processing apparatus according to appendix 6 or appendix 7, wherein the region dividing means clusters the feature points of the first image using the coordinate position, in the first image, of the reference point of the second image, estimated using the rotation amount, the ratio, the relative coordinate positions, and the first coordinate position information group.
- An image processing method comprising: generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information; clustering the feature points of the first image based on the first coordinate position information group; and collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from the second image.
- A program causing an image processing apparatus to execute: generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information; clustering the feature points of the first image based on the first coordinate position information group; and collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from the second image.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
(1.1 Functional Configuration)
Hereinafter, the functional configuration of the image processing apparatus 10 according to the present embodiment will be described with reference to FIG. 1. FIG. 1 is a functional block diagram showing the functional configuration of the image processing apparatus 10 according to the present embodiment. Each functional component of the image processing apparatus 10 may be realized as a program that is temporarily stored in a memory and runs on a CPU (Central Processing Unit).
The operation of each component of the image processing apparatus 10 will be described in detail below.
(1.2.1 Feature Quantity Generation)
As described above, the first local feature quantity generation unit 101 detects many feature points from the first image and outputs a first coordinate position information group, composed of the coordinate positions of the detected feature points, to the region dividing unit 105. The first local feature quantity generation unit 101 also generates a local feature quantity from the coordinate position of each detected feature point and outputs a first local feature quantity group, composed of the generated local feature quantities, to the collation unit 107.
The region dividing unit 105 clusters the feature points of the first image using the first coordinate position information group output by the first local feature quantity generation unit 101, and outputs to the collation unit 107 a cluster information group, that is, the cluster information of each cluster composed of one or more feature points. Feature points may be clustered, for example, by classifying feature points whose coordinate positions are close, that is, whose inter-point distance (distance between coordinate positions) is small, into the same cluster. Here, the distance between two feature points may be, for example, the Euclidean distance, the Mahalanobis distance, or the city-block distance.
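The three distance measures mentioned above can be written out as follows (a plain-Python sketch; the Mahalanobis form assumes a diagonal covariance matrix for brevity):

```python
import math

def euclidean(p, q):
    """Straight-line (L2) distance between two 2-D coordinate positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def city_block(p, q):
    """City-block (Manhattan / L1) distance."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def mahalanobis_diag(p, q, var):
    """Mahalanobis distance with a diagonal covariance (var = (var_x, var_y)).
    The full form uses the inverse covariance matrix; the diagonal case is
    shown here for brevity."""
    return math.sqrt((p[0] - q[0]) ** 2 / var[0] +
                     (p[1] - q[1]) ** 2 / var[1])
```

Any of these can serve as the inter-point distance when classifying close feature points into the same cluster.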
Using the cluster information group output by the region dividing unit 105, the collation unit 107 collates the first local feature quantity group with the second local feature quantity group per cluster, and judges the identity or similarity between the feature quantities. The same or similar subjects are thereby identified between the images.
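As an illustration of per-cluster collation, one simple (hypothetical) scheme counts, for each cluster, how many of its local feature quantities have a sufficiently close descriptor in the second image; clusters with enough votes would be judged to show the same or similar subject:

```python
def match_clusters(local_feats1, clusters, local_feats2, max_dist):
    """Collate the first local feature quantity group with the second,
    per cluster.  local_feats1[i] is the descriptor of feature point i in
    the first image and clusters[i] its cluster label; a descriptor counts
    as matched when its nearest neighbour in the second image is within
    max_dist (Euclidean distance in descriptor space)."""
    votes = {}
    for feat, cid in zip(local_feats1, clusters):
        best = min(sum((a - b) ** 2 for a, b in zip(feat, f2)) ** 0.5
                   for f2 in local_feats2)
        if best <= max_dist:
            votes[cid] = votes.get(cid, 0) + 1
    return votes  # cluster label -> number of matched descriptors
```

A real system would use many-dimensional descriptors (e.g. 128-D SIFT) and an approximate nearest-neighbour index, but the per-cluster voting structure is the same.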
Next, the processing flow of the image processing apparatus 10 according to the present embodiment will be described with reference to FIG. 2. FIG. 2 is a flowchart showing the processing flow of the image processing apparatus 10 according to the present embodiment.
As described above, the image processing apparatus 10 according to the present embodiment clusters the many feature points detected from the first image based on their coordinate positions, and collates the first local feature quantity group with the second local feature quantity group per cluster. By collating local feature quantities per cluster in this way, many identical or similar subjects in an image can be identified with high accuracy.
The second embodiment will be described below. In the following description, configurations identical or similar to those of the first embodiment are given the same reference numerals, and their description may be omitted. Descriptions of operations and effects may likewise be omitted when they are the same as in the first embodiment. The same applies to the third and subsequent embodiments.
FIG. 3 is a diagram showing the functional configuration of the image processing apparatus 10 according to the second embodiment. As shown in FIG. 3, the image processing apparatus 10 includes a first local feature quantity generation unit 101, a second local feature quantity generation unit 103, a region dividing unit 105, and a collation unit 107. Since the operations of the second local feature quantity generation unit 103 and the collation unit 107 are the same as in the first embodiment, their description is omitted.
The processing flow of the image processing apparatus 10 according to the present embodiment will be described below with reference to FIG. 5. FIG. 5 is a flowchart showing the processing flow of the image processing apparatus 10.
As described above, in the present embodiment, the many feature points detected from the first image are clustered based on their coordinate positions and similarities, and the first local feature quantity group and the second local feature quantity group are collated per cluster, thereby identifying the same or similar subjects between the images. The same effect as in the first embodiment can thereby be obtained.
(3.1 Functional Configuration)
The functional configuration of the image processing apparatus 10 according to the third embodiment will be described with reference to FIG. 6. FIG. 6 is a diagram showing the functional configuration of the image processing apparatus 10 according to the present embodiment.
Furthermore, the third embodiment may also be combined with the second embodiment.
The processing flow of the image processing apparatus 10 according to the present embodiment will be described below with reference to FIG. 8. FIG. 8 is a flowchart showing the processing flow of the image processing apparatus 10.
As described above, the image processing apparatus 10 according to the third embodiment clusters, among the many feature points detected from the first image, those that match feature points of the second image based on their coordinate positions, and collates the first local feature quantity group with the second local feature quantity group per cluster, thereby identifying the same or similar subjects between the images. The same effect as in the first embodiment can thereby be obtained.
(4.1 Functional Configuration)
The functional configuration of the image processing apparatus 10 according to the fourth embodiment will be described with reference to FIG. 9. FIG. 9 is a diagram showing the functional configuration of the image processing apparatus 10 according to the present embodiment.
For calculating the distance between an arbitrary feature point and a cluster centroid, for example, the following equation may be used.
The within-cluster occurrence probability f_i takes a value greater than 0 and at most 1; for updating it, for example, the method described in Non-Patent Document 5 may be used.
First, the ratio ratio_nk between the inter-feature-point distance in the first image and the inter-feature-point distance in the second image is calculated by the following equation.
Next, the median median_n of the K calculated inter-feature-point distance ratios is obtained, and the membership probability s_i is calculated by Equations 4, 5, and 6.
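The ratio-and-median computation described above can be sketched as follows; the function name `distance_ratios`, the match layout, and the use of `math.dist` are illustrative assumptions, not the patent's notation:

```python
import math

def distance_ratios(pts1, pts2, matches):
    """For each matched feature point n, compute the ratios between
    inter-point distances in the first image and the corresponding
    inter-point distances in the second image (ratio_nk for all k != n),
    then return the median of those ratios (median_n) for every n.
    matches is a list of (index_in_image1, index_in_image2) pairs."""
    medians = []
    for n, (i1, j1) in enumerate(matches):
        ratios = []
        for k, (i2, j2) in enumerate(matches):
            if k == n:
                continue
            d1 = math.dist(pts1[i1], pts1[i2])  # distance in image 1
            d2 = math.dist(pts2[j1], pts2[j2])  # distance in image 2
            if d2 > 0:
                ratios.append(d1 / d2)
        ratios.sort()
        medians.append(ratios[len(ratios) // 2])  # median_n
    return medians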
The processing flow of the image processing apparatus 10 according to the present embodiment will be described below with reference to FIG. 11. FIG. 11 is a flowchart showing the processing flow of the image processing apparatus 10.
As described above, the image processing apparatus 10 according to the present embodiment clusters the feature points of the first image that correspond to feature points of the second image based on their coordinate positions and the ratios of inter-feature-point distances, and collates the first local feature quantity group with the second local feature quantity group per cluster, thereby identifying the same or similar subjects between the images. The same effect as in the third embodiment can thereby be obtained.
(5.1 Functional Configuration)
The functional configuration of the image processing apparatus 10 according to the present embodiment will be described with reference to FIG. 12. FIG. 12 is a diagram showing the functional configuration of the image processing apparatus 10 according to the fifth embodiment. As shown in FIG. 12, the configuration of the image processing apparatus 10 is the same as in the fourth embodiment; however, the functional configuration and operation of the region dividing unit 105 differ. The configuration and operation of the region dividing unit 105 will be described below with reference to FIG. 13.
For calculating the direction θ_ij^1 of the vector formed by two feature points of the first image, for example, the following equation may be used.
Next, the rotation amount may be calculated according to the following equation, using, for example, the vector directions calculated based on Equation 7 or Equation 8.
Here, the relative coordinate position u_n' may be calculated as in the following equation.
For calculating the coordinate position of the subject center point, for example, the following equation can be used.
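One way such a center-point estimate can be formed, rotating the pre-generated relative position by the estimated rotation amount, scaling it by the estimated size ratio, and adding it to the matched feature point, is sketched below (the function and parameter names are illustrative, not the patent's):

```python
import math

def estimate_center(p1, rel, scale, theta):
    """Estimate the subject center point in the first image.
    p1    : coordinate of a matched feature point in the first image
    rel   : relative position of the reference/center point as seen from
            the corresponding feature point in the second image
    scale : estimated size ratio between the two images
    theta : estimated rotation amount of the subject in the first image"""
    # rotate the relative offset by theta, then scale it
    rx = rel[0] * math.cos(theta) - rel[1] * math.sin(theta)
    ry = rel[0] * math.sin(theta) + rel[1] * math.cos(theta)
    return (p1[0] + scale * rx, p1[1] + scale * ry)
```

Feature points belonging to the same subject instance produce nearby center estimates, which is what makes clustering the estimated center points effective.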
For clustering the subject center points, for example, any of the methods described in Non-Patent Documents 3 to 5 can be used.
After the above processing, the feature point clustering unit 403 may, for example, take the cluster information of c_ij as the cluster information of v_i.
The processing flow of the image processing apparatus 10 according to the present embodiment will be described below with reference to FIG. 15. FIG. 15 is a flowchart showing the processing flow of the image processing apparatus 10.
As described above, the image processing apparatus 10 according to the present embodiment estimates the center point of a subject using the coordinate position of each feature point of the first image, the coordinate position of each feature point of the second image, the correspondence between the feature points of the two images, and the relative coordinate positions generated in advance. The estimated subject center points are then clustered based on their coordinate positions, and the first local feature quantity group and the second local feature quantity group are collated per cluster, thereby identifying the same or similar subjects between the images. The image processing apparatus 10 according to the present embodiment thus obtains the same effect as the fourth embodiment.
(6.1 Functional Configuration)
The sixth embodiment will be described below. First, the functional configuration of the image processing apparatus 10 according to the sixth embodiment will be described with reference to FIG. 16. FIG. 16 is a block diagram showing the functional configuration of the image processing apparatus 10 according to the sixth embodiment.
The subject center may be estimated using, for example, Equations 10 and 12.
FIG. 18 is a flowchart showing the processing flow of the image processing apparatus 10 according to the present embodiment. The processing flow of the image processing apparatus 10 will be described below with reference to FIG. 18.
As described above, the image processing apparatus 10 according to the sixth embodiment estimates the subject center point using the coordinate position of each feature point of the first image, the correspondence between the feature points of the two images, and the relative coordinate positions generated in advance. Furthermore, the estimated subject center points are clustered based on their coordinate positions, and the first local feature quantity group and the second local feature quantity group are collated per cluster, thereby identifying the same or similar subjects between the images. Therefore, when a congruent transformation holds between the vertices of the subject in the first image and the vertices of the subject in the second image, the same effect as in the first embodiment can be obtained.
(7.1 Functional Configuration)
Next, the seventh embodiment will be described with reference to FIGS. 19 to 21. FIG. 19 is a diagram showing the functional configuration of the image processing apparatus 10 according to the present embodiment. The image processing apparatus 10 according to the present embodiment has the same configuration as in the fifth embodiment, but the configuration and operation of the region dividing unit 105 differ. The configuration and operation of the region dividing unit 105 will be described below with reference to FIG. 20.
FIG. 21 is a flowchart showing the processing flow of the image processing apparatus 10 according to the present embodiment. The processing of the image processing apparatus 10 according to the present embodiment will be described below with reference to FIG. 21.
As described above, the image processing apparatus 10 according to the seventh embodiment estimates the rotation amount and the scale ratio of the subject of the first image using the coordinate position of each feature point of the first image, the coordinate position of each feature point of the second image, and the correspondence between the feature points of the two images. Furthermore, the feature points of the first image are clustered based on these estimates, and the first local feature quantity group and the second local feature quantity group are collated per cluster, thereby identifying the same or similar subjects between the images. Therefore, even when the size and rotation amount of the subjects in the first image differ between adjacent identical or similar subjects, the same effect as in the first embodiment can be obtained.
(8.1 Functional Configuration)
The eighth embodiment will be described below with reference to FIGS. 22 to 24. First, the functional configuration of the image processing apparatus 10 according to the present embodiment will be described with reference to FIG. 22. FIG. 22 is a functional block diagram showing the functional configuration of the image processing apparatus 10 according to the present embodiment.
FIG. 24 is a flowchart showing the processing flow of the image processing apparatus 10 according to the present embodiment. The processing flow of the image processing apparatus 10 will be described below with reference to FIG. 24.
As described above, the image processing apparatus 10 according to the eighth embodiment estimates the rotation amount of the subject of the first image using the coordinate position of each feature point of the first image, the coordinate position of each feature point of the second image, and the correspondence between the feature points of the two images. The feature points of the first image are then clustered based on the estimated rotation amount and the first coordinate position information group, and the first local feature quantity group and the second local feature quantity group are collated per cluster, thereby identifying the same or similar subjects between the images. Therefore, even when the rotation amount of the subjects in the first image differs between adjacent identical or similar subjects, the same effect as in the first embodiment can be obtained.
The configurations of the embodiments described above may be combined, and some components may be replaced. The configuration of the present invention is not limited to the embodiments described above, and various modifications may be made without departing from the gist of the present invention.
Some or all of the embodiments described above may also be described as in the following supplementary notes, but are not limited thereto.
An image processing apparatus comprising: first feature quantity generation means for generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information; region dividing means for clustering the feature points of the first image based on the first coordinate position information group; and collation means for collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from a second image.
The image processing apparatus according to appendix 1, wherein the region dividing means clusters the feature points of the first image according to the distance between the feature points.
The image processing apparatus according to appendix 1 or appendix 2, wherein the region dividing means clusters the feature points of the first image using the similarity between the local feature quantities of the first local feature quantity group and the first coordinate position information group.
The image processing apparatus according to any one of appendix 1 to appendix 3, wherein the region dividing means calculates, based on the inter-feature-quantity distances between the first local feature quantity group and the second local feature quantity group, a correspondence information group indicating the correspondence between the feature points of the first image and the second image, and clusters the feature points of the first image using the correspondence information group and the first coordinate position information group.
The image processing apparatus according to appendix 4, wherein the region dividing means clusters the feature points of the first image based on the coordinate position, in the first image, of a predetermined reference point of the second image, the coordinate position being estimated based on the relative coordinate positions between each feature point of the second image and the reference point, the correspondence information group, and the first coordinate position information group.
The image processing apparatus according to appendix 5, wherein the region dividing means calculates, using the first coordinate position information group, a second coordinate position information group that is coordinate position information of feature points detected from the second image, and the correspondence information group, the ratio between the distance between any two feature points of the first image and the distance between the two corresponding feature points of the second image, and calculates the rotation amount of the subject of the first image using the first coordinate position information group, the second coordinate position information group, and the correspondence information group.
The image processing apparatus according to appendix 6, wherein the region dividing means clusters the feature points of the first image using at least one of the ratio and the rotation amount together with the first coordinate position information group.
The image processing apparatus according to appendix 6 or appendix 7, wherein the region dividing means clusters the feature points of the first image using the coordinate position, in the first image, of the reference point of the second image, estimated using the rotation amount, the ratio, the relative coordinate positions, and the first coordinate position information group.
An image processing method comprising the steps of: generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information; clustering the feature points of the first image based on the first coordinate position information group; and collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from a second image.
A program causing an image processing apparatus to execute the steps of: generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information; clustering the feature points of the first image based on the first coordinate position information group; and collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from a second image.
Claims (10)
- An image processing apparatus comprising:
first feature quantity generation means for generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information;
region dividing means for clustering the feature points of the first image based on the first coordinate position information group; and
collation means for collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from a second image.
- The image processing apparatus according to claim 1, wherein the region dividing means clusters the feature points of the first image according to the distance between the feature points.
- The image processing apparatus according to claim 1 or claim 2, wherein the region dividing means clusters the feature points of the first image using the similarity between the local feature quantities of the first local feature quantity group and the first coordinate position information group.
- The image processing apparatus according to any one of claims 1 to 3, wherein the region dividing means calculates, based on the inter-feature-quantity distances between the first local feature quantity group and the second local feature quantity group, a correspondence information group indicating the correspondence between the feature points of the first image and the second image, and clusters the feature points of the first image using the correspondence information group and the first coordinate position information group.
- The image processing apparatus according to claim 4, wherein the region dividing means clusters the feature points of the first image based on the coordinate position, in the first image, of a predetermined reference point of the second image, the coordinate position being estimated based on the relative coordinate positions between each feature point of the second image and the reference point, the correspondence information group, and the first coordinate position information group.
- The image processing apparatus according to claim 5, wherein the region dividing means:
calculates, using the first coordinate position information group, a second coordinate position information group that is coordinate position information of feature points detected from the second image, and the correspondence information group, the ratio between the distance between any two feature points of the first image and the distance between the two corresponding feature points of the second image; and
calculates the rotation amount of the subject of the first image using the first coordinate position information group, the second coordinate position information group, and the correspondence information group.
- The image processing apparatus according to claim 6, wherein the region dividing means clusters the feature points of the first image using at least one of the ratio and the rotation amount together with the first coordinate position information group.
- The image processing apparatus according to claim 6 or claim 7, wherein the region dividing means clusters the feature points of the first image using the coordinate position, in the first image, of the reference point of the second image, estimated using the rotation amount, the ratio, the relative coordinate positions, and the first coordinate position information group.
- An image processing method comprising the steps of:
generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information;
clustering the feature points of the first image based on the first coordinate position information group; and
collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from a second image.
- A program causing an image processing apparatus to execute the steps of:
generating, for a plurality of feature points detected from a first image, a first local feature quantity group including local feature quantities, which are feature quantities of a plurality of local regions each including one of the feature points, and a first coordinate position information group including coordinate position information;
clustering the feature points of the first image based on the first coordinate position information group; and
collating, per cluster, the first local feature quantity group with a second local feature quantity group, which consists of local feature quantities of feature points detected from a second image.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/411,587 US10540566B2 (en) | 2012-06-29 | 2013-03-26 | Image processing apparatus, image processing method, and program |
CN201380034040.1A CN104412301A (zh) | 2012-06-29 | 2013-03-26 | 图像处理设备、图像处理方法、以及程序 |
JP2014522452A JP6094949B2 (ja) | 2012-06-29 | 2013-03-26 | 画像処理装置、画像処理方法、及びプログラム |
US16/692,083 US10796188B2 (en) | 2012-06-29 | 2019-11-22 | Image processing apparatus, image processing method, and program to identify objects using image features |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012147239 | 2012-06-29 | ||
JP2012-147239 | 2012-06-29 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/411,587 A-371-Of-International US10540566B2 (en) | 2012-06-29 | 2013-03-26 | Image processing apparatus, image processing method, and program |
US16/692,083 Continuation US10796188B2 (en) | 2012-06-29 | 2019-11-22 | Image processing apparatus, image processing method, and program to identify objects using image features |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014002554A1 true WO2014002554A1 (ja) | 2014-01-03 |
Family
ID=49782733
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/058796 WO2014002554A1 (ja) | 2012-06-29 | 2013-03-26 | 画像処理装置、画像処理方法、及びプログラム |
Country Status (4)
Country | Link |
---|---|
US (2) | US10540566B2 (ja) |
JP (1) | JP6094949B2 (ja) |
CN (1) | CN104412301A (ja) |
WO (1) | WO2014002554A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016033776A (ja) * | 2014-07-31 | 2016-03-10 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 大規模画像データベースの高速検索手法 |
JP2016033775A (ja) * | 2014-07-31 | 2016-03-10 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 同一種類の複数の認識対象物体が検索対象画像中に存在する場合に、それぞれの認識対象物体の位置および向きを精度良く求める手法 |
JP2016045600A (ja) * | 2014-08-20 | 2016-04-04 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
JPWO2015170461A1 (ja) * | 2014-05-07 | 2017-04-20 | 日本電気株式会社 | 画像処理装置、画像処理方法およびコンピュータ可読記録媒体 |
JP2021503131A (ja) * | 2017-11-14 | 2021-02-04 | マジック リープ, インコーポレイテッドMagic Leap,Inc. | ホモグラフィ適合を介した完全畳み込み着目点検出および記述 |
JP2022526548A (ja) * | 2020-01-22 | 2022-05-25 | 上▲海▼商▲湯▼▲臨▼▲港▼智能科技有限公司 | ターゲット検出方法、装置、電子機器およびコンピュータ可読記憶媒体 |
US11580721B2 (en) | 2017-11-07 | 2023-02-14 | Nec Corporation | Information processing apparatus, control method, and program |
US11797603B2 (en) | 2020-05-01 | 2023-10-24 | Magic Leap, Inc. | Image descriptor network with imposed hierarchical normalization |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105320956B (zh) * | 2015-10-14 | 2018-11-13 | 华侨大学 | 一种基于中心窗口变差的四象限分块模式的图像纹理特征提取方法 |
JP6948787B2 (ja) * | 2016-12-09 | 2021-10-13 | キヤノン株式会社 | 情報処理装置、方法およびプログラム |
US20180300872A1 (en) * | 2017-04-12 | 2018-10-18 | Ngr Inc. | Method And Apparatus For Integrated Circuit Pattern Inspection With Automatically Set Inspection Areas |
US20190012777A1 (en) * | 2017-07-07 | 2019-01-10 | Rolls-Royce Corporation | Automated visual inspection system |
WO2019073959A1 (ja) * | 2017-10-10 | 2019-04-18 | 株式会社博報堂Dyホールディングス | 情報処理システム、データ提供システム、及び関連する方法 |
JP6938408B2 (ja) * | 2018-03-14 | 2021-09-22 | 株式会社日立製作所 | 計算機及びテンプレート管理方法 |
CN114549883B (zh) * | 2022-02-24 | 2023-09-05 | 北京百度网讯科技有限公司 | 图像处理方法、深度学习模型的训练方法、装置和设备 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000090239A (ja) * | 1998-09-10 | 2000-03-31 | Matsushita Electric Ind Co Ltd | 画像検索装置 |
WO2005086092A1 (ja) * | 2004-03-03 | 2005-09-15 | Nec Corporation | 画像類似度算出システム、画像検索システム、画像類似度算出方法および画像類似度算出用プログラム |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6711293B1 (en) | 1999-03-08 | 2004-03-23 | The University Of British Columbia | Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image |
CN101159855B (zh) | 2007-11-14 | 2011-04-06 | 南京优科漫科技有限公司 | 基于特征点分析的多目标分离预测方法 |
US20110017673A1 (en) * | 2009-07-21 | 2011-01-27 | Ge Healthcare As | Endotoxin removal in contrast media |
KR20110085728A (ko) * | 2010-01-21 | 2011-07-27 | 삼성전자주식회사 | 휴대용 단말기에서 건물 영역을 인식하기 위한 장치 및 방법 |
JP2011221688A (ja) * | 2010-04-07 | 2011-11-04 | Sony Corp | 認識装置、認識方法、およびプログラム |
KR20120067757A (ko) * | 2010-12-16 | 2012-06-26 | 한국전자통신연구원 | 항공 영상 간 대응 관계 추출 장치 및 그 방법 |
-
2013
- 2013-03-26 US US14/411,587 patent/US10540566B2/en active Active
- 2013-03-26 CN CN201380034040.1A patent/CN104412301A/zh active Pending
- 2013-03-26 JP JP2014522452A patent/JP6094949B2/ja active Active
- 2013-03-26 WO PCT/JP2013/058796 patent/WO2014002554A1/ja active Application Filing
-
2019
- 2019-11-22 US US16/692,083 patent/US10796188B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000090239A (ja) * | 1998-09-10 | 2000-03-31 | Matsushita Electric Ind Co Ltd | 画像検索装置 |
WO2005086092A1 (ja) * | 2004-03-03 | 2005-09-15 | Nec Corporation | 画像類似度算出システム、画像検索システム、画像類似度算出方法および画像類似度算出用プログラム |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2015170461A1 (ja) * | 2014-05-07 | 2017-04-20 | 日本電気株式会社 | 画像処理装置、画像処理方法およびコンピュータ可読記録媒体 |
US10147015B2 (en) | 2014-05-07 | 2018-12-04 | Nec Corporation | Image processing device, image processing method, and computer-readable recording medium |
US9483712B2 (en) | 2014-07-31 | 2016-11-01 | International Business Machines Corporation | Method for accurately determining the position and orientation of each of a plurality of identical recognition target objects in a search target image |
JP2016033776A (ja) * | 2014-07-31 | 2016-03-10 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 大規模画像データベースの高速検索手法 |
US9704061B2 (en) | 2014-07-31 | 2017-07-11 | International Business Machines Corporation | Accurately determining the position and orientation of each of a plurality of identical recognition target objects in a search target image |
JP2016033775A (ja) * | 2014-07-31 | 2016-03-10 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | 同一種類の複数の認識対象物体が検索対象画像中に存在する場合に、それぞれの認識対象物体の位置および向きを精度良く求める手法 |
JP2016045600A (ja) * | 2014-08-20 | 2016-04-04 | キヤノン株式会社 | 画像処理装置および画像処理方法 |
US11580721B2 (en) | 2017-11-07 | 2023-02-14 | Nec Corporation | Information processing apparatus, control method, and program |
JP2021503131A (ja) * | 2017-11-14 | 2021-02-04 | マジック リープ, インコーポレイテッドMagic Leap,Inc. | ホモグラフィ適合を介した完全畳み込み着目点検出および記述 |
JP7270623B2 (ja) | 2017-11-14 | 2023-05-10 | マジック リープ, インコーポレイテッド | ホモグラフィ適合を介した完全畳み込み着目点検出および記述 |
JP7403700B2 (ja) | 2017-11-14 | 2023-12-22 | マジック リープ, インコーポレイテッド | ホモグラフィ適合を介した完全畳み込み着目点検出および記述 |
JP2022526548A (ja) * | 2020-01-22 | 2022-05-25 | 上▲海▼商▲湯▼▲臨▼▲港▼智能科技有限公司 | ターゲット検出方法、装置、電子機器およびコンピュータ可読記憶媒体 |
US11797603B2 (en) | 2020-05-01 | 2023-10-24 | Magic Leap, Inc. | Image descriptor network with imposed hierarchical normalization |
Also Published As
Publication number | Publication date |
---|---|
US10540566B2 (en) | 2020-01-21 |
US20150161468A1 (en) | 2015-06-11 |
JPWO2014002554A1 (ja) | 2016-05-30 |
US10796188B2 (en) | 2020-10-06 |
US20200089988A1 (en) | 2020-03-19 |
JP6094949B2 (ja) | 2017-03-15 |
CN104412301A (zh) | 2015-03-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6094949B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
Minaei-Bidgoli et al. | Effects of resampling method and adaptation on clustering ensemble efficacy | |
Yu et al. | Learning from multiway data: Simple and efficient tensor regression | |
Zhang et al. | Graph degree linkage: Agglomerative clustering on a directed graph | |
Yang et al. | Scalable optimization of neighbor embedding for visualization | |
Mishra et al. | Application of linear and nonlinear PCA to SAR ATR | |
KR101581112B1 (ko) | 계층적 패턴 구조에 기반한 기술자 생성 방법 및 이를 이용한 객체 인식 방법과 장치 | |
CN106228121B (zh) | 手势特征识别方法和装置 | |
Salarpour et al. | Direction‐based similarity measure to trajectory clustering | |
Póczos et al. | Support distribution machines | |
Li et al. | High resolution radar data fusion based on clustering algorithm | |
Choi et al. | Gesture recognition based on manifold learning | |
Dai et al. | Statistical adaptive metric learning in visual action feature set recognition | |
Viet et al. | Using motif information to improve anytime time series classification | |
Luo et al. | Simple iterative clustering on graphs for robust model fitting | |
Xiao et al. | Motion retrieval based on graph matching and revised Kuhn-Munkres algorithm | |
Cheng et al. | SAGMAN: Stability Analysis of Graph Neural Networks on the Manifolds | |
Oprisescu et al. | Hand posture recognition using the intrinsic dimension | |
Wang et al. | Kernel-based clustering with automatic cluster number selection | |
Li | A Novel Objects Recognition Technique | |
Wang | The Use of k-Means Algorithm to Improve Kernel Method via Instance Selection | |
Ensari | Character recognition analysis with nonnegative matrix factorization | |
Liu et al. | Behavior Recognition Based on Complex Linear Dynamic Systems | |
Omar et al. | Arabic-Latin Offline Signature Recognition Based on Shape Context Descriptor | |
Rosén | Behavior Classification based on Sensor Data-Classifying time series using low-dimensional manifold representations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13810826 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014522452 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14411587 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13810826 Country of ref document: EP Kind code of ref document: A1 |