CN105426914B - Image similarity detection method for place recognition - Google Patents

Image similarity detection method for place recognition

Info

Publication number
CN105426914B
CN105426914B (application CN201510807729.4A)
Authority
CN
China
Prior art keywords
image
block
super
pixel
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510807729.4A
Other languages
Chinese (zh)
Other versions
CN105426914A (en)
Inventor
李科
李钦
游雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLA Information Engineering University
Original Assignee
PLA Information Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLA Information Engineering University filed Critical PLA Information Engineering University
Priority to CN201510807729.4A priority Critical patent/CN105426914B/en
Publication of CN105426914A publication Critical patent/CN105426914A/en
Application granted granted Critical
Publication of CN105426914B publication Critical patent/CN105426914B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/758Involving statistics of pixels or of feature values, e.g. histogram matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an image similarity detection method for place recognition, belonging to the technical field of image recognition. The method first performs superpixel segmentation on an image, generates the feature maps of the image with a CNN model, and computes a description vector for each superpixel block. Each image under test is then divided into uniform image blocks, a description vector is computed for each image block from the superpixel blocks it contains, and these vectors form the description matrix of the image. The image block description vectors are used to compute the similarity between corresponding image blocks of the two images under test; the mean of these block similarities is the similarity between the two images sought by the present invention. The method is highly robust: it identifies the same scene accurately even when the scene content has changed, and it quickly and accurately finds the most similar image in an image sequence.

Description

Image similarity detection method for place recognition
Technical field
The present invention relates to an image similarity detection method for place recognition, and belongs to the technical field of image recognition.
Background technique
Image similarity detection is a core step in image matching, image retrieval, and pattern recognition. In SLAM (Simultaneous Localization and Mapping), loop-closure detection is performed by measuring the similarity between the first and latest images to decide whether they show the same scene. In autonomous robot navigation and localization, when a robot returns to an environment it has visited before, it must determine its own position within that environment; indoors, in caves, and in other special settings where positioning devices are unavailable, the robot must localize itself with its on-board sensors. Image similarity detection can then be used to find the scene observed when the robot first entered the environment and localize against it.
The key to computing the similarity of two images is to construct, for each image, a vector or matrix that captures the essential characteristics of the image. Broadly, methods for constructing such description vectors fall into two classes. The first class describes the image as a whole, e.g. color histograms, image aggregation vectors, and GIST. The image histogram can be regarded as a global feature of the image and is widely used because it is easy to compute and to interpret. However, a histogram ignores the spatial relationships between pixels, so different images may have similar histograms. Histogram descriptions also lack robustness: when the image resolution or ambient illumination changes, or when objects disappear from or appear in the scene, the image histogram changes significantly as well.
The second class describes an image through local features such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), which describe image patches around detected keypoints and thereby describe the image. A typical approach is the BoW (bag-of-words) model: the description vectors of all keypoints of an image are projected onto a vocabulary, and the image is finally described by a vector recording which vocabulary words it contains. The BoW model has achieved good results in image recognition and classification and in content-based image retrieval (CBIR) tasks that identify images by their content. FAB-MAP (Fast Appearance-Based Mapping), a place-recognition and mapping technique widely applied to loop-closure detection, uses a BoW model to build a description vector for each frame of the test video: keypoints are first extracted from all frames and their description vectors computed; the extracted feature vectors are clustered with K-means to build the vocabulary; and the keypoints of each frame are projected onto the vocabulary to construct that frame's description vector. Building frame description vectors with a BoW model in this way generally consumes a great deal of time and memory, and the number of features used to build the vocabulary is sometimes so large that K-means clustering becomes impractical.
Summary of the invention
The object of the present invention is to provide an image similarity detection method for place recognition, so as to solve the low robustness and high computational cost of current image similarity detection.
To solve the above technical problem, the present invention provides an image similarity detection method for place recognition, comprising the following steps:
1) perform superpixel segmentation on the original images under test to obtain superpixel blocks;
2) generate the feature maps of each original image under test with a convolutional neural network model, map each superpixel block onto every feature map, and compute the description vector of each superpixel block;
3) divide each original image under test into uniform image blocks and compute a description vector for each image block from the superpixel blocks it contains;
4) use the image block description vectors to compute the similarity between corresponding image blocks of the two images under test; the mean of these block similarities is the similarity between the images.
The description vector of each superpixel block in step 2) is computed as follows:
A. apply the convolutional neural network model to the original image to generate a number of intermediate layers; take all feature maps of M selected output layers as the feature maps of the original image under test, and resize them to the original image size;
B. for each superpixel block, compute the information entropy of all pixels in its corresponding region on each low-level convolutional output feature map, producing a description vector whose dimension equals the number of low-level feature maps;
C. for each superpixel block, compute the mean of all pixels in its corresponding region on each higher-level convolutional output feature map, producing a description vector whose dimension equals the number of higher-level feature maps;
D. concatenate the description vectors obtained in steps B and C to form the description vector of each superpixel block.
In step B, the information entropy H of all pixels in the corresponding region is:
H = -∑ p_i log p_i, with p_i = n_i/total,
where p_i is the probability of the i-th bin, the bins divide the range between the minimum and maximum pixel value of the region into equal intervals, n_i is the number of pixels of the region falling into the i-th bin, and total is the total number of pixels in the region.
The description vector vec_patch of each image block in step 3) is:
vec_patch = ∑_{i=1}^{num} weight_i · vec_sp_i
where num is the number of superpixel blocks contained in the image block, weight_i is the weight of the i-th superpixel block, and vec_sp_i is its description vector.
The weight of each superpixel block is:
weight = sp_num / total_num
where sp_num is the number of the superpixel block's pixels lying inside the image block region, and total_num is the total number of pixels in the image block region.
The similarity pat_simi between corresponding image blocks in step 4) is:
pat_simi = vec_patch1 · vec_patch2
where vec_patch1 and vec_patch2 are the normalized description vectors of image block 1 and image block 2.
In step 1), superpixel segmentation is performed with the simple linear iterative clustering (SLIC) method.
When computing the image block similarities, the image block description vectors of each image may be assembled into a description matrix; the dot product of the description matrix of the first image with the transpose of the description matrix of the second image yields the similarity matrix S, where the element S_ij in row i and column j is the similarity between the i-th image block of the first image and the j-th image block of the second image, and the diagonal elements of S are the similarities of corresponding image blocks.
The beneficial effects of the present invention are: the method first performs superpixel segmentation on an image, generates the feature maps of the image with a CNN model, and computes a description vector for each superpixel block; each image under test is then divided into uniform image blocks, a description vector is computed for each image block from the superpixel blocks it contains, and these vectors form the description matrix of the image; the image block description vectors are used to compute the similarity between corresponding image blocks of the two images under test, and the mean of these block similarities is the similarity between the two images sought by the present invention. The method is robust, computationally cheap, and easy to implement: it identifies the same scene accurately even when the scene content has changed, and it quickly and accurately finds the most similar image in an image sequence.
Detailed description of the invention
Fig. 1 is the flow chart for computing the superpixel block description vectors;
Fig. 2-a is image 1# of the same-scene pair in experimental example 1;
Fig. 2-b is image 2# of the same-scene pair in experimental example 1;
Fig. 2-c is the schematic diagram of the similarity matrix of the same-scene image pair in experimental example 1;
Fig. 3-a is image 1# of the different-scene pair in experimental example 1;
Fig. 3-b is image 2# of the different-scene pair in experimental example 1;
Fig. 3-c is the schematic diagram of the similarity matrix of the different-scene image pair in experimental example 1;
Fig. 4 is the test image selected in experimental example 2;
Fig. 5 is the most similar frame found with the present invention in experimental example 2;
Fig. 6 is the similarity curve obtained in experimental example 2.
Specific embodiment
Specific embodiments of the present invention are further described below with reference to the accompanying drawings.
The present invention first performs superpixel segmentation on the original images under test to obtain superpixel blocks; it then generates the feature maps of each image with a convolutional neural network model, maps each superpixel block onto every feature map, and computes the description vector of each superpixel block; each image is divided into uniform image blocks and a description vector is computed for each image block from the superpixel blocks it contains; finally, the image block description vectors are used to compute the similarity between corresponding image blocks of the two images, and the mean of these block similarities is the similarity between the images. The specific implementation steps of the method are as follows:
1. Performing superpixel segmentation on the images under test
A superpixel is a small region composed of adjacent pixels with similar color, brightness, and texture. Such regions retain most of the information needed for further image segmentation and generally do not break the boundaries of objects in the image. In an image, a single pixel has no practical meaning by itself; humans obtain information from regions composed of many pixels, so only groups of pixels with similar properties are meaningful. Moreover, since the number of superpixels is far smaller than the number of pixels, describing superpixels directly also greatly improves computational efficiency. This embodiment performs superpixel segmentation with the simple linear iterative clustering (SLIC) method, which generates compact, regular superpixel blocks that preserve the boundary information of objects.
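As an illustration of the SLIC procedure just described, the following simplified NumPy sketch seeds cluster centers on a regular grid with step S and assigns each pixel to the nearest center within a 2S window under a combined color/spatial distance. This is not the embodiment's actual code; n_segments, compactness, and the iteration count are assumed parameters, and a production system would typically use a library implementation such as scikit-image's slic.

```python
import numpy as np

def slic_superpixels(image, n_segments=100, compactness=10.0, n_iter=5):
    """Simplified SLIC: k-means in joint (color, position) space, with each
    cluster restricted to a window of roughly 2S x 2S around its center."""
    h, w = image.shape[:2]
    img = image.reshape(h, w, -1).astype(float)
    S = int(np.sqrt(h * w / n_segments))                 # grid step
    ys = np.arange(S // 2, h, S)                         # seed centers on a grid
    xs = np.arange(S // 2, w, S)
    centers = np.array([[y, x] + list(img[y, x]) for y in ys for x in xs])
    labels = -np.ones((h, w), dtype=int)
    dists = np.full((h, w), np.inf)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_iter):
        dists.fill(np.inf)
        for k, c in enumerate(centers):
            cy, cx = int(c[0]), int(c[1])
            y0, y1 = max(cy - S, 0), min(cy + S + 1, h)
            x0, x1 = max(cx - S, 0), min(cx + S + 1, w)
            patch = img[y0:y1, x0:x1]
            dc = np.linalg.norm(patch - c[2:], axis=-1)              # color distance
            ds = np.hypot(yy[y0:y1, x0:x1] - c[0], xx[y0:y1, x0:x1] - c[1])
            d = np.hypot(dc, compactness * ds / S)                   # combined distance
            mask = d < dists[y0:y1, x0:x1]
            dists[y0:y1, x0:x1][mask] = d[mask]
            labels[y0:y1, x0:x1][mask] = k
        for k in range(len(centers)):                    # recenter each cluster
            m = labels == k
            if m.any():
                centers[k] = [yy[m].mean(), xx[m].mean(), *img[m].mean(axis=0)]
    return labels
```

The compactness term trades color adherence against spatial regularity, which is how SLIC keeps the superpixels compact while still following object boundaries.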
2. Computing the description vectors of superpixel blocks with a convolutional neural network
Convolutional neural networks (CNNs) are multi-stage network models. Each stage typically consists of a convolution, a nonlinearity, and a pooling operation; the output of a lower stage is the input of the next, and the input of the lowest stage is the original image. Higher layers carry more abstract and semantically richer information, and every layer contains a large number of feature maps, each reflecting a different aspect of the image. An L-layer CNN can be viewed as a sequence of linear operations, nonlinear operations (e.g. sigmoid or tanh), and pooling operations (pool); each layer can be defined as:
F^l = Pool(tanh(W^l * F^{l-1} + b^l))  (1)
where F^l is the output of layer l, l ∈ {1, …, L}, b^l is the bias of layer l, and W^l is the convolution kernel of layer l. The source image can be regarded as F^0.
To use the feature maps of every layer, the present invention upsamples each feature map so that it has the same size as the source image, and stacks all of them into a three-dimensional matrix F ∈ R^{N×H×W}, where H is the image height, W is the image width, and N is the number of feature maps. F can be expressed as:
F = [up(F^1), up(F^2), …, up(F^L)]  (2)
where up is the upsampling operation and N = ∑_{l=1}^{L} N_l, with N_l the number of feature maps of layer l. The description of any pixel of the image can then be expressed as a vector p ∈ R^N.
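Formula (2) — upsampling every layer's feature maps to the source-image size and stacking them into F ∈ R^{N×H×W} — can be sketched in NumPy as follows. Nearest-neighbour upsampling is used here purely for illustration; the patent does not specify the interpolation method.

```python
import numpy as np

def upsample_nn(fmap, H, W):
    """Nearest-neighbour upsample a (n, h, w) stack of feature maps to (n, H, W)."""
    n, h, w = fmap.shape
    rows = np.arange(H) * h // H   # source row for each target row
    cols = np.arange(W) * w // W   # source column for each target column
    return fmap[:, rows][:, :, cols]

def stack_layers(layers, H, W):
    """F = [up(F^1), ..., up(F^L)]: one (N, H, W) tensor with N = sum of the
    per-layer feature-map counts; column F[:, y, x] is the pixel description p."""
    return np.concatenate([upsample_nn(f, H, W) for f in layers], axis=0)
```

With this layout, the N-dimensional description of the pixel at (y, x) is simply the slice F[:, y, x].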
Describing each superpixel block with the information of the feature maps gives it strong expressive power. Because some feature maps are redundant and would reduce computational efficiency, this embodiment uses only the feature maps of selected convolutional layers to construct the superpixel description vectors, which improves computational efficiency while preserving the quality of the description. The construction of the superpixel description vectors is shown in Fig. 1 and proceeds as follows:
A. Apply the convolutional neural network model to the original image to generate a number of intermediate layers; take all feature maps of M selected output layers as the feature maps of the original image under test, and resize them to the original image size.
The chosen CNN (convolutional neural network) model is applied to the image to generate the intermediate layers, and all feature maps of several convolutional output layers are selected (layers 5, 13, and 16 in this embodiment, 64 + 256 + 256 = 576 feature maps in total); the 576 feature maps are resized to the original image size. Feature maps 1–64 belong to the low-level convolutional output layer and preserve the boundary information of the image; feature maps 65–576 belong to the higher-level convolutional output layers and carry stronger abstract semantic information.
B. For each superpixel block, compute the information entropy of all pixels in its corresponding region on each low-level convolutional output feature map, producing a description vector whose dimension equals the number of low-level feature maps.
In this embodiment the low-level convolutional output layer comprises feature maps 1–64, and the information entropy of all pixels in the corresponding region of each of these maps is computed. The minimum and maximum pixel values of the region are found, the range between them is divided into bins of equal width, the number of pixels n_i falling into each bin i = 1, 2, …, bins is counted, the probability of each bin is computed as p_i = n_i/total (total being the total number of pixels in the region), and the information entropy H = -∑ p_i log p_i of the region's pixels is computed from these probabilities.
For each superpixel block of the original image, its corresponding region on every feature map is found (since every feature map has been resized to the original image size, the superpixel's region in the original image maps directly onto the feature map), and the information entropy of all pixels in that region is computed, producing a 64-dimensional description vector for each superpixel block.
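The entropy descriptor of step B can be sketched as follows; the number of equal-width bins is an assumed parameter, as the embodiment does not state it.

```python
import numpy as np

def region_entropy(values, n_bins=16):
    """H = -sum p_i log p_i over equal-width bins spanning [min, max]
    of the pixel values of the region."""
    values = np.asarray(values, dtype=float).ravel()
    if values.max() == values.min():
        return 0.0  # a constant region carries no information
    counts, _ = np.histogram(values, bins=n_bins,
                             range=(values.min(), values.max()))
    p = counts / counts.sum()
    p = p[p > 0]  # 0 log 0 is taken as 0
    return float(-(p * np.log(p)).sum())

def superpixel_descriptor_low(feature_maps, mask, n_bins=16):
    """64-dim entropy descriptor: one entropy per low-level feature map,
    restricted to the superpixel's pixels (boolean mask)."""
    return np.array([region_entropy(fm[mask], n_bins) for fm in feature_maps])
```

A uniform region yields entropy 0, while values spread evenly over all bins approach log(n_bins), so the descriptor measures how much the feature map varies inside the superpixel.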
C. For each superpixel block, compute the mean of all pixels in its corresponding region on each higher-level convolutional output feature map, producing a description vector whose dimension equals the number of higher-level feature maps.
In this embodiment the higher-level convolutional output layers comprise feature maps 65–576. The region mean is used: for each superpixel block, the mean of all pixels in its corresponding region on each of these feature maps is computed, producing a 512-dimensional description vector for each superpixel block.
D. The above computations finally yield a 576-dimensional vector vec_sp describing each superpixel block.
3. Dividing the image into uniform image blocks and computing the description vector of each image block from the superpixel blocks it contains
This embodiment divides the image into 4 × 4 uniform image blocks and counts the superpixel blocks contained in each image block. Each superpixel block is assigned a weight according to the area it occupies within the image block region, i.e. the proportion of the image block's pixels that belong to the superpixel block:
weight = sp_num / total_num  (3)
where sp_num is the number of the superpixel block's pixels lying inside the image block region, and total_num is the total number of pixels in the image block region.
The description vector of each image block is computed from the superpixel weights:
vec_patch = ∑_{i=1}^{num} weight_i · vec_sp_i  (4)
where num is the number of superpixel blocks contained in the image block, weight_i is the weight of the i-th superpixel block, and vec_sp_i is its description vector.
The above steps yield a 576-dimensional description vector for each image block; each image block vector is normalized, and the result describes the corresponding image block.
4. Computing the similarity of corresponding image blocks of the two images from the image block description vectors; the average of these block similarities is the similarity of the two images
The similarity between two images can be expressed through the similarities of their corresponding image blocks. The similarity of two image blocks is reflected by the cosine of the angle between their description vectors: the larger the cosine, the more similar the blocks, with a cosine of 1 for identical blocks. Since the image block description vectors have been normalized to unit length, the dot product of two description vectors equals the cosine of their angle.
In practice, the image block description vectors of each image are assembled directly into a description matrix; the dot product of the description matrix of the first image with the transpose of the description matrix of the second image gives a 16 × 16 similarity matrix S, where the element S_ij in row i and column j is the similarity between the i-th image block of the first image and the j-th image block of the second image, and the 16 diagonal elements of S are the similarities of corresponding image blocks.
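The description-matrix product and the diagonal average can be sketched as follows (the 16 × 576 matrix sizes of the embodiment are illustrative; any consistent sizes work identically):

```python
import numpy as np

def image_similarity(D1, D2):
    """D1, D2: (blocks, dims) matrices of L2-normalized block descriptors.
    S = D1 . D2^T, so S[i, j] is the cosine similarity between block i of
    image 1 and block j of image 2; the overall similarity Simi is the mean
    of the diagonal (the corresponding-block similarities)."""
    S = D1 @ D2.T
    return S, float(np.mean(np.diag(S)))
```

Because the rows are unit vectors, a single matrix multiplication replaces 16 × 16 separate cosine computations, which is what keeps the detection step cheap.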
The similarity Simi between the two images is obtained by averaging the corresponding-block similarities. In this embodiment:
Simi = (1/16) ∑_{i=1}^{16} S_ii  (7)
The resulting Simi is the image similarity sought by the present invention.
Experimental analysis
Experimental example 1
The purpose of this experimental example is to verify the robustness of the invention. An image pair from the same scene whose content has partially changed and an image pair from different scenes were selected for similarity computation. The two representative image pairs are shown in Fig. 2-a, Fig. 2-b, Fig. 3-a and Fig. 3-b. The images in Fig. 2-a and Fig. 2-b come from the same scene, with only a local change in content; the images in Fig. 3-a and Fig. 3-b come from different scenes. The images were divided into 4 × 4 image blocks with the present method, the similarities between image blocks were computed, and the resulting similarity matrices are shown in Fig. 2-c and Fig. 3-c; the diagonal elements of each similarity matrix are the similarities of corresponding image blocks. Computed with formula (7), the similarities of the two image pairs are 0.9434 and 0.5254, respectively.
These results show that the similarity obtained for the same-scene image pair is clearly higher than that of the different-scene pair. For the same-scene pair, the diagonal elements of the similarity matrix are clearly larger than the off-diagonal elements; the local change in the pair (a chest appears in Fig. 2-b) makes the diagonal elements of the changed image blocks clearly lower than those of the other blocks, so the similarity matrix also reveals the approximate position at which a same-scene image pair has changed. For the different-scene pair, the diagonal elements are low and show no clear difference from the off-diagonal elements, and the computed image similarity is correspondingly low.
Experimental example 2
The purpose of this experimental example is to verify the stability and feasibility of the invention in practical applications. The similarity detection method of the invention is used to search a captured video for the frame most similar to a test image, and the result is checked for acceptability. The experiment, carried out in an indoor scene, proceeds as follows:
(1) Capture a video of the scene from arbitrary viewpoints (the video captured in the experiment has 2395 frames).
(2) Preprocess the video: compute the description matrix of every frame, i.e. the 16 × 576 matrix formed by its image block description vectors, and store the result (a 2395 × 16 × 576 three-dimensional matrix in the experiment).
(3) Return to the scene and take an arbitrary test image whose content lies within the region covered by the video, and compute the description matrix of the test image.
(4) Traverse the three-dimensional matrix stored in step (2) and find the frame most similar to the test image with the present method.
(5) Re-shoot the scene image and find the corresponding most similar frame according to formulas (3) and (4).
One test image is shown in Fig. 4; Fig. 5 is the frame found in the video that is most similar to the test image of Fig. 4; Fig. 6 is the similarity curve between each of the 2395 video frames and the test image. As for detection time, the image description vectors of the 2395 frames are constructed in advance and are not counted; the detection time comprises computing the description matrix of the test image and traversing the video frames to find the most similar image, which took 0.75 s in total (test environment: 64-bit Linux Debian 7.5, Intel(R) Core(TM) i7-3632QM CPU @ 2.20 GHz, 4 GB RAM).
The experimental results show that the image in the video most similar to the test image is frame 566. The similarity curve shows that the frames near frame 566 also have high similarity to the test image, because neighboring frames of a video generally share the same content; but frame 566 has the highest similarity (0.82), clearly above all other values. The detection result is essentially correct, and the time cost is small.
In summary, the present invention is highly robust: it identifies the same scene accurately even when its content has changed, and it quickly and accurately finds the most similar image in an image sequence.

Claims (2)

1. An image similarity detection method for place recognition, characterized in that the detection method comprises the following steps:
1) performing superpixel segmentation on the original images under test to obtain superpixel blocks;
2) generating the feature maps of each original image under test with a convolutional neural network model, mapping each superpixel block onto every feature map, and computing the description vector of each superpixel block;
3) dividing each original image under test into uniform image blocks and computing a description vector for each image block from the superpixel blocks it contains;
4) using the image block description vectors to compute the similarity between corresponding image blocks of the two images under test, the mean of these block similarities being the similarity between the images;
wherein the description vector of each superpixel block in step 2) is computed as follows:
A. applying the convolutional neural network model to the original image to generate a number of intermediate layers, taking all feature maps of M selected output layers as the feature maps of the original image under test, and resizing them to the original image size;
B. for each superpixel block, computing the information entropy of all pixels in its corresponding region on each low-level convolutional output feature map, producing a description vector whose dimension equals the number of low-level feature maps;
C. for each superpixel block, computing the mean of all pixels in its corresponding region on each higher-level convolutional output feature map, producing a description vector whose dimension equals the number of higher-level feature maps;
D. concatenating the description vectors obtained in steps B and C to form the description vector of each superpixel block;
wherein in step B the information entropy H of all pixels in the corresponding region is:
H = -∑ p_i log p_i, with p_i = n_i/total,
where p_i is the probability of the i-th bin, the bins divide the range between the minimum and maximum pixel value of the region into equal intervals, n_i is the number of pixels of the region falling into the i-th bin, and total is the total number of pixels in the region;
wherein the description vector vec_patch of each image block in step 3) is:
vec_patch = ∑_{i=1}^{num} weight_i · vec_sp_i
where num is the number of superpixel blocks contained in the image block, weight_i is the weight of the i-th superpixel block, and vec_sp_i is its description vector;
wherein the weight of each superpixel block is:
weight = sp_num / total_num
where sp_num is the number of the superpixel block's pixels lying inside the image block region, and total_num is the total number of pixels in the image block region;
wherein the similarity pat_simi between corresponding image blocks in step 4) is:
pat_simi = vec_patch1 · vec_patch2
where vec_patch1 and vec_patch2 are the normalized description vectors of image block 1 and image block 2;
and wherein in step 1) the superpixel segmentation is performed with the simple linear iterative clustering (SLIC) method.
2. The position-recognition-oriented image similarity detection method according to claim 1, characterized in that, when computing image-block similarity, the description vectors of the image blocks contained in an image are assembled into a description matrix; the dot product of the description matrix of the first image with the transpose of the description matrix of the second image yields the similarity matrix S, in which the element S_ij in row i, column j expresses the similarity between the i-th image block of the first image and the j-th image block of the second image, and each diagonal element of S is the similarity of the corresponding pair of image blocks.
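The description-matrix computation of claim 2 can be sketched as a single matrix product; the function name is illustrative, and the rows are assumed to be already-normalized block description vectors:

```python
import numpy as np

def similarity_matrix(desc1, desc2):
    """Similarity matrix S between two images' block descriptors:
    rows of desc1 and desc2 are the block description vectors of
    image 1 and image 2; S[i, j] is the similarity between block i
    of image 1 and block j of image 2."""
    return np.asarray(desc1, dtype=float) @ np.asarray(desc2, dtype=float).T
```

For images with identical, normalized block descriptors, the diagonal of S is all ones.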
CN201510807729.4A 2015-11-19 2015-11-19 A kind of image similarity detection method of facing position identification Active CN105426914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510807729.4A CN105426914B (en) 2015-11-19 2015-11-19 A kind of image similarity detection method of facing position identification

Publications (2)

Publication Number Publication Date
CN105426914A CN105426914A (en) 2016-03-23
CN105426914B true CN105426914B (en) 2019-03-15

Family

ID=55505112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510807729.4A Active CN105426914B (en) 2015-11-19 2015-11-19 A kind of image similarity detection method of facing position identification

Country Status (1)

Country Link
CN (1) CN105426914B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956597A (en) * 2016-05-04 2016-09-21 浙江大学 Binocular stereo matching method based on convolution neural network
CN110050243B (en) * 2016-12-21 2022-09-20 英特尔公司 Camera repositioning by enhanced neural regression using mid-layer features in autonomous machines
CN106709462A (en) * 2016-12-29 2017-05-24 天津中科智能识别产业技术研究院有限公司 Indoor positioning method and device
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN109214235A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 outdoor scene classification method and system
CN107330127B (en) * 2017-07-21 2020-06-05 湘潭大学 Similar text detection method based on text picture retrieval
CN107992848B (en) * 2017-12-19 2020-09-25 北京小米移动软件有限公司 Method and device for acquiring depth image and computer readable storage medium
CN110322472A (en) * 2018-03-30 2019-10-11 华为技术有限公司 A kind of multi-object tracking method and terminal device
CN108829826B (en) * 2018-06-14 2020-08-07 清华大学深圳研究生院 Image retrieval method based on deep learning and semantic segmentation
CN109271870B (en) * 2018-08-21 2023-12-26 平安科技(深圳)有限公司 Pedestrian re-identification method, device, computer equipment and storage medium
CN109409418B (en) * 2018-09-29 2022-04-15 中山大学 Loop detection method based on bag-of-words model
CN110334226B (en) * 2019-04-25 2022-04-05 吉林大学 Depth image retrieval method fusing feature distribution entropy
CN110866532B (en) * 2019-11-07 2022-12-30 浙江大华技术股份有限公司 Object matching method and device, storage medium and electronic device
CN112907644B (en) * 2021-02-03 2023-02-03 中国人民解放军战略支援部队信息工程大学 Machine map-oriented visual positioning method
CN113139589B (en) * 2021-04-12 2023-02-28 网易(杭州)网络有限公司 Picture similarity detection method and device, processor and electronic device
CN113657415B (en) * 2021-10-21 2022-01-25 西安交通大学城市学院 Object detection method oriented to schematic diagram

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012148619A1 (en) * 2011-04-27 2012-11-01 Sony Corporation Superpixel segmentation methods and systems
CN104408405A (en) * 2014-11-03 2015-03-11 北京畅景立达软件技术有限公司 Face representation and similarity calculation method
CN104504055A (en) * 2014-12-19 2015-04-08 常州飞寻视讯信息科技有限公司 Commodity similarity calculation method and commodity recommending system based on image similarity
CN105005987A (en) * 2015-06-23 2015-10-28 中国人民解放军国防科学技术大学 SAR image superpixel generating method based on general gamma distribution

Also Published As

Publication number Publication date
CN105426914A (en) 2016-03-23

Similar Documents

Publication Publication Date Title
CN105426914B (en) A kind of image similarity detection method of facing position identification
CN107609601B (en) Ship target identification method based on multilayer convolutional neural network
Xie et al. Multilevel cloud detection in remote sensing images based on deep learning
CN108549873A (en) Three-dimensional face identification method and three-dimensional face recognition system
CN103839277B (en) A kind of mobile augmented reality register method of outdoor largescale natural scene
CN111951384B (en) Three-dimensional face reconstruction method and system based on single face picture
Henderson et al. Unsupervised object-centric video generation and decomposition in 3D
CN110378997A (en) A kind of dynamic scene based on ORB-SLAM2 builds figure and localization method
CN109101981B (en) Loop detection method based on global image stripe code in streetscape scene
CN111951381B (en) Three-dimensional face reconstruction system based on single face picture
CN101694691A (en) Method and device for synthesizing facial images
CN105574545B (en) The semantic cutting method of street environment image various visual angles and device
CN106844739A (en) A kind of Remote Sensing Imagery Change information retrieval method based on neutral net coorinated training
CN110309835A (en) A kind of image local feature extracting method and device
WO2023273337A1 (en) Representative feature-based method for detecting dense targets in remote sensing image
CN106250918B (en) A kind of mixed Gauss model matching process based on improved soil-shifting distance
CN105740917B (en) The semi-supervised multiple view feature selection approach of remote sensing images with label study
Zhu et al. Rapid ship detection in SAR images based on YOLOv3
Yang et al. Visual SLAM based on semantic segmentation and geometric constraints for dynamic indoor environments
CN105320963B (en) The semi-supervised feature selection approach of large scale towards high score remote sensing images
CN116824485A (en) Deep learning-based small target detection method for camouflage personnel in open scene
Jain et al. Analyzing and improving neural networks by generating semantic counterexamples through differentiable rendering
Zhao et al. Robust real-time object detection based on deep learning for very high resolution remote sensing images
CN113011359B (en) Method for simultaneously detecting plane structure and generating plane description based on image and application
Li et al. Lightweight automatic identification and location detection model of farmland pests

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant