CN105590086A - Article antitheft detection method based on visual tag identification


Info

Publication number
CN105590086A
CN105590086A
Authority
CN
China
Prior art keywords
article
detection method
matching
antitheft detection
carry out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201410651379.2A
Other languages
Chinese (zh)
Inventor
李静
赵磊
黄韵
聂永峰
傅俊锋
李增胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XI'AN SAMING TECHNOLOGY Co Ltd
Original Assignee
XI'AN SAMING TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XI'AN SAMING TECHNOLOGY Co Ltd
Priority to CN201410651379.2A
Publication of CN105590086A
Withdrawn legal-status Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses an article anti-theft detection method based on visual tag recognition, which matches local features between frame images extracted from a video stream and database images to perform anti-theft detection, improving detection speed, reliability and accuracy. The method overcomes the problems of conventional video-based article anti-theft detection, namely the low reliability of background modeling and motion segmentation over a video sequence, and the low accuracy of article recognition under complex environmental conditions. The method comprises the implementation steps of: (1) extracting local features of the monitored article and building a visual tag database; (2) extracting one frame image from the video stream at a fixed time interval; (3) extracting the same type of local features from the frame, matching them against the local features in the visual tag database, and removing false matching point pairs; and (4) judging whether the number of matching point pairs exceeds a threshold. The method needs no sequence information: a single frame image suffices for article anti-theft detection, which improves detection speed. In addition, matching local invariant features allows detection and recognition even when the article is partially occluded or the illumination changes, which ensures detection accuracy.

Description

Article anti-theft detection method based on visual tag recognition
Technical field
The invention belongs to the technical field of image processing and relates to computer vision, in particular to image feature extraction and image feature matching; article anti-theft detection is performed by comparing the number of matching pairs of feature points between images against a set threshold.
Technical background
Article anti-theft is a core function of video surveillance systems. Traditional video-based article anti-theft detection methods normally perform background modeling and motion segmentation on a video sequence and must analyze multiple frames, so they are not only computationally slow but also unreliable. When the monitored article is partially occluded, or when the illumination of the captured video changes, traditional methods cannot recognize the article accurately.
According to the region from which feature information is extracted, low-level visual features can be divided into global features and local features. Global features are simple and effective to use, but they cannot describe local regions of an image well, so their matching accuracy is limited. Local feature methods generally first locate the feature points of an image with a feature detection method, then generate local feature vector descriptors from the local information around each feature point, and represent the image by the set of extracted descriptors. A good local feature descriptor should be highly repeatable and distinctive. SIFT features, which are widely used, are invariant to image rotation, scaling and translation, adapt to illumination changes and affine transformations to a certain extent, and are comparatively robust. Extracting local features from the video image can therefore guarantee matching accuracy to a certain degree under conditions such as occlusion and illumination variation.
The present invention extracts local features from the articles to be monitored, builds a visual tag database, and performs article anti-theft detection by matching local features between frame images extracted from the video and the database images.
Summary of the invention
In view of the deficiencies of the prior art and the needs of commercial application, the invention proposes an article anti-theft detection method based on visual tag recognition, which performs anti-theft detection by matching local features between frame images extracted from the video and database images, improving the speed, reliability and accuracy of detection.
To realize the above functions, the method of the present invention comprises the following steps:
(1) Extract local features from the articles to be monitored and build a visual tag database for feature matching.
(2) Extract one frame image from the video stream at a fixed time interval.
(3) Extract the same type of local features from the frame image, match them against the tag files of the visual tag database, and remove false matching point pairs.
(4) Set a threshold on the number of matching point pairs; if the computed number of matching pairs is greater than the threshold, the state is normal; if it is less than the threshold, raise an alarm.
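The four steps above reduce to a single detection loop. The following Python sketch is a minimal illustration only, not part of the disclosed embodiment; the names `check_frame`, `tag_db` and the pluggable `match_fn` are assumptions:

```python
def check_frame(frame_descriptors, tag_db, match_fn, threshold):
    """Steps (3)-(4): match one frame against every tag in the database
    and decide normal vs. alarm from the matching-pair count."""
    for tag_name, tag_descriptors in tag_db.items():
        n_matches = match_fn(frame_descriptors, tag_descriptors)
        if n_matches > threshold:
            return True   # monitored article still present: normal
    return False          # no tag matched well enough: raise alarm
```

Here `match_fn` stands in for the local-feature matching of step (3), so the alarm decision is decoupled from the particular feature type used.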
This method improves on conventional methods; its advantages are mainly the following:
(1) No sequence information is needed: a single captured frame image suffices for article anti-theft detection, which guarantees detection speed.
(2) Local invariant feature matching allows accurate detection and recognition even when the article is partially occluded or the illumination changes, which guarantees detection accuracy.
Brief description of the drawings
Fig. 1 is system block diagram of the present invention.
Detailed description of the invention
The method of the present invention is explained in further detail below. It should be noted that the described embodiments are only intended to facilitate understanding of the invention and do not restrict it in any way.
The present invention includes following steps:
Step 1: build the visual tag database of the monitored articles.
(1.1) Collect multiple images of each article to be monitored under different viewing angles and different illumination conditions.
(1.2) The visual tag database is the set of feature descriptors of the articles.
Step 2: extract local features (including but not limited to SIFT features) from the collected images and save the extracted features to files.
(2.1) Build the scale space. The scale space L(x, y, σ) of an image is defined as the convolution of the original image I(x, y) with a variable-scale two-dimensional Gaussian function G(x, y, σ):
L(x, y, σ) = G(x, y, σ) * I(x, y)
The two-dimensional Gaussian function is:
G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))
where (x, y) are the spatial coordinates and σ denotes the scale. A difference-of-Gaussians (DoG) pyramid is used to detect stable key points in scale space efficiently: at a given scale, two adjacent Gaussian scale spaces are subtracted to obtain the DoG response image D(x, y, σ), on which a local-maximum search locates feature points in both spatial position and scale.
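The scale-space and DoG construction of step (2.1) can be sketched with a separable Gaussian blur in NumPy. This is an illustrative simplification (the function names are assumptions, and the image must be larger than the Gaussian kernel):

```python
import numpy as np

def gaussian_kernel1d(sigma):
    # discretized 1-D Gaussian, truncated at about 3 sigma
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # G(x, y, sigma) * I(x, y) via separable row/column convolution;
    # assumes the image side exceeds the kernel length
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def dog_pyramid(img, sigma0=1.6, k=2 ** 0.5, levels=4):
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma) for adjacent scales
    blurred = [gaussian_blur(img, sigma0 * k ** i) for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]
```

Subtracting adjacent blur levels rather than filtering with an explicit DoG kernel is the standard way to obtain the responses D(x, y, σ) cheaply.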
(2.2) Detect the extreme points of the DoG scale space. Each pixel is compared with all of its neighbors and is an extreme point when it is greater than (or less than) all of them in both image space and scale space. The comparison range is a 3 × 3 × 3 cube: the test point is compared with its 8 neighbors at the same scale and with the 9 × 2 points at the adjacent scales above and below, 26 points in total, which guarantees that extreme points are detected in both the two-dimensional image space and the scale space.
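The 26-point comparison of step (2.2) can be written directly; the following sketch (the function name is an assumption) tests one candidate against its 3 × 3 × 3 cube:

```python
import numpy as np

def is_extremum(dog_stack, s, y, x):
    """True if dog_stack[s][y, x] is strictly greater (or smaller) than
    all 26 neighbors in image space and scale space."""
    cube = np.stack([dog_stack[s + ds][y - 1:y + 2, x - 1:x + 2]
                     for ds in (-1, 0, 1)])
    neighbors = np.delete(cube.ravel(), 13)   # index 13 is the center point
    v = dog_stack[s][y, x]
    return bool(v > neighbors.max() or v < neighbors.min())
```

`dog_stack` is a list of same-sized DoG response images at adjacent scales, and (s, y, x) must be an interior point so the full cube exists.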
(2.3) Locate the extreme points accurately. The position and scale of each key point are determined precisely by curve-fitting the DoG function in scale space; at the same time, low-contrast key points and unstable edge-response points are removed, which strengthens matching stability and improves robustness to noise.
(2.4) Orientation assignment. To achieve invariance to image rotation, an orientation is assigned to each key point from the distribution of the gradient histogram of the pixels in its neighborhood. The gradient magnitude m(x, y) and orientation θ(x, y) at (x, y) are computed as follows, where the scale σ used for L is the scale at which the key point was found:
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]
To assign the orientation, the gradients of the Gaussian image in the neighborhood of each key point are computed first; a histogram then accumulates the gradient orientations of the neighborhood pixels weighted by their magnitudes, and the peak of the histogram is the dominant orientation of the key point's neighborhood gradients.
(2.5) Key point description. To guarantee rotation invariance, the coordinate axes are rotated to the dominant orientation of the key point, centered on the feature point. A 16 × 16 neighborhood around the key point is divided into 4 × 4 subregions; a gradient histogram is computed in each subregion, forming one seed point that carries gradient strength information for 8 orientations. With 4 × 4 seed points, each key point yields a feature vector of 128 dimensions.
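The 4 × 4 × 8 = 128-dimensional descriptor of step (2.5) can be assembled from per-cell orientation histograms. This simplified sketch omits the Gaussian weighting and trilinear interpolation of full SIFT; the function name is an assumption:

```python
import numpy as np

def sift_like_descriptor(mag, ang):
    """Build a 128-d vector from 16x16 gradient magnitude/orientation
    arrays: 4x4 cells, 8 orientation bins per cell, then L2-normalize."""
    desc = []
    for cy in range(0, 16, 4):
        for cx in range(0, 16, 4):
            cell_mag = mag[cy:cy + 4, cx:cx + 4].ravel()
            cell_ang = ang[cy:cy + 4, cx:cx + 4].ravel()
            hist, _ = np.histogram(cell_ang, bins=8, range=(0, 360),
                                   weights=cell_mag)
            desc.extend(hist)
    desc = np.asarray(desc)
    return desc / (np.linalg.norm(desc) + 1e-12)  # illumination robustness
```

The final L2 normalization is what gives the descriptor its partial invariance to illumination change, which the method relies on.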
(2.6) The feature descriptors of the images are saved to files, which constitute the visual tag database.
Step 3: extract one frame image from the surveillance video at a fixed time interval.
(3.1) Determine the time interval in view of the image processing speed.
(3.2) Extract the same type of local features from the frame image extracted from the video.
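Steps (3.1)-(3.2) reduce to sampling the stream at a fixed stride. A generic sketch over any frame iterable (a real system would read frames through a video-capture API, which the disclosure does not specify):

```python
import itertools

def sample_frames(frame_iter, every_n):
    """Yield every n-th frame from a stream, i.e. one frame per fixed
    time interval when the stream has a constant frame rate."""
    return itertools.islice(frame_iter, 0, None, every_n)
```

Each yielded frame would then be passed to the same local-feature extractor used to build the database.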
Step 4: match the local features extracted from the video image against the tag files of the visual tag database and compute the number of matching point pairs.
(4.1) The Euclidean distance between key-point feature vectors is adopted as the similarity measure between key points of the two images.
(4.2) For a given key point in the video image, find the two key points with the smallest Euclidean distances to it in a given tag file of the visual tag database.
(4.3) If, for these two key points, the nearest distance divided by the second-nearest distance is less than the set proportion threshold ratio, accept the pair of matching points. The ratio value can be set between 0.4 and 0.6.
(4.4) Compute the number of matching point pairs between the video image and the tag file of the visual tag database.
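Steps (4.1)-(4.4) correspond to nearest-neighbor matching with a distance-ratio test. A NumPy sketch (the function name is an assumption; the default ratio of 0.5 is taken from the 0.4-0.6 range above, and the tag file must contain at least two descriptors):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.5):
    """Match rows of desc_a to rows of desc_b by Euclidean distance,
    accepting a pair only if nearest < ratio * second-nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The ratio test discards ambiguous key points whose two best candidates are nearly equidistant, which is what removes most false pairs before RANSAC.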
Step 5: remove false matching point pairs with the mismatch-removal algorithm RANSAC and record the number of remaining feature matches.
(5.1) The SIFT feature vectors of the two images form the data set; 4 matching point pairs are selected at random to form a RANSAC sample, and the transformation matrix M is computed from these matching pairs.
(5.2) Using the computed transformation matrix M and an error metric function, find the consensus set of the data set that is consistent with M and return the number of its elements.
(5.3) If the number of elements of the returned consensus set is greater than that of the current consensus set, update the current optimal consensus set.
(5.4) Update the current error probability p; if p is greater than the minimum error probability threshold, continue iterating until p is less than the minimum error probability.
(5.5) Record the number of matches remaining after the false matching pairs have been removed.
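The RANSAC loop of step 5 can be illustrated as follows. For brevity this sketch fits an affine model from 3 point pairs by least squares rather than the 4-pair homography of (5.1), and uses a fixed iteration count instead of the error-probability stopping rule of (5.4); the function name and defaults are assumptions:

```python
import numpy as np

def ransac_inliers(src, dst, n_iter=200, tol=3.0, seed=0):
    """Return a boolean mask of point pairs consistent with the best model."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)
        A = np.hstack([src[idx], np.ones((3, 1))])        # [x, y, 1] rows
        M, *_ = np.linalg.lstsq(A, dst[idx], rcond=None)  # 3x2 affine matrix
        residual = np.linalg.norm(np.hstack([src, ones]) @ M - dst, axis=1)
        inliers = residual < tol                          # consensus set
        if inliers.sum() > best.sum():                    # keep best consensus
            best = inliers
    return best
```

`best.sum()` is then the matching-pair count compared against the threshold in step 6.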
Step 6: compare the number of matching point pairs with the threshold.
(6.1) Set the threshold on the number of matching point pairs; if the computed number of matching pairs is greater than the threshold, the state is normal.
(6.2) If it is less than the threshold, raise an alarm.

Claims (2)

1. An article anti-theft detection method based on visual tag recognition, characterized in that: first, no sequence information is needed, and a single captured frame image suffices for article anti-theft detection; second, by adopting local invariant feature matching, accurate detection and recognition are achieved even when the article is partially occluded or the illumination changes; the method thereby solves the problems of conventional methods, in which background modeling and motion segmentation over a video sequence have low reliability and accurate article recognition is impossible under conditions such as partial occlusion and illumination variation.
2. The method according to claim 1, characterized by comprising the steps of:
extracting local features from the articles to be monitored and building a visual tag database for feature matching;
extracting one frame image from the video stream at a fixed time interval;
extracting the same type of local features from the frame image, matching them against the tag files of the visual tag database, and removing false matching point pairs; and
setting a threshold on the number of matching point pairs: if the computed number of matching pairs is greater than the threshold, the state is normal; if it is less than the threshold, raising an alarm.
CN201410651379.2A 2014-11-17 2014-11-17 Article antitheft detection method based on visual tag identification Withdrawn CN105590086A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410651379.2A CN105590086A (en) 2014-11-17 2014-11-17 Article antitheft detection method based on visual tag identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410651379.2A CN105590086A (en) 2014-11-17 2014-11-17 Article antitheft detection method based on visual tag identification

Publications (1)

Publication Number Publication Date
CN105590086A true CN105590086A (en) 2016-05-18

Family

ID=55929658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410651379.2A Withdrawn CN105590086A (en) 2014-11-17 2014-11-17 Article antitheft detection method based on visual tag identification

Country Status (1)

Country Link
CN (1) CN105590086A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1812570A (en) * 2005-12-31 2006-08-02 浙江工业大学 Vehicle antitheft device based on omnibearing computer vision
CN102324042A (en) * 2011-09-13 2012-01-18 盛乐信息技术(上海)有限公司 Visual identifying system and visual identity method
CN102968619A (en) * 2012-11-13 2013-03-13 北京航空航天大学 Recognition method for components of Chinese character pictures
US8437558B1 (en) * 2009-10-08 2013-05-07 Hrl Laboratories, Llc Vision-based method for rapid directed area search
CN104036285A (en) * 2014-05-12 2014-09-10 新浪网技术(中国)有限公司 Spam image recognition method and system


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689635A (en) * 2018-07-05 2020-01-14 象山一居乐电子有限公司 Express receiving method and device, application and user terminal
CN111080525A (en) * 2019-12-19 2020-04-28 成都海擎科技有限公司 Distributed image and primitive splicing method based on SIFT (Scale invariant feature transform) features
CN111080525B (en) * 2019-12-19 2023-04-28 成都海擎科技有限公司 Distributed image and graphic primitive splicing method based on SIFT features
CN111369754A (en) * 2020-03-05 2020-07-03 吉舵物联科技有限公司 Intelligent anti-theft alarm method and alarm system
CN111914670A (en) * 2020-07-08 2020-11-10 浙江大华技术股份有限公司 Method, device and system for detecting left-over article and storage medium

Similar Documents

Publication Publication Date Title
Luvizon et al. A video-based system for vehicle speed measurement in urban roadways
Yu et al. Trajectory-based ball detection and tracking in broadcast soccer video
WO2017190656A1 (en) Pedestrian re-recognition method and device
Delannay et al. Detection and recognition of sports (wo) men from multiple views
CN104978567B (en) Vehicle checking method based on scene classification
CN103473551A (en) Station logo recognition method and system based on SIFT operators
CN104200495A (en) Multi-target tracking method in video surveillance
CN106709500B (en) Image feature matching method
CN104616297A (en) Improved SIFI algorithm for image tampering forensics
KR101130963B1 (en) Apparatus and method for tracking non-rigid object based on shape and feature information
Benseddik et al. SIFT and SURF Performance evaluation for mobile robot-monocular visual odometry
CN111145223A (en) Multi-camera personnel behavior track identification analysis method
Teutsch et al. Robust detection of moving vehicles in wide area motion imagery
JP2012531130A (en) Video copy detection technology
CN105590086A (en) Article antitheft detection method based on visual tag identification
Phan et al. Recognition of video text through temporal integration
Abdellali et al. L2d2: Learnable line detector and descriptor
Chen et al. Object tracking over a multiple-camera network
Donate et al. Shot boundary detection in videos using robust three-dimensional tracking
CN111104857A (en) Identity recognition method and system based on gait energy diagram
Kang et al. Edge and feature points based video intra-frame passive-blind copy-paste forgery detection
KR101528757B1 (en) Texture-less object recognition using contour fragment-based features with bisected local regions
WO2017179728A1 (en) Image recognition device, image recognition method, and image recognition program
CN114511803A (en) Target occlusion detection method for visual tracking task
Quach et al. A model-based approach to finding tracks in SAR CCD images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
CB02 Change of applicant information

Address after: Room A311, Photoelectron Professional Incubator Building, No. 77, 2nd Road, Hi-tech Zone, Xi'an, Shaanxi, 710075

Applicant after: Xi'an Sanming Polytron Technologies Inc

Address before: Room A311, Photoelectron Professional Incubator Building, No. 77, 2nd Road, Hi-tech Zone, Xi'an, Shaanxi, 710075

Applicant before: Xi'an Saming Technology Co., Ltd.

SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20160518