CN116258984A - Object recognition system - Google Patents

Object recognition system

Info

Publication number
CN116258984A
Authority
CN
China
Prior art keywords
target object
information
target
identified
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310524224.1A
Other languages
Chinese (zh)
Other versions
CN116258984B (en)
Inventor
李睿
黄少卿
申震云
吴阿鹏
侯远哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Travelsky Mobile Technology Co Ltd
Original Assignee
China Travelsky Mobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Travelsky Mobile Technology Co Ltd filed Critical China Travelsky Mobile Technology Co Ltd
Priority to CN202310524224.1A
Publication of CN116258984A
Application granted
Publication of CN116258984B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5854 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/587 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/422 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation for representing the structure of the pattern or shape of an object therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a system for object recognition, comprising: an AR device for scanning an object, a database, a number of RFID tags mounted on target objects, a processor, and a memory storing a computer program, the database comprising a target object information list corresponding to a plurality of target objects. By storing in advance the appearance characteristic information corresponding to each target object, when an object whose RFID tag has fallen off is found, the most likely owner of the object is determined from the object's appearance characteristics and the place and time at which it was found, avoiding the problem that an object whose RFID tag has fallen off cannot be traced to its corresponding user.

Description

Object recognition system
Technical Field
The invention relates to the field of data processing, in particular to a system for object identification.
Background
In the prior art, when a user checks in baggage for a flight, the RFID tag placed on the baggage may drop off or be lost, making it impossible to determine the user corresponding to a target object whose RFID tag can no longer be identified. This is especially likely on international journeys, where the user's baggage passes through several transfer stations and mishandled, missed, or lost baggage may fail to be traced back to its user. An effective object recognition system is therefore important for the accuracy of baggage identification.
Disclosure of Invention
Aiming at the technical problems, the invention adopts the following technical scheme:
a system for object recognition, comprising: the device comprises an AR device, a database, a plurality of RFID labels, a processor and a memory storing a computer program, wherein the AR device is used for scanning an object, the RFID labels are installed on the target object, and the database comprises: target object information list a= (a) corresponding to a plurality of target objects 1 ,A 2 ,……,A i ,……,A m ) I=1, 2, … …, m, m is the number of target object information, where the i-th target object information a i =(DA i ,LA i ,WA i ,GA i1 ,GA i2 ,……,GA it ,……,GA iT(i) ),DA i For the user identification corresponding to the ith target object, LA i For DA i Corresponding user ID, WA i For the appearance characteristic information corresponding to the ith target object, t=1, 2, … …, T (i) is DA i The corresponding number of target location points that the user has passed through, GA it For DA i Target position point information of the corresponding t-th target position point, target positionThe point information includes: longitude and latitude coordinates and DA of target position point i The time at which the corresponding user arrives at the target location point.
When the computer program is executed by a processor, the following steps are implemented:
S100, in response to the AR device scanning an object to be identified, acquire the object-to-be-identified information A_0 = (WA_0, TA_0, GA_0), where WA_0 is the appearance characteristic information of the object to be identified, TA_0 is the time at which the AR device scanned it, and GA_0 is its longitude and latitude coordinates; the object to be identified is a target object whose RFID tag has fallen off.
S200, traverse A according to WA_0 to obtain a first target object information list A1 = (A1_1, A1_2, …, A1_j, …, A1_n), j = 1, 2, …, n, where n is the number of first target object information entries; A1_j, the j-th first target object information, is target object information whose appearance characteristic information has a matching degree with WA_0 greater than a preset matching degree threshold X_0.
S300, when n > 1, acquire a second target object information list A2 = (A2_1, A2_2, …, A2_h, …, A2_H) according to A_0 and A1, h = 1, 2, …, H, where H is the number of second target object information entries; A2_h, the h-th second target object information, is first target object information that has a position point within the preset time range T_0 and within the preset distance range J_0 of GA_0.
S400, when H > 1, acquire the target priority list Y1 = (Y1_1, Y1_2, …, Y1_h, …, Y1_H) according to A2 and A_0, where Y1_h is the target priority corresponding to A2_h and satisfies the condition given by an equation image in the original document (not reproduced in this text), defined in terms of the following quantities: JL0_h is the distance between GA_0 and WA2_h, the position point in A2_h's position point information that is closest to GA_0, and T1_h is the time of arrival at WA2_h recorded in A2_h's position point information.
S500, take the second target object information corresponding to Y1_max as the key object information, and transmit the user ID corresponding to the key object information to the AR device, where Y1_max = max(Y1) and max() is the maximum-value function.
The invention has at least the following beneficial effects:
In response to the AR device scanning an object to be identified, the system acquires the object-to-be-identified information and builds a first target object information list from the target objects whose appearance characteristic information matches that of the object to be identified above a preset matching degree threshold. From these it selects, as second target object information, the entries whose position points fall within a preset time range and within a preset distance of the object to be identified. A priority is then computed for each second target object information entry from its distance to the object to be identified and the arrival time at its nearest position point, yielding a target priority list; the entry with the highest priority is taken as the key object information, and the corresponding user ID is transmitted to the AR device. Because RFID tags are easily damaged or lost, this avoids the problem that the user corresponding to a target object whose RFID tag cannot be identified cannot be determined.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a system for object recognition according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
The present invention provides a system for object recognition, the system comprising: an AR device for scanning an object, a database, a number of RFID tags mounted on the target objects, a processor, and a memory storing a computer program, the database comprising: a target object information list A = (A_1, A_2, …, A_i, …, A_m) corresponding to a plurality of target objects, i = 1, 2, …, m, where m is the number of target object information entries and the i-th target object information A_i = (DA_i, LA_i, WA_i, GA_i1, GA_i2, …, GA_it, …, GA_iT(i)). DA_i is the user identification corresponding to the i-th target object; LA_i is the user ID corresponding to DA_i; WA_i is the appearance characteristic information corresponding to the i-th target object; t = 1, 2, …, T(i), where T(i) is the number of target position points passed by the user corresponding to DA_i; GA_it is the target position point information of the t-th target position point corresponding to DA_i, and includes the longitude and latitude coordinates of the target position point and the time at which the user corresponding to DA_i arrived at it.
In the embodiment of the invention, the user identification is a unique identifier linking a target object to its corresponding user.
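To make the record layout concrete, the A_i structure described above might be modeled as in the following minimal sketch; all class and field names are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class LocationPoint:
    lon: float          # longitude of the target position point
    lat: float          # latitude of the target position point
    arrived_at: float   # time the user reached this point (epoch seconds)

@dataclass
class TargetObjectInfo:
    user_tag: str       # DA_i: user identification linking object and user
    user_id: str        # LA_i: contact ID (e.g. phone number or email)
    appearance: list    # WA_i: appearance feature vector
    trail: list = field(default_factory=list)  # GA_i1 … GA_iT(i)

record = TargetObjectInfo("DA-001", "user@example.com", [0.2, 0.8],
                          [LocationPoint(116.4, 39.9, 1_700_000_000.0)])
```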
When the computer program is executed by a processor, as shown in fig. 1, the following steps are implemented:
S100, in response to the AR device scanning an object to be identified, acquire the object-to-be-identified information A_0 = (WA_0, TA_0, GA_0), where WA_0 is the appearance characteristic information of the object to be identified, TA_0 is the time at which the AR device scanned it, and GA_0 is its longitude and latitude coordinates; the object to be identified is a target object whose RFID tag has fallen off.
Specifically, the AR device may be AR smart glasses, head-ring AR smart glasses, an AR smart helmet, or a binocular AR helmet.
Preferably, the AR device is head-ring AR smart glasses.
In the embodiment of the invention, the appearance characteristic information can be understood as an appearance feature vector, obtained by extracting features from the appearance of the object to be identified and processing those features into a vector. Any method of processing appearance features into an appearance feature vector falls within the protection scope of the present invention and is not described further here.
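Since the patent deliberately leaves the extraction method open, the sketch below shows just one toy possibility: a normalized intensity histogram as the appearance feature vector. The function name and the binning scheme are assumptions for illustration only.

```python
def appearance_vector(pixels, bins=8):
    """Toy appearance feature: a normalized intensity histogram over
    8-bit pixel values. A stand-in for whatever extractor a real
    implementation would use."""
    hist = [0] * bins
    flat = [p for row in pixels for p in row]
    for p in flat:
        hist[min(p * bins // 256, bins - 1)] += 1  # bucket by intensity
    total = len(flat)
    return [h / total for h in hist]              # normalize to sum to 1

vec = appearance_vector([[0, 32, 255], [128, 128, 200]])
```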
S200, traverse A according to WA_0 to obtain a first target object information list A1 = (A1_1, A1_2, …, A1_j, …, A1_n), j = 1, 2, …, n, where n is the number of first target object information entries; A1_j, the j-th first target object information, is target object information whose appearance characteristic information has a matching degree with WA_0 greater than a preset matching degree threshold X_0.
Specifically, the matching degree between first target object information and WA_0 can be understood as the similarity between that information's appearance characteristic information and WA_0.
Any method of calculating this similarity falls within the protection scope of the present invention and is not described further here.
Further, X_0 ∈ [85%, 95%].
Preferably, X_0 = 90%.
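Step S200 with the preferred X_0 = 90% could be sketched as follows, assuming appearance feature vectors and using cosine similarity as the (unspecified) matching-degree measure:

```python
import math

def cosine(u, v):
    # cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def first_candidates(wa0, objects, x0=0.90):
    # A1: every A_i whose appearance vector matches WA_0 above X_0
    return [obj for obj in objects if cosine(wa0, obj["wa"]) > x0]

objs = [{"id": 1, "wa": [1.0, 0.0]},
        {"id": 2, "wa": [0.9, 0.1]},
        {"id": 3, "wa": [0.0, 1.0]}]
a1 = first_candidates([1.0, 0.0], objs)
```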
S300, when n > 1, acquire a second target object information list A2 = (A2_1, A2_2, …, A2_h, …, A2_H) according to A_0 and A1, h = 1, 2, …, H, where H is the number of second target object information entries; A2_h, the h-th second target object information, is first target object information that has a position point within the preset time range T_0 and within the preset distance range J_0 of GA_0.
In the embodiment of the present invention, when n = 1, A1_1 is taken as the key object information, and the user ID corresponding to the key object information is transmitted to the AR device.
Specifically, T_0 = [TA_0 - T1_0, TA_0 + T1_0], where T1_0 is a first preset time threshold.
Further, T1_0 ∈ [1 h, 3 h].
In the embodiment of the invention, J_0 ∈ [1 km, 3 km].
Preferably, T1_0 = 2 h and J_0 = 2 km. Because the object may be the user's checked baggage, which cannot be carried on board, during consignment the baggage does not travel far from the user and arrives at roughly the same time the user does; baggage whose distance from the user, or whose arrival time, differs too much cannot belong to that user. Setting appropriate time and distance ranges therefore improves the accuracy of matching baggage to its user.
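Under the preferred values T1_0 = 2 h and J_0 = 2 km, the S300 filter might look like the following sketch; the haversine distance and the trail data shape are assumptions:

```python
import math

def haversine_km(lon1, lat1, lon2, lat2):
    # great-circle distance between two lon/lat points, in kilometres
    la1, la2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = la2 - la1, math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(la1) * math.cos(la2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def second_candidates(a1, ga0, ta0, t1_0=2 * 3600, j0=2.0):
    # A2: candidates with a trail point inside [TA_0 - T1_0, TA_0 + T1_0]
    # and within J_0 km of GA_0 (preferred values from the description)
    out = []
    for obj in a1:
        for lon, lat, t in obj["trail"]:
            if abs(t - ta0) <= t1_0 and haversine_km(lon, lat, *ga0) <= j0:
                out.append(obj)
                break
    return out

a1 = [{"id": 1, "trail": [(116.40, 39.90, 1000.0)]},
      {"id": 2, "trail": [(118.00, 39.90, 1000.0)]}]
a2 = second_candidates(a1, ga0=(116.41, 39.90), ta0=2000.0)
```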
S400, when H > 1, acquire the target priority list Y1 = (Y1_1, Y1_2, …, Y1_h, …, Y1_H) according to A2 and A_0, where Y1_h is the target priority corresponding to A2_h and satisfies the condition given by an equation image in the original document (not reproduced in this text), defined in terms of the following quantities: JL0_h is the distance between GA_0 and WA2_h, the position point in A2_h's position point information that is closest to GA_0, and T1_h is the time of arrival at WA2_h recorded in A2_h's position point information.
In the embodiment of the present invention, when H = 1, A2_1 is taken as the key object information, and the user ID corresponding to the key object information is transmitted to the AR device.
S500, take the second target object information corresponding to Y1_max as the key object information, and transmit the user ID corresponding to the key object information to the AR device, where Y1_max = max(Y1) and max() is the maximum-value function.
In the embodiment of the invention, the user ID may be a contact manner of the user corresponding to the target object, for example, a mobile phone number or an email account.
Specifically, the user ID is displayed at the upper right corner position of the AR device.
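The source does not reproduce the equation image defining Y1_h, so the sketch below substitutes an assumed stand-in that is merely monotone in the two quantities the text names (the distance JL0_h and the arrival-time gap |T1_h - TA_0|); it illustrates the S400/S500 selection flow, not the patent's actual formula.

```python
def target_priority(jl0_km, t1, ta0, j0=2.0, t1_0=2 * 3600):
    # ASSUMED stand-in for the patent's unreproduced formula: priority
    # rises as the nearest trail point gets closer to GA_0 and as the
    # arrival time there gets closer to the scan time TA_0.
    return 1.0 / (1.0 + jl0_km / j0 + abs(t1 - ta0) / t1_0)

def pick_key_object(a2, ta0):
    # S500: the second target object with the highest priority wins
    return max(a2, key=lambda o: target_priority(o["jl0"], o["t1"], ta0))

a2 = [{"uid": "u1", "jl0": 0.5, "t1": 900.0},
      {"uid": "u2", "jl0": 1.8, "t1": 6000.0}]
key = pick_key_object(a2, ta0=1000.0)
```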
In response to the AR device scanning an object to be identified, the system acquires the object-to-be-identified information and builds a first target object information list from the target objects whose appearance characteristic information matches that of the object to be identified above a preset matching degree threshold. From these it selects, as second target object information, the entries whose position points fall within a preset time range and within a preset distance of the object to be identified. A priority is then computed for each second target object information entry from its distance to the object to be identified and the arrival time at its nearest position point, yielding a target priority list; the entry with the highest priority is taken as the key object information, and the corresponding user ID is transmitted to the AR device. Because RFID tags are easily damaged or lost, this avoids the problem that the user corresponding to a target object whose RFID tag cannot be identified cannot be determined.
In the embodiment of the invention, the system further comprises an object detection device, wherein x key cameras are arranged in the object detection device and are used for shooting images when detecting that a target object passes through the object detection device.
In the embodiment of the present invention, x = 3, which can be understood as the object detection device internally mounting three key cameras, installed at the middle positions of the top, left side, and right side of the device's interior.
Specifically, WA_i is acquired through the following steps:
S101, acquire the target object image list set TX_i = (TX_i1, TX_i2, …, TX_ik, …, TX_ix) corresponding to the i-th target object, k = 1, 2, …, x, where x is the number of target object image lists; the k-th target object image list TX_ik = (TX_ik1, TX_ik2, …, TX_ikc, …, TX_ikv(k)), c = 1, 2, …, v(k), where v(k) is the number of images of the i-th target object taken by the k-th key camera and TX_ikc is the c-th such image.
Specifically, when the target object passes through the object detection device, the key camera continuously shoots the target object to obtain a target object image list.
Further, while the key camera shoots the target object, the target object moves at a constant speed in the same direction in the object detection device and is not in a static state.
S102, extract image features from each target object image to obtain the target object image feature dimension list set SW_i = (SW_i1, SW_i2, …, SW_ik, …, SW_ix), where the feature dimension list corresponding to TX_ik is SW_ik = (SW_ik1, SW_ik2, …, SW_ikc, …, SW_ikv(k)) and SW_ikc is the image feature dimension corresponding to TX_ikc.
Specifically, any method for extracting image features of the target object image to obtain the feature dimension of the target object image falls within the protection scope of the present invention, and will not be described herein.
Further, a larger image feature dimension indicates that the target object image contains more features.
S103, obtain the target object image definition list set QW_i = (QW_i1, QW_i2, …, QW_ik, …, QW_ix) corresponding to TX_i, where the definition list corresponding to TX_ik is QW_ik = (QW_ik1, QW_ik2, …, QW_ikc, …, QW_ikv(k)) and QW_ikc is the image definition (sharpness) corresponding to TX_ikc.
Specifically, any method of obtaining the definition of a target object image falls within the protection scope of the present invention and is not described further here.
Further, the larger the definition value, the sharper the target object image.
S104, acquire the second priority list set Y2_i = (Y2_i1, Y2_i2, …, Y2_ik, …, Y2_ix) corresponding to TX_i according to SW_i and QW_i, where the second priority list corresponding to TX_ik is Y2_ik = (Y2_ik1, Y2_ik2, …, Y2_ikc, …, Y2_ikv(k)) and Y2_ikc, the second priority corresponding to TX_ikc, satisfies: Y2_ikc = β·SW_ikc + γ·QW_ikc, where β is a first preset weight value, γ is a second preset weight value, and β + γ = 1.
Preferably, β = 0.5 and γ = 0.5. Since image features must subsequently be extracted from the target object image, the image must be sufficiently clear for the extracted features to be accurate while also containing as many features as possible; the two preset weight values are therefore set equal.
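The weighted score of S104 and the per-camera selection of S105/S106 can be sketched as follows, assuming SW and QW values have already been normalized to comparable [0, 1] scales (the patent does not specify a normalization):

```python
def second_priority(sw, qw, beta=0.5, gamma=0.5):
    # Y2_ikc = beta * SW_ikc + gamma * QW_ikc, with beta + gamma = 1
    return beta * sw + gamma * qw

def key_image_per_camera(frames):
    # frames: one camera's list of (image_id, feature_dim, sharpness);
    # S105/S106 keep, for each camera, the frame with the highest Y2
    best = max(frames, key=lambda f: second_priority(f[1], f[2]))
    return best[0]

camera_frames = [("img_a", 0.4, 0.9), ("img_b", 0.8, 0.7), ("img_c", 0.3, 0.3)]
chosen = key_image_per_camera(camera_frames)
```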
S105 according to Y2 i Acquiring a third priority list Y3 i =(Y3 i1 ,Y3 i2 ,……,Y3 ik ,……,Y3 ix ) The method comprises the steps of carrying out a first treatment on the surface of the Wherein Y3 ik The kth third priority corresponding to the ith target object, and Y3 ik Meets the following conditions: y3 ik =max(Y2 ik )。
S106, acquire the key object image list GT_i = (GT_i1, GT_i2, …, GT_ik, …, GT_ix) according to Y3_i, where GT_ik, the k-th key object image corresponding to the i-th target object, is the target object image corresponding to the third priority Y3_ik.
S107, extract image features from each key object image in GT_i to obtain WA_i.
A target object image list set corresponding to the i-th target object is acquired; image features are extracted from each image to obtain a feature dimension list set; an image definition list set is obtained; from these a second priority list set is computed, from which a third priority list is derived; the image corresponding to each third priority is taken as a key object image to form the key object image list; and image features are extracted from each key object image to obtain the appearance characteristic information of the i-th target object. By installing the key cameras at the middle positions of the top, left side, and right side of the object detection device's interior, the target object is photographed comprehensively, ensuring the images contain as many features as possible while remaining sufficiently clear, and avoiding inaccurate appearance characteristic information caused by key object images with few or unclear features.
In another embodiment of the invention, the system further comprises at least one RFID reading device for reading the information within an RFID tag. A target object carrying an RFID tag is determined to be a target object to be identified. When more than one target object to be identified is present within the recognition range of the AR device and the appearance matching degree between them exceeds a preset appearance matching degree threshold W_0, the read-information list MA of the target objects to be identified is obtained through the following steps:
S1, in response to the AR device scanning a target object to be identified, acquire the appearance characteristic information WM corresponding to that target object.
S2, in response to the RFID reading device reading the target objects to be identified, acquire a sixth target object read-information list A6 = (A6_1, A6_2, …, A6_p, …, A6_w), p = 1, 2, …, w, where w is the number of sixth target object read-information entries; A6_p, the p-th sixth target object read information, is object read information whose appearance characteristic information has a matching degree with WM greater than W_0.
Specifically, the sixth target object read information includes: the user identification corresponding to the sixth target object, the user ID corresponding to the sixth target object, the appearance characteristic information corresponding to the sixth target object, the target position point information entries corresponding to the sixth target object's user identification, and the transfer station information entries corresponding to the sixth target object.
S3, in response to the times at which the RFID reading device read the RFID tags of the target objects to be identified, arrange the sixth target object read-information entries in ascending order of read time to obtain a seventh target object read-information list A7 = (A7_1, A7_2, …, A7_p, …, A7_w), where A7_p is the p-th seventh target object read information.
S4, responding to the distance between the target object to be identified and the AR equipment scanned by the AR equipment, and acquiring a target object list WA= (WA) 1 ,WA 2 ,……,WA p ,……,WA w ) The method comprises the steps of carrying out a first treatment on the surface of the Wherein WA p WA for the p-th target object to be identified 1 The distance from the AR device is the farthest.
S5, matching the read information of each seventh target object in the A7 with the target object to be identified to obtain MA= (MA) 1 ,MA 2 ,……,MA p ,……,MA w ) The method comprises the steps of carrying out a first treatment on the surface of the Wherein MA is p For WA p Corresponding target object to be identified reads information and MA p ≌A7 p
By scanning the target objects to be identified with the AR device to obtain their appearance characteristic information, reading them with the RFID reading device to obtain the sixth target object read-information list, sorting that list in ascending order of read time to obtain the seventh target object read-information list, ordering the target objects by their distance from the AR device, and matching the two orderings, the read-information list of the target objects to be identified is obtained. Because the AR device retrieves the user ID from the appearance characteristic information of a scanned object, when more than one target object within its recognition range matches in appearance above the preset appearance matching degree threshold, the device cannot by itself output the correct user ID for each object; the steps above therefore avoid matching a user ID to the wrong target object.
In the embodiment of the invention, GW_i is acquired by the following steps:
S10, when the i-th target object arrives at any object transfer station, first object transfer station information corresponding to the i-th target object is acquired; the first object transfer station information is the object transfer station ID of the object transfer station where the i-th target object is located and the time at which the i-th target object arrived at that object transfer station.
Specifically, the transfer station ID is the unique identification of a transfer station.
S20, the first object transfer station information corresponding to the i-th target object is added to the second object transfer station information corresponding to the i-th target object to obtain GW_i; the second object transfer station information is the object transfer station ID of each object transfer station through which the i-th target object has passed and the time at which the i-th target object arrived at each object transfer station.
S30, according to GW_i, third object transfer station information Z3=(TZ3, DZ3) is acquired; wherein TZ3 is the time in GW_i closest to the current time, and DZ3 is the object transfer station ID corresponding to TZ3;
S40, key user information GL=(DL, WL) corresponding to GW_i is acquired; wherein DL is the user identifier of the key user, WL is the current longitude and latitude of the key user, and the key user is the i-th target user;
S50, a key time difference T_0 is acquired according to Z3 and GL; wherein T_0 satisfies: T_0 = DT - TZ3, DT being the current time;
S60, when T_0 > T2_0, abnormal prompt information is sent to the processor; otherwise, S70 is performed; wherein T2_0 is a second preset time threshold;
S70, a key distance difference GJ is acquired according to Z3 and GL; wherein GJ is the distance between the position of DZ3 and WL;
S80, when GJ > J_0, abnormal prompt information is sent to the processor; wherein J_0 is a preset distance threshold.
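Steps S30 to S80 amount to a two-stage anomaly test: a stale trajectory is checked first, then the user-to-station distance. The following is a minimal sketch under assumed units (epoch seconds, caller-supplied distance function); the function and parameter names are mine, not from the patent.

```python
def check_trajectory(gw, user_latlon, now, t2_0, j_0, dist_fn):
    """Anomaly judgment S30-S80 on one object's trajectory GW_i.

    gw          : list of (arrival_time_seconds, station_id, station_latlon)
    user_latlon : current longitude/latitude of the key user (WL)
    now         : current time DT, epoch seconds
    t2_0        : second preset time threshold, seconds
    j_0         : preset distance threshold, same unit as dist_fn
    Returns True when abnormal prompt information should be sent.
    """
    # S30: the most recent record gives (TZ3, DZ3)
    tz3, dz3, station_latlon = max(gw, key=lambda rec: rec[0])
    # S50/S60: key time difference T_0 = DT - TZ3 against threshold T2_0
    if now - tz3 > t2_0:
        return True
    # S70/S80: key distance difference GJ between DZ3's position and WL
    return dist_fn(station_latlon, user_latlon) > j_0
```

The distance function is left to the caller because the patent does not fix one; a great-circle distance over the longitude and latitude values would be a natural choice.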
When an unextracted object still exists at the preset position after the target time point, the following steps are performed:
A target time point MT is acquired; wherein MT satisfies: MT = DT + T3_0, DT being the time at which the target object arrived at the preset position, and T3_0 being a third preset time threshold.
In the embodiment of the invention, the preset position is an object placement position.
Further, the preset position is located right below the first shooting position.
Further, T3_0 ∈ [15 min, 45 min].
Preferably, T3_0 = 30 min. Investigation shows that, in general, a user takes his or her object from the object placement place within half an hour; when a user has not taken the object after half an hour, an abnormal situation may have occurred, namely the object has been forgotten or an accident has delayed its collection.
In response to reaching MT, a second target object image list A2=(A2_1, A2_2, ……, A2_r, ……, A2_R) corresponding to the second target objects is acquired, r=1, 2, ……, R, R being the number of second target objects; wherein A2_r is the image of the r-th second target object, and a second target object is a target object still at the preset position at time MT.
In the embodiment of the invention, when MT is reached, the first target camera shoots the remaining target objects at the preset position to obtain the second target object image list.
Image features are extracted from each second target object image to obtain a second target object image characteristic information list ZA2=(ZA2_1, ZA2_2, ……, ZA2_r, ……, ZA2_R); wherein ZA2_r is the image characteristic information corresponding to A2_r.
According to A and ZA2, a second target matching degree list set X2=(X2_1, X2_2, ……, X2_r, ……, X2_R) is acquired; wherein the second target matching degree list corresponding to A2_r is X2_r=(X2_r1, X2_r2, ……, X2_ri, ……, X2_rm), X2_ri being the matching degree between ZA2_r and WA_i.
In the embodiment of the present invention, the matching degree may be understood as a degree of similarity; any method of calculating the second target matching degree falls within the protection scope of the present invention and is not described further here.
According to X2, a third target matching degree list X3=(X3_1, X3_2, ……, X3_r, ……, X3_R) is acquired; wherein X3_r is the third target matching degree corresponding to A2_r, and X3_r satisfies: X3_r = max(X2_r), max() being a maximum value determination function.
According to X3, a final object information list FA=(FA_1, FA_2, ……, FA_r, ……, FA_R) corresponding to A2 is acquired; wherein FA_r is the final object information corresponding to A2_r, namely the target object information corresponding to the third target matching degree.
FA is transmitted to the AR device, and prompt information is sent to the terminal corresponding to the user ID in each item of final object information.
At the target time point, the first target camera shoots the target objects remaining at the preset position to obtain the second target object image list; image features are extracted from each second target object image to obtain the second target object image characteristic information list; the matching degree between each item of second target object image characteristic information and the appearance characteristic information of each target object is calculated to obtain the second target matching degree list set; the third target matching degree list is obtained from the second target matching degree list set; the final object information list is obtained and transmitted to the AR device; and prompt information is sent to the terminal corresponding to the user ID in each item of final object information. A user may miss an object or mistakenly take another object, and the storage space of the AR device is limited; by storing in the AR device only the information of objects not taken away after the target time point, the scheme avoids both the poor timeliness of having to query a database when a user urgently needs a missed object and the impossibility of storing all target object information in the limited storage space of the AR device. The timeliness of retrieving object information is thus improved while the storage space of the AR device is used reasonably.
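The post-MT matching above reduces to an argmax over a similarity matrix. The following sketch makes two assumptions of mine: features are plain numeric vectors, and cosine similarity stands in for the matching degree, which the patent deliberately leaves open.

```python
def final_object_info(za2, wa, object_info):
    """After MT: for every remaining-object image, pick the stored target
    object whose appearance features match it best (X3_r = max(X2_r)).

    za2         : feature vectors of the remaining-object images (ZA2)
    wa          : stored appearance feature vectors WA_i
    object_info : list of target object information records A_i
    Returns the final object information list FA.
    """
    def cos(u, v):
        # cosine similarity as an illustrative matching degree
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv)

    fa = []
    for feat in za2:                        # one remaining-object image A2_r
        x2_r = [cos(feat, w) for w in wa]   # second target matching degrees
        best = max(range(len(wa)), key=lambda i: x2_r[i])  # X3_r = max(X2_r)
        fa.append(object_info[best])        # FA_r
    return fa
```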
In another embodiment of the invention, the system further comprises a second target camera arranged at a second shooting position for shooting images of the extracted objects.
Specifically, the second shooting position is different from the first shooting position; the second shooting position is located at the exit of the room in which the preset position is located.
Further, the FA may be obtained by:
A third target object information list A3=(A3_1, A3_2, ……, A3_g, ……, A3_G) corresponding to the third target objects is acquired, g=1, 2, ……, G, G being the number of third target object information items; wherein A3_g is the g-th third target object information, and a third target object is a target object that has not been shot by the second target camera after MT.
Specifically, A3 is obtained by the following steps:
A first target object image list TX1=(TX1_1, TX1_2, ……, TX1_b, ……, TX1_B) shot by the second target camera within a preset time period is acquired, b=1, 2, ……, B, B being the number of first target object images; wherein TX1_b is the b-th first target object image, and the preset time period is the period between DT and MT.
Specifically, the second target camera shoots an image when it senses that a user is within its shooting range and walking in the target direction.
Image features are extracted from each first target object image to obtain a first target object image characteristic information list TZ=(TZ_1, TZ_2, ……, TZ_b, ……, TZ_B); wherein TZ_b is the image characteristic information corresponding to TX1_b.
According to A and TZ, a first target matching degree list set XD=(XD_1, XD_2, ……, XD_b, ……, XD_B) corresponding to TX1 is acquired; wherein the first target matching degree list corresponding to TX1_b is XD_b=(XD_b1, XD_b2, ……, XD_bi, ……, XD_bm), XD_bi being the first target matching degree between TZ_b and WA_i.
According to XD, a second target matching degree list XD2=(XD2_1, XD2_2, ……, XD2_b, ……, XD2_B) corresponding to TX1 is acquired; wherein XD2_b is the second target matching degree corresponding to TX1_b, XD2_b = max(XD_b).
According to XD2, fourth target object information A4=(A4_1, A4_2, ……, A4_b, ……, A4_B) corresponding to the fourth target objects is acquired; wherein A4_b is the fourth target object information corresponding to TX1_b, and the fourth target object is the target object corresponding to XD2_b.
The target object information in A other than A4 is taken as A3.
A3 is taken as FA and transmitted to the AR device.
The first target object image list shot by the second target camera within the preset time period is acquired; image features are extracted from each first target object image to obtain the first target object image characteristic information list; the matching degree between each item of first target object image characteristic information and the appearance characteristic information of each target object is calculated to obtain the first target matching degree list set; the second target matching degree list is obtained from the first target matching degree list set; the fourth target object information corresponding to the fourth target objects is obtained from the second target matching degree list; the target object information other than the fourth target object information in the target object information list is taken as the third target object information; and the third target object information is taken as the final object information. A user may miss an object or mistakenly take another object, and the storage space of the AR device is limited; by storing in the AR device only the information of objects not taken away after the target time point, the scheme avoids both the poor timeliness of querying a database when a user urgently needs a missed object and the impossibility of storing all target object information in the limited storage space of the AR device, improving the timeliness of retrieving object information while using the storage space of the AR device reasonably.
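The exit-camera variant is a best-match set difference: every image taken at the exit claims one stored object (A4), and whatever is never claimed becomes A3 = FA. A minimal sketch; the matching function is caller-supplied because the patent does not fix one, and the parameter names are mine.

```python
def uncollected_objects(taken_image_feats, wa, object_info, match_fn):
    """Exit-camera variant: derive A3 as the complement of A4 in A.

    taken_image_feats : feature vectors TZ_b of images shot at the exit
    wa                : stored appearance feature vectors WA_i
    object_info       : target object information records A_i
    match_fn          : callable(feat, wa_i) -> matching degree
    Returns A3, the information of objects never seen leaving.
    """
    taken = set()
    for feat in taken_image_feats:
        # XD2_b = max(XD_b): index of the best-matching stored object
        best = max(range(len(wa)), key=lambda i: match_fn(feat, wa[i]))
        taken.add(best)                       # fourth target objects A4
    # A3: the target object information in A except A4
    return [info for i, info in enumerate(object_info) if i not in taken]
```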
Compared with the previous embodiment of the invention, arranging a camera at the exit position to shoot the extracted objects allows the extracted objects to be judged more accurately; the misjudgment caused by objects overlapping at the preset position is avoided, and the accuracy of the final object information list is improved.
In another embodiment of the present invention, the system further includes a GPS positioning device and a plurality of RFID tags; the GPS positioning device is mounted on an RFID tag to obtain the position information of the target object, and the RFID tags are mounted on the target objects to record target object information.
In particular, the RFID tag may be made of paper, plastic or metal.
Preferably, the RFID tag is made of plastic; plastic is not easily torn and does not readily fail when accidentally exposed to water, which improves the stability of the RFID tag.
Further, the GPS device may also be mounted directly on the target object; this avoids the problem of the RFID tag falling off during transport of the target object.
Further, the FA may be obtained by:
A target position point MW is acquired; the target position point is the central position point of the object extraction position.
A fifth target object information list A5=(A5_1, A5_2, ……, A5_q, ……, A5_Q) corresponding to the fifth target objects is acquired, q=1, 2, ……, Q, Q being the number of fifth target object information items; A5_q is the q-th fifth target object information, and a fifth target object is a target object whose distance from MW at time MT is smaller than the preset distance threshold J_0.
A5 is taken as FA and transmitted to the AR device.
The target position point is acquired; the information of the target objects whose distance from the target position point at the target time point is smaller than the preset distance threshold is taken as the fifth target object information list; and the fifth target object information list is transmitted to the AR device as the final object information list. A user may miss an object or mistakenly take another object, and the storage space of the AR device is limited; by storing in the AR device only the information of objects not taken away after the target time point, the scheme avoids both the poor timeliness of querying a database when a user urgently needs a missed object and the impossibility of storing all target object information in the limited storage space of the AR device, improving the timeliness of retrieving object information while using the storage space of the AR device reasonably. Compared with the previous embodiment of the invention, placing a GPS device on each target object, with each GPS device corresponding to a unique target object, avoids inaccurate final object information caused by unclear camera images, and therefore improves the accuracy of the final object information list.
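The GPS variant is a plain radius filter around the extraction point. A minimal sketch; the haversine great-circle distance is my choice of metric over the longitude/latitude values, and the function and parameter names are assumptions, not from the patent.

```python
import math

def nearby_uncollected(objects_latlon, mw, j_0, object_info):
    """GPS variant: at time MT, any object whose position is still within
    J_0 of the extraction centre MW is treated as uncollected (A5 = FA).

    objects_latlon : list of (lat, lon) per target object at time MT
    mw             : (lat, lon) of the object extraction centre point
    j_0            : preset distance threshold, metres
    object_info    : target object information records, same order
    """
    def haversine(p, q):
        # great-circle distance in metres between two (lat, lon) points
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371000 * 2 * math.asin(math.sqrt(a))

    return [info for pos, info in zip(objects_latlon, object_info)
            if haversine(pos, mw) < j_0]
```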
Embodiments of the present invention also provide a computer program product comprising program code for causing an electronic device to carry out the steps of the method according to the various exemplary embodiments of the invention as described in the specification, when said program product is run on the electronic device.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. Those skilled in the art will also appreciate that many modifications may be made to the embodiments without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. A system for object recognition, the system comprising: an AR device for scanning objects, a database, a number of RFID tags mounted on the target objects, a processor and a memory storing a computer program, the database comprising: a target object information list A=(A_1, A_2, ……, A_i, ……, A_m) corresponding to a plurality of target objects, i=1, 2, ……, m, m being the number of target object information items, wherein the i-th target object information A_i=(DA_i, LA_i, WA_i, GA_i1, GA_i2, ……, GA_it, ……, GA_iT(i)), DA_i is the user identification corresponding to the i-th target object, LA_i is the user ID corresponding to DA_i, WA_i is the appearance characteristic information corresponding to the i-th target object, t=1, 2, ……, T(i), T(i) being the number of target location points passed by the user corresponding to DA_i, GA_it is the target location point information of the t-th target location point for DA_i, the target location point information including: the longitude and latitude coordinates of the target location point and the time at which the user corresponding to DA_i arrived at the target location point;
when the computer program is executed by a processor, the following steps are implemented:
S100, in response to the AR device scanning an object to be identified, object information to be identified A_0=(WA_0, TA_0, GA_0) is acquired; wherein WA_0 is the appearance characteristic information of the object to be identified, TA_0 is the time at which the AR device scanned the object to be identified, GA_0 is the longitude and latitude of the object to be identified, and the object to be identified is a target object whose RFID tag has fallen off;
S200, A is traversed according to WA_0 to obtain a first target object information list A1=(A1_1, A1_2, ……, A1_j, ……, A1_n), j=1, 2, ……, n, n being the number of first target object information items; wherein A1_j is the j-th first target object information, and the first target object information is target object information whose appearance characteristic information has a matching degree with WA_0 greater than a preset matching degree threshold X_0;
S300, when n > 1, a second target object information list A2=(A2_1, A2_2, ……, A2_h, ……, A2_H) is acquired according to A_0 and A1, h=1, 2, ……, H, H being the number of second target object information items; wherein A2_h is the h-th second target object information, and the second target object information is first target object information having location point information within a preset time range T1_0 of TA_0 and within a preset distance range J_0 of GA_0;
S400, when H > 1, a target priority list Y1=(Y1_1, Y1_2, ……, Y1_h, ……, Y1_H) is acquired according to A2 and A_0; wherein Y1_h is the target priority corresponding to A2_h, and Y1_h satisfies:
Figure QLYQS_1
wherein JL0_h is the distance between GA_0 and WA2_h, WA2_h being the location point nearest to GA_0 in the location point information corresponding to A2_h, and T1_h is the time of arrival at WA2_h in the location point information corresponding to A2_h;
S500, the second target object information corresponding to Y1_max is taken as key object information, and the user ID corresponding to the key object information is transmitted to the AR device; wherein Y1_max = max(Y1), max() being a maximum value determination function.
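Claim 1 can be sketched end to end as follows. The priority formula of S400 survives only as an image in the source (Figure QLYQS_1), so the ordering used below (nearest recorded location point wins, most recent arrival breaks ties) is an assumption of mine, not the patented formula; the record layout and parameter names are likewise assumptions.

```python
def identify_fallen_tag_object(wa0, ta0, ga0, database, x0, t1_0, j_0,
                               dist_fn, appearance_match):
    """Claim-1 pipeline S100-S500 in outline.

    database records: {"LA": user_id, "WA": appearance features,
                       "GA": [(latlon, arrival_time_seconds), ...]}
    ta0 : scan time TA_0, epoch seconds; ga0 : scan position GA_0.
    """
    # S200: appearance filter -> A1
    a1 = [a for a in database if appearance_match(wa0, a["WA"]) > x0]
    if len(a1) == 1:                      # only one plausible owner
        return a1[0]["LA"]
    # S300: keep candidates with a location point near GA_0 in time and space
    a2 = []
    for a in a1:
        pts = [(p, t) for p, t in a["GA"]
               if abs(ta0 - t) <= t1_0 and dist_fn(p, ga0) <= j_0]
        if pts:
            # WA2_h and T1_h: the point nearest to GA_0 and its arrival time
            p, t = min(pts, key=lambda pt: dist_fn(pt[0], ga0))
            a2.append((a, dist_fn(p, ga0), t))
    # S400/S500: assumed priority ordering (the true formula is Figure QLYQS_1)
    best = min(a2, key=lambda rec: (rec[1], -rec[2]))[0]
    return best["LA"]
```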
2. The system of claim 1, further comprising an object detection device in which x key cameras are disposed for capturing images of a target object when the target object is detected passing through the object detection device, wherein WA_i is acquired by the following steps:
S101, a target object image list set TX_i=(TX_i1, TX_i2, ……, TX_ik, ……, TX_ix) corresponding to the i-th target object is acquired, k=1, 2, ……, x, x being the number of target object image lists; wherein the k-th target object image list TX_ik=(TX_ik1, TX_ik2, ……, TX_ikc, ……, TX_ikv(k)), c=1, 2, ……, v(k), v(k) being the number of images of the i-th target object taken by the k-th key camera, and TX_ikc being the c-th image of the i-th target object shot by the k-th key camera;
S102, image features are extracted from each target object image to obtain a target object image feature dimension list set SW_i=(SW_i1, SW_i2, ……, SW_ik, ……, SW_ix); wherein the feature dimension list corresponding to TX_ik is SW_ik=(SW_ik1, SW_ik2, ……, SW_ikc, ……, SW_ikv(k)), SW_ikc being the image feature dimension corresponding to TX_ikc;
S103, a target object image definition list QW_i=(QW_i1, QW_i2, ……, QW_ik, ……, QW_ix) corresponding to TX_i is acquired; wherein the definition list corresponding to TX_ik is QW_ik=(QW_ik1, QW_ik2, ……, QW_ikc, ……, QW_ikv(k)), QW_ikc being the image definition corresponding to TX_ikc;
S104, a second priority list set Y2_i=(Y2_i1, Y2_i2, ……, Y2_ik, ……, Y2_ix) corresponding to TX_i is acquired according to SW_i and QW_i; wherein the second priority list corresponding to TX_ik is Y2_ik=(Y2_ik1, Y2_ik2, ……, Y2_ikc, ……, Y2_ikv(k)), Y2_ikc being the second priority corresponding to TX_ikc, and Y2_ikc satisfies: Y2_ikc = β*SW_ikc + γ*QW_ikc, wherein β is a first preset weight value, γ is a second preset weight value, and β+γ=1;
S105, a third priority list Y3_i=(Y3_i1, Y3_i2, ……, Y3_ik, ……, Y3_ix) is acquired according to Y2_i; wherein Y3_ik is the k-th third priority corresponding to the i-th target object, and Y3_ik satisfies: Y3_ik = max(Y2_ik);
S106, a key object image list GT_i=(GT_i1, GT_i2, ……, GT_ik, ……, GT_ix) is acquired according to Y3_i; wherein GT_ik is the k-th key object image corresponding to the i-th target object, the key object image being the target object image corresponding to the third priority;
S107, image features are extracted from each key object image in GT_i to obtain WA_i.
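The selection in S101 to S106 keeps, from each key camera, the single image with the highest weighted score β*feature_dimension + γ*sharpness. A minimal sketch over precomputed scores; per claim 10, both weights default to 0.5, and the function and parameter names are mine.

```python
def select_key_images(images, feat_dims, sharpness, beta=0.5, gamma=0.5):
    """Claim-2 selection S104-S106: one key image per camera.

    images    : per-camera lists of image identifiers (TX_i)
    feat_dims : matching per-camera lists of feature-dimension scores (SW_i)
    sharpness : matching per-camera lists of sharpness scores (QW_i)
    beta, gamma : preset weights with beta + gamma = 1
    Returns GT_i, the key object image list.
    """
    gt = []
    for tx_ik, sw_ik, qw_ik in zip(images, feat_dims, sharpness):
        # S104: second priority Y2_ikc = beta*SW_ikc + gamma*QW_ikc
        y2 = [beta * s + gamma * q for s, q in zip(sw_ik, qw_ik)]
        # S105/S106: keep the image attaining Y3_ik = max(Y2_ik)
        gt.append(tx_ik[y2.index(max(y2))])
    return gt
```

WA_i is then extracted (S107) from these key images only, so blurry or uninformative frames never contribute to the stored appearance features.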
3. The system of claim 1, further comprising at least one RFID reading device for reading the information within an RFID tag; a target object carrying an RFID tag is determined as a target object to be identified, and when more than one target object to be identified exists within the recognition range of the AR device and the appearance matching degree between the target objects to be identified is greater than a preset appearance matching degree threshold W_0, the target object read information list MA of the target objects to be identified is obtained by the following steps:
S1, in response to the AR device scanning the target objects to be identified, appearance characteristic information WM corresponding to the target objects to be identified is acquired;
S2, in response to the RFID reading device reading the target objects to be identified, a sixth target object read information list A6=(A6_1, A6_2, ……, A6_p, ……, A6_w) is acquired, p=1, 2, ……, w, w being the number of sixth target object read information items; A6_p is the p-th sixth target object read information, the sixth target object read information being object read information whose corresponding appearance characteristic information has a matching degree with WM greater than W_0, and the object read information being object information read by the RFID reading device;
S3, in response to the times at which the RFID reading device read the RFID tags of the target objects to be identified, the sixth target object read information items are arranged in ascending order of time to obtain a seventh target object read information list A7=(A7_1, A7_2, ……, A7_p, ……, A7_w); wherein A7_p is the p-th seventh target object read information;
S4, in response to the AR device scanning the distances between the target objects to be identified and the AR device, a target object list WA=(WA_1, WA_2, ……, WA_p, ……, WA_w) is acquired; wherein WA_p is the p-th target object to be identified, and WA_1 is farthest from the AR device;
S5, each seventh target object read information item in A7 is matched with a target object to be identified to obtain MA=(MA_1, MA_2, ……, MA_p, ……, MA_w); wherein MA_p is the read information corresponding to WA_p, and MA_p ≌ A7_p.
4. The system of claim 1, wherein the database further comprises a target object trajectory information list GW=(GW_1, GW_2, ……, GW_i, ……, GW_m); wherein GW_i is the trajectory information of the i-th target object, and when the trajectory information of any target object is abnormal, abnormal prompt information is sent to the processor.
5. The system of claim 4, wherein GW_i is acquired by the following steps:
S10, when the i-th target object arrives at any object transfer station, first object transfer station information corresponding to the i-th target object is acquired; the first object transfer station information is the object transfer station ID of the object transfer station where the i-th target object is located and the time at which the i-th target object arrived at that object transfer station;
S20, the first object transfer station information corresponding to the i-th target object is added to the second object transfer station information corresponding to the i-th target object to obtain GW_i; the second object transfer station information is the object transfer station ID of each object transfer station through which the i-th target object has passed and the time at which the i-th target object arrived at each object transfer station.
6. The system of claim 5, wherein judgment is performed on GW_i by the following steps:
S30, according to GW_i, third object transfer station information Z3=(TZ3, DZ3) is acquired; wherein TZ3 is the time in GW_i closest to the current time, and DZ3 is the object transfer station ID corresponding to TZ3;
S40, key user information GL=(DL, WL) corresponding to GW_i is acquired; wherein DL is the user identifier of the key user, WL is the current longitude and latitude of the key user, and the key user is the i-th target user;
S50, a key time difference T_0 is acquired according to Z3 and GL; wherein T_0 satisfies: T_0 = DT - TZ3, DT being the current time;
S60, when T_0 > T2_0, abnormal prompt information is sent to the processor; otherwise, S70 is performed; wherein T2_0 is a second preset time threshold;
S70, a key distance difference GJ is acquired according to Z3 and GL; wherein GJ is the distance between the position of DZ3 and WL;
S80, when GJ > J_0, abnormal prompt information is sent to the processor; wherein J_0 is a preset distance threshold.
7. The system of claim 1, wherein the AR device is head-mounted AR smart glasses.
8. The system of claim 1, wherein T1_0 ∈ [1 h, 3 h].
9. The system of claim 1, wherein T1_0 = 2 h.
10. The system of claim 2, wherein β = 0.5 and γ = 0.5.
CN202310524224.1A 2023-05-11 2023-05-11 Object recognition system Active CN116258984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310524224.1A CN116258984B (en) 2023-05-11 2023-05-11 Object recognition system


Publications (2)

Publication Number Publication Date
CN116258984A true CN116258984A (en) 2023-06-13
CN116258984B CN116258984B (en) 2023-07-28

Family

ID=86688277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310524224.1A Active CN116258984B (en) 2023-05-11 2023-05-11 Object recognition system

Country Status (1)

Country Link
CN (1) CN116258984B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006118882A (en) * 2004-10-19 2006-05-11 Ntt Docomo Inc Location positioning system and location positioning method
CN103886273A (en) * 2013-03-10 2014-06-25 周良文 Personal article monitoring integrated application system based on RFID electronic tag
US20180018627A1 (en) * 2016-07-15 2018-01-18 Alitheon, Inc. Database records and processes to identify and track physical objects during transportation
CN108805900A (en) * 2017-05-03 2018-11-13 杭州海康威视数字技术股份有限公司 A kind of determination method and device of tracking target
CN109561417A (en) * 2018-12-29 2019-04-02 出门问问信息科技有限公司 A kind of anti-lost method and device of article
CN110443828A (en) * 2019-07-31 2019-11-12 腾讯科技(深圳)有限公司 Method for tracing object and device, storage medium and electronic device
CN112085134A (en) * 2020-09-09 2020-12-15 华清科盛(北京)信息技术有限公司 Airport luggage identification system and method based on radio frequency identification
CN113205072A (en) * 2021-05-28 2021-08-03 上海高德威智能交通***有限公司 Object association method and device and electronic equipment
CN114416905A (en) * 2022-01-19 2022-04-29 维沃移动通信有限公司 Article searching method, label generating method and device
CN115222341A (en) * 2022-09-20 2022-10-21 珠海翔翼航空技术有限公司 Flight baggage processing method, system and equipment


Also Published As

Publication number Publication date
CN116258984B (en) 2023-07-28

Similar Documents

Publication Publication Date Title
US10853705B2 (en) Collation/retrieval system, collation/retrieval server, image feature extraction apparatus, collation/retrieval method, and program
US20230288219A1 (en) Hands-free augmented reality system for picking and/or sorting assets
CN109858435B (en) Small panda individual identification method based on face image
US9087245B2 (en) Portable terminal and computer program for locating objects with RFID tags based on stored position and direction data
US11709282B2 (en) Asset tracking systems
CN108886582A (en) Photographic device and focusing controlling method
CN111512317A (en) Multi-target real-time tracking method and device and electronic equipment
KR20190041775A (en) Method for registration and identity verification of using companion animal’s muzzle pattern
US20060269100A1 (en) Composite marker and composite marker information acquisition apparatus
JP6687199B2 (en) Product shelf position registration program and information processing device
CN111869586A (en) Animal wearing mark device and animal ear mark wearing mark and identification system
CN116258984B (en) Object recognition system
CN116597182B (en) System for transmitting object information
CN111079617B (en) Poultry identification method and device, readable storage medium and electronic equipment
CN115937743B (en) Infant care behavior identification method, device and system based on image fusion
US10991119B2 (en) Mapping multiple views to an identity
CN110324528A (en) Photographic device, image processing system and method
CN116152675A (en) Unmanned aerial vehicle rescue method and system based on deep learning
CN114255321A (en) Method and device for collecting pet nose print, storage medium and electronic equipment
JP4579026B2 (en) Compound marker information acquisition device
JP5953812B2 (en) Mobile terminal and program
JP6288166B2 (en) Mobile terminal and program
CN113163167B (en) Image acquisition method and device
CN115661231A (en) Method for identifying imaging direction of camera equipment, computer equipment and storage device
JP2005267252A (en) Electronic tag reading device and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant