CN115984912A - Identification method and device, equipment and storage medium

Info

Publication number
CN115984912A
Authority
CN
China
Prior art keywords
key feature
feature point
comparison result
identified
service type
Prior art date
Legal status
Pending
Application number
CN202111193914.0A
Other languages
Chinese (zh)
Inventor
孙文超
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Chengdu ICT Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Chengdu ICT Co Ltd
Priority to CN202111193914.0A
Publication of CN115984912A

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses an identification method and apparatus, a device, and a storage medium. The method comprises the following steps: performing feature extraction on an object to be identified in an acquired first image to obtain feature vectors of first key feature points; comparing the difference between the feature vectors of any first key feature point and at least one other first key feature point to obtain a first comparison result corresponding to that first key feature point; and taking the first comparison result corresponding to each first key feature point as a feature of the object to be identified, and identifying whether the object to be identified belongs to a first service type based at least on the feature vectors of the first key feature points and the corresponding first comparison results. An additional consideration factor is thus introduced, making the recognition capability stronger and the accuracy higher.

Description

Identification method and device, equipment and storage medium
Technical Field
The present application relates to information technology, and relates to, but is not limited to, an identification method and apparatus, a device, and a storage medium.
Background
For most merchant stores, after a user enters the store, service personnel cannot accurately determine the user's identity information (for example, whether the user is a member or a non-member), and therefore cannot provide targeted service to the user based on that identity information.
Disclosure of Invention
In view of this, the identification method and apparatus, device, and storage medium provided in the present application offer stronger identification capability and higher accuracy when identifying whether an object to be identified belongs to a first service type.
According to an aspect of an embodiment of the present application, there is provided an identification method, including: extracting the characteristics of an object to be identified in the acquired first image to obtain a characteristic vector of a first key characteristic point; comparing the difference between the feature vectors of any one first key feature point and at least one other first key feature point to obtain a first comparison result corresponding to any one first key feature point; and taking the first comparison result corresponding to each first key feature point as the feature of the object to be identified, and identifying whether the object to be identified belongs to the first service type at least based on the feature vector of each first key feature point and the corresponding first comparison result.
According to an aspect of an embodiment of the present application, there is provided an identification apparatus, including: the extraction module is used for extracting the features of the object to be identified in the acquired first image to obtain a feature vector of a first key feature point; the comparison module is used for comparing the difference between the feature vectors of any one first key feature point and at least one other first key feature point to obtain a first comparison result corresponding to any one first key feature point; and the identification module is used for taking a first comparison result corresponding to each first key feature point as the feature of the object to be identified, and identifying whether the object to be identified belongs to the first service type at least based on the feature vector of each first key feature point and the corresponding first comparison result.
According to an aspect of the embodiments of the present application, there is provided an electronic device, including a memory and a processor, the memory storing a computer program executable on the processor, and the processor implementing the method of the embodiments of the present application when executing the program.
According to an aspect of the embodiments of the present application, there is provided a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the method provided by the embodiments of the present application.
In the embodiment of the application, feature vectors of first key feature points are obtained by performing feature extraction on the object to be identified in the first image; the difference between the feature vectors of any first key feature point and at least one other first key feature point is compared to obtain a first comparison result corresponding to that first key feature point; and whether the object to be identified belongs to the first service type is identified based at least on the feature vectors of the first key feature points and the corresponding first comparison results. Identification of whether the object belongs to the first service type thereby becomes stronger and more accurate. This is because the embodiment of the application not only compares the differences between the feature vectors of the first key feature points of the object to be identified and the feature vectors of the second key feature points of other objects belonging to the first service type, but also laterally considers the differences among the feature vectors of the first key feature points of the object itself; an additional consideration factor is thus introduced when identifying whether the object to be identified belongs to the first service type, so the identification capability is stronger and the accuracy is higher.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 is a schematic flow chart illustrating an implementation of an identification method according to an embodiment of the present application;
fig. 2 is a schematic diagram of feature extraction processing performed on an object to be recognized according to an embodiment of the present application;
fig. 3 is a schematic flow chart illustrating an implementation of an identification method according to an embodiment of the present application;
fig. 4 is a schematic implementation flow diagram of a database construction process provided in an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for providing a membership service based on a micro base station according to an embodiment of the present application;
FIG. 6 is a diagram illustrating a member management platform according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an identification device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application, but are not intended to limit the scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
It should be noted that the terms "first/second/third" in the embodiments of the present application do not denote a particular ordering of objects. It should be understood that "first/second/third" may be interchanged in particular circumstances or orders where permissible, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
The embodiment of the present application provides an identification method, which is applied to an electronic device, and the electronic device may be various types of devices with information processing capability in the implementation process, for example, the electronic device may include a cash register, a desktop computer, a notebook computer, a small notebook computer, a tablet computer, a mobile phone, a Personal Digital Assistant (PDA), and the like. The functions implemented by the method can be implemented by calling program code by a processor in an electronic device, and the program code can be stored in a computer storage medium.
Fig. 1 is a schematic view of an implementation flow of an identification method provided in an embodiment of the present application, and as shown in fig. 1, the method may include the following steps 101 to 103:
step 101, performing feature extraction on an object to be identified in an acquired first image to obtain a feature vector of a first key feature point.
In the embodiment of the present application, there may be one or more objects to be recognized in the first image, and the number of the objects to be recognized is not limited.
Here, the first image may be obtained by placing a camera at a doorway of the store and photographing an object to be recognized entering the store with the camera. In some embodiments, after the first image is obtained, the quality of the first image may also be detected, and feature extraction may be performed on the first image whose quality meets the requirement.
In some embodiments, the feature extraction processing on the object to be identified in the first image may be as follows: the feature vectors of the first key feature points are obtained through face localization processing, face registration processing, and face feature extraction processing. As shown in fig. 2, the face of the object to be recognized in the input first image is framed by a face frame 201 through the face localization processing; the face coordinate frame of the object to be identified is converted into facial key feature points 202 through the face registration processing (the number of key feature points is not limited and can be set freely according to the actual situation); and the facial key feature points 202 are converted into a string of fixed-length numerical feature vectors 203 through the face feature extraction processing, yielding the feature vector set of the first key feature points V_0 = {P_1, P_2, P_3, ..., P_n}, n > 0.
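As an illustration only, below is a minimal Python sketch of this three-stage pipeline, using the dlib library that this application itself cites later for face localization. The model file names are the standard public dlib releases and are an assumption here, as is treating dlib's fixed-length face descriptor as the vector V_0 = {P_1, ..., P_n} with n = 128.

```python
# Minimal sketch of localization -> registration -> feature extraction.
# Assumptions: dlib is available and the two standard model files have
# been downloaded; the 128-component descriptor plays the role of V_0.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()            # face localization (frame 201)
shape_predictor = dlib.shape_predictor(
    "shape_predictor_68_face_landmarks.dat")           # face registration (points 202)
face_encoder = dlib.face_recognition_model_v1(
    "dlib_face_recognition_resnet_model_v1.dat")       # feature extraction (vector 203)

def extract_feature_vectors(image: np.ndarray) -> list[np.ndarray]:
    """Return one fixed-length feature vector V_0 per face in the image."""
    vectors = []
    for face_box in detector(image):
        landmarks = shape_predictor(image, face_box)
        descriptor = face_encoder.compute_face_descriptor(image, landmarks)
        vectors.append(np.array(descriptor))           # shape (128,)
    return vectors
```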
Step 102, comparing the difference between the feature vectors of any first key feature point and at least one other first key feature point to obtain a first comparison result corresponding to any first key feature point.
It should be noted that comparing the difference between the feature vector of any first key feature point and that of at least one other first key feature point requires that the feature vector of every first key feature point in the set V_0 be compared once.
In the embodiment of the present application, the number of other first key feature points is not limited: the comparison may be between any first key feature point P_i and one other first key feature point P_m, or between any first key feature point P_i and several other first key feature points, e.g., comparing the difference between the feature vectors of the first key feature point P_i and the first key feature points P_m and P_j, or comparing the difference between the feature vectors of the first key feature point P_i and the first key feature points P_m, P_j, and P_d.

Of course, the pairing order of any first key feature point with at least one other first key feature point is also not limited. When comparing any first key feature point P_i with one other first key feature point P_m, the comparison may be between P_1 and P_2, or between P_1 and P_3, or P_4, P_5, and so on. Accordingly, when comparing any first key feature point P_i with several other first key feature points, the comparison may be between P_1 and P_2, P_3, or between P_1 and P_4, P_6.
In this embodiment of the application, the characterization of the first comparison result is not limited: the first comparison result may be a difference vector between the key feature points, or the sum, variance, etc. of several differences.
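For illustration, a minimal sketch of one possible first-comparison encoding, assuming the neighbour-difference (gradient) form that Equation (3) later uses; the sum or variance variants mentioned above would be equally valid choices:

```python
import numpy as np

def first_comparison_results(v0: np.ndarray) -> np.ndarray:
    # Compare each first key feature point with its neighbour:
    # g_i = P_i - P_{i-1}, one result per adjacent pair. This pairing is
    # an assumed choice; the embodiment allows other pairings and other
    # encodings (sum, variance, ...) of the differences.
    return np.diff(v0)
```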
Step 103, using the first comparison result corresponding to each first key feature point as the feature of the object to be identified, and identifying whether the object to be identified belongs to the first service type at least based on the feature vector of each first key feature point and the corresponding first comparison result.
In some embodiments, the first service type is a member service type, and whether the object to be identified belongs to the first service type is identified, that is, whether the object to be identified is a member is identified.
In the embodiment of the application, when identifying whether the object to be identified belongs to the first service type, the difference between the feature vectors of the first key feature points of the object to be identified and the feature vectors of the second key feature points of other objects belonging to the first service type is compared, and the difference between the feature vectors of the first key feature points of the object to be identified is also considered transversely, so that when identifying whether the object to be identified belongs to the first service type based on at least the feature vectors of the first key feature points and the corresponding first comparison results, the identification capability is stronger, and the accuracy is higher.
Fig. 3 is a schematic implementation flow diagram of an identification method provided in an embodiment of the present application, and as shown in fig. 3, the method may include the following steps 301 to 307:
step 301, extracting features of an object to be identified in a first image to obtain a feature vector of a first key feature point;
step 302, comparing a difference between feature vectors of any first key feature point and at least one other first key feature point to obtain a first comparison result corresponding to the any first key feature point;
step 303, obtaining a second comparison result corresponding to the second key feature point of the first object belonging to the first service type in the database.
And comparing the second comparison result with the feature vector of any second key feature point of the first object and at least one other second key feature point of the first object.
In this embodiment of the application, there may be one or more first objects belonging to the first service type in the database. In some embodiments, the second key feature points of a first object may be determined by performing steps 403 to 404 in the following embodiments, so as to obtain the set Sv = {V_1, V_2, V_3, ..., V_m} of the first objects in the database, where m is the number of first objects. For a first object V_n, the corresponding second key feature point set is V_n = {P_1, P_2, P_3, ..., P_n}, n > 0.
Step 304, comparing the difference between the feature vectors of any first key feature point of the object to be identified and the corresponding second key feature point of the first object to obtain a third comparison result corresponding to any first key feature point.
Here, the set of first key feature points of the object to be recognized may be determined in a fixed order. For example, if the first key feature point set V_0 = {P_1, P_2, P_3, ..., P_n}, n > 0, is the set of key feature points of the face of the object to be identified, the key feature points of the eyebrows, nose, and mouth can be arranged in a determined order to obtain the set V_0. Similarly, when determining the second key feature point set V_m of a first object, the same determined order as for V_0 must be followed to obtain the set V_m = {P_1, P_2, P_3, ..., P_n}, n > 0. That is, the number of first key feature points equals the number of second key feature points, and the first key feature point P_i corresponds to the second key feature point P_i; e.g., if the first key feature point P_i is the nose feature point of the object to be recognized, the second key feature point P_i is likewise the nose feature point of the first object.
Therefore, comparing the difference between the feature vectors of any first key feature point of the object to be identified and the corresponding second key feature point of the first object means comparing the difference between the feature vectors of the first key feature point P_i and the corresponding second key feature point P_i.
Accordingly, if in step 302 the first comparison result is obtained by comparing any first key feature point P_i of the set V_0 with the first key feature point P_m, then in step 303, for the second key feature point set V_m = {P_1, P_2, P_3, ..., P_n}, n > 0, of the first object, the second comparison result corresponding to the second key feature point P_i is likewise obtained by comparing the second key feature point P_i with the second key feature point P_m.
Step 305, taking the second comparison results corresponding to the second key feature points as features of the first object, and determining the matching degree between the object to be recognized and the first object based on the first comparison results, the second comparison results, and the third comparison results.
In the embodiment of the application, not only the difference between the feature vectors of the first key feature point of the object to be recognized and the corresponding second key feature point of the first object is vertically considered, but also the difference between the feature vectors of the first key feature point of the object to be recognized and the difference between the feature vectors of the second key feature point of the first object are horizontally considered, and the matching degree between the object to be recognized and the first object can be determined based on multiple factors, so that the matching accuracy can be higher.
In the embodiment of the present application, the manner of determining the matching degree between the object to be recognized and the first object is not limited. For example, the difference between the object to be recognized and the first object may be calculated, with the matching degree represented by that difference; or the similarity between the object to be recognized and the first object may be calculated, with the matching degree represented by that similarity.
In some embodiments, step 305 may be implemented by performing steps 3051-3053 as follows:
Step 3051, determining a first degree of difference between the object to be recognized and the first object based on the respective first comparison results and the respective second comparison results.

In some embodiments, a first degree of difference G_m between the object to be recognized and the first object may be determined by Equation (1), where g_i is the first comparison result corresponding to the first key feature point P_i and h_i is the second comparison result corresponding to the second key feature point P_i.

[Equation (1) appears only as an image in the original publication and is not reproduced here.]
Step 3052, determining a second degree of difference between the object to be recognized and the first object based on the respective third comparison results.
In some embodiments, the second degree of difference D_m between the object to be recognized and the first object may be determined by Equation (2), where d_j is the third comparison result corresponding to the first key feature point P_j.

[Equation (2) appears only as an image in the original publication and is not reproduced here.]
Step 3053, determining the matching degree between the object to be recognized and the first object based on the first degree of difference and the second degree of difference.

In some embodiments, after the first degree of difference G_m and the second degree of difference D_m are obtained, the matching degree between the object to be identified and the first object may be determined as the product of G_m and D_m, or as the sum of G_m and D_m; this is not limited here.
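A minimal sketch of this combination step; the unweighted product and sum below are assumptions, since the embodiment leaves the exact combination open:

```python
def matching_degree(g_m: float, d_m: float, mode: str = "sum") -> float:
    # Combine the first degree of difference G_m and the second degree of
    # difference D_m into a single matching degree.
    return g_m * d_m if mode == "product" else g_m + d_m
```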
Step 306, obtaining a second comparison result corresponding to the second key feature point of the next first object, thereby determining the matching degree between the object to be identified and the next first object.
After determining the matching degree between the object to be recognized and one first object in the database, the steps 303 to 305 are further executed repeatedly to determine the matching degree between the object to be recognized and the next first object in the database until the matching degree between the object to be recognized and each first object in the database is obtained.
Step 307, determining whether the object to be identified belongs to the first service type according to each matching degree.
In some embodiments, step 307 may be implemented by performing steps 3071 to 3074 as follows:
step 3071, determining a target matching degree representing the maximum similarity between the object to be recognized and the first object from all the obtained matching degrees.
In some embodiments, a target matching degree characterizing the object to be recognized and the first object with the minimum difference may be determined from all the obtained matching degrees.
Step 3072, determining whether the target matching degree meets a specific condition; if the target matching degree meets the specific condition, executing step 3073; otherwise, step 3074 is performed.
Here, the determining whether the target matching degree satisfies a certain condition may be determining whether the target matching degree satisfies a preset threshold. Of course, the preset threshold value can be freely set according to actual conditions.
When the target matching degree is the maximum similarity, whether the object to be identified belongs to the first service type can be determined by judging whether the target matching degree is greater than a preset threshold value. For example, with a preset threshold of 95%: if the target matching degree with the maximum similarity is 96%, which is greater than 95%, an object very close to the features of the object to be identified exists among the first objects, and it may be determined that the object to be identified belongs to the first service type; if the target matching degree with the maximum similarity is 50%, clearly less than 95%, then although that first object is the most similar of all the first objects to the object to be recognized, it is still not close to the features of the object to be recognized, and it may be determined that the object to be recognized does not belong to the first service type.
When the target matching degree is the minimum difference, whether the object to be identified belongs to the first service type can be determined by judging whether the target matching degree is smaller than a preset threshold value. For example, with a preset threshold of 5%: if the target matching degree with the minimum difference is 4%, which is less than 5%, an object very close to the features of the object to be identified exists among the first objects, and it may be determined that the object to be identified belongs to the first service type; if the target matching degree with the minimum difference is 30%, clearly greater than 5%, then although that first object differs least from the object to be recognized among all the first objects, it is still not close to the features of the object to be recognized, and it may be determined that the object to be recognized does not belong to the first service type.
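A sketch of the decision logic for the difference-style target matching degree, using the illustrative 5% threshold from the example above (the threshold is freely configurable):

```python
def belongs_to_first_service_type(matching_degrees: list[float],
                                  threshold: float = 0.05) -> bool:
    # Target matching degree: the minimum difference over all first objects.
    # Below the preset threshold means the object to be identified belongs
    # to the first service type (e.g., is a member).
    target = min(matching_degrees)
    return target < threshold
```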
Step 3073, it is determined that the object to be identified belongs to the first service type.
In some embodiments, when it is determined that the object to be identified belongs to the first service type, outputting a first prompt message; the first prompt message is used for indicating that a first service is provided for the object to be identified.
Here, the manner of outputting the first prompt message is not limited. For example, a first prompt message is displayed on a display interface of the electronic device; in another example, the first prompt message is output by means of voice prompt.
Step 3074, it is determined that the object to be identified belongs to the second service type.
In some embodiments, when it is determined that the object to be identified belongs to the second service type, outputting a second prompt message; the second prompt message is used for indicating that a second service is provided for the object to be identified, the first service type and the second service type are different, and the second service and the first service are different.
In some embodiments, the first service type is a member service type, and the second service type may be a non-member service type, or may cover non-members together with the member tiers other than those of the first service type. For example, when the first service type is the member service type (including general members, senior members, premium members, etc.), the second service type is the non-member service type; when the first service type is the general-member service type, the second service type may cover non-members as well as member tiers such as senior members and premium members.
In some embodiments, as shown in fig. 4, the database building process includes the following steps 401 to 405:
step 401, detecting the identity and the moving direction of the second object in the specific physical area.
In some embodiments, the specific physical area is a coverage area of a micro base station, and the identity and the moving direction of the second object in the coverage area can be detected by the micro base station. Wherein the second object comprises a member object and a non-member object.
Here, the type of the identity is not limited. For example, the identity is the telephone number used by the second object; as another example, the identity is a Personal Identification Number (Pin) code used by the second object; as yet another example, the identity is the identity card number of the second object.
In some embodiments, the moving direction of the second object may be determined from the time-series position information of the second object. For example, determine the location of the service providing point as p_0 and obtain the time-series position information of the second object as Tp_k = {p_1, p_2, p_3, ..., p_i, ..., p_c}, c ≥ 0, c ≥ i ≥ 0; by calculating the distance between each time-series position and the service providing point position p_0, it can be determined whether the second object is moving toward the service providing point.
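A minimal sketch of this direction test under the definitions above; the simple first-versus-last distance comparison is an assumed simplification of the least-squares line fit described in the application example later:

```python
import numpy as np

def moving_toward(p0, positions) -> bool:
    # Distance from each time-series position p_i to the service point p_0;
    # a shrinking distance indicates movement toward the service point.
    distances = [np.linalg.norm(np.asarray(p) - np.asarray(p0))
                 for p in positions]
    return distances[-1] < distances[0]   # crude trend test
```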
Step 402, selecting a first object from the second objects according to the identification and the moving direction of the second objects.
In some embodiments, step 402 is achieved by performing steps 4021 through 4022 as follows:
step 4021, selecting the target object belonging to the first service type from the second objects according to the identity of the second objects.
Here, according to the detected identities of the second objects, the identity of each second object is compared with the identities of objects of the first service type pre-stored in the system of the service providing point; in this way, the target objects belonging to the first service type can be selected from the second objects.
In some embodiments, upon picking out the target objects, the system of the service providing point may obtain marketing information and send it to the target objects. The marketing information may be the same or different across target objects; when it differs, a target object's preferences can be determined from information such as its historical consumption records, so that targeted marketing is performed and different marketing information is sent. Moreover, because the marketing information is sent only to target objects within the specific physical area, target objects can be spared frequent marketing disturbances when they are not near the service providing point, i.e., when they do not need the service.
Step 4022, determining whether the target object is gradually approaching the service providing point according to the moving direction of the target object; if so, taking the target object as the first object.

Here, the target object is taken as the first object when it is determined that the target object is gradually approaching the service providing point. Thus, when the database is constructed from the related information of the first objects, it is not built from all target objects of the first service type in the specific physical area; instead, the objects approaching the service providing point are selected from the target objects to construct the database. The first objects contained in the database can then still cover the object to be identified in the first image, while the number of first objects that must be compared against the object to be identified is reduced, which speeds up identification of the object to be identified and improves identification accuracy.
And 403, performing feature extraction on the first object to obtain a feature vector of the second key feature point.
Here, the manner of extracting the feature of the first object is the same as the manner of extracting the feature of the object to be recognized in step 101, and is not described herein again.
Step 404, comparing a difference between any second key feature point of the first object and a feature vector of at least one other second key feature point of the first object, to obtain a second comparison result corresponding to the any second key feature point.
Here, the way of comparing the difference between the feature vectors of any second key feature point of the first object and at least one other second key feature point is the same as the way of comparing the difference between the feature vectors of any first key feature point and at least one other first key feature point in step 102, and is not repeated here.
Step 405, using the identity of the first object and the feature vector of the second key feature point of the first object and the second comparison result corresponding to the second key feature point as a feature group, and obtaining a database based on the feature group of each first object.
It should be noted that, in the feature group of each first object, the feature vector of each second key feature point of the first object and the corresponding second comparison result are stored.
In some embodiments, the database may also be updated within a preset period. The preset period is not limited, and may be one hour, one day, one week, or the like.
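For illustration, one possible in-memory layout of the feature groups described in step 405; all names below are assumptions, not the application's own:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FeatureGroup:
    identity: str                   # phone number, Pin code, or ID number
    feature_vectors: np.ndarray     # feature vectors of the second key feature points
    second_comparisons: np.ndarray  # corresponding second comparison results

# The database is the collection of feature groups, rebuilt each preset
# period (e.g., emptied and refilled daily, as in section 3.1 below).
database: list[FeatureGroup] = []
```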
The fifth Generation Mobile Communication technology (5G) is a new generation of communication technology that provides the basic network support for the Internet of Everything, enabling applications in scenarios such as autonomous driving, smart cities, smart industry, and real-time Virtual Reality (VR)/Augmented Reality (AR). 5G network construction is a main direction of communication technology development in the coming years and has been elevated to a strategic level by major countries worldwide; manufacturers and operators across the industry chain are deploying intensively. 5G network deployment is constrained by spectrum resources: high-frequency communication forces base stations closer together, so building 5G micro base stations on lamp posts and similar structures in dense urban areas is undoubtedly one of the choices for 5G network deployment. The micro base station is a miniaturized form of the conventional base station that is more convenient and easier to install and construct; applied to 5G network construction, it can greatly improve construction efficiency and has strong practical application value. Because 5G millimeter waves penetrate poorly and attenuate greatly in air, if 5G still employed the "macro base stations" used in the era of the third Generation Mobile Communication technology (3G) and the fourth Generation Mobile Communication technology (4G), it could not provide sufficient signal coverage for distant users. To cope with this difficulty, 5G introduces a completely new form of base station: the micro base station. As the name implies, a micro base station is a base station made small enough.
Face recognition is a biometric technology that performs identity recognition based on a person's facial feature information. It comprises a series of related technologies, also commonly called portrait recognition or facial recognition, in which a camera or video camera collects images or video streams containing faces, the faces are automatically detected and tracked in the images, and recognition is then performed on the detected faces. Face recognition in the related art is mainly based on visible-light images, a familiar recognition mode that has been developed for over 30 years. However, this approach has a defect that is difficult to overcome: when the ambient light changes, the recognition performance drops rapidly and cannot meet the needs of practical systems. Schemes for solving the illumination problem include three-dimensional-image face recognition and thermal-imaging face recognition, but these two technologies are still far from mature, and their recognition performance is unsatisfactory. One rapidly developing solution is the multi-light-source face recognition technique based on active near-infrared images. It overcomes the influence of light changes, achieves excellent recognition performance, and exceeds three-dimensional-image face recognition in overall precision, stability, and speed. This technology has developed rapidly over the past two or three years, gradually making face recognition practical. The human face is innate, like other human biometric features (fingerprints, irises, etc.); its uniqueness and the good property of not being easily copied provide the necessary premise for identity identification. Compared with other types of biometric recognition, face recognition has the following characteristics: (1) non-mandatory: the user need not cooperate with dedicated face acquisition equipment, and face images can be acquired almost without the user's awareness, so the sampling is not coercive; (2) non-contact: the user's face image can be acquired without direct contact with the equipment; (3) concurrent: multiple faces can be sorted, judged, and recognized in a practical application scenario. In addition, face recognition accords with visual intuition (the characteristic of "recognizing people by their appearance") and offers simple operation, intuitive results, and good concealment.
As for the current state of research, automatic face detection is the basis of all applications around automatic face image analysis, including but not limited to: face recognition and verification, face tracking in surveillance settings, facial expression analysis, facial attribute recognition (gender/age recognition, attractiveness evaluation), facial illumination adjustment and deformation, facial shape reconstruction, image and video retrieval, and digital photo album organization and presentation. Face detection is the initial step of all modern vision-based human-computer and human-robot interactive systems. Mainstream commercial digital cameras embed face detection to assist autofocus, and many social networks implement image and/or people tagging using face detection mechanisms. As a problem domain, face detection belongs to target detection, of which there are two general categories. (1) General target detection: detecting multiple classes of objects in an image, as in the various general object detection methods used in certain scenes. Its core is an n (targets) + 1 (background) = n + 1 classification problem. Such detection models are usually large and slow, and few SOTA methods achieve CPU real-time performance. (2) Specific-class target detection: detecting only a certain class of objects in the image, such as face detection, pedestrian detection, or vehicle detection. Its core is a 1 (target) + 1 (background) = 2 classification problem. Such detection usually uses relatively small models with very high speed requirements; the basic requirement for the problem here is central-processing-unit real-time performance (CPU real-time).
From the development history, the role of deep learning is very clear. In the non-deep-learning stage, the classic detection algorithms of the time were proposed for specific targets, such as the face detection problem, the pedestrian detection problem, and the detection of various other targets; for multi-class detection, however, a template had to be trained for each category, so detecting, say, 200 classes was equivalent to 200 specific-class detection problems. In the deep-learning stage, the classic detection algorithms are proposed for general targets, such as the better-performing Faster Region-CNN (Faster-RCNN), the Region-based Fully Convolutional Network (R-FCN) series, YOLO (You Only Look Once), and the regression-based Single Shot MultiBox Detector (SSD) series; with powerful deep learning, a single Convolutional Neural Network (CNN) can perform a multi-class detection task. Although these are multi-class methods, they can be used to solve single-class problems, and the State-of-the-art (SOTA) models for specific-target detection problems such as face detection and pedestrian detection in the related art are targeted improvements of such methods. The Faster-RCNN family has the advantage of high performance and the disadvantage of low speed: since it is not real-time even on a Graphics Processing Unit (GPU), it cannot meet the extremely high speed requirement of face detection, and because its performance is not the problem, research on it focuses on improving efficiency. The SSD series has the advantages of high speed and real-time performance on the GPU, but the disadvantage of poor detection of dense small targets, and faces are exactly dense small targets; research on it therefore focuses on improving dense-small-target detection performance while keeping speed as high as possible, and an algorithm that is real-time only on a GPU is still limited in application.
The embodiment of the present application aims to create a new member service system. Member service systems in the related art mainly handle post-consumption services for members, such as fee deduction after consumption, recharging, points management, and SMS marketing; these are passive member services. For merchants, effectively acquiring member position dynamics, such as whether a member is near the storefront, and marketing in a timely and accurate manner play an important role in promoting store sales and member satisfaction. In the related art, merchants mainly advertise in-store goods through bulk SMS, but this carries higher marketing costs, and frequent marketing can annoy members and negatively affect the brand. Moreover, in the member management systems of the related art, after a member enters a merchant store, the member's portrait cannot be obtained immediately, so individualized marketing service cannot be realized.
Based on this, an exemplary application of the embodiment of the present application in a practical application scenario will be described below.
The embodiment of the application realizes a face recognition system by relying on the 5G micro base station of the operator, can be integrated into the existing member management platform of a merchant, and enriches the functions of the member management platform.
The method for member service based on the micro base station provided in the embodiment of the present application comprises three parts: a 5G micro base station, a face recognition system, and a member management platform, as shown in fig. 5. The main principle is as follows: after a merchant accesses the system, when a member enters the coverage range of the 5G micro base station, the merchant sends different marketing information to the member (i.e., the target object) through the 5G micro base station; the members moving toward the merchant's position (i.e., the first objects) are screened out to establish a temporary face recognition library (i.e., the database); when a member enters the store, member information such as name and consumption preferences is quickly identified through the face recognition system and the temporary library, and a service specialist is notified to greet and guide the member, realizing precise service; and a pay-by-face service is provided at checkout, improving the members' service perception. The member service system comprises the following three modules:
module 1:5G micro base station
The 5G micro base stations include micro base stations uniformly installed by operators in the mall and micro base stations installed by merchants on application. 5G micro base stations can fill coverage blind spots in dense urban networks: the 5G frequency bands are high, and although in dense urban areas outdoor macro base stations can cover most areas, urban building clusters are complicated and intricate, so areas with weak coverage due to shielding can be patched with lamp-post micro base stations, meeting coverage of weak-signal areas and improving urban network capacity. They can also offload hotspots in high-traffic areas: traffic in dense urban areas is high, and in crowded places such as commercial streets, leisure and tourism squares, and student activity areas on campuses, outdoor base stations struggle to penetrate effectively and to meet the network capacity demands of bursty user crowds, so network congestion occurs easily. For such situations, 5G micro base stations can be built to offload hotspots. Finally, a sensitive macro base station can be split into micro base stations. 5G macro base station trial sites in the related art adopt centralized deployment of the baseband processing unit (BBU), with the remote radio unit and antenna integrated into an Active Antenna Unit (AAU). Since a 64T64R (64 transmit, 64 receive) AAU device is large, it easily provokes resistance from nearby residents. In a 5G scenario, performance can be partly reduced to form a micro base station product, e.g., a 16T16R (16 transmit, 16 receive) variant can be developed; the equipment is small, low-noise, and better concealed. Splitting macro base stations long shelved due to sensitivity into micro base stations can solve the problem of building macro base stations at property-sensitive sites.
The 5G micro base station mainly has two functions: acquiring the telephone numbers (i.e., identities) within its coverage range (i.e., the specific physical area) and transmitting them to the member management platform; and sending marketing SMS messages to members. The principle is as follows: the micro base station sends the telephone numbers within its coverage range to the merchant's member management platform, the member platform matches the member numbers and returns the numbers and the corresponding marketing SMS to the micro base station, and the micro base station sends the SMS to the member numbers.
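A sketch of this number-matching flow; the platform and base-station interfaces below are hypothetical stand-ins, not APIs named by the application:

```python
def on_numbers_reported(base_station, member_platform, numbers: list[str]) -> None:
    # The micro base station reports in-coverage numbers; the platform keeps
    # only member numbers and chooses per-member marketing content.
    member_numbers = member_platform.match_member_numbers(numbers)
    for number in member_numbers:
        sms = member_platform.marketing_sms_for(number)
        base_station.send_sms(number, sms)
```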
And (3) module 2: member management platform
The member management platform mainly screens the members acquired from the base station for accurate message marketing, and further screens out the members moving toward the store; it notifies a service specialist to greet and guide (i.e., the first service) members entering the merchant store, realizing precise service; and it provides the pay-by-face service at checkout, improving member service perception.
The specific embodiment of screening out the members moving toward the store is as follows:

Suppose that n members (i.e., target objects) nearby are acquired through the micro base station: S_n = {m_1, m_2, m_3, ..., m_k, ..., m_n}, n ≥ 0, n ≥ k ≥ 0, where m_k is the telephone number or Pin code (i.e., identity) of the k-th member.

Suppose that c positions of the k-th member are acquired over a continuous period of time T_k; the time-series position information of the k-th member is then Tp_k = {p_1, p_2, p_3, ..., p_i, ..., p_c}, c ≥ 0, c ≥ i ≥ 0.
The merchant (i.e., service providing point) location p_0 is a fixed value and remains unchanged in the X-Y coordinate system of the two-dimensional space.
The offset vector V_k of the k-th member's c positions from p_0 is V_k = {p_1 - p_0, p_2 - p_0, p_3 - p_0, ..., p_i - p_0, ..., p_c - p_0}, c ≥ 0, c ≥ i ≥ 0.
Fit the straight line y = a·x + b to the offset magnitudes by the least-squares method and calculate the value of a. If a ≥ 0, the offset vector V_k is increasing or static, i.e., moving away from p_0; conversely, if a < 0, the offset vector V_k keeps decreasing, i.e., approaching p_0.
So if the linear fitting coefficient a of the k-th member's V_k is less than 0, the member is moving toward the store, and that member's data is then extracted from the face database to establish the temporary face library.
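A minimal sketch of this least-squares screening, assuming the scalar series fitted is the offset magnitude |p_i - p_0| at each time step:

```python
import numpy as np

def toward_store(offset_magnitudes: list[float]) -> bool:
    # Fit y = a*x + b by least squares and test the slope a:
    # a < 0 means the offsets keep decreasing, i.e. the member is
    # approaching the store and belongs in the temporary face library.
    x = np.arange(len(offset_magnitudes))
    a, _b = np.polyfit(x, offset_magnitudes, deg=1)
    return a < 0
```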
The member management platform records the basic information of each member, with the fields {id_card, id_type, id_name, id_sender, ...} established for each record, as shown in fig. 6.
Its functions include: adding, deleting, modifying, and querying the member database; matching the member numbers sent by the micro base station (there may be several) and pushing the face photos taken at member registration to the face recognition system; and returning the member numbers successfully matched by the face recognition system, together with the corresponding marketing SMS, to the micro base station.
And a module 3: face recognition system
The face recognition system comprises a face base library dynamic adjustment function, a face acquisition function, and a face detection function.
3.1 dynamic adjustment of the face base library
The dynamic adjustment of the face base library supports building the face recognition system on a dynamic library, which can be established through a warehousing interface. The system acquires the telephone numbers of users attached to the base station through the micro base station data interface and queries the member management platform for basic member information, including member photo, name, etc., by telephone number. A dynamic personnel library is established from this basic information. By default the system updates the dynamic library every day, i.e., empties it at 00:00 each day; this ensures as far as possible that the people in the dynamic library are among those entering the store while keeping the base library as small as possible, improving recognition accuracy.
3.2 face acquisition function
The face acquisition function transmits the face information collected by a face acquisition camera at the merchant entrance to the face recognition system, including member and non-member face photos. The system supports detecting the faces captured by the front-end camera, judging photo quality, and extracting features from photos that meet the quality requirement, in preparation for later recognition and attribute recognition. The accuracy rate exceeds 99% when the left/right (yaw) and up/down (pitch) angles of the face image are less than 15 degrees.
3.3 face detection function
The face detection function matches the photos collected by the camera against the member face photos in the face base library shared with the member management platform. After a successful match, the information is displayed on the member management platform: for a non-member, the face photo and a non-member prompt are shown; for a member, the member photo, member level, consumption records, consumption preferences, etc. are shown, so the merchant can quickly identify the member's identity and provide differentiated service.
The face detection flow comprises face localization, face registration, face feature extraction, and face comparison.
(1) Face localization
The input of the face localization algorithm is a picture (i.e., the acquired first image), and the output is a sequence of face frame coordinates (0, 1, or multiple face frames). For example, the get_frontal_face_detector locator in the dlib library can be used; the output face coordinate frames are one or more upright rectangles.
(2) Face registration
The face registration is a technology for positioning the coordinates of key points of five sense organs on a face, the input of a face registration algorithm is 'one face picture' plus 'a face coordinate frame', and the coordinate sequence of the key points (namely key feature points) of the five sense organs is output. The number of key points of five sense organs is a preset fixed value.
(3) Face feature extraction
This function converts a face image into a series of fixed-length numerical values, extracting the key points from face registration into the multi-dimensional feature vector V_0 = {P_1, P_2, P_3, ..., P_n}, n > 0 (i.e., the feature vector of the first key feature points).
(4) Face comparison
Face comparison is an algorithm for measuring the similarity between two faces. Its input is two face features (note: the face features are produced by the preceding face feature extraction algorithm, giving the feature V_0 to be compared and the face base library Sv = {V_1, V_2, V_3, ..., V_m}, where m is the size of the face base library), and its output is the similarity between the two features, using the method of comparing vector similarity in the gradient direction.
The gradient direction G_0 of V_0 (i.e., the first comparison result) is calculated as follows:

G = {g_n = P_n - P_{n-1}}, P ∈ V, n > 0 (Equation 3);
The contrast vector D_0t between V_0 and V_t (i.e., the third comparison result) is calculated as follows:

D_t = {d_n = P_n - Q_n}, P ∈ V_0, Q ∈ V_t, n = 128, 1 ≤ t ≤ m (Equation 4);
The similarity Sm_01 between G_0 and G_1 (i.e., the matching degree) is calculated by Equation (5):

[Equation (5) appears only as an image in the original publication and is not reproduced here.]
Following the method for calculating Sm_01, calculate Sm_01, Sm_02, Sm_03, ..., Sm_0m in turn, and then select the minimum Sm_0j, j ∈ [1, m]. If Sm_0j is smaller than a given threshold T (T > 0), the j value is the ID of the corresponding person in the face base library, and the matching succeeds. If Sm_0j is larger than the given threshold T (T > 0), the matching fails, and the person is not in the face base library.
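Since Equation (5) appears only as an image in the source, the similarity used below is an assumed stand-in covering the two evaluation dimensions named later (feature-vector distance and direction); the surrounding loop and thresholding follow the text:

```python
import numpy as np

def gradient(v: np.ndarray) -> np.ndarray:
    return np.diff(v)   # Equation (3): g_n = P_n - P_{n-1}

def match_face(v0: np.ndarray, base_library: list[np.ndarray], T: float):
    # Compute Sm_0t for every base-library entry V_t, select the minimum
    # Sm_0j, and compare it against the threshold T. The Sm formula here
    # (gradient distance plus contrast-vector norm) is an assumption.
    g0 = gradient(v0)
    scores = [np.linalg.norm(g0 - gradient(vt))    # direction dimension
              + np.linalg.norm(v0 - vt)            # distance dimension (Eq. 4)
              for vt in base_library]
    j = int(np.argmin(scores))
    if scores[j] < T:
        return j, True      # j is the matched person's ID in the base library
    return None, False      # no match in the face base library
```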
The system supports submitting a single photo, searching within a specified library, and displaying results in descending order of comparison score. It can support base libraries of up to 100,000 faces; for the best recognition rate, a base library of fewer than 50 faces is recommended, in which case the top-1 recognition accuracy exceeds 99.5%. Table 1 below shows the relationship between the number of faces in the library, detection accuracy, and time consumption. The system supports configuring different thresholds for face recognition, including face quality thresholds, the number of feature points, face comparison thresholds, and thresholds for different attributes. Thresholds can be configured dynamically according to actual service requirements, and different thresholds can be set for different scenes, lighting conditions, or angles. The system also supports setting different comparison thresholds for different face libraries, preventing the case where a visiting member's comparison similarity is too low to output a result, which improves the member customer experience.
Table 1 Relationship between the number of faces in the base library, detection accuracy, and time consumption

Number of faces in base library | 50    | 500   | 5000  | 50000 | 500000
Accuracy                        | 99.6% | 99.2% | 98.5% | 97.3% | 95.7%
Time consumed (ms)              | 0.3   | 2     | 21    | 186   | 1955
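A sketch of what such per-library, per-scene threshold configuration might look like; all names and values here are illustrative, not from the patent:

```python
# Hypothetical configuration; keys and values are illustrative placeholders.
FACE_RECOGNITION_THRESHOLDS = {
    "member_library":  {"comparison": 0.50, "quality": 0.80, "feature_points": 68},
    "visitor_library": {"comparison": 0.40, "quality": 0.80, "feature_points": 68},
}

# Per-scene loosening, e.g. for poor light or steep camera angles
SCENE_ADJUSTMENTS = {"low_light": 0.05, "steep_angle": 0.05}

def comparison_threshold(library, scene=None):
    # A looser member-library threshold avoids the "similarity too low,
    # no result output" case described in the text above
    base = FACE_RECOGNITION_THRESHOLDS[library]["comparison"]
    return base + SCENE_ADJUSTMENTS.get(scene, 0.0)
```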
In the embodiment of the application, (1) before face recognition is performed, the member numbers of members positioned near the merchant are acquired from the micro base station in real time to build the face base library, which reduces the number of base library entries to match against and makes face recognition fast and accurate;
(2) when a member enters the defined distance range, the merchant sends marketing information to the member; when the member enters the store, the dedicated service specialist is quickly notified to greet and guide the member, achieving precise service;
(3) comparing vector similarity in the gradient direction evaluates the feature vectors along the dual dimensions of distance and direction, which improves the ability to distinguish whether face features are similar.
In the embodiment of the application, members around the merchant can be marketed to quickly and precisely; the store's member portraits are acquired rapidly, and differentiated services are provided for members; members are guided to reasonable consumption when entering the store, increasing the merchant's revenue; and precise member marketing is realized, providing a comprehensive solution for service-oriented merchants.
When a member enters the coverage range of the 5G micro base station, the merchant sends targeted marketing information to the member through the base station; members heading towards the merchant's location are screened out to build a temporary face recognition library; when a member enters the store, the face recognition system matches against the temporary library to quickly identify member information such as name and consumption preferences, and a service specialist is notified to greet and guide the member, realizing precise service; a face-scanning payment service is also provided at checkout, improving the members' service perception.
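As a high-level sketch of this flow, where every function and attribute name is hypothetical and stands in for a subsystem of the scheme:

```python
def on_member_enters_coverage(member, micro_base_station, merchant):
    # Step 1: member enters 5G micro base station coverage; push marketing info
    micro_base_station.send(member, merchant.marketing_info_for(member))
    # Step 2: members heading towards the merchant join the temporary library
    if member.moving_towards(merchant.location):
        merchant.temporary_face_library.add(member.feature_group)

def on_store_entry(frame, merchant):
    # Step 3: match the captured frame against the small temporary library
    member = merchant.face_recognizer.match(frame, merchant.temporary_face_library)
    if member is not None:
        # Step 4: notify the service specialist with name and preferences
        merchant.notify_specialist(member.name, member.consumption_preferences)
```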
It should be noted that although the various steps of the methods in this application are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the shown steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Based on the foregoing embodiments, the present application provides an identification apparatus. The modules included in the apparatus and the units included in the modules may be implemented by a processor; of course, they can also be implemented by specific logic circuits. In implementation, the processor may be a Central Processing Unit (CPU), a Microprocessor (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 7 is a schematic structural diagram of an identification apparatus according to an embodiment of the present application, and as shown in fig. 7, the apparatus 700 includes an extraction module 701, a comparison module 702, and an identification module 703, where: the extraction module 701 is configured to perform feature extraction on an object to be identified in the acquired first image to obtain a feature vector of a first key feature point; a comparing module 702, configured to compare a difference between feature vectors of any one of the first key feature points and at least one other of the first key feature points, to obtain a first comparison result corresponding to the any one of the first key feature points; an identifying module 703 is configured to use a first comparison result corresponding to each first key feature point as a feature of the object to be identified, and identify whether the object to be identified belongs to the first service type based on at least the feature vector of each first key feature point and the corresponding first comparison result.
In some embodiments, the apparatus 700 further comprises an output module, configured to output a first prompt message if the object to be identified belongs to the first service type; the first prompt message is used for indicating that a first service is provided for the object to be identified; the output module is further used for outputting a second prompt message if the object to be identified belongs to a second service type; the second prompt message is used for indicating that a second service is provided for the object to be identified, the first service type is different from the second service type, and the second service is different from the first service.
In some embodiments, the apparatus 700 further includes an obtaining module and a determining module, where the obtaining module is configured to obtain a second comparison result corresponding to a second key feature point of a first object belonging to the first service type in the database; wherein the second comparison result is obtained by comparing a difference between feature vectors of any one of the second key feature points of the first object and at least one other of the second key feature points of the first object; a comparing module 702, configured to compare a difference between feature vectors of any one of the first key feature points of the object to be identified and a second key feature point of the first object, to obtain a third comparison result corresponding to the any one of the first key feature points; the determining module is configured to use a second comparison result corresponding to each second key feature point as a feature of the first object, and determine, based on each first comparison result, each second comparison result, and each third comparison result, a matching degree between the object to be identified and the first object; the determining module is further configured to determine whether the object to be identified belongs to the first service type according to the matching degree.
In some embodiments, the determining module is further configured to determine a first degree of difference between the object to be identified and the first object based on each of the first comparison results and each of the second comparison results; determining a second degree of difference between the object to be recognized and the first object based on each of the third comparison results; and determining the matching degree between the object to be recognized and the first object based on the first difference degree and the second difference degree.
In some embodiments, the determining module is further configured to determine, from all the obtained matching degrees, a target matching degree that characterizes the object to be recognized and has the greatest similarity with the first object; determining whether the target matching degree meets a first condition; and if the target matching degree meets a first condition, determining that the object to be identified belongs to the first service type.
In some embodiments, the apparatus 700 further comprises a detection module for detecting an identity and a moving direction of a second object within a particular physical area; the selecting module is used for selecting the first object from the second objects according to the identity and the moving direction of the second objects; an extracting module 701, configured to perform feature extraction on the first object to obtain a feature vector of the second key feature point; a comparing module 702, configured to compare a difference between feature vectors of any one of the second key feature points of the first object and at least one other of the second key feature points of the first object, to obtain a second comparison result corresponding to the any one of the second key feature points; and taking the identity of the first object, the feature vector of the second key feature point of the first object and a second comparison result corresponding to the second key feature point as a feature group, and obtaining the database based on the feature group of each first object.
In some embodiments, the selecting module is further configured to select a target object belonging to a first service type from the second objects according to the identity of the second object; the determining module is further configured to determine whether the target object is approaching a service providing point step by step according to the moving direction of the target object; if so, the target object is taken as the first object.
The above description of the apparatus embodiments, similar to the above description of the method embodiments, has similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be noted that, in the embodiment of the present application, the division of the identification apparatus shown in Fig. 7 into modules is schematic and is only one kind of logical function division; other division manners are possible in actual implementation. In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, may exist physically alone, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware, in the form of a software functional unit, or in a combination of software and hardware.
It should be noted that, in the embodiment of the present application, if the method described above is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
An electronic device according to an embodiment of the present application is provided, fig. 8 is a schematic diagram of a hardware entity of the electronic device according to the embodiment of the present application, and as shown in fig. 8, the electronic device 800 includes a memory 801 and a processor 802, the memory 801 stores a computer program that can be executed on the processor 802, and the processor 802 executes the computer program to implement the steps in the method provided in the embodiment.
It should be noted that the Memory 801 is configured to store instructions and applications executable by the processor 802, and may also buffer data (e.g., image data, audio data, voice communication data, and video communication data) to be processed or processed by each module in the processor 802 and the electronic device 800, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
Embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps in the methods provided in the above embodiments.
Embodiments of the present application provide a computer program product containing instructions, which when executed on a computer, cause the computer to perform the steps of the method provided by the above method embodiments.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium, the storage medium and the device of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "some embodiments" means that a particular feature, structure or characteristic described in connection with the embodiments is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments. The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
The term "and/or" herein is merely an association relationship describing an associated object, and means that three relationships may exist, for example, object a and/or object B, may mean: the object A exists alone, the object A and the object B exist simultaneously, and the object B exists alone.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice, such as: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be electrical, mechanical or in other forms.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules; can be located in one place or distributed on a plurality of network units; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may be separately used as one unit, or two or more modules may be integrated into one unit; the integrated module can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application or portions thereof that contribute to the related art may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes several instructions for causing an electronic device to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media that can store program code, such as removable storage devices, ROMs, magnetic or optical disks, etc.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to arrive at new method embodiments.
The features disclosed in the several product embodiments presented in this application can be combined arbitrarily, without conflict, to arrive at new product embodiments.
The features disclosed in the several method or apparatus embodiments provided herein may be combined in any combination to arrive at a new method or apparatus embodiment without conflict.
The above description covers only embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. An identification method, characterized in that the method comprises:
performing feature extraction on an object to be identified in the acquired first image to obtain a feature vector of a first key feature point;
comparing the difference between the feature vectors of any one first key feature point and at least one other first key feature point to obtain a first comparison result corresponding to any one first key feature point;
and taking the first comparison result corresponding to each first key feature point as the feature of the object to be identified, and identifying whether the object to be identified belongs to a first service type or not at least based on the feature vector of each first key feature point and the corresponding first comparison result.
2. The method of claim 1, further comprising:
if the object to be identified belongs to the first service type, outputting a first prompt message; the first prompt message is used for indicating that a first service is provided for the object to be identified;
if the object to be identified belongs to a second service type, outputting a second prompt message; the second prompt message is used for indicating that a second service is provided for the object to be identified, the first service type is different from the second service type, and the second service is different from the first service.
3. The method according to claim 1, wherein identifying whether the object to be identified belongs to a first service type based on at least the feature vector of each key feature point and the corresponding first comparison result comprises:
acquiring a second comparison result corresponding to a second key feature point of a first object belonging to the first service type in the database; wherein the second comparison result is obtained by comparing a difference between feature vectors of any one of the second key feature points of the first object and at least one other of the second key feature points of the first object;
comparing the difference between the feature vectors of any first key feature point of the object to be identified and the second key feature point of the first object to obtain a third comparison result corresponding to any first key feature point;
taking a second comparison result corresponding to each second key feature point as the feature of the first object, and determining the matching degree of the object to be identified and the first object based on each first comparison result, each second comparison result and each third comparison result;
and determining whether the object to be identified belongs to the first service type or not according to the matching degree.
4. The method according to claim 3, wherein the determining the matching degree of the object to be identified and the first object based on each of the first comparison result, each of the second comparison result and each of the third comparison result comprises:
determining a first degree of difference between the object to be recognized and the first object based on each of the first comparison results and each of the second comparison results;
determining a second degree of difference between the object to be recognized and the first object based on each of the third comparison results;
and determining the matching degree between the object to be recognized and the first object based on the first difference degree and the second difference degree.
5. The method according to claim 3, wherein said determining whether the object to be identified belongs to the first service type according to each of the matching degrees comprises:
determining a target matching degree representing the maximum similarity between the object to be recognized and the first object from all the obtained matching degrees;
determining whether the target matching degree meets a specific condition;
and if the target matching degree meets a specific condition, determining that the object to be identified belongs to the first service type.
6. The method of claim 3, wherein the database building process comprises:
detecting the identity and the moving direction of a second object in a specific physical area;
selecting the first object from the second objects according to the identity and the moving direction of the second objects;
performing feature extraction on the first object to obtain a feature vector of the second key feature point;
comparing the difference between any second key feature point of the first object and the feature vector of at least one other second key feature point of the first object to obtain a second comparison result corresponding to any second key feature point;
and taking the identity of the first object, the feature vector of the second key feature point of the first object and a second comparison result corresponding to the second key feature point as a feature group, and obtaining the database based on the feature group of each first object.
7. The method of claim 6, wherein the selecting the first object from the second objects according to the identity and the moving direction of the second object comprises:
selecting a target object belonging to a first service type from the second object according to the identity of the second object;
determining whether the target object is approaching a service providing point step by step according to the moving direction of the target object; if so, the target object is taken as the first object.
8. The method of claim 7, further comprising:
and acquiring marketing information and sending the marketing information to the target object.
9. An identification device, comprising:
the extraction module is used for extracting the features of the object to be identified in the acquired first image to obtain a feature vector of a first key feature point;
the comparison module is used for comparing the difference between the feature vectors of any one first key feature point and at least one other first key feature point to obtain a first comparison result corresponding to any one first key feature point;
and the identification module is used for taking a first comparison result corresponding to each first key feature point as the feature of the object to be identified, and identifying whether the object to be identified belongs to the first service type at least based on the feature vector of each first key feature point and the corresponding first comparison result.
10. An electronic device comprising a memory and a processor, the memory storing a computer program operable on the processor, wherein the processor implements the method of any one of claims 1 to 8 when executing the program.
11. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.