CN111641640A - Equipment binding processing method and device

Equipment binding processing method and device

Info

Publication number
CN111641640A
Authority
CN
China
Prior art keywords
binding
information
data
equipment
face feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010471135.1A
Other languages
Chinese (zh)
Other versions
CN111641640B (en)
Inventor
李郗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd
Priority to CN202010471135.1A
Publication of CN111641640A
Application granted
Publication of CN111641640B
Legal status: Active (Current)


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/08 - Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861 - Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a device binding processing method and apparatus. The method includes: receiving binding data sent by a device, where the binding data at least includes face feature information and device information of the device, and the face feature information is extracted from a face image acquired by a camera while the device is in an unbound state; and binding the face feature information with the device information. This solves the problem in the related art that device binding is performed through a third-party application and can only be completed through multiple interactive operations by the user, making the binding operation complicated.

Description

Equipment binding processing method and device
Technical Field
The present invention relates to the field of communications, and in particular, to a device binding processing method and apparatus.
Background
With the continuous progress of science and technology, smart phones and smart devices have entered ordinary households, and controlling smart devices with a smart phone is now the mainstream. However, before a mobile phone can control a smart device, at least two steps are currently required: first, downloading and installing the corresponding mobile phone app software, and second, binding with the smart device step by step according to the prompts of the app. In addition, background verification of binding information usually only queries whether the bound device information exists in a device inventory table; if it exists, the information is regarded as valid.
Registering and logging in to an existing manufacturer APP is complicated: the user has to click and type many times on the APP side, which results in a poor user experience. For the binding itself, the user first has to look up the binding mode and then bind with the corresponding scheme; the steps are cumbersome, the binding modes differ from manufacturer to manufacturer, and the learning cost is high.
No effective solution has yet been proposed for the problem in the related art that device binding is performed through a third-party application and can only be completed through multiple interactive operations by the user, making the binding operation complicated.
Disclosure of Invention
Embodiments of the invention provide a device binding processing method and apparatus, so as to at least solve the problem in the related art that device binding is performed through a third-party application and can only be completed through multiple interactive operations by the user, making the binding operation complicated.
According to an embodiment of the present invention, there is provided a device binding processing method, including:
receiving binding data sent by equipment, wherein the binding data at least comprises face feature information and equipment information of the equipment, and the face feature information is face feature information of a face image acquired by a camera under the condition that the equipment is in an unbound state;
and binding the face feature information with the equipment information.
Optionally, the binding the face feature information and the device information includes:
judging whether the equipment information is matched with actual equipment information;
if the judgment result is yes, performing multivariate outlier detection on the binding data to check the validity of the binding data;
and if the binding data are valid data, binding the face feature information with the equipment information.
Optionally, performing multivariate outlier detection on the binding data to verify the validity of the binding data includes:
forming the binding data into a coordinate task point;
acquiring a preset number of pieces of binding information from a database, and respectively using them as the preset number of target task points;
respectively determining the distances between the coordinate task point and the preset number of target task points;
determining a local discrete factor of the coordinate task point according to the distance;
and checking whether the binding data is valid data or not according to the local discrete factor.
Optionally, checking whether the binding data is valid data according to the local discrete factor includes:
if the difference between the local discrete factor and 1 is larger than a preset threshold value, determining that the binding data is invalid data;
and if the difference between the local discrete factor and 1 is less than or equal to the preset threshold value, determining that the binding data is valid data.
Optionally, determining the local discrete factor of the coordinate task point according to the distance includes:
determining a point, among the preset number of target task points, whose distance from the coordinate task point is smaller than the Kth distance of the coordinate task point as a neighborhood point of the coordinate task point;
determining the local reachable density of the neighborhood point and the local reachable density of the coordinate task point;
and determining the local discrete factor of the coordinate task point according to the local reachable density of the neighborhood point and the local reachable density of the coordinate task point.
Optionally, performing multivariate outlier detection on the binding data to verify the validity of the binding data includes:
forming the binding data into a coordinate task point;
acquiring a preset number of pieces of binding information from a database, and respectively using them as the preset number of target task points;
respectively determining the distances between the coordinate task point and the preset number of target task points;
if the number of the target task points of which the distance is greater than or equal to a first preset threshold is greater than a second preset threshold, determining that the binding data are valid data;
and if the number of the target task points with the distance smaller than the first preset threshold is larger than a second preset threshold, determining that the binding data are invalid data.
According to another embodiment of the present invention, there is also provided a device binding processing method, including:
under the condition that the equipment is in an unbound state, acquiring a face image through a camera;
extracting face feature information of the face image, and acquiring equipment information of the equipment;
combining the human face feature information and the equipment information into binding data;
and sending the binding data to a cloud server, wherein the binding data is used for indicating the cloud server to bind the face feature information with the equipment information.
Optionally, after sending the binding data to a cloud server, the method further includes:
receiving binding success data sent by the cloud server, wherein the binding success data carries the face feature information and the equipment information;
judging whether the received face feature information is matched with face feature information of the face image stored in advance or not and whether the received equipment information is matched with equipment information stored in advance or not;
and if the judgment result is yes, updating the state of the equipment from the unbound state to the bound state.
According to another embodiment of the present invention, there is also provided a device binding processing apparatus, including:
a first receiving module, configured to receive binding data sent by a device, where the binding data at least includes face feature information and device information of the device, and the face feature information is face feature information of a face image acquired by a camera while the device is in an unbound state;
and the binding module is used for binding the face feature information with the equipment information.
Optionally, the binding module includes:
the judgment submodule is used for judging whether the equipment information is matched with the actual equipment information;
the checking submodule is used for carrying out multi-element outlier detection on the binding data under the condition that the judgment result is yes so as to check the validity of the binding data;
and the binding submodule is used for binding the face feature information with the equipment information if the binding data are valid data.
Optionally, the check submodule includes:
the first composition unit is used for composing the binding data into a coordinate task point;
a first acquisition unit, configured to acquire a preset number of pieces of binding information from a database and respectively use the preset number of pieces of binding information as the preset number of target task points;
a first determining unit, configured to respectively determine the distances between the coordinate task point and the preset number of target task points;
the second determining unit is used for determining a local discrete factor of the coordinate task point according to the distance;
and the checking unit is used for checking whether the binding data is valid data according to the local discrete factor.
Optionally, the checking unit is further configured to:
determine that the binding data is invalid data if the difference between the local discrete factor and 1 is larger than a preset threshold value;
and determine that the binding data is valid data if the difference between the local discrete factor and 1 is less than or equal to the preset threshold value.
Optionally, the second determining unit is further configured to:
determine a point, among the preset number of target task points, whose distance from the coordinate task point is smaller than the Kth distance of the coordinate task point as a neighborhood point of the coordinate task point;
determine the local reachable density of the neighborhood point and the local reachable density of the coordinate task point;
and determine the local discrete factor of the coordinate task point according to the local reachable density of the neighborhood point and the local reachable density of the coordinate task point.
Optionally, the check submodule includes:
the first composition unit is used for composing the binding data into a coordinate task point;
a second acquisition unit, configured to acquire a preset number of pieces of binding information from a database and respectively use the preset number of pieces of binding information as the preset number of target task points;
a third determining unit, configured to respectively determine the distances between the coordinate task point and the preset number of target task points;
a fourth determining unit, configured to determine that the binding data is valid data if the number of target task points whose distances are greater than or equal to the first preset threshold is greater than a second preset threshold;
and a fifth determining unit, configured to determine that the binding data is invalid data if the number of target task points whose distances are smaller than the first preset threshold is greater than a second preset threshold.
According to another embodiment of the present invention, there is also provided a device binding processing apparatus, including:
the acquisition module is used for acquiring a face image through the camera under the condition that the equipment is in an unbound state;
the extraction module is used for extracting the face characteristic information of the face image and acquiring the equipment information of the equipment;
the composition module is used for composing the human face feature information and the equipment information into binding data;
and the sending module is used for sending the binding data to a cloud server, wherein the binding data is used for indicating the cloud server to bind the face feature information with the equipment information.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving binding success data sent by the cloud server, wherein the binding success data carries the face feature information and the equipment information;
the judging module is used for judging whether the received face feature information is matched with face feature information of the face image stored in advance and whether the received equipment information is matched with equipment information stored in advance;
and the updating module is used for updating the state of the equipment from the unbound state to the bound state under the condition that the judgment result is yes.
According to a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, binding data sent by a device is received, where the binding data at least includes face feature information and device information of the device, and the face feature information is extracted from a face image acquired by a camera while the device is in an unbound state; the face feature information is then bound with the device information. This solves the problem in the related art that device binding is performed through a third-party application and can only be completed through multiple interactive operations by the user, making the binding operation complicated.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware structure of a mobile terminal of a device binding processing method according to an embodiment of the present invention;
FIG. 2 is a first flowchart of a device binding processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of face recognition based device binding according to an embodiment of the present invention;
FIG. 4 is a first diagram illustrating a validity check according to an embodiment of the present invention;
FIG. 5 is a second schematic diagram of a validity check according to an embodiment of the present invention;
FIG. 6 is a second flowchart of a device binding processing method according to an embodiment of the present invention;
FIG. 7 is a first block diagram of a device binding processing apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram ii of the device binding processing apparatus according to the embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a hardware structure block diagram of the mobile terminal of the device binding processing method according to the embodiment of the present invention, as shown in fig. 1, a mobile terminal 10 may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, and optionally, the mobile terminal may further include a transmission device 106 for a communication function and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the device binding processing method in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, thereby implementing the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by the communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can be connected to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
Based on the above mobile terminal or network architecture, in this embodiment, a device binding processing method is provided, and fig. 2 is a first flowchart of the device binding processing method according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, receiving binding data sent by equipment, wherein the binding data at least comprises face feature information and equipment information of the equipment, and the face feature information is the face feature information of a face image acquired by a camera under the condition that the equipment is in an unbound state;
and step S204, binding the face feature information with the equipment information.
Through steps S202 to S204, binding data sent by a device is received, where the binding data at least includes face feature information and device information of the device, and the face feature information is extracted from a face image acquired by a camera while the device is in an unbound state; the face feature information is then bound with the device information. This solves the problem in the related art that device binding is performed through a third-party application and can only be completed through multiple interactive operations by the user, making the binding operation complicated.
Fig. 3 is a schematic diagram of device binding based on face recognition according to an embodiment of the present invention. As shown in fig. 3, the smart screen device may be a smart refrigerator, a smart speaker, a smart television, or the like. The user data service platform is responsible for checking binding information and establishing the user-to-device binding relationship; within it, the binding data check module analyzes illegal binding information, the face recognition database stores the users' face information data, the user database stores the information of all users, and the equipment database stores manufacturer device data.
Because lawbreakers often forge device data during the binding process to steal users' private information through device binding, the traditional legality check of binding data usually maintains and upgrades the binding function only after problems have occurred, so the defense tends to lag behind newly forged binding information. In view of the foregoing problem, in the embodiment of the present invention, step S204 may specifically include:
s2061, judging whether the equipment information is matched with the actual equipment information;
s2062, under the condition that the judgment result is yes, performing multi-element outlier detection on the binding data to check the validity of the binding data;
and S2063, if the binding data are effective data, binding the face feature information and the equipment information.
Based on the binding big data, an anomaly verification algorithm based on multivariate outliers is introduced during device binding to confirm the legality of the device binding information.
In an optional embodiment, the multivariate outlier detection may be performed on the binding data through a Local Outlier Factor (LOF) algorithm. In this case, step S2062 may specifically include: forming the binding data into a coordinate task point; acquiring a preset number of pieces of binding information from a database and respectively using them as the preset number of target task points; respectively determining the distances between the coordinate task point and the preset number of target task points; determining a local discrete factor of the coordinate task point according to the distances; and checking whether the binding data is valid data according to the local discrete factor. Further, if the difference between the local discrete factor and 1 is greater than a preset threshold value, the binding data is determined to be invalid data; and if the difference between the local discrete factor and 1 is less than or equal to the preset threshold value, the binding data is determined to be valid data.
Further, determining the local discrete factor of the coordinate task point according to the distance may specifically include: determining a point, among the preset number of target task points, whose distance from the coordinate task point is smaller than the Kth distance of the coordinate task point as a neighborhood point of the coordinate task point; determining the local reachable density of the neighborhood point and the local reachable density of the coordinate task point; and determining the local discrete factor of the coordinate task point according to the local reachable density of the neighborhood point and the local reachable density of the coordinate task point.
In another optional embodiment, step S2062 may specifically include: forming the binding data into a coordinate task point; acquiring a preset number of pieces of binding information from a database and respectively using them as the preset number of target task points; respectively determining the distances between the coordinate task point and the preset number of target task points; if the number of target task points whose distance is greater than or equal to a first preset threshold is greater than a second preset threshold, determining that the binding data is valid data; and if the number of target task points whose distance is smaller than the first preset threshold is greater than the second preset threshold, determining that the binding data is invalid data.
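For illustration only, the following is a minimal sketch of this distance-count check as stated above; the numeric encoding of the binding fields into vectors, the function name, and both thresholds are assumptions and not part of the embodiment.

```python
# Hedged sketch of the distance-count validity check described above; the
# encoding of binding fields into numeric vectors and both thresholds are assumed.
import numpy as np

def distance_count_check(p, references, first_threshold, second_threshold):
    p = np.asarray(p, dtype=float)
    refs = np.asarray(references, dtype=float)
    dists = np.linalg.norm(refs - p, axis=1)      # distance to each target task point
    far = int(np.sum(dists >= first_threshold))   # points at or beyond the first threshold
    near = int(np.sum(dists < first_threshold))   # points inside the first threshold
    if far > second_threshold:
        return True       # treated as valid data, per the rule above
    if near > second_threshold:
        return False      # treated as invalid data, per the rule above
    return None           # neither condition met; the text does not specify this case
```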
The following describes examples of the present invention with reference to specific examples.
Assuming that the networking environment of the intelligent device is normal, the intelligent device is in an unbound state, and all service platforms operate normally, including:
Step 1: the user starts the intelligent device for the first time.
When the intelligent device is connected to the network for the first time, it is in an unbound state. When the user wakes up the intelligent device, the device gives the user an interactive prompt for device binding. When the user chooses to bind, face recognition is performed on the user through the camera of the screen-side device and face information is collected.
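For illustration only, a minimal sketch of this screen-side capture step is given below, assuming OpenCV and a Haar cascade file are available on the device; the resized grayscale crop is a simple stand-in for whatever face feature extraction the device actually performs.

```python
# Hypothetical sketch of the screen-side face capture step. The cascade file
# path, camera index, and the crop-based "feature" are all stand-in assumptions.
import cv2
import numpy as np

def capture_face_feature(cascade_path="haarcascade_frontalface_default.xml",
                         camera_index=0, size=(64, 64)):
    cascade = cv2.CascadeClassifier(cascade_path)
    cam = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cam.read()
        if not ok:
            return None
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None                          # no face in view, keep prompting
        x, y, w, h = faces[0]                    # take the first detection
        crop = cv2.resize(gray[y:y + h, x:x + w], size)
        return (crop.astype(np.float32) / 255.0).flatten()  # stand-in feature vector
    finally:
        cam.release()
```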
Step 2: the intelligent device reports the data to the user data platform.
After acquiring the user's face information, the device starts the device binding service, combines the face information and the device information into binding data, compresses and encrypts the binding data, uploads it to the user data service platform over the network, and then waits for the binding result to be returned.
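A hedged sketch of this upload step is shown below; the JSON framing, zlib compression, Fernet encryption, field names, and endpoint URL are all assumptions, since the embodiment does not name a concrete compression or encryption scheme.

```python
# Minimal sketch of step 2 under assumed framing: JSON payload, zlib compression,
# then symmetric Fernet encryption with a key shared with the platform.
import json
import zlib
import requests
from cryptography.fernet import Fernet

def upload_binding_data(face_feature, device_info, key, url):
    payload = json.dumps({
        "faceID": [float(v) for v in face_feature],   # face feature vector
        "deviceID": device_info["deviceID"],
        "MAC": device_info["MAC"],
        "IP": device_info["IP"],
        "location": device_info["location"],
    }).encode("utf-8")
    blob = Fernet(key).encrypt(zlib.compress(payload))  # compress, then encrypt
    resp = requests.post(url, data=blob, timeout=10)    # upload and wait for the result
    return resp.status_code == 200
```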
Step 3: the user data service platform binds the user and the device.
After the user service platform receives the binding request sent by the screen-side device, it decodes the request and decompresses the information, and then performs the binding data validity check described in Step 4; if illegal data is detected, the flow ends directly. If the data is judged to be normal data, the user portrait information is matched against the face recognition database.
If no corresponding portrait information is matched, the user is regarded as a new user, and a new user account is created based on the portrait information.
If corresponding portrait information is matched, the user is regarded as already registered on the user platform, and the corresponding user account information is acquired according to the portrait information.
The user data service platform then binds the user account information with the device information. At this point, the binding between the account and the intelligent device has been completed in the background service.
After the above process is completed, the user service platform compresses and encrypts the bound intelligent device information and user account information, sends the encrypted information to the intelligent device, and the process then goes to Step 5.
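The following sketch illustrates one possible platform-side implementation of this step, assuming an in-memory face database and cosine similarity as the matching rule; the matching metric, threshold, account naming, and storage layout are not specified by the embodiment and are assumed here.

```python
# Hypothetical sketch of the platform side of step 3: match the portrait, create
# an account for a new user, and record the account-device binding.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def bind_user_to_device(face_feature, device_info, face_db, bindings,
                        match_threshold=0.9):
    # 1. match the portrait against the face recognition database
    account = None
    for acc, stored_feature in face_db.items():
        if cosine(face_feature, stored_feature) >= match_threshold:
            account = acc
            break
    # 2. unknown portrait: create a new user account from the face information
    if account is None:
        account = f"user_{len(face_db) + 1}"
        face_db[account] = face_feature
    # 3. bind the account to the device in the background service
    bindings[device_info["deviceID"]] = account
    return account                     # returned to the device in the receipt
```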
Step 4: check the legality of the binding data.
After the binding data has been decompressed and decrypted, it is first checked whether the device information matches actual device information, for example whether the device information (MAC, device unique code, model) can be found in the database of actually sold devices. If it cannot be found, the data may be considered illegal. If the binding data is found normally, it is fed into the LOF algorithm and multivariate outlier detection is performed on it, so as to judge whether the binding data carries a security risk. If a risk exists, the binding process is stopped and the result is fed back to the requesting end.
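A small illustrative sketch of the device information check is given below, assuming the inventory of actually sold devices is available as a set of (MAC, unique code, model) tuples; a real platform would query the equipment database instead.

```python
# Sketch of the device information check against an assumed in-memory inventory.
def device_info_matches(device_info, sold_devices):
    key = (device_info["MAC"], device_info["deviceID"], device_info["model"])
    return key in sold_devices     # not found -> the data may be considered illegal
```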
Multivariate outlier detection is then performed on the task using the Local Outlier Factor algorithm. The current binding data is described as a coordinate task point p(faceID, deviceID, IP, MAC, location) by the face information, the device unique code, the binding request IP address, the device Wi-Fi module MAC address, and the device geographical position, and binding information from the big-data store is selected as target task points o(faceID, deviceID, IP, MAC, location).
the distance d (p, o) between two points p and o is:
d(p, o) = sqrt( Σ_i (p_i - o_i)^2 )
the kth distance d for point pk(p) is defined as follows: dkD (p, o), and at least k points in the set excluding p, o ' ∈ c { o ' ≠ p }, satisfying d (p, o ') ≦ d (p, o).
Fig. 4 is a first schematic diagram of validity checking according to an embodiment of the present invention. As shown in fig. 4, the k-th distance of p is the distance from p to its k-th nearest point, not counting p itself.
The k-th distance neighborhood of point p, N_k(p), contains all points whose distance to p is within the k-th distance of p, including points exactly at the k-th distance. Therefore the number of k-th neighborhood points of p satisfies |N_k(p)| ≥ k.
The k-th reachable distance from point o to point p is defined as:
reach-dist_k(p, o) = max{ d_k(o), d(p, o) }
That is, the k-th reachable distance from point o to point p is at least the k-th distance of o; otherwise it is the true distance between o and p, i.e. d(p, o).
This also means that for the k points nearest to point o, the reachable distances from o to them are considered equal, all equal to d_k(o).
Fig. 5 is a second schematic diagram of validity checking according to an embodiment of the present invention. As shown in fig. 5, the 3rd reachable distance from point a to point p is the 3rd distance of point a:
reach-dist_{k=3}(p, a) = d_3(a),
and the 3rd reachable distance from point c to point p is the true distance between the two points:
reach-dist_{k=3}(p, c) = d(p, c).
Based on the above concepts, the local reachable density of a coordinate task point is defined as the reciprocal of the average reachable distance from the point to its neighborhood points, which gives the local reachable density formula for any point x:
lrd_k(x) = 1 / ( Σ_{y ∈ N_k(x)} reach-dist_k(x, y) / |N_k(x)| )
This represents the inverse of the average reachable distance from point x to all points in the k-th distance neighborhood of x, where y denotes any point in the k-th distance neighborhood of x.
lrd_k(x) should first be understood as a density: the higher the density, the more likely the point belongs to the same cluster as its neighbors, and the lower the density, the more likely it is an outlier. If x and its surrounding neighborhood points belong to the same cluster, the reachable distances are more likely to take the smaller value d_k(y), so the sum of reachable distances is smaller and the density value is higher; if x is far from its surrounding neighborhood points, the reachable distances are more likely to take the larger value d(x, y), so the density is lower and x is more likely to be an outlier.
Based on the above density formula, the local outlier factor for point p is expressed as:
LOF_k(p) = ( Σ_{o ∈ N_k(p)} lrd_k(o) / lrd_k(p) ) / |N_k(p)|
This represents the ratio of the average local reachable density of all points in the k-th distance neighborhood of point p to the local reachable density of point p itself, where lrd_k(o) is the reciprocal of the average reachable distance from the points in the k-th distance neighborhood of o to o, lrd_k(p) is the reciprocal of the average reachable distance from the points in the k-th distance neighborhood of p to p, and o is an arbitrary point in the k-th distance neighborhood of p.
If the ratio is close to 1, the density of p is almost the same as that of its neighborhood points, and p may belong to the same cluster as its neighborhood; if the ratio is less than 1, the density of p is higher than that of its neighborhood points, and p is a dense point; if the ratio is greater than 1, the density of p is lower than that of its neighborhood points, and p is more likely to be an outlier.
Assuming the k value is 3, task 1 (faceID, deviceID, IP, MAC, location) is defined; the 3rd distance of task 1 is calculated, the 3rd reachable distances from the other tasks to task 1 are calculated, the local reachable density of task 1 is calculated, and the local discrete factor of task 1 is calculated, which gives the local discrete factor value of the current binding information. If this value is far greater than 1, the data is considered illegal; otherwise it is considered valid data.
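To make the computation above concrete, the following self-contained sketch computes the local discrete factor (LOF value) of the current binding against reference bindings drawn from the database, assuming each binding record has already been encoded as a numeric vector (faceID, deviceID, IP, MAC, location); the encoding and the decision threshold are assumptions.

```python
# Self-contained sketch of the LOF check walked through above; binding records
# are assumed to be pre-encoded as numeric vectors.
import numpy as np

def lof_of_binding(p, references, k=3):
    """Return the local discrete (outlier) factor of binding vector p against
    the reference binding vectors drawn from the database."""
    refs = np.asarray(references, dtype=float)
    p = np.asarray(p, dtype=float)
    pts = np.vstack([refs, p])              # reference points plus the query point
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    def k_distance(i):
        d = np.delete(dist[i], i)           # distances from point i to all other points
        return np.sort(d)[k - 1]            # the k-th smallest of them

    def neighborhood(i):
        kd = k_distance(i)
        return [j for j in range(n) if j != i and dist[i, j] <= kd]

    def lrd(i):
        nbrs = neighborhood(i)
        reach = [max(k_distance(j), dist[i, j]) for j in nbrs]  # reachable distances
        return len(nbrs) / sum(reach)       # reciprocal of the average reachable distance

    q = n - 1                               # index of the current binding
    nbrs = neighborhood(q)
    return sum(lrd(j) for j in nbrs) / (len(nbrs) * lrd(q))

# Decision rule from the text: a value far greater than 1 is treated as illegal
# binding data, otherwise the binding is considered valid; 1.5 is an assumed cutoff.
def is_valid_binding(p, references, k=3, threshold=1.5):
    return lof_of_binding(p, references, k) <= threshold
```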
Step 5: the screen-side device receives the binding receipt and refreshes the interaction state. When the intelligent device receives the binding success data returned by the user service platform, it decompresses and decrypts the data and checks whether the data match the current device and the portrait information stored in the device. After the check succeeds, the intelligent device refreshes the interaction prompt to show the current user that binding has succeeded, and stores and displays the corresponding user account information. At this point, the whole device binding process is finished.
With the device binding mode provided by this embodiment of the invention, the user can bind a device simply by performing face recognition on the device itself, without using any app, which greatly improves the user experience. A mathematical model is built on the binding big data, so illegal binding data can be detected by analyzing the data parameters with an outlier algorithm; compared with the traditional binding verification mode, this achieves an early defense against illegal data.
Example 2
According to another embodiment of the present invention, a device binding processing method is further provided. Fig. 6 is a second flowchart of the device binding processing method according to an embodiment of the present invention; as shown in fig. 6, the method includes:
step S602, collecting a face image through a camera under the condition that the equipment is in an unbound state;
step S604, extracting the face feature information of the face image, and acquiring the equipment information of the equipment;
step S606, the human face feature information and the equipment information form binding data;
step S608, sending the binding data to a cloud server, where the binding data is used to instruct the cloud server to bind the facial feature information and the device information.
Further, after the binding data is sent to the cloud server, binding success data sent by the cloud server is received, where the binding success data carries the face feature information and the device information; it is judged whether the received face feature information matches the pre-stored face feature information of the face image and whether the received device information matches the pre-stored device information; and if the judgment result is yes, the state of the device is updated from the unbound state to the bound state.
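A hypothetical sketch of this receipt check is shown below, reusing the Fernet key and zlib framing assumed in the upload sketch above; the field names and the local state dictionary are illustrative only.

```python
# Hypothetical sketch of the device-side receipt check after binding.
import json
import zlib
import numpy as np
from cryptography.fernet import Fernet

def handle_binding_receipt(blob, key, local_feature, local_device_info, state):
    data = json.loads(zlib.decompress(Fernet(key).decrypt(blob)))
    face_ok = np.allclose(data["faceID"], local_feature, atol=1e-6)
    device_ok = data["deviceID"] == local_device_info["deviceID"]
    if face_ok and device_ok:
        state["bound"] = True                # update: unbound -> bound
        state["account"] = data.get("account")
    return state
```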
Further, a prompt message of success or failure of binding is displayed on a display interface to ensure that a user knows the binding condition.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 3
In this embodiment, an apparatus binding processing apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and details of which have been already described are omitted. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 7 is a first block diagram of the device binding processing apparatus according to the embodiment of the present invention, as shown in fig. 7, including:
a first receiving module 72, configured to receive binding data sent by a device, where the binding data at least includes face feature information and device information of the device, and the face feature information is face feature information of a face image acquired by a camera when the device is in an unbound state;
and a binding module 74, configured to bind the facial feature information with the device information.
Optionally, the binding module 74 includes:
the judgment submodule is used for judging whether the equipment information is matched with the actual equipment information;
the checking submodule is used for carrying out multi-element outlier detection on the binding data under the condition that the judgment result is yes so as to check the validity of the binding data;
and the binding submodule is used for binding the face feature information with the equipment information if the binding data are valid data.
Optionally, the check submodule includes:
the first composition unit is used for composing the binding data into a coordinate task point;
a first acquisition unit, configured to acquire a preset number of pieces of binding information from a database and respectively use the preset number of pieces of binding information as the preset number of target task points;
a first determining unit, configured to respectively determine the distances between the coordinate task point and the preset number of target task points;
the second determining unit is used for determining a local discrete factor of the coordinate task point according to the distance;
and the checking unit is used for checking whether the binding data is valid data according to the local discrete factor.
Optionally, the checking unit is further configured to:
determine that the binding data is invalid data if the difference between the local discrete factor and 1 is larger than a preset threshold value;
and determine that the binding data is valid data if the difference between the local discrete factor and 1 is less than or equal to the preset threshold value.
Optionally, the second determining unit is further configured to:
determine a point, among the preset number of target task points, whose distance from the coordinate task point is smaller than the Kth distance of the coordinate task point as a neighborhood point of the coordinate task point;
determine the local reachable density of the neighborhood point and the local reachable density of the coordinate task point;
and determine the local discrete factor of the coordinate task point according to the local reachable density of the neighborhood point and the local reachable density of the coordinate task point.
Optionally, the check submodule includes:
the first composition unit is used for composing the binding data into a coordinate task point;
a second acquisition unit, configured to acquire a preset number of pieces of binding information from a database and respectively use the preset number of pieces of binding information as the preset number of target task points;
a third determining unit, configured to respectively determine the distances between the coordinate task point and the preset number of target task points;
a fourth determining unit, configured to determine that the binding data is valid data if the number of target task points whose distances are greater than or equal to the first preset threshold is greater than a second preset threshold;
and a fifth determining unit, configured to determine that the binding data is invalid data if the number of target task points whose distances are smaller than the first preset threshold is greater than a second preset threshold.
Example 4
According to another embodiment of the present invention, there is further provided a device binding processing apparatus, and fig. 8 is a second block diagram of the device binding processing apparatus according to the embodiment of the present invention, as shown in fig. 8, including:
the acquisition module 82 is used for acquiring a face image through a camera under the condition that the equipment is in an unbound state;
an extraction module 84, configured to extract face feature information of the face image, and obtain device information of the device;
a composing module 86, configured to compose binding data from the face feature information and the device information;
a sending module 88, configured to send the binding data to a cloud server, where the binding data is used to instruct the cloud server to bind the facial feature information and the device information.
Optionally, the apparatus further comprises:
the second receiving module is used for receiving binding success data sent by the cloud server, wherein the binding success data carries the face feature information and the equipment information;
the judging module is used for judging whether the received face feature information is matched with face feature information of the face image stored in advance and whether the received equipment information is matched with equipment information stored in advance;
and the updating module is used for updating the state of the equipment from the unbound state to the bound state under the condition that the judgment result is yes.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 5
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s11, receiving binding data sent by equipment, wherein the binding data at least comprises face feature information and equipment information of the equipment, and the face feature information is face feature information of a face image acquired by a camera under the condition that the equipment is in an unbound state;
and S12, binding the face feature information and the equipment information.
Optionally, in this embodiment, the storage medium may be further configured to store a computer program for executing the following steps:
s21, collecting a face image through a camera under the condition that the equipment is in an unbound state;
s22, extracting the face feature information of the face image, and acquiring the equipment information of the equipment;
s23, the human face feature information and the equipment information form binding data;
and S24, sending the binding data to a cloud server, wherein the binding data is used for indicating the cloud server to bind the face feature information with the device information.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media that can store computer programs.
Example 6
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s11, receiving binding data sent by equipment, wherein the binding data at least comprises face feature information and equipment information of the equipment, and the face feature information is face feature information of a face image acquired by a camera under the condition that the equipment is in an unbound state;
and S12, binding the face feature information and the equipment information.
Optionally, in this embodiment, the processor may be further configured to execute, by the computer program, the following steps:
s21, collecting a face image through a camera under the condition that the equipment is in an unbound state;
s22, extracting the face feature information of the face image, and acquiring the equipment information of the equipment;
s23, the human face feature information and the equipment information form binding data;
and S24, sending the binding data to a cloud server, wherein the binding data is used for indicating the cloud server to bind the face feature information with the device information.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A device binding processing method, comprising:
receiving binding data sent by equipment, wherein the binding data at least comprises face feature information and equipment information of the equipment, and the face feature information is face feature information of a face image acquired by a camera under the condition that the equipment is in an unbound state;
and binding the face feature information with the equipment information.
2. The method of claim 1, wherein binding the face feature information with the device information comprises:
judging whether the equipment information is matched with actual equipment information;
if the judgment result is yes, performing multivariate outlier detection on the binding data to check the validity of the binding data;
and if the binding data are valid data, binding the face feature information with the equipment information.
3. The method of claim 2, wherein performing multi-outlier detection on the bound data to verify validity of the bound data comprises:
forming the binding data into a coordinate task point;
acquiring a preset number of pieces of binding information from a database, and respectively using them as the preset number of target task points;
respectively determining the distances between the coordinate task point and the preset number of target task points;
determining a local discrete factor of the coordinate task point according to the distance;
and checking whether the binding data is valid data or not according to the local discrete factor.
4. The method of claim 3, wherein checking whether the binding data is valid data according to the local discrete factor comprises:
if the difference between the local discrete factor and 1 is larger than a preset threshold value, determining that the binding data is invalid data;
and if the difference between the local discrete factor and 1 is less than or equal to the preset threshold value, determining that the binding data is valid data.
5. The method of claim 3, wherein determining the local discrete factor for the coordinate task point according to the distance comprises:
determining a point, among the preset number of target task points, whose distance from the coordinate task point is smaller than the Kth distance of the coordinate task point as a neighborhood point of the coordinate task point;
determining the local reachable density of the neighborhood point and the local reachable density of the coordinate task point;
and determining the local discrete factor of the coordinate task point according to the local reachable density of the neighborhood point and the local reachable density of the coordinate task point.
6. The method of claim 2, wherein performing multi-outlier detection on the bound data to verify validity of the bound data comprises:
forming the binding data into a coordinate task point;
acquiring a preset number of pieces of binding information from a database, and respectively using them as the preset number of target task points;
respectively determining the distances between the coordinate task point and the preset number of target task points;
if the number of the target task points of which the distance is greater than or equal to a first preset threshold is greater than a second preset threshold, determining that the binding data are valid data;
and if the number of the target task points with the distance smaller than the first preset threshold is larger than a second preset threshold, determining that the binding data are invalid data.
7. A device binding processing method, comprising:
under the condition that the equipment is in an unbound state, acquiring a face image through a camera;
extracting face feature information of the face image, and acquiring equipment information of the equipment;
combining the human face feature information and the equipment information into binding data;
and sending the binding data to a cloud server, wherein the binding data is used for indicating the cloud server to bind the face feature information with the equipment information.
8. The method of claim 7, wherein after sending the binding data to a cloud server, the method further comprises:
receiving binding success data sent by the cloud server, wherein the binding success data carries the face feature information and the equipment information;
judging whether the received face feature information is matched with face feature information of the face image stored in advance or not and whether the received equipment information is matched with equipment information stored in advance or not;
and if the judgment result is yes, updating the state of the equipment from the unbound state to the bound state.
9. An apparatus for binding a device, comprising:
a first receiving module, configured to receive binding data sent by a device, where the binding data at least includes face feature information and device information of the device, and the face feature information is face feature information of a face image acquired by a camera while the device is in an unbound state;
and the binding module is used for binding the face feature information with the equipment information.
10. An apparatus for binding a device, comprising:
an acquisition module, configured to acquire a face image through a camera when the equipment is in an unbound state;
an extraction module, configured to extract face feature information of the face image and acquire equipment information of the equipment;
a composition module, configured to combine the face feature information and the equipment information into binding data;
and a sending module, configured to send the binding data to a cloud server, wherein the binding data is used for instructing the cloud server to bind the face feature information with the equipment information.
11. A computer-readable storage medium, in which a computer program is stored, wherein the computer program, when executed, is configured to perform the method of any one of claims 1 to 6 and 7 to 8.
12. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of any one of claims 1 to 6 and 7 to 8.
CN202010471135.1A 2020-05-28 2020-05-28 Equipment binding processing method and device Active CN111641640B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010471135.1A CN111641640B (en) 2020-05-28 2020-05-28 Equipment binding processing method and device

Publications (2)

Publication Number Publication Date
CN111641640A (en) 2020-09-08
CN111641640B (en) 2022-10-28

Family

ID=72331101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010471135.1A Active CN111641640B (en) 2020-05-28 2020-05-28 Equipment binding processing method and device

Country Status (1)

Country Link
CN (1) CN111641640B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102770780A (en) * 2009-12-10 2012-11-07 诺基亚公司 Method and apparatus for constructing a user-generated geolocation system
CN106682068A (en) * 2015-11-11 2017-05-17 三星电子株式会社 Methods and apparatuses for adaptively updating enrollment database for user authentication
CN105446293A (en) * 2015-11-27 2016-03-30 珠海格力电器股份有限公司 Binding method, device and system of intelligent household equipment and intelligent terminal
CN105607504A (en) * 2016-03-15 2016-05-25 美的集团股份有限公司 Intelligent home system, and intelligent home control apparatus and method
CN106503656A (en) * 2016-10-24 2017-03-15 厦门美图之家科技有限公司 A kind of image classification method, device and computing device
CN107093066A (en) * 2017-03-22 2017-08-25 阿里巴巴集团控股有限公司 Service implementation method and device
WO2019236284A1 (en) * 2018-06-03 2019-12-12 Apple Inc. Multiple enrollments in facial recognition
CN109635532A (en) * 2018-12-05 2019-04-16 上海碳蓝网络科技有限公司 A kind of picture pick-up device and its binding method
CN111159680A (en) * 2019-12-30 2020-05-15 云知声智能科技股份有限公司 Equipment binding method and device based on face recognition
CN111159001A (en) * 2019-12-31 2020-05-15 青岛海尔科技有限公司 Detection method and device for operating system and server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
赵新想 (Zhao Xinxiang): "Research and Improvement of a Density-Based Local Outlier Point Detection Algorithm", China Excellent Doctoral and Master's Theses Full-text Database (Master's), Information Science and Technology Series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115842720A (en) * 2021-08-19 2023-03-24 青岛海尔科技有限公司 Intelligent device binding method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN111641640B (en) 2022-10-28

Similar Documents

Publication Publication Date Title
CN110263936B (en) Horizontal federal learning method, device, equipment and computer storage medium
CN106161496B (en) The remote assistance method and device of terminal, system
US9774642B2 (en) Method and device for pushing multimedia resource and display terminal
US20180041893A1 (en) Method and system of multi-terminal mapping to a virtual sim card
CN111885115B (en) Device binding changing method and device
CN111885594B (en) Equipment binding method and device
CN104852990A (en) Information processing method and intelligent household control system
CN102710549B (en) To be established a communications link the method for relation, terminal and system by shooting
CN108076071B (en) Method for accessing broadcast television system
CN107135149B (en) Method and equipment for recommending social users
CN112738265A (en) Equipment binding method and device, storage medium and electronic device
CN104635543A (en) Method and device for carrying out management operation
EP3780550B1 (en) Information pushing method and device
CN111049711A (en) Device control right sharing method and device, computer device and storage medium
CN112564942A (en) Distribution network control method and device of Internet of things equipment, equipment and storage medium
CN105959188B (en) Method and device for controlling user terminal to be on-line
CN106453349A (en) An account number login method and apparatus
CN111641640B (en) Equipment binding processing method and device
CN115527090A (en) Model training method, device, server and storage medium
CN103916444A (en) Method for displaying number information through cloud model
CN115499816A (en) Information processing method, device and system based on near field communication signal
CN110908643A (en) Configuration method, device and system of software development kit
CN104331649A (en) Identity recognition system and method based on network connection
CN113395741B (en) Network distribution system, method and device of equipment, electronic equipment and storage medium
CN106105128A (en) The system and method that terminal, server, user identify

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant