CN112580553A - Switch control method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112580553A
CN112580553A (application CN202011558538.6A)
Authority
CN
China
Prior art keywords: target user; image; face; information; determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011558538.6A
Other languages
Chinese (zh)
Inventor
刘婵
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202011558538.6A priority Critical patent/CN112580553A/en
Publication of CN112580553A publication Critical patent/CN112580553A/en
Pending legal-status Critical Current

Classifications

    • G06V40/161 — Human faces: detection; localisation; normalisation
    • G06F18/253 — Pattern recognition: fusion techniques of extracted features
    • G06N3/045 — Neural networks: combinations of networks
    • G06N3/08 — Neural networks: learning methods
    • G06V10/25 — Image preprocessing: determination of region of interest [ROI]
    • G06V40/171 — Human faces: local features and components; geometrical relationships

Abstract

The present disclosure provides a switch control method and apparatus, a computer device, and a storage medium. The method includes: acquiring a gate channel image, the gate channel image being collected by an image acquisition device deployed at a first designated position of a gate; performing face detection on the gate channel image and determining at least one face image contained in it; in response to detecting that the gate channel image contains a plurality of face images, determining gazing area information corresponding to each of the plurality of face images; determining, from among the users corresponding to the plurality of face images, a target user gazing at a display device based on the gazing area information, and determining identity information of the target user based on the face image of the target user, wherein the display device is deployed at a second designated position of the gate; and controlling the opening and closing of the gate channel based on the identity information of the target user.

Description

Switch control method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision technologies, and in particular, to a method and an apparatus for controlling a switch, a computer device, and a storage medium.
Background
With the development of artificial intelligence technology, image recognition and comparison based on neural networks have been widely applied in daily life and bring great convenience to users, for example face-recognition unlocking of smartphones and face-recognition access control.
With the continuous development of computer vision technology, face recognition is expected to be applied further to public transportation payment scenarios, for example subway payment. In a subway payment scenario, the user enters the station by face recognition, and on exit the current subway fare is deducted by face recognition before the user leaves.
However, the spacing between users in a subway payment scenario is small, and the crowding is worse during peak hours. As a result, when face detection and recognition are performed on the user currently entering or exiting through the subway gate, the faces of adjacent users may also be detected, leading to wrongly recognized faces and wrong fare deductions.
Disclosure of Invention
The embodiment of the disclosure at least provides a switch control method, a switch control device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a switch control method, including:
acquiring a gate channel image, the gate channel image being collected by an image acquisition device deployed at a first designated position of a gate;
performing face detection on the gate channel image, and determining at least one face image contained in the gate channel image;
in response to detecting that the gate channel image contains a plurality of face images, determining gazing area information corresponding to each of the plurality of face images;
determining, from among the users corresponding to the plurality of face images, a target user gazing at a display device based on the gazing area information, and determining identity information of the target user based on the face image of the target user; wherein the display device is deployed at a second designated position of the gate;
and controlling the opening and closing of the gate channel based on the identity information of the target user.
With this method, the target user gazing at the display device can be identified in the gate channel image collected by the image acquisition device, and the opening and closing of the gate channel can then be controlled based on the identity information of that target user. Because the gazing area is taken into account when identifying the target user waiting to pass through the gate channel, recognition errors caused by users standing too close together can be avoided, recognition accuracy is improved when many users are waiting to pass, and accurate control of the opening and closing of the gate channel is achieved.
In a possible embodiment, determining the gazing area information corresponding to each of the plurality of face images includes:
for any one face image, performing the following process based on a pre-trained neural network to determine the gazing area information of the user corresponding to that face image:
extracting whole-face features of the face image to obtain a whole-face feature vector, and extracting eye local features of the face image to obtain an eye local feature vector; determining a position feature vector, the position feature vector indicating the position of the face image in the gate channel image;
determining, based on the eye local feature vector and the whole-face feature vector, a fusion feature vector that fuses the eye local features and whole-face features of the face image;
and determining the gazing area information of the user of the face image based on the fusion feature vector and the position feature vector.
With this method, when determining the user's gazing area, the whole-face features and eye local features of the face image are combined with the position of the face image in the gate channel image, so the determined gazing area information is more accurate.
In a possible implementation, extracting the whole-face features of any one face image to obtain a whole-face feature vector, and extracting the eye local features of that face image to obtain an eye local feature vector, includes:
performing feature extraction on the face image to obtain a whole-face feature map corresponding to the face image;
performing deep feature extraction on the whole-face feature map to obtain the whole-face feature vector, and determining an eye local feature map within the whole-face feature map based on the position of the user's eyes in the face image;
and performing deep feature extraction on the eye local feature map to obtain the eye local feature vector.
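The feature pipeline described above (whole-face feature map, eye-local crop, deep extraction into vectors, fusion, and a position vector) can be sketched in Python. This is a minimal illustrative sketch, not the patented network: the fixed random projections stand in for trained convolution and classifier weights, and all function names, tensor shapes, and the three-region output are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_feature_map(face_img: np.ndarray) -> np.ndarray:
    # Stand-in for the shallow convolutional stage: a fixed random
    # projection produces an 8x8x16 "whole-face feature map".
    flat = face_img.reshape(-1)
    w = rng.standard_normal((flat.size, 8 * 8 * 16)) * 0.01
    return np.tanh(flat @ w).reshape(8, 8, 16)

def global_pool(fmap: np.ndarray) -> np.ndarray:
    # Deep feature extraction approximated by global average pooling.
    return fmap.mean(axis=(0, 1))

def eye_crop(fmap: np.ndarray, eye_box):
    # Cut the eye-local feature map out of the whole-face feature map,
    # using the (assumed) eye position mapped into feature-map coordinates.
    r0, r1, c0, c1 = eye_box
    return fmap[r0:r1, c0:c1, :]

def gaze_region(face_img, eye_box, face_pos, n_regions=3):
    fmap = extract_feature_map(face_img)
    whole = global_pool(fmap)                    # whole-face feature vector
    eye = global_pool(eye_crop(fmap, eye_box))   # eye local feature vector
    fused = np.concatenate([whole, eye])         # fusion by concatenation (one option)
    feat = np.concatenate([fused, np.asarray(face_pos, float)])  # add position vector
    w_cls = rng.standard_normal((feat.size, n_regions)) * 0.01   # stand-in classifier
    return int(np.argmax(feat @ w_cls))          # index of the predicted gazing region
```

A trained model would learn `w` and `w_cls`; here they only demonstrate the data flow from image to gazing-region index.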
In a possible embodiment, the method further includes outputting gazing prompt information for prompting a user waiting to pass through the gate channel to gaze at the display device, where outputting the gazing prompt information includes playing it by voice or displaying it on the display device.
In one possible embodiment, the determining a target user who gazes at a display device includes:
and determining the user closest to the gate channel as the target user from the users watching the display device.
Because the user currently waiting to exit is generally the user closest to the gate channel, taking distance into account when determining the target user improves the accuracy of the determination.
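The selection rule above — among the users gazing at the display device, pick the one closest to the gate — can be sketched as a short helper. The `detections` field names (`user_id`, `gaze_region`, `distance_to_gate`) are hypothetical, chosen only for illustration.

```python
def pick_target_user(detections, display_region_id):
    """Return the user_id of the user gazing at the display region who is
    closest to the gate channel, or None if nobody is gazing at it."""
    gazers = [d for d in detections if d["gaze_region"] == display_region_id]
    if not gazers:
        return None
    return min(gazers, key=lambda d: d["distance_to_gate"])["user_id"]
```

In practice the distance could come from face-box size or a depth sensor; the rule itself is just an argmin over the gazing subset.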
In a possible embodiment, the method further comprises:
determining posture information of the users in the plurality of face images;
the determining a target user gazing at a display device includes:
and determining a user with posture information meeting preset conditions as the target user in the users watching the display device.
Determining the target user from both the gazing area information and the posture information improves the accuracy of the determination.
In a possible implementation manner, in the case that a plurality of target users are determined, the method further includes:
outputting warning information for prompting the plurality of users at the gate passage to gaze at the display device; the warning information is output by voice playback or on-screen display.
In a possible embodiment, in the case that the gate channel is an inbound channel, identifying the identity information of the target user based on the face image of the target user includes:
comparing the face image of the target user with face images of registered users stored in advance in a database, and taking the identity information of the successfully matched registered user as the identity information of the target user.
In this way, the target user currently entering the station is guaranteed to be a registered user, so that the fare deduction can be performed normally when the target user exits.
In a case where the gate channel is an inbound channel, after determining identity information of the target user, the method further comprises:
determining inbound information of the target user, and storing the inbound information in correspondence with the identity information of the target user; wherein the inbound information includes an inbound station identification and a face image of the target user, and at least one of an inbound time, a gate channel number, and an inbound line number.
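The inbound-record storage described above can be sketched as a small keyed store. The class and field names are assumptions made for illustration; a real deployment would use a persistent database rather than an in-memory dict.

```python
from dataclasses import dataclass, field
import time

@dataclass
class InboundRecord:
    user_id: str                  # identity information of the target user
    station_id: str               # inbound station identification
    face_image_ref: str           # reference to the stored face image
    inbound_time: float = field(default_factory=time.time)
    gate_channel_no: str = ""     # optional: gate channel number
    line_no: str = ""             # optional: inbound line number

class InboundStore:
    """Stores inbound information in correspondence with identity info."""
    def __init__(self):
        self._records = {}

    def save(self, rec: InboundRecord) -> None:
        self._records[rec.user_id] = rec

    def lookup(self, user_id: str):
        # Returns the InboundRecord, or None if the user never entered.
        return self._records.get(user_id)
```

Keying the record by identity is what later allows the exit gate to find the inbound station and compute the fare.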
By storing the target user's inbound information together with the target user's identity information, when the target user exits, the fee to be paid can be determined from the inbound information, and payment can be made based on the identity information.
In a possible implementation, in the case that the gate channel is an outbound channel, identifying the identity information of the target user based on the face image of the target user includes:
comparing the face image of the target user with the face images of inbound users stored in the inbound information base, and taking the identity information of the successfully matched inbound user as the identity information of the target user.
If the gate channel is an outbound channel, the target user must previously have passed through an inbound channel, so the inbound information base necessarily contains the target user's inbound information; that information can therefore be found quickly by comparing the target user's face image with the face images stored in the inbound information base.
When the gate channel is an outbound channel, controlling the opening and closing of the gate channel based on the identity information of the target user includes:
determining the fee to be paid based on the inbound information of the target user and the station information corresponding to the gate channel, and making the payment according to the fee to be paid based on the identity information of the target user;
and, after the payment is completed, controlling the gate channel to open.
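The exit-side flow above (determine the fare from the inbound and exit stations, charge it, then open the gate) can be sketched as follows. The fare table, station names, and `pay_fn` payment callback are hypothetical placeholders, not part of the patent.

```python
# Hypothetical fare rules keyed by (inbound station, exit station).
FARE_TABLE = {
    ("station_A", "station_C"): 4.0,
    ("station_A", "station_B"): 3.0,
}

def exit_gate_flow(inbound_station, exit_station, pay_fn) -> bool:
    """Return True if the gate should open: fare found and payment done."""
    fare = FARE_TABLE.get((inbound_station, exit_station))
    if fare is None:
        return False  # no fare rule for this trip: keep the gate closed
    if not pay_fn(fare):
        return False  # payment failed: keep the gate closed
    return True       # payment completed: open the gate channel
```

`pay_fn` stands in for charging the account bound to the target user's identity information.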
In a second aspect, an embodiment of the present disclosure further provides a switch control device, including:
the gate channel image acquisition module, configured to acquire a gate channel image collected by an image acquisition device deployed at a first designated position of a gate;
the first determining module is used for carrying out face detection on the gate channel image and determining at least one face image contained in the gate channel image;
the second determining module, configured to determine, in response to detecting that the gate channel image contains a plurality of face images, gazing area information corresponding to each of the plurality of face images;
a third determining module, configured to determine, from among users corresponding to the plurality of face images, a target user who watches the display device based on the watching area information, and determine, based on a face image of the target user, identity information of the target user; wherein the display device is deployed at a second designated position of the gate;
and the control module is used for controlling the switch of the gate channel based on the identity information of the target user.
In one possible embodiment, the second determining module, when determining the gaze area information corresponding to each of the plurality of face images, is configured to:
for any one face image, performing the following process based on a pre-trained neural network to determine the gazing area information of the user corresponding to that face image:
extracting whole-face features of the face image to obtain a whole-face feature vector, and extracting eye local features of the face image to obtain an eye local feature vector; determining a position feature vector, the position feature vector indicating the position of the face image in the gate channel image;
determining, based on the eye local feature vector and the whole-face feature vector, a fusion feature vector that fuses the eye local features and whole-face features of the face image;
and determining the gazing area information of the user of the face image based on the fusion feature vector and the position feature vector.
In one possible implementation, the second determining module, when extracting the whole-face features of any one face image to obtain a whole-face feature vector and extracting the eye local features of that face image to obtain an eye local feature vector, is configured to:
perform feature extraction on the face image to obtain a whole-face feature map corresponding to the face image;
perform deep feature extraction on the whole-face feature map to obtain the whole-face feature vector, and determine an eye local feature map within the whole-face feature map based on the position of the user's eyes in the face image;
and perform deep feature extraction on the eye local feature map to obtain the eye local feature vector.
In a possible implementation manner, the apparatus further includes a prompting module configured to:
output gazing prompt information for prompting a user waiting to pass through the gate channel to gaze at the display device, where outputting the gazing prompt information includes playing it by voice or displaying it on the display device.
In one possible embodiment, the third determining module, when determining the target user looking at the display device, is configured to:
and determining the user closest to the gate channel as the target user from the users watching the display device.
In a possible implementation, the second determining module is further configured to:
determine posture information of the users in the plurality of face images;
the third determination module, when determining a target user gazing at the display device, is configured to:
and determining a user with posture information meeting preset conditions as the target user in the users watching the display device.
In a possible implementation manner, in the case that a plurality of target users are determined, the prompting module is further configured to:
output warning information for prompting the plurality of users at the gate passage to gaze at the display device; the warning information is output by voice playback or on-screen display.
In a possible implementation manner, in a case that the gate channel is an inbound channel, the third determining module, when recognizing the identity information of the target user based on the face image of the target user, is configured to:
comparing the facial image of the target user with facial images of registered users stored in a database in advance, and taking the identity information of the registered users successfully compared as the identity information of the target user;
in a case where the gate channel is an inbound channel, the apparatus further comprises a storage module to:
after the identity information of the target user is determined, determining the inbound information of the target user, and correspondingly storing the inbound information and the identity information of the target user; wherein the inbound information includes an inbound site identification and a facial image of the target user, and at least one of an inbound time, a gate channel number, and an inbound line number.
In a possible implementation manner, in a case that the gate channel is an outbound channel, the third determining module, when identifying the identity information of the target user based on the facial image of the target user, is configured to:
comparing the face image of the target user with the face image of the inbound user stored in the inbound information base, and taking the identity information of the inbound user successfully compared as the identity information of the target user;
when the gate channel is an outbound channel, the control module, when controlling the switch of the gate channel based on the identity information of the target user, is configured to:
determining the fee to be paid based on the inbound information of the target user and the site information corresponding to the gate channel, and paying according to the fee to be paid based on the identity information of the target user;
and after the payment is completed, controlling the gate channel to be opened.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or of any possible implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the steps of the first aspect or of any possible implementation of the first aspect.
For the description of the effects of the switch control device, the computer device, and the computer-readable storage medium, reference is made to the description of the switch control method, and details are not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art may derive additional related drawings from them without inventive effort.
Fig. 1 shows a flow chart of a switch control method provided by an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a method for determining gazing area information of a user through a neural network in a switch control method provided by an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a specific implementation of determining gaze region information provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating an architecture of a switch control apparatus provided in an embodiment of the present disclosure;
fig. 5 shows a schematic structural diagram of a computer device 500 provided by the embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
In a subway payment scenario, the spacing between users is small, and the crowding is worse during peak hours. As a result, when face detection and recognition are performed on the user currently entering or exiting through the subway gate, the faces of adjacent users may also be detected, leading to wrongly recognized faces and wrong fare deductions.
Specifically, when a user enters the station, if another user's face is recognized by mistake, the entering user's inbound information is not stored in the inbound information base, so the entering user cannot exit normally later because no inbound information is found at exit time; meanwhile, for the user whose face was wrongly recognized, inbound information has already been stored in the inbound information base, so that user cannot enter normally when actually entering.
When a user exits the station, if another user's face is recognized by mistake, the exiting user's fare for the current trip is not deducted, so the exiting user cannot enter normally the next time; and for the user whose face was wrongly recognized, the fare is deducted even though that user has not exited, so that user cannot exit normally later.
Based on the above research, the present disclosure provides a switch control method and apparatus, a computer device, and a storage medium. A target user gazing at the display device is identified in the gate channel image collected by the image acquisition device, and the opening and closing of the gate channel are then controlled based on the identity information of that target user. Because the gazing area is taken into account when identifying the target user about to pass through the gate channel, recognition errors caused by users standing too close together can be avoided, recognition accuracy is improved when many users are waiting to pass, and accurate control of the gate channel is achieved.
The above drawbacks were identified by the inventor through practice and careful study. Accordingly, both the discovery of the above problems and the solutions proposed in the present disclosure should be regarded as the inventor's contribution to this disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, the switch control method disclosed in the embodiments of the present disclosure is first described in detail. The switch control method provided by the present disclosure is applicable to payment scenarios of public transportation, such as subway payment and distance-based bus fare payment.
Referring to fig. 1, a flowchart of a switch control method provided in an embodiment of the present disclosure is shown, where the method includes steps 101 to 105, where:
step 101, acquiring a gate channel image, wherein the gate channel image is acquired by an image acquisition device deployed at a first designated position of a gate.
Step 102, performing face detection on the gate channel image, and determining at least one face image contained in the gate channel image.
Step 103, in response to detecting that the gate channel image contains a plurality of face images, determining gazing area information corresponding to each of the plurality of face images.
Step 104, determining, from among the users corresponding to the plurality of face images, a target user gazing at a display device based on the gazing area information, and determining the identity information of the target user based on the face image of the target user; wherein the display device is deployed at a second designated position of the gate.
Step 105, controlling the opening and closing of the gate channel based on the identity information of the target user.
With this method, the target user gazing at the display device can be identified in the gate channel image collected by the image acquisition device, and the opening and closing of the gate channel can then be controlled based on the identity information of that target user. Because the gazing area is taken into account when identifying the target user waiting to pass through the gate channel, recognition errors caused by users standing too close together can be avoided, recognition accuracy is improved when many users are waiting to pass, and accurate control of the opening and closing of the gate channel is achieved.
The following is a detailed description of the above steps.
For step 101, the image acquisition device is configured to capture images of users passing through the gate channel. In one possible implementation, the device captures in real time; that is, it records a video of users passing through the gate channel. When step 101 is executed, the captured video may be sampled to obtain a plurality of sampled video frames, each of which serves as one gate channel image collected by the image acquisition device.
Alternatively, in another possible implementation, an infrared device may be disposed at the gate passageway; when the infrared device detects that a user is present at the position corresponding to the gate passageway, the image acquisition device is triggered to capture an image, and the gate passageway image collected by the image acquisition device is obtained.
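The two acquisition modes described above — continuous capture with frame sampling, and infrared-triggered capture — can be sketched as two small helpers. The function names, the sampling stride, and the `grab_frame` callback are illustrative assumptions.

```python
def sample_frames(video_frames, stride=5):
    # Continuous mode: sample every `stride`-th frame of the channel
    # video; each sampled frame serves as one gate channel image.
    return video_frames[::stride]

def capture_on_presence(ir_present: bool, grab_frame):
    # Infrared-triggered mode: only grab an image while the infrared
    # device reports that a user is present at the gate passageway.
    return grab_frame() if ir_present else None
```

The triggered mode avoids processing empty frames, at the cost of an extra sensor at the gate.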
For step 102, a face image here may be an image containing only the user's face, or an image containing the user's face together with part of the torso, for example an image containing the user's head plus the upper-body torso and upper limbs.
In a possible implementation, when performing face detection on the gate channel image, semantic segmentation may be performed on the gate channel image (for example, by a pre-trained semantic segmentation network), and the at least one face image is determined based on the semantic segmentation result.
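Given a segmentation result in which each detected face instance carries a distinct positive integer label, the face images can be cropped via per-instance bounding boxes. A minimal NumPy sketch; the label-mask convention (0 = background, one id per face) is an assumption about the segmentation output:

```python
import numpy as np

def face_boxes_from_mask(mask: np.ndarray):
    """Return one (row_min, row_max, col_min, col_max) bounding box per
    face instance in a label mask (0 = background)."""
    boxes = []
    for face_id in np.unique(mask):
        if face_id == 0:
            continue  # skip background
        rows, cols = np.nonzero(mask == face_id)
        boxes.append((rows.min(), rows.max(), cols.min(), cols.max()))
    return boxes
```

Each box can then be used to crop a face image out of the gate channel image for gazing-area estimation.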
For steps 103 and 104: in the case that the gate passageway image is detected to contain a single face image, that face image is taken directly as the face image of the target user, the identity information of the target user is determined, and step 105 is then executed.
In the case that the gate channel image contains a plurality of face images, the target user gazing at the display device is determined from among the users corresponding to the plurality of face images, and the identity information of the target user is determined based on the face image of the target user.
In a possible implementation manner, because the image acquisition device used to capture the gate channel image is generally a wide-angle camera, the face occupies a small proportion of the gate channel image. In addition, because the illumination conditions vary greatly, shadows sometimes fall across the face, darkening the most important region, the eyes, and reducing the accuracy of feature extraction.
Based on this, when determining the gazing region information from the face image, the contrast of the eye region may first be increased; for example, the contrast between the eye region and other regions may be increased by histogram equalization.
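A minimal sketch of histogram equalization on a cropped eye region, implemented directly in NumPy (a production system might instead call an image library's built-in routine):

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image (e.g. a cropped eye region)
    so that dark, shadowed pixels are spread over the full intensity range.
    Assumes the image is not perfectly uniform."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Classic equalization mapping: rescale the CDF to the range [0, 255].
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

# A dark 2x2 "eye region": after equalization its values span the full range.
eye = np.array([[10, 10], [20, 30]], dtype=np.uint8)
out = equalize_histogram(eye)
print(out.min(), out.max())   # -> 0 255
```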
Another problem caused by the wide-angle camera is severe distortion at the image edges, which markedly alters the appearance of the face and eyes and strongly interferes with the judgment of the gazing area; distortion correction may therefore be performed on the acquired image.
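Full image undistortion requires the camera's calibrated intrinsic and distortion parameters, which the text does not specify. As a hedged illustration only, the sketch below undistorts individual normalized image points under a one-parameter division model; both the model and the coefficient `k` are assumptions, not part of the original method:

```python
import numpy as np

def undistort_points(pts, k=-0.2):
    """Undistort normalized image points under a one-parameter division model:
    p_undistorted = p_distorted / (1 + k * r^2), where r is the distance of
    the distorted point from the image center."""
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts / (1.0 + k * r2)

pts = np.array([[0.0, 0.0],    # image center: unaffected by radial distortion
                [0.5, 0.5]])   # off-center point: moved outward for k < 0
print(undistort_points(pts))
```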
In a possible implementation manner, for any face image, the gazing area information of the user in that face image may be determined based on a pre-trained neural network; specifically, the neural network may perform the method shown in fig. 2 to determine the gazing area information of the user, which includes the following steps:
step 201, extracting whole face features of any one face image to obtain whole face feature vectors, and extracting eye local features of any one face image to obtain eye local feature vectors; and determining a position feature vector, wherein the position feature vector is used for indicating the position of any face image in the gate channel image.
In a possible implementation manner, when extracting the whole-face features of any one of the face images to obtain a whole-face feature vector, feature extraction (here, shallow feature extraction, such as convolution) may first be performed on that face image to obtain a corresponding whole-face feature map, and deep feature extraction may then be performed on the whole-face feature map to obtain the whole-face feature vector.
Here, the extraction depth of the shallow feature extraction process is smaller than the extraction depth of the deep feature extraction; the extraction depth can be understood as the number of times of feature extraction, and the two feature extraction processes can be performed in different ways.
In a possible implementation manner, when extracting the eye local features of any one of the face images to obtain the eye local feature vector, the eye local feature map within the whole-face feature map may be determined based on the position of the user's eyes in that face image, and deep feature extraction may then be performed on the eye local feature map to obtain the eye local feature vector.
Here, the specific implementation method for performing deep layer feature extraction on the eye local feature map may be the same as the way of performing deep layer feature extraction on the whole face feature map to obtain the whole face feature vector, and the specific method for performing deep layer feature extraction processing is not limited in the present application.
Step 202, determining a fusion feature vector fusing the eye local feature and the whole face feature of any human face image based on the eye local feature vector and the whole face feature vector.
In one possible embodiment, when determining the fused feature vector based on the eye local feature vector and the whole face feature vector, for example, the two feature vectors may be concatenated in a preset order.
Illustratively, if the eye local feature vector is (a, b, c) and the whole face feature vector is (e, f, g), the fused feature vector is (a, b, c, e, f, g).
Step 203, determining the gazing area information of the user of any face image based on the fusion feature vector and the position feature vector.
Here, the fusion feature vector and the position feature vector may be spliced to obtain a spliced feature vector, and the spliced feature vector may then be processed by a classifier to determine the gazing area information.
The gaze region information may refer to a region within the display device at which the user gazes or a region outside the display device at which the user gazes.
For example, a specific execution flow for determining the gaze area information may be as shown in fig. 3.
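The flow of steps 201 to 203 can be sketched as follows; the feature vectors, their dimensions, and the classifier weights are random placeholders standing in for the outputs of a trained neural network:

```python
import numpy as np

rng = np.random.default_rng(0)
eye_vec  = rng.standard_normal(8)    # eye local feature vector   (step 201)
face_vec = rng.standard_normal(16)   # whole-face feature vector  (step 201)
pos_vec  = rng.standard_normal(4)    # position feature vector    (step 201)

fused   = np.concatenate([eye_vec, face_vec])   # step 202: fusion by splicing
spliced = np.concatenate([fused, pos_vec])      # step 203: splice in position

num_regions = 3                      # e.g. regions on / left of / right of screen
W = rng.standard_normal((num_regions, spliced.size))   # placeholder classifier head
logits = W @ spliced
probs = np.exp(logits - logits.max())
probs /= probs.sum()                 # softmax over candidate gazing regions
print(int(probs.argmax()))           # index of the predicted gazing region
```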
In a possible implementation manner, before the gate channel image is acquired, a gazing prompt message may be output to prompt the user to gaze at the display device. Outputting the gazing prompt message includes playing it by voice or displaying it on the display device; for example, a voice prompt of "please watch the screen" may be played.
In one possible embodiment, when there are a plurality of determined users looking at the display device, the target user may be determined based on any one of the following methods:
the first method,
And determining the user closest to the gate channel as the target user from the users looking at the display device.
In one possible embodiment, when determining the user closest to the gate channel, the depth information of the images of the users looking at the display device may be determined, and the target user may then be selected from those users based on the depth information of the user images.
Here, the depth information of the user image may indicate a distance between a user corresponding to the user image and the gate passageway. For any user image, when determining the depth information of the user image, the depth information of each pixel point of the user image may be determined first, and then the mean value of the depth information of the pixel points is used as the depth information of the user image.
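A minimal sketch of this depth-based selection, assuming a per-pixel depth map is available for each user's image region (the data layout is illustrative):

```python
import numpy as np

def nearest_user(user_depth_maps):
    """Pick the user whose image has the smallest mean pixel depth, i.e. the
    user closest to the gate channel."""
    mean_depths = {uid: float(np.mean(d)) for uid, d in user_depth_maps.items()}
    return min(mean_depths, key=mean_depths.get)

depths = {
    "user_a": np.array([[2.0, 2.2], [2.1, 2.3]]),   # mean depth ~2.15
    "user_b": np.array([[1.0, 1.2], [1.1, 1.3]]),   # mean depth ~1.15
}
print(nearest_user(depths))   # -> user_b
```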
In another possible implementation, when determining the user closest to the gate passageway, size information of face images of users watching the display device may be determined, and then the user corresponding to the face image with the largest size may be used as the target user closest to the gate passageway according to the size information.
Here, the size information of the plurality of face images may be areas of the face images; in another possible embodiment, the aspect ratio of different face images may be the same, for example, 1:1, and in this embodiment, the size information of the face image may be the length of a diagonal line of the face image.
In this manner, the closer a user is to the gate channel, the larger that user's face image; therefore, based on the sizes of the face images, the target user closest to the gate channel can be quickly identified among the users looking at the display device.
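A minimal sketch of this size-based selection, assuming face detection yields a bounding box `(x1, y1, x2, y2)` per gazing user:

```python
def nearest_by_face_size(face_boxes):
    """Among users gazing at the display device, pick the one whose face
    bounding box has the largest area, on the assumption that the closest
    user's face appears largest in the gate channel image."""
    def area(box):
        x1, y1, x2, y2 = box
        return (x2 - x1) * (y2 - y1)
    return max(face_boxes, key=lambda uid: area(face_boxes[uid]))

boxes = {"user_a": (10, 10, 60, 70),     # area 3000
         "user_b": (100, 20, 130, 50)}   # area 900
print(nearest_by_face_size(boxes))       # -> user_a
```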
In another possible embodiment, the target user closest to the gate channel may also be determined by a distance measurement tool installed at a preset position of the gate channel.

The distance measurement tool may be, for example, a ranging radar, which may emit a plurality of pulse beams and determine the distance of each user from the gate passage based on the reflections of those beams.

Specifically, after the ranging radar determines the distance of each user in the gate channel image from the gate channel, the correspondence between that position information and each user in the gate channel image can be established based on the relative positions at which the ranging radar and the image acquisition device are installed, after which the target user closest to the gate channel among the users watching the display device can be determined.

After the target user closest to the gate channel is determined, the position of the target user in the image acquired by the image acquisition device can be determined based on the relative positions at which the ranging radar and the image acquisition device are installed; that is, the face image of the target user in the gate channel image can be determined.
In the case where only one user watches the display device, that user may directly be taken as the target user; in the case where a plurality of users watch the display device, the target user may be the user closest to the gate channel among them.
The second method,
Determining the posture information of the users in the plurality of face images, and then, among the users watching the display device, determining the user whose posture information meets the preset condition as the target user.
In one possible embodiment, the pose information may include at least one of the following:
expression information, gesture action information, limb action information, face orientation information.
It should be noted that, in the case that the posture information includes the limb movement information, the gate passage image acquired in step 101 may be a plurality of consecutive video frames. The face orientation information may be the orientation of the user's face relative to the gate passageway; for example, if the user directly faces the gate passageway, the face orientation information is 0 degrees, and if the user stands side-on to the gate passageway, the face orientation information may be 90 degrees or -90 degrees.
In one possible implementation, for any user image, when determining the posture information of the user corresponding to the user image, the user image may be input to a posture information detection network, and the posture information detection network may directly output the posture information of the user in the user image.
In one possible implementation, when a target user whose posture information satisfies a preset condition is determined from users looking at the display device, a user presenting a first preset target posture may be determined as the target user based on posture information of users corresponding to the plurality of user images, respectively.
The first preset target gesture may be, for example, a preset limb action (e.g., blinking or waving), a preset expression (e.g., smiling), a preset hand gesture (e.g., a thumbs-up or a V sign), or a preset range of face orientations.
It should be noted that if there is only one user watching the display device, that user may be directly taken as the target user; if there are a plurality of users watching the display device, a user whose posture information satisfies the preset condition may be selected from them as the target user. Alternatively, even if there is only one user watching the display device, that user is not taken as the target user if the user's posture information does not satisfy the preset condition.
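The pose-based filtering of the second method can be sketched as follows; the pose labels are illustrative stand-ins for the output of a posture information detection network:

```python
def select_target_users(gazing_users, pose_info, target_pose="thumbs_up"):
    """Among users gazing at the display device, return those whose detected
    pose matches a preset target pose (e.g. a thumbs-up)."""
    return [u for u in gazing_users if pose_info.get(u) == target_pose]

poses = {"user_a": "thumbs_up", "user_b": "neutral"}
print(select_target_users(["user_a", "user_b"], poses))   # -> ['user_a']
```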
In a possible implementation, in the case that a plurality of target users are determined, warning information can be output to prompt that a plurality of users at the gate channel are watching the display device; the output warning information includes voice-played warning information or warning information displayed on a screen, where the screen may be the display device.
In one possible implementation, after the target user is determined, the identity information of the target user may be determined based on a facial image of the target user.
Specifically, the identity information of the target user is identified in different ways at entry and at exit. In the case that the gate channel is an inbound channel, the face image of the target user may be compared with the face images of registered users stored in advance in a database, and the identity information of the successfully matched registered user is taken as the identity information of the target user. In the case that the gate channel is an outbound channel, the face image of the target user is compared with the face images of inbound users stored in the inbound information base, and the identity information of the successfully matched inbound user is taken as the identity information of the target user.
Here, whether the gate channel is an inbound channel or an outbound channel may be preset.
Under the condition that the gate channel is an inbound channel, comparing the face image of the target user with the face images of registered users stored in advance in the database ensures that the target user is a registered user, so that the fee deduction procedure can be executed normally when the target user exits. Under the condition that the gate channel is an outbound channel, the database storing registered user information is necessarily larger than the inbound information base, so comparing the face image of the target user with the face images of inbound users stored in the inbound information base allows the identity information and inbound information of the target user to be determined quickly, improving fee deduction efficiency.
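Both the inbound and outbound comparisons reduce to matching a face feature vector against a stored gallery. A hedged sketch using cosine similarity (the embeddings and the threshold are illustrative; a real system would use features from a trained face recognition network):

```python
import numpy as np

def identify(face_vec, database, threshold=0.8):
    """Compare a target user's face feature vector against stored users'
    feature vectors by cosine similarity; return the best-matching identity,
    or None if no match clears the threshold."""
    best_id, best_sim = None, threshold
    for user_id, ref in database.items():
        sim = float(np.dot(face_vec, ref) /
                    (np.linalg.norm(face_vec) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id

db = {"alice": np.array([1.0, 0.0, 0.0]), "bob": np.array([0.0, 1.0, 0.0])}
probe = np.array([0.95, 0.05, 0.0])
print(identify(probe, db))   # -> alice
```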
In a possible implementation manner, in the case that the gate channel is an inbound channel, after the identity information of the target user is determined, the inbound information of the target user may also be determined, and the inbound information may be stored in correspondence with the identity information of the target user, and in particular, may be stored in the inbound information base.
The inbound information comprises inbound site identification and a facial image of a target user, and in addition, the inbound information also comprises at least one of inbound time, a gate channel number and an inbound line number.
For step 105,
In the case that the gate channel is an inbound channel, after the identity information of the target user is determined, and the target user is determined to be a registered user, the gate channel may be directly controlled to be opened.
When the gate channel is an outbound channel, after the identity information of the target user is determined, the fee to be paid can be determined based on the inbound information of the target user and the site information corresponding to the gate channel; payment is then made according to the fee to be paid based on the identity information of the target user, and the gate channel is controlled to open after payment is completed.
Here, since the target user is a registered user, the target user has a corresponding payment account. After the identity information of the target user is determined, the corresponding payment account can be determined directly; and since the identity information was determined from the face image of the target user, the fee to be paid can be deducted directly from that payment account.
In one possible embodiment, in the case that the gate channel is an outbound channel, the inbound information of the target subscriber may be deleted from the inbound information base after controlling the gate channel to open and deducting the fees to be paid, so as to realize timely updating of the inbound information base.
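The outbound flow described above, including fee deduction and the timely update of the inbound information base, can be sketched as follows; the fare table, station names, and account balances are illustrative placeholders:

```python
# Illustrative fare table keyed by (inbound station, outbound station).
FARES = {("station_a", "station_c"): 4, ("station_a", "station_b"): 2}

def process_outbound(user_id, inbound_info, outbound_station, accounts):
    """Look up the fare from the stored inbound station to the current
    (outbound) station, deduct it from the identified user's payment
    account, delete the user's inbound record, and report whether the
    gate channel may be opened."""
    entry_station = inbound_info[user_id]["station"]
    fee = FARES[(entry_station, outbound_station)]
    if accounts[user_id] < fee:
        return False                  # payment failed: keep gate closed
    accounts[user_id] -= fee          # deduct the fee to be paid
    del inbound_info[user_id]         # update the inbound information base
    return True                       # payment completed: open gate channel

inbound = {"alice": {"station": "station_a"}}
accounts = {"alice": 10}
print(process_outbound("alice", inbound, "station_c", accounts),
      accounts["alice"])   # -> True 6
```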
It will be understood by those skilled in the art that in the method of the present invention, the order of writing the steps does not imply a strict order of execution and any limitations on the implementation, and the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a switch control device corresponding to the switch control method, and as the principle of solving the problem of the device in the embodiment of the present disclosure is similar to the switch control method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 4, there is shown a schematic structural diagram of a switch control apparatus according to an embodiment of the present disclosure, the apparatus includes: an acquisition module 401, a first determination module 402, a second determination module 403, a third determination module 404, and a control module 405; wherein:
the acquiring module 401 is configured to acquire a gate channel image, where the gate channel image is acquired by an image acquisition device deployed at a first designated location of a gate;
a first determining module 402, configured to perform face detection on the gate channel image, and determine at least one face image included in the gate channel image;
a second determining module 403, configured to determine, in response to detecting that the gate channel image includes multiple face images, gaze area information corresponding to each of the multiple face images;
a third determining module 404, configured to determine, from among users corresponding to the plurality of face images, a target user who gazes at the display device based on the gazing area information, and determine, based on a face image of the target user, identity information of the target user; wherein the display device is deployed at a second designated position of the gate;
and a control module 405, configured to control the switch of the gate channel based on the identity information of the target user.
In one possible implementation, the second determining module 403, when determining the gaze area information corresponding to each of the plurality of face images, is configured to:
aiming at any face image, the following processes are executed based on a pre-trained neural network to determine the gazing area information of a user corresponding to the face image respectively:
extracting whole face features of any one face image to obtain whole face feature vectors, and extracting eye local features of any one face image to obtain eye local feature vectors; determining a position feature vector, wherein the position feature vector is used for indicating the position of any face image in the gate channel image;
determining a fusion feature vector fusing eye local features and whole face features of any human face image based on the eye local feature vector and the whole face feature vector;
and determining the gazing area information of the user of any human face image based on the fusion feature vector and the position feature vector.
In a possible implementation manner, the second determining module 403, when extracting the whole-face features of any one of the face images to obtain a whole-face feature vector and extracting the eye local features of any one of the face images to obtain an eye local feature vector, is configured to:
extracting the features of any human face image to obtain a whole face feature map corresponding to any human face image;
carrying out deep feature extraction on the whole face feature map to obtain the whole face feature vector; determining an eye local feature map in the whole face feature map based on the position of the eyes of the user of any one face image in any one face image;
and carrying out deep feature extraction on the eye local feature map to obtain the eye local feature vector.
In a possible implementation, the apparatus further includes a prompt module 406 configured to:
outputting gazing prompt information, wherein the gazing prompt information is used for prompting a user waiting to pass through the gate channel to watch the display device, wherein the outputting gazing prompt information comprises voice playing gazing prompt information or the display device displays the gazing prompt information.
In a possible implementation, the third determining module 404, when determining the target user gazing at the display device, is configured to:
and determining the user closest to the gate channel as the target user from the users watching the display device.
In a possible implementation, the second determining module 403 is further configured to:
determining pose information of users in the plurality of face images;
the third determining module 404, when determining the target user gazing at the display device, is configured to:
and determining a user with posture information meeting preset conditions as the target user in the users watching the display device.
In a possible implementation manner, in the case that there are a plurality of determined target users, the prompting module 406 is further configured to:
outputting warning information for prompting a plurality of users at a gate passage to watch the display device; the output warning information comprises voice playing warning information or screen display warning information.
In a possible implementation manner, in the case that the gate channel is an inbound channel, the third determining module 404, when identifying the identity information of the target user based on the facial image of the target user, is configured to:
comparing the facial image of the target user with facial images of registered users stored in a database in advance, and taking the identity information of the registered users successfully compared as the identity information of the target user;
in the case where the gate channel is an inbound channel, the apparatus further comprises a memory module 407 for:
after the identity information of the target user is determined, determining the inbound information of the target user, and correspondingly storing the inbound information and the identity information of the target user; wherein the inbound information includes an inbound site identification and a facial image of the target user, and at least one of an inbound time, a gate channel number, and an inbound line number.
In a possible implementation manner, in the case that the gate channel is an outbound channel, the third determining module 404, when identifying the identity information of the target user based on the facial image of the target user, is configured to:
comparing the face image of the target user with the face image of the inbound user stored in the inbound information base, and taking the identity information of the inbound user successfully compared as the identity information of the target user;
in the case that the gate channel is an outbound channel, the control module 405, when controlling the switch of the gate channel based on the identity information of the target user, is configured to:
determining the fee to be paid based on the inbound information of the target user and the site information corresponding to the gate channel, and paying according to the fee to be paid based on the identity information of the target user;
and after the payment is completed, controlling the gate channel to be opened.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Through the device, the target user gazing at the display device can be identified in the gate channel image collected by the image acquisition device, and the switch of the gate channel is then controlled based on the identity information of the target user. In this way, when identifying the target user waiting to pass through the gate channel, the gazing area is taken into account, identification errors caused by users standing too close together can be avoided, identification accuracy is improved when many users are waiting to pass through the gate channel, and accurate control of the gate channel switch is achieved.
Based on the same technical concept, the embodiment of the disclosure also provides computer equipment. Referring to fig. 5, a schematic structural diagram of a computer device 500 provided in the embodiment of the present disclosure includes a processor 501, a memory 502, and a bus 503. The memory 502 is used for storing execution instructions and includes an internal memory 5021 and an external memory 5022. The internal memory 5021 temporarily stores operation data in the processor 501 and data exchanged with the external memory 5022, such as a hard disk; the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the computer device 500 operates, the processor 501 communicates with the memory 502 through the bus 503, so that the processor 501 executes the following instructions:
acquiring a gate channel image, wherein the gate channel image is acquired by an image acquisition device deployed at a first designated position of a gate;
carrying out face detection on the gate channel image, and determining at least one face image contained in the gate channel image;
determining gazing area information corresponding to a plurality of face images respectively in response to the fact that the gate channel images contain the plurality of face images;
determining a target user watching a display device based on the watching region information from users respectively corresponding to the plurality of face images, and determining identity information of the target user based on the face image of the target user; wherein the display device is deployed at a second designated position of the gate;
and controlling the switch of the gate channel based on the identity information of the target user.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the switch control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the switch control method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A switch control method, comprising:
acquiring a gate channel image, wherein the gate channel image is acquired by an image acquisition device deployed at a first designated position of a gate;
carrying out face detection on the gate channel image, and determining at least one face image contained in the gate channel image;
determining gazing area information corresponding to a plurality of face images respectively in response to the fact that the gate channel images contain the plurality of face images;
determining a target user watching a display device based on the watching region information from users respectively corresponding to the plurality of face images, and determining identity information of the target user based on the face image of the target user; wherein the display device is deployed at a second designated position of the gate;
and controlling the switch of the gate channel based on the identity information of the target user.
2. The method according to claim 1, wherein the determining of the gazing area information corresponding to each of the plurality of face images comprises:
for any one of the face images, performing the following processing based on a pre-trained neural network to determine the gazing area information of the user corresponding to that face image:
extracting whole-face features of the face image to obtain a whole-face feature vector, and extracting eye local features of the face image to obtain an eye local feature vector; and determining a position feature vector indicating the position of the face image within the gate channel image;
determining, based on the eye local feature vector and the whole-face feature vector, a fusion feature vector that fuses the eye local features and the whole-face features of the face image; and
determining the gazing area information of the user of the face image based on the fusion feature vector and the position feature vector.
3. The method according to claim 2, wherein the extracting of the whole-face features of the face image to obtain the whole-face feature vector and the extracting of the eye local features of the face image to obtain the eye local feature vector comprise:
performing feature extraction on the face image to obtain a whole-face feature map corresponding to the face image;
performing deep feature extraction on the whole-face feature map to obtain the whole-face feature vector, and determining an eye local feature map within the whole-face feature map based on the position of the eyes of the user in the face image; and
performing deep feature extraction on the eye local feature map to obtain the eye local feature vector.
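The feature flow of claims 2 and 3 (whole-face vector, eye local vector, fusion, then gaze-area prediction from the fusion and position vectors) can be sketched as below. A real system would use a convolutional network; here plain Python lists stand in for feature maps, and every function name is an illustrative assumption:

```python
# Toy sketch of the claims 2-3 feature flow. Lists stand in for feature maps
# and vectors; the "network" stages are hypothetical stubs, not the patented
# architecture.

def extract_whole_face_vector(face_pixels):
    """Deep feature extraction over the whole-face feature map,
    stubbed here as a mean over pixel values."""
    return [sum(face_pixels) / len(face_pixels)]

def extract_eye_vector(face_pixels, eye_slice):
    """Crop the eye region out of the whole-face map by position,
    then extract a local feature vector from it."""
    eyes = face_pixels[eye_slice]
    return [sum(eyes) / len(eyes)]

def fuse(whole_face_vec, eye_vec):
    """Fusion feature vector combining whole-face and eye-local
    features (concatenation is one common choice)."""
    return whole_face_vec + eye_vec

def gaze_area(fusion_vec, position_vec, num_areas=2):
    """Map the fusion vector plus the face's position in the channel
    image to a gazing-area index (toy linear scoring)."""
    score = sum(fusion_vec) + sum(position_vec)
    return int(score) % num_areas
```

Note how the position vector enters only at the final step, mirroring the claim: the same facial appearance maps to different gaze targets depending on where the face sits in the channel image.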
4. The method according to any one of claims 1 to 3, further comprising:
outputting gazing prompt information for prompting a user waiting to pass through the gate channel to gaze at the display device, wherein outputting the gazing prompt information comprises playing the gazing prompt information by voice or displaying the gazing prompt information on the display device.
5. The method of claim 1, wherein the determining of the target user gazing at the display device comprises:
determining, from among the users gazing at the display device, the user closest to the gate channel as the target user.
6. The method of claim 1, further comprising:
determining posture information of the users in the plurality of face images;
wherein the determining of the target user gazing at the display device comprises:
determining, from among the users gazing at the display device, a user whose posture information meets a preset condition as the target user.
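The two selection policies of claims 5 and 6 amount to a minimum-distance pick and a posture filter over the users already known to be gazing at the display. A minimal sketch, assuming each candidate carries hypothetical `distance` and `pose_ok` annotations produced upstream:

```python
# Sketch of the target-user selection policies of claims 5-6. The 'distance'
# and 'pose_ok' fields are illustrative assumptions, not part of the patent.

def select_nearest(gazing_users):
    """Claim 5: among users gazing at the display, pick the one
    closest to the gate channel."""
    return min(gazing_users, key=lambda u: u["distance"], default=None)

def select_by_pose(gazing_users):
    """Claim 6: among users gazing at the display, pick a user whose
    posture information meets a preset condition (here, a flag)."""
    for u in gazing_users:
        if u.get("pose_ok"):
            return u
    return None
```

Both helpers return `None` when no candidate qualifies, which maps naturally onto keeping the gate closed.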
7. The method according to any one of claims 1 to 6, wherein, in a case where a plurality of target users are determined, the method further comprises:
outputting warning information for prompting the plurality of users at the gate channel to gaze at the display device, wherein outputting the warning information comprises playing the warning information by voice or displaying the warning information on a screen.
8. The method according to any one of claims 1 to 7, wherein, in a case where the gate channel is an inbound channel, the determining of the identity information of the target user based on the face image of the target user comprises:
comparing the face image of the target user with face images of registered users stored in advance in a database, and taking the identity information of a successfully matched registered user as the identity information of the target user;
and wherein, in a case where the gate channel is an inbound channel, after the identity information of the target user is determined, the method further comprises:
determining inbound information of the target user, and storing the inbound information in correspondence with the identity information of the target user, wherein the inbound information includes an inbound station identifier and the face image of the target user, and at least one of an inbound time, a gate channel number, and an inbound line number.
9. The method according to any one of claims 1 to 8, wherein, in a case where the gate channel is an outbound channel, the determining of the identity information of the target user based on the face image of the target user comprises:
comparing the face image of the target user with face images of inbound users stored in an inbound information base, and taking the identity information of a successfully matched inbound user as the identity information of the target user;
and wherein, when the gate channel is an outbound channel, the controlling of the switch of the gate channel based on the identity information of the target user comprises:
determining a fee to be paid based on the inbound information of the target user and station information corresponding to the gate channel, and completing payment of the fee based on the identity information of the target user; and
after the payment is completed, controlling the gate channel to open.
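The outbound flow of claim 9 (look up the stored inbound info, compute the fee for the travelled segment, charge it against the identified user, then open the gate) can be sketched as follows. The fare table, station identifiers, and `pay` callback are all illustrative assumptions:

```python
# Sketch of the claim-9 outbound-gate fare step. The fare table and station
# identifiers are made up for illustration; a real deployment would query a
# fare system and payment backend.

FARE_TABLE = {("S1", "S3"): 4, ("S1", "S2"): 2}  # (inbound, outbound) -> fee

def fee_to_pay(inbound_station, outbound_station):
    """Look up the fee for the travelled segment, trying both directions."""
    key = (inbound_station, outbound_station)
    return FARE_TABLE.get(key,
                          FARE_TABLE.get((outbound_station, inbound_station)))

def exit_gate(inbound_info, outbound_station, pay):
    """Compute the fee from the stored inbound info and the current station,
    charge it via the user's identity, then open the gate on success."""
    fee = fee_to_pay(inbound_info["station"], outbound_station)
    if fee is None:
        return "closed"
    paid = pay(inbound_info["identity"], fee)
    return "open" if paid else "closed"
```

Keeping the gate closed on a missing fare entry or a failed charge is the conservative default the claim implies: the gate opens only after payment is completed.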
10. A switch control device, comprising:
an image acquisition module, configured to acquire a gate channel image, the gate channel image being acquired by an image acquisition device deployed at a first designated position of a gate;
a first determining module, configured to perform face detection on the gate channel image and determine at least one face image contained in the gate channel image;
a second determining module, configured to determine, in response to determining that the gate channel image contains a plurality of face images, gazing area information corresponding to each of the plurality of face images;
a third determining module, configured to determine, from among the users corresponding to the plurality of face images, a target user gazing at a display device based on the gazing area information, and to determine identity information of the target user based on the face image of the target user, wherein the display device is deployed at a second designated position of the gate; and
a control module, configured to control the switch of the gate channel based on the identity information of the target user.
11. A computer device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, wherein, when the computer device is running, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the switch control method according to any one of claims 1 to 9.
12. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the switch control method according to any one of claims 1 to 9.
CN202011558538.6A 2020-12-25 2020-12-25 Switch control method, device, computer equipment and storage medium Pending CN112580553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011558538.6A CN112580553A (en) 2020-12-25 2020-12-25 Switch control method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112580553A (en) 2021-03-30

Family

ID=75140160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011558538.6A Pending CN112580553A (en) 2020-12-25 2020-12-25 Switch control method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112580553A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393243A (en) * 2021-05-11 2021-09-14 北京京东振世信息技术有限公司 Payment information processing method and device, electronic equipment and computer readable medium
CN113626797A (en) * 2021-08-09 2021-11-09 杭州海康威视数字技术股份有限公司 Method for reducing false triggering of multi-channel gate and gate system
CN113673426A (en) * 2021-08-20 2021-11-19 支付宝(杭州)信息技术有限公司 Target user determination method and device
CN114613056A (en) * 2022-03-17 2022-06-10 深圳创维-Rgb电子有限公司 Gate control method, device, equipment and computer readable storage medium
CN114613056B (en) * 2022-03-17 2024-03-12 深圳创维-Rgb电子有限公司 Gate control method, device, equipment and computer readable storage medium
CN117079378A (en) * 2023-10-16 2023-11-17 八维通科技有限公司 Multi-face passing gate processing method and system in site traffic and computer program medium
CN117079378B (en) * 2023-10-16 2024-01-09 八维通科技有限公司 Multi-face passing gate processing method and system in site traffic and computer program medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102610035A (en) * 2012-04-05 2012-07-25 广州广电运通金融电子股份有限公司 Financial self-service device and anti-peeping system and anti-peeping method thereof
JP2016053896A (en) * 2014-09-04 2016-04-14 グローリー株式会社 Gate system and method of controlling passage of gate
CN108615159A (en) * 2018-05-03 2018-10-02 百度在线网络技术(北京)有限公司 Access control method and device based on blinkpunkt detection
CN109190539A (en) * 2018-08-24 2019-01-11 阿里巴巴集团控股有限公司 Face identification method and device
CN110059666A (en) * 2019-04-29 2019-07-26 北京市商汤科技开发有限公司 A kind of attention detection method and device
CN110570200A (en) * 2019-08-16 2019-12-13 阿里巴巴集团控股有限公司 payment method and device
CN110751767A (en) * 2019-10-31 2020-02-04 中国联合网络通信集团有限公司 Image processing method and device
CN110969744A (en) * 2018-09-26 2020-04-07 江门浦泰轨道交通设备有限公司 Safety monitoring method based on face recognition
CN110992546A (en) * 2019-12-02 2020-04-10 杭州磊盛智能科技有限公司 Face recognition gate and anti-trailing method thereof
CN111460413A (en) * 2019-01-18 2020-07-28 阿里巴巴集团控股有限公司 Identity recognition system, method and device, electronic equipment and storage medium
US20200294060A1 (en) * 2019-08-16 2020-09-17 Alibaba Group Holding Limited Payment method and device


Similar Documents

Publication Publication Date Title
CN112580553A (en) Switch control method, device, computer equipment and storage medium
CN112258193B (en) Payment method and device
EP2620896B1 (en) System And Method For Face Capture And Matching
US20200279120A1 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
CN113366487A (en) Operation determination method and device based on expression group and electronic equipment
KR100580626B1 (en) Face detection method and apparatus and security system employing the same
EP2336949B1 (en) Apparatus and method for registering plurality of facial images for face recognition
CN112115866A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN107424266A (en) The method and apparatus of recognition of face unblock
CN111539740B (en) Payment method, device and equipment
CN105528703A (en) Method and system for implementing payment verification via expression
CN108197585A (en) Recognition algorithms and device
CN104063709B (en) Sight line detector and method, image capture apparatus and its control method
KR20220042301A (en) Image detection method and related devices, devices, storage media, computer programs
CN111597910A (en) Face recognition method, face recognition device, terminal equipment and medium
CN106471440A (en) Eye tracking based on efficient forest sensing
CN112560775A (en) Switch control method and device, computer equipment and storage medium
Hebbale et al. Real time COVID-19 facemask detection using deep learning
CN110599187A (en) Payment method and device based on face recognition, computer equipment and storage medium
CN111259757A (en) Image-based living body identification method, device and equipment
CN111738078A (en) Face recognition method and device
CN108197608A (en) Face identification method, device, robot and storage medium
CN110175553A (en) The method and device of feature database is established based on Gait Recognition and recognition of face
CN112560768A (en) Gate channel control method and device, computer equipment and storage medium
CN108197593B (en) Multi-size facial expression recognition method and device based on three-point positioning method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination