CN110659957A - Unmanned convenience store shopping method, device, equipment and storage medium - Google Patents

Info

Publication number
CN110659957A
Authority
CN
China
Prior art keywords
shopping
data
user
identity
feature data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910836186.7A
Other languages
Chinese (zh)
Inventor
叶灵
陈志明
刘思伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weaving Point Intelligent Technology Co Ltd
Original Assignee
Guangzhou Weaving Point Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weaving Point Intelligent Technology Co Ltd filed Critical Guangzhou Weaving Point Intelligent Technology Co Ltd
Priority to CN201910836186.7A priority Critical patent/CN110659957A/en
Publication of CN110659957A publication Critical patent/CN110659957A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0633Lists, e.g. purchase orders, compilation or processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the invention discloses an unmanned convenience store shopping method, device, equipment and storage medium, relating to the technical field of intelligent control and comprising the following steps: acquiring, in real time, first facial feature data of a user collected by at least one first camera, the first camera being arranged inside a shopping area; determining a first identity identifier of the user according to the first facial feature data; acquiring at least one first item identifier associated with the first facial feature data; and adding the first item identifier to a shopping list corresponding to the first identity identifier. With this scheme, the user's shopping list can be obtained simply, quickly and accurately.

Description

Unmanned convenience store shopping method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of intelligent control, in particular to a shopping method, a device, equipment and a storage medium for an unmanned convenience store.
Background
With the development of intelligent technology, unmanned supermarkets are gradually playing an important role in people's lives. Compared with a traditional supermarket, an unmanned supermarket needs no shopping guides, security guards, cashiers, attendants or other staff, which saves labor costs for merchants.
Accurately confirming the items a user has selected is a crucial link in the normal operation of an unmanned supermarket. In the prior art, when leaving the unmanned supermarket, the user confirms the selected items by scanning them at a self-service station. However, when there are too many users, the queue for self-service scanning becomes too long. Moreover, self-service scanning depends on the user's operating ability, which is unfriendly to groups with limited operating ability, such as children or the elderly.
In conclusion, how to obtain a user's shopping list simply, quickly and accurately has become a technical problem that urgently needs to be solved.
Disclosure of Invention
The invention provides an unmanned convenience store shopping method, device, equipment and storage medium, which are used to obtain a user's shopping list simply, quickly and accurately.
In a first aspect, an embodiment of the present invention provides an unmanned convenience store shopping method, including:
acquiring, in real time, first facial feature data of a user collected by at least one first camera, the first camera being arranged inside a shopping area;
determining a first identity of the user according to the first facial feature data;
acquiring at least one first article identifier associated with the first facial feature data;
and adding the first item identifier into a shopping list corresponding to the first identity identifier.
Further, before the real-time acquisition of the first facial feature data of the user collected by the at least one first camera, the method includes:
acquiring second facial feature data of a user, which are acquired by at least one second camera, wherein the second camera is arranged at an entrance of a shopping area;
establishing a second identity of the user according to the second facial feature data;
the determining a first identity of the user according to the first facial feature data comprises:
and when the first face feature data and the second face feature data are successfully matched, acquiring a second identity corresponding to the second face feature data, and taking the second identity as the first identity of the user.
Further, the acquiring, in real time, first facial feature data of the user collected by the at least one first camera includes:
acquiring video data acquired by at least one first camera in real time;
and carrying out face recognition on the video data to obtain first face feature data of the user.
Further, the obtaining at least one first item identifier associated with the first facial feature data includes:
confirming the position data of the first face characteristic data according to the video data;
acquiring a shelf closest to the position data;
and obtaining at least one first item identifier according to the item taking data of the shelf.
Further, the obtaining at least one first item identifier according to the item pickup data of the shelf includes:
acquiring gravity change data of the shelf, wherein at least one sensor for collecting the gravity change data is arranged in the placement area of each type of item on the shelf;
obtaining article taking data according to the gravity change data, wherein the article taking data comprises a second article identifier leaving the shelf;
determining the second item identification as the first item identification selected by the first facial feature data.
Further, the item taking data also comprises the taking quantity of the item under the second item identifier;
the adding the first item identifier to the shopping list corresponding to the first identity identifier comprises:
and adding the first item identification and the item taking quantity into a shopping list corresponding to the first identity identification.
Further, after the adding the first item identifier to the shopping list corresponding to the first identity identifier, the method further includes:
when the user is confirmed to leave the shopping area, settling shopping expenses according to the shopping list;
determining a deduction account of the user according to the first identity identifier;
and deducting the money from the money deduction account according to the shopping expense so as to finish shopping.
In a second aspect, an embodiment of the present invention further provides an unmanned convenience store shopping device, including:
a first feature acquisition module, configured to acquire, in real time, first facial feature data of a user collected by at least one first camera, the first camera being arranged inside a shopping area;
the first identification confirmation module is used for determining a first identity identification of the user according to the first facial feature data;
a second identifier validation module configured to obtain at least one first item identifier associated with the first facial feature data;
and the identification adding module is used for adding the first article identification into a shopping list corresponding to the first identity identification.
In a third aspect, an embodiment of the present invention further provides an unmanned convenience store shopping device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the unmanned convenience store shopping method of the first aspect.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the method of unmanned convenience store shopping according to the first aspect.
According to the unmanned convenience store shopping method, device, equipment and storage medium, first facial feature data of a user are collected in real time by at least one first camera to determine the user's first identity identifier; the first item identifier selected by the user is then acquired and added to the shopping list corresponding to the first identity identifier. Collecting the first facial feature data enables accurate identification and tracking of the user and associates the user's identity with the selected item identifiers, so the items selected by the user, and in turn the user's shopping list, can be obtained quickly and accurately. The process requires no additional operation by the user, which reduces the difficulty of operation and avoids queuing for self-service item scanning.
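The flow summarized above can be sketched as a minimal, self-contained program. Everything here is an illustrative assumption — in particular the exact-match `registry` stands in for real fuzzy facial feature comparison, and all names are hypothetical:

```python
# Minimal sketch of the claimed flow: recognize a face, resolve an
# identity identifier, and append the taken item to that identity's
# shopping list. Exact string matching stands in for real facial
# feature comparison, which is fuzzy in practice.

def determine_identity(face_features, registry):
    """Return the identity bound to these features, creating one on
    first sight (the optional first-visit flow in Embodiment One)."""
    if face_features not in registry:
        registry[face_features] = f"user-{len(registry) + 1}"
    return registry[face_features]

def add_to_shopping_list(lists, identity, item_id):
    """Add a first item identifier to the list keyed by the identity."""
    lists.setdefault(identity, []).append(item_id)

registry, lists = {}, {}
uid = determine_identity("face-A", registry)   # steps 110/120
add_to_shopping_list(lists, uid, "item-42")    # steps 130/140
```

Running this once leaves one identity ("user-1" here) holding a one-item list; re-seeing the same features resolves to the same identity rather than creating a new one.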
Drawings
Fig. 1 is a flowchart of a method for shopping in an unmanned convenience store according to an embodiment of the present invention;
fig. 2 is a flowchart of an unmanned convenience store shopping method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an unmanned convenience store shopping device according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an unmanned convenience store shopping device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are for purposes of illustration and not limitation. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action or object from another entity or action or object without necessarily requiring or implying any actual such relationship or order between such entities or actions or objects. For example, the "first" and "second" of the first camera and the second camera are used only for distinction.
To reduce an unmanned supermarket's dependence on self-service scanning, the cameras of its security system can be used to track a user through the captured footage and to confirm the items the user selects during tracking. However, tracking a user this way requires collecting appearance features such as body shape and the style and/or color of clothing. When a user enters a camera blind spot (an area the cameras cannot cover), or two or more users with similar appearance features are present, the captured footage cannot track the user accurately, and the items the user selects cannot be confirmed. This embodiment therefore provides a method for accurately tracking the user and accurately determining the selected items.
Example one
Fig. 1 is a flowchart of an unmanned convenience store shopping method according to Embodiment One of the present invention. The method provided by this embodiment applies to unmanned supermarket scenarios, where the unmanned supermarket contains at least one shelf on which goods for sale are placed. Further, the method may be performed by an unmanned convenience store shopping apparatus, which may be implemented in software and/or hardware and integrated into unmanned convenience store shopping equipment. The shopping equipment may be any smart device with data processing and analysis capabilities, such as a computer, mobile phone or tablet computer.
Each unmanned supermarket is provided with corresponding unmanned convenience store shopping equipment; one set of equipment may serve a single unmanned supermarket or be shared by several. Further, the shopping equipment may be in data communication with other devices installed in the unmanned supermarket. The specific types and installation positions of those devices can be set according to the actual situation; for example, they may be cameras, alarms, various sensors (such as gravity sensors and smoke sensors), and access switches.
Specifically, referring to fig. 1, the method for shopping in an unmanned convenience store according to the present embodiment includes:
and step 110, acquiring the first facial feature data of the user acquired by at least one first camera in real time, wherein the first camera is configured in the shopping area.
By way of example, the shopping area can be understood as the interior of the unmanned supermarket, within which users select goods. At least one camera is installed in the shopping area; in this embodiment, a camera installed inside the shopping area is denoted a first camera. Its installation position can be set according to the actual situation; for example, it can be installed on a shelf in the shopping area, and/or at a set distance above a shelf to capture video of the environment around it. The first camera may be in data communication with the shopping equipment: for example, it transmits captured video data to the equipment, or the equipment controls the camera's capture range. In this embodiment, the first camera collects video data in real time and sends it to the shopping equipment.
After receiving the video data, the shopping equipment analyzes it to confirm whether a facial image of a user is present. If so, facial feature data are extracted from the facial image. Facial feature data are the features of a facial image that can distinguish one user from another, and are unique to each user. In this embodiment, facial feature data obtained from the first camera are denoted first facial feature data. A face recognition technique may be used to obtain the first facial feature data from the video data; the specific analysis algorithm is not limited in this embodiment.
It is understood that when there are several first cameras, they may capture different regions, or regions that overlap. The video data collected by all first cameras in real time may then be analyzed to determine which first camera has collected the first facial feature data.
Step 120, determining a first identity of the user according to the first facial feature data.
Specifically, each set of facial feature data corresponds to an identity identifier, which can be understood as a unique user ID. The rule for creating the identity identifier is not limited in this embodiment. The identity identifier corresponding to the first facial feature data is denoted the first identity identifier.
Optionally, when a user's first facial feature data are acquired for the first time, a corresponding first identity identifier is created; if the same first facial feature data are recognized later during tracking, the same first identity identifier is considered tracked. Optionally, when a user enters the shopping area for the first time, the user is guided through registration: facial feature data are collected and stored by a camera arranged at the entrance, and a corresponding identity identifier is established. When first facial feature data are subsequently acquired, they are matched against the pre-stored facial feature data; if a match is found, the corresponding identity identifier is taken as the first identity identifier, otherwise the comparison fails and the user is prompted to re-register. Optionally, when a user enters the shopping area, the entrance camera collects and stores the user's facial feature data and establishes a corresponding identity identifier; when first facial feature data are subsequently acquired and matched against the pre-stored data, a successful match yields the first identity identifier, while a failed comparison triggers an alarm indicating that an unauthorized person has entered the shopping area.
Step 130, acquiring at least one first item identifier associated with the first facial feature data.
Specifically, each type of item on a shelf has a corresponding placement area, and each type of item placed on a shelf has a corresponding item identifier. An item identifier can be understood as a unique item ID; the rule for generating it is not limited in this embodiment. The identifier of an item selected by the user associated with the first facial feature data is denoted a first item identifier.
Optionally, if the video data captured by the first camera contain footage of the user taking an item, the selected item can be determined by analyzing that footage, and its identifier is then looked up and recorded as the first item identifier associated with the first facial feature data. Alternatively, gravity sensors are arranged on the shelves, with one sensor serving each type of item. When a user takes an item from a shelf, the decrease in items on the shelf changes the data collected by the gravity sensor, from which the identifier of the taken item can be determined; the shelf the user is at is confirmed through the video data, and the first item identifier is confirmed through that shelf's gravity sensor. Considering that in practice a user may take several items of the same type, the unit weight of each item may be stored in advance, and the taking quantity determined from the data collected by the gravity sensor and the unit weight.
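The gravity-sensor scheme just described — one sensor per placement area, pre-stored unit weights, quantity inferred from the weight change — can be sketched as follows. The data structures and all names are illustrative assumptions, not the patent's implementation:

```python
def detect_pickup(weight_before, weight_after, unit_weight):
    """Infer how many items left a shelf zone from the gravity change.
    Returns 0 when the weight increased or was unchanged."""
    delta = weight_before - weight_after
    if delta <= 0:
        return 0
    # Round to the nearest whole item to absorb sensor noise.
    return round(delta / unit_weight)

def item_taking_data(readings, catalog):
    """Map per-zone sensor readings to item identifiers and quantities.
    `readings` is {zone: (before, after)}; `catalog` is
    {zone: (item_id, unit_weight)} -- both hypothetical structures."""
    taken = {}
    for zone, (before, after) in readings.items():
        item_id, unit = catalog[zone]
        qty = detect_pickup(before, after, unit)
        if qty:
            taken[item_id] = qty
    return taken
```

For example, a zone whose weight drops from 1420 g to 710 g with a 355 g unit weight yields a quantity of two for that zone's item identifier.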
Step 140, add the first item identifier to the shopping list corresponding to the first identity identifier.
Specifically, each first identity identifier corresponds to a shopping list, which records the items a user takes in the unmanned supermarket during a shopping trip. Generally, after a user enters the shopping area and the shopping equipment obtains the user's identity identifier, a shopping list corresponding to that identifier is established. The list records at least the first item identifiers and their taking quantities. It will be appreciated that the shopping list is continually updated during shopping. The corresponding shopping list is deleted after the user is confirmed to have finished this shopping trip, or at a set interval (such as every 24 hours). The way of confirming that the user has finished shopping is not limited in this embodiment; for example, the user is confirmed to have left the shopping area.
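The shopping-list bookkeeping described here — one list per first identity identifier, recording item identifiers with taking quantities, deleted when the trip ends — might look like this minimal sketch; the class and method names are hypothetical:

```python
from collections import Counter

class ShoppingLists:
    """One shopping list per first identity identifier (sketch)."""

    def __init__(self):
        self._lists = {}

    def add(self, identity, item_id, quantity=1):
        """Record a first item identifier and its taking quantity."""
        self._lists.setdefault(identity, Counter())[item_id] += quantity

    def get(self, identity):
        return dict(self._lists.get(identity, {}))

    def close(self, identity):
        """Delete the list once the user is confirmed to have left."""
        self._lists.pop(identity, None)
```

Using a `Counter` means taking the same item twice simply increments its quantity, matching the "identifier plus taking quantity" record the text describes.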
If two or more users shop together, the first facial feature data of each user can be collected separately and a first identity identifier obtained for each; the several first identity identifiers may then be set to share one shopping list.
Optionally, the first cameras are set to cover the entire shopping area. In practice, if the first cameras have a blind spot (an area they cannot capture), the first facial feature data can be used to determine whether a user has entered it; when a user does, the shelf located in the blind spot is determined, and whether the user takes an item, together with the corresponding item identifier, is determined through the gravity sensor arranged on that shelf.
In this technical scheme, the first facial feature data of the user, collected in real time by at least one first camera, determine the user's first identity identifier; the first item identifier selected by the user is then acquired and added to the shopping list corresponding to the first identity identifier. Collecting the first facial feature data enables accurate identification and tracking of the user and associates the user's identity with the selected item identifiers, so the selected items, and thus the user's shopping list, are obtained quickly and accurately.
Based on the above embodiment, once the shopping list is obtained, payment can be made directly from it when the user leaves the shopping area. Accordingly, after step 140, the method further includes:
Step 150, when the user is confirmed to have left the shopping area, settling the shopping fee according to the shopping list.
The way of confirming that the user has left the shopping area is not limited in this embodiment. For example, a camera arranged at the exit of the shopping area is denoted a third camera, and the facial feature data it collects are denoted third facial feature data. The third facial feature data are confirmed from the video data collected by the third camera and matched against the first facial feature data to determine the corresponding first identity identifier, thereby determining that the corresponding user has left the shopping area.
Further, when the user is determined to have left the shopping area, the corresponding first identity identifier is determined first and its shopping list obtained; the user's shopping fee for this trip is then determined from the list. Optionally, the identifier and unit price of each item are stored in advance in the shopping equipment. After the shopping list is obtained, each first item identifier is looked up in the stored correspondence between item identifiers and unit prices, and the shopping fee is calculated from the taking quantities in the list and the corresponding unit prices.
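Settling the fee from the shopping list and the pre-stored identifier-to-unit-price table is a simple lookup and sum; a sketch, with illustrative names:

```python
def settle(shopping_list, unit_prices):
    """Compute the shopping fee from {item_id: quantity}, looking each
    item's unit price up in the pre-stored price table. Returns an
    itemized receipt plus the total."""
    receipt = {item: unit_prices[item] * qty
               for item, qty in shopping_list.items()}
    return receipt, round(sum(receipt.values()), 2)
```

A missing item identifier raises `KeyError` here; how unknown items are handled is not specified in the source, so this sketch assumes every listed item has a stored price.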
Step 160, determining the deduction account of the user according to the first identity identifier.
The type of deduction account is not limited in this embodiment.
Typically, when the user is a registered user, the user may be instructed to bind a deduction account during registration, and a correspondence between the deduction account and the identity identifier is established; after the shopping fee is settled, the deduction account corresponding to the first identity identifier is obtained. When the user is unregistered, after the fee is settled the user is prompted to enter a deduction account at the exit. While the user enters it, third facial feature data are continuously collected to determine the corresponding first identity identifier, and it is checked whether that identifier is consistent with the first identity identifier associated with the shopping fee; if not, entry of the deduction account is stopped. Note that when several users shop together, only one of the first identity identifiers associated with the shopping fee needs to be consistent with the first identity identifier of whoever enters the deduction account.
Step 170, deducting money from the deduction account according to the shopping fee to complete the shopping.
Specifically, the shopping fee is sent to the deduction account, which is debited automatically according to the fee, or after the user is prompted to enter a password. The communication method and flow with the deduction account are not limited in this embodiment. Further, once payment is confirmed complete, the user is allowed to leave the shopping area through the exit.
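The deduction step can be sketched as a simple debit against the bound account. The balance check is an added assumption — the source does not specify how insufficient funds are handled — and the account store is an illustrative stand-in for a real payment backend:

```python
def deduct(accounts, account_id, fee):
    """Debit the shopping fee from the bound deduction account.
    Returns True on success, False when the balance is insufficient
    (handling of that case is an assumption, not from the source)."""
    balance = accounts.get(account_id, 0.0)
    if balance < fee:
        return False
    accounts[account_id] = balance - fee
    return True
```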
By the above means, once the shopping list is obtained, the shopping fee can be calculated from it automatically rather than by queuing for self-service item scanning, which simplifies the shopping process and improves the user experience.
Example two
Fig. 2 is a flowchart of an unmanned convenience store shopping method according to Embodiment Two of the present invention. This embodiment is elaborated on the basis of the embodiment above; here, the shopping area contains at least two shelves. Specifically, referring to Fig. 2, the method provided by this embodiment includes:
step 210, obtaining second facial feature data of the user collected by at least one second camera, the second camera being disposed at an entrance of the shopping area.
Specifically, at least one camera is arranged at the entrance of the shopping area; in this embodiment, the camera at the entrance is denoted a second camera, and its installation position is not limited. Usually, the second camera captures footage of the entrance, so when a user is at the entrance it can collect the user's facial feature data, denoted here second facial feature data. The second facial feature data are acquired in the same manner as the first facial feature data, which is not repeated here.
And step 220, establishing a second identity of the user according to the second facial feature data.
Specifically, if the shopping area admits registered users only, then when a user enters for the first time, the second camera collects and stores the user's second facial feature data, and an identity identifier corresponding to them is established. When the user enters again, the newly collected second facial feature data are matched against those of all registered users: on success, the corresponding second identity identifier is obtained; on failure, the user is prompted to register.
If the shopping area admits unregistered users, a corresponding second identity identifier is established directly once the second camera collects the second facial feature data, so the user's identity for this shopping trip is confirmed. In that case, the second facial feature data and second identity identifier may be deleted as soon as the user leaves the shopping area, or only after a set condition is met, where the set condition may be that the second facial feature data are not collected again within a set period (such as three months). If the second facial feature data are collected again within the set period, the corresponding second identity identifier can be obtained directly.
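The set-condition cleanup for unregistered users ("not collected again within a set period, such as three months") amounts to time-based expiry. A sketch, with the retention window, store layout, and function names all as assumptions:

```python
import time

RETENTION_SECONDS = 90 * 24 * 3600  # e.g. "three months", per the text

def touch(store, identity, features, now=None):
    """Refresh last-seen time when the features are collected again.
    `store` maps identity -> (features, last_seen_timestamp)."""
    store[identity] = (features, time.time() if now is None else now)

def purge_stale_features(store, now=None):
    """Delete second facial feature data not seen within the window."""
    now = time.time() if now is None else now
    for identity in [i for i, (_, seen) in store.items()
                     if now - seen > RETENTION_SECONDS]:
        del store[identity]
    return store
```

Passing `now` explicitly keeps the purge testable; in deployment it would default to the current time.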
It will be appreciated that each item of second facial feature data has a corresponding second identity.
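The register-or-match flow at the entrance described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the feature-vector representation, the cosine similarity measure, the 0.9 threshold, and the UUID identifiers are all assumptions introduced here.

```python
import uuid

def cosine_similarity(a, b):
    # Similarity between two facial feature vectors (assumed representation).
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

class IdentityRegistry:
    """Maps second facial feature data to second identity identifiers."""

    def __init__(self, threshold=0.9, registered_only=False):
        self.threshold = threshold
        self.registered_only = registered_only
        self.records = {}  # second identity -> stored feature vector

    def identify_at_entrance(self, features):
        # Match against every stored item of second facial feature data.
        best_id, best_sim = None, 0.0
        for identity, stored in self.records.items():
            sim = cosine_similarity(features, stored)
            if sim > best_sim:
                best_id, best_sim = identity, sim
        if best_id is not None and best_sim >= self.threshold:
            return best_id  # returning user: reuse the second identity
        if self.registered_only:
            return None  # registered-only area: prompt the user to register
        # Open area: establish a second identity directly.
        new_id = str(uuid.uuid4())
        self.records[new_id] = features
        return new_id
```

Note how the two store policies from the description map onto one flag: with `registered_only=True` an unmatched face yields no identity, while the open-area variant enrolls the face on the spot.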
And step 230, acquiring video data acquired by at least one first camera in real time.
And 240, performing face recognition on the video data to obtain first face characteristic data of the user.
Specifically, the technical means used for face recognition is not limited here. Face recognition confirms whether the video data contains a face image; when it does, the face image is processed to obtain the first facial feature data.
And step 250, when the first face feature data and the second face feature data are successfully matched, acquiring a second identity corresponding to the second face feature data, and using the second identity as the first identity of the user.
Specifically, the condition for a successful match can be set according to the actual situation. For example, a similarity threshold is set: when the similarity between the first facial feature data and a given item of second facial feature data exceeds the threshold, the match is considered successful. If the first facial feature data exceeds the threshold against multiple items of second facial feature data, the item with the highest similarity is selected. As another example, a matching model may be obtained by machine learning; in that case the first facial feature data is fed to the model as input, and after processing it the model outputs the corresponding second facial feature data.
Further, once the matched second facial feature data is determined, the second identity corresponding to it is obtained and used as the first identity of the current user.
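The threshold-based variant of step 250 can be sketched as follows. The similarity callable and the 0.9 threshold are illustrative assumptions; the patent leaves both open.

```python
def match_first_to_second(first_features, second_records, similarity, threshold=0.9):
    """Return the second identity whose second facial feature data best matches
    the first facial feature data, or None when no similarity exceeds the threshold.

    second_records: dict mapping second identity -> second facial feature data.
    similarity: callable scoring two feature representations (higher = more similar).
    """
    candidates = [
        (similarity(first_features, feats), identity)
        for identity, feats in second_records.items()
    ]
    if not candidates:
        return None
    best_score, best_identity = max(candidates)
    # When several records exceed the threshold, max() has already picked the
    # one with the highest similarity, as the embodiment requires.
    return best_identity if best_score > threshold else None
```

A `None` result corresponds to a failed match, after which the caller can fall back to the entrance-registration behaviour of the surrounding embodiment.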
Step 260, confirming the position data of the first face feature data according to the video data.
Typically, the position data refers to the user's location within the shopping area and can be determined from the video data. Optionally, since the shelves and the user are in the same space, the same coordinate system can be used: when a shelf appears in the video data, it can be identified to confirm its position within the shopping area. The positions of the shelves are stored in the unmanned convenience store shopping device in advance, so the position of a shelf appearing in the video data can be determined from the shelf-identification result together with the pre-stored shelf positions. Once the shelf's position is confirmed, the relative position of the shelf and the user is confirmed, which yields the position data. Optionally, the first camera that collected the first facial feature data is determined, its installation position is obtained, and the position data is derived from that installation position. Optionally, the user's position data is confirmed when an item pickup event is detected. The detection method for pickup events is not limited in this embodiment; for example, a pickup action may be detected from the video data, or a change may be detected in the weight of the items placed on a certain shelf.
And step 270, acquiring the shelf closest to the position data.
Specifically, the shelf closest to the user can be determined from the shelf positions and the user's position data. The closest shelf is the shelf in the direction from which the user takes the goods.
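Steps 260 to 270 reduce to a nearest-neighbour lookup once positions share one coordinate system. A minimal sketch, assuming shelf positions and the user's position data are 2-D points on a shop-floor plan (the coordinates below are hypothetical, not from the patent):

```python
import math

# Pre-stored shelf positions in the shopping area (hypothetical coordinates).
SHELF_POSITIONS = {
    "shelf-A": (1.0, 2.0),
    "shelf-B": (4.0, 2.0),
    "shelf-C": (4.0, 6.0),
}

def nearest_shelf(user_position):
    """Return the identifier of the shelf closest to the user's position data."""
    return min(
        SHELF_POSITIONS,
        key=lambda s: math.dist(SHELF_POSITIONS[s], user_position),
    )
```

For a user standing at `(3.5, 2.5)`, the Euclidean distances make `"shelf-B"` the closest shelf.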
Step 280, obtaining at least one first item identifier according to item pickup data of the shelf.
The item pickup data is data generated when the user picks up an item from the shelf. Optionally, the item pickup data may be obtained from the video data collected by the first camera. For example, after image processing of the video data, it is determined whether the video data contains a pickup action by the user; if so, the shelf closest to the user is confirmed, and the pickup frames are analyzed to determine the item the user is taking, yielding the item pickup data. Alternatively, at least one sensor is arranged in the placement area of each type of item on the shelf; this sensor is a gravity sensor, from which gravity change data of the shelf can be obtained, and the item pickup data is then derived from the gravity change data. This embodiment is described taking the latter case as an example, in which at least one sensor is arranged in the placement area of each type of item on the shelf. In this case, this step specifically includes steps 281 to 283:
Step 281, acquiring gravity change data of the shelf, wherein at least one sensor for collecting the gravity change data is arranged in the placement area of each type of item on the shelf.
Specifically, the sensor is a gravity sensor that collects weight data within its detection range in real time; the detection range may be the placement area of one type of item. Once the shelf closest to the user is confirmed, only the data collected by the sensors installed on that shelf is analyzed. Further, when the user takes items from the shelf, the weight of the items on the shelf changes, and the weight data collected by the gravity sensor changes synchronously; the gravity change data is then obtained from this change in the weight data. It can be understood that the gravity change data is relative data, that is, the change in weight of the same type of items on the shelf before and after the user takes the items.
Step 282, obtaining item pickup data according to the gravity change data, wherein the item pickup data includes the second item identifier of the item leaving the shelf.
Typically, the sensor numbers corresponding to the identifiers of the various items on the shelf are recorded in advance. After the gravity change data is acquired, the number of the sensor whose weight reading changed can be confirmed, and from it the identifier of the item that was taken. After this second item identifier is obtained, it is written into the item pickup data.
Optionally, the item pickup data further includes an item pickup quantity. Specifically, the unit weight corresponding to each type of item identifier is recorded in advance; the unit weight is the weight data collected by the sensor when the number of items is 1. When the second item identifier is determined, the corresponding unit weight is obtained, and the item pickup quantity is then determined from the gravity change data and the unit weight. In practice, dividing the gravity change data by the unit weight may yield a non-integer; in that case the result is rounded to the nearest integer, which is taken as the item pickup quantity.
Step 283, the second item identification is determined as the first item identification selected by the first facial feature data.
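Steps 281 to 283 can be sketched as follows. The sensor-to-item mapping and the unit weights are made-up illustrative values, not data from the patent:

```python
# Pre-recorded mapping from sensor number to item identifier, and unit weights
# (the weight reported by the sensor when the item count is 1). Values are hypothetical.
SENSOR_TO_ITEM = {1: "cola-330ml", 2: "chips-100g"}
UNIT_WEIGHT = {"cola-330ml": 355.0, "chips-100g": 110.0}  # grams

def pickup_from_gravity(sensor_no, weight_before, weight_after):
    """Derive item pickup data (second item identifier and pickup quantity)
    from the gravity change in one sensor's placement area."""
    change = weight_before - weight_after  # positive when items leave the shelf
    if change <= 0:
        return None  # nothing was taken from this placement area
    item_id = SENSOR_TO_ITEM[sensor_no]
    # Dividing the gravity change by the unit weight may give a non-integer;
    # round to the nearest integer to obtain the item pickup quantity.
    quantity = round(change / UNIT_WEIGHT[item_id])
    return {"second_item_id": item_id, "quantity": quantity}
```

The returned `second_item_id` is what step 283 then treats as the first item identifier selected by the first facial feature data.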
Step 290, add the first item identifier to the shopping list corresponding to the first identity identifier.
On the basis of the above embodiment, the item pickup data further includes the pickup quantity for the second item identifier. In this case, step 290 specifically includes: adding the first item identifier and the item pickup quantity to the shopping list corresponding to the first identity identifier.
Specifically, to ensure the accuracy of the shopping list, in this embodiment the item pickup quantity is written synchronously when the first item identifier is written. The manner in which the item pickup quantity is written is not limited in this embodiment.
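Step 290, with the pickup quantity written synchronously, might look like the sketch below; the per-user dictionary structure of the shopping list is an assumption, since the patent does not fix a data layout:

```python
from collections import defaultdict

# shopping_lists: first identity identifier -> {first item identifier: quantity}
shopping_lists = defaultdict(dict)

def add_to_shopping_list(first_identity, first_item_id, quantity=1):
    """Write the first item identifier and its pickup quantity into the
    shopping list corresponding to the first identity identifier."""
    items = shopping_lists[first_identity]
    items[first_item_id] = items.get(first_item_id, 0) + quantity
```

Accumulating into an existing entry means repeated pickups of the same item stay as one line with an updated quantity rather than duplicate entries.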
In this embodiment, the second facial feature data of the user is collected by the second camera at the entrance of the shopping area, and a second identity identifier corresponding to the second facial feature data is established. After the user enters the shopping area, the first camera collects the user's first facial feature data, and the user's identity is recognized by matching the first facial feature data against the second facial feature data. The shelf from which the user may take goods is then determined based on the user's position data within the shopping area, the identifier and quantity of the goods taken by the user are determined according to the gravity change data collected by the sensors on that shelf, and the identifier and quantity are written into the shopping list. Accurate recognition and tracking of the user's shopping process is thereby achieved: the user's identity is recognized accurately through the facial feature data, and the gravity change data collected by the sensors determines the identifiers of the items the user has taken, so that the user's shopping list is obtained quickly and accurately.
Example Three
Fig. 3 is a schematic structural diagram of an unmanned convenience store shopping device according to a third embodiment of the present invention. Referring to fig. 3, the unmanned convenience store shopping apparatus includes: a first feature acquisition module 301, a first identity validation module 302, a second identity validation module 303, and an identity join module 304.
The first feature acquisition module 301 is configured to acquire, in real time, first facial feature data of a user acquired by at least one first camera, where the first camera is disposed inside a shopping area; a first identity confirmation module 302, configured to determine a first identity of the user according to the first facial feature data; a second identifier confirming module 303, configured to obtain at least one first item identifier associated with the first facial feature data; an identity joining module 304, configured to join the first item identity to a shopping list corresponding to the first identity.
On the basis of the above embodiment, the apparatus further includes: a second feature acquisition module, configured to acquire second facial feature data of the user collected by at least one second camera before the first facial feature data of the user collected by the at least one first camera is acquired in real time, where the second camera is arranged at an entrance of the shopping area; and an identifier establishing module, configured to establish a second identity identifier of the user according to the second facial feature data. Correspondingly, the first identifier confirmation module 302 is specifically configured to: when the first facial feature data is successfully matched with the second facial feature data, acquire the second identity identifier corresponding to the second facial feature data and use it as the first identity identifier of the user.
On the basis of the above embodiment, the first feature acquisition module 301 includes: the video acquisition unit is used for acquiring video data acquired by at least one first camera in real time; and the face recognition unit is used for carrying out face recognition on the video data so as to obtain first face characteristic data of the user.
On the basis of the above embodiment, the second identifier confirming module 303 includes: a position determination unit configured to confirm position data of the first face feature data from the video data; a shelf confirmation unit configured to acquire a shelf closest to the position data; and the identification obtaining unit is used for obtaining at least one first item identification according to the item taking data of the shelf.
On the basis of the above embodiment, the identification obtaining unit includes: the gravity collecting subunit is used for obtaining the gravity change data of the goods shelf, and at least one sensor for collecting the gravity change data is arranged in the placing area of each type of goods on the goods shelf; the data generation subunit is used for obtaining article taking data according to the gravity change data, wherein the article taking data comprises a second article identifier leaving the shelf; an identity determination subunit, configured to determine the second item identity as the first item identity selected by the first facial feature data.
On the basis of the above embodiment, the item pickup data further includes the pickup quantity for the second item identifier. Correspondingly, the identifier adding module 304 is specifically configured to: add the first item identifier and the item pickup quantity to the shopping list corresponding to the first identity identifier.
On the basis of the above embodiment, the apparatus further includes: a fee settlement module, configured to settle the shopping fee according to the shopping list after the first item identifier is added to the shopping list corresponding to the first identity identifier and the user is confirmed to have left the shopping area; an account confirmation module, configured to determine the user's deduction account according to the first identity identifier; and a deduction module, configured to deduct the shopping fee from the deduction account to complete the purchase.
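The fee settlement, account confirmation, and deduction modules together amount to the flow sketched below. The price table, the identity-to-account mapping, and the balance store are hypothetical stand-ins for whatever backend the device would actually use:

```python
PRICES = {"cola-330ml": 3.0, "chips-100g": 5.5}   # hypothetical unit prices
DEDUCTION_ACCOUNTS = {"user-1": "acct-001"}        # first identity -> account
ACCOUNT_BALANCES = {"acct-001": 50.0}

def settle_on_exit(first_identity, shopping_list):
    """Settle the shopping fee from the shopping list once the user is
    confirmed to have left the shopping area, then deduct it from the
    deduction account determined by the first identity identifier."""
    fee = sum(PRICES[item] * qty for item, qty in shopping_list.items())
    account = DEDUCTION_ACCOUNTS[first_identity]
    if ACCOUNT_BALANCES[account] < fee:
        raise ValueError("insufficient balance")
    ACCOUNT_BALANCES[account] -= fee
    return fee
```

Splitting settlement from deduction mirrors the module boundaries above: the fee is computed first, the account resolved second, and the debit applied last.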
The unmanned convenience store shopping device provided by the embodiment of the invention can be used for executing the unmanned convenience store shopping method provided by any embodiment, and has corresponding functions and beneficial effects.
Example Four
Fig. 4 is a schematic structural diagram of an unmanned convenience store shopping device according to a fourth embodiment of the present invention. As shown in fig. 4, the unmanned convenience store shopping device includes a processor 40, a memory 41, an input device 42, an output device 43, and a communication device 44; the number of processors 40 in the unmanned convenience store shopping device may be one or more, one processor 40 being exemplified in fig. 4; the processor 40, the memory 41, the input device 42, the output device 43, and the communication device 44 in the unmanned convenience store shopping device may be connected by a bus or other means, and the bus connection is exemplified in fig. 4.
The memory 41 serves as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the method for unmanned convenience store shopping according to an embodiment of the present invention (for example, the first feature acquisition module 301, the first identifier confirmation module 302, the second identifier confirmation module 303, and the identifier joining module 304 in the unmanned convenience store shopping apparatus). The processor 40 executes various functional applications and data processing of the unmanned convenience store shopping device, that is, implements the above-described unmanned convenience store shopping method, by operating software programs, instructions, and modules stored in the memory 41.
The memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the unmanned convenience store shopping device, and the like. Further, the memory 41 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 41 may further include memory located remotely from processor 40, which may be connected to the unmanned convenience store shopping device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 42 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the unmanned convenience store shopping apparatus. The output device 43 may include a display device such as a display screen. The communication device 44 is used for data communication with an apparatus such as a camera.
The unmanned convenience store shopping equipment comprises the unmanned convenience store shopping device, can be used for executing any unmanned convenience store shopping method, and has corresponding functions and beneficial effects.
Example Five
Embodiments of the present invention also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a method of unmanned convenience store shopping, the method comprising:
acquiring, in real time, first facial feature data of a user collected by at least one first camera, wherein the first camera is arranged inside a shopping area;
determining a first identity of the user according to the first facial feature data;
acquiring at least one first article identifier associated with the first facial feature data;
and adding the first item identifier into a shopping list corresponding to the first identity identifier.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the method operations described above, and may also perform related operations in the method for unmanned convenience store shopping provided by any embodiments of the present invention.
From the above description of the embodiments, it is obvious to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by hardware, although the former is the better implementation in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the above unmanned convenience store shopping device, the units and modules included in the embodiment are only divided according to the function logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An unmanned convenience store shopping method, comprising:
acquiring, in real time, first facial feature data of a user collected by at least one first camera, wherein the first camera is arranged inside a shopping area;
determining a first identity of the user according to the first facial feature data;
acquiring at least one first article identifier associated with the first facial feature data;
and adding the first item identifier into a shopping list corresponding to the first identity identifier.
2. The unmanned convenience store shopping method according to claim 1, wherein before the acquiring, in real time, of first facial feature data of a user collected by at least one first camera, the method further comprises:
acquiring second facial feature data of a user, which are acquired by at least one second camera, wherein the second camera is arranged at an entrance of a shopping area;
establishing a second identity of the user according to the second facial feature data;
the determining a first identity of the user according to the first facial feature data comprises:
and when the first face feature data and the second face feature data are successfully matched, acquiring a second identity corresponding to the second face feature data, and taking the second identity as the first identity of the user.
3. The unmanned convenience store shopping method according to claim 1, wherein the acquiring, in real time, of first facial feature data of a user collected by at least one first camera comprises:
acquiring video data acquired by at least one first camera in real time;
and carrying out face recognition on the video data to obtain first face feature data of the user.
4. The unmanned convenience store shopping method according to claim 3, wherein the acquiring at least one first item identifier associated with the first facial feature data comprises:
confirming the position data of the first face characteristic data according to the video data;
acquiring a shelf closest to the position data;
and obtaining at least one first item identifier according to the item taking data of the shelf.
5. The unmanned convenience store shopping method according to claim 4, wherein the obtaining at least one first item identifier according to item pickup data of the shelf comprises:
acquiring the gravity change data of the goods shelf, wherein at least one sensor for acquiring the gravity change data is arranged in the placing area of each type of goods on the goods shelf;
obtaining article taking data according to the gravity change data, wherein the article taking data comprises a second article identifier leaving the shelf;
determining the second item identification as the first item identification selected by the first facial feature data.
6. The unmanned convenience store shopping method according to claim 5, wherein the item pickup data further includes an item pickup quantity for the second item identifier;
the adding the first item identifier to the shopping list corresponding to the first identity identifier comprises:
and adding the first item identification and the item taking quantity into a shopping list corresponding to the first identity identification.
7. The unmanned convenience store shopping method according to any one of claims 1-6, wherein after the adding of the first item identifier to the shopping list corresponding to the first identity identifier, the method further comprises:
when the user is confirmed to leave the shopping area, settling shopping expenses according to the shopping list;
determining a deduction account number of the user according to the first identity mark;
and deducting the money from the money deduction account according to the shopping expense so as to finish shopping.
8. An unmanned convenience store shopping device comprising:
a first feature acquisition module, configured to acquire, in real time, first facial feature data of a user collected by at least one first camera, wherein the first camera is arranged inside a shopping area;
the first identification confirmation module is used for determining a first identity identification of the user according to the first facial feature data;
a second identifier validation module configured to obtain at least one first item identifier associated with the first facial feature data;
and the identification adding module is used for adding the first article identification into a shopping list corresponding to the first identity identification.
9. An unmanned convenience store shopping device comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the unmanned convenience store shopping method according to any one of claims 1-7.
10. A storage medium containing computer-executable instructions, which when executed by a computer processor, operate to perform the method of unmanned convenience store shopping of any one of claims 1-7.
CN201910836186.7A 2019-09-05 2019-09-05 Unmanned convenience store shopping method, device, equipment and storage medium Withdrawn CN110659957A (en)


Publications (1)

Publication Number Publication Date
CN110659957A true CN110659957A (en) 2020-01-07


Similar Documents

Publication Publication Date Title
US11501523B2 (en) Goods sensing system and method for goods sensing based on image monitoring
US11176597B2 (en) Associating shoppers together
JP6261197B2 (en) Display control apparatus, display control method, and program
CN110647825A (en) Method, device and equipment for determining unmanned supermarket articles and storage medium
EP3208763A1 (en) Association program, computer-readable medium, information processing method and information processing device
JPWO2019038968A1 (en) Store device, store system, store management method, program
US9891798B2 (en) Face image tracking system
JP2022539920A (en) Method and apparatus for matching goods and customers based on visual and gravity sensing
JP7264401B2 (en) Accounting methods, devices and systems
CN112215167B (en) Intelligent store control method and system based on image recognition
EP4075399A1 (en) Information processing system
US11379903B2 (en) Data processing method, device and storage medium
WO2019181364A1 (en) Store management device and store management method
CN111178860A (en) Settlement method, device, equipment and storage medium for unmanned convenience store
CN110287867A (en) Unmanned convenience store enters recognition methods, device, equipment and storage medium
CN111222870A (en) Settlement method, device and system
CN108510673B (en) Automatic radio frequency quick checkout method and device and electronic equipment
JP7030092B2 (en) Information generation method and equipment and equipment for human-computer interaction
CN110689389A (en) Computer vision-based shopping list automatic maintenance method and device, storage medium and terminal
EP3629276A1 (en) Context-aided machine vision item differentiation
CN110659955A (en) Multi-user shopping management method, device, equipment and storage medium
CN111783509A (en) Automatic settlement method, device, system and storage medium
US20230005348A1 (en) Fraud detection system and method
WO2020179730A1 (en) Information processing device, information processing method, and program
EP3474183A1 (en) System for tracking products and users in a store

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20200107