WO2021042730A1 - Method and device for matching products and customers based on vision and gravity sensing - Google Patents


Info

Publication number
WO2021042730A1
WO2021042730A1 (PCT/CN2020/084780, CN2020084780W)
Authority
WO
WIPO (PCT)
Prior art keywords
customer
shelf
recognition result
product
weight
Prior art date
Application number
PCT/CN2020/084780
Other languages
English (en)
French (fr)
Inventor
吴一黎
Original Assignee
图灵通诺(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 图灵通诺(北京)科技有限公司
Priority to EP20739233.3A priority Critical patent/EP3809325A4/en
Priority to JP2020540494A priority patent/JP2022539920A/ja
Priority to US16/965,563 priority patent/US11983250B2/en
Publication of WO2021042730A1 publication Critical patent/WO2021042730A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01G WEIGHING
    • G01G 19/00 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01G WEIGHING
    • G01G 19/00 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups
    • G01G 19/40 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight
    • G01G 19/413 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means
    • G01G 19/414 Weighing apparatus or methods adapted for special purposes not provided for in the preceding groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 17/00 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K 17/0022 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K 17/0029 Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations, arrangements or provisions for transferring data to distant stations, e.g. from a sensing device, the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/087 Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance

Definitions

  • The invention belongs to the field of computer technology, and particularly relates to a method and device for matching products and customers based on vision and gravity sensing.
  • Conventionally, matching a customer with a product is achieved by having the customer queue at a checkout counter, where the cashier scans the barcode on each product with a handheld code scanning terminal to determine which products the customer intends to purchase and to settle the bill.
  • One aspect of the present invention provides a method for matching products and customers based on vision and gravity sensing, which includes: acquiring a customer's identity information and tracking the customer in real time in a shopping place, the shopping place being equipped with shelves for carrying products; obtaining the position of a shelf whose carried weight has changed; identifying the product that caused the carried weight of the shelf to change, and obtaining a product recognition result; and determining the identity information of the purchasing customer according to the shelf position, the position of the customer's hand, and the customer's actions, the purchasing customer being the customer who matches the product recognition result.
  • Another aspect of the present invention provides a device for matching products and customers based on vision and gravity sensing, which includes: a customer tracking module, for acquiring a customer's identity information and tracking the customer in real time in a shopping place equipped with shelves for carrying products; a shelf position acquisition module, for obtaining the position of a shelf whose carried weight has changed; a product recognition result acquisition module, for identifying the product that caused the carried weight of the shelf to change and obtaining a product recognition result; and a matching module, for determining the identity information of the purchasing customer according to the shelf position, the position of the customer's hand, and the customer's actions, the purchasing customer being the customer who matches the product recognition result.
  • Another aspect of the present invention provides a device for matching products and customers based on vision and gravity sensing, which includes a memory and a processor.
  • The processor is connected to the memory and is configured to execute, based on instructions stored in the memory, the above-mentioned method for matching products and customers based on vision and gravity sensing.
  • Another aspect of the present invention provides a computer-readable storage medium on which a computer program is stored.
  • When the program is executed by a processor, it implements the above-mentioned method for matching products and customers based on vision and gravity sensing.
  • FIG. 1 is a schematic flowchart of a method for matching products and customers based on vision and gravity sensing according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a method for matching products and customers based on vision and gravity sensing according to another embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of another method for matching products and customers based on vision and gravity sensing according to another embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a device for matching products and customers based on vision and gravity sensing according to an embodiment of the present invention.
  • An embodiment of the present invention provides a method for matching products and customers based on vision and gravity sensing. Referring to FIG. 1, the method includes the following steps:
  • Step 101: Obtain the customer's identity information and track the customer in real time in the shopping place.
  • Customer registration can be completed through an app, for example via a mini program or a WeChat official account.
  • The mini program can be a mini program in WeChat (a WeChat mini program); the customer can also register on a self-service registration terminal arranged in the shopping place.
  • The customer's identity information (ID) needs to be collected during registration. It is used to distinguish different customers and uniquely identifies each customer. It can include one or more of: ID card number, mobile phone number, user account, nickname, facial information, fingerprint information, etc.
  • The identity information can also include a payment account, which can be a bank account or a third-party payment account, such as Alipay, WeChat Pay, QQ Wallet, JD Wallet, etc.
  • The identity information may also include: name, gender, occupation, etc.
  • A gate is arranged at the entrance of the shopping place. After the identity recognition device obtains the customer's identity information, the information is verified. A customer who passes verification can open the gate and enter the shopping place to shop; a customer who has not registered or fails verification cannot open the gate and thus cannot enter the shopping place.
  • The identity recognition device may be a face recognition device, in which case the corresponding identity information is facial information.
  • The identity recognition device may be a fingerprint recognition device, in which case the corresponding identity information is fingerprint information. In other embodiments, it may be other equipment, such as a code scanning terminal, with a two-dimensional code as the medium carrying the customer's identity information; the code scanning terminal verifies the customer's identity information by scanning the two-dimensional code displayed on the terminal device held by the customer.
  • The identity recognition device can also be a two-dimensional code or barcode that identifies the shopping place; the customer scans the two-dimensional code or barcode with the App on their terminal device to verify their identity information.
  • Pick-and-place behaviors include: take behavior and put-back behavior.
  • The take behavior means the customer is willing to buy and performs the action of taking the product; the put-back behavior means the customer abandons the willingness to buy and performs the action of putting the product back.
  • The shopping place has a large space and can accommodate multiple customers shopping at the same time. In order to determine which customer purchased a product, it is necessary to keep tracking each customer in real time after they enter the shopping place, that is, to obtain the customer's location in the shopping place in real time.
  • Real-time tracking can be based on depth-camera positioning: multiple depth cameras are deployed on the top or ceiling of the shopping place with their shooting direction facing downward, and the number of depth cameras is adjusted according to the size of the shopping place so that, preferably, their combined shooting range covers the whole space of the shopping place.
  • When a customer moves about the shopping place, each depth camera detects the customer's position in each image frame through a target detection algorithm, and then calculates the customer's position in the three-dimensional space of the shopping place based on the depth information.
  • Depth information refers to the distance from the camera to the object in three-dimensional space corresponding to each pixel captured by the depth camera.
  • If a depth camera is deployed above the shelf, the customer's hand position can be obtained with the same depth-camera technique. Since the implementation involves image processing, it can be called a vision-based technology.
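As an illustration of the positioning described above, the back-projection from a detected pixel plus its depth value to a position in the store's coordinate frame can be sketched as follows. The pinhole intrinsics, the downward-facing camera convention, and all numeric values are assumptions for the example, not part of the disclosure:

```python
def pixel_to_world(u, v, depth_m, fx, fy, cx, cy, cam_xyz):
    """Back-project pixel (u, v) with measured depth (metres) to world
    coordinates, for a ceiling camera at cam_xyz = (X, Y, height) whose
    optical axis points straight down at the floor."""
    # Pinhole model: offset from the optical axis scales with depth.
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    cam_x, cam_y, cam_h = cam_xyz
    # The camera looks down, so depth is subtracted from its height.
    return (cam_x + x, cam_y + y, cam_h - depth_m)

# A detection at the image centre, 2.2 m below a camera mounted 3 m up,
# puts the customer's head about 0.8 m above the floor.
world = pixel_to_world(320, 240, 2.2, fx=500, fy=500, cx=320, cy=240,
                       cam_xyz=(5.0, 3.0, 3.0))
```

The same back-projection applied to a detected hand keypoint yields the hand position used in step 104.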
  • Step 102: Obtain the position of the shelf whose carried weight has changed.
  • After entering the shopping place, the customer moves around. On encountering a product they like, the customer stops in front of the shelf carrying that product and either takes the product, indicating that it is a product to be purchased, or puts it back, indicating that it is not. When the customer takes a product, the weight carried by the shelf decreases; when the customer puts one back, the carried weight increases. The change in carried weight is detected by gravity sensing, for example by installing a gravity sensor on each shelf: a detected decrease means the customer has taken a product, and a detected increase means the customer has put a product back. Before use, the position of each shelf in the three-dimensional space of the shopping place is measured and associated with the gravity sensor installed on it, so that when a gravity sensor detects a change in carried weight, the position of the affected shelf can be determined.
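The sensor-to-shelf calibration and the take/put-back decision described above can be sketched as follows. The data model, sensor names, and noise threshold are illustrative assumptions:

```python
# Calibration measured before the store opens: each gravity sensor is
# mapped to the shelf it sits on and that shelf's measured 3D position.
SENSOR_TO_SHELF = {"sensor-07": {"shelf_id": "A3", "position": (5.0, 3.0, 1.2)}}

def detect_shelf_event(sensor_id, weight_before_g, weight_after_g, noise_g=5.0):
    """Turn a gravity-sensor reading pair into a shelf event, or None."""
    delta = weight_after_g - weight_before_g
    if abs(delta) <= noise_g:          # ignore sensor jitter
        return None
    shelf = SENSOR_TO_SHELF[sensor_id]
    return {
        "shelf_id": shelf["shelf_id"],
        "shelf_position": shelf["position"],
        # Weight decreased -> product taken; increased -> put back.
        "event": "take" if delta < 0 else "put_back",
        "weight_change_g": delta,
    }

event = detect_shelf_event("sensor-07", 1500.0, 1170.0)
print(event["event"], event["weight_change_g"])  # take -330.0
```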
  • Step 103: Identify the product that caused the carried weight of the shelf to change, and obtain a product recognition result.
  • The weight change value of the shelf's carried weight is obtained by gravity sensing, and a weight recognition result is obtained from the weight change value and the weight of each product carried by the shelf; this weight recognition result serves as the product recognition result.
  • A shelf gravity value table records the types of products on the shelf and their corresponding weight values.
  • Multiple cargo lanes can be formed by setting partitions at intervals on the support plates of the shelf.
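The weight-recognition step can be sketched as follows: the shelf gravity value table (its contents here are assumed for illustration) maps each product on a shelf to its unit weight, and a weight change is explained as some whole number of units of one product:

```python
# Assumed shelf gravity value table: product type -> unit weight (grams).
SHELF_GRAVITY_TABLE = {"A3": {"cola_330ml": 345.0, "chips": 80.0}}

def weight_recognition(shelf_id, weight_change_g, tolerance_g=10.0):
    """Explain a weight change as n units of a product on this shelf."""
    results = []
    for product, unit_g in SHELF_GRAVITY_TABLE[shelf_id].items():
        n = round(abs(weight_change_g) / unit_g)
        # Accept only if n whole units reproduce the change within tolerance.
        if n >= 1 and abs(abs(weight_change_g) - n * unit_g) <= tolerance_g:
            results.append((product, n))
    return results  # may hold several candidates if unit weights are ambiguous

print(weight_recognition("A3", -345.0))  # [('cola_330ml', 1)]
```

When every product on the shelf has a distinct unit weight (for example, one product per cargo lane), the list contains a single entry and the weight recognition result is unambiguous.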
  • Step 104: Determine the identity information of the purchasing customer according to the shelf position, the position of the customer's hand, and the customer's actions; the purchasing customer is the customer who matches the product recognition result.
  • The customers near the shelf are determined from the shelf position and each customer's tracked position.
  • The customers near the shelf are those located around the shelf at the moment the carried weight changed.
  • Based on the actions and hand positions of the customers near the shelf, it is determined which nearby customer's action caused the change in the shelf's carried weight; that customer is the purchasing customer. The product recognition result is matched with that customer to determine the customer's identity information, and products are then added to or removed from the customer's shopping cart (shopping list or virtual shopping cart) based on the product recognition result, which facilitates settlement.
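A minimal sketch of step 104, using an assumed data model: among the customers tracked near the shelf at the moment the weight changed, the one whose hand is closest to the shelf is taken as the purchasing customer. Positions here are simplified to 2D floor coordinates:

```python
import math

def match_purchasing_customer(shelf_pos, customers, near_radius_m=1.0):
    """customers: list of dicts with 'id', 'position' (tracked body
    position) and 'hand_position', all as 2D floor coordinates."""
    # First restrict to customers standing near the shelf.
    nearby = [c for c in customers
              if math.dist(c["position"], shelf_pos) <= near_radius_m]
    if not nearby:
        return None
    # Then pick the nearby customer whose hand is closest to the shelf.
    return min(nearby,
               key=lambda c: math.dist(c["hand_position"], shelf_pos))["id"]

customers = [
    {"id": "u1", "position": (4.6, 3.0), "hand_position": (4.95, 3.0)},
    {"id": "u2", "position": (5.8, 3.1), "hand_position": (5.6, 3.1)},
]
print(match_purchasing_customer((5.0, 3.0), customers))  # u1
```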
  • The embodiment of the present invention tracks customers in the shopping place in real time, obtains the position of the shelf whose carried weight has changed, identifies the product that caused the change, and obtains a product recognition result.
  • The shelf position, the customer's hand position, and the customer's actions determine the identity information of the purchasing customer, that is, the customer who matches the product recognition result. This achieves accurate matching between product recognition results and customers, helps unmanned convenience stores complete settlement, and reduces the work of cashiers.
  • Referring to FIG. 2, another embodiment of the present invention provides a method for matching products and customers based on vision and gravity sensing, which includes the following steps:
  • Step 201: Obtain the customer's identity information and track the customer in real time in the shopping place.
  • Step 202: Obtain the position of the shelf whose carried weight has changed.
  • Step 203: Obtain the weight change value of the shelf's carried weight by gravity sensing, and obtain a weight recognition result from the weight change value and the weight of each product in the shopping place.
  • A total gravity value table records the types of products in the shopping place, their corresponding weight values, and the shelves where they are initially placed. It should be noted that for steps 201 to 203, reference can be made to the related descriptions of the foregoing embodiment, which are not repeated here.
  • Step 204: Obtain, by visual recognition, the visual recognition result corresponding to the product involved in the customer's pick-and-place behavior, where the time at which the pick-and-place behavior is performed matches the time at which the shelf's carried weight changed.
  • The customer's pick-and-place behavior is photographed, so that images of the customer during the shopping process are obtained.
  • The cameras can be arranged above the front of the three-dimensional space formed by the shelf, for example on the ceiling of the shopping place or on the shelf itself, with the shooting direction facing downward.
  • Pick-and-place behavior can be recognized from the trajectory of the action: if the customer has a product in hand and gradually moves away from the shelf, the behavior is recognized as a take behavior for that product; if the customer has a product in hand and gradually approaches the shelf, the behavior is recognized as a put-back behavior for that product.
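The trajectory heuristic just described can be sketched as follows (the track format and the comparison rule are illustrative assumptions):

```python
import math

def classify_pick_place(hand_track, shelf_pos):
    """hand_track: chronological list of 2D hand positions recorded while
    a product is detected in the hand."""
    d_start = math.dist(hand_track[0], shelf_pos)
    d_end = math.dist(hand_track[-1], shelf_pos)
    if d_end > d_start:
        return "take"       # product carried away from the shelf
    return "put_back"       # product brought back toward the shelf

track = [(5.0, 3.0), (4.7, 3.0), (4.3, 3.1)]
print(classify_pick_place(track, (5.0, 3.0)))  # take
```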
  • Products can be recognized through a recognition model, such as a convolutional neural network model.
  • The input of the model can be the collected product image.
  • The output can be the picked or placed product (or the type of product in the image), that is, the product corresponding to the customer's pick-and-place action. Since the shelf position is determined from the gravity sensor that detected the change in carried weight, and the weight change is necessarily caused by a customer's pick-and-place behavior, the pick-and-place time involved in the visual recognition needs to be matched against the time at which the shelf's carried weight changed; that is, the visual recognition result whose pick-and-place time matches the weight change time is taken as the product recognition result.
  • The visual recognition result may include the type of the picked or placed product, or the type together with the quantity.
  • The pick-and-place time is the time at which the pick-and-place behavior is performed. It can be any time at which an image containing the product is collected; alternatively, if the behavior is a take behavior, the pick-and-place time is the time at which an image containing the product is first collected during the behavior (the time of taking the product), and if it is a put-back behavior, the pick-and-place time is the time at which an image containing the product is last collected during the behavior (the time of placing the product).
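The time-based pairing of the two sensing channels can be sketched as follows: each visual pick-and-place event is paired with the gravity event whose weight-change timestamp is closest, within an assumed tolerance. The event format and the tolerance value are our assumptions:

```python
def match_events(visual_events, gravity_events, max_gap_s=1.0):
    """Each event is a (timestamp_s, payload) pair. Returns a list of
    (visual_payload, gravity_payload) pairs whose timestamps match."""
    pairs = []
    for vt, v in visual_events:
        # Gravity event closest in time to this visual event, if any.
        best = min(gravity_events, key=lambda g: abs(g[0] - vt), default=None)
        if best is not None and abs(best[0] - vt) <= max_gap_s:
            pairs.append((v, best[1]))
    return pairs

visual = [(10.2, "take cola_330ml")]
gravity = [(10.5, -345.0), (42.0, 80.0)]
print(match_events(visual, gravity))  # [('take cola_330ml', -345.0)]
```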
  • The products for sale in the shopping place can be placed at will, that is, they have no fixed place.
  • Different products can also have the same weight; weights are no longer required to differ. This enriches the variety of products and better meets customers' individual needs.
  • Since image processing is involved, this can be called a vision-based technology.
  • This technology does not restrict how products are placed. For example, a product the customer decides not to buy need not be returned to its original (initial) position; it can be placed elsewhere on the same shelf or on another shelf.
  • Step 205: The visual recognition result is taken as the product recognition result.
  • Steps 204 and 205 are thus one implementation of the step of identifying the product that caused the shelf's carried weight to change and obtaining the product recognition result.
  • The matching method further includes:
  • Step 206: Determine whether the weight change value is consistent with the product weight value corresponding to the visual recognition result.
  • The product weight value corresponding to the visual recognition result can be obtained by looking it up in the total gravity value table and then comparing it with the weight change value.
  • Step 207: If they are judged inconsistent, the product recognition result is obtained from the weight change value, the weight of each product in the shopping place, and the visual recognition result.
  • One implementation of this step is as follows:
  • A weight recognition result is used as the product recognition result. If the weight change value is G, the absolute value of G is used as a constraint, and the products in the shopping place are combined so that the combined total weight is consistent with |G|. If there is only one such combination, the products constituting it are called the weight-predicted products and are taken as the actual products in the product recognition result; that is, the weight recognition result is the product recognition result. Usually in this scenario the visual recognition result will also be consistent with the weight recognition result, so the product recognition result can equally be the visual recognition result.
  • If there are multiple weight recognition results, the one with the highest degree of overlap with the visual recognition result is used as the product recognition result.
  • Overlap refers to products whose type appears in both the weight recognition result and the visual recognition result, that is, the intersection of the two. In other words, the weight recognition result closest to the visual recognition result is used as the product recognition result.
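The reconciliation just described can be sketched as follows: enumerate product combinations whose total weight matches |G|, and if several match, prefer the one sharing the most product types with the visual recognition result. The catalogue contents, tolerance, and per-event item cap are illustrative assumptions:

```python
from itertools import combinations_with_replacement

# Assumed total gravity value table: product type -> unit weight (grams).
CATALOGUE = {"cola_330ml": 345.0, "chips": 80.0, "candy_box": 425.0}

def weight_candidates(weight_change_g, tolerance_g=5.0, max_items=3):
    """All sets of product types whose combined weight explains |G|."""
    target = abs(weight_change_g)
    found = []
    for n in range(1, max_items + 1):
        for combo in combinations_with_replacement(CATALOGUE, n):
            total = sum(CATALOGUE[p] for p in combo)
            if abs(total - target) <= tolerance_g:
                found.append(set(combo))
    return found

def reconcile(weight_change_g, visual_result):
    candidates = weight_candidates(weight_change_g)
    if len(candidates) == 1:
        return candidates[0]           # unambiguous: weight result wins
    # Otherwise pick the candidate with the largest type intersection
    # with the visual recognition result.
    return max(candidates, key=lambda c: len(c & visual_result), default=None)

# |G| = 425 g is explained by either one candy_box or cola + chips;
# the visual result breaks the tie.
print(sorted(reconcile(-425.0, {"cola_330ml", "chips"})))  # ['chips', 'cola_330ml']
```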
  • Another implementation of this step is as follows:
  • If the product in the visual recognition result is a multi-specification product, the product type is determined from the visual recognition result, the product specification corresponding to the visual recognition result is determined from the product type and the weight of each product in the shopping place, and the product quantity is determined from the weight change value and the unit weight of the product.
  • A multi-specification product is a product with the same appearance but different weights, such as cola in different sizes.
  • The product recognition result then includes the product specification, the product type, and the product quantity. The selection of products in the shopping place can thus be diversified, which improves the customer experience.
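A sketch of the multi-specification case: the visual result fixes the product type, and the unit weight that divides the weight change fixes the specification and quantity. The specification table and tolerance are illustrative assumptions:

```python
# Assumed specification table: type -> {specification: unit weight (g)}.
SPECS = {"cola": {"330ml": 345.0, "500ml": 520.0}}

def resolve_spec(product_type, weight_change_g, tolerance_g=10.0):
    """Pick the specification whose unit weight explains the change as a
    whole number of items; return type, spec and quantity."""
    for spec, unit_g in SPECS[product_type].items():
        n = round(abs(weight_change_g) / unit_g)
        if n >= 1 and abs(abs(weight_change_g) - n * unit_g) <= tolerance_g:
            return {"type": product_type, "spec": spec, "quantity": n}
    return None

# Vision says "cola"; a 520 g drop can only be one 500 ml bottle.
print(resolve_spec("cola", -520.0))
```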
  • Step 208: If they are judged consistent, go to step 205.
  • If both the weight recognition result and the visual recognition result are obtained and the corresponding weight change value is consistent with the product weight value, the visual recognition result is considered correct, so the method jumps to, that is, executes, step 205.
  • Steps 204 to 208 are thus another implementation of the step of identifying the product that caused the shelf's carried weight to change and obtaining the product recognition result.
  • This embodiment of the present invention tracks customers in the shopping place in real time, obtains the position of the shelf whose carried weight has changed, identifies the product that caused the change, and obtains a product recognition result.
  • The shelf position, the customer's hand position, and the customer's actions determine the identity information of the purchasing customer, that is, the customer who matches the product recognition result. This achieves accurate matching between product recognition results and customers, helps unmanned convenience stores complete settlement, improves the accuracy of product recognition, and reduces the work of cashiers.
  • An embodiment of the present invention provides a device for matching products and customers based on vision and gravity sensing, which has the functions of the above method embodiments.
  • These functions can be realized by hardware, or by hardware executing corresponding software.
  • The matching device includes: a customer tracking module 401, a shelf position acquisition module 402, a product recognition result acquisition module 403, and a matching module 404.
  • The customer tracking module is used to obtain the customer's identity information and track the customer in the shopping place in real time; the shopping place is provided with shelves for carrying products.
  • The shelf position acquisition module is used to obtain the position of the shelf whose carried weight has changed.
  • The product recognition result acquisition module is used to identify the product that caused the carried weight of the shelf to change and obtain the product recognition result.
  • The matching module is used to determine the identity information of the purchasing customer according to the shelf position, the position of the customer's hand, and the customer's actions.
  • The purchasing customer is the customer who matches the product recognition result.
  • The matching module includes: a nearby-customer determination unit and a matching unit.
  • The nearby-customer determination unit is used to determine the customers near the shelf according to the shelf position and each customer's tracked position.
  • The customers near the shelf are those located around the shelf when the carried weight changed.
  • The matching unit is used to determine the purchasing customer according to the hand positions and actions of the customers near the shelf; the purchasing customer's action is the one that caused the shelf's carried weight to change.
  • The product recognition result acquisition module includes: a weight change value acquisition unit and a first product recognition result acquisition unit.
  • The weight change value acquisition unit is used to obtain the weight change value of the shelf's carried weight by gravity sensing.
  • The first product recognition result acquisition unit is used to obtain the weight recognition result from the weight change value and the weight of each product carried by the shelf; the weight recognition result is the product recognition result.
  • Alternatively, the product recognition result acquisition module includes: a visual recognition result acquisition unit and a second product recognition result acquisition unit.
  • The visual recognition result acquisition unit is used to obtain, by visual recognition, the visual recognition result corresponding to the product involved in the customer's pick-and-place behavior, the pick-and-place time being matched against the time at which the shelf's carried weight changed.
  • The second product recognition result acquisition unit is used to take the visual recognition result as the product recognition result.
  • The product recognition result acquisition module further includes: a judgment unit, a third product recognition result acquisition unit, and a jump unit.
  • The judgment unit is used to judge whether the weight change value is consistent with the pick-and-place weight value, the pick-and-place weight value being the weight of the picked or placed product corresponding to the pick-and-place time.
  • The third product recognition result acquisition unit is used to obtain, if they are judged inconsistent, the product recognition result from the weight change value, the weight of each product in the shopping place, and the visual recognition result.
  • The jump unit is used to perform, if they are judged consistent, the function of the second product recognition result acquisition unit.
  • By providing the customer tracking module, the shelf position acquisition module, the product recognition result acquisition module, and the matching module, the embodiment of the present invention achieves accurate matching between product recognition results and customers, helps unmanned convenience stores complete settlement, improves the accuracy of product recognition, and reduces the work of cashiers.
  • An embodiment of the present invention provides a device for matching products and customers based on vision and gravity sensing, which includes a memory and a processor.
  • The processor is connected to the memory and is configured to execute, based on instructions stored in the memory, the above-mentioned method for matching products and customers based on vision and gravity sensing.
  • An embodiment of the present invention provides a computer-readable storage medium in which at least one instruction, at least one program, a code set, or an instruction set is stored; when loaded and executed by a processor, it implements the above-mentioned method for matching products and customers based on vision and gravity sensing.
  • The computer storage medium can be a read-only memory (ROM), random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • The embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or any combination thereof.
  • The aforementioned terminal equipment can be implemented in one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic elements designed to perform the functions described herein, or a combination thereof.


Abstract

A method and apparatus for matching products with customers based on vision and gravity sensing. The method includes: acquiring a customer's identity information and tracking the customer in real time within a shopping venue (101); acquiring the shelf position of a shelf whose load-bearing weight has changed (102); identifying the product that caused the change in the shelf's load-bearing weight to obtain a product recognition result (103); and determining the identity information of the purchasing customer according to the shelf position, the customer's hand position, and the customer's actions, the purchasing customer being the customer matched with the product recognition result (104). The method achieves accurate matching between product recognition results and customers, facilitates checkout in unmanned convenience stores, and can also improve product recognition accuracy.

Description

Method and apparatus for matching products with customers based on vision and gravity sensing — Technical Field
The present invention belongs to the field of computer technology, and in particular relates to a method and apparatus for matching products with customers based on vision and gravity sensing.
Background Art
When a customer sees a product they like or need in a supermarket, store, or other shopping venue, the product can only be obtained after checkout.
In the prior art, customer-product matching is usually achieved by queuing at a checkout counter, where a cashier scans the barcode on each product with a handheld scanning terminal, thereby determining which products a given customer intends to purchase for settlement.
Because this matching process requires a cashier, whose working hours are limited and who cannot work around the clock, it cannot satisfy the shopping needs of all customers, and the customer's shopping experience is poor.
Summary of the Invention
To solve the problems in the prior art, one aspect of the present invention provides a method for matching products with customers based on vision and gravity sensing, comprising: acquiring a customer's identity information and tracking the customer in real time within a shopping venue, in which shelves for carrying products are arranged; acquiring the shelf position of a shelf whose load-bearing weight has changed; identifying the product that caused the change in the shelf's load-bearing weight and obtaining a product recognition result; and determining the identity information of the purchasing customer according to the shelf position, the customer's hand position, and the customer's actions, the purchasing customer being the customer matched with the product recognition result.
Another aspect of the present invention provides an apparatus for matching products with customers based on vision and gravity sensing, comprising: a customer tracking module for acquiring a customer's identity information and tracking the customer in real time within a shopping venue, in which shelves for carrying products are arranged; a shelf position acquisition module for acquiring the shelf position of a shelf whose load-bearing weight has changed; a product recognition result acquisition module for identifying the product that caused the change in the shelf's load-bearing weight and obtaining a product recognition result; and a matching module for determining the identity information of the purchasing customer according to the shelf position, the customer's hand position, and the customer's actions, the purchasing customer being the customer matched with the product recognition result.
A further aspect of the present invention provides an apparatus for matching products with customers based on vision and gravity sensing, comprising a memory and a processor. The processor is connected to the memory and is configured to execute, based on instructions stored in the memory, the above method for matching products with customers based on vision and gravity sensing.
Yet another aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the above method for matching products with customers based on vision and gravity sensing.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects:
By tracking customers in the shopping venue in real time, acquiring the shelf position of a shelf whose load-bearing weight has changed, identifying the product that caused the change to obtain a product recognition result, and determining the identity information of the purchasing customer (the customer matched with the product recognition result) according to the shelf position, the customer's hand position, and the customer's actions, accurate matching between product recognition results and customers is achieved, which facilitates checkout in unmanned convenience stores.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a method for matching products with customers based on vision and gravity sensing according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for matching products with customers based on vision and gravity sensing according to another embodiment of the present invention;
Fig. 3 is a schematic flowchart of another method for matching products with customers based on vision and gravity sensing according to another embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an apparatus for matching products with customers based on vision and gravity sensing according to an embodiment of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
An embodiment of the present invention provides a method for matching products with customers based on vision and gravity sensing. Referring to Fig. 1, the method includes the following steps:
Step 101: acquire the customer's identity information and track the customer in real time within the shopping venue.
Specifically, before entering a shopping venue such as a store or supermarket, and especially an unmanned store, the customer can register on a terminal device such as a mobile phone or tablet through an App (application) corresponding to, or containing, this matching method, or through a mini program or a WeChat official account; the mini program may be a mini program within WeChat. Registration can also be completed at a self-service registration terminal deployed in the shopping venue. During registration, the customer's identity information (ID) is collected; it distinguishes different customers, uniquely identifies each one, and may include one or more of an ID-card number, a mobile-phone number, a user account, a nickname, facial information, and fingerprint information. So that the customer can pay automatically at checkout without presenting a payment account, the identity information may also include a payment account, which may be a bank account or a third-party payment account such as Alipay, WeChat Pay, QQ Wallet, or JD Wallet. To better provide customers with the products they need, the identity information may also include name, gender, occupation, and so on.
A gate is arranged at the entrance of the shopping venue. After an identity recognition device acquires the customer's identity information, the information is verified; a customer who passes verification can open the gate and enter the venue to shop, while an unregistered customer, or one who fails verification, cannot open the gate and thus cannot enter. The identity recognition device may be a face recognition device, in which case the corresponding identity information is facial information, or a fingerprint recognition device, in which case it is fingerprint information. In other embodiments, the identity recognition device may be another device, such as a code-scanning terminal; the medium carrying the customer's identity information may then be a QR code, and the terminal verifies the customer's identity by scanning the QR code displayed on the customer's terminal device. The identity recognition device may also be a QR code or barcode representing the shopping venue's number, which the customer scans with the App on their terminal device to verify their identity. Shelves carrying products are arranged in the shopping venue; several shelves stacked at intervals can form a rack, there may be multiple racks, and the racks are arranged in the space of the shopping venue. After entering, a customer can stop in front of a shelf as needed and select intended purchases by performing pick-and-place actions, which include picking up and putting back: a pick-up action indicates an intention to purchase, while a put-back action indicates abandoning that intention. The venue is large enough for multiple customers to shop at the same time. To determine which customer purchased a product, each customer must be tracked in real time after entering the venue; that is, the customer's position within the venue must be acquired in real time.
Real-time tracking can be implemented with a depth-camera-based positioning method, e.g. by deploying multiple depth cameras on the ceiling of the shopping venue, pointing downward; their number can be adjusted to the size of the venue so that their combined field of view covers the whole space. As customers move around the venue, each depth camera detects the position of each customer in every image frame using an object detection algorithm, and then computes the customer's position in the three-dimensional space of the venue from the depth information. Between adjacent time points, the distances between customer positions are computed and the customer with the smallest position change is identified, completing the tracking: by comparing customer distances in this three-dimensional space across adjacent image frames, the closest customers between two frames are considered to be the same customer. Depth information is the distance from the object in three-dimensional space corresponding to each pixel captured by the depth camera to the camera itself. When a depth camera is deployed above a shelf, the customer's hand position can also be obtained with the same technique. Since the implementation involves image processing, it can be called a vision-based technique.
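The per-frame nearest-neighbour association described above can be sketched as follows. This is a minimal illustration only, not the patented implementation; the function name `track` and the `max_jump` threshold are invented for the example, and a real system would also enrol unmatched detections as new customers.

```python
import math

def track(prev_positions, curr_detections, max_jump=0.8):
    """Greedy nearest-neighbour association between adjacent frames.

    prev_positions: {customer_id: (x, y, z)} from the previous frame.
    curr_detections: list of (x, y, z) positions detected in the current
    frame (already lifted into 3-D space via depth information).
    Returns {customer_id: (x, y, z)} for the current frame; the detection
    closest to a customer's previous position is treated as the same
    customer, and detections farther than max_jump metres are ignored.
    """
    assignments = {}
    used = set()
    for cid, prev in prev_positions.items():
        best, best_d = None, max_jump
        for i, det in enumerate(curr_detections):
            if i in used:
                continue
            d = math.dist(prev, det)  # Euclidean distance in 3-D
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            assignments[cid] = curr_detections[best]
    return assignments
```

A greedy per-customer match is enough for a sketch; a production tracker would typically solve the assignment jointly (e.g. Hungarian algorithm) to avoid order-dependent matches.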
Step 102: acquire the shelf position of the shelf whose load-bearing weight has changed.
After entering the shopping venue, the customer moves around; on encountering a desired product, the customer stops in front of the shelf carrying it and then either picks the product up, indicating that it is an intended purchase, or puts it back, indicating that it is not. When the customer picks a product up, the weight carried by the shelf decreases; when the customer puts one back, the carried weight increases. The change in the shelf's carried weight is detected by gravity sensing, e.g. by fitting each shelf with a weight sensor. A detected decrease indicates that the customer took a product; a detected increase indicates that the customer returned one. Before use, the position of each shelf in the three-dimensional space of the venue is measured and associated with the weight sensor mounted on it, so that when a sensor detects a weight change, the position of the shelf whose carried weight changed can be determined.
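The sensor-to-shelf calibration and the sign convention above can be sketched as follows. This is a hedged example under assumed names: the calibration table `SHELF_POSITIONS`, the sensor ids, and the `noise_g` threshold are all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical calibration table, measured once before the venue opens:
# weight-sensor id -> shelf position (x, y, z) in venue coordinates.
SHELF_POSITIONS = {
    "sensor-07": (1.2, 4.5, 0.9),
    "sensor-08": (1.2, 4.5, 1.3),
}

@dataclass
class WeightEvent:
    shelf_position: tuple
    change_g: float   # signed change in grams (negative = weight removed)
    action: str       # "pick" (weight decreased) or "place" (increased)

def on_weight_change(sensor_id, before_g, after_g, noise_g=5.0):
    """Turn a pair of raw sensor readings into a weight event,
    or None if the change is within sensor noise."""
    change = after_g - before_g
    if abs(change) <= noise_g:
        return None
    action = "pick" if change < 0 else "place"
    return WeightEvent(SHELF_POSITIONS[sensor_id], change, action)
```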
Step 103: identify the product that caused the change in the shelf's load-bearing weight, and obtain a product recognition result.
Specifically, the weight change value of the shelf is acquired by gravity sensing, and a weight recognition result is obtained according to the weight change value and the gravity values of the products carried on the shelf; this weight recognition result is the product recognition result.
For example, each shelf is fitted with a weight sensor and carries only products of distinct weights; each lane on a shelf may hold only one kind of product, with different lanes holding products of different weights (multiple lanes can be formed by placing partitions at intervals on the shelf's support plate). The kind and weight of each product on the shelf are recorded in that shelf's gravity-value table. When a customer performs a pick-or-place action, the sensor detects the weight change value: a decrease indicates that the customer performed a pick-up action on the product, while an increase indicates a put-back action. The shelf's gravity-value table is then searched for a gravity value matching the absolute value of the weight change; if one is found, the corresponding product kind and quantity form the weight recognition result, which is taken as the product recognition result.
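The table lookup in the example above can be sketched as follows, extended slightly to handle a customer taking several units of the same product at once. The function name, table shape, and `tolerance_g` are assumptions for the sketch, not the patent's implementation.

```python
def weight_recognize(change_g, shelf_table, tolerance_g=5.0):
    """Match |change_g| against integer multiples of each product's weight.

    shelf_table: {product: unit_weight_g}; per the example, the shelf
    carries only products with distinct unit weights.
    Returns (product, quantity) on a match, or None otherwise.
    """
    target = abs(change_g)
    for product, unit in shelf_table.items():
        qty = round(target / unit)
        if qty >= 1 and abs(target - qty * unit) <= tolerance_g:
            return product, qty
    return None
```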
Step 104: determine the identity information of the purchasing customer according to the shelf position, the customer's hand position, and the customer's actions; the purchasing customer is the customer matched with the product recognition result.
Specifically, the customers near the shelf, i.e. those located around it when its carried weight changed, are determined from the shelf position and the customers' tracked positions; then, from those customers' actions and hand positions, it is determined which customer's action caused the change in the shelf's carried weight. That customer is the purchasing customer. This completes the matching of the product recognition result with the customer and determines the customer's identity information, after which the product recognition result is used to add the product to, or remove it from, that customer's shopping cart (shopping list, or virtual cart), facilitating checkout.
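The attribution logic described above can be sketched as follows, as a minimal illustration under assumed names (`find_buyer`, the `reach_in` action label, and the `reach_m` radius are inventions of the example; the patent does not specify these):

```python
import math

def find_buyer(shelf_pos, customers, reach_m=1.0):
    """customers: list of dicts with 'id', 'hand_pos' (x, y, z), and
    'action' ('reach_in' or 'idle'), sampled at the moment the shelf's
    weight changed. Returns the id of the single nearby customer whose
    hand was within reach of the shelf while performing a reaching
    action, or None when attribution is ambiguous or nobody qualifies."""
    nearby = [
        c for c in customers
        if math.dist(c["hand_pos"], shelf_pos) <= reach_m
        and c["action"] == "reach_in"
    ]
    return nearby[0]["id"] if len(nearby) == 1 else None
```

Returning None on ambiguity (two hands in the shelf at once) rather than guessing is a deliberate choice here; a fuller system would fall back to additional cues.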
In summary, the embodiment of the present invention tracks customers in the shopping venue in real time, acquires the shelf position of a shelf whose load-bearing weight has changed, identifies the product that caused the change to obtain a product recognition result, and determines the identity information of the purchasing customer (the customer matched with the product recognition result) according to the shelf position, the customer's hand position, and the customer's actions. It thus achieves accurate matching between product recognition results and customers, facilitates checkout in unmanned convenience stores, and reduces cashiers' work.
Referring to Fig. 2, another embodiment of the present invention provides a method for matching products with customers based on vision and gravity sensing, which includes the following steps:
Step 201: acquire the customer's identity information and track the customer in real time within the shopping venue.
Step 202: acquire the shelf position of the shelf whose load-bearing weight has changed.
Step 203: acquire, based on gravity sensing, the weight change value of the shelf, and obtain a weight recognition result according to the weight change value and the gravity values of the products in the shopping venue.
The kind, gravity value, and initial shelf of each product in the venue are usually recorded in a master gravity-value table. Note that for the description of steps 201 to 203, reference may be made to the corresponding description in the embodiment above; it is not repeated here.
Step 204: acquire, based on visual recognition, the visual recognition result corresponding to the product involved in the customer's pick-or-place action, the time at which the action is performed matching the time at which the shelf's load-bearing weight changed.
Specifically, the customer's pick-and-place actions are filmed so that images of the customer during shopping are obtained. The cameras can be arranged above the front of the space formed by the shelves, e.g. on the ceiling of the venue or on the racks, pointing downward. A pick-or-place action can be recognized from the motion trajectory: if the customer holds a product and gradually moves away from the shelf, the action is recognized as a pick-up; if the customer holds a product and gradually approaches the shelf, it is recognized as a put-back. The product itself can be recognized by a recognition model, such as a convolutional neural network: the model's input is the captured product image and its output is the picked-or-placed product (the kind of product in the image), i.e. the product corresponding to the customer's action. Since the shelf position is determined from the weight sensor that detected the change, and that change must have been caused by a customer's pick-or-place action, the action time involved in this visual technique must match the time of the weight change; that is, only a visual recognition result whose action time matches the weight-change time is taken as the product recognition result. The visual recognition result may include the kind of the picked-or-placed product, or both its kind and quantity.
The pick-and-place time is the time at which the action was performed. It can be any time at which an image containing the product was captured; alternatively, for a pick-up action it is the time at which an image containing the product was first captured during the action (the pick time), and for a put-back action it is the time at which such an image was last captured (the place time).
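The time matching between weight-change events and visual pick-or-place events can be sketched as follows; the tolerance window `window_s` and the event tuple shape are assumptions of this example, since the patent only requires the times to "match":

```python
def match_events(weight_events, visual_events, window_s=1.5):
    """Pair each weight-change event with the visual pick/place event
    whose action time is closest, within window_s seconds.
    Events are (timestamp_s, payload) tuples; each visual event is
    consumed at most once. Returns (weight_payload, visual_payload)
    pairs for the matched events."""
    pairs = []
    free = list(visual_events)
    for wt, wpayload in weight_events:
        best = min(free, key=lambda v: abs(v[0] - wt), default=None)
        if best is not None and abs(best[0] - wt) <= window_s:
            free.remove(best)
            pairs.append((wpayload, best[1]))
    return pairs
```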
With this technique, the products for sale in the venue can be placed freely, i.e. without fixed positions, and different products may also weigh the same, no longer being restricted to distinct weights; this enriches the range of products and better satisfies customers' individual needs. Because the product recognition involves image processing, it can be called a vision-based technique. Compared with gravity sensing alone, it does not constrain how products are placed: a product the customer decides not to buy no longer has to be returned to its original (initial) position, but can be placed on another shelf or another rack.
Step 205: the visual recognition result is the product recognition result.
Note that the description of steps 204 and 205 is also an implementation of the step of identifying the product that caused the change in the shelf's load-bearing weight and obtaining the product recognition result.
In some scenarios, for example when several products are taken at once, the products may occlude one another, which degrades the accuracy of the visual recognition result; the product in the visual recognition result is then not the one the customer actually took. For instance, the recognized product is D while the product actually taken is C, i.e. the actual product C is recognized as product D. If product D were taken as the product recognition result, the customer's checkout would be wrong and their experience of unmanned shopping would suffer. Therefore, to improve the shopping experience and the accuracy of product recognition, referring to Fig. 3, after step 204 and before step 205 the matching method further includes:
Step 206: judge whether the weight change value is consistent with the product weight value corresponding to the visual recognition result.
The product weight value corresponding to the visual recognition result can be found in the master gravity-value table and is then compared with the weight change value.
Step 207: if they are judged to be inconsistent, obtain the product recognition result according to the weight change value, the gravity values of the products in the shopping venue, and the visual recognition result.
Specifically, this step can be implemented as follows:
If only one weight recognition result can be obtained from the weight change value and the gravity values of the products in the venue, that weight recognition result is taken as the product recognition result. For example, with a weight change value G, the products in the venue are combined under the constraint that the total weight of a combination equals the absolute value of G; if exactly one such combination exists, the products forming it are called the weight-predicted products and are taken as the actual products in the product recognition result, i.e. the weight recognition result is the product recognition result. In this scenario the visual recognition result usually agrees with the weight recognition result, so the product recognition result may also be the visual recognition result.
This step can also be implemented as follows:
If multiple weight recognition results are obtained from the weight change value and the gravity values of the products in the venue, the weight recognition result with the highest overlap with the visual recognition result is taken as the product recognition result. Overlap means that the weight recognition result and the visual recognition result contain products of the same kind, i.e. the two intersect; in other words, the weight recognition result closest to the visual recognition result is taken as the product recognition result.
This step can further be implemented as follows:
If the product in the visual recognition result is a multi-specification product, i.e. products with the same appearance but different weights, such as cola in different sizes, then the product kind is determined from the visual recognition result; the product specification corresponding to the visual recognition result is determined from that kind and the gravity values of the products in the venue; and the product quantity is determined from the weight change value and that product's gravity value. The product recognition result then includes the product specification, kind, and quantity. This allows a diverse selection of products in the venue and further improves the customer experience.
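The first two implementations above, enumerating weight-consistent product combinations and breaking ties by overlap with the visual result, can be sketched as follows. The function name `fuse`, the `max_items` bound, and the tolerance are assumptions made for the example, not details from the patent.

```python
from itertools import combinations_with_replacement

def fuse(change_g, catalog, visual_kinds, tolerance_g=5.0, max_items=3):
    """catalog: {product: unit_weight_g} for all products in the venue.
    Enumerate small product combinations whose total weight matches
    |change_g|. If exactly one combination matches, return it (the
    unique weight recognition result); otherwise prefer the candidate
    sharing the most product kinds with the visual recognition result."""
    target = abs(change_g)
    candidates = []
    names = list(catalog)
    for n in range(1, max_items + 1):
        for combo in combinations_with_replacement(names, n):
            total = sum(catalog[p] for p in combo)
            if abs(total - target) <= tolerance_g:
                candidates.append(combo)
    if not candidates:
        return None
    if len(candidates) == 1:
        return candidates[0]
    # Tie-break: highest overlap (shared kinds) with the visual result.
    return max(candidates, key=lambda c: len(set(c) & set(visual_kinds)))
```

The brute-force enumeration is exponential in `max_items`; it is shown only to make the selection rule concrete, and a real system would restrict candidates to products near the affected shelf.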
Step 208: if they are judged to be consistent, jump to step 205.
In this step, if both a weight recognition result and a visual recognition result have been obtained, and the corresponding weight change value and weight value are consistent, the visual recognition result is considered correct, so the method jumps to step 205, i.e. executes step 205.
Note that the description of steps 204 to 208 is also an implementation of the step of identifying the product that caused the change in the shelf's load-bearing weight and obtaining the product recognition result.
In summary, the embodiment of the present invention tracks customers in the shopping venue in real time, acquires the shelf position of a shelf whose load-bearing weight has changed, identifies the product that caused the change to obtain a product recognition result, and determines the identity information of the purchasing customer (the customer matched with the product recognition result) according to the shelf position, the customer's hand position, and the customer's actions. It thus achieves accurate matching between product recognition results and customers, facilitates checkout in unmanned convenience stores, improves product recognition accuracy, and reduces cashiers' work.
Referring to Fig. 4, an embodiment of the present invention provides an apparatus for matching products with customers based on vision and gravity sensing, which has the functions of the above method embodiments; the functions can be implemented by hardware, or by hardware executing corresponding software. The matching apparatus includes: a customer tracking module 401, a shelf position acquisition module 402, a product recognition result acquisition module 403, and a matching module 404.
The customer tracking module is used to acquire a customer's identity information and track the customer in real time within the shopping venue, in which shelves for carrying products are arranged. The shelf position acquisition module is used to acquire the shelf position of a shelf whose load-bearing weight has changed. The product recognition result acquisition module is used to identify the product that caused the change in the shelf's load-bearing weight and obtain a product recognition result. The matching module is used to determine the identity information of the purchasing customer according to the shelf position, the customer's hand position, and the customer's actions, the purchasing customer being the customer matched with the product recognition result.
Optionally, the matching module includes a nearby-customer determination unit and a matching unit. The nearby-customer determination unit is used to determine the customers near the shelf according to the shelf position and the customers' tracked positions, the customers near the shelf being those located around the shelf when its load-bearing weight changed. The matching unit is used to determine the purchasing customer according to the hand positions and actions of the customers near the shelf, the purchasing customer's action having caused the change in the shelf's load-bearing weight.
Optionally, the product recognition result module includes a weight change value acquisition unit and a first product recognition result acquisition unit. The weight change value acquisition unit is used to acquire, based on gravity sensing, the weight change value of the shelf whose load-bearing weight has changed. The first product recognition result acquisition unit is used to obtain a weight recognition result according to the weight change value and the gravity values of the products carried on the shelf, the weight recognition result being the product recognition result.
Optionally, the product recognition result module includes a visual recognition result acquisition unit and a second product recognition result acquisition unit. The visual recognition result acquisition unit is used to acquire, based on visual recognition, the visual recognition result corresponding to the product involved in the customer's pick-or-place action, the time of the action matching the time of the shelf's weight change. The second product recognition result acquisition unit is used to take the visual recognition result as the product recognition result.
Optionally, the product recognition result module further includes a judgment unit, a third product recognition result acquisition unit, and a jump unit. The judgment unit is used to judge whether the weight change value is consistent with the pick-and-place gravity value, the pick-and-place gravity value being the gravity value of the picked or placed product corresponding to the pick-and-place time. The third product recognition result acquisition unit is used to obtain, if they are judged to be inconsistent, the product recognition result according to the weight change value, the gravity values of the products in the shopping venue, and the visual recognition result. The jump unit is used to perform, if they are judged to be consistent, the function of the second product recognition result acquisition unit.
In summary, by providing the customer tracking module, the shelf position acquisition module, the product recognition result acquisition module, and the matching module, the embodiment of the present invention achieves accurate matching between product recognition results and customers, facilitates checkout in unmanned convenience stores, improves product recognition accuracy, and reduces cashiers' work.
An embodiment of the present invention provides an apparatus for matching products with customers based on vision and gravity sensing, comprising a memory and a processor. The processor is connected to the memory and is configured to execute, based on instructions stored in the memory, the above method for matching products with customers based on vision and gravity sensing.
An embodiment of the present invention provides a computer-readable storage medium storing at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the above method for matching products with customers based on vision and gravity sensing. The computer storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
The embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the aforementioned terminal equipment may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements designed to perform the functions described herein, or a combination thereof. When the embodiments are implemented in software, firmware, middleware, microcode, program code, or code segments, they may be stored in a machine-readable medium such as a storage component.
As is clear from common technical knowledge, the present invention can be implemented by other embodiments that do not depart from its spirit or essential features. Accordingly, the embodiments disclosed above are, in all respects, merely illustrative and not exhaustive. All changes within the scope of the present invention, or within a scope equivalent to that of the present invention, are encompassed by the present invention.

Claims (10)

  1. A method for matching products with customers based on vision and gravity sensing, characterized in that the matching method comprises:
    acquiring a customer's identity information and tracking the customer in real time within a shopping venue, wherein shelves for carrying products are arranged in the shopping venue;
    acquiring the shelf position of a shelf whose load-bearing weight has changed;
    identifying the product that caused the change in the shelf's load-bearing weight, and obtaining a product recognition result;
    determining identity information of a purchasing customer according to the shelf position, the customer's hand position, and the customer's actions, the purchasing customer being the customer matched with the product recognition result.
  2. The matching method according to claim 1, characterized in that determining the identity information of the purchasing customer according to the shelf position, the customer's hand position, and the customer's actions comprises:
    determining customers near the shelf according to the shelf position and the customers' tracked positions, the customers near the shelf being those located around the shelf when its load-bearing weight changed;
    determining the purchasing customer according to the hand positions and actions of the customers near the shelf, the purchasing customer's action having caused the change in the shelf's load-bearing weight.
  3. The matching method according to claim 1, characterized in that identifying the product that caused the change in the shelf's load-bearing weight and obtaining a product recognition result comprises:
    acquiring, based on gravity sensing, the weight change value of the shelf whose load-bearing weight has changed;
    obtaining a weight recognition result according to the weight change value and the gravity values of the products carried on the shelf, the weight recognition result being the product recognition result.
  4. The matching method according to claim 1, characterized in that identifying the product that caused the change in the shelf's load-bearing weight and obtaining a product recognition result comprises:
    acquiring, based on visual recognition, a visual recognition result corresponding to the product involved in the customer's pick-or-place action, the time at which the pick-or-place action is performed matching the time at which the shelf's load-bearing weight changed;
    the visual recognition result being the product recognition result.
  5. The matching method according to claim 4, characterized in that before the visual recognition result is taken as the product recognition result, the matching method further comprises:
    judging whether the weight change value is consistent with a pick-and-place gravity value, the pick-and-place gravity value being the gravity value of the picked or placed product corresponding to the pick-and-place time;
    if they are judged to be inconsistent, obtaining the product recognition result according to the weight change value, the gravity values of the products in the shopping venue, and the visual recognition result;
    if they are judged to be consistent, jumping to the step in which the visual recognition result is taken as the product recognition result.
  6. An apparatus for matching products with customers based on vision and gravity sensing, characterized in that the matching apparatus comprises:
    a customer tracking module for acquiring a customer's identity information and tracking the customer in real time within a shopping venue, wherein shelves for carrying products are arranged in the shopping venue;
    a shelf position acquisition module for acquiring the shelf position of a shelf whose load-bearing weight has changed;
    a product recognition result acquisition module for identifying the product that caused the change in the shelf's load-bearing weight and obtaining a product recognition result;
    a matching module for determining identity information of a purchasing customer according to the shelf position, the customer's hand position, and the customer's actions, the purchasing customer being the customer matched with the product recognition result.
  7. The matching apparatus according to claim 6, characterized in that the matching module comprises:
    a nearby-customer determination unit for determining customers near the shelf according to the shelf position and the customers' tracked positions, the customers near the shelf being those located around the shelf when its load-bearing weight changed;
    a matching unit for determining the purchasing customer according to the hand positions and actions of the customers near the shelf, the purchasing customer's action having caused the change in the shelf's load-bearing weight.
  8. The matching apparatus according to claim 6, characterized in that the product recognition result module comprises:
    a weight change value acquisition unit for acquiring, based on gravity sensing, the weight change value of the shelf whose load-bearing weight has changed;
    a first product recognition result acquisition unit for obtaining a weight recognition result according to the weight change value and the gravity values of the products carried on the shelf, the weight recognition result being the product recognition result.
  9. The matching apparatus according to claim 6, characterized in that the product recognition result module comprises:
    a visual recognition result acquisition unit for acquiring, based on visual recognition, a visual recognition result corresponding to the product involved in the customer's pick-or-place action, the time at which the pick-or-place action is performed matching the time at which the shelf's load-bearing weight changed;
    a second product recognition result acquisition unit for taking the visual recognition result as the product recognition result.
  10. The matching apparatus according to claim 9, characterized in that the product recognition result module further comprises:
    a judgment unit for judging whether the weight change value is consistent with a pick-and-place gravity value, the pick-and-place gravity value being the gravity value of the picked or placed product corresponding to the pick-and-place time;
    a third product recognition result acquisition unit for obtaining, if they are judged to be inconsistent, the product recognition result according to the weight change value, the gravity values of the products in the shopping venue, and the visual recognition result;
    a jump unit for jumping, if they are judged to be consistent, to the step in which the visual recognition result is taken as the product recognition result.
PCT/CN2020/084780 2019-09-06 2020-04-14 Method and apparatus for matching products with customers based on vision and gravity sensing WO2021042730A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20739233.3A EP3809325A4 (en) 2019-09-06 2020-04-14 METHOD AND APPARATUS BASED ON VISUAL DETECTION AND GRAVITY FOR MATCHING GOODS WITH CUSTOMERS
JP2020540494A JP2022539920A (ja) 2019-09-06 2020-04-14 視覚及び重力感知に基づく商品と顧客とのマッチ方法及び装置
US16/965,563 US11983250B2 (en) 2019-09-06 2020-04-14 Item-customer matching method and device based on vision and gravity sensing

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910840293.7 2019-09-06
CN201910840293 2019-09-06
CN202010256683.2A CN112464697B (zh) 2019-09-06 2020-04-02 基于视觉和重力感应的商品与顾客的匹配方法和装置
CN202010256683.2 2020-04-02

Publications (1)

Publication Number Publication Date
WO2021042730A1 true WO2021042730A1 (zh) 2021-03-11

Family

ID=74807813

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/079896 WO2021042698A1 (zh) 2019-09-06 2020-03-18 Product recognition method, apparatus and system based on vision and gravity sensing
PCT/CN2020/084780 WO2021042730A1 (zh) 2019-09-06 2020-04-14 基于视觉和重力感应的商品与顾客的匹配方法和装置

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/079896 WO2021042698A1 (zh) 2019-09-06 2020-03-18 Product recognition method, apparatus and system based on vision and gravity sensing

Country Status (5)

Country Link
US (2) US11416718B2 (zh)
EP (2) EP4027311A4 (zh)
JP (2) JP7068450B2 (zh)
CN (2) CN112466035B (zh)
WO (2) WO2021042698A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6821009B2 (ja) * 2017-12-25 2021-01-27 图灵通诺(北京)科技有限公司 Yi Tunnel (Beijing) Technology Co., Ltd. Checkout method, apparatus and system
CN112466035B (zh) * 2019-09-06 2022-08-12 图灵通诺(北京)科技有限公司 Product recognition method, apparatus and system based on vision and gravity sensing
US11966901B2 (en) * 2020-09-01 2024-04-23 Lg Electronics Inc. Automated shopping experience using cashier-less systems
US11922386B2 (en) * 2021-03-12 2024-03-05 Amazon Technologies, Inc. Automated and self-service item kiosk
CN112950329A (zh) * 2021-03-26 2021-06-11 苏宁易购集团股份有限公司 Method, apparatus, device and computer-readable medium for generating dynamic product information
CN116403339B (zh) * 2023-06-08 2023-08-11 美恒通智能电子(广州)股份有限公司 Intelligent printer management system and method based on RFID tag recognition
CN117133078B (zh) * 2023-10-24 2023-12-29 珠海微米物联科技有限公司 Automatic vending system based on gravity sensing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090063176A1 (en) * 2007-08-31 2009-03-05 French John R Shopping cart basket monitor
CN107886655A (zh) * 2017-09-28 2018-04-06 陈维龙 Method for generating payable bills and method for matching payable bills with paid bills
CN108198052A (zh) * 2018-03-02 2018-06-22 北京京东尚科信息技术有限公司 Method, apparatus and smart shelf system for recognizing products selected by users
CN208188867U (zh) * 2018-06-01 2018-12-04 西安未来鲜森智能信息技术有限公司 Product recognition system for unmanned automatic vending
CN109409291A (zh) * 2018-10-26 2019-03-01 虫极科技(北京)有限公司 Product recognition method and system for smart containers, and shopping-order generation method

Family Cites Families (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3837475B2 (ja) * 2001-07-19 2006-10-25 独立行政法人産業技術総合研究所 自動化ショッピングシステム
US6766270B2 (en) * 2002-03-12 2004-07-20 American Gnc Corporation Gravity-reference vision system
JP2004213232A (ja) * 2002-12-27 2004-07-29 Glory Ltd レンタル物品のオートレジシステム
US8571298B2 (en) * 2008-12-23 2013-10-29 Datalogic ADC, Inc. Method and apparatus for identifying and tallying objects
JP2013008116A (ja) * 2011-06-23 2013-01-10 Japan Research Institute Ltd 課金装置、課金方法、およびプログラム
US10290031B2 (en) * 2013-07-24 2019-05-14 Gregorio Reid Method and system for automated retail checkout using context recognition
US10664795B1 (en) * 2013-09-20 2020-05-26 Amazon Technologies, Inc. Weight based item tracking
US10515309B1 (en) * 2013-09-20 2019-12-24 Amazon Technologies, Inc. Weight based assistance determination
JP2015106380A (ja) * 2013-12-02 2015-06-08 富士通フロンテック株式会社 セルフチェックアウト端末
US10657411B1 (en) * 2014-03-25 2020-05-19 Amazon Technologies, Inc. Item identification
US10713614B1 (en) * 2014-03-25 2020-07-14 Amazon Technologies, Inc. Weight and vision based item tracking
US10163149B1 (en) * 2014-03-28 2018-12-25 Amazon Technologies, Inc. Providing item pick and place information to a user
US11030541B1 (en) * 2014-06-24 2021-06-08 Amazon Technologies, Inc. Proactive resolution of event information
US10339493B1 (en) * 2014-06-24 2019-07-02 Amazon Technologies, Inc. Associating users with totes
US10242393B1 (en) * 2014-06-24 2019-03-26 Amazon Technologies, Inc. Determine an item and user action in a materials handling facility
US20160110791A1 (en) * 2014-10-15 2016-04-21 Toshiba Global Commerce Solutions Holdings Corporation Method, computer program product, and system for providing a sensor-based environment
CN105608459B (zh) * 2014-10-29 2018-09-14 阿里巴巴集团控股有限公司 商品图片的分割方法及其装置
US10332066B1 (en) * 2015-03-30 2019-06-25 Amazon Technologies, Inc. Item management system using weight
US10318917B1 (en) * 2015-03-31 2019-06-11 Amazon Technologies, Inc. Multiple sensor data fusion system
US11042836B1 (en) * 2019-06-07 2021-06-22 Amazon Technologies, Inc. Fusion of sensor data for detecting interactions at an inventory location
US10262293B1 (en) * 2015-06-23 2019-04-16 Amazon Technologies, Inc Item management system using multiple scales
US11117744B1 (en) * 2015-12-08 2021-09-14 Amazon Technologies, Inc. Determination of untidy item return to an inventory location
US10565554B2 (en) * 2016-06-10 2020-02-18 Walmart Apollo, Llc Methods and systems for monitoring a retail shopping facility
CN106408369B (zh) * 2016-08-26 2021-04-06 西安超嗨网络科技有限公司 一种智能鉴别购物车内商品信息的方法
CN107103503B (zh) * 2017-03-07 2020-05-12 阿里巴巴集团控股有限公司 一种订单信息确定方法和装置
US11238401B1 (en) * 2017-03-27 2022-02-01 Amazon Technologies, Inc. Identifying user-item interactions in an automated facility
US11087271B1 (en) * 2017-03-27 2021-08-10 Amazon Technologies, Inc. Identifying user-item interactions in an automated facility
US10943465B1 (en) * 2017-03-28 2021-03-09 Amazon Technologies, Inc. Device notification and aggregation
JP2018169752A (ja) * 2017-03-29 2018-11-01 パナソニックIpマネジメント株式会社 商品認識システム、学習済みモデル、及び商品認識方法
US20180285902A1 (en) * 2017-03-31 2018-10-04 Walmart Apollo, Llc System and method for data-driven insight into stocking out-of-stock shelves
US10424174B2 (en) * 2017-07-27 2019-09-24 Acuity-Vct, Inc. Quantitative document tampering protection and monitoring system utilizing weight and video
CN109409175B (zh) * 2017-08-16 2024-02-27 图灵通诺(北京)科技有限公司 结算方法、装置和***
RU2671753C1 (ru) * 2017-09-01 2018-11-06 Тимур Юсупович Закиров Система контроля и идентификации товара в магазине
CN109559453A (zh) * 2017-09-27 2019-04-02 缤果可为(北京)科技有限公司 用于自动结算的人机交互装置及其应用
US11301684B1 (en) * 2017-09-29 2022-04-12 Amazon Technologies, Inc. Vision-based event detection
CN107862814A (zh) * 2017-11-01 2018-03-30 北京旷视科技有限公司 自动结算***及自动结算方法
CN107833365A (zh) * 2017-11-29 2018-03-23 武汉市哈哈便利科技有限公司 一种重力感应和图像识别双控的无人售货***
US10739551B1 (en) * 2017-12-13 2020-08-11 Amazon Technologies, Inc. Compensating for orientation and temperature related changes in camera focus
CN108497839B (zh) * 2017-12-18 2021-04-09 上海云拿智能科技有限公司 可感知货品的货架
JP6821009B2 (ja) * 2017-12-25 2021-01-27 图灵通诺(北京)科技有限公司 Yi Tunnel (Beijing) Technology Co., Ltd. Checkout method, apparatus and system
CN108335408B (zh) * 2018-03-02 2020-11-03 北京京东尚科信息技术有限公司 用于自动售货机的物品识别方法、装置、***及存储介质
CN108549851B (zh) * 2018-03-27 2020-08-25 合肥美的智能科技有限公司 智能货柜内货品识别方法及装置、智能货柜
US20190333039A1 (en) * 2018-04-27 2019-10-31 Grabango Co. Produce and bulk good management within an automated shopping environment
CN109003403B (zh) * 2018-06-05 2021-09-14 西安超嗨网络科技有限公司 购物车商品称重和计价方法、装置及***
CN109102274A (zh) * 2018-06-27 2018-12-28 深圳市赛亿科技开发有限公司 一种无人超市智能购物结算方法及***
CN109064630B (zh) * 2018-07-02 2024-03-05 高堆 无人自动称重计价货柜***
CN108921540A (zh) * 2018-07-09 2018-11-30 南宁市安普康商贸有限公司 基于购买者位置定位的开放式自助销售方法及***
CA3011318A1 (en) * 2018-07-13 2020-01-13 E-Ventor Tech Kft. Automatic product identification in inventories, based on multimodal sensor operation
CA3109571A1 (en) * 2018-07-16 2020-01-23 Accel Robotics Corporation Autonomous store tracking system
US20220230216A1 (en) * 2018-07-16 2022-07-21 Accel Robotics Corporation Smart shelf that combines weight sensors and cameras to identify events
CN109353397B (zh) * 2018-09-20 2021-05-11 北京旷视科技有限公司 商品管理方法、装置和***及存储介质及购物车
US10984239B2 (en) * 2018-09-27 2021-04-20 Ncr Corporation Context-aided machine vision
CN109583942B (zh) * 2018-11-07 2021-05-11 浙江工业大学 一种基于密集网络的多任务卷积神经网络顾客行为分析方法
CN109214806B (zh) * 2018-11-20 2022-01-04 北京京东尚科信息技术有限公司 自助结算方法、装置以及存储介质
CN109649915B (zh) * 2018-11-27 2021-01-19 上海京东到家元信信息技术有限公司 一种智能货柜货物识别方法和装置
US11085809B1 (en) * 2018-12-03 2021-08-10 Amazon Technologies, Inc. Multi-channel weight sensing system
CN109766962B (zh) * 2018-12-18 2021-01-19 创新奇智(南京)科技有限公司 一种商品识别方法、存储介质及商品识别***
CN111523348B (zh) * 2019-02-01 2024-01-05 百度(美国)有限责任公司 信息生成方法和装置、用于人机交互的设备
CN109886169B (zh) * 2019-02-01 2022-11-22 腾讯科技(深圳)有限公司 应用于无人货柜的物品识别方法、装置、设备及存储介质
US20200265494A1 (en) * 2019-02-17 2020-08-20 Grabango Co. Remote sku on-boarding of products for subsequent video identification and sale
CN110197561A (zh) * 2019-06-10 2019-09-03 北京华捷艾米科技有限公司 一种商品识别方法、装置及***
US10902237B1 (en) * 2019-06-19 2021-01-26 Amazon Technologies, Inc. Utilizing sensor data for automated user identification
CN112466035B (zh) * 2019-09-06 2022-08-12 图灵通诺(北京)科技有限公司 基于视觉和重力感应的商品识别方法、装置和***
US11270546B1 (en) * 2019-12-06 2022-03-08 Amazon Technologies, Inc. Item dispenser with motion-based removal detection
US20210182921A1 (en) * 2019-12-12 2021-06-17 Amazon Technologies, Inc. Customized retail environments
JP2022528022A (ja) * 2020-03-09 2022-06-08 ▲図▼▲霊▼通▲諾▼(北京)科技有限公司 スーパーマーケット商品棚上の商品の分析方法及びシステム
US20210398097A1 (en) * 2020-03-09 2021-12-23 Yi Tunnel (Beijing) Technology Co., Ltd. Method, a device and a system for checkout
TW202205163A (zh) * 2020-07-22 2022-02-01 李建文 辨識物品及存量的系統與方法
JP7502113B2 (ja) * 2020-08-24 2024-06-18 東芝テック株式会社 商品登録装置及びその制御プログラム
US11966901B2 (en) * 2020-09-01 2024-04-23 Lg Electronics Inc. Automated shopping experience using cashier-less systems


Also Published As

Publication number Publication date
EP4027311A1 (en) 2022-07-13
JP2022501660A (ja) 2022-01-06
EP4027311A4 (en) 2023-10-04
JP7068450B2 (ja) 2022-05-16
US20210406617A1 (en) 2021-12-30
CN112464697A (zh) 2021-03-09
JP2022539920A (ja) 2022-09-14
EP3809325A4 (en) 2021-06-02
US11983250B2 (en) 2024-05-14
WO2021042698A1 (zh) 2021-03-11
US11416718B2 (en) 2022-08-16
EP3809325A1 (en) 2021-04-21
CN112466035A (zh) 2021-03-09
CN112464697B (zh) 2024-05-14
US20230160740A1 (en) 2023-05-25
CN112466035B (zh) 2022-08-12

Similar Documents

Publication Publication Date Title
WO2021042730A1 (zh) 基于视觉和重力感应的商品与顾客的匹配方法和装置
JP7229580B2 (ja) 無人販売システム
US20210304176A1 (en) Information processing system
RU2739542C1 (ru) Система автоматической регистрации для торговой точки
WO2019165892A1 (zh) 自动售货方法、装置和计算机可读存储介质
KR101678970B1 (ko) 물품 감정 방법
US20170068945A1 (en) Pos terminal apparatus, pos system, commodity recognition method, and non-transitory computer readable medium storing program
JP7417738B2 (ja) カスタマイズされた小売環境
JP7225434B2 (ja) 情報処理システム
JP2016081174A (ja) 関連付プログラム及び情報処理装置
WO2019038968A1 (ja) 店舗装置、店舗システム、店舗管理方法、プログラム
KR20110122890A (ko) 상품이미지 검색을 통한 무인 결제 시스템 및 그 방법
CN111263224A (zh) 视频处理方法、装置及电子设备
US20230125326A1 (en) Recording medium, action determination method, and action determination device
EP2570967A1 (en) Semi-automatic check-out system and method
CN111178860A (zh) 无人便利店的结算方法、装置、设备及存储介质
CN111428743B (zh) 商品识别方法、商品处理方法、装置及电子设备
CN111260685B (zh) 视频处理方法、装置及电子设备
CN114358881A (zh) 自助结算方法、装置和***
CN116665380B (zh) 一种智能结账处理方法、***、pos收银机及储介质
JP2016081498A (ja) 関連付プログラム及び情報処理装置
WO2023188068A1 (ja) 商品数特定装置、商品数特定方法、及び記録媒体
US20220270061A1 (en) System and method for indicating payment method availability on a smart shopping bin
JP2024037466A (ja) 情報処理システム、情報処理方法及びプログラム
JP2022164939A (ja) 販売システム

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020540494

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020739233

Country of ref document: EP

Effective date: 20200820

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20739233

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE