WO2021228134A1 - Identification method and device - Google Patents

Identification method and device

Info

Publication number
WO2021228134A1
WO2021228134A1 · PCT/CN2021/093304 · CN2021093304W
Authority
WO
WIPO (PCT)
Prior art keywords
image
container
goods
area
user
Application number
PCT/CN2021/093304
Other languages
English (en)
French (fr)
Inventor
王梦雄
蔡文舟
周大江
赵雄心
Original Assignee
支付宝(杭州)信息技术有限公司
Application filed by 支付宝(杭州)信息技术有限公司
Publication of WO2021228134A1

Classifications

    • G06V 20/10 (Physics — Computing; calculating or counting — Image or video recognition or understanding — Scenes; scene-specific elements — Terrestrial scenes)
    • G06V 40/20 (Physics — Computing; calculating or counting — Image or video recognition or understanding — Recognition of biometric, human-related or animal-related patterns in image or video data — Movements or behaviour, e.g. gesture recognition)
    • G07F 11/00 (Physics — Checking-devices — Coin-freed or like apparatus — Coin-freed apparatus for dispensing, or the like, discrete articles)
    • G07F 9/006 (Physics — Checking-devices — Coin-freed or like apparatus — Details other than those peculiar to special kinds or types of apparatus — Details of the software used for the vending machines)

Definitions

  • The embodiments of this specification relate to the field of automatic vending technology, and in particular to an identification method. One or more embodiments of this specification also relate to an identification device, a container, a computing device, and a computer-readable storage medium.
  • As replacements for traditional vending machines and unmanned shelves, unmanned visual containers are being rolled out rapidly in large and medium-sized cities. Unlike traditional vending machines, unmanned visual containers use machine vision to identify the goods placed into or removed from the cabinet. In the prior art, however, many object detection algorithms suffer from false detections, missed detections, and misidentification, and the overall error rate grows as the number of goods in the container increases; an identification method that improves the overall recognition accuracy for goods in a container is therefore urgently needed.
  • In view of this, the embodiments of this specification provide an identification method. One or more embodiments of this specification also relate to an identification device, a container, a computing device, and a computer-readable storage medium, to solve the technical defects in the prior art.
  • According to a first aspect of the embodiments of this specification, an identification method is provided, including: receiving a user's pick-up instruction for a container, and, based on the pick-up instruction, starting an image acquisition device to capture a first image of the goods in the container; receiving an opening instruction for the container door, and, based on the opening instruction, starting a motion sensing device to acquire the operation area in which the user takes goods from the container; receiving a closing instruction for the container door, and, based on the closing instruction, starting the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container; comparing the first image with the second image to determine the difference areas of the second image; and filtering the difference areas of the second image according to the operation area, and determining a target recognition result based on the filtered difference areas of the second image.
  • According to a second aspect of the embodiments of this specification, a container is provided, including: a cabinet body fitted with a cabinet door; an image acquisition device installed in the cabinet body, used to capture images of the goods in the container; a motion sensing device installed around the cabinet body close to the cabinet door, used to acquire the operation area in which the user takes goods from the container; and a control device installed in the cabinet body, used to control the image acquisition device and the motion sensing device so as to implement the above identification method.
  • According to a third aspect of the embodiments of this specification, an identification device is provided, including: a first image acquisition device, configured to receive a user's pick-up instruction for a container and, based on the pick-up instruction, start the image acquisition device to capture a first image of the goods in the container; an operation area acquisition device, configured to receive an opening instruction for the container door and, based on the opening instruction, start the motion sensing device to acquire the operation area in which the user takes goods from the container; a second image acquisition device, configured to receive a closing instruction for the container door and, based on the closing instruction, start the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container; a difference area acquisition device, configured to compare the first image with the second image to determine the difference areas of the second image; and a recognition result determination device, configured to filter the difference areas of the second image according to the operation area and determine a target recognition result based on the filtered difference areas of the second image.
  • According to a fourth aspect of the embodiments of this specification, a computing device is provided, including a memory and a processor; the memory is used to store computer-executable instructions, and the processor is used to execute the computer-executable instructions to: receive a user's pick-up instruction for a container, and, based on the pick-up instruction, start the image acquisition device to capture a first image of the goods in the container; receive an opening instruction for the container door, and, based on the opening instruction, start the motion sensing device to acquire the operation area in which the user takes goods from the container; receive a closing instruction for the container door, and, based on the closing instruction, start the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container; compare the first image with the second image to determine the difference areas of the second image; and filter the difference areas of the second image according to the operation area, and determine a target recognition result based on the filtered difference areas of the second image.
  • According to a fifth aspect of the embodiments of this specification, a computer-readable storage medium is provided, which stores computer-executable instructions that, when executed by a processor, implement the steps of the identification method.
  • An embodiment of this specification implements an identification method and device, and a container, wherein the identification method includes receiving a user's pick-up instruction for a container, and, based on the pick-up instruction, starting an image acquisition device to capture a first image of the goods in the container; receiving an opening instruction for the container door, and, based on the opening instruction, starting a motion sensing device to acquire the operation area in which the user takes goods from the container; receiving a closing instruction for the container door, and, based on the closing instruction, starting the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container; comparing the first image with the second image to determine the difference areas of the second image; and filtering the difference areas of the second image according to the operation area, and determining a target recognition result based on the filtered difference areas of the second image. By using the operation area, acquired by the motion sensing device, in which the user takes goods from the container to correct the difference areas between the images captured before and after the goods are taken, the method can accurately identify, from the corrected result, the goods the user has taken and their positions.
  • FIG. 1 is a flowchart of an identification method provided by an embodiment of this specification;
  • FIG. 2 is a schematic cross-sectional view of a wide-angle camera over a single-layer goods rack in a container, in an identification method provided by an embodiment of this specification;
  • FIG. 3 is a schematic diagram of the shooting field of view of each image acquisition device over a single-layer goods rack, in an identification method provided by an embodiment of this specification;
  • FIG. 4 is a first image of goods on a single-layer goods rack captured by an image acquisition device, in an identification method provided by an embodiment of this specification;
  • FIG. 5 is a schematic structural diagram of a container, in an identification method provided by an embodiment of this specification;
  • FIG. 6 is a second image of goods on a single-layer goods rack captured by an image acquisition device, in an identification method provided by an embodiment of this specification;
  • FIG. 7 is a processing flowchart of a specific application scenario of an identification method provided by an embodiment of this specification;
  • FIG. 8 is a schematic structural diagram of a container provided by an embodiment of this specification;
  • FIG. 9 is a schematic structural diagram of an identification device provided by an embodiment of this specification;
  • FIG. 10 is a structural block diagram of a computing device provided by an embodiment of this specification.
  • In the following description, numerous specific details are set forth to facilitate a thorough understanding of this specification. However, this specification can be implemented in many ways other than those described here, and those skilled in the art can make similar generalizations without departing from its meaning, so this specification is not limited by the specific implementations disclosed below.
  • The terms used in one or more embodiments of this specification are for the purpose of describing particular embodiments only and are not intended to be limiting. The singular forms "a", "the", and "said" are intended to include the plural forms as well, unless the context clearly indicates otherwise; the term "and/or" refers to and encompasses any and all possible combinations of one or more of the associated listed items.
  • Although the terms first, second, etc. may be used to describe various information in one or more embodiments of this specification, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of one or more embodiments of this specification, the first may also be referred to as the second, and similarly, the second may also be referred to as the first.
  • Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
  • In this specification, an identification method is provided. This specification also relates to an identification device, a container, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments.
  • FIG. 1 shows a flowchart of an identification method provided according to an embodiment of this specification, including step 102 to step 110.
  • Step 102: Receive a user's pick-up instruction for a container, and, based on the pick-up instruction, start the image acquisition device to capture a first image of the goods in the container.
  • In specific implementations, the identification method provided in the embodiments of this specification is applied to unmanned visual containers, that is, containers that can identify the goods placed into or taken from the cabinet through machine vision. The goods include, but are not limited to, any kind of commodity that can be sold through the container, such as beverages, books, and fast food; the identification method itself runs in the control device of the container.
  • Receiving the user's pick-up instruction for the container can be understood as receiving the user's pick-up operation, on the container, for the goods in the container, such as receiving a pick-up instruction formed when the user clicks the purchase button on the container, or receiving a pick-up instruction formed when the user scans the shopping QR code on the container with a smart terminal.
  • After the user's pick-up instruction for the container is received, the image acquisition device installed in the container is started based on the pick-up instruction to capture the first image of the goods in the container. Specifically, multiple goods racks are set up in the container, and an image acquisition device is installed at the top of the container body and at the center of the lower panel of each goods rack, so as to capture images of the goods placed on each rack layer; in practical applications, the image acquisition device can be a wide-angle camera.
  • FIG. 2 shows a schematic cross-sectional view of the wide-angle camera over a single-layer goods rack in the container, for the case where the image acquisition device is a wide-angle camera.
  • FIG. 3 shows a schematic diagram of the shooting field of view of each image acquisition device over a single-layer goods rack after the image acquisition devices are installed in the container.
  • FIG. 4 shows the first image, captured by the image acquisition device after the user's pick-up instruction for the container is received, of the goods on a single-layer goods rack in the container. In practical applications, a container is fitted with multiple layers of goods racks, each with an image acquisition device installed at the center of its lower panel; after the user's pick-up instruction for the container is received, every image acquisition device is started to capture the first image of the goods on its rack layer. The first image is therefore composed of the captured first images of the goods on every rack layer in the container.
  • In practical applications, to ensure the security of the container, the user's identity is confirmed first after the user's pick-up instruction is received. The specific implementation is as follows: before the receiving of the user's pick-up instruction for the container and the starting, based on the pick-up instruction, of the image acquisition device to capture the first image of the goods in the container, the method further includes: acquiring attribute information of the user, and determining the identity of the user based on the attribute information. Accordingly, the starting of the image acquisition device to capture the first image of the goods in the container includes: starting the image acquisition device to capture the first image of the goods in the container when the identity of the user is determined to satisfy the opening condition of the container door.
  • The user's attribute information includes, but is not limited to, the user's fingerprint, facial features, and/or account name.
  • Specifically, after the user's pick-up instruction for the container is received, the user's attribute information is acquired, and the user's identity is then determined based on that attribute information. A specific application scenario is as follows: after receiving the pick-up instruction formed when the user scans the shopping QR code on the container with a smart terminal (such as a mobile phone), the container starts its own camera, scans the user's face to acquire the user's facial features, and uploads those features to the corresponding server, which identifies the user based on them. Alternatively, after receiving the pick-up instruction formed when the user scans the shopping QR code with a smart terminal, the container starts its own fingerprint acquisition device, guides the user to enter a fingerprint, acquires the user's fingerprint features, and uploads them to the corresponding server, which identifies the user based on the fingerprint features.
  • After the user's identity is determined, and when the identity is determined to satisfy the opening condition of the container door, the image acquisition device is started to capture the first image of the goods in the container. The opening condition of the container door can be set according to the actual application and is not limited here; for example, the opening condition may be that the user's credit score is greater than or equal to 600 points, or that the user has purchased goods through this kind of container more than 30 times.
  • For example, suppose the opening condition of the container door is that the user's credit score is greater than or equal to 600 points. After the user's identity (that is, the user's real name and credit status) is determined and the user's credit score is found to be 650 points, the credit score satisfies the opening condition, and the image acquisition device can then be started to capture the first image of the goods on every goods rack in the container.
  • In the identification method of the embodiments of this specification, after the user's pick-up instruction for the container is received, the user's identity is confirmed before the image acquisition device is started to capture the first image of the goods in the container. First, confirming the user's identity ensures that the user picking up goods is trustworthy, which protects the security of the container. Second, the image acquisition device is started only when the user's identity is confirmed to satisfy the door-opening condition, which avoids wasting resources by capturing images of the goods in response to an invalid pick-up instruction (for example, an accidental touch of the container's pick-up button).
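  • As an illustration only, a minimal Python sketch of this gating logic follows; `identity_service` and `cameras` are hypothetical interfaces not named in the patent, and the credit threshold of 600 points follows the example above:

```python
# Hypothetical sketch of the gating described above: the rack cameras are
# started only after the user's identity satisfies the door-opening condition.
CREDIT_THRESHOLD = 600  # example opening condition used in the text

def handle_pickup_instruction(user_attributes, identity_service, cameras):
    """Verify the user, then capture the first image from every rack camera."""
    identity = identity_service.identify(user_attributes)  # e.g. face or fingerprint
    if identity is None or identity.credit_score < CREDIT_THRESHOLD:
        return None  # invalid or unauthorized pick-up: cameras stay off
    # One camera per rack layer; the "first image" is this set of per-layer images.
    return [camera.capture() for camera in cameras]
```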
  • Step 104: Receive an opening instruction for the container door, and, based on the opening instruction, start the motion sensing device to acquire the operation area in which the user takes goods from the container.
  • Receiving the opening instruction for the container door can be understood as receiving the user's opening operation on the container door; for example, the opening instruction is generated when the user opens the container door by hand. After the control device of the container receives the user's opening instruction for the container door, it starts the motion sensing device based on the opening instruction to acquire the operation area in which the user takes goods from the container.
  • Specifically, the starting of the motion sensing device to acquire the operation area in which the user takes goods from the container includes: starting the motion sensing device to acquire the coordinate positions at which the user's limbs stay inside the container while the user takes the goods.
  • The motion sensing device includes, but is not limited to, infrared sensors installed on at least two adjacent sides of the container body, close to the cabinet door. In practical applications, to ensure that the motion sensing device accurately captures the operation area in which the user takes goods, motion-sensing infrared sensors can be installed at equal spacing all around the container body near the door; see FIG. 5, which shows the sensors (i.e., the infrared sensors) installed at equal spacing around the inside of the container near the door, forming a sensing plane, or sensing net, at the cabinet door. In addition, the spacing of the infrared sensors can be adjusted to the height of each rack layer in the container, so that an object the size of an arm's cross-section passing between any two rack layers can be reliably sensed; when all the infrared sensors are on, they form a sensing plane or sensing net at the cabinet door.
  • In specific implementations, all the motion sensing devices are started to acquire the coordinate positions at which the user's arm, wrist, or palm stays inside the container while the user picks up goods. For example, all the infrared sensors are started, forming a sensing net at the container door; when the user's arm reaches into any rack layer to select goods, the infrared sensors locate and record each position where the user stays and its coordinates, and they stop recording and shut off only when the user's arm has left the cabinet door and the door is closed.
  • Specifically, therefore, the operation area in which the user takes goods from the container can be understood as a two-dimensional coordinate area, on the vertical plane of the sensing net formed by the motion sensing devices at the cabinet door, recorded when the devices detect the user reaching into the cabinet.
  • In the identification method provided by the embodiments of this specification, motion sensing devices are arranged evenly around the cabinet body close to the cabinet door, spaced according to the separation of the goods racks, so the coordinate positions at which the user's limbs stay inside the container while taking goods can be recorded accurately. The operation area can then be used later to correct the areas where goods are missing, improving the accuracy of goods identification.
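  • The patent does not give an implementation for turning the sensing net into coordinates; the following is a minimal sketch under the assumption that each broken infrared beam reports an (x, y) point on the door plane, with `read_blocked_beams`, `door_is_open`, and the grid edges supplied by a hypothetical hardware layer:

```python
import bisect

def record_operation_area(read_blocked_beams, door_is_open, layer_edges, column_edges):
    """Accumulate the (layer, column) cells the user's limb passes through
    while the door is open; the result is the recorded operation area."""
    operation_cells = set()
    while door_is_open():
        for x, y in read_blocked_beams():  # (x, y) of each blocked beam on the door plane
            layer = bisect.bisect_right(layer_edges, y) - 1   # rack layer index
            column = bisect.bisect_right(column_edges, x) - 1  # column index
            if layer >= 0 and column >= 0:
                operation_cells.add((layer, column))
    return operation_cells
```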
  • Step 106: Receive a closing instruction for the container door, and, based on the closing instruction, start the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container.
  • Receiving the closing instruction for the container door can be understood as receiving the user's closing operation on the container door; for example, the closing instruction is generated when the user closes the container door by hand. After the user's closing instruction is received, the motion sensing device is shut off based on the closing instruction, and the image acquisition device is then started again to capture a second image, corresponding to the first image, of the goods in the container.
  • FIG. 6 shows the second image, captured by the restarted image acquisition device after the user's closing instruction for the container door is received, corresponding to the first image of the goods on the single-layer goods rack in the container.
  • In practical applications, after the closing instruction is received, the image acquisition device is started again to capture the second image of the goods on every rack layer in the container; the second image can therefore be understood as being composed of the captured second images of the goods on every rack layer. That is, over the whole pick-up operation of triggering pick-up, opening the door, taking goods, and closing the door, the image acquisition device captures the goods on each rack layer twice: once after pick-up is triggered and before the container door is opened, producing the first image, and once after the container door is closed, producing the second image.
  • Step 108: Compare the first image with the second image to determine the difference areas of the second image.
  • Specifically, after the first image and the corresponding second image are acquired, the difference areas of the second image can be determined from the two images. The specific implementation is as follows: the comparing of the first image with the second image to determine the difference areas of the second image includes: acquiring the first pixels of the first image and the second pixels of the second image based on a preset image processing method; and comparing the first pixels of the first image with the second pixels of the second image, and determining the positions where the pixels differ as the difference areas of the second image.
  • The preset image processing method can be any existing method that can obtain image pixels, and is not limited here. In practical applications, the positional differences between the first image and the second image can also be acquired through a deep learning model, which is likewise not limited here.
  • In the embodiments of this specification, the identification method determines the difference areas of the second image by comparing the first pixels of the first image with the second pixels of the second image; without any other complicated computation, the difference areas can be determined quickly, and along with the faster detection of difference areas, the overall efficiency of identifying the goods in the screened difference areas is also greatly improved.
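  • As a minimal sketch of this pixel comparison (assuming NumPy, grayscale images of equal size, and a grid aligned with the rack layout; the threshold value is illustrative, not from the patent):

```python
import numpy as np

def difference_areas(first, second, rows, cols, threshold=12.0):
    """Return the (row, col) grid cells where the two images differ."""
    diff = np.abs(second.astype(np.int16) - first.astype(np.int16))
    h, w = diff.shape
    cells = set()
    for r in range(rows):
        for c in range(cols):
            block = diff[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            if block.mean() > threshold:  # mean absolute pixel difference per cell
                cells.add((r, c))
    return cells
```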
  • Step 110: Filter the difference areas of the second image according to the operation area, and determine a target recognition result based on the filtered difference areas of the second image.
  • Specifically, after the difference areas of the second image are obtained, they are first screened using the operation area, deleting areas that show a difference but saw no operation, which reduces the subsequent computation on difference areas and improves both recognition accuracy and speed. The specific implementation is as follows: the filtering of the difference areas of the second image according to the operation area includes: determining the position of the operation area in the container and the position of the difference areas of the second image in the container; and screening the difference areas of the second image based on these two positions, to determine the difference screening area of the second image.
  • In specific implementations, the position of each operation area in the container is determined first, for example, on which rack layer and in which column of the container each operation occurred, and likewise the position of each difference area of the second image, for example, on which rack layer and in which column the difference appears. The difference areas of the second image that have no corresponding operation area are then deleted, and the remaining difference areas are determined as the difference screening area of the second image. For example, if the first column on the left of the first rack layer shows a difference area but no operation area, it is unlikely that the user took goods there, so that difference area can be deleted; if it shows both a difference area and an operation area, the user may have taken goods from that layer and column, and the recognition model can subsequently be applied to that difference area. In other words, a difference area with no corresponding operation area only indicates that goods moved without any user operation and needs no further identification, whereas a difference area with a corresponding operation area can be attributed to the user moving goods and does require subsequent goods identification.
  • The identification method provided by the embodiments of this specification screens the difference areas of the second image using the operation area, so as to pick out the difference screening area of the second image where the user may have moved or taken goods; subsequent identification can then be run only on the goods in that screening area, which shrinks the region over which goods must be recognized and greatly improves both recognition speed and recognition accuracy.
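  • A sketch of the screening itself, under the assumption that difference cells are keyed by (row, column) within a rack layer and that the operation area supplies the set of columns the user reached into on that layer:

```python
def screen_difference_areas(difference_cells, operation_columns):
    """Drop difference cells in columns the user never reached into."""
    return {cell for cell in difference_cells if cell[1] in operation_columns}

# Example: the (0, 0) difference has no matching operation column and is dropped.
# screen_difference_areas({(0, 0), (1, 2)}, {2, 3}) -> {(1, 2)}
```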
  • In another embodiment of this specification, before the target recognition result is determined based on the filtered difference areas of the second image, the method further includes: determining the first image corresponding to the second image that contains the difference screening area; and inputting the second image containing the difference screening area and the corresponding first image into a recognition model separately, to obtain the position in the container and the name of each product in the second image containing the difference screening area, and the position in the container and the name of each product in the corresponding first image.
  • Specifically, the first image corresponding to the second image containing the difference screening area is determined first. For example, if the second image containing the difference screening area is the image of the second rack layer of the container, the corresponding first image can be understood as the first image of that second rack layer captured the first time. The second image containing the difference screening area and the corresponding first image are then input into the recognition model separately, yielding, for each of the two images, the position in the container and the name of every product.
  • The recognition model may be a pre-trained deep learning model whose input is an image and whose output is the coordinate position and name of each item in the image.
  • See FIGs. 4 and 6, where FIG. 4 is the captured first image of a certain rack layer and FIG. 6 is the captured second image of that layer containing the difference screening area. The first image in FIG. 4 and the second image in FIG. 6 are each uploaded to the recognition model on the container's server, and the model outputs the position of each product in FIG. 4, such as its row and column, and the name of each product, for example drink a, drink b, and so on; it likewise outputs the position and name of each product in FIG. 6.
  • Through the deep learning recognition model, the identification method provided by the embodiments of this specification can quickly and accurately obtain the positions and names of the goods in the second image containing the difference screening area and in the corresponding first image, so that the position and name of the goods missing from the second image can later be identified accurately.
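  • A sketch of this pre-recognition step; `model.predict` stands in for the pre-trained deep learning model's API, which the patent does not specify:

```python
def recognize_goods(model, image):
    """Return {(row, col): product_name} for every item the model detects."""
    detections = model.predict(image)  # assumed to yield (name, row, col) tuples
    return {(row, col): name for name, row, col in detections}
```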
  • In another embodiment of this specification, the determining of the target recognition result based on the filtered difference areas of the second image includes: determining, based on the difference screening area of the second image, the to-be-compared area of the first image corresponding to the second image; and comparing the difference screening area with the to-be-compared area, to determine the position in the container and the name of the goods missing from the filtered second image.
  • Specifically, after the positions and names of the goods in the two images are obtained, the difference screening area of the second image is used to determine the to-be-compared area of the corresponding first image. Continuing the example above, if the difference screening area of the second image is the first column on the right in FIG. 6, the to-be-compared area of the corresponding first image is the first column on the right in FIG. 4. Because every product in both images has already been recognized with its position and name, comparing the difference screening area with the to-be-compared area determines the position and name of the goods missing from the filtered second image; for example, the missing product in FIG. 6 is drink a in the first row of the first column on the right.
  • In the embodiments of this specification, the identification method uses the operation area, acquired by the motion sensing device, in which the user takes goods from the container to screen the difference areas between the images captured before and after the goods are taken, and uses the recognition model to pre-identify the names and positions of the goods in the screened second image and the corresponding first image, so that the position and name of the goods missing from the second image can subsequently be determined accurately and quickly from the recognition results, improving the user experience.
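  • A sketch of this final comparison, under the same assumed representations: within the screened cells, any position recognized in the first image but absent from the second is a missing (taken) item:

```python
def missing_goods(first_items, second_items, screened_cells):
    """Return {(row, col): name} of goods present before but gone afterwards."""
    return {pos: name for pos, name in first_items.items()
            if pos in screened_cells and pos not in second_items}

# Example: drink a at row 0 of the screened right-hand column was taken.
# missing_goods({(0, 3): "drink a"}, {}, {(0, 3)}) -> {(0, 3): "drink a"}
```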
  • In another embodiment of this specification, after the target recognition result is determined based on the filtered difference areas of the second image, the method further includes: determining, based on the target recognition result, the quantity of goods missing from the filtered second image and the corresponding amount to be paid.
  • Specifically, after the target recognition result is obtained, the quantity of goods missing from the filtered second image and the corresponding amount to be paid can be determined from it and then displayed to the user, which makes it convenient for the user to check and settle the purchase and improves the user experience.
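  • A settlement sketch with a hypothetical price table (prices and names are illustrative): the quantity is the count of missing items and the amount is the sum of their unit prices:

```python
def settle(missing, price_table):
    """Return (quantity, amount_to_pay) for the missing goods."""
    names = list(missing.values())
    return len(names), sum(price_table[name] for name in names)

quantity, amount = settle({(0, 3): "drink a"}, {"drink a": 3.5})
# -> quantity == 1, amount == 3.5
```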
  • The identification method provided by the embodiments of this specification uses a motion position sensing device, composed of infrared sensors arranged around the inside of the cabinet door, to detect the layer and direction at which the user's body enters the container to take goods, assisting the goods recognition algorithm and improving its precision, thereby improving the overall accuracy of goods identification.
  • FIG. 7 shows a processing flowchart of a specific application scenario of an identification method according to an embodiment of the present specification, including step 702 to step 722.
  • Step 702: Acquire the door-opening images.
  • Specifically, after the transaction instruction triggered by the user for the container is received, the image acquisition device is started to capture the first image of the goods on every rack layer in the container, so as to save the state of the goods on each rack layer before the container door is opened; the system then waits for the user to open the door.
  • Step 704: Release the lock so the door can open.
  • Specifically, after the first image has been captured, the lock is released so that the container door can be opened.
  • Step 706: Turn on the sensors.
  • Specifically, the opening instruction for the container door is received, and the motion sensing device is started; for example, the moment the user opens the container door, the motion sensing device is started and its detection results are recorded.
  • Step 708: Record the sensing data.
  • Specifically, the user's operation of taking goods from the container is captured through the motion sensing device; for example, the operation behavior is recorded whenever a part of the user's body enters the container to take goods.
  • Step 710: Calculate the sensing range.
  • Specifically, the operation area in which the user takes goods from the container is calculated; for example, when part of the user's body enters the container to take goods, the vertical and horizontal sensors corresponding to the entry position locate the rack layer and direction at which the user entered the container.
  • Step 712: The user closes the door.
  • Specifically, the user closes the container door after taking the goods.
  • Step 714: Acquire the door-closing images.
  • Specifically, the closing instruction for the container door is received, and the image acquisition device is started to capture the second image, corresponding to the first image, of the goods in the container; for example, after the user closes the container door, the goods on the racks in the container are photographed again.
  • Step 716: Determine whether there is a difference; if yes, proceed to step 718; if not, end.
  • Specifically, the two sets of images of the goods on the racks, taken before the container door was opened and after it was closed, are compared by differencing, to determine whether the image of the goods on each rack layer contains a difference area.
  • Step 718: Remove differences in non-sensing areas.
  • Specifically, if the image of the goods on a rack layer contains difference areas, the difference areas that have no corresponding sensing area are screened out and deleted based on the recorded sensing areas, removing the differences in non-sensing areas.
  • Step 720: Keep differences that have a sensing area.
  • Specifically, if a difference area in the image of the goods on a rack layer is determined, based on the recorded sensing areas, to have a corresponding sensing area, that difference area is determined as the difference screening area.
  • Step 722: Compare the difference areas.
  • Specifically, using the method provided by the above embodiments, the two captured images containing the difference screening area are compared to determine the position and name of the goods missing from the rack layer corresponding to the difference screening area, and the identification process ends.
  • The identification method provided in the embodiments of this specification uses a motion position sensing device, composed of infrared sensors arranged around the inside of the cabinet door, to detect the layer and direction at which the user's body enters the container to take goods, and uses the sensing areas so obtained to assist the goods recognition algorithm and improve its precision, thereby improving the overall accuracy of goods identification.
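  • Tying steps 702 to 722 together, a sketch of a whole pick-up session could reuse the helpers sketched above; all hardware interfaces (`cameras`, `sensors`, `door`) are assumptions, not part of the patent:

```python
def pickup_session(cameras, model, sensors, door, price_table, grid=(4, 8)):
    first = [cam.capture() for cam in cameras]                   # 702: door-opening images
    door.unlock()                                                # 704: release the lock
    operation = record_operation_area(sensors.read_blocked_beams,
                                      door.is_open,
                                      sensors.layer_edges,
                                      sensors.column_edges)      # 706-712: sensing
    second = [cam.capture() for cam in cameras]                  # 714: door-closing images
    taken = {}
    for layer, (img1, img2) in enumerate(zip(first, second)):
        diff = difference_areas(img1, img2, *grid)               # 716: difference check
        columns = {col for lay, col in operation if lay == layer}
        screened = screen_difference_areas(diff, columns)        # 718/720: screening
        if screened:                                             # 722: per-layer comparison
            before = recognize_goods(model, img1)
            after = recognize_goods(model, img2)
            taken.update(missing_goods(before, after, screened))
    return settle(taken, price_table)                            # quantity and amount due
```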
  • Referring to FIG. 8, which shows a schematic structural diagram of a container provided according to an embodiment of this specification, the container includes: a cabinet body 804 fitted with a cabinet door 802; an image acquisition device 806 installed in the cabinet body 804, used to capture images of the goods in the container; motion sensing devices 808 installed on adjacent sides of the cabinet body 804 close to the cabinet door 802, used to acquire the operation area in which the user takes goods from the container; and a control device installed in the cabinet body 804, used to control the image acquisition device 806 and the motion sensing devices 808 so as to implement the identification method provided in the above embodiments.
  • Optionally, at least one goods rack 810 is provided in the cabinet body 804, and an image acquisition device 806 is installed on the top layer of the cabinet body 804 and on the lower panel of each layer of the goods rack 810.
  • Specifically, for a better image capture effect, in practical applications the image acquisition device 806 can be installed at the top of the cabinet body 804 and at the center of the lower panel of each layer of the goods rack 810, to achieve more sensible image capture.
  • Optionally, the motion sensing devices 808 are installed at equal spacing around the inner wall of the cabinet body 804, close to the cabinet door 802.
  • Specifically, the installation spacing of the motion sensing devices 808 can be adjusted sensibly according to the spacing of the goods rack 810 layers installed in the cabinet body 804; in addition, to reduce the number of motion sensing devices 808 used and avoid wasting resources, the devices may be installed only on two adjacent sides of the cabinet body 804 near the cabinet door 802. In another implementation of this specification, the motion sensing devices 808 on opposite sides of the cabinet body 804 can be arranged in a staggered manner, so that every region within a rack layer lies inside the sensing area.
  • Optionally, the image acquisition device 806 includes a wide-angle camera, the motion sensing device 808 includes an infrared sensor, and each layer of the goods rack 810 shares at least four corresponding infrared sensors; specifically, the motion sensing device 808 can also include, but is not limited to, a narrow-angle camera or any sensor capable of distance measurement.
  • Optionally, the control device is further configured to determine, based on the target recognition result, the quantity of goods taken by the user at one time and the corresponding amount to be paid. Specifically, for a detailed description of how the control device identifies the goods in the container, refer to the identification method provided in the above embodiments, which is not repeated here.
  • A container provided by the embodiments of this specification includes a cabinet body fitted with a cabinet door; an image acquisition device installed in the cabinet body, used to capture images of the goods in the container; motion sensing devices installed on adjacent sides of the cabinet body close to the cabinet door, used to acquire the operation area in which the user takes goods from the container; and a control device installed in the cabinet body, used to control the image acquisition device and the motion sensing devices so as to implement any one of the identification methods.
  • Through the coordinated use of the internally installed image acquisition device and motion sensing devices, combined with the specific processing and computation of the control device, the container achieves precise localization and rapid identification of the goods on sale, greatly improving the user's buying experience.
  • Corresponding to the above method embodiments, this specification also provides identification device embodiments; FIG. 9 shows a schematic structural diagram of an identification device provided by an embodiment of this specification. As shown in FIG. 9, the device includes:
  • a first image acquisition device 902, configured to receive a user's pick-up instruction for a container and, based on the pick-up instruction, start the image acquisition device to capture a first image of the goods in the container;
  • an operation area acquisition device 904, configured to receive an opening instruction for the container door and, based on the opening instruction, start the motion sensing device to acquire the operation area in which the user takes goods from the container;
  • a second image acquisition device 906, configured to receive a closing instruction for the container door and, based on the closing instruction, start the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container;
  • a difference area acquisition device 908, configured to compare the first image with the second image to determine the difference areas of the second image; and
  • a recognition result determination device 910, configured to filter the difference areas of the second image according to the operation area, and determine a target recognition result based on the filtered difference areas of the second image.
  • Optionally, the device further includes: an identity determination module configured to acquire attribute information of the user and determine the identity of the user based on the attribute information; correspondingly, the first image acquisition device 902 is further configured to start the image acquisition device to capture the first image of the goods in the container when the identity of the user is determined to satisfy the opening condition of the container door.
  • Optionally, the operation area acquisition device 904 is further configured to: start the motion sensing device to acquire the coordinate positions at which the user's limbs stay inside the container while the user picks up goods.
  • Optionally, the difference area acquisition device 908 is further configured to: acquire the first pixels of the first image and the second pixels of the second image based on a preset image processing method; and compare the first pixels of the first image with the second pixels of the second image, determining the positions where the pixels differ as the difference areas of the second image.
  • Optionally, the recognition result determination device 910 is further configured to: determine the position of the operation area in the container and the position of the difference areas of the second image in the container; and screen the difference areas of the second image based on those two positions, to determine the difference screening area of the second image.
  • Optionally, the device further includes: an image determination module configured to determine the first image corresponding to the second image containing the difference screening area; and a position acquisition module configured to input the second image containing the difference screening area and the corresponding first image into the recognition model separately, obtaining the position in the container and the name of the goods in the second image containing the difference screening area, and the position in the container and the name of the goods in the corresponding first image.
  • Optionally, the recognition result determination device 910 is further configured to: determine, based on the difference screening area of the second image, the to-be-compared area of the first image corresponding to the second image; and compare the difference screening area with the to-be-compared area, determining the position in the container and the name of the goods missing from the filtered second image.
  • Optionally, the device further includes: an item determination module configured to determine, based on the target recognition result, the quantity of goods missing from the filtered second image and the corresponding amount to be paid.
  • FIG. 10 shows a structural block diagram of a computing device 1000 according to an embodiment of this specification.
  • The components of the computing device 1000 include, but are not limited to, a memory 1010 and a processor 1020. The processor 1020 and the memory 1010 are connected through a bus 1030, and a database 1050 is used to store data. The computing device 1000 also includes an access device 1040 that enables the computing device 1000 to communicate via one or more networks 1060.
  • Examples of these networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet.
  • The access device 1040 may include one or more of any type of wired or wireless network interface (for example, a network interface card (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (Wi-MAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, or a near field communication (NFC) interface.
  • The aforementioned components of the computing device 1000, together with other components not shown in FIG. 10, may also be connected to each other, for example via a bus. It should be understood that the structural block diagram of the computing device shown in FIG. 10 is for illustrative purposes only and is not intended to limit the scope of this specification; those skilled in the art can add or replace other components as needed.
  • The computing device 1000 can be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (for example, a tablet computer, a personal digital assistant, a laptop computer, a notebook computer, or a netbook), a mobile phone (for example, a smartphone), a wearable computing device (for example, a smart watch or smart glasses) or another type of mobile device, or a stationary computing device such as a desktop computer or PC. The computing device 1000 may also be a mobile or stationary server.
  • The processor 1020 is configured to execute the following computer-executable instructions: receive a user's pick-up instruction for a container, and, based on the pick-up instruction, start the image acquisition device to capture a first image of the goods in the container; receive an opening instruction for the container door, and, based on the opening instruction, start the motion sensing device to acquire the operation area in which the user takes goods from the container; receive a closing instruction for the container door, and, based on the closing instruction, start the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container; compare the first image with the second image to determine the difference areas of the second image; and filter the difference areas of the second image according to the operation area, and determine a target recognition result based on the filtered difference areas of the second image.
  • An embodiment of this specification also provides a computer-readable storage medium that stores computer instructions which, when executed by a processor, implement the steps of the identification method.
  • The computer instructions include computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, computer memory, read-only memory (ROM), random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium can be added to or removed from as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)

Abstract

The embodiments of this specification provide an identification method and device, wherein the method includes receiving a user's pick-up instruction for a container, and, based on the pick-up instruction, starting an image acquisition device to capture a first image of the goods in the container; receiving an opening instruction for the container door, and, based on the opening instruction, starting a motion sensing device to acquire the operation area in which the user takes goods from the container; receiving a closing instruction for the container door, and, based on the closing instruction, starting the image acquisition device to capture a second image, corresponding to the first image, of the goods in the container; comparing the first image with the second image to determine the difference areas of the second image; and filtering the difference areas of the second image according to the operation area, and determining a target recognition result based on the filtered difference areas of the second image.

Description

识别方法及装置 技术领域
本说明书实施例涉及自动售货技术领域,特别涉及一种识别方法。本说明书一个或者多个实施例同时涉及一种识别装置,一种货柜,一种计算设备,以及一种计算机可读存储介质。
背景技术
无人视觉货柜作为传统自动售货机和无人货架的替代者,以极快的节奏在各大中城市进行铺开。不同于传统的自动售货机,无人视觉货柜是通过机器视觉的方式对柜内放入或拿走的货品进行识别的。
但是现有技术中,很多物体检测算法都会存在着多检,漏检和识别错误的问题,并且货柜内的货品数量越多,越会加大整体的错误率。
因此急需提供一种可以对货柜内的货品增加整体的识别准确率的识别方法。
发明内容
有鉴于此,本说明书施例提供了一种识别方法。本说明书一个或者多个实施例同时涉及一种识别装置,一种货柜,一种计算设备,以及一种计算机可读存储介质,以解决现有技术中存在的技术缺陷。
根据本说明书实施例的第一方面,提供了一种识别方法,包括:接收用户针对货柜的取货指令,且基于所述取货指令启动图像采集装置采集所述货柜内货品的第一图像;接收所述货柜柜门的开启指令,且基于所述开启指令启动动作感应装置获取所述用户在所述货柜内拿取所述货品的操作区域;接收所述货柜柜门的关闭指令,且基于所述关闭指令启动所述图像采集装置采集与所述第一图像对应的所述货柜内货品的第二图像;将所述第一图像与所述第二图像进行比对,以确定所述第二图像的差异区域;根据所述操作区域对所述第二图像的差异区域进行筛选,并基于筛选后的所述第二图像的差异区域确定目标识别结果。
根据本说明书实施例的第二方面,提供了一种货柜,包括:安装有柜门的柜体;安装于所述柜体内的图像采集装置,用于采集所述货柜内货品的图像;安装于所述柜体四周且靠近所述柜门的动作感应装置,用于获取用户在所述货柜内拿取所述货品的操作区域;安装于所述柜体内的控制装置,用于控制所述图像采集装置以及所述动作感应装置以实现上述识别方法。
根据本说明书实施例的第三方面,提供了一种识别装置,包括:第一图像采集装置,被配置为接收用户针对货柜的取货指令,且基于所述取货指令启动图像采集装置采集所述货柜内货品的第一图像;操作区域获取装置,被配置为接收所述货柜柜门的开启指令,且基于所述开启指令启动动作感应装置获取所述用户在所述货柜内拿取所述货品 的操作区域;第二图像采集装置,被配置为接收所述货柜柜门的关闭指令,且基于所述关闭指令启动所述图像采集装置采集与所述第一图像对应的所述货柜内货品的第二图像;差异区域获取装置,被配置为将所述第一图像与所述第二图像进行比对,以确定所述第二图像的差异区域;识别结果确定装置,被配置为根据所述操作区域对所述第二图像的差异区域进行筛选,并基于筛选后的所述第二图像的差异区域确定目标识别结果。
根据本说明书实施例的第四方面,提供了一种计算设备,包括:存储器和处理器;所述存储器用于存储计算机可执行指令,所述处理器用于执行所述计算机可执行指令:接收用户针对货柜的取货指令,且基于所述取货指令启动图像采集装置采集所述货柜内货品的第一图像;接收所述货柜柜门的开启指令,且基于所述开启指令启动动作感应装置获取所述用户在所述货柜内拿取所述货品的操作区域;接收所述货柜柜门的关闭指令,且基于所述关闭指令启动所述图像采集装置采集与所述第一图像对应的所述货柜内货品的第二图像;将所述第一图像与所述第二图像进行比对,以确定所述第二图像的差异区域;根据所述操作区域对所述第二图像的差异区域进行筛选,并基于筛选后的所述第二图像的差异区域确定目标识别结果。
根据本说明书实施例的第五方面,提供了一种计算机可读存储介质,其存储有计算机可执行指令,该指令被处理器执行时实现所述识别方法的步骤。
本说明书一个实施例实现了一种识别方法及装置、一种货柜,其中,所述识别方法包括接收用户针对货柜的取货指令,且基于所述取货指令启动图像采集装置采集所述货柜内货品的第一图像;接收所述货柜柜门的开启指令,且基于所述开启指令启动动作感应装置获取所述用户在所述货柜内拿取所述货品的操作区域;接收所述货柜柜门的关闭指令,且基于所述关闭指令启动所述图像采集装置采集与所述第一图像对应的所述货柜内货品的第二图像;将所述第一图像与所述第二图像进行比对,以确定所述第二图像的差异区域;根据所述操作区域对所述第二图像的差异区域进行筛选,并基于筛选后的所述第二图像的差异区域确定目标识别结果;所述识别方法通过动作感应装置获取的用户针对所述货柜内拿取所述货品的操作区域,实现对货柜内货品拿取前和拿取后的差异区域进行修正后,根据修正结果可以准确的识别出用户拿取的货品以及货品位置。
附图说明
图1是本说明书一个实施例提供的一种识别方法的流程图;
图2是本说明书一个实施例提供的一种识别方法中广角摄像头在货柜内单层货品置物架上的横截面示意图;
图3是本说明书一个实施例提供的一种识别方法中图像采集装置在单层货品置物架上的拍摄视野示意图;
图4是本说明书一个实施例提供的一种识别方法中图像采集装置采集的单层货品置物架上的货品的第一图像;
图5是本说明书一个实施例提供的一种识别方法中货柜的结构示意图;
图6是本说明书一个实施例提供的一种识别方法中图像采集装置采集的单层货品置物架上的货品的第二图像;
图7是本说明书一个实施例提供的一种识别方法的具体应用场景的处理流程图;
图8是本说明书一个实施例提供的一种货柜的结构示意图;
图9是本说明书一个实施例提供的一种识别装置的结构示意图;
图10是本说明书一个实施例提供的一种计算设备的结构框图。
具体实施方式
在下面的描述中阐述了很多具体细节以便于充分理解本说明书。但是本说明书能够以很多不同于在此描述的其它方式来实施,本领域技术人员可以在不违背本说明书内涵的情况下做类似推广,因此本说明书不受下面公开的具体实施的限制。
在本说明书一个或多个实施例中使用的术语是仅仅出于描述特定实施例的目的,而非旨在限制本说明书一个或多个实施例。在本说明书一个或多个实施例和所附权利要求书中所使用的单数形式的“一种”、“所述”和“该”也旨在包括多数形式,除非上下文清楚地表示其他含义。还应当理解,本说明书一个或多个实施例中使用的术语“和/或”是指并包含一个或多个相关联的列出项目的任何或所有可能组合。
应当理解,尽管在本说明书一个或多个实施例中可能采用术语第一、第二等来描述各种信息,但这些信息不应限于这些术语。这些术语仅用来将同一类型的信息彼此区分开。例如,在不脱离本说明书一个或多个实施例范围的情况下,第一也可以被称为第二,类似地,第二也可以被称为第一。取决于语境,如在此所使用的词语“如果”可以被解释成为“在……时”或“当……时”或“响应于确定”。
在本说明书中,提供了一种识别方法,本说明书同时涉及一种识别装置,一种货柜,一种计算设备,以及一种计算机可读存储介质,在下面的实施例中逐一进行详细说明。
参见图1,图1示出了根据本说明书一个实施例提供的一种识别方法的流程图,包括步骤102至步骤110。
步骤102:接收用户针对货柜的取货指令,且基于所述取货指令启动图像采集装置采集所述货柜内货品的第一图像。
具体实施时,本说明书实施例提供的所述识别方法应用于无人视觉货柜,即可以通过机器视觉的方式对柜内放入或拿走的货品进行识别的货柜;其中,货品包括但不限于饮料、书籍、速食等任何一种可通过货柜出售的商品;而所述识别方法则应用于货柜的控制装置中。
其中,接收用户针对货柜的取货指令,即可以理解为接收用户在货柜上针对该货柜内货品的取货操作,例如接收用户点击货柜上的购买按钮形成的取货指令,又或者接收用户通过智能终端扫描货柜上的购物二维码形成的取货指令。
而在接收用户针对货柜的取货指令后,基于所述取货指令启动货柜内安装的图像 采集装置采集该货柜内货品的第一图像;具体的,货柜内会设置多个货品置物架,图像采集装置则安装于该货柜的柜体顶部以及每个货品置物架的下面板的中心位置,以实现对每层货品置物架上放置的货品进行图像采集;实际应用中,图像采集装置可以为广角摄像头。
参见图2,图2中展示在图像采集装置为广角摄像头的情况下,该广角摄像头在货柜内单层货品置物架上的横截面示意图。
参见图3,图3示出了在货柜内安装图像采集装置后,每个图像采集装置在单层货品置物架上的拍摄视野示意图。
参见图4,图4示出了在接收用户针对货柜的取货指令后,启动图像采集装置采集的该货柜内单层货品置物架上的货品的第一图像;实际应用中,一个货柜内会设置多层货品置物架,每层货品置物架的下面板的中心位置均会安装一个图像采集装置,而在接收用户针对货柜的取货指令后,会启动每个图像采集装置采集该货柜内每层货品置物架上的货品的第一图像,因此,第一图像由采集的该货柜内每层货品置物架上的货品的第一图像组成。
实际应用中,为了保证货柜的安全性,在接收用户针对货柜的取货指令后,会先对用户的身份进行确认,具体实现方式如下:所述接收用户针对货柜的取货指令,且基于所述取货指令启动图像采集装置采集所述货柜内货品的第一图像之前,还包括:获取所述用户的属性信息,且基于所述用户的属性信息确定所述用户的身份;相应的,所述启动图像采集装置采集所述货柜内货品的第一图像包括:在确定所述用户的身份满足所述货柜柜门的开启条件的情况下,启动图像采集装置采集所述货柜内货品的第一图像。
其中,用户的属性信息包括但不限于用户的指纹、面部特征和/或用户的账户名等。
具体的,接收用户针对货柜的取货指令后,会获取该用户的属性信息,然后基于该用户的属性信息确定该用户的身份;具体应用场景如下:接收用户通过智能终端(例如手机)扫描货柜上的购物二维码形成的取货指令后,货柜启动安装在自身的摄像装置,对该用户的面部进行扫描,获取该用户的面部特征,然后将该用户的面部特征上传至对应服务器,该服务器基于该用户的面部特征对该用户的身份进行识别;或者接收用户通过智能终端(例如手机)扫描货柜上的购物二维码形成的取货指令后,货柜启动安装在自身的指纹采集装置,指导该用户进行指纹输入,获取该用户的指纹特征,然后将该用户的指纹特征上传至对应服务器,该服务器基于该用户的指纹特征对该用户的身份进行识别。
确定用户的身份后,在确定该用户身份满足该货柜柜门的开启条件的情况下,启动图像采集装置采集该货柜内货品的第一图像,其中,货柜柜门的开启条件可以根据实际应用进行设置,在此不作任何限定,例如货柜柜门的开启条件为该用户的信用分大于等于600分,或者是该用户通过此种货柜购买货品次数超过30次等。
举例说明,若货柜柜门的开启条件为该用户的信用分大于等于600分,则确定用户的身份后,在确定该用户身份满足该货柜柜门的开启条件的情况下,启动图像采集装 置采集该货柜内货品的第一图像,可以理解为,在确定用户的身份(即该用户的真实姓名、信用情况)后,确定该用户的信用分为650分,那么可以确定该用户的信用分大于该货柜柜门的开启条件:该用户的信用分大于等于600分,此时,则可以启动图像采集装置采集该货柜内每层货品置物架上货品的第一图像。
本说明书实施例中的所述识别方法,在接收用户针对货柜的取货指令后,对该用户的身份进行确认后,才会启动图像采集装置采集所述货柜内货品的第一图像;第一:对用户的身份进行确认可以保证取货用户的安全性,以保障货柜的安全;第二:在确认用户的身份满足开启货柜柜门的情况下,才会启动图像采集装置采集所述货柜内货品的第一图像,避免在接收到用户针对货柜的无效取货指令(例如错误触碰到货柜的取货按钮)的情况下,也启动图像采集装置对货柜内货品进行图像采集,造成资源浪费。
步骤104:接收所述货柜柜门的开启指令,且基于所述开启指令启动动作感应装置获取所述用户在所述货柜内拿取所述货品的操作区域。
其中,接收货柜柜门的开启指令,即可以理解为接收用户在货柜上针对柜门的开启操作,例如接收用户用手打开货柜上柜门的操作,实现的开启指令。
例如,货柜的控制装置接收用户针对货柜柜门的开启指令后,则基于该开启指令启动动作感应装置获取用户在货柜内拿取货品的操作区域。
具体的,为了对动作感应装置获取的用户在货柜内拿取货品的操作区域进行详细的描述,具体实现方式如下所述:
所述启动动作感应装置获取用户在所述货柜内拿取所述货品的操作区域包括:
启动动作感应装置获取用户在所述货柜内拿取所述货品时肢体在所述货柜内停留的坐标位置。
其中,动作感应装置包括但不限于安装于货柜柜体的至少相邻两侧且靠近柜门的红外线传感器;而实际应用中,为了保证动作感应装置可以准确的获取用户在货柜内拿取货品的操作区域,可以在货柜柜体靠近柜门的四周均等距离的安装可以感知动作的红外线传感器,具体参见图5,由图5中可以看出,感应器(即红外线感应器)等距离安装于货柜柜体内靠近柜门的四周,在柜门处形成一个感知面或者感知网;此外,红外线传感器的距离还可以根据货柜内每层货品置物架的层高进行适应性调整,以确保每层货品置物架之间都能够有效的感知到手臂截面大小的物体通过,而所有的红外线传感器在开启时,可以柜门处形成一个感知面或者感知网。
具体实施时,启动所有的动作感应装置获取用户在该货柜内拿取货品时手臂、手腕或手掌在该货柜内停留的坐标位置,例如启动所有的红外线传感器,在货柜柜门处形成感知网,用户的手臂伸入货柜的任何一层货品置物架挑选货品,此时红外线传感器会将该用户每次停留的位置以及该位置所处的坐标均定位记录下来,直到用户的手臂离开柜门且柜门关闭,红外线传感器才会停止记录,并关闭。
因此,具体的,用户在所述货柜内拿取所述货品的操作区域可以理解为动作感应装置感知获取的用户侵入柜体时,由动作感应装置在柜门处形成的感知网组成的竖直平 面的二维坐标操作区域。
本说明书实施例提供的所述识别方法,在柜体靠近柜门的四周,根据货品置物架的间隔距离均匀的设置动作感应装置,可以对用户在所述货柜内拿取所述货品时肢体在所述货柜内停留的坐标位置进行准确的记录,以便后续可以基于该操作区域实现对货品缺失区域的修正,提高货品识别的准确性。
步骤106:接收所述货柜柜门的关闭指令,且基于所述关闭指令启动所述图像采集装置采集与所述第一图像对应的所述货柜内货品的第二图像。
其中,接收货柜柜门的关闭指令,即可以理解为接收用户在货柜上针对柜门的关闭操作,例如接收用户用手关闭货柜上柜门的操作,实现的开启指令。
具体的,接收用户针对货柜柜门的关闭指令后,即基于该关闭指令关闭动作感应装置,然后再次启动上述图像采集装置采集与上述第一图像对应的货柜内货品的第二图像。
参见图6,图6示出了在接收用户针对货柜柜门的关闭指令后,再次启动图像采集装置采集的该货柜内、与上述单层货品置物架上货品的第一图像对应的第二图像。
实际应用中,在接收用户针对货柜柜门的关闭指令后,再次启动图像采集装置采集该货柜内每层货品置物架上货品的第二图像,因此第二图像可以理解为是由采集的该货柜内每层货品置物架上的货品的第二图像组成,也即为在触发取货、开柜门、取货、关柜门的整个在货柜的取货操作过程中,图像采集装置会针对每层货品置物架上的货品进行两次图像采集,一次在触发取货之后,打开货柜柜门之前对每层货品置物架上的货品进行第一次图像采集,即第一图像;一次关闭货柜柜门之后对每层货品置物架上的货品进行第二次图像采集,即第二图像。
步骤108:将所述第一图像与所述第二图像进行比对,以确定所述第二图像的差异区域。
具体的,在获取第一图像以及与第一图像对应的第二图像之后,可以基于第一图像与第二图像确定第二图像的差异区域,具体实现方式如下:所述将所述第一图像与所述第二图像进行比对,以确定所述第二图像的差异区域包括:基于预设图像处理方法获取所述第一图像的第一像素点以及所述第二图像的第二像素点;将所述第一图像的第一像素点与所述第二图像的第二像素点进行比对,将像素点存在差异的位置确定为所述第二图像的差异区域。
其中,预设图像处理方法可以为现有的任何一种可以获取图像像素点的方法,在此不作任何限定。
具体实施时,首先通过预设图像处理方法获取所述第一图像的第一像素点以及与所述第一像素点位于同一坐标位置的、与所述第一图像对应的所述第二图像的第二像素点,然后将所述第一图像的第一像素点与所述第二图像的第二像素点进行比对,通过比对确定第一图像的第一像素点与对应的第二图像的第二像素点是否存在位置差异,若是,则将像素点存在差异的位置标记为该第二图像的差异区域。
实际应用中,可以通过深度学习模型对第一图像以及第二图像之间的位置差异进行获取,在此不作任何限定。
本说明书实施例中,所述识别方法采用在将第一图像的第一像素点与第二图像的第二像素点进行比对,确定第二图像的差异区域的方式,无需进行其他复杂运算,就可以快速的确定出第二图像的差异区域,而在差异区域的识别速度提升的同时,也会极大的提升对筛选后差异区域中的货品的整体识别效率。
步骤110:根据所述操作区域对所述第二图像的差异区域进行筛选,并基于筛选后的所述第二图像的差异区域确定目标识别结果。
具体的,在获取到第二图像的差异区域之后,首先通过操作区域对第二图像的差异区域进行筛选,删除有差异无操作的区域,以减少后续对差异区域的计算,提升识别的准确率以及速度,具体实现方式如下:根据所述操作区域对所述第二图像的差异区域进行筛选包括:确定所述操作区域的在所述货柜的位置与所述第二图像的差异区域的在所述货柜的位置;基于所述操作区域的在所述货柜的位置与所述第二图像的差异区域的在所述货柜的位置对所述第二图像的差异区域进行筛选,以确定所述第二图像的差异筛选区域。
具体实施时,首先确定操作区域在货柜的位置,例如确定每个操作区域是在货柜的第几层货架的第几列实现的,以及确定第二图像的差异区域的在所述货柜的位置,例如确定第二图像的差异区域是在该货柜的第几层货架的第几列实现的,然后删除不存在对应的操作区域的第二图像的差异区域,且将剩余的第二图像的差异区域确定为第二图像的差异筛选区域。
举例说明,若货柜的第一层的左边第一列存在差异区域,而不存在操作区域,则可以说明用户在该货柜的第一层进行过拿取货品操作的可能性较小,因此可以删除掉该差异区域;而若货柜的第一层的左边第一列,既存在差异区域,又存在操作区域,则可以说明用户可能对该层该列的货品进行了拿取,后续可以采用识别模型对该层该列的差异区域进行识别。
实际应用中,若某个第二图像中仅存在差异区域,不存在对应的操作区域,只可以表明第二图像所对应的货品置物架中的货品发生了移动,并且是在非用户操作的情况下发生了移动,因此则无需对该差异区域的货品进行后续的识别;而若某个第二图像中仅存在差异区域,且存在对应的操作区域,此时就可以确定是由于用户在该第二图像对应的货品置物架中的货品发生了移动产生了差异区域,因此,是需要对该第二图像的差异区域进行后续的货品识别。
本说明书实施例提供的所述识别方法,通过操作区域对第二图像的差异区域进行筛选,以筛选出有可能出现用户对货品移动或拿取的第二图像的差异筛选区域,后续可以仅针对该第二图像的差异筛选区域的货品进行识别,减少了待货品识别的识别区域,极大的提升了识别速度以及识别准确率。
In another embodiment of this specification, before the determining the target identification result based on the filtered difference area of the second image, the method further includes: determining the first image corresponding to the second image containing the filtered difference area; and inputting the second image containing the filtered difference area and the corresponding first image into a recognition model respectively, to obtain the positions in the container and the names of the goods in the second image containing the filtered difference area, as well as the positions in the container and the names of the goods in the corresponding first image.
Specifically, the first image corresponding to the second image containing the filtered difference area is first determined. For example, if the second image containing the filtered difference area is an image of the second goods shelf layer of the container, the corresponding first image can be understood as the first image of that same shelf layer collected in the first acquisition. The second image containing the filtered difference area and the corresponding first image are then input into the recognition model respectively, to obtain the position in the container and the name of each item of goods in the second image containing the filtered difference area, as well as the position in the container and the name of each item of goods in the corresponding first image.
The recognition model may be a pre-trained deep learning model whose input is an image and whose output is the coordinate positions and the names of the items in the image.
Referring to FIG. 4 and FIG. 6, FIG. 4 is the collected first image of a certain goods shelf layer, and FIG. 6 is the collected second image of that shelf layer containing a filtered difference area. The first image in FIG. 4 and the second image in FIG. 6 are uploaded respectively to the recognition model on the server of the container; the recognition model outputs the position of each item of goods in FIG. 4, for example its row and column, and the name of each item, for example beverage a or beverage b, and likewise outputs the position and the name of each item of goods in FIG. 6.
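How such a recognition model might be invoked is sketched below; the `Detection` tuple layout and the stand-in model are hypothetical, standing for whatever pre-trained detector the server of the container actually hosts.

```python
from typing import Callable, List, Tuple

import numpy as np

Detection = Tuple[int, int, str]  # (shelf row, shelf column, goods name)

def recognize_goods(model: Callable[[np.ndarray], List[Detection]],
                    image: np.ndarray) -> List[Detection]:
    """Run a pre-trained recognition model on one shelf-layer image.

    The model is assumed to map an image to (row, col, name) records;
    the concrete detector behind it is not fixed by this specification.
    """
    return model(image)

# Usage with a stand-in model that always "sees" two beverages:
def stub_model(image: np.ndarray) -> List[Detection]:
    return [(0, 3, "beverage a"), (1, 3, "beverage b")]

first_detections = recognize_goods(stub_model, np.zeros((480, 640), np.uint8))
print(first_detections)
```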
The identification method provided by the embodiments of this specification can quickly and accurately obtain, through a deep learning recognition model, the positions in the container and the names of the goods in the second image containing the filtered difference area, as well as those in the corresponding first image, so that the position and the name of the goods missing from the second image containing the filtered difference area can subsequently be identified accurately.
In another embodiment of this specification, the determining the target identification result based on the filtered difference area of the second image includes: determining, based on the filtered difference area of the second image, an area to be compared of the first image corresponding to the second image; and comparing the filtered difference area with the area to be compared, to determine the position in the container and the name of the goods missing from the filtered second image.
Specifically, after the positions in the container and the names of the goods in the second image containing the filtered difference area and in the corresponding first image are obtained, the area to be compared of the first image corresponding to the second image is first determined based on the filtered difference area of the second image. Continuing the above example, if the filtered difference area of the second image is the first column on the right in FIG. 6, the area to be compared of the corresponding first image can be determined as the first column on the right in FIG. 4.
In addition, since the position and the name of every item of goods in the first image and the second image have already been recognized, comparing the filtered difference area with the area to be compared determines the position in the container and the name of the goods missing from the filtered second image; for example, the missing item in FIG. 6 is beverage a in the first row of the first column on the right.
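This final comparison amounts to a multiset difference between the detections in the two corresponding areas, as sketched below under the same hypothetical (row, column, name) detection records as above.

```python
from collections import Counter
from typing import List, Tuple

Detection = Tuple[int, int, str]  # (row, column, goods name)

def missing_goods(before: List[Detection],
                  after: List[Detection]) -> List[Detection]:
    """Goods detected in the area before opening but absent after closing."""
    remaining = Counter(after)
    missing = []
    for d in before:
        if remaining[d] > 0:
            remaining[d] -= 1
        else:
            missing.append(d)
    return missing

# Example: "beverage a" in the first row of the right-hand column disappears.
before = [(0, 3, "beverage a"), (1, 3, "beverage b")]
after = [(1, 3, "beverage b")]
print(missing_goods(before, after))  # [(0, 3, 'beverage a')]
```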
In the embodiments of this specification, the identification method uses the operation area in which the user takes the goods in the container, obtained by the motion sensing device, to filter the difference areas between the images of the goods in the container before and after pick-up, and pre-identifies, based on the recognition model, the names and the positions of the goods in the filtered second image of the goods in the container and in the corresponding first image, so that the position and the name of the goods missing from the second image can subsequently be determined accurately and quickly from these recognition results, improving the user experience.
In another embodiment of this specification, after the determining the target identification result based on the filtered difference area of the second image, the method further includes: determining, based on the target identification result, the quantity of goods missing from the filtered second image and the corresponding amount to be paid.
Specifically, after the target identification result is obtained, the quantity of goods missing from the filtered second image and the corresponding amount to be paid can be determined based on the target identification result and then displayed to the user, facilitating checking and settlement of the goods and improving the user experience.
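A minimal settlement sketch follows; the `price_list` mapping from goods names to unit prices is an illustrative assumption rather than something defined in this specification.

```python
from typing import Dict, List, Tuple

def settle(missing: List[Tuple[int, int, str]],
           price_list: Dict[str, float]) -> Tuple[int, float]:
    """Return (number of missing items, total amount to be paid)."""
    total = sum(price_list[name] for _, _, name in missing)
    return len(missing), total

# Example with an assumed price list:
price_list = {"beverage a": 3.5, "beverage b": 4.0}
count, amount = settle([(0, 3, "beverage a")], price_list)
print(count, amount)  # 1 3.5
```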
The identification method provided by the embodiments of this specification uses a motion position sensing device, composed of infrared sensors arranged around the inside of the container door, to detect the layer and position at which the user's body enters the container to take goods, thereby assisting the goods difference algorithm and improving its precision, and in turn improving the overall precision of goods identification.
Referring to FIG. 7, FIG. 7 shows a processing flowchart of a specific application scenario of an identification method according to an embodiment of this specification, including steps 702 to 722.
Step 702: acquiring images before the door is opened.
Specifically, after a transaction instruction triggered by the user for the container is received, the image acquisition device is started to collect the first image of the goods on each goods shelf layer in the container, so as to save the state of the goods on each shelf layer before the container door is opened; the system then waits for the user to open the door.
Step 704: unlocking the door.
Specifically, after the first image has been collected, the lock is released so that the container door is in a state in which it can be opened.
Step 706: turning on the sensors.
Specifically, the opening instruction of the container door is received and the motion sensing device is started. For example, the motion sensing device is started and detection results are recorded the instant the user opens the container door.
Step 708: recording the sensing data.
Specifically, the operation of the user taking the goods in the container is obtained through the motion sensing device. For example, the operation behavior is recorded when part of the user's body enters the container to take goods.
Step 710: computing the sensing range.
Specifically, the operation area in which the user takes the goods in the container is computed. For example, when part of the user's body enters the container to take goods, the vertical and horizontal sensors corresponding to the entry position locate the shelf layer and position of the container that the user has entered.
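One way the blocked beams might be reduced to a shelf layer and column is sketched below; the millimetre coordinates on the door plane and the uniform column width are illustrative assumptions about the cabinet geometry, not requirements of this specification.

```python
from bisect import bisect_right
from typing import List, Tuple

def locate_entry(blocked_row_mm: float, blocked_col_mm: float,
                 layer_tops_mm: List[float],
                 col_width_mm: float) -> Tuple[int, int]:
    """Map a blocked beam position (in millimetres on the door plane)
    to the shelf layer and column the user reached into.

    layer_tops_mm lists the top edge of each shelf layer from the
    cabinet floor upward; col_width_mm is a uniform column width.
    """
    layer = bisect_right(layer_tops_mm, blocked_row_mm)
    column = int(blocked_col_mm // col_width_mm)
    return layer, column

# Example: beams blocked 620 mm up and 240 mm across, 3 layers, 100 mm columns.
print(locate_entry(620.0, 240.0, [400.0, 800.0, 1200.0], 100.0))  # (1, 2)
```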
Step 712: the user closes the door.
Specifically, after taking the goods, the user closes the container door.
Step 714: acquiring images after the door is closed.
Specifically, the closing instruction of the container door is received, and the image acquisition device is started to collect the second image of the goods in the container corresponding to the first image; for example, after the user closes the container door, the goods on the goods shelves in the container are photographed again.
Step 716: determining whether a difference exists; if so, executing step 718; otherwise, ending the flow.
Specifically, the images of the goods on the goods shelves in the container taken before the container door was opened and after it was closed are compared by differencing, to determine whether a difference area exists in the image of the goods on each shelf layer.
Step 718: removing differences without a sensed area.
Specifically, if the image of the goods on a shelf layer contains difference areas, the difference areas for which no corresponding sensed area exists are filtered out and deleted based on the sensed areas recorded above, thereby removing differences that were never sensed.
Step 720: retaining differences with a sensed area.
Specifically, if the image of the goods on a shelf layer contains a difference area and, based on the sensed areas recorded above, a corresponding sensed area is determined to exist for that difference area, the difference area is determined as a filtered difference area.
Step 722: comparing the difference areas.
Specifically, the two captured images containing the filtered difference area are compared using the method provided in the above embodiments, to determine the position and the name of the goods missing from the shelf layer corresponding to the filtered difference area, and the identification flow ends.
The identification method provided by the embodiments of this specification uses a motion position sensing device, composed of infrared sensors arranged around the inside of the container door, to detect the layer and position at which the user's body enters the container to take goods, and uses the sensed areas thus detected to assist the goods difference algorithm in improving precision, thereby improving the overall precision of goods identification.
Referring again to FIG. 8, FIG. 8 shows a schematic structural diagram of a container according to an embodiment of this specification, including: a container body 804 on which a container door 802 is installed; an image acquisition device 806 installed in the container body 804 and used to collect images of the goods in the container; a motion sensing device 808 installed on two adjacent sides of the container body 804 close to the container door 802 and used to obtain the operation area in which the user takes the goods in the container; and a control device installed in the container body 804 and used to control the image acquisition device 806 and the motion sensing device 808 to implement the identification method provided in the above embodiments.
Optionally, at least one goods shelf 810 is provided in the container body 804, and the image acquisition devices 806 are installed on the top of the container body 804 and on the lower panel of each goods shelf 810.
Specifically, to achieve a better image acquisition effect, in practical applications the image acquisition devices 806 may be installed at the center of the top of the container body 804 and at the center of the lower panel of each goods shelf 810, so as to achieve more reasonable image acquisition.
Optionally, the motion sensing devices 808 are installed at equal intervals around the inner wall of the container body 804 close to the container door 802.
Specifically, the installation spacing of the motion sensing devices 808 may be adjusted reasonably based on the spacing of the goods shelves 810 installed in the container body 804. In addition, to reduce the number of motion sensing devices 808 used and avoid wasting resources, the motion sensing devices 808 may be provided only on two adjacent sides of the container body 804 close to the container door 802. In another implementation of this specification, the motion sensing devices 808 installed on two opposite sides of the container body 804 may be staggered, so that all areas within each goods shelf layer are placed within the sensing area.
Optionally, the image acquisition device 806 includes a wide-angle camera, and the motion sensing device 808 includes infrared sensors, with each goods shelf 810 sharing at least four corresponding infrared sensors. Specifically, the motion sensing device 808 may also include, but is not limited to, a narrow-angle camera or any sensor capable of distance measurement.
Optionally, the control device is further used to determine, based on the target identification result, the quantity of goods taken by the user at one time and the corresponding amount to be paid.
Specifically, for a detailed description of the method by which the control device identifies the goods in the container, reference may be made to the identification method provided in the above embodiments, and details are not repeated here.
A container provided by the embodiments of this specification includes: a container body on which a container door is installed; an image acquisition device installed in the container body and used to collect images of the goods in the container; a motion sensing device installed on two adjacent sides of the container body close to the container door and used to obtain the operation area in which the user takes the goods in the container; and a control device installed in the container body and used to control the image acquisition device and the motion sensing device to implement any one of the identification methods. Based on the cooperative use of the internally installed image acquisition device and motion sensing device, combined with the specific processing and computation of the control device, the container can accurately locate and quickly identify the goods on sale, greatly improving the user's purchasing experience.
Corresponding to the above method embodiments, this specification further provides embodiments of an identification device. FIG. 9 shows a schematic structural diagram of an identification device according to an embodiment of this specification. As shown in FIG. 9, the device includes:
a first image acquisition device 902 configured to receive a user's pick-up instruction for a container, and start an image acquisition device based on the pick-up instruction to collect a first image of the goods in the container; an operation area obtaining device 904 configured to receive an opening instruction of the container door, and start a motion sensing device based on the opening instruction to obtain the operation area in which the user takes the goods in the container; a second image acquisition device 906 configured to receive a closing instruction of the container door, and start the image acquisition device based on the closing instruction to collect a second image of the goods in the container corresponding to the first image; a difference area obtaining device 908 configured to compare the first image with the second image to determine the difference area of the second image; and an identification result determining device 910 configured to filter the difference area of the second image according to the operation area, and determine the target identification result based on the filtered difference area of the second image.
Optionally, the device further includes an identity determining module configured to obtain attribute information of the user and determine the identity of the user based on the attribute information of the user. Correspondingly, the first image acquisition device 902 is further configured to start the image acquisition device to collect the first image of the goods in the container when it is determined that the identity of the user satisfies the condition for opening the container door.
Optionally, the operation area obtaining device 904 is further configured to start the motion sensing device to obtain the coordinate positions at which the user's limb stays in the container when the user takes the goods in the container.
Optionally, the difference area obtaining device 908 is further configured to: obtain first pixels of the first image and second pixels of the second image based on a preset image processing method; and compare the first pixels of the first image with the second pixels of the second image, determining positions where the pixels differ as the difference area of the second image.
Optionally, the identification result determining device 910 is further configured to: determine the position of the operation area in the container and the position of the difference area of the second image in the container; and filter the difference area of the second image based on the position of the operation area in the container and the position of the difference area of the second image in the container, to determine the filtered difference area of the second image.
Optionally, the device further includes: an image determining module configured to determine the first image corresponding to the second image containing the filtered difference area; and a position obtaining module configured to input the second image containing the filtered difference area and the corresponding first image into a recognition model respectively, to obtain the positions in the container and the names of the goods in the second image containing the filtered difference area, and the positions in the container and the names of the goods in the corresponding first image.
Optionally, the identification result determining device 910 is further configured to: determine, based on the filtered difference area of the second image, an area to be compared of the first image corresponding to the second image; and compare the filtered difference area with the area to be compared, to determine the position in the container and the name of the goods missing from the filtered second image.
Optionally, the device further includes a goods determining module configured to determine, based on the target identification result, the quantity of goods missing from the filtered second image and the corresponding amount to be paid.
The above is a schematic solution of an identification device of this embodiment. It should be noted that the technical solution of the identification device and the technical solution of the above identification method belong to the same concept; for details not described in the technical solution of the identification device, reference may be made to the description of the technical solution of the above identification method.
FIG. 10 shows a structural block diagram of a computing device 1000 according to an embodiment of this specification. Components of the computing device 1000 include, but are not limited to, a memory 1010 and a processor 1020. The processor 1020 is connected to the memory 1010 through a bus 1030, and a database 1050 is used to store data.
The computing device 1000 further includes an access device 1040 that enables the computing device 1000 to communicate via one or more networks 1060. Examples of these networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet. The access device 1040 may include one or more of any type of wired or wireless network interface (for example, a network interface card (NIC)), such as an IEEE 802.11 wireless local area network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (Wi-MAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, or a near field communication (NFC) interface.
In an embodiment of this specification, the above components of the computing device 1000 and other components not shown in FIG. 10 may also be connected to each other, for example, through a bus. It should be understood that the structural block diagram of the computing device shown in FIG. 10 is for illustrative purposes only and does not limit the scope of this specification. Those skilled in the art may add or replace other components as needed.
The computing device 1000 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (for example, a tablet computer, a personal digital assistant, a laptop computer, a notebook computer, or a netbook), a mobile phone (for example, a smartphone), a wearable computing device (for example, a smartwatch or smart glasses) or another type of mobile device, or a stationary computing device such as a desktop computer or a PC. The computing device 1000 may also be a mobile or stationary server.
The processor 1020 is used to execute the following computer-executable instructions: receiving a user's pick-up instruction for a container, and starting an image acquisition device based on the pick-up instruction to collect a first image of the goods in the container; receiving an opening instruction of the container door, and starting a motion sensing device based on the opening instruction to obtain the operation area in which the user takes the goods in the container; receiving a closing instruction of the container door, and starting the image acquisition device based on the closing instruction to collect a second image of the goods in the container corresponding to the first image; comparing the first image with the second image to determine the difference area of the second image; and filtering the difference area of the second image according to the operation area, and determining the target identification result based on the filtered difference area of the second image.
The above is a schematic solution of a computing device of this embodiment. It should be noted that the technical solution of the computing device and the technical solution of the above identification method belong to the same concept; for details not described in the technical solution of the computing device, reference may be made to the description of the technical solution of the above identification method.
An embodiment of this specification further provides a computer-readable storage medium that stores computer instructions which, when executed by a processor, implement the steps of the identification method.
The above is a schematic solution of a computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the above identification method belong to the same concept; for details not described in the technical solution of the storage medium, reference may be made to the description of the technical solution of the above identification method.
Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the specific order shown, or a sequential order, to achieve the desired results. In some implementations, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of brevity, the foregoing method embodiments are all described as a series of action combinations, but those skilled in the art should know that the embodiments of this specification are not limited by the described order of actions, because according to the embodiments of this specification, some steps may be performed in other orders or simultaneously. In addition, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the embodiments of this specification.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
The preferred embodiments of this specification disclosed above are merely intended to help illustrate this specification. The optional embodiments do not describe all the details exhaustively, nor do they limit the invention to only the specific implementations described. Obviously, many modifications and variations can be made according to the content of the embodiments of this specification. These embodiments are selected and specifically described in this specification in order to better explain the principles and practical applications of the embodiments of this specification, so that those skilled in the art can well understand and use this specification. This specification is limited only by the claims and their full scope and equivalents.

Claims (16)

  1. An identification method, comprising:
    receiving a user's pick-up instruction for a container, and starting an image acquisition device based on the pick-up instruction to collect a first image of goods in the container;
    receiving an opening instruction of a door of the container, and starting a motion sensing device based on the opening instruction to obtain an operation area in which the user takes the goods in the container;
    receiving a closing instruction of the container door, and starting the image acquisition device based on the closing instruction to collect a second image of the goods in the container corresponding to the first image;
    comparing the first image with the second image to determine a difference area of the second image; and
    filtering the difference area of the second image according to the operation area, and determining a target identification result based on the filtered difference area of the second image.
  2. The identification method according to claim 1, wherein before the receiving a user's pick-up instruction for a container, and starting an image acquisition device based on the pick-up instruction to collect a first image of goods in the container, the method further comprises:
    obtaining attribute information of the user, and determining an identity of the user based on the attribute information of the user;
    correspondingly, the starting an image acquisition device to collect a first image of goods in the container comprises:
    starting the image acquisition device to collect the first image of the goods in the container when it is determined that the identity of the user satisfies a condition for opening the container door.
  3. The identification method according to claim 1, wherein the starting a motion sensing device to obtain an operation area in which the user takes the goods in the container comprises:
    starting the motion sensing device to obtain coordinate positions at which the user's limb stays in the container when the user takes the goods in the container.
  4. The identification method according to claim 1, wherein the comparing the first image with the second image to determine a difference area of the second image comprises:
    obtaining first pixels of the first image and second pixels of the second image based on a preset image processing method; and
    comparing the first pixels of the first image with the second pixels of the second image, and determining positions where the pixels differ as the difference area of the second image.
  5. The identification method according to claim 1, wherein the filtering the difference area of the second image according to the operation area comprises:
    determining a position of the operation area in the container and a position of the difference area of the second image in the container; and
    filtering the difference area of the second image based on the position of the operation area in the container and the position of the difference area of the second image in the container, to determine a filtered difference area of the second image.
  6. The identification method according to claim 5, wherein before the determining a target identification result based on the filtered difference area of the second image, the method further comprises:
    determining a first image corresponding to the second image containing the filtered difference area; and
    inputting the second image containing the filtered difference area and the corresponding first image into a recognition model respectively, to obtain positions in the container and names of the goods in the second image containing the filtered difference area, and positions in the container and names of the goods in the corresponding first image.
  7. The identification method according to claim 6, wherein the determining a target identification result based on the filtered difference area of the second image comprises:
    determining, based on the filtered difference area of the second image, an area to be compared of the first image corresponding to the second image; and
    comparing the filtered difference area with the area to be compared, to determine a position in the container and a name of the goods missing from the filtered second image.
  8. The identification method according to claim 1 or 7, wherein after the determining a target identification result based on the filtered difference area of the second image, the method further comprises:
    determining, based on the target identification result, a quantity of the goods missing from the filtered second image and a corresponding amount to be paid.
  9. A container, comprising:
    a container body on which a container door is installed;
    an image acquisition device installed in the container body and configured to collect images of goods in the container;
    a motion sensing device installed on two adjacent sides of the container body close to the container door and configured to obtain an operation area in which a user takes the goods in the container; and
    a control device installed in the container body and configured to control the image acquisition device and the motion sensing device to implement the identification method according to any one of claims 1 to 8.
  10. The container according to claim 9, wherein at least one goods shelf is provided in the container body, and the image acquisition devices are installed on the top of the container body and on a lower panel of each goods shelf.
  11. The container according to claim 9, wherein the motion sensing devices are installed at equal intervals around the container body close to the container door.
  12. The container according to claim 9, wherein the image acquisition device comprises a wide-angle camera, and
    the motion sensing device comprises infrared sensors, each goods shelf sharing at least four corresponding infrared sensors.
  13. The container according to claim 9, wherein the control device is further configured to determine, based on the target identification result, a quantity of goods taken by the user at one time and a corresponding amount to be paid.
  14. An identification device, comprising:
    a first image acquisition device configured to receive a user's pick-up instruction for a container, and start an image acquisition device based on the pick-up instruction to collect a first image of goods in the container;
    an operation area obtaining device configured to receive an opening instruction of the container door, and start a motion sensing device based on the opening instruction to obtain an operation area in which the user takes the goods in the container;
    a second image acquisition device configured to receive a closing instruction of the container door, and start the image acquisition device based on the closing instruction to collect a second image of the goods in the container corresponding to the first image;
    a difference area obtaining device configured to compare the first image with the second image to determine a difference area of the second image; and
    an identification result determining device configured to filter the difference area of the second image according to the operation area, and determine a target identification result based on the filtered difference area of the second image.
  15. A computing device, comprising:
    a memory and a processor;
    wherein the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to:
    receive a user's pick-up instruction for a container, and start an image acquisition device based on the pick-up instruction to collect a first image of goods in the container;
    receive an opening instruction of the container door, and start a motion sensing device based on the opening instruction to obtain an operation area in which the user takes the goods in the container;
    receive a closing instruction of the container door, and start the image acquisition device based on the closing instruction to collect a second image of the goods in the container corresponding to the first image;
    compare the first image with the second image to determine a difference area of the second image; and
    filter the difference area of the second image according to the operation area, and determine a target identification result based on the filtered difference area of the second image.
  16. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the identification method according to any one of claims 1 to 8.
PCT/CN2021/093304 2020-05-15 2021-05-12 Identification method and device WO2021228134A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010411805.0 2020-05-15
CN202010411805.0A CN111340009A (zh) 2020-05-15 2020-05-15 Identification method and device

Publications (1)

Publication Number Publication Date
WO2021228134A1 true WO2021228134A1 (zh) 2021-11-18

Family

ID=71186561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093304 WO2021228134A1 (zh) Identification method and device

Country Status (2)

Country Link
CN (1) CN111340009A (zh)
WO (1) WO2021228134A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114435828A (zh) * 2021-12-31 2022-05-06 深圳云天励飞技术股份有限公司 Goods storage method and device, handling equipment, and storage medium
CN114494877A (zh) * 2022-01-28 2022-05-13 北京云迹科技股份有限公司 Goods dispensing control method and device, electronic device, and storage medium

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340009A (zh) * 2020-05-15 2020-06-26 支付宝(杭州)信息技术有限公司 Identification method and device
CN112883784A (zh) * 2021-01-13 2021-06-01 北京每日优鲜电子商务有限公司 Stereoscopic vision detection method, device, and storage medium for intelligent vending cabinets
CN113128464B (zh) * 2021-05-07 2022-07-19 支付宝(杭州)信息技术有限公司 Image recognition method and system
CN113128463B (zh) * 2021-05-07 2022-08-26 支付宝(杭州)信息技术有限公司 Image recognition method and system
CN113435448A (zh) * 2021-07-29 2021-09-24 上海商汤智能科技有限公司 Image processing method and device, computer device, and storage medium
CN113762094A (zh) * 2021-08-18 2021-12-07 南京宝坚电子科技有限公司 Device and method for identifying goods from static images
CN115054089A (zh) * 2022-08-04 2022-09-16 元气森林(北京)食品科技集团有限公司 Display cabinet, control method, device, medium, and product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170193430A1 (en) * 2015-12-31 2017-07-06 International Business Machines Corporation Restocking shelves based on image data
CN109190706A * 2018-09-06 2019-01-11 深圳码隆科技有限公司 Unmanned vending method, device, and system
CN109409291A * 2018-10-26 2019-03-01 虫极科技(北京)有限公司 Commodity identification method and system for intelligent containers and method for generating shopping orders
US20190197561A1 (en) * 2016-06-29 2019-06-27 Trax Technology Solutions Pte Ltd Identifying products using a visual code
CN110910567A * 2019-11-29 2020-03-24 合肥美的智能科技有限公司 Payment deduction method and device, electronic device, computer-readable storage medium, and container
CN110991367A * 2019-12-09 2020-04-10 上海扩博智能技术有限公司 Method, system, device, and storage medium for collecting container commodity sales information
CN111340009A (zh) * 2020-05-15 2020-06-26 支付宝(杭州)信息技术有限公司 Identification method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108198331B (zh) * 2018-01-08 2020-04-21 深圳正品创想科技有限公司 Pick-up detection method and device, and unmanned vending cabinet
CN110443118B (zh) * 2019-06-24 2021-09-03 上海了物网络科技有限公司 Commodity identification method, system, and medium based on artificial features

Also Published As

Publication number Publication date
CN111340009A (zh) 2020-06-26

Similar Documents

Publication Publication Date Title
WO2021228134A1 (zh) Identification method and device
US11501523B2 (en) Goods sensing system and method for goods sensing based on image monitoring
CN111415461B (zh) Article identification method and system, and electronic device
US9824459B2 (en) Tracking objects between images
US11049373B2 (en) Storefront device, storefront management method, and program
JP6806261B2 (ja) Store device, store system, store management method, and program
CN111263224B (zh) Video processing method and device, and electronic device
CN108921098B (zh) Human motion analysis method, device, equipment, and storage medium
US11983250B2 (en) Item-customer matching method and device based on vision and gravity sensing
TWI694352B (zh) Interactive behavior detection method, device, system, and equipment
WO2021179137A1 (zh) Settlement method, device, and system
CN110689389A (zh) Computer-vision-based automatic shopping list maintenance method and device, storage medium, and terminal
CN107945392A (zh) Vending machine, evidence collection method, and storage medium
CN113409056B (zh) Payment method and device, local identification equipment, face payment system, and equipment
WO2020029663A1 (zh) Commodity information query method and system
CN111260685A (zh) Video processing method and device, and electronic device
CN108259769A (zh) Image processing method and device, storage medium, and electronic device
CN108652161A (zh) Intelligent identification suitcase and control method thereof
CN110910567A (zh) Payment deduction method and device, electronic device, computer-readable storage medium, and container
CN111461104B (zh) Visual recognition method, device, equipment, and storage medium
CN113297889A (zh) Object information processing method and device
CN112837471A (zh) Security monitoring method and device for online-booked rental accommodation
CN111444757A (zh) Pedestrian re-identification method, device, equipment, and storage medium for unmanned supermarkets
CN113297890A (zh) Object information processing method and device
CN113298597A (zh) Object popularity analysis system, method, and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21804379

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21804379

Country of ref document: EP

Kind code of ref document: A1