CN114299388A - Article information specifying method, showcase, and storage medium - Google Patents


Info

Publication number
CN114299388A
Authority
CN
China
Prior art keywords
target
display
article
item
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111589547.6A
Other languages
Chinese (zh)
Inventor
郭峰
Current Assignee
Yuanqi Forest Beijing Food Technology Group Co ltd
Original Assignee
Yuanqi Forest Beijing Food Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by Yuanqi Forest Beijing Food Technology Group Co ltd filed Critical Yuanqi Forest Beijing Food Technology Group Co ltd
Priority to CN202111589547.6A priority Critical patent/CN114299388A/en
Publication of CN114299388A publication Critical patent/CN114299388A/en
Pending legal-status Critical Current

Landscapes

  • Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)

Abstract

The present disclosure provides an item information determination method, a showcase, and a storage medium. The method includes: acquiring a target image, captured by a target camera facing the interior of each showcase, that contains a plurality of items; detecting position information of each item in the target image, and extracting a target item feature of each item based on the position information; identifying target items among the plurality of items based on the target item features, where the target items are at least some of a plurality of preset items; and counting display information of the target items in each showcase and sending the display information to a user.

Description

Article information specifying method, showcase, and storage medium
Technical Field
The present disclosure relates to the field of image processing, and more particularly, to an article information determining method, a display case, and a storage medium.
Background
In recent years, showcases have been widely used in convenience stores and supermarkets, for example self-service showcases and refrigerated showcases for displaying chilled goods. Conventionally, a patrol inspector checks the items in each showcase and determines the display condition of each item from the inspection result. However, this manual inspection wastes considerable human resources, and because its efficiency is low, the display condition of the items cannot be obtained in a timely manner; the conventional inspection method therefore suffers from delay.
Disclosure of Invention
Embodiments of the present disclosure provide at least an item information determination method, a showcase, and a storage medium. The disclosed embodiments automatically detect the display information of each item in a showcase and automatically send that information to a user, which saves substantial human resources and allows the display information of each item to be acquired in real time, thereby reducing delay.
In a first aspect, an embodiment of the present disclosure provides an item information determination method, including: acquiring a target image, captured by a target camera facing the interior of each showcase, that contains a plurality of items; detecting position information of each item in the target image, and extracting a target item feature of each item based on the position information; identifying target items among the plurality of items based on the target item features, where the target items are at least some of a plurality of preset items; and counting display information of the target items in each showcase and sending the display information to a user.
In a second aspect, an embodiment of the present disclosure further provides an item information determining apparatus, including: an acquisition unit configured to acquire a target image, captured by a target camera facing the interior of each showcase, that contains a plurality of items; a detection unit configured to detect position information of each item in the target image and to extract a target item feature of each item based on the position information; an identification unit configured to identify target items among the plurality of items based on the target item features, where the target items are at least some of a plurality of preset items; and a counting unit configured to count display information of the target items in each showcase and to send the display information to a user.
In a third aspect, embodiments of the present disclosure further provide a showcase, including a showcase body, a target camera and a showcase controller. The target camera is mounted on the showcase body with its lens facing the interior of the body, and is configured to capture a target image inside the showcase. The showcase controller is configured to acquire the target image, detect position information of each item in the target image, and extract a target item feature of each item based on the position information; identify target items among the plurality of items based on the target item features, where the target items are at least some of a plurality of preset items; and count display information of the target items in the showcase and send the display information to a user.
In a fourth aspect, embodiments of the present disclosure provide another showcase, including a processor, a memory and a bus. The memory stores machine-readable instructions executable by the processor, and the processor communicates with the memory via the bus; when executed by the processor, the machine-readable instructions perform the steps of the first aspect or of any possible implementation of the first aspect.
In a fifth aspect, embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the first aspect or of any possible implementation of the first aspect.
In the embodiments of the present disclosure, a target image containing a plurality of items, captured by a target camera facing the interior of each showcase, is first acquired; position information of each item in the target image is detected, and a target item feature of each item is extracted based on the position information. The target items are then identified among the plurality of items based on the target item features, the display information of the target items in each showcase is counted, and that display information is sent to the user.
In the embodiments of the present disclosure, by acquiring the target image, identifying the target items among the items it contains, and counting the display information of the target items in each showcase, the display information of each item in the showcase can be detected automatically and sent to the user automatically.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art may derive further related drawings from them without inventive effort.
Fig. 1 is a flowchart of an item information determination method provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of a specific method, in the item information determination method provided by an embodiment of the present disclosure, for detecting position information of each item in the target image and extracting a target item feature of each item based on the position information;
Fig. 3 is a flowchart of a specific method, in the item information determination method provided by an embodiment of the present disclosure, for identifying target items among the plurality of items based on the target item features;
Fig. 4 is a flowchart of a specific method, in the item information determination method provided by an embodiment of the present disclosure, for counting display information of the target items in each showcase;
Fig. 5 is a schematic diagram of display channels on a display shelf within a showcase provided by an embodiment of the present disclosure;
Fig. 6 is a flowchart of another specific method, in the item information determination method provided by an embodiment of the present disclosure, for counting display information of the target items in each showcase;
Fig. 7 is a flowchart of a second item information determination method provided by an embodiment of the present disclosure;
Fig. 8 is a flowchart of a third item information determination method provided by an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of an item information determination apparatus provided by an embodiment of the present disclosure;
Fig. 10 is a schematic structural diagram of a showcase provided by an embodiment of the present disclosure;
Fig. 11 is a schematic structural diagram of another showcase provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art from the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an association relationship, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality of elements or any combination of at least two of them; for example, "including at least one of A, B and C" may mean including any one or more elements selected from the set consisting of A, B and C.
It has been found that, with existing showcases, a patrol inspector typically inspects the items in each showcase and determines the display condition of each item from the inspection result. However, this manual inspection wastes considerable human resources, and because its efficiency is low, the display condition of the items cannot be obtained in a timely manner; the conventional inspection method therefore suffers from delay.
Based on the above research, the present disclosure provides an item information determination method, a showcase, and a storage medium. In the embodiments of the present disclosure, a target image containing a plurality of items, captured by a target camera facing the interior of each showcase, is first acquired; position information of each item in the target image is detected, and a target item feature of each item is extracted based on the position information. The target items are then identified among the plurality of items based on the target item features, the display information of the target items in each showcase is counted, and that display information is sent to the user.
In the embodiments of the present disclosure, by acquiring the target image, identifying the target items among the items it contains, and counting the display information of the target items in each showcase, the display information of each item in the showcase can be detected automatically and sent to the user automatically.
For example, a user may wish to monitor the display information of all items in the showcase so as to determine the display condition of each item from the monitored information. The display trend of competing products in the showcase can be analyzed from their display condition, reminding the user to determine a corresponding response strategy. With the item information determination method, once the display information of the target items in the showcase is obtained, the display trend of competing products can be analyzed promptly and quickly, guiding the user to determine a corresponding response strategy.
In the embodiments of the present disclosure, the showcase may be any showcase whose cabinet door can be opened and closed. For example, it may be a showcase supporting unmanned (self-service) retail, a refrigerated cabinet for chilled goods in a supermarket, or a heated showcase.
To facilitate understanding of the present embodiments, an item information determination method disclosed in the embodiments of the present disclosure is first described in detail.
Referring to fig. 1, a flowchart of an article information determining method provided in an embodiment of the present disclosure is shown, where the method includes steps S101 to S107, where:
s101: and acquiring a target image containing a plurality of articles shot by the target camera towards the inside of each display cabinet.
In the disclosed embodiments, the target camera may be mounted on a door handle of the cabinet door, with its lens facing the interior of the showcase. When it is detected that the cabinet door has been opened, the target camera starts shooting, and the target image is determined from the captured video.
In a specific implementation, the video frame captured when the opening angle of the cabinet door is at its maximum may be determined as the target image; alternatively, a video frame whose completeness meets a requirement may be used, where completeness refers to how fully the showcase is visible in the frame.
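The frame-selection logic above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the per-frame `door_angle` and `completeness` scores and the 0.8 threshold are assumptions introduced here.

```python
def select_target_frame(frames, min_completeness=0.8):
    """Pick the target image from door-open video frames.

    Each frame is a dict: {"image": ..., "door_angle": float,
    "completeness": float in [0, 1]} (assumed schema).
    Prefer the frame with the widest door angle among frames whose
    completeness meets the requirement; otherwise fall back to the
    most complete frame available.
    """
    candidates = [f for f in frames if f["completeness"] >= min_completeness]
    if not candidates:
        return max(frames, key=lambda f: f["completeness"])
    return max(candidates, key=lambda f: f["door_angle"])
```

In practice the door angle could come from the door-mounted gyroscope mentioned later in the disclosure, and completeness from how much of the showcase the detector sees in the frame.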
S103: and detecting the position information of each article in the target image, and extracting the target article characteristics of each article based on the position information.
Here, a bounding box of each item may be detected in the target image, and the position information of each item may be determined based on the bounding box. After the position information of each article is determined, the article feature of each article can be extracted from the target image based on the position information, so as to obtain the target article feature.
S105: identifying target items among the plurality of items based on the target item features, where the target items are at least some of a plurality of preset items.
Here, the preset items are one or more items preset by the user. A preset item may be a competing product of an item associated with the user, an item belonging to the user, or another type of item associated with the user; this is not specifically limited by the present disclosure.
S107: and counting display information of the target objects in each display case, and sending the display information to a user.
In the disclosed embodiment, after the target item is identified from the plurality of items, the display information of the target item in each display case can be counted.
Here, the display information indicates at least one display position of the target item within each showcase and the quantity of the target item displayed at each display position.
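The counting in step S107 amounts to aggregating recognized detections by item and position. A minimal sketch, under the assumption that each detection has already been mapped to an item name and a display-position label (e.g. a shelf layer or display channel — names used here are illustrative):

```python
from collections import Counter

def count_display_info(detections):
    """Aggregate per-item display statistics.

    detections: list of (item_name, position_label) tuples, one per
    recognized target item in the target image.
    Returns {item: {position: count}} -- the display positions of each
    item and the quantity displayed at each position.
    """
    info = {}
    for item, position in detections:
        info.setdefault(item, Counter())[position] += 1
    return {item: dict(counts) for item, counts in info.items()}
```

The resulting dictionary is the kind of per-showcase summary that could then be sent to the user.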
Here, the target camera installed in the showcase may be a network camera. After the target camera captures the target image, the target image may be transmitted to the server through the internet to cause the server to perform the above-described steps S101 to S107.
In addition, a showcase controller may be installed in the showcase and communicatively connected to the target camera. After the target camera captures the target image, it transmits the image to the showcase controller, which then executes steps S101 to S107.
In the embodiments of the present disclosure, a target image containing a plurality of items, captured by a target camera facing the interior of each showcase, is first acquired; position information of each item in the target image is detected, and a target item feature of each item is extracted based on the position information. The target items are then identified among the plurality of items based on the target item features, the display information of the target items in each showcase is counted, and that display information is sent to the user.
In the embodiments of the present disclosure, by acquiring the target image, identifying the target items among the items it contains, and counting the display information of the target items in each showcase, the display information of each item in the showcase can be detected automatically and sent to the user automatically.
The method for determining the information of the objects in the display case will be described in detail with reference to specific application scenarios.
For example, suppose the showcase is an ordinary showcase in which a network camera (i.e., the target camera) communicatively connectable to a server is mounted in advance. Such a showcase can be placed in environments requiring manual checkout, such as supermarkets and convenience stores: a customer takes items from a showcase placed in a convenience store, and the items are then settled at a checkout counter. Assuming the target items comprise item A and a competing product of item A, the display information may then cover both item A and its competing product in the showcase.
In a specific implementation, after the cabinet door of the showcase is detected to have been opened, the network camera captures an image of the interior of the showcase to obtain the target image, which may then be transmitted to the server over the network communication connection. After acquiring the target image, the server can detect the position information of each item in the image and extract the target item feature of each item based on that position information. Item A and the competing product of item A can then be identified among the items in the target image based on the target item features. For each identified item, such as item A, its display information in the showcase, for example its display positions and display quantity, can be counted.
If it is determined that the competing product is displayed in the center of the showcase and in large quantity, the display condition of the competing product can be provided to the user by sending the display information. The display trend of the competing product in the showcase can then be analyzed from this condition, reminding the user to determine a corresponding response strategy according to the trend.
The above steps will be described with reference to specific embodiments.
As can be seen from the above description, in the embodiment of the present disclosure, the sensor may be installed at the cabinet door of the showcase in advance. When the sensor detects that the cabinet door is opened, the target camera can be triggered to start shooting, and a video is obtained. After the video is captured, the target image may be determined in the video.
Here, the sensor may be a gyroscope installed in the showcase, or another type of door-opening sensor; the present disclosure does not specifically limit this.
Specifically, the video frame captured when the opening angle of the cabinet door is at its maximum may be determined as the target image; alternatively, a video frame whose completeness and/or clarity meets a requirement may be used, where completeness refers to how fully the showcase is visible in the frame.
After the target image is acquired, the position information of each article in the target image can be detected, and the target article feature of each article can be extracted based on the position information.
In an alternative embodiment, as shown in Fig. 2, step S103 (detecting the position information of each item in the target image and extracting the target item feature of each item based on the position information) specifically includes the following steps:
step S301: carrying out target detection on the target image through a target detection model to obtain position information of each article in the target image;
step S302: cutting the target image based on the position information to obtain a sub-image containing each article;
step S303: and performing feature extraction on the sub-images through a feature extraction model to obtain the target article feature of each article.
In the embodiment of the present disclosure, the object in the target image may be detected by the target detection model, so as to obtain the position information of each object in the target image.
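Steps S301 to S303 can be sketched as the following minimal pipeline. The `detect` and `extract` callables stand in for the trained target detection and feature extraction models, and the list-of-rows image format is an assumption for illustration only:

```python
def item_features(image, detect, extract):
    """Detect boxes (S301), crop sub-images (S302), extract features (S303).

    image:   2-D list of pixel rows (illustrative stand-in for an array).
    detect:  callable returning bounding boxes as (x1, y1, x2, y2).
    extract: callable mapping a cropped sub-image to a feature.
    Returns a list of (bounding_box, feature) pairs, one per detected item.
    """
    boxes = detect(image)                                    # S301
    features = []
    for (x1, y1, x2, y2) in boxes:
        sub = [row[x1:x2] for row in image[y1:y2]]           # S302: crop
        features.append(((x1, y1, x2, y2), extract(sub)))    # S303
    return features
```

In a real deployment `detect` would wrap the target detection model and `extract` the feature extraction model described in the training procedure below.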
Before the target image is detected by the target detection model, the detection model to be trained and the feature extraction model to be trained need to be trained.
The training process for the detection model to be trained and the feature extraction model to be trained can be described as the following process:
first, a first training sample is obtained, where the first training sample includes sample images and image labels; a sample image is an image captured of the items in a showcase, and its image label is the item annotation information of each item in the sample image;
second, a second training sample is determined based on the first training sample, where the second training sample includes the sub-images of the items in each sample image and an image label for each sub-image indicating the category of the item it contains;
finally, the detection model to be trained is trained on the first training sample to obtain the target detection model, and the feature extraction model to be trained is trained on the second training sample to obtain the feature extraction model.
In a specific implementation, a large number of sample images can be collected from the showcases: each time a showcase is opened, its interior is photographed to obtain a sample image, and each item in the sample image is then annotated (for example, with its bounding box) to obtain the item annotation information, thereby forming the first training sample.
After the first training sample is obtained, the first training sample can be further split into a training set, a verification set and a test set.
Next, the data set samples (i.e., the training, validation and test sets) may be enriched using data enhancement, which may include at least one of: randomly modifying the HSV (hue, saturation, value) channels of an image, color jittering, Gaussian blurring, ISO noise, random rotation, random cropping, random enlargement or reduction, mirroring, and sharpening.
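A few of the listed augmentations can be sketched in plain Python on a toy 2-D grayscale image (a list of pixel rows). This is an illustration of the idea only; real training code would typically use an augmentation library, and the specific operations and parameters below are assumptions:

```python
import random

def mirror(image):
    """Horizontal mirroring."""
    return [list(reversed(row)) for row in image]

def random_crop(image, size, rng):
    """Crop a random size x size window."""
    y = rng.randrange(len(image) - size + 1)
    x = rng.randrange(len(image[0]) - size + 1)
    return [row[x:x + size] for row in image[y:y + size]]

def brightness_jitter(image, rng, lo=0.8, hi=1.2):
    """Scale all pixels by a random factor (a crude stand-in for
    modifying the HSV value channel)."""
    k = rng.uniform(lo, hi)
    return [[min(255, int(p * k)) for p in row] for row in image]

def augment(image, rng):
    """Apply a random non-empty subset of the augmentations."""
    ops = [mirror,
           lambda im: random_crop(im, min(len(im), len(im[0])) - 1, rng),
           lambda im: brightness_jitter(im, rng)]
    for op in rng.sample(ops, rng.randint(1, len(ops))):
        image = op(image)
    return image
```

Each call to `augment` yields a slightly different view of the same sample, which is how the data set is "enriched" without collecting new images.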
After the image enhancement processing, the detection model to be trained can be trained on the enhanced data set samples to obtain the target detection model. Here, YOLOv5 may be used as the detection model to be trained, though other deep learning models may also be used; the present disclosure does not specifically limit this.
When the detection model to be trained is trained on the image-enhanced data set samples, the sizes of the bounding boxes labeled in the samples can be clustered with a K-means clustering algorithm to obtain a clustering result. Based on that result, a plurality of anchor box sizes can be generated for each item in the sample images, and the intersection-over-union (IoU) between each anchor box and the pre-labeled bounding box can be calculated, so that the bounding box predicted by the detection model for the item is determined among the anchor boxes according to the calculated IoU.
Because different sample images may be captured at different angles and distances, the bounding boxes of the same item differ in size across images. Generating a plurality of anchor box sizes for each item can thus be understood as determining, from the sample images, several bounding boxes (i.e., anchor boxes) that match the scale characteristics of the same item as seen from different angles and distances.
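The anchor-clustering step can be illustrated with a tiny K-means over labeled (width, height) box sizes, using the usual origin-aligned IoU as the similarity. This is a sketch of the idea, not YOLOv5's exact implementation (which uses 1 − IoU as the distance and adds a genetic refinement step):

```python
def wh_iou(a, b):
    """IoU of two boxes given only (width, height), anchors aligned
    at the origin -- the standard anchor-clustering metric."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeans_anchors(sizes, k, iters=20):
    """Cluster labeled box sizes into k anchor sizes."""
    centers = list(sizes[:k])                   # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for wh in sizes:                        # assign to most-similar center
            best = max(range(k), key=lambda i: wh_iou(wh, centers[i]))
            clusters[best].append(wh)
        centers = [                             # recompute centers
            (sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
            if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers
```

Boxes of the same item photographed close up and far away fall into different clusters, which is exactly the "multiple anchor sizes per item" behavior described above.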
In the embodiments of the present disclosure, label smoothing may be added while training the detection model to be trained; it prevents the model from predicting labels with excessive confidence during training, improving generalization. A target loss function, Focal Loss, may also be introduced to address the difficulty of converging on hard samples.
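Both techniques are standard and compact enough to state directly. A minimal sketch (the hyperparameter values are conventional defaults, not values from the disclosure):

```python
import math

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: soften a one-hot target y to
    y * (1 - eps) + eps / K, so the model is never pushed to predict
    with full confidence."""
    k = len(one_hot)
    return [y * (1 - eps) + eps / k for y in one_hot]

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one predicted probability p of the
    positive class, with true label y in {0, 1}. The (1 - p_t)**gamma
    factor down-weights easy examples so hard samples dominate."""
    p_t = p if y == 1 else 1 - p
    a_t = alpha if y == 1 else 1 - alpha
    return -a_t * (1 - p_t) ** gamma * math.log(p_t)
```

A confidently-correct prediction (p close to 1 for a positive) incurs almost no focal loss, while a hard, poorly-predicted sample keeps a large gradient, which is why the loss helps hard samples converge.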
When the detection model to be trained meets the training requirement, it is determined to be the target detection model.
In the embodiment of the present disclosure, when the feature extraction model to be trained is trained, a second training sample may be extracted based on the first training sample.
In specific implementation, each article included in the sample image of the first training sample may be cut based on the position information of each article to obtain a sub-image including each article, and then, each article in the sub-image is subjected to type labeling to obtain an image tag indicating a category of the article included in the corresponding sub-image, thereby forming the second training sample.
Thereafter, the second training sample may likewise be enriched using data enhancement, which may include at least one of: randomly modifying the HSV (hue, saturation, value) channels of an image, color jittering, Gaussian blurring, ISO noise, mirroring, and sharpening.
Next, the feature extraction model to be trained can be trained on the enhanced second training sample. ResNet-50 may be used as the feature extraction model to be trained, though other deep learning models may also be used; the present disclosure does not specifically limit this.
After the target detection model and the feature extraction model are obtained, if they are to be deployed on terminal devices, a quantization operation may be performed on them to convert them into ONNX-format models, which can then be deployed on the processor. Because such a processor cannot directly use the trained target detection model and feature extraction model, quantization is required; through model quantization, the models can be converted into files of the corresponding format via a designated library.
In an alternative embodiment, as shown in Fig. 3, step S105 (identifying target items among the plurality of items based on the target item features) specifically includes the following steps:
step S1051: acquiring preset article characteristics of the preset article;
step S1052: calculating the feature similarity between the target article feature and the preset article feature;
step S1053: identifying the target item among the plurality of items based on the feature similarities.
In the embodiment of the present disclosure, the item feature of each preset item, that is, the preset item feature, may be acquired from the preset item library. Then, the feature similarity between the target article feature and the preset article feature of each preset article can be calculated, and the target article is identified in the plurality of articles based on the feature similarity.
Specifically, for each article, at least one feature similarity may be calculated; then, a feature similarity greater than a similarity threshold may be sought among the calculated feature similarities. If the calculated feature similarities include one greater than the similarity threshold, the article is determined to be a target article; at this point, the preset article corresponding to the target article can be determined, and the category of the target article can be further determined.
In the above embodiment, by calculating the feature similarity between the target item feature and the preset item feature, the target item can be identified quickly and accurately among the respective items in the showcase in such a manner that the target item is identified among the plurality of items according to the feature similarity.
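The similarity-based identification above can be sketched as follows, using cosine similarity and a hypothetical preset item library mapping category names to feature vectors (the threshold value, library layout, and category names are illustrative assumptions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def identify_target(item_feature, preset_library, threshold=0.8):
    """Return the best-matching preset category, or None when no
    similarity exceeds the threshold (the item is then not a target item)."""
    best_name, best_sim = None, threshold
    for name, preset_feature in preset_library.items():
        sim = cosine_similarity(item_feature, preset_feature)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```

Returning None for below-threshold items is what later allows "remaining items" to be detected and added to the preset item library.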
In an alternative embodiment, the target item may also be identified among the plurality of items in the following manner, specifically comprising the following steps:
inputting each article into a target classification model for classification processing to obtain a classification result, wherein the classification result is used for indicating the probability that the article belongs to each preset article type.
After the classification result is obtained, the article is determined to be a target article of the preset article type corresponding to the maximum probability that meets the requirement, among the probabilities contained in the classification result.
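This alternative classification route can be sketched as below; the softmax over model outputs and the confidence requirement (`min_prob`) are illustrative assumptions, not specified by the disclosure:

```python
import math

def softmax(logits):
    """Turn raw model outputs into probabilities over preset item types."""
    m = max(logits)                      # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def classify(logits, categories, min_prob=0.5):
    """Pick the most probable preset category; return None when even the
    best probability fails the confidence requirement."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return categories[best] if probs[best] >= min_prob else None
```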
In an alternative embodiment, as shown in fig. 4, in the case that the number of the target items is plural, the step S107: counting the display information of the target articles in each display cabinet, specifically comprising the following steps:
step S401: determining a display position of each of said target items in said display case based on said position information for each item;
step S402: counting, based on the display positions, a first item quantity of the target items, among the plurality of target items, that are located at a designated display position, and determining the display information based on the first item quantity.
In embodiments of the present disclosure, after the target items are identified among the plurality of items, the display position of each target item within the display case may be determined based on the position information of each item. For example, target item A is on the first tier of display shelves in the display case, in the Mth display channel counted from the left of that shelf. As shown in FIG. 5, a display channel in the disclosed embodiments is the area on each display shelf, extending from the outer edge of the shelf toward the interior of the display case, in which products are displayed.
After the display positions of the plurality of target items are acquired, the designated display position may be determined among those display positions, and the target items located at the designated display position may be identified, so as to obtain the first item quantity of target items located at the designated display position; the first item quantity may then be determined as the display information.
Here, the designated display position may be a position point set in advance by the user, or may be a position region set in advance in the showcase by the user. For example, the designated display position may be each display channel of a middle shelf of the display case, or a circular region in the display case centered on the middle position of the display case and having a radius R. The present disclosure does not specifically limit the manner of setting the designated display position, as long as it can be implemented.
For example, the preset items in the preset item library may be competing products, and the designated display position may be a circular region in the display cabinet centered on the middle position of the display cabinet with a radius R. In this case, the competing products may be identified among the plurality of items, and their display positions in the display case determined, in the manner described above. The number of competing products located within the circular region can then be determined based on the display positions, to obtain the first item quantity.
In the above embodiment, through the above processing method, the statistical analysis of the target articles at the designated display positions in the showcase can be realized, so that the personalized requirements of the user can be met, and the user can know the display information of the articles at each position in the showcase in time.
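For the circular designated display position described above, the first item quantity reduces to a point-in-circle count over the target items' display positions. A minimal sketch (the planar coordinates are hypothetical positions within the case):

```python
import math

def first_item_quantity(target_positions, center, radius):
    """Count target items whose display position falls within the circular
    region of radius R centered on the case's middle position."""
    return sum(1 for p in target_positions if math.dist(p, center) <= radius)
```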
In an alternative embodiment, as shown in fig. 6, in the case that the number of the target items is plural, the step S107: counting the display information of the target articles in each display cabinet, specifically comprising the following steps:
step S601: determining the category of each target article to obtain at least one category, wherein the category is used for indicating a preset article corresponding to the target article;
step S602: and counting the quantity of the target articles corresponding to each category in each display cabinet to obtain a second article quantity, and determining the display information based on the second article quantity.
As can be seen from the above description, for each article, the feature similarity between the target article feature and the preset article feature of the article may be calculated, so as to obtain at least one feature similarity; then, the feature similarity greater than the similarity threshold may be determined from the calculated at least one feature similarity, and a preset article corresponding to the feature similarity greater than the similarity threshold may be determined as the category of the article (i.e., the target article).
After the category of each target item is determined in the manner described above, at least one category is obtained. For each category, the number of target items belonging to that category may be determined among the plurality of target items, resulting in the second item quantity. The second item quantity can then be determined as the display information, and the display information may further include the position information, within the showcase, of the target items belonging to each category.
For example, suppose the display case includes target item 1, target item 2, target item 3, target item 4, and target item 5. The category of each of target items 1 to 5 may be determined. Assume that target item 1 and target item 2 belong to the same category, denoted as category A, that is, the preset items corresponding to target item 1 and target item 2 are the same; and that target item 3, target item 4, and target item 5 belong to the same category, denoted as category B, that is, the preset items corresponding to target items 3, 4, and 5 are the same.
In this case, the number of target items corresponding to category A in the showcase is counted as P, and the number of target items corresponding to category B in the showcase is counted as Q. The item quantity P and the item quantity Q together constitute the second item quantity.
The display information may then be determined from the second item quantity together with the position information of each of target items 1 to 5.
In the above embodiment, the display information of the target items of each type in each showcase can be obtained by the above processing method, so that the user can know the display information of the target items of each type in the showcase in time.
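The per-category statistics of steps S601 to S602 can be sketched as follows; the `(category, position)` pairs are a hypothetical representation of the identification results, and the names match the worked example only for illustration:

```python
from collections import Counter

def second_item_quantity(identified):
    """identified: list of (category, position) pairs, one per target item.
    Returns the per-category item counts and the positions grouped by
    category, which together make up the display information."""
    counts = Counter(cat for cat, _ in identified)
    positions = {}
    for cat, pos in identified:
        positions.setdefault(cat, []).append(pos)
    return dict(counts), positions
```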
In an optional embodiment, the step S602: counting the number of the target articles corresponding to each category in each display cabinet to obtain the number of the second articles, specifically comprising the following steps:
step S6021: determining a display channel within each of said display cases for each of said categories of target items;
step S6022: determining a display depth of the displayed target item in the display channel;
step S6023: estimating an estimated quantity of items of the displayed target item in each display lane based on the item size of the target item and the display depth;
step S6024: and determining the quantity of the target items corresponding to each category in each display cabinet based on the estimated quantity of the items to obtain the second quantity of the items.
In the embodiment of the present disclosure, when a large number of articles are placed in the display case, the articles captured in the target image are only those located at the outermost layer of the display case, due to occlusion between articles. If the second item quantity were determined solely from the number of target items detected in the target image, the determined second item quantity would therefore be inaccurate.
In this case, the display channel of the target items of each category may be determined, for example, the Mth display channel of the Nth tier of display shelves. For each display channel, the display depth of the target items displayed within the channel may be estimated.
In specific implementation, a depth image can be acquired by a depth camera, and the depth distance between the depth sensor and the front-most target item displayed in each display channel can then be determined from the depth image. Since the total display depth of each display channel of the display case is known, the display depth of the target items displayed in the channel can be determined from the depth distance and the total display depth. Because the item size information of the target item is also known, the number of target items displayed in each display channel can be determined by combining the item size information with the display depth, so that the item quantity of the target items of each category in the display case can be obtained, resulting in the second item quantity.
In the embodiment of the present disclosure, besides acquiring a depth image with a depth camera and determining the display depth from it, the display depth may be determined in other manners. For example, the distance between the front-most target item in each display channel and a distance sensor may be detected by the distance sensor, and the display depth of the target items displayed in that channel determined based on this distance.
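The depth-based estimate of steps S6022 to S6023 can be sketched as below. The geometric assumption (items packed back-to-back from the first detected item to the rear of the channel) and all numbers are illustrative, not taken from the disclosure:

```python
def estimate_lane_quantity(depth_distance, total_depth, item_size):
    """Estimate the number of target items in one display channel.
    depth_distance: sensor-to-front-item distance; total_depth: known
    depth of the channel; item_size: depth footprint of one item
    (all in the same unit). Assumes items are packed back-to-back."""
    display_depth = total_depth - depth_distance
    if display_depth <= 0:
        return 0                         # nothing detected in the channel
    return max(1, round(display_depth / item_size))
```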
In an alternative embodiment, as shown in fig. 7, the method further comprises:
step S701: analyzing display trends for the items displayed in the respective display cases based on the display information, wherein the display trends are indicative of quantity trends and/or location trends;
step S702: and generating corresponding early warning information based on the display change trend, and sending the early warning information to an early warning object.
In the present disclosure, when an analysis request from a user is detected, the display information acquired within a preset time period may be obtained, and the display change trend of the items displayed in each showcase may be analyzed based on the acquired display information.
In addition, the user may create an analysis task in advance, and the analysis task is periodically executed according to a preset task period. In performing the analysis task, the display information acquired within a preset time period may be acquired, and the display variation tendency of the displayed items in each showcase may be analyzed based on the acquired display information.
Here, the display change trend may be a position change trend and/or a quantity change trend. For example, a position change trend can be understood as the display position of a target item changing from the edge of the display case toward the center of the display case, and a quantity change trend can be understood as the change in the quantity of a target item over the preset time period.
After the display change trend is determined, corresponding early warning information can be generated according to it, for example, warning information indicating that the quantity of target items (or competing products) has increased, and by a large margin, or that a target item (or competing product) has moved to the center position of the showcase. After receiving the early warning information, the early warning object can learn from it how each target item in the display cabinet has changed, for example, learning more quickly and promptly how the competing products in the display cabinet have changed, so that the early warning object (or user) can respond in time.
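A toy sketch of the quantity-trend check that could back such early warnings (the surge threshold and the warning message are invented for illustration):

```python
def quantity_trend_warning(history, surge_ratio=1.5):
    """history: chronological counts of a target item (e.g. a competing
    product) within the preset time period. Returns a warning message
    when the count grew sharply over the period, else None."""
    if len(history) < 2 or history[0] == 0:
        return None                      # not enough data to judge a trend
    ratio = history[-1] / history[0]
    if ratio >= surge_ratio:
        return f"quantity increased {ratio:.1f}x over the period"
    return None
```

A position change trend could be flagged the same way, by comparing each target item's distance from the case center at the start and end of the period.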
In an alternative embodiment, as shown in fig. 8, the method further comprises:
step S801: determining a remaining item except the target item among the plurality of items, and extracting item feature information of the remaining item;
step S802: determining that the residual articles are new preset articles, and adding the article characteristic information of the residual articles into a preset article library as preset article characteristics of the new preset articles, wherein the preset article library comprises the preset article characteristics of each preset article.
In the embodiment of the present disclosure, if an item (i.e., the above-described remaining item) that does not belong to the preset item library is contained in the target image, item feature information of the item (i.e., the above-described remaining item) may be extracted by the feature extraction model.
After the item feature information is extracted, the category of the remaining item can be determined, and the remaining item can be stored in the preset item library as a new preset item. The extracted item feature information then serves as the preset item feature of the new preset item, and the category of the new preset item can also be added to the preset item library.
Through the processing mode, the preset articles in the preset article library can be enriched, so that the identification precision of the target articles can be further improved.
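Enriching the preset item library with a remaining item's features can be sketched as a simple keyed update; the dictionary layout of the library is a hypothetical choice consistent with the similarity sketch earlier:

```python
def register_new_preset(preset_library, item_features, new_category):
    """Store a remaining item's extracted feature vector as the preset item
    features of a new preset item, keyed by its newly determined category."""
    if new_category in preset_library:
        raise ValueError(f"category {new_category!r} already in library")
    preset_library[new_category] = item_features
    return preset_library
```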
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict order of execution or any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, an article information determining apparatus corresponding to the article information determining method is also provided in the embodiments of the present disclosure, and because the principle of solving the problem of the apparatus in the embodiments of the present disclosure is similar to the article information determining method in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 9, a schematic diagram of an article information determining apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes: an acquisition unit 10, a detection unit 20, an identification unit 30, and a statistics unit 40; wherein:
the system comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a target image which is shot by a target camera towards the inside of each display cabinet and contains a plurality of articles;
the detection unit is used for detecting the position information of each article in the target image and extracting the target article characteristics of each article based on the position information;
an identification unit for identifying a target item among the plurality of items based on the target item feature, wherein the target item is at least part of a plurality of preset items;
and a statistics unit for counting display information of the target items in each display case and sending the display information to a user.
In the embodiment of the present disclosure, first, a target image including a plurality of articles captured by a target camera toward the inside of each display case is acquired, and position information of each article in the target image is detected, and a target article feature of each article is extracted based on the position information. Then, the target item is identified among the plurality of items based on the target item feature, the display information of the target item in each display case is counted, and the display information of the target item in each display case is transmitted to the user.
In the embodiment of the present disclosure, the display information of each item in the showcase can be automatically detected and the display information can be automatically transmitted to the user by acquiring the target image, identifying the target item among the items included in the target image, and counting the display information of the target item in each showcase.
In a possible embodiment, the identification unit is further configured to: acquiring preset article characteristics of the preset article; calculating the feature similarity between the target article feature and the preset article feature; identifying the target item among the plurality of items based on the feature similarities.
In a possible embodiment, the detection unit is further configured to: carrying out target detection on the target image through a target detection model to obtain position information of each article in the target image; cutting the target image based on the position information to obtain a sub-image containing each article; and performing feature extraction on the sub-images through a feature extraction model to obtain the target article feature of each article.
In one possible embodiment, the apparatus is further configured to: acquiring a first training sample, wherein the first training sample comprises a sample image and an image label, the sample image is an image acquired by collecting articles in each display cabinet, and the image label is article labeling information of each article in the sample image; determining a second training sample based on the first training sample, wherein the second training sample comprises a sub-image of each article in each sample image and an image label of each sub-image, and the image labels are used for indicating the category of the articles contained in the corresponding sub-images; and training the detection model to be trained through the first training sample to obtain the target detection model, and training the feature extraction model to be trained through the second training sample to obtain the feature extraction model.
In a possible implementation, the statistical unit is further configured to: determining a display position of each of the target items in the display case based on the position information of each of the items in the case where the number of the target items is plural; a first quantity of items of a target item located at a specified display location of the plurality of target items is counted based on the display location, and the display information is determined based on the first quantity of items.
In a possible implementation, the statistical unit is further configured to: in the case where the number of the target items is plural: determining the category of each target article to obtain at least one category, wherein the category is used for indicating a preset article corresponding to the target article; and counting the quantity of the target articles corresponding to each category in each display cabinet to obtain a second article quantity, and determining the display information based on the second article quantity.
In a possible implementation, the statistical unit is further configured to: determining a display channel within each of said display cases for each of said categories of target items; determining a display depth of the displayed target item in the display channel; estimating an estimated quantity of items of the displayed target item in each display lane based on the item size of the target item and the display depth; and determining the quantity of the target items corresponding to each category in each display cabinet based on the estimated quantity of the items to obtain the second quantity of the items.
In one possible embodiment, the apparatus is further configured to: analyzing a display change trend of the items displayed in the respective display cases based on the display information, wherein the display change trend indicates a quantity change trend and/or a position change trend; and generating corresponding early warning information based on the display change trend, and sending the early warning information to an early warning object.
In one possible embodiment, the apparatus is further configured to: determining a remaining item except the target item among the plurality of items, and extracting item feature information of the remaining item; determining that the residual articles are new preset articles, and adding the article characteristic information of the residual articles into a preset article library as preset article characteristics of the new preset articles, wherein the preset article library comprises the preset article characteristics of each preset article.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Referring to fig. 10, a schematic structural diagram of a display case provided in an embodiment of the present disclosure is shown, where the display case includes: a showcase body 111, a target camera 112, and a showcase controller 113.
The target camera 112 is mounted on the showcase body 111, and a lens of the target camera 112 faces the inside of the showcase body.
A target camera 112 configured to capture a target image of the interior of the display case;
a display cabinet controller 113 configured to acquire the target image, detect position information of each item in the target image, and extract a target item feature of each item based on the position information; identify a target item among the plurality of items based on the target item feature, wherein the target item is at least part of a plurality of preset items; and count display information of the target items in each display case and send the display information to a user.
Corresponding to the method for determining the item information in fig. 1, an embodiment of the present disclosure further provides a display cabinet 1100, as shown in fig. 11, a schematic structural diagram of the display cabinet 1100 provided in the embodiment of the present disclosure includes:
a processor 111, a memory 112, and a bus 113. The memory 112 is used for storing execution instructions and includes an internal memory 1121 and an external memory 1122. The internal memory 1121 is used to temporarily store operation data in the processor 111 and data exchanged with the external memory 1122, such as a hard disk; the processor 111 exchanges data with the external memory 1122 via the internal memory 1121. When the showcase 1100 operates, the processor 111 communicates with the memory 112 via the bus 113, so that the processor 111 executes the following instructions:
acquiring target images which are shot by a target camera towards the interior of each display cabinet and contain a plurality of articles;
detecting position information of each article in the target image, and extracting target article characteristics of each article based on the position information;
identifying a target item among the plurality of items based on the target item feature; wherein the target item is at least part of a plurality of preset items;
and counting display information of the target items in each display case, and sending the display information to a user.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the item information determination method in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the method for determining item information in the foregoing method embodiments, which may be referred to specifically in the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. An item information determination method, characterized by comprising:
acquiring target images which are shot by a target camera towards the interior of each display cabinet and contain a plurality of articles;
detecting position information of each article in the target image, and extracting target article characteristics of each article based on the position information;
identifying a target item among the plurality of items based on the target item feature; wherein the target item is at least part of a plurality of preset items;
and counting display information of the target items in each display case, and sending the display information to a user.
2. The method of claim 1, wherein the identifying a target item among the plurality of items based on the target item characteristic comprises:
acquiring preset article characteristics of the preset article;
calculating the feature similarity between the target article feature and the preset article feature;
identifying the target item among the plurality of items based on the feature similarities.
3. The method of claim 1, wherein the detecting location information of each item in the target image and extracting target item features of each item based on the location information comprises:
carrying out target detection on the target image through a target detection model to obtain position information of each article in the target image;
cutting the target image based on the position information to obtain a sub-image containing each article;
and performing feature extraction on the sub-images through a feature extraction model to obtain the target article feature of each article.
4. The method of claim 3, further comprising:
acquiring a first training sample, wherein the first training sample comprises sample images and image labels, each sample image being an image captured of the items in each display case, and each image label being the item annotation information of each item in the sample image;
determining a second training sample based on the first training sample, wherein the second training sample comprises a sub-image of each item in each sample image and an image label of each sub-image, the image label indicating the category of the item contained in the corresponding sub-image;
and training an initial detection model with the first training sample to obtain the target detection model, and training an initial feature extraction model with the second training sample to obtain the feature extraction model.
5. The method of claim 1, wherein there are a plurality of the target items, and the counting display information of the target items in each display case comprises:
determining a display position of each target item in the display case based on the position information of each item;
and counting, based on the display positions, a first item quantity of target items located at a specified display position among the plurality of target items, and determining the display information based on the first item quantity.
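One plausible reading of claim 5: derive a display position from each detection box (e.g. which shelf the box's center falls on) and then count the items at a specified position. The shelf geometry and all names below are assumptions made for illustration, not part of the claim:

```python
def shelf_index(box, shelf_edges):
    """Map a detection box (x1, y1, x2, y2) to a shelf using the box's vertical
    center; shelf_edges are ascending y-coordinates separating the shelves."""
    cy = (box[1] + box[3]) / 2
    for i, edge in enumerate(shelf_edges):
        if cy < edge:
            return i
    return len(shelf_edges)

def count_at_shelf(boxes, shelf_edges, specified_shelf):
    """First item quantity: how many target items sit at the specified location."""
    return sum(1 for b in boxes if shelf_index(b, shelf_edges) == specified_shelf)
```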
6. The method of claim 1, wherein there are a plurality of the target items, and the counting display information of the target items in each display case comprises:
determining the category of each target item to obtain at least one category, wherein the category indicates the preset item corresponding to the target item;
and counting the quantity of target items of each category in each display case to obtain a second item quantity, and determining the display information based on the second item quantity.
7. The method of claim 6, wherein the counting the quantity of target items of each category in each display case to obtain a second item quantity comprises:
determining, within each display case, a display channel for the target items of each category;
determining a display depth of the displayed target items in the display channel;
estimating an estimated item quantity of the displayed target items in each display channel based on the item size of the target item and the display depth;
and determining the quantity of target items of each category in each display case based on the estimated item quantities to obtain the second item quantity.
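The estimate in claim 7 amounts to dividing the occupied display depth by the front-to-back size of one item, then summing per category. A sketch under that reading; the millimeter units and all names are illustrative assumptions:

```python
def estimate_channel_quantity(display_depth_mm, item_depth_mm):
    """Estimate how many items are queued front-to-back in one display channel.
    display_depth_mm: depth occupied by the displayed items in the channel;
    item_depth_mm: front-to-back size of a single target item."""
    if item_depth_mm <= 0:
        raise ValueError("item size must be positive")
    return int(display_depth_mm // item_depth_mm)

def second_item_quantity(channels):
    """Sum per-channel estimates per category to obtain the second item quantity.
    channels: list of (category, display_depth_mm, item_depth_mm) tuples."""
    totals = {}
    for category, depth, size in channels:
        totals[category] = totals.get(category, 0) + estimate_channel_quantity(depth, size)
    return totals
```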
8. The method of claim 1, further comprising:
analyzing a display change trend of the items displayed in each display case based on the display information, wherein the display change trend indicates a quantity change trend and/or a position change trend;
and generating corresponding early-warning information based on the display change trend, and sending the early-warning information to an early-warning recipient.
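One way to realize claim 8 is to compare successive display-information snapshots and raise a warning when the quantity trend turns downward past a restock threshold. Everything below (the trend rule, the threshold, the message format) is a hypothetical illustration, not the patent's method:

```python
def quantity_trend(counts):
    """counts: chronological per-snapshot quantities for one item."""
    if len(counts) < 2 or counts[-1] == counts[0]:
        return "stable"
    return "decreasing" if counts[-1] < counts[0] else "increasing"

def early_warning(item, counts, restock_at=3):
    """Generate early-warning information for an early-warning recipient,
    or None when no warning is needed."""
    if quantity_trend(counts) == "decreasing" and counts[-1] <= restock_at:
        return f"{item}: only {counts[-1]} left, restock advised"
    return None
```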
9. The method of claim 1, further comprising:
determining the remaining items, other than the target items, among the plurality of items, and extracting item feature information of the remaining items;
and determining that a remaining item is a new preset item, and adding the item feature information of the remaining item to a preset item library as the preset item feature of the new preset item, wherein the preset item library comprises the preset item feature of each preset item.
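Claim 9's library update can be sketched as follows: a detected item whose feature matches no existing library entry is treated as a new preset item and its feature is registered. The cosine-similarity match test, the threshold, and all names are assumptions for illustration:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def register_remaining_items(remaining, preset_library, threshold=0.8):
    """remaining: list of (proposed_name, feature) pairs for items that were not
    identified as any target item. Features matching no library entry are added
    to the preset item library as new preset items."""
    for name, feat in remaining:
        if all(cosine(feat, ref) < threshold for ref in preset_library.values()):
            preset_library[name] = feat  # register as a new preset item
    return preset_library
```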
10. A display case, comprising: a display case body, a target camera, and a display case controller, wherein the target camera is mounted on the display case body with its lens facing the interior of the display case body;
the target camera is configured to capture a target image of the interior of the display case;
the display case controller is configured to acquire the target image, detect position information of each item in the target image, and extract a target item feature of each item based on the position information; identify a target item among the plurality of items based on the target item features, wherein the target item is at least part of a plurality of preset items; and count display information of the target item in the display case and send the display information to a user.
11. A display case, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus, and the machine-readable instructions, when executed by the processor, performing the steps of the item information determination method of any one of claims 1 to 9.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the item information determination method of any one of claims 1 to 9.
CN202111589547.6A 2021-12-23 2021-12-23 Article information specifying method, showcase, and storage medium Pending CN114299388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111589547.6A CN114299388A (en) 2021-12-23 2021-12-23 Article information specifying method, showcase, and storage medium

Publications (1)

Publication Number Publication Date
CN114299388A true CN114299388A (en) 2022-04-08

Family

ID=80969007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111589547.6A Pending CN114299388A (en) 2021-12-23 2021-12-23 Article information specifying method, showcase, and storage medium

Country Status (1)

Country Link
CN (1) CN114299388A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619791A (en) * 2022-12-20 2023-01-17 苏州万店掌网络科技有限公司 Article display detection method, device, equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN111415461B (en) Article identification method and system and electronic equipment
CA2791597C (en) Biometric training and matching engine
CN109271847B (en) Abnormity detection method, device and equipment in unmanned settlement scene
JP6791864B2 (en) Barcode tag detection in side view sample tube images for laboratory automation
CN108596128A (en) Object identifying method, device and storage medium
US20120314079A1 (en) Object recognizing apparatus and method
CN109977824B (en) Article taking and placing identification method, device and equipment
JP2018512567A5 (en)
RU2695056C1 (en) System and method for detecting potential fraud on the part of a cashier, as well as a method of forming a sampling of images of goods for training an artificial neural network
EP3107038A1 (en) Method and apparatus for detecting targets
CN111126119A (en) Method and device for counting user behaviors arriving at store based on face recognition
WO2014193220A2 (en) System and method for multiple license plates identification
CN113494803B (en) Intelligent refrigerator and storage and taking operation detection method for storage in refrigerator door
CN113468914B (en) Method, device and equipment for determining purity of commodity
CN114299388A (en) Article information specifying method, showcase, and storage medium
CN110826481A (en) Data processing method, commodity identification method, server and storage medium
CN114025075A (en) Method for detecting articles in showcase, and storage medium
CN108052949B (en) Item category statistical method, system, computer device and readable storage medium
CN104103062A (en) Image processing device and image processing method
CN112991379A (en) Unmanned vending method and system based on dynamic vision
CN112380971A (en) Behavior detection method, device and equipment
CN114648720A (en) Neural network training method, image detection method, device, equipment and medium
KR20170048108A (en) Method and system for recognizing object and environment
CN111507282B (en) Target detection early warning analysis system, method, equipment and medium
CN114202537A (en) Camera imaging defect detection method, display cabinet and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination