CN115346110A - Service plate identification method, service plate identification system, electronic equipment and storage medium - Google Patents
- Publication number
- CN115346110A CN115346110A CN202211283750.5A CN202211283750A CN115346110A CN 115346110 A CN115346110 A CN 115346110A CN 202211283750 A CN202211283750 A CN 202211283750A CN 115346110 A CN115346110 A CN 115346110A
- Authority
- CN
- China
- Prior art keywords
- dinner plate
- target
- image
- characteristic information
- meal tray
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/35—Categorising the entire scene, e.g. birthday party or wedding scene
- G06V20/36—Indoor scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The invention relates to the technical field of data identification, and provides a dinner plate identification method, a dinner plate identification system, an electronic device and a storage medium. The dinner plate identification method comprises the following steps: collecting a target dinner plate image; inputting the target dinner plate image into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model, the dinner plate recognition model being obtained by training a pre-constructed instance segmentation model with training samples; and performing dinner plate recognition on the target dinner plate image according to the first characteristic information and an object knowledge base. By combining instance segmentation with an object knowledge base, the invention improves the accuracy of dinner plate identification and enables flexible configuration of information such as the shape, texture and color of the target to be identified.
Description
Technical Field
The invention relates to the technical field of data identification, in particular to a dinner plate identification method, a dinner plate identification system, electronic equipment and a storage medium.
Background
With continued economic development, labor costs keep rising, and more and more dining halls automate links such as dish preparation, ordering and cleaning through robots and artificial intelligence algorithms. In an intelligent restaurant, automatically pricing self-selected dishes before the meal and automatically sorting and recycling tableware after the meal can greatly reduce labor costs, while also reducing direct contact between people, which better guarantees food safety. However, because dinner plates come in many types and are easily occluded by the dishes placed in them, object detection algorithms often struggle to identify them accurately, so dinner plate identification accuracy is low.
Disclosure of Invention
The invention provides a dinner plate identification method, a dinner plate identification system, an electronic device and a storage medium to solve the problem of low dinner plate identification accuracy. Dinner plate recognition is performed by combining an instance segmentation technique with an object knowledge base, which improves identification accuracy and enables flexible configuration of information such as the shape, texture and color of the target to be identified.
The invention provides a dinner plate identification method, which comprises the following steps:
collecting a target dinner plate image;
inputting the target dinner plate image into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model; the dinner plate recognition model is obtained by training a pre-constructed instance segmentation model with training samples;
and carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
In one embodiment, the dinner plate recognition model is obtained based on the following steps:
collecting multi-frame dinner plate sample images;
performing data enhancement on any frame of dinner plate sample image, and determining a classification label of each frame of dinner plate sample image to construct a plurality of training samples;
and training a pre-constructed instance segmentation model with the plurality of training samples to obtain the dinner plate recognition model.
In one embodiment, determining the classification label of each frame of the dinner plate sample image comprises:
determining pixel information of each frame of the dinner plate sample image;
and carrying out data annotation on each pixel point in each frame of dinner plate sample image based on the pixel information to obtain the classification label.
In one embodiment, determining the knowledge base of objects comprises:
determining second characteristic information corresponding to different dinner plate types to establish mapping information of the dinner plate types and the second characteristic information;
determining the object knowledge base based on the mapping information.
In one embodiment, the dinner plate recognition of the target dinner plate image according to the first characteristic information and the object knowledge base comprises:
matching the first characteristic information with the second characteristic information in the object knowledge base to determine candidate dinner plate types of the target dinner plate based on the matching result;
determining the dinner plate type of the target dinner plate according to the confidence of each candidate dinner plate type.
In one embodiment, after the matching the first feature information with the second feature information in the object knowledge base, the method further includes:
and if the matching result is that the first characteristic information is not matched with the second characteristic information, updating the object knowledge base based on the first characteristic information.
In one embodiment, the collecting of the target dinner plate image comprises:
collecting video information of a target dinner plate;
and extracting at least one frame of image from the video information as the target dinner plate image.
The invention also provides a dinner plate identification system, comprising:
the acquisition module is used for acquiring a target dinner plate image;
the image instance segmentation module is used for inputting the target dinner plate image into a dinner plate recognition model so as to obtain first characteristic information output by the dinner plate recognition model;
the object knowledge base module is used for managing an object knowledge base;
and the dinner plate recognition module is used for carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the dinner plate identification method.
The invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the dinner plate identification method described above.
According to the dinner plate identification method, the dinner plate identification system, the electronic device and the storage medium provided by the invention, a target dinner plate image is collected; the target dinner plate image is input into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model, the model being obtained by training a pre-constructed instance segmentation model with training samples; and dinner plate recognition is performed on the target dinner plate image according to the first characteristic information and an object knowledge base. By combining instance segmentation with an object knowledge base, the invention improves the accuracy of dinner plate identification and enables flexible configuration of information such as the shape, texture and color of the target to be identified.
Drawings
In order to illustrate the technical solutions of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a dinner plate identification method provided by the present invention;
FIG. 2 is a schematic structural diagram of the dinner plate identification system provided by the present invention;
FIG. 3 is a schematic flow chart of dinner plate identification based on the dinner plate identification system provided by the present invention;
fig. 4 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The dinner plate identification method, system, electronic device and storage medium of the present invention are described below with reference to FIGS. 1 to 4.
Specifically, the invention provides a dinner plate identification method, and referring to fig. 1, fig. 1 is a flow chart of the dinner plate identification method provided by the invention.
The dinner plate identification method provided by the embodiment of the invention comprises the following steps:
it should be noted that the main executing body of the dinner plate identification method provided in the embodiment of the present invention may be a server or a computer device, such as a mobile phone, a tablet computer, a notebook computer, a palm computer, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like.
The dinner plate identification method provided by the embodiment of the invention is suitable for dining venues such as restaurants, cafeterias and dining halls; the embodiment of the invention takes a restaurant as the application scenario for analysis and explanation.
A fixed overhead camera is installed in the dinner plate detection area of the restaurant (for example, the plate pickup area or the plate return area). The camera is used to photograph the various dinner plates on a tray, shooting from a high overhead angle with the whole tray in view. During dinner plate recognition, a target dinner plate image is collected by the camera, where the target dinner plate image refers to an image of the dinner plates to be detected; one frame of the target dinner plate image may contain multiple target dinner plates of different types or styles.
The dinner plate recognition model is obtained by training a pre-constructed instance segmentation model with training samples, and is used to segment and recognize characteristic information of a dinner plate such as its position, shape, texture and color.
After the target dinner plate image is collected, it is input into the dinner plate recognition model to obtain the first characteristic information output by the model; that is, the pre-trained dinner plate recognition model performs instance segmentation on the target dinner plate image to obtain the first characteristic information of the target dinner plate, which includes characteristics such as the position, shape, texture and color of the dinner plate.
For example, the dinner plate recognition model performs instance segmentation on the target dinner plate image based on an image instance segmentation algorithm. Image instance segmentation separates the foreground and background of an object, achieving object separation at the pixel level, and can also distinguish different instances of objects belonging to the same class.
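As a minimal illustration of the pixel-level output described above, the sketch below builds per-instance masks from a segmentation map in which every pixel carries an instance ID (0 = background). The tiny array and the ID scheme are illustrative assumptions, not data from the patent.

```python
import numpy as np

# Hypothetical segmentation map: each pixel holds the instance ID of the
# plate it belongs to; 0 is background. Two plates of the same class still
# receive different IDs -- this is what distinguishes instance segmentation
# from semantic segmentation.
seg_map = np.array([
    [0, 1, 1, 0, 2, 2],
    [0, 1, 1, 0, 2, 2],
    [0, 0, 0, 0, 0, 0],
    [3, 3, 0, 0, 0, 0],
])

def instance_masks(seg_map):
    """Return {instance_id: boolean mask} for every foreground instance."""
    return {int(i): seg_map == i for i in np.unique(seg_map) if i != 0}

masks = instance_masks(seg_map)     # pixel-level object separation
foreground = seg_map != 0           # foreground/background separation
```

Each mask isolates one plate, so occluding dishes can be excluded from downstream feature extraction per plate.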
The embodiment of the invention achieves classification at the image pixel level through the dinner plate recognition model. By separating the foreground and background of the object, the influence of the dishes in the dinner plate on the recognition result is reduced, and compared with a traditional dinner plate recognition model based on an object detection algorithm, it offers better robustness, model interpretability and accuracy.
It should be noted that the object knowledge base may be understood as a dinner plate knowledge base, used for storing the second characteristic information corresponding to different dinner plate types, the hierarchical relationships between different dinner plate types, and information such as image fingerprints generated by a convolutional neural network.
After the first characteristic information of the target dinner plate is output by the dinner plate recognition model, dinner plate recognition is performed according to the first characteristic information and the second characteristic information in the object knowledge base. For example, a corresponding dinner plate type is matched in the object knowledge base based on the first characteristic information, and the dinner plate type of the target dinner plate is then determined based on the confidence of the matched dinner plate type.
After the dinner plate type of the target dinner plate is determined, self-selected dishes can be automatically priced by dinner plate type, and tableware can be automatically sorted and recycled by dinner plate type, which greatly reduces the labor cost of the restaurant while also reducing direct contact between people and better guaranteeing food safety.
According to the dinner plate identification method provided by the embodiment of the invention, a target dinner plate image is collected; the target dinner plate image is input into a dinner plate recognition model to obtain first characteristic information output by the model, the model being obtained by training a pre-constructed instance segmentation model with training samples; and dinner plate recognition is performed on the target dinner plate image according to the first characteristic information and an object knowledge base. By combining instance segmentation with an object knowledge base, the invention improves the accuracy of dinner plate identification and enables flexible configuration of information such as the shape, texture and color of the target to be identified.
Based on the above embodiment, the dinner plate recognition model is obtained by the following steps: collecting multiple frames of dinner plate sample images; performing data enhancement on each frame of dinner plate sample image, and determining a classification label for each frame to construct a plurality of training samples; and training a pre-constructed instance segmentation model with the plurality of training samples to obtain the dinner plate recognition model.
Dinner plate sample images are acquired by the camera of the dinner plate detection area and include sample images collected in different periods, such as images collected before the meal, during dish selection and after the meal. The steps of acquiring dinner plate sample images include:
(1) Images of one or more dinner plates placed on a tray are collected by the fixedly mounted camera, shooting from a high overhead angle with the whole tray in view.
(2) Before the meal, dinner plates of different styles are placed on the tray in random combinations, and the collected dinner plate sample images are used as the first original data set for model training; these sample images contain the tray and the empty dinner plates placed on it.
(3) During dish selection, the dish combinations self-selected by diners are photographed, and the collected dinner plate sample images are used as the second original data set for model training; these sample images contain the tray and the dinner plates and dishes placed on it.
(4) After the meal, the tray is photographed, and the collected dinner plate sample images are used as the third original data set for model training; these sample images contain the tray and the dinner plates and leftovers placed on it.
After dinner plate sample images are collected in the different periods, data enhancement is performed on each frame of dinner plate sample image, including operations such as flipping, random cropping, color jittering, translation, scaling, contrast transformation, noise perturbation, rotation and reflection. Data enhancement improves the visual quality of the images and the recognizability of important details or targets, and thus improves the accuracy of dinner plate identification.
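A few of the enhancement operations listed above can be sketched with plain NumPy; the image, the contrast factor and the noise scale are arbitrary illustrative choices, and a real pipeline would also transform the pixel-level labels consistently with each geometric operation.

```python
import numpy as np

def augment(img, rng):
    """Produce a few enhanced variants of one sample image: horizontal
    flip, 90-degree rotation, contrast stretch and additive noise. The
    factor 1.25 and noise scale 5 are arbitrary illustrative values."""
    return [
        img,
        np.fliplr(img),                                       # flipping
        np.rot90(img),                                        # rotation
        np.clip(img * 1.25, 0, 255),                          # contrast
        np.clip(img + rng.normal(0, 5, img.shape), 0, 255),   # noise
    ]

rng = np.random.default_rng(0)
img = np.full((8, 8), 100.0)   # stand-in for a dinner plate sample image
samples = augment(img, rng)
```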
The classification label of each frame of dinner plate sample image is then determined; for example, a general data labeling tool (such as LabelMe or LabelImg) is used to annotate the images of the first, second and third original data sets, obtaining the classification label of each frame of dinner plate sample image.
A training sample is determined from the data-enhanced dinner plate sample image and its classification label; that is, each training sample consists of a data-enhanced dinner plate sample image and its classification label. The training samples are further partitioned into a training set, a validation set and a test set according to a preset ratio.
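The sample partitioning step can be sketched as below. The split ratio in the source text is garbled, so the 6:2:2 default here is purely an illustrative assumption, exposed as a parameter.

```python
def split_samples(samples, ratios=(0.6, 0.2, 0.2)):
    """Partition training samples into training/validation/test sets.
    The 6:2:2 default ratio is an illustrative assumption, not the
    patent's stated value."""
    n_train = int(len(samples) * ratios[0])
    n_val = int(len(samples) * ratios[1])
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])

train_set, val_set, test_set = split_samples(list(range(10)))
```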
The pre-constructed instance segmentation model is trained with the plurality of training samples; for example, an instance segmentation deep learning model (such as Mask R-CNN, SOLO or Deep Snake) is trained with the training samples to obtain the dinner plate recognition model.
The embodiment of the invention collects multiple frames of dinner plate sample images, performs data enhancement on each frame, and determines a classification label for each frame to construct a plurality of training samples; a pre-constructed instance segmentation model is then trained with the training samples to obtain the dinner plate recognition model. The instance segmentation result of an image can thus be obtained quickly by the dinner plate recognition model, improving both the efficiency and the accuracy of dinner plate recognition.
Based on the above embodiment, determining the classification label of each frame of dinner plate sample image comprises: determining the pixel information of each frame of dinner plate sample image; and performing data annotation on each pixel point in each frame of dinner plate sample image based on the pixel information to obtain the classification label.
After the dinner plate sample images are collected, the pixel information of each frame is determined, and each pixel point in each frame is annotated based on that pixel information to obtain the classification label, i.e., a pixel-level classification label for each frame of image. The classification label contains the object ID (i.e., the dinner plate ID) corresponding to each pixel point in the image, and the type or style of the dinner plate can be determined from the object ID, for example: [abscissa: 1, ordinate: 1, object ID: 001].
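Pixel-level labels of the [abscissa, ordinate, object ID] form described above can be assembled into a per-frame label map, as in this sketch; the record layout and helper names are hypothetical.

```python
# Hypothetical pixel-level annotation records, following the
# [abscissa, ordinate, object ID] format from the text.
records = [
    {"x": 0, "y": 0, "object_id": "001"},
    {"x": 1, "y": 0, "object_id": "001"},
    {"x": 0, "y": 1, "object_id": "002"},
]

def build_label_map(records, width, height, background="000"):
    """Turn per-pixel records into a dense per-frame label map; pixels
    without a record are labelled as background."""
    label_map = [[background] * width for _ in range(height)]
    for r in records:
        label_map[r["y"]][r["x"]] = r["object_id"]
    return label_map

labels = build_label_map(records, width=2, height=2)
```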
The embodiment of the invention determines the pixel information of each frame of dinner plate sample image and annotates each pixel point in each frame based on the pixel information to obtain the classification label, thereby improving the accuracy of dinner plate identification.
Based on the above embodiment, determining the knowledge base of objects includes: determining second characteristic information corresponding to different dinner plate types to establish mapping information of the dinner plate types and the second characteristic information; determining the object knowledge base based on the mapping information.
It should be noted that different dinner plate types or styles correspond to different second characteristic information, which includes characteristics such as color, shape and texture. The object knowledge base is established based on the mapping information between dinner plate types and second characteristic information, and may store data in the form of a knowledge graph or a table.
For example, each dinner plate type is assigned an identity ID, and the shape, size, texture and color of the dinner plate are recorded, such as: [object ID: 001, shape: circular, size: 30 cm, texture: 001, color: red]. The hierarchical relationships between dinner plate types are labeled, producing parent-child pairs, for example: [parent object ID: 001, child object ID: 002]. A tree-shaped knowledge graph is finally obtained, and a fingerprint of each image is generated by a convolutional neural network, for example: [object ID: 001, fingerprint: 'feature string'].
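A table-style version of the object knowledge base described above might look like the following sketch; all concrete IDs, sizes and fingerprints are invented for illustration.

```python
# Table-style knowledge base: per-type second characteristic information
# plus an image fingerprint, and parent-child pairs encoding the tree of
# dinner plate styles. All values are illustrative assumptions.
knowledge_base = {
    "001": {"shape": "circular", "size_cm": 30, "texture": "001",
            "color": "red", "fingerprint": "a1b2c3"},
    "002": {"shape": "circular", "size_cm": 20, "texture": "002",
            "color": "white", "fingerprint": "d4e5f6"},
}
parent_child = [("001", "002")]   # style 002 is a sub-style of 001

def children_of(parent_id):
    """All direct sub-styles of a plate type."""
    return [child for parent, child in parent_child if parent == parent_id]

def lookup(object_id):
    """Second characteristic information for one plate type, or None."""
    return knowledge_base.get(object_id)
```

A tree of parent-child pairs like this supports the hierarchical management of dinner plate sub-classifications mentioned in the text.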
In the embodiment of the invention, the second characteristic information corresponding to different dinner plate types is determined to establish the mapping information between dinner plate types and second characteristic information, and the object knowledge base is determined based on the mapping information. By introducing the object knowledge base, the invention achieves dynamic configuration of the targets to be identified, enables hierarchical management and identification of dinner plate sub-classifications, and supports dynamic updating of the recognition capability through the knowledge base without retraining the model, reducing the operation and maintenance cost and difficulty of the system.
Based on the above embodiment, the performing dinner plate recognition on the target dinner plate image according to the first feature information and the object knowledge base includes: matching the first characteristic information with the second characteristic information in the object knowledge base to determine the dinner plate type of the target dinner plate based on the matching result; determining a dinner plate type for the target dinner plate according to the confidence of the dinner plate type for the target dinner plate.
After the first characteristic information of the target dinner plate is obtained by the dinner plate recognition model, it is matched against the second characteristic information of the different dinner plate types stored in the object knowledge base. The candidate dinner plate types of the target dinner plate are determined from the matching result together with the confidence of each, and the dinner plate type with the highest confidence is determined as the dinner plate type of the target dinner plate. Confidence is also referred to as reliability, confidence level or confidence coefficient.
For example, assuming that the target dinner plate image contains target dinner plates A, B and C, the first characteristic information of each of A, B and C is matched against the second characteristic information in the object knowledge base, the dinner plate type IDs with higher probability values and their confidences are determined from the matching results, and the dinner plate type with the highest confidence is determined as the type of each target dinner plate.
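The matching step can be sketched as follows. The confidence here is simply the fraction of matching attributes, a hypothetical stand-in for whatever similarity score a real system would compute from the segmentation features.

```python
def match_plate(first_info, knowledge_base):
    """Match extracted first characteristic information against the second
    characteristic information of every known plate type and return
    (best_type_id, confidence). Confidence = fraction of matching shared
    attributes -- an illustrative stand-in for a real similarity score."""
    best_id, best_conf = None, 0.0
    for plate_id, second_info in knowledge_base.items():
        shared = [k for k in first_info if k in second_info]
        if not shared:
            continue
        conf = sum(first_info[k] == second_info[k] for k in shared) / len(shared)
        if conf > best_conf:
            best_id, best_conf = plate_id, conf
    return best_id, best_conf

# Hypothetical knowledge base with two plate types
kb = {"001": {"shape": "circular", "color": "red"},
      "002": {"shape": "square", "color": "white"}}
best, conf = match_plate({"shape": "square", "color": "white"}, kb)
```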
According to the embodiment of the invention, dinner plate recognition is performed by combining the instance segmentation technique with the object knowledge base, so that the accuracy of dinner plate recognition is improved and flexible configuration of information such as the shape, texture and color of the target to be identified is realized.
Based on the above embodiment, after the first characteristic information is matched with the second characteristic information in the object knowledge base, the method further includes: if the matching result is that the first characteristic information does not match any second characteristic information, updating the object knowledge base based on the first characteristic information.
If the matching result is that the first characteristic information does not match any second characteristic information, it is judged that the characteristic information of the target dinner plate does not exist in the object knowledge base, indicating that the target dinner plate is a newly added dinner plate. In this case the object knowledge base is directly updated based on the first characteristic information of the target dinner plate, without retraining the model.
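A minimal sketch of this no-match update path, assuming a similarity threshold and an auto-generated type ID (both are illustrative choices, not specified by the patent):

```python
# When a plate's features match no stored entry (similarity below a
# threshold), register a new entry instead of retraining the model.
def match_or_register(first_features, knowledge_base, similarity,
                      threshold=0.8):
    """Return the matched type ID, or register the plate as a new type."""
    best_id, best_score = None, 0.0
    for type_id, feats in knowledge_base.items():
        score = similarity(first_features, feats)
        if score > best_score:
            best_id, best_score = type_id, score
    if best_score >= threshold:
        return best_id
    # No match: newly added dinner plate -> update knowledge base directly.
    new_id = f"plate_{len(knowledge_base)}"
    knowledge_base[new_id] = first_features
    return new_id

kb = {"round_small": [1.0, 0.0]}

def dot(a, b):  # toy similarity for the demo
    return sum(x * y for x, y in zip(a, b))

new_type = match_or_register([0.0, 1.0], kb, dot)
```

The unmatched plate is added under a fresh ID, so the next frame containing the same plate matches without any model change.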
The invention realizes dynamic configuration of the target to be identified by introducing the object knowledge base, supports hierarchical management and recognition of the subordinate classifications of the dinner plate, and supports dynamic update of the recognition model through the knowledge base without retraining, thereby reducing the operation and maintenance cost and difficulty of the system.
Based on the above embodiment, the acquiring of the target dinner plate image includes: collecting video information of a target dinner plate; and extracting at least one frame of image from the video information as the target dinner plate image.
In the dinner plate recognition process, video information of the target dinner plate is collected through a camera in the dinner plate detection area, and at least one frame of image is extracted from the video information to serve as the target dinner plate image. For example, a video frame extraction method is adopted to obtain the image of the target dinner plate, where video frame extraction means extracting frames from the video at set time intervals.
Optionally, the collected video information can be pre-screened: the video frames containing the target dinner plate are determined based on the recognition result, and only those frames are extracted.
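The interval-based frame extraction can be sketched as computing which frame indices to sample, assuming a known frame rate; a real system would then read those frames with a video library such as OpenCV:

```python
# Compute the frame indices to keep when one frame is sampled every
# interval_s seconds from a clip with a known frame rate.
def frame_indices(total_frames, fps, interval_s):
    """Indices of frames sampled every interval_s seconds."""
    step = max(1, int(fps * interval_s))
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 30 fps, sampled every 2 seconds
idx = frame_indices(total_frames=300, fps=30, interval_s=2.0)
```

Each selected index corresponds to one candidate target dinner plate image passed on to the recognition model.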
The embodiment of the invention collects video information of the target dinner plate and extracts at least one frame of image from the video information as the target dinner plate image, which improves image acquisition efficiency and thus dinner plate recognition efficiency.
Fig. 2 is a schematic structural diagram of a dinner plate recognition system provided by the present invention. Referring to fig. 2, an embodiment of the present invention provides a dinner plate recognition system, including:
the acquisition module is used for acquiring a target dinner plate image;
the image instance segmentation module is used for inputting the target dinner plate image into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model;
the object knowledge base module is used for managing an object knowledge base;
and the dinner plate recognition module is used for carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
For example, the acquisition module acquires video information of a target dinner plate through a camera of a dinner plate detection area, and then extracts at least one frame of image from the video information in a video frame extraction mode to serve as a target dinner plate image.
The image instance segmentation module segments all object instances in the target dinner plate image using the dinner plate recognition model to obtain the first characteristic information of all target dinner plates, such as position, shape, texture and color.
The object knowledge base module is used for managing the object knowledge base, which stores information such as the shape, texture and color of objects, as well as the category subordination relations among objects.
The dinner plate recognition module performs multi-modal fusion by combining the instance segmentation result with data from the object knowledge base to obtain, for each dinner plate in the target dinner plate image, the top-N dinner plate type IDs with the highest probabilities and their confidences, and then determines the dinner plate type in the target dinner plate image based on these type IDs and confidences.
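A hedged sketch of the top-N fusion in the recognition module: per-type scores from the segmentation model and the knowledge base are fused (a simple weighted average here; the fusion weights are assumptions, as the patent does not specify the fusion rule) and the N highest-confidence type IDs are kept:

```python
# Fuse per-type scores from the model and the knowledge base, then
# keep the N type IDs with the highest fused confidence.
def top_n_types(model_scores, kb_scores, n=3, w_model=0.5):
    """Return the n (type_id, fused_confidence) pairs with highest scores."""
    fused = {
        tid: w_model * model_scores.get(tid, 0.0)
             + (1 - w_model) * kb_scores.get(tid, 0.0)
        for tid in set(model_scores) | set(kb_scores)
    }
    ranked = sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]

model = {"round_small": 0.9, "rect_large": 0.2, "oval": 0.4}
kb =    {"round_small": 0.8, "rect_large": 0.6, "oval": 0.1}
best = top_n_types(model, kb, n=2)
```

The final plate type for each detection is then the head of this ranked list, as described above.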
The dinner plate recognition system further comprises a result returning module, which is used for encapsulating the model inference service as an API interface to be called by external systems.
Referring to fig. 3, in the embodiment of the present invention, the steps of dinner plate recognition based on the dinner plate recognition system are as follows:
(1) Acquiring an image after frame extraction, and performing example segmentation on the image by adopting a dinner plate recognition model;
(2) Judging whether the image contains a target object, namely a target dinner plate, or not based on the segmentation result;
(3) If target objects are included, classifying all target objects by combining the object knowledge base; if no target object is included, returning that no dinner plate of the specified style is present;
(4) Judging whether a dinner plate with a specified style appears or not based on the classification result;
(5) If a dinner plate of the specified style appears, returning information such as the shape, texture features and color of the dinner plate.
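The five steps above can be sketched end-to-end as follows; the segmentation model and the classifier are stand-in callables (assumptions for illustration), while the control flow mirrors the described pipeline:

```python
# Run segmentation -> classification -> style check on one extracted frame.
def identify_plates(image, segment, classify, specified_styles):
    """Return feature info for every specified-style plate in the frame."""
    instances = segment(image)                      # step (1)
    if not instances:                               # step (2)
        return []                                   # no target object
    results = []
    for inst in instances:                          # step (3)
        plate_type, features = classify(inst)
        if plate_type in specified_styles:          # step (4)
            results.append(features)                # step (5)
    return results

# toy stand-ins for the model and the knowledge-base classifier
segment = lambda img: ["blob"]
classify = lambda inst: ("round_small",
                         {"shape": "round", "color": (220, 220, 215)})
found = identify_plates("frame.jpg", segment, classify, {"round_small"})
```

In deployment, `segment` would be the trained instance segmentation model and `classify` the knowledge-base matching step described earlier.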
According to the embodiment of the invention, dinner plate recognition is performed by combining the instance segmentation technique with the object knowledge base, which improves dinner plate recognition accuracy, realizes dynamic configuration of the target dinner plate, supports hierarchical management and recognition of the subordinate classifications of the dinner plate, and dynamically updates the dinner plate recognition model through the object knowledge base without retraining, reducing system operation and maintenance cost and difficulty.
The embodiment of the invention provides a dinner plate recognition device which comprises a first acquisition module, a first image instance segmentation module and a first dinner plate recognition module.
The first acquisition module is used for acquiring a target dinner plate image;
the first image instance segmentation module is used for inputting the target dinner plate image into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model; the dinner plate recognition model is obtained by training a pre-constructed instance segmentation model with training samples;
and the first dinner plate recognition module is used for carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
According to the dinner plate recognition device provided by the embodiment of the invention, a target dinner plate image is collected; the target dinner plate image is input into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model, the dinner plate recognition model being obtained by training a pre-constructed instance segmentation model with training samples; and dinner plate recognition is performed on the target dinner plate image according to the first characteristic information and the object knowledge base. By combining the instance segmentation technique with the object knowledge base, the invention improves the accuracy of dinner plate recognition and realizes flexible configuration of information such as the shape, texture and color of the target to be identified.
In one embodiment, the dinner plate recognition apparatus further includes a training module, the training module being specifically configured to:
collecting multi-frame dinner plate sample images;
performing data enhancement on each frame of dinner plate sample image, and determining a classification label for each frame of dinner plate sample image to construct a plurality of training samples;
and training a pre-constructed instance segmentation model with the plurality of training samples to obtain the dinner plate recognition model.
In one embodiment, the training module is specifically configured to:
determining pixel information of each frame of the dinner plate sample image;
and carrying out data annotation on each pixel point in each frame of dinner plate sample image based on the pixel information to obtain the classification label.
In one embodiment, the dinner plate recognition apparatus further comprises a knowledge base determination module, the knowledge base determination module is specifically configured to:
determining second characteristic information corresponding to different dinner plate types to establish mapping information of the dinner plate types and the second characteristic information;
determining the object knowledge base based on the mapping information.
In one embodiment, the first dinner plate recognition module is specifically configured to:
matching the first characteristic information with the second characteristic information in the object knowledge base to determine the dinner plate type of the target dinner plate based on the matching result;
determining the dinner plate type of the target dinner plate according to the confidence of the dinner plate type of the target dinner plate.
In one embodiment, the first dinner plate recognition module is further configured to:
and if the matching result is that the first characteristic information is not matched with the second characteristic information, updating the object knowledge base based on the first characteristic information.
In one embodiment, the first acquisition module is specifically configured to:
collecting video information of a target dinner plate;
and extracting at least one frame of image from the video information as the target dinner plate image.
Fig. 4 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 4: a processor (processor) 410, a communication Interface 420, a memory (memory) 430 and a communication bus 440, wherein the processor 410, the communication Interface 420 and the memory 430 communicate with each other via the communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a dinner plate recognition method comprising:
collecting a target dinner plate image;
inputting the target dinner plate image into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model; the dinner plate recognition model is obtained by training a pre-constructed instance segmentation model with training samples;
and carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In another aspect, the present invention also provides a non-transitory computer-readable storage medium on which a computer program is stored; the computer program, when executed by a processor, performs the dinner plate recognition method provided above, the method comprising:
collecting a target dinner plate image;
inputting the target dinner plate image into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model; the dinner plate recognition model is obtained by training a pre-constructed instance segmentation model with training samples;
and carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A dinner plate recognition method, comprising:
collecting a target dinner plate image;
inputting the target dinner plate image into a dinner plate recognition model to obtain first characteristic information output by the dinner plate recognition model; the dinner plate recognition model is obtained by training a pre-constructed instance segmentation model with training samples;
and carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
2. The dinner plate recognition method of claim 1, wherein the dinner plate recognition model is obtained based on the following steps:
collecting multi-frame dinner plate sample images;
performing data enhancement on each frame of dinner plate sample image, and determining a classification label for each frame of dinner plate sample image to construct a plurality of training samples;
and training a pre-constructed instance segmentation model with the plurality of training samples to obtain the dinner plate recognition model.
3. The dinner plate recognition method of claim 2, wherein the determining the classification label for each frame of the dinner plate sample image comprises:
determining pixel information of each frame of the dinner plate sample image;
and carrying out data annotation on each pixel point in each frame of dinner plate sample image based on the pixel information to obtain the classification label.
4. The dinner plate recognition method of claim 1, wherein determining the object knowledge base comprises:
determining second characteristic information corresponding to different dinner plate types to establish mapping information of the dinner plate types and the second characteristic information;
determining the object knowledge base based on the mapping information.
5. The dinner plate recognition method of claim 4, wherein the performing dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base comprises:
matching the first characteristic information with the second characteristic information in the object knowledge base to determine a dinner plate type of the target dinner plate based on a matching result;
determining the dinner plate type of the target dinner plate according to the confidence of the dinner plate type of the target dinner plate.
6. The dinner plate recognition method of claim 5, wherein after the matching the first characteristic information with the second characteristic information in the object knowledge base, the method further comprises:
and if the matching result is that the first characteristic information is not matched with the second characteristic information, updating the object knowledge base based on the first characteristic information.
7. The dinner plate recognition method of claim 1, wherein the collecting of the target dinner plate image comprises:
collecting video information of a target dinner plate;
and extracting at least one frame of image from the video information as the target dinner plate image.
8. A dinner plate recognition system, applied to the dinner plate recognition method according to any one of claims 1 to 7, comprising:
the acquisition module is used for acquiring a target dinner plate image;
the image instance segmentation module is used for inputting the target dinner plate image into a dinner plate recognition model so as to obtain first characteristic information output by the dinner plate recognition model;
the object knowledge base module is used for managing an object knowledge base;
and the dinner plate recognition module is used for carrying out dinner plate recognition on the target dinner plate image according to the first characteristic information and the object knowledge base.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the dinner plate recognition method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the dinner plate recognition method according to any one of claims 1 to 7.