WO2022206043A1 - Smart refrigerator, access action recognition method, device, and medium - Google Patents

Smart refrigerator, access action recognition method, device, and medium

Info

Publication number
WO2022206043A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
information
target
image
smart refrigerator
Prior art date
Application number
PCT/CN2021/139238
Other languages
English (en)
French (fr)
Inventor
高语函
谢飞学
高雪松
陈维强
赵启东
曲磊
张璧程
孙菁
董秀莲
Original Assignee
海信集团控股股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202110348540.9A external-priority patent/CN115147916A/zh
Priority claimed from CN202110719382.3A external-priority patent/CN115601827A/zh
Priority claimed from CN202111084767.3A external-priority patent/CN115830589A/zh
Application filed by 海信集团控股股份有限公司
Publication of WO2022206043A1 publication Critical patent/WO2022206043A1/zh

Classifications

    • F — MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F25 — REFRIGERATION OR COOLING; COMBINED HEATING AND REFRIGERATION SYSTEMS; HEAT PUMP SYSTEMS; MANUFACTURE OR STORAGE OF ICE; LIQUEFACTION OR SOLIDIFICATION OF GASES
    • F25D — REFRIGERATORS; COLD ROOMS; ICE-BOXES; COOLING OR FREEZING APPARATUS NOT OTHERWISE PROVIDED FOR
    • F25D29/00 — Arrangement or mounting of control or safety devices

Definitions

  • the present application relates to the technical field of smart home appliances, and in particular, to smart refrigerators, access action recognition methods, devices, and media.
  • the management of ingredients includes the management of the type, quantity, preservation period and storage location information of the ingredients.
  • the storage location information of the ingredients is determined based on the access action of the ingredients.
  • In the related art, the spatial position of the item is obtained, and the position of the hand is then used to determine whether the action is a put-in action or a take-out action.
  • However, since the spatial position of the item is determined based on a depth camera, when the item is far away from the camera, the calculated spatial position of the item deviates significantly from the actual spatial position, which leads to the problem of inaccurate or unrecognized access actions.
  • the present application provides a smart refrigerator, an access action recognition method, equipment and medium, which are used to solve the problems of inaccurate and unrecognized access action recognition in the prior art.
  • the present application provides a smart refrigerator, and the smart refrigerator includes:
  • the collection unit is configured to collect the first target RGB image and collect the second target RGB image;
  • control unit configured to:
  • determine a first target ROI area containing the hand area in the collected first target RGB image, and a second target ROI area containing the hand area in a second target RGB image collected after the collection time of the first target RGB image;
  • if it is recognized that the first target ROI area is located in the food storage area, determine a first area of the first target ROI area in the first target RGB image and a second area of the second target ROI area in the second target RGB image; and determine the access action according to the sizes of the first area and the second area.
  • In this way, the access action is determined according to the sizes of the areas of the first target ROI and the second target ROI, which improves the accuracy of access action recognition.
  • the present application also provides a method for identifying an access action, the method comprising:
  • determining a first target ROI area containing the hand area in a collected first target RGB image, and a second target ROI area containing the hand area in a second target RGB image collected after the collection time of the first target RGB image;
  • if it is recognized that the first target ROI area is located in the food storage area, determining a first area of the first target ROI area in the first target RGB image and a second area of the second target ROI area in the second target RGB image;
  • determining the access action according to the sizes of the first area and the second area.
  • the application also provides a smart refrigerator, including:
  • a sensor arranged in the image acquisition blind area of the acquisition unit
  • a processor configured to determine a visual detection result from images collected by the acquisition unit, to determine blind spot detection information through the sensor, and to determine, according to the visual detection result and the blind spot detection information, the user's access action to the items in the household appliance and/or the storage location of the items in the household appliance.
  • In this solution, an image is collected by the acquisition unit to determine a visual detection result, and blind area detection information is determined by a sensor disposed in the image acquisition blind area of the acquisition unit, so that the access action and/or storage location can be determined according to the visual detection result and the blind area detection information.
  • the present application also provides an information identification method, the method comprising:
  • the visual detection result is determined by the acquisition unit installed on the smart refrigerator for storing items, and the blind spot detection information is determined by the sensor installed on the smart refrigerator, wherein the sensor is arranged in the image acquisition blind area of the acquisition unit;
  • according to the visual detection result and the blind spot detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator is determined.
  • the application also provides a smart refrigerator, comprising:
  • a collection unit located on the top of the refrigerator body
  • the processor is configured to acquire image information of the food material through the collection unit, and obtain the weight information of the food material through the gravity sensor; and determine the type information of the food material according to the image information and the weight information.
  • the image information of the food material and the weight information of the food material are obtained, and the type information of the food material is determined according to the image information and the weight information.
  • the accuracy of food identification is further improved.
  • the application also provides a method for identifying ingredients, the method comprising:
  • type information of the food material is determined.
  • the present application further provides an electronic device, the electronic device includes at least a processor and a memory, and the processor is configured to implement the steps of any of the above methods when executing a computer program stored in the memory.
  • the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, implements the steps of any of the above methods.
  • FIG. 1 is a schematic structural diagram of a smart refrigerator provided by the application.
  • FIG. 2a is a schematic diagram of a ROI area provided by some embodiments of the present application.
  • FIG. 2b is a schematic diagram of a ROI area provided by some embodiments of the present application.
  • FIG. 3a is a schematic diagram of a first area of a drawer provided by some embodiments of the present application.
  • FIG. 3b is a schematic diagram of a first area of a drawer provided by some embodiments of the present application.
  • FIG. 4 is a schematic diagram of the interior space of a smart refrigerator provided by some embodiments of the present application.
  • Fig. 5a is a schematic diagram showing that the first target ROI region provided by some embodiments of the present application is not located in the first region;
  • FIG. 5b is a schematic display diagram of the first target ROI region located in the first region according to some embodiments of the present application.
  • FIG. 6 is a schematic flowchart of an access action identification provided by some embodiments of the present application.
  • FIG. 7 is a schematic flowchart of an access action identification provided by some embodiments of the present application.
  • FIG. 8 is a schematic diagram of the position of the sensor on the refrigerator provided by some embodiments of the present application.
  • FIG. 9 is a schematic flowchart of an information identification method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of another information identification method provided by an embodiment of the present application.
  • FIG. 11 to FIG. 14 are schematic diagrams of images of a user accessing ingredients collected by a collection unit according to an embodiment of the present application.
  • FIG. 15 is a schematic flowchart of an information identification method provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of ingredients (green mango on the left, papaya on the right) provided by an embodiment of the application that are similar in appearance and color but differ greatly in weight.
  • FIG. 17 is a schematic diagram of ingredients (cherry tomatoes on the left, tomatoes on the right) provided by an embodiment of the application that are similar in appearance and color but differ greatly in weight.
  • FIG. 18 is a schematic structural diagram of another smart refrigerator provided by an embodiment of the application.
  • FIG. 19 is a schematic diagram of an overall system framework provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of the fusion of the image recognition result and the weight measured by the gravity sensor provided by an embodiment of the application.
  • FIG. 21 is a schematic diagram of the core functional framework of the solution fusing image recognition and gravity-sensor weight measurement provided by an embodiment of the application.
  • FIG. 22 is a schematic flowchart of the solution fusing image recognition and gravity-sensor weight measurement provided by an embodiment of the application.
  • FIG. 23 is a schematic flowchart of a method for identifying ingredients according to an embodiment of the present application.
  • FIG. 24 is a schematic structural diagram of an electronic device provided by the application.
  • FIG. 25 is a schematic structural diagram of another electronic device provided by the application.
  • FIG. 26 is a schematic structural diagram of an information identification apparatus provided by an embodiment of the present application.
  • In the solution of the present application, a first target ROI area containing a hand area is determined in the first target RGB image, the first area of the first target ROI area and the second area of the second target ROI area in the second target RGB image are determined, and the access action is determined according to the sizes of the first area and the second area.
  • the present application provides a smart refrigerator, an access action identification method, a device, and a medium.
  • FIG. 1 is a schematic structural diagram of a smart refrigerator provided by the application, and the smart refrigerator includes:
  • Collection unit 101: configured to collect a first target RGB image and a second target RGB image;
  • Control unit 102: configured to determine a first target ROI (region of interest) area containing the hand region in the collected first target RGB image, and a second target ROI area containing the hand region in a second target RGB image collected after the collection time of the first target RGB image;
  • if it is recognized that the first target ROI area is located in the food storage area, determine a first area of the first target ROI area in the first target RGB image and a second area of the second target ROI area in the second target RGB image;
  • the access action is determined according to the sizes of the first area and the second area.
  • a collection unit is installed in the top area inside the smart refrigerator.
  • the collection unit collects RGB images in real time.
  • the collection unit may include an image collection device.
  • the acquisition unit preferentially uses an image acquisition device with a short exposure time.
  • the second target RGB image may be the first RGB image collected after the collection time of the first target RGB image, or may be an RGB image collected a preset time interval after the collection time of the first target RGB image.
  • a first target region of interest (ROI) including the hand region is determined in the first target RGB image, and a second target ROI area including the hand region is determined in the second target RGB image.
  • the hand area includes the user's hand and the food material held by the user's hand.
  • the hand region can be determined based on traditional vision methods, such as hand skin color detection or moving target detection.
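  • As an illustration of the traditional-vision route just mentioned, below is a minimal sketch of skin-color-based hand detection (the YCrCb thresholds are commonly cited values, not parameters from this application):

```python
import cv2
import numpy as np

def detect_hand_roi(rgb_image):
    """Rough hand localization via skin-color thresholding (illustrative only)."""
    ycrcb = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2YCrCb)
    # Typical skin-tone bounds in YCrCb; an assumption, not from this application.
    mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
    # Remove small speckles before extracting the hand contour.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Treat the largest skin-colored blob as the hand region.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x, y, w, h)
```

In practice this would be combined with moving-target detection, or replaced by the learned detector described below.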
  • When recognizing the access action based on target RGB images, in order to save resources, the smart refrigerator performs access action recognition only when the target ROI area in a target RGB image is located in the food storage area; if the target ROI area is not located in the food material storage area, access action identification is not performed until a target RGB image whose target ROI area is located in the food material storage area appears.
  • The first area of the first target ROI area in the first target RGB image and the second area of the second target ROI area in the second target RGB image are then determined.
  • The access action is determined according to the sizes of the first area and the second area: if the first area is larger than the second area, a storage action is determined; if the first area is smaller than the second area, a take-out action is determined.
  • In the present application, the first target ROI area in the first target RGB image and the second target ROI area in the second target RGB image are determined, and when it is recognized that the first target ROI area is located in the food storage area, the access action is determined according to the sizes of the first target ROI area and the second target ROI area, which improves the accuracy of access action recognition.
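  • A minimal sketch of this store/take decision is shown below (helper and label names are hypothetical; the application does not prescribe an implementation):

```python
def classify_access_action(first_roi, second_roi):
    """Classify the access action by comparing hand-ROI areas across two frames.

    Each ROI is a bounding box (x, y, w, h) in image coordinates. With the
    camera looking down from the top of the cabinet, a hand moving away from
    the camera (storing food) shrinks in the image, while a hand moving toward
    the camera (taking food out) grows.
    """
    area = lambda box: box[2] * box[3]  # w * h
    first_area, second_area = area(first_roi), area(second_roi)
    if first_area > second_area:
        return "store"         # hand receding from the camera
    if first_area < second_area:
        return "take-out"      # hand approaching the camera
    return "undetermined"      # equal areas: no decision from this frame pair
```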
  • control unit 102 is specifically configured as:
  • When determining the target ROI area, a method based on deep learning can be used. Specifically, when determining the ROI area of the hand region contained in a target RGB image, the target RGB image is input into the trained network model, and the RGB image output by the network model, carrying information identifying the target ROI area, is received. The information of the target ROI area may be a shaded area marked by a bounding box in the RGB image, or may be the coordinate information of the target ROI area in the RGB image.
  • FIG. 2 a is a schematic diagram of a ROI area provided by some embodiments of the present application. As shown in FIG. 2 a , the framed area in FIG. 2 a is the ROI area.
  • FIG. 2b is a schematic diagram of a ROI area provided by some embodiments of the present application. As shown in FIG. 2b, the framed area in FIG. 2b is the ROI area.
  • control unit 102 is specifically configured as:
  • the network model is trained according to the first location information and the second location information.
  • the training of the network model needs to be completed in advance.
  • A training set is pre-configured, in which sample RGB images are stored, and the information of the ROI area is pre-marked in each sample RGB image.
  • The RGB images marked with the information of the ROI area are input into the network model and the network model is trained: the loss value of the network model is calculated from the network output and the pre-marked ROI area, and the parameters of the network model are then adjusted based on the loss value. If the loss value reaches the minimum loss value, or the number of iterations reaches the preset maximum number of iterations, the training of the network model is considered complete.
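  • A schematic version of that training loop follows (the framework and the box-regression loss are assumptions; the text only specifies loss-driven parameter updates with a minimum-loss or maximum-iteration stopping rule):

```python
import torch
from torch import nn

def train_roi_model(model, loader, max_iters=10000, min_loss=1e-3):
    """Train an ROI-prediction network on pre-annotated sample RGB images.

    `loader` yields (images, target_boxes) pairs, where target_boxes holds
    the pre-marked ROI coordinates. Smooth L1 is a common box-regression
    loss; the text does not name a specific one.
    """
    criterion = nn.SmoothL1Loss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    iters = 0
    while iters < max_iters:
        for images, target_boxes in loader:
            pred_boxes = model(images)                 # predicted ROI coordinates
            loss = criterion(pred_boxes, target_boxes)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                           # adjust parameters by loss
            iters += 1
            if loss.item() <= min_loss or iters >= max_iters:
                return model                           # stopping rule from the text
    return model
```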
  • control unit 102 is specifically configured as:
  • if the first target ROI area is located in the first area, it is determined that the first target ROI area is located in the food storage area.
  • Specifically, the first area where the drawer is located in the first target RGB image is determined, and it is judged whether the first target ROI area is located in the first area; if so, it is determined that the first target ROI area is located in the food storage area, specifically at the drawer position of the food storage area.
  • In the coordinate system constructed in the first target RGB image, each coordinate of the edge of the first target ROI area is determined; if all the coordinates of the edge of the first target ROI area fall within the range of the first area, it is determined that the first target ROI area is located in the first area, that is, in the food storage area.
  • Fig. 3a is a schematic diagram of the first area of the drawer provided by some embodiments of the present application. As shown in Fig. 3a, the framed area is the first area of the drawer.
  • Fig. 3b is a schematic diagram of the first area of the drawer provided by some embodiments of the present application. As shown in Fig. 3b, the framed area is the first area of the drawer.
  • control unit 102 is specifically configured as:
  • the smart refrigerator also includes food storage areas other than the drawer, such as compartments and door-side storage.
  • A second area is pre-stored for the food storage areas of the smart refrigerator other than the drawer, and each coordinate of the edge of the first target ROI area is determined; if all the coordinates of the edge of the first target ROI area are within the range of the second area, it is determined that the first target ROI area is located in the second area, and therefore in the food storage area.
  • For example, the coordinates of the upper left corner of the drawer edge in the first target RGB image are (105, 90) and the coordinates of its lower right corner are (200, 75), while the pre-saved coordinates of the upper left corner of the closed drawer are (105, 75).
  • Since the detected drawer coordinates differ from the pre-saved ones, it is determined that the drawer is in an open state, and the first area of the drawer is determined to be the rectangular box with (105, 90), (105, 75), (200, 75) and (200, 90) as vertices.
  • The first target ROI area is a rectangular frame with (160, 82), (160, 70), (180, 70) and (180, 82) as vertices; it is determined from the coordinates that the first target ROI area is located within the first area, and therefore that the first target ROI area is located in the food material storage area.
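  • The containment test described above reduces to a few comparisons (axis conventions assumed; the hypothetical ROI below is chosen to lie fully inside the drawer area):

```python
def roi_inside_region(roi_box, region_box):
    """Check whether an ROI bounding box lies entirely inside a region box.

    Boxes are (x_min, y_min, x_max, y_max) in image coordinates. This mirrors
    the test that every edge coordinate of the target ROI falls within the
    range of the drawer's first area.
    """
    rx0, ry0, rx1, ry1 = roi_box
    gx0, gy0, gx1, gy1 = region_box
    return gx0 <= rx0 and gy0 <= ry0 and rx1 <= gx1 and ry1 <= gy1

drawer_area = (105, 75, 200, 90)   # first area from the example above
hand_roi = (160, 78, 180, 88)      # hypothetical ROI fully inside the drawer
print(roi_inside_region(hand_roi, drawer_area))  # True -> in food storage area
```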
  • FIG. 4 is a schematic diagram of the interior space of a smart refrigerator provided by some embodiments of the present application. As shown in FIG. 4 , the collection unit of the smart refrigerator is located at the top middle position of the interior space, and the food storage area of the smart refrigerator includes drawers and compartments.
  • Fig. 5a is a schematic diagram showing that the first target ROI region is not located in the first region provided by some embodiments of the present application. As shown in Fig. 5a, the first target ROI region is not located in the first region of the drawer.
  • FIG. 5b is a schematic display diagram of the first target ROI area located in the first area according to some embodiments of the present application. As shown in FIG. 5b, the first target ROI area is located in the first area of the drawer.
  • FIG. 6 is a schematic flowchart of an access action identification provided by some embodiments of the present application. As shown in FIG. 6 , the process includes:
  • S601: Determine a first target ROI (region of interest) containing the hand region in the collected first target RGB image, and a second target ROI region containing the hand region in the second target RGB image collected after the collection time of the first target RGB image.
  • S602: Identify whether the first target ROI area is located in the food material storage area; if so, execute S603, and if not, end.
  • S603: Determine a first area of the first target ROI region in the first target RGB image and a second area of the second target ROI region in the second target RGB image.
  • S604: Determine the access action according to the sizes of the first area and the second area.
  • control unit 102 is specifically configured as:
  • the current action is a take-out action.
  • The access action can be determined based on the first area of the first target ROI region in the first target RGB image and the second area of the second target ROI region in the second target RGB image collected after the collection time of the first target RGB image.
  • the location of the capturing unit for capturing RGB images in the smart refrigerator is fixed, and is located at the top of the smart refrigerator.
  • The hand area changes as the action occurs: when food is being stored, the target ROI area gradually moves away from the image acquisition device, so in the RGB images collected by the acquisition unit the area of the target ROI gradually decreases; when food is being taken out, the target ROI area gradually approaches the image acquisition device, so the area of the target ROI gradually increases.
  • Accordingly, if the first area of the first target ROI area of the first target RGB image is smaller than the second area of the second target ROI area of the second target RGB image, the current action is approaching the image acquisition device and is determined to be a take-out action; if the first area is larger than the second area, the current action is moving away from the image acquisition device and is determined to be a storage action.
  • control unit 102 is specifically configured as:
  • The first position information of the drawer edge in the first target RGB image determines the state information of the drawer in the first target RGB image: if the first position information is consistent with the pre-stored second position information, the drawer is determined to be in a closed state; if they are inconsistent, the drawer is determined to be in an open state.
  • Specifically, a coordinate system is constructed in the first target RGB image according to a pre-defined construction method, the coordinates of the drawer edge in the first target RGB image are calculated, and they are compared with the pre-stored coordinates of the drawer edge when the drawer is closed; if the coordinates are inconsistent, it is determined that the drawer is in an open state.
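  • A small sketch of that comparison (the pixel tolerance is an assumption; the text only requires detecting inconsistency):

```python
def drawer_is_open(detected_edge, closed_edge, tol=3):
    """Compare detected drawer-edge points against the pre-stored closed-state
    points; any deviation beyond `tol` pixels means the drawer is open.

    Each edge is a list of (x, y) points in the image coordinate system.
    """
    return any(abs(dx - cx) > tol or abs(dy - cy) > tol
               for (dx, dy), (cx, cy) in zip(detected_edge, closed_edge))
```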
  • FIG. 7 is a schematic diagram of an access action identification process provided by some embodiments of the present application. As shown in FIG. 7 , the process includes:
  • S701: Determine a first target ROI (region of interest) containing the hand region in the collected first target RGB image, and a second target ROI region containing the hand region in the second target RGB image collected after the collection time of the first target RGB image.
  • S702: If it is recognized that the first target ROI area is located in the food storage area, determine a first area of the first target ROI area in the first target RGB image and a second area of the second target ROI area in the second target RGB image.
  • S703: Determine the access action according to the sizes of the first area and the second area.
  • determining the target ROI region of interest that includes the hand region in the collected target RGB image includes:
  • the identifying that the first target ROI region is located at the food material storage location includes:
  • if the first target ROI area is located in the first area, it is determined that the first target ROI area is located in the food storage area.
  • the identifying that the first target ROI region is located at the food material storage location includes:
  • the determining the access action according to the size of the first area and the second area includes:
  • the current action is a take-out action.
  • the training process of the network model includes:
  • the network model is trained according to the first information and the second information.
  • determining that the drawer is in an open state includes:
  • In some embodiments, in order to more accurately identify the user's access operation information for the smart refrigerator, a sensor can be set in the image acquisition blind area of the acquisition unit; the visual detection result is determined by collecting images with the acquisition unit (that is, the detection result obtained by the action detection process described above), the blind spot detection information is determined by the sensor, and, according to the visual detection result and the blind spot detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator is determined.
  • A gravity sensor may also be set on the shelf of the refrigerator: the image information of the ingredients is acquired through the acquisition unit, and the weight information of the ingredients is acquired through the gravity sensor; the type information of the ingredients is then determined according to the image information and the weight information.
  • the refrigerator in order to help the user better manage the ingredients in the refrigerator, the refrigerator needs to automatically identify the ingredients in the refrigerator and the storage location.
  • One solution of the related art is to use a static computer vision solution, that is, by taking a static picture in the refrigerator and analyzing the types of ingredients.
  • the ingredients stacked outside will block the ingredients stored in the interior, resulting in incomplete identification results.
  • Another solution of the related art is a dynamic computer vision solution, that is, by recognizing the user's dynamic pick-and-place action, the user can monitor in real time what ingredients are put in and what ingredients are taken out.
  • the field of view of the dynamic vision solution needs to cover the entire space of the refrigerator.
  • the smart refrigerator may include a collection unit for collecting images, and a sensor may be arranged in an image collection blind area of the collection unit.
  • The visual detection result is determined by collecting images with the acquisition unit, and the blind spot detection information is determined by the sensor; according to the visual detection result and the blind spot detection information, the user's access actions to the items in the smart refrigerator and/or the storage location of the items in the smart refrigerator are determined. In this way, the user's access operation information can be recognized, the overall detection accuracy is improved, and cost is saved because too many cameras are not needed.
  • The embodiments of the present application propose full-space access-action and access-position recognition for the refrigerator based on proximity sensors and 3D computer vision, especially blind-spot recognition on the first layer of the partition and on the door.
  • The acquisition unit can be called a camera, such as a color camera, or a color camera combined with a depth camera.
  • the sensor disposed in the image acquisition blind area of the acquisition unit may be a proximity sensor (for example, an infrared sensor), and the proximity sensor is installed in the blind area that is difficult to cover by the acquisition unit.
  • the types of proximity sensors are not limited, such as infrared sensors, through-beam sensors, etc., as long as objects can be detected entering or leaving the detection range.
  • an infrared sensor is used as an example for description below.
  • As shown in FIG. 8, the area below the two dotted lines is the area that can be covered by the field of view of the acquisition unit (a color camera, or a combination of a color camera and a depth camera).
  • One implementation is to obtain detection information from the infrared sensor in real time. When the sensor detects that an object is approaching or moving away, it can be considered that the user has put in or taken out an item at this position; combined with the visual detection result (that is, the result obtained by using the camera to perform motion detection), it is then determined whether the access action is a deposit or a take-out, and finally the detection result of the access action and the access position is obtained.
  • The flow is shown in Figure 9, where the visual detection method uses color data or depth data, and the detection method includes but is not limited to image processing methods, deep learning methods, computer vision methods, etc. This implementation is simple and direct.
  • Another implementation is to fuse the real-time detection results of the infrared sensor and of vision (detection using a camera). The specific process is as follows: first, pre-set the detection range of the infrared sensor, such as the image acquisition blind area of the camera, and read the infrared sensor detection information in real time; then, use the depth camera or color camera to track the moving target in the field of view. If the centroid of the moving target is within the field of view of the camera, follow the process of visually detecting the access action and the access position.
  • If the centroid of the moving target is outside the camera's field of view, the detection information of the infrared sensor is used to determine whether an object enters or leaves the corresponding position: when an object enters the position it is regarded as an entry action, and when an object leaves the position it is regarded as an exit action. Combined with the region recognition result of the color camera's region of interest (ROI), the access information of the ingredients at the corresponding location (that is, whether the user stores or takes out ingredients) can be obtained. For example, when a user deposits ingredients onto the first layer of the partition, this position is a blind spot not visible to the camera.
  • In this case, visual detection can usually only predict that the location may be the first layer or the second layer of the partition.
  • If the sensor on the first layer detects a target entering, the operation is a deposit and the position is the first layer of the partition; if the sensor on the first layer does not detect a target entering, the operation is still a deposit, but the position is the second layer of the partition.
  • The specific process is shown in Figure 10.
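  • The disambiguation rule just described can be summarized in a few lines (a sketch; the visual-result format and label names are assumptions):

```python
def resolve_blind_spot_deposit(visual_candidates, layer1_sensor_triggered):
    """Disambiguate a deposit whose visual estimate is ambiguous between the
    first and second partition layers, using the layer-1 proximity sensor.

    `visual_candidates` is the set of layers the vision pipeline considers
    possible; `layer1_sensor_triggered` is True if the infrared sensor on the
    first layer detected an object entering its range.
    """
    if visual_candidates == {"partition_layer_1", "partition_layer_2"}:
        layer = "partition_layer_1" if layer1_sensor_triggered else "partition_layer_2"
        return ("deposit", layer)
    # Unambiguous visual result: trust vision directly.
    return ("deposit", next(iter(visual_candidates)))
```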
  • a camera module is installed above the refrigerator.
  • the module pops up, triggering the acquisition of images of the stored food during the food access process.
  • the camera module is retracted, and the image acquisition ends.
  • the blind spot position is, for example, the position near the first shelf (or called the compartment) above the refrigerator door as shown in FIG. 8 .
  • The infrared sensor can be installed according to the actual blind spot position of the camera. Taking the sensor installation in Figure 8 as an example, the sensor is installed on the edge of the door compartment; the installation position can be adjusted according to actual needs, such as the interior of the compartment, the long side of the compartment, or the short side of the compartment. If installed inside the box, it can be placed on the left or right side of the box; if installed in a drawer, it can be placed on the side wall of the long or short side of the drawer, and so on.
  • The number of sensors may be one or more, and changes in the installation position or number of sensors do not affect the essence of the embodiments of the present application.
  • Note that the infrared sensor may cover unintended locations: for example, a sensor installed on the first layer of the box partition whose detection range covers the second layer may lead to false triggering, so its intensity and detection range should be adjusted according to actual needs.
  • The upper part of the refrigerator is a detection blind spot, as shown by the lower rectangular box in Fig. 11 to Fig. 14, where the hand accesses the first layer of the partition; because the image cannot cover this area, it is difficult to determine whether the specific location is the first layer or the second layer of the partition.
  • the position range is shown by the rectangular boxes on both sides in Fig. 11 - Fig. 14 .
  • A blind-spot ROI area can be preset, taking the position shown by the lower rectangular box in Figures 11 to 14 as an example; when a moving hand enters this ROI area, whether the access is to the first layer or the second layer of the partition needs to be judged based on the detection information of the sensor.
  • Figures 11-14 show the positional changes of the moving hand during the removal action.
  • The actual blind area can be set in the color image, in the depth image, or in both for fusion; the size of the ROI area can be adjusted according to actual needs, and differences in the position or size of the blind-area ROI do not affect the embodiments of this application.
  • the moving hand in the image is detected and tracked.
  • The classifier can be trained by machine learning or a deep neural network, and the judgment is made according to the coordinates of the moving hand. If the moving hand is outside the above-mentioned blind-area ROI, the visual detection method is used to detect the access action and the access position directly from the color map, the depth map, or their combination; if the moving hand is within the blind-area ROI, fusion is required.
  • In that case, the detection information of the infrared sensor is used to determine the access action and the access location.
  • the data of the infrared sensor needs to be read in real time.
  • If the vision algorithm detects that a moving object enters the detection area in Figure 11 and gradually approaches and enters the ROI blind area shown by the lower rectangular box, and the infrared sensor placed on the first layer of the partition detects an object entering (that is, the user puts ingredients on the first layer of the refrigerator partition), then this action is an entry and the access position is the first layer of the partition; if the sensor does not detect any object entering, this action is still an entry, but the access position is the second layer of the partition.
  • the judgment process of the hand exit action is the same as the judgment process of the entry action.
  • The hand exit can be the action of taking food out of the refrigerator, or the action following a put-in, which does not affect the implementation of the embodiments of the present application and is not repeated here.
  • The visual detection side can be set to wait a period of time T for the detection result of the infrared sensor; that is, after the camera captures the image and the visual detection result (the detection of the user's access action) is determined, the system waits up to T for the detection result obtained by the sensor.
  • The setting of the threshold T is important: if it is too short, the detection result of the infrared sensor may not have arrived yet, resulting in missed detection; if it is too long, the action may already have completed, or even the next action may have started, while the result has not been updated.
  • Therefore, this application optimizes the selection of T by estimating it from the motion speed V of the hand across the image sequence frames and the spatial distance S, where S can be obtained directly from the actual positional relationship between the camera and the refrigerator. The information fusion step is then optimized as follows: when the moving hand enters the upper edge of the ROI shown by the lower rectangular box in Fig. 11 to Fig. 14, candidate times T1, T2, ... are calculated in real time; when it reaches the lower edge of that ROI, the average or maximum of the previously calculated T1, T2, ... is taken as the threshold T.
  • the threshold value T can also be preset, and does not need to be calculated according to the speed of the user's hand movement in real time.
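  • A sketch of this adaptive wait threshold (the exact estimator is not given; T_i = S / V_i is a natural reading of "estimate according to the motion speed V and the spatial distance S"):

```python
def adaptive_wait_threshold(hand_speeds_px_per_s, distance_px, use_max=False):
    """Estimate the sensor-wait threshold T from hand motion.

    `hand_speeds_px_per_s` are per-frame hand speeds measured while the hand
    crosses the blind-spot ROI; `distance_px` is the remaining distance S from
    the ROI edge to the shelf, derived from the camera/refrigerator geometry.
    Each candidate T_i = S / V_i; the final T is their mean or maximum.
    """
    candidates = [distance_px / v for v in hand_speeds_px_per_s if v > 0]
    if not candidates:
        return None  # caller falls back to a preset T when speed is unknown
    return max(candidates) if use_max else sum(candidates) / len(candidates)
```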
  • the visual detection result is the result obtained by collecting an image by a camera and performing detection and analysis on the image.
  • the color camera intercepts the trigger area of the hand, and uses this part of the information to recognize the object.
  • When there is an object in the hand and the "entry" message is detected, the ingredients are being deposited; when there is an object in the hand and the "exit" message is detected, the ingredients are being taken out.
  • the blind spot identification proposed in the embodiment of the present application is only a preferred embodiment.
  • The access action identification and access position identification at any spatial position in the refrigerator can be combined with sensors to improve identification accuracy, and all such combinations fall within the scope of the embodiments of the present application.
  • sensors include, but are not limited to, proximity sensors, photoelectric sensors, single-point TOF, etc., and sensors with similar principles are within the scope of the embodiments of the present application.
  • an information identification method provided by an embodiment of the present application includes:
  • S1501. Determine a visual detection result by using a camera installed on a household appliance for storing items, and determine blind spot detection information by using a sensor installed on the household appliance, where the sensor is set in an image capture blind area of the camera;
  • S1502. Determine, according to the visual detection result and the blind spot detection information, the user's access action to the items in the household appliance and/or the storage position of the items in the household appliance.
  • When it is determined through the visual detection result that the user has performed an access action on the items in the household appliance within the preset region of interest (ROI) of the image acquisition blind area, the system waits, for a preset time period, for the blind spot detection information from the sensor, and the storage position of the item in the household appliance is determined according to whether the blind spot detection information is acquired through the sensor within that preset time period.
  • the preset duration is determined according to a speed of a user's access action, and the speed of the user's access action is determined by detecting an image captured by the camera.
  • The method further includes: when the user performs an access operation on the items in the household appliance outside the image acquisition blind area, directly using the image captured by the camera to determine the user's access action to the items in the household appliance and/or the storage location of the item in the household appliance.
  • When the sensor detects an object entering or leaving, the processor determines that the access location is the installation location of the sensor, and finally determines the user's access action to the items in the home appliance in combination with the images collected by the camera.
  • The embodiment of the present application provides an information identification method for detecting access operations in each area of the refrigerator; in the blind area of the camera, the camera is combined with the sensor to identify the access action and the access position, which improves detection accuracy and saves cost.
  • In a food management refrigerator using an RFID solution, every ingredient that the user wishes to manage needs to be affixed with an RFID tag; for users, the operation is relatively complicated and the management cost is high.
  • the RGB color camera is usually equipped with a voice recognition module. For the wrong ingredients identified by the color camera, the user can change the type of ingredients through voice error correction, but frequent voice error correction will greatly reduce the user's experience of using the refrigerator.
  • the embodiments of the present application provide an intelligent refrigerator and an ingredient identification method, so as to improve the accuracy of ingredient identification.
  • High-precision identification of refrigerator ingredients is realized through multi-sensor fusion, with low structural complexity and high identification accuracy. Specifically, the result of the deep learning network model's recognition of the food images collected by the RGB color camera is fused with the weight data of the food collected by the gravity sensor, and a weight restriction is added to the food recognition result, improving the accuracy of food recognition and the user experience.
  • RGB color camera food identification has always been a technical problem in industry and academia, mainly including the following points:
  • Lighting directly affects the imaging quality of ingredients in the image: the same food shows different color characteristics under different lighting. In addition, a smooth, reflective surface or uneven light on the surface will cause the food to appear too bright or too dark; for example, snow pears appear as white luminous bodies under strong light, and red dates appear black under weak light. Such lighting differences can cause ingredients to be recognized as other categories or not recognized at all.
  • the technical solutions provided by the embodiments of the present application incorporate weight data during image recognition, so as to distinguish food ingredients that are similar in shape but have a large difference in weight.
  • the hardware structure of the refrigerator provided by the embodiment of the present application includes, for example, the following five parts:
  • Processor: also called the central processing unit, that is, the main control of the refrigerator in Figure 18.
  • The central processing unit is located at the top of the refrigerator and is used for ingredient identification based on the deep learning network model and for processing the gravity sensor data.
  • RGB camera: a color RGB camera is used to collect food images.
  • The RGB camera adopts a dynamic identification scheme and uses a high frame rate (120 fps).
  • Dynamic identification is distinguished from static identification: static recognition identifies the ingredients after storage, while dynamic recognition identifies them during the storage process.
  • Gravity sensor: a gravity sensor is installed on each compartment of the refrigerator compartment and each shelf of the refrigerator door, so that the weight information of ingredients can be introduced into the identification of ingredient types to improve accuracy.
  • In addition, an increase or decrease in the weight on a compartment/shelf can determine the access and position of the ingredients, providing position information for the access action.
  • Display screen: the display screen in the embodiment of the present application is embedded in the refrigerator door body; when the user opens the refrigerator door to access ingredients, updates to information such as the access position, type, and action of the ingredients are easy to see, without having to close the refrigerator door to view a display on the outside of the door.
  • Externally embedded prompt light bar: used to prompt the user with food access information.
  • the embedded prompt light bar displays the breathing light effect.
  • Microphone array: includes a microphone for collecting voice and a speaker for playing voice, used to remind the user by voice of the location, type, action and other information of the stored ingredients.
  • For example, the microphone array announces by voice: "Put an apple in the first layer".
  • functions such as voice query weather can also be provided.
  • the perception layer composed of sensors perceives the user's operation, and transmits the data to the decision-making layer.
  • the decision layer logically processes the data of the perception layer, and sends the processing results to the output layer for users to obtain intuitively.
  • the perception layer including RGB cameras, gravity sensors, radio frequency identification (Radio Frequency Identification, RFID) modules, scanners, door switch signal detection equipment, etc.
  • the decision-making layer is the central processing unit of the refrigerator, which has strong data processing capabilities.
  • Output layer: visually displays the results output by the processor to the user.
  • the hardware devices include a built-in display screen, a microphone array, and an embedded prompt light bar.
  • the identification of similar ingredients is a difficult problem in image recognition, but for similar ingredients of different sizes, the introduction of the weight information of the ingredients can distinguish the ingredients well.
  • Taking cherry tomatoes and tomatoes as an example: a single tomato and a cherry tomato are very similar in appearance but differ greatly in size/weight. If the two are distinguished only by color RGB images, misidentifications are inevitable.
  • The embodiment of the present application fuses the result of the deep learning network's recognition of the food images collected by the RGB color camera with the food weight data collected by the gravity sensor, and adds a weight restriction condition to the recognition result of the food, which improves the accuracy of food recognition.
  • the embodiment of the present application adopts a deep learning method for food identification, including but not limited to (deep) neural network, convolutional neural network, deep belief network, deep generative network, deep reinforcement learning and other network structures or a derivative model thereof.
  • Deep learning image recognition is affected by illumination, occlusion, and the high visual similarity of some ingredients, limitations that can hardly be overcome by algorithmic methods alone; introducing the weight information of the ingredients solves the problem of distinguishing similar-looking ingredients.
  • the weight distribution of the ingredients is similar to the Gaussian distribution.
  • the probability of measuring the weight and predicting that the ingredients are the correct ingredients can be expressed by the Gaussian distribution:
  • w_a is the preset expected weight of the ingredient;
  • w_b is the weight of the ingredient measured by the gravity sensor, that is, the actual weight of the ingredient;
  • α, σ are adjustment factors, which are preset constants and can be set according to empirical values;
  • P_weight is the probability that an ingredient is predicted to be the correct ingredient according to its weight, referred to as the weight recognition probability.
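  • Putting those definitions together, formula (1) referenced later in the text presumably takes a Gaussian form along these lines (a reconstruction; the original equation did not survive extraction):

$$P_{\mathrm{weight}} = \alpha \cdot \exp\!\left(-\frac{(w_b - w_a)^2}{2\sigma^2}\right) \qquad (1)$$

The probability peaks when the measured weight w_b equals the expected weight w_a and decays as they diverge; α scales the peak and σ controls how quickly the probability falls off.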
  • the result of image recognition fuses the weight measured by the gravity sensor, which can well distinguish similar ingredients of different sizes.
  • For example, the image recognition results show that the probability of identifying a snow pear as an Elizabeth melon (91%) is 9 percentage points higher than the probability of identifying it as a snow pear (82%); based on the image recognition results alone, the recognition would be wrong. But the measured weight output by the gravity sensor is 270 g, while the expected weight of an Elizabeth melon is 1200 g and that of a snow pear is 350 g. For a weight of 270 g, the weight recognition probability of Elizabeth melon is extremely small (0.002%), while that of snow pear is high (92.312%). By fusing the image recognition results with the gravity sensing data, Elizabeth melons and snow pears can be well differentiated.
  • The final result in the last column of Table 2 is obtained after the weight recognition probability P_weight and the image recognition probability P_image are fused.
  • P_weight and P_image can be multiplied, and the obtained product can be used as the final recognition probability.
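  • Replaying the snow pear example above as a quick check (probabilities taken from the text; the fusion rule is the multiplication just stated):

```python
# Image recognition probability P_image and weight recognition probability
# P_weight for the two candidates from the snow pear example.
candidates = {
    "Elizabeth melon": {"p_image": 0.91, "p_weight": 0.00002},  # 0.002%
    "snow pear":       {"p_image": 0.82, "p_weight": 0.92312},  # 92.312%
}

# Fused score P = P_image * P_weight, as described in the text.
fused = {name: p["p_image"] * p["p_weight"] for name, p in candidates.items()}
print(max(fused, key=fused.get))
# "snow pear": 0.82 * 0.92312 ≈ 0.757 vs. 0.91 * 0.00002 ≈ 0.000018
```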
  • the input layer, hidden layer, and output layer described in FIG. 20 are all technical terms in the deep learning network model, and will not be explained here.
  • The core function of the embodiment of the present application is the management of ingredients, that is, informing the refrigerator in a more convenient way which ingredients it contains: the result of RGB color camera image recognition of the ingredients is fused with the weight information obtained by the gravity sensor to distinguish ingredients that are similar in shape but differ greatly in size, improving the accuracy of ingredient recognition, enabling better management of information such as the quantity and shelf life of the ingredients in the refrigerator, and allowing reasonable meals to be recommended to users based on the ingredients present.
  • the built-in large-screen mode makes the dynamic identification of ingredients more user-friendly in terms of user perception.
  • the user's operation of the refrigerator is displayed in real time, and the user can face the large screen of the refrigerator during the process of accessing the ingredients, avoiding the problem that the user needs to move to the side of the refrigerator to see the access information of the ingredients during the process of accessing the ingredients.
  • the speaker at the bottom of the screen can also broadcast the location and type information of the user's access to the ingredients in real time.
  • the food identification method provided in the embodiment of the present application specifically includes the following steps:
  • the initialization of the image recognition module in this step specifically includes:
  • Visual sensor pops up, camera parameter initialization, deep learning network parameter weight loading, etc.
  • Camera parameters: used to correct image distortion.
  • Deep learning network parameter weight loading: mainly loading the weights of the deep learning network recognition model.
  • the deep learning network recognition model is implemented by a central processing unit.
  • Gravity sensor module initialization: including powering on and zeroing the gravity sensor.
  • the image recognition module may include an RGB camera, a central processing unit, and the like.
  • The RGB color camera captures images of the ingredients accessed by the user and sends them to the deep learning network recognition model in the central processor, which identifies the ingredients in the captured images.
  • the gravity sensor outputs the weight information of the current ingredients to the central processor.
  • The central processing unit determines the weight recognition probability P_weight of the food material from the weight information of the current food material; for the specific calculation, refer to formula (1) above.
  • The weight recognition probability P_weight and the image recognition probability P_image may be multiplied, or combined by other methods such as weighted multiplication, to obtain the final food material recognition probability P.
  • Then step S202 is performed repeatedly.
  • a method for identifying ingredients provided in an embodiment of the present application includes:
  • S2301. Acquire the image information of the food material and the weight information of the food material;
  • S2302. Determine the type information of the food material according to the image information and the weight information.
  • The image information of the food material and the weight information of the food material are obtained, and the type information of the food material is determined according to the image information and the weight information. Since a weight restriction condition is added to the recognition result of the food material, the accuracy of food identification is further improved.
  • determine the type information of the ingredients which specifically includes:
  • for the ingredient set, identify the ingredients according to the weight information, and determine the weight recognition probability of each ingredient in the ingredient set;
  • the food type information is determined.
  • Regarding the ingredient set, take the four ingredients listed in Table 2 as an example. Through image recognition, first determine which ingredients have an image recognition probability greater than 50%; for example, when recognizing a snow pear, the image recognition probabilities of the four ingredients shown in Table 2 are all greater than fifty percent. Then, within the ingredient set, further determine the weight recognition probability corresponding to each ingredient by means of gravity recognition. Finally, for each ingredient in the set, multiply its image recognition probability and weight recognition probability, and take the ingredient with the highest product as the final recognition result.
  • determine the food type information which specifically includes:
  • the image recognition probability and the weight recognition probability are multiplied or weighted, and the obtained product is used as the final ingredient recognition probability.
  • identify the ingredients according to the weight information, and determine the weight identification probability of the ingredients specifically including:
  • where w_a is the expected weight of the food;
  • w_b is the measured weight of the ingredients;
  • ⁇ , ⁇ are preset adjustment factors.
  • the image information of the food is acquired through an RGB camera.
  • the weight information of the food is acquired through a gravity sensor.
  • FIG. 24 is a schematic structural diagram of an electronic device provided by the application.
  • the application also provides an electronic device, as shown in FIG. 24 , including: a processor 801, a communication interface 802, a memory 803 and a communication bus 804, wherein the processor 801, the communication interface 802, and the memory 803 complete the communication with each other through the communication bus 804;
  • a computer program is stored in the memory 803, and when the program is executed by the processor 801, the processor 801 is caused to perform the following steps:
  • determine a first target ROI area containing the hand area in the collected first target RGB image, and a second target ROI area containing the hand area in the second target RGB image collected after the collection time of the first target RGB image;
  • if it is recognized that the first target ROI area is located in the food storage area, determine a first area of the first target ROI area in the first target RGB image and a second area of the second target ROI area in the second target RGB image;
  • determine the access action according to the sizes of the first area and the second area.
  • In a possible implementation, determining the target ROI containing the hand region in a collected target RGB image includes: inputting the target RGB image into a trained network model, and receiving an RGB image output by the network model in which information identifying the target ROI is marked.
  • In a possible implementation, identifying that the first target ROI is located at a food material storage location includes: if it is determined that a drawer of the smart refrigerator is open, determining a first region of the drawer in the first target RGB image; and if the first target ROI is located within the first region, determining that the first target ROI is located in the food material storage area.
  • In a possible implementation, identifying that the first target ROI is located at a food material storage location includes: judging, according to a pre-stored second region of the food material storage areas of the smart refrigerator other than the drawer, whether the first target ROI is located within the second region; and if so, determining that the first target ROI is located in the food material storage area.
  • In a possible implementation, determining the access action according to the sizes of the first and second areas includes: if the first area is greater than the second area, determining that the current action is a storing action; if the first area is smaller than the second area, determining that the current action is a take-out action.
  • In a possible implementation, the training process of the network model includes: acquiring any sample RGB image in a training set, where first information of an ROI is pre-annotated in the sample RGB image; inputting the sample RGB image into the network model and outputting second information of the ROI in the sample RGB image; and training the network model according to the first information and the second information.
  • In a possible implementation, determining that the drawer is open includes: determining first position information of the drawer edge in the first target RGB image; and if the first position information is inconsistent with pre-stored second position information of the drawer edge in the closed state, determining that the drawer is open.
  • Since the principle by which the above electronic device solves the problem is similar to that of the access action recognition method, its implementation may refer to the above embodiments, and repeated description is omitted.
  • The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, and so on; for ease of presentation, only one thick line is used in the figure, but this does not mean that there is only one bus or one type of bus.
  • The communication interface 802 is used for communication between the above electronic device and other devices.
  • The memory may include random access memory (RAM) and may also include non-volatile memory (NVM), such as at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
  • The above processor may be a general-purpose processor, including a central processing unit, a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
  • In other embodiments, the processor 801 may be configured to call the program instructions stored in the memory 803 and, according to the obtained program, execute:
  • acquiring image information of a food material and weight information of the food material;
  • determining the type information of the food material according to the image information and the weight information.
  • Optionally, determining the type information of the food material according to the image information and the weight information specifically includes:
  • performing food material recognition according to the image information, and determining a food material set and the image recognition probability of each food material in the set, where each image recognition probability in the set is greater than a preset value;
  • within the food material set, performing food material recognition according to the weight information, and determining the weight recognition probability of each food material in the set;
  • determining the food material type information according to the image recognition probabilities and the weight recognition probabilities.
  • Optionally, determining the food material type information according to the image recognition probability and the weight recognition probability specifically includes: multiplying (or weight-multiplying) the two probabilities and using the resulting product as the final food material recognition probability.
  • Optionally, performing food material recognition according to the weight information and determining the weight recognition probability of the food material specifically includes determining Pweight by the formula above, where w_a is the expected weight of the food material, w_b is the measured weight of the food material, and σ and ω are preset adjustment factors.
  • Optionally, the image information of the food material is acquired through an RGB camera.
  • Optionally, the weight information of the food material is acquired through a gravity sensor.
  • In some embodiments, a smart refrigerator provided by the embodiments of the present application includes the above electronic device, for example the refrigerator master control in FIG. 18.
  • Optionally, the smart refrigerator further includes:
  • an image acquisition device located on the top of the refrigerator body, such as the RGB camera in FIG. 18;
  • gravity sensors located on the refrigerator shelves; optionally, each shelf layer and each door rack may be provided with one or more gravity sensors.
  • Optionally, the smart refrigerator further includes one of, or a combination of, the following devices:
  • a built-in display screen on the inside of the refrigerator door;
  • a microphone array on the inside of the refrigerator door;
  • a light bar on the inside of the refrigerator door.
  • In addition to the above devices, the smart refrigerator provided by the embodiments of the present application also includes some devices commonly used in refrigerators, which are not repeated here.
  • An embodiment of the present application further provides an electronic device, which may be a smart refrigerator, including:
  • an acquisition unit 11 for collecting images;
  • a sensor 12 arranged in the image acquisition blind area of the acquisition unit;
  • a processor 13 configured to determine a visual detection result from the images collected by the acquisition unit, determine blind-area detection information through the sensor, and determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
  • In this device, the camera collects images to determine a visual detection result, and blind-area detection information is determined by the sensor disposed in the camera's image acquisition blind area, so that the user's access action to the items in the smart refrigerator and/or the storage position of the items is determined from both. The user's access operation information on the smart refrigerator used for storing items is thereby recognized, and the overall detection accuracy is improved and cost is saved without requiring too many cameras.
  • Optionally, when the processor 13 determines, through the visual detection result, a user access action to the items within the preset region of interest (ROI) of the image acquisition blind area, it further combines the sensor's blind-area detection information to determine the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
  • Optionally, within a preset duration after the processor 13 determines, through the visual detection result, a user access action within the preset ROI of the image acquisition blind area, it waits for blind-area detection information from the sensor, and determines the storage position of the item in the smart refrigerator according to whether the blind-area detection information is obtained from the sensor within the preset duration.
  • Optionally, the preset duration is determined according to the speed of the user's access action, which is determined by detecting the images collected by the camera.
  • Optionally, the processor 13 is further configured to: when the user performs an access operation on the items in the smart refrigerator outside the image acquisition blind area, determine the user's access action and/or the storage position of the items directly from the images collected by the camera.
  • Optionally, the processor 13 first judges whether the sensor detects the user; if so, the processor determines that the access position is the sensor's installation position and finally determines, in combination with the images collected by the camera, the user's access action to the items in the smart refrigerator and/or their storage position; otherwise, the processor determines that the access position is not the sensor's installation position and likewise finally determines the access action and/or storage position in combination with the collected images.
  • Optionally, the camera is installed on the upper part of the smart refrigerator and starts collecting images of the user accessing food materials when the user opens the refrigerator door; the sensor is installed on the edge of a door compartment of the smart refrigerator.
  • An embodiment of the present application further provides an information identification apparatus, including:
  • a first unit 21, configured to determine a visual detection result through an acquisition unit installed on a smart refrigerator for storing items, and to determine blind-area detection information through a sensor installed on the smart refrigerator, where the sensor is arranged in the image acquisition blind area of the acquisition unit;
  • a second unit 22, configured to determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
  • Optionally, within a preset duration after the second unit 22 determines, through the visual detection result, a user access action within the preset ROI of the image acquisition blind area, it waits for blind-area detection information from the sensor, and determines the storage position of the item according to whether the blind-area detection information is obtained from the sensor within the preset duration.
  • Optionally, the preset duration is determined according to the speed of the user's access action, which is determined by detecting the images collected by the camera.
  • Optionally, the second unit 22 is further configured to: when the user performs an access operation on the items in the household appliance outside the image acquisition blind area, determine the user's access action to the items in the smart refrigerator and/or their storage position directly from the images collected by the camera.
  • Optionally, the second unit 22 first judges whether the sensor detects the user; if so, it determines that the access position is the sensor's installation position; otherwise, it determines that it is not; in either case, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator is finally determined in combination with the images collected by the camera.
  • In other embodiments, the first unit 21 may be used to acquire image information of a food material and weight information of the food material, and the second unit 22 may be used to determine the type information of the food material according to the image information and the weight information.
  • Optionally, determining the type information of the food material according to the image information and the weight information specifically includes:
  • performing food material recognition according to the image information, and determining a food material set and the image recognition probability of each food material in the set, where each image recognition probability in the set is greater than a preset value;
  • within the food material set, performing food material recognition according to the weight information, and determining the weight recognition probability of each food material in the set;
  • determining the food material type information according to the image recognition probabilities and the weight recognition probabilities.
  • Optionally, determining the food material type information according to the image recognition probability and the weight recognition probability specifically includes: multiplying (or weight-multiplying) the two probabilities and using the resulting product as the final food material recognition probability.
  • Optionally, performing food material recognition according to the weight information and determining the weight recognition probability of the food material specifically includes determining Pweight by the formula above, where w_a is the expected weight of the food material, w_b is the measured weight, and σ and ω are preset adjustment factors.
  • Optionally, the image information of the food material is acquired through an RGB camera.
  • Optionally, the weight information of the food material is acquired through a gravity sensor.
  • On the basis of the above embodiments, the present application further provides a computer-readable storage medium storing a computer program executable by a processor; when the program runs on the processor, the processor implements the following steps:
  • determining a first target ROI containing a hand region in a collected first target RGB image, and a second target ROI containing the hand region in a second target RGB image collected after the collection time of the first target RGB image;
  • if the first target ROI is identified as being located in a food material storage area, determining a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image;
  • determining the access action according to the sizes of the first area and the second area.
  • In a possible implementation, determining the target ROI containing the hand region in a collected target RGB image includes: inputting the target RGB image into a trained network model, and receiving an RGB image output by the network model in which information identifying the target ROI is marked.
  • In a possible implementation, identifying that the first target ROI is located at a food material storage location includes: if it is determined that a drawer of the smart refrigerator is open, determining a first region of the drawer in the first target RGB image; and if the first target ROI is located within the first region, determining that the first target ROI is located in the food material storage area.
  • In a possible implementation, identifying that the first target ROI is located at a food material storage location includes: judging, according to a pre-stored second region of the food material storage areas other than the drawer, whether the first target ROI is located within the second region; and if so, determining that it is located in the food material storage area.
  • In a possible implementation, determining the access action according to the sizes of the first and second areas includes: if the first area is greater than the second area, determining that the current action is a storing action; if the first area is smaller than the second area, determining that the current action is a take-out action.
  • In a possible implementation, the training process of the network model includes: acquiring any sample RGB image in a training set, where first information of an ROI is pre-annotated in the sample RGB image; inputting the sample RGB image into the network model and outputting second information of the ROI in the sample RGB image; and training the network model according to the first information and the second information.
  • In a possible implementation, determining that the drawer is open includes: determining first position information of the drawer edge in the first target RGB image; and if the first position information is inconsistent with pre-stored second position information of the drawer edge in the closed state, determining that the drawer is open.
  • Since the principle by which the above computer-readable medium solves the problem is similar to that of the access action recognition method, the steps implemented after the processor executes the computer program in the medium may refer to the above embodiments, and repeated description is omitted.

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Thermal Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Cold Air Circulating Systems And Constructional Details In Refrigerators (AREA)

Abstract

The present application provides a smart refrigerator, an access action recognition method, a device, and a medium. In the present application, a first target region of interest (ROI) containing a hand region is determined in a collected first target RGB image, and a second target ROI containing the hand region is determined in a second target RGB image collected after the collection time of the first target RGB image. If the first target ROI is identified as being located in a food material storage area, a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image are determined, and the access action is determined according to the sizes of the first area and the second area, which improves the accuracy of access action recognition.

Description

Smart refrigerator, access action recognition method, device, and medium
Cross-reference to related applications
This application claims priority to the Chinese patent application No. 202110719382.3, entitled "Smart refrigerator, access action recognition method, device, and medium", filed with the China Patent Office on June 28, 2021; to the Chinese patent application No. 202110348540.9, entitled "Household appliance, information identification method and apparatus", filed with the China Patent Office on March 31, 2021; and to the Chinese patent application No. 202111084767.3, entitled "Food material identification method and device, and smart device", filed with the China Patent Office on September 16, 2021, the entire contents of each of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of smart home appliances, and in particular to smart refrigerators, access action recognition methods, devices, and media.
Background
With the progress of science and technology, the smart refrigerator has appeared in households as the most common white-goods appliance in family life. Food material management has become a core function of the smart refrigerator; it covers the management of the type, quantity, freshness period, and storage location of food materials, where the storage location information is determined based on the access actions performed on the food materials.
In some implementations that recognize refrigerator food access actions, the spatial position of the item is usually obtained based on a depth camera, the position of the hand is then determined, and whether the user's current action is a storing action or a taking-out action is recognized from the spatial position of the item and the position of the hand. However, when the spatial position of the item is determined by a depth camera and the item is far from the camera, the calculated spatial position deviates considerably from the actual spatial position, which leads to inaccurate or failed access action recognition.
Summary
The present application provides a smart refrigerator, an access action recognition method, a device, and a medium, which are used to solve the problems in the prior art that access action recognition is inaccurate or fails.
In a first aspect, the present application provides a smart refrigerator, including:
an acquisition unit configured to collect a first target RGB image and a second target RGB image;
a control unit configured to:
determine a first target region of interest (ROI) containing a hand region in the collected first target RGB image, and a second target ROI containing the hand region in the second target RGB image collected after the collection time of the first target RGB image;
if the first target ROI is identified as being located in a food material storage area, determine a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image, and determine the access action according to the sizes of the first area and the second area.
In the present application, the first target ROI containing the hand region in the collected first target RGB image and the second target ROI containing the hand region in the later second target RGB image are determined; if the first target ROI is identified as being located in a food material storage area, the first area of the first target ROI and the second area of the second target ROI are determined, and the access action is determined according to the sizes of the two areas. By deciding the access action from the areas of the two target ROIs whenever the first target ROI lies in a food material storage area, the accuracy of access action recognition is improved.
In a second aspect, the present application further provides an access action recognition method, including:
determining a first target ROI containing a hand region in a collected first target RGB image, and a second target ROI containing the hand region in a second target RGB image collected after the collection time of the first target RGB image;
if the first target ROI is identified as being located in a food material storage area, determining a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image;
determining the access action according to the sizes of the first area and the second area.
In a third aspect, the present application further provides a smart refrigerator, including:
an acquisition unit for collecting images;
a sensor arranged in the image acquisition blind area of the acquisition unit;
a processor configured to determine a visual detection result from the images collected by the acquisition unit, determine blind-area detection information through the sensor, and determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the household appliance and/or the storage position of the items in the household appliance.
In the above smart refrigerator provided by the present application, images are collected by the acquisition unit to determine a visual detection result, and blind-area detection information is determined by the sensor disposed in the image acquisition blind area of the acquisition unit, so that the user's access action to the items in the smart refrigerator and/or the storage position of the items is determined from both. The user's access operation information on the smart refrigerator used for storing items is thereby recognized, and the overall detection accuracy is improved and cost is saved without requiring too many cameras.
In a fourth aspect, the present application further provides an information identification method, including:
determining a visual detection result through an acquisition unit installed on a smart refrigerator for storing items, and determining blind-area detection information through a sensor installed on the smart refrigerator, where the sensor is arranged in the image acquisition blind area of the acquisition unit;
determining, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
In a fifth aspect, the present application further provides a smart refrigerator, including:
an acquisition unit located on the top of the refrigerator body;
a gravity sensor located on a refrigerator shelf;
a processor configured to acquire image information of a food material through the acquisition unit and weight information of the food material through the gravity sensor, and to determine the type information of the food material according to the image information and the weight information.
In the present application, the image information of the food material and its weight information are acquired, and the type information of the food material is determined from both; since a weight restriction condition is added to the food material recognition result, the accuracy of food material identification is further improved.
In a sixth aspect, the present application further provides a food material identification method, including:
acquiring image information of a food material and weight information of the food material;
determining the type information of the food material according to the image information and the weight information.
In a seventh aspect, the present application further provides an electronic device, including at least a processor and a memory, the processor being configured to implement the steps of any one of the above methods when executing a computer program stored in the memory.
In an eighth aspect, the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of any one of the above methods.
Brief Description of the Drawings
To describe the technical solutions of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a smart refrigerator provided by the present application;
FIG. 2a is a schematic diagram of an ROI provided by some embodiments of the present application;
FIG. 2b is a schematic diagram of an ROI provided by some embodiments of the present application;
FIG. 3a is a schematic diagram of the first region of a drawer provided by some embodiments of the present application;
FIG. 3b is a schematic diagram of the first region of a drawer provided by some embodiments of the present application;
FIG. 4 is a schematic diagram of the internal space of a smart refrigerator provided by some embodiments of the present application;
FIG. 5a is a schematic display diagram in which the first target ROI is not located within the first region, provided by some embodiments of the present application;
FIG. 5b is a schematic display diagram in which the first target ROI is located within the first region, provided by some embodiments of the present application;
FIG. 6 is a schematic flowchart of access action recognition provided by some embodiments of the present application;
FIG. 7 is a schematic flowchart of access action recognition provided by some embodiments of the present application;
FIG. 8 is a schematic diagram of sensor positions on the refrigerator provided by some embodiments of the present application;
FIG. 9 is a schematic flowchart of an information identification method provided by an embodiment of the present application;
FIG. 10 is a schematic flowchart of another information identification method provided by an embodiment of the present application;
FIGS. 11 to 14 are schematic diagrams of images, collected by the acquisition unit, of a user accessing food materials, provided by an embodiment of the present application;
FIG. 15 is a schematic flowchart of an information identification method provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of food materials that are similar in shape and color but differ greatly in weight (green mango on the left, papaya on the right), provided by an embodiment of the present application;
FIG. 17 is a schematic diagram of food materials that are similar in shape and color but differ greatly in weight (cherry tomato on the left, tomato on the right), provided by an embodiment of the present application;
FIG. 18 is a schematic structural diagram of another smart refrigerator provided by an embodiment of the present application;
FIG. 19 is a schematic diagram of the overall system framework provided by an embodiment of the present application;
FIG. 20 is a schematic diagram of fusing the image recognition result with the weight measured by the gravity sensor, provided by an embodiment of the present application;
FIG. 21 is a schematic diagram of the core functional framework of the fusion scheme of image recognition and gravity-sensor weight measurement, provided by an embodiment of the present application;
FIG. 22 is a schematic flowchart of the fusion scheme of image recognition and gravity-sensor weight measurement, provided by an embodiment of the present application;
FIG. 23 is a schematic flowchart of a food material identification method provided by an embodiment of the present application;
FIG. 24 is a schematic structural diagram of an electronic device provided by the present application;
FIG. 25 is a schematic structural diagram of another electronic device provided by the present application;
FIG. 26 is a schematic structural diagram of an information identification apparatus provided by an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
In the present application, after a collected first target RGB image and a second target RGB image collected after the collection time of the first target RGB image are received, a first target ROI containing a hand region is determined in the first target RGB image, and a second target ROI containing the hand region is determined in the second target RGB image. If the first target ROI is identified as being located in a food material storage area, a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image are determined, and the access action is determined according to the sizes of the first and second areas.
In some embodiments, to improve the accuracy of access action recognition, the present application provides a smart refrigerator, an access action recognition method, a device, and a medium.
FIG. 1 is a schematic structural diagram of a smart refrigerator provided by the present application. The smart refrigerator includes:
an acquisition unit 101 configured to collect a first target RGB image and a second target RGB image;
a control unit 102 configured to determine a first target ROI containing a hand region in the collected first target RGB image, and a second target ROI containing the hand region in the second target RGB image collected after the collection time of the first target RGB image;
if the first target ROI is identified as being located in a food material storage area, determine a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image;
and determine the access action according to the sizes of the first area and the second area.
In the present application, an acquisition unit is installed in the top area inside the smart refrigerator. After the refrigerator door is opened, the acquisition unit collects RGB images in real time; specifically, the acquisition unit may include an image acquisition device. To reduce blur in the captured RGB images caused by hand motion, the acquisition unit preferentially uses an image acquisition device with a short exposure time.
After the first target RGB image and the second target RGB image are collected, the position of the hand in the target RGB images must first be tracked in order to recognize the access action. The second target RGB image may be the first RGB image collected after the collection time of the first target RGB image, or an RGB image collected a preset time interval after that collection time.
Specifically, the first target region of interest (ROI) containing the hand region is determined in the first target RGB image, and the second target ROI containing the hand region is obtained from the second target RGB image. In the present application, the hand region includes the user's hand and the food material held in the user's hand.
Specifically, the hand region may be determined based on traditional vision methods, for example through hand skin-color detection or moving-target detection.
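As one illustration of the traditional-vision route, the sketch below uses a YCrCb skin-color mask to localize candidate hand pixels. The OpenCV calls are standard, but the threshold values are common heuristics assumed here for illustration, not taken from the present application.

```python
import cv2
import numpy as np

def hand_skin_mask(bgr_frame: np.ndarray) -> np.ndarray:
    """Binary mask (255 = skin-like pixel) via YCrCb skin-color bounds."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # heuristic bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)

# A bounding box around the largest skin blob could then serve as the
# hand ROI; frame differencing (moving-target detection) is the other
# traditional route mentioned above.
```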
In the present application, to save resources when recognizing access actions from target RGB images, the smart refrigerator performs access action recognition only when the target ROI in a target RGB image is located in a food material storage area; if the target ROI is not located in a food material storage area, no access action recognition is performed until a target RGB image whose target ROI lies in a food material storage area appears.
In the present application, if the first target ROI is identified as being located in a food material storage area, the first area of the first target ROI in the first target RGB image and the second area of the second target ROI in the second target RGB image are determined.
After the first area and the second area are determined, the access action is determined according to their sizes. Specifically, if the first area is greater than the second area, a storing action is determined; if the first area is smaller than the second area, a taking action is determined.
In the present application, the first target ROI in the first target RGB image and the second target ROI in the second target RGB image are determined; when the first target ROI is identified as being located in a food material storage area, the access action is determined according to the sizes of the areas of the two target ROIs, which improves the accuracy of access action recognition.
To determine the ROI containing the hand region in a target RGB image, on the basis of the above embodiment, the control unit 102 is specifically configured to:
input the target RGB image into a trained network model, and receive an RGB image output by the network model in which information identifying the target ROI is marked.
In the present application, the target ROI may be determined based on a deep learning method. Specifically, when determining the ROI of the hand region contained in the target RGB image, the target RGB image is input into the trained network model, and an RGB image output by the network model with the information of the target ROI marked is received. The information of the target ROI may be a shaded region framed in the RGB image, or coordinate information of the target ROI in the RGB image.
FIG. 2a is a schematic diagram of an ROI provided by some embodiments of the present application; the framed region in FIG. 2a is the ROI.
FIG. 2b is a schematic diagram of an ROI provided by some embodiments of the present application; the framed region in FIG. 2b is the ROI.
To obtain the trained network model used for acquiring the target ROI, on the basis of the above embodiments, the control unit 102 is specifically configured to:
acquire any sample RGB image in a training set, where first position information of an ROI is pre-annotated in the sample RGB image;
input the sample RGB image into the network model, and output second position information of the ROI in the sample RGB image;
train the network model according to the first position information and the second position information.
When the target ROI is determined by a deep learning method, training of the network model must be completed in advance. To this end, a training set is pre-configured, in which sample RGB images are stored, each pre-annotated with the first information of its ROI.
During training, any sample RGB image in the training set is acquired; the sample RGB image, pre-annotated with the first information of the ROI, is input into the network model, which outputs the second information of the ROI in the sample RGB image; the network model is trained according to the first information and the second information.
Specifically, the RGB image marked with the ROI information is input into the network model for training; the loss value of the network model is calculated from the network output and the pre-annotated ROI, and the parameters of the network model are adjusted based on this loss value. When the loss value reaches the minimum loss value or the number of iterations reaches a preset maximum, training of the network model is considered complete.
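A minimal training-loop sketch follows. The backbone, loss function, and data are placeholders assumed for illustration — the patent specifies only that the model is trained on annotated sample RGB images until the loss reaches its minimum or a maximum iteration count — so this is a sketch of the loop, not the application's actual model.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                          # toy backbone -> 4 box coords
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.SmoothL1Loss()

def train_step(sample_rgb: torch.Tensor, first_info: torch.Tensor) -> float:
    """One iteration: predict the ROI ('second information'), compare it
    with the annotated ROI ('first information'), adjust the parameters."""
    second_info = model(sample_rgb)
    loss = criterion(second_info, first_info)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# one dummy iteration over a batch of two annotated sample RGB images
print(train_step(torch.rand(2, 3, 224, 224), torch.rand(2, 4)))
```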
To identify whether the first target ROI is located in a food material storage area, on the basis of the above embodiments, the control unit 102 is specifically configured to:
if it is determined that a drawer of the smart refrigerator is open, determine the first region of the drawer in the first target RGB image;
if the first target ROI is located within the first region, determine that the first target ROI is located in the food material storage area.
In the present application, if the drawer of the smart refrigerator is determined to be open, the user may be accessing the food materials in the drawer. Therefore, to determine whether the user is doing so, the first region where the drawer is located in the first target RGB image is determined, and it is judged whether the first target ROI is located within that first region; if so, the first target ROI is determined to be located in the food material storage area, specifically at the drawer position of the food material storage area.
After the drawer is determined to be open, each coordinate of the edge of the first target ROI is determined in the first target RGB image in which the coordinate system has been constructed; if all edge coordinates of the first target ROI fall within the range of the first region, the first target ROI is determined to be located within the first region, that is, in the food material storage area.
FIG. 3a is a schematic diagram of the first region of a drawer provided by some embodiments of the present application; the framed region in FIG. 3a is the first region of the drawer.
FIG. 3b is a schematic diagram of the first region of a drawer provided by some embodiments of the present application; the framed region in FIG. 3b is the first region of the drawer.
To identify that the first target ROI is located in a food material storage area, on the basis of the above embodiments, the control unit 102 is specifically configured to:
judge, according to the pre-stored second region of the food material storage areas of the smart refrigerator other than the drawer, whether the first target ROI is located within the second region;
if so, determine that the first target ROI is located in the food material storage area.
The smart refrigerator also includes food material storage areas other than the drawer, such as shelf layers and door-side storage. In the present application, the second region of these other storage areas is pre-stored; each edge coordinate of the first target ROI is determined, and if all edge coordinates fall within the range of the second region, the first target ROI is determined to be located in the second region, and thus in the food material storage area.
For example, suppose the coordinates of the upper-left corner of the drawer in the first target RGB image are determined to be (105, 90) and those of the lower-right corner to be (200, 75), while the pre-stored upper-left coordinates of the closed drawer are (105, 75) and the lower-right coordinates are (200, 75). The drawer is then determined to be open, and its first region is the rectangular frame with vertices (105, 90), (105, 75), (200, 75), and (200, 90). If the first target ROI is recognized as the rectangular frame with vertices (160, 82), (160, 76), (180, 76), and (180, 82), the coordinates show that the first target ROI is located within the first region, so the first target ROI is determined to be located in the food material storage area.
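The containment test in this example reduces to checking every ROI edge coordinate against the first region, as in the sketch below; the coordinates are taken from the example above and are purely illustrative.

```python
Box = tuple  # (x_min, y_min, x_max, y_max)

def roi_in_region(roi: Box, region: Box) -> bool:
    """All edge coordinates of the ROI must fall inside the region."""
    rx0, ry0, rx1, ry1 = roi
    gx0, gy0, gx1, gy1 = region
    return gx0 <= rx0 and gy0 <= ry0 and rx1 <= gx1 and ry1 <= gy1

first_region = (105, 75, 200, 90)            # open drawer in the example
hand_roi = (160, 76, 180, 82)                # first target ROI
print(roi_in_region(hand_roi, first_region)) # True -> food storage area
```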
FIG. 4 is a schematic diagram of the internal space of a smart refrigerator provided by some embodiments of the present application. As shown in FIG. 4, the acquisition unit of the smart refrigerator is located in the top middle of the internal space, and the food material storage areas include drawers and shelf layers.
FIG. 5a is a schematic display diagram, provided by some embodiments of the present application, in which the first target ROI is not located within the first region of the drawer.
FIG. 5b is a schematic display diagram, provided by some embodiments of the present application, in which the first target ROI is located within the first region of the drawer.
FIG. 6 is a schematic flowchart of access action recognition provided by some embodiments of the present application. As shown in FIG. 6, the process includes:
S601: Determine a first target ROI containing a hand region in a collected first target RGB image, and a second target ROI containing the hand region in a second target RGB image collected after the collection time of the first target RGB image.
S602: Identify whether the first target ROI is located in a food material storage area; if so, execute S603; if not, end.
S603: Determine the first area of the first target ROI in the first target RGB image and the second area of the second target ROI in the second target RGB image.
S604: Determine the access action according to the sizes of the first area and the second area.
To improve the accuracy of access action recognition, on the basis of the above embodiments, the control unit 102 is specifically configured to:
if the first area is greater than the second area, determine that the current action is a storing action;
if the first area is smaller than the second area, determine that the current action is a taking-out action.
In the present application, when determining the access action, it may be determined from the first area of the first target ROI in the first target RGB image and the second area of the second target ROI in the second target RGB image collected after it.
Specifically, in the present application, the acquisition unit used for collecting RGB images is fixed at the top of the smart refrigerator. When a storing action occurs, the hand region, i.e., the target ROI, gradually moves away from the image acquisition device as the action proceeds, so the area of the target ROI in the collected RGB images gradually decreases; when a taking-out action occurs, the hand region gradually approaches the image acquisition device, so the area of the target ROI in the collected RGB images gradually increases.
Therefore, in the present application, if the first area of the first target ROI in the first target RGB image is smaller than the second area of the second target ROI in the second target RGB image, the current action is approaching the image acquisition device and is determined to be a taking-out action; if the first area is greater than the second area, the current action is moving away from the image acquisition device and is determined to be a storing action.
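The decision itself is a single comparison, as in this sketch; the handling of exactly equal areas is our assumption, since the text does not specify it.

```python
def classify_access(first_area: float, second_area: float) -> str:
    """Top-mounted camera: a shrinking hand ROI is moving away (storing),
    a growing one is approaching the camera (taking out)."""
    if first_area > second_area:
        return "store"
    if first_area < second_area:
        return "take_out"
    return "undetermined"   # equal areas: no decision (our assumption)

print(classify_access(240.0, 104.0))   # store
print(classify_access(104.0, 240.0))   # take_out
```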
To recognize the open/closed state of the drawer, on the basis of the above embodiments, the control unit 102 is specifically configured to:
determine first position information of the drawer edge in the first target RGB image;
if the first position information is inconsistent with pre-stored second position information of the drawer edge when the drawer is closed, determine that the drawer is open.
When determining whether the drawer is open, the first position information of the drawer edge is determined in the first target RGB image, and the state of the drawer in the first target RGB image is determined from this first position information and the pre-stored second position information of the drawer edge in the closed state. If the two are consistent, the drawer is determined to be closed; if they are inconsistent, the drawer is determined to be open.
Specifically, in the present application, a coordinate system is constructed in the first target RGB image according to a preset construction method, and the coordinates of the drawer edge in the first target RGB image are compared with the pre-stored coordinates of the drawer edge when the drawer is closed; if the coordinates of the drawer edge in the first target RGB image are inconsistent with the pre-stored coordinates, the drawer is determined to be open.
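A sketch of this edge-comparison check follows; the pixel tolerance is an assumption added for illustration, since real detections rarely match stored coordinates exactly.

```python
def drawer_is_open(edge_now, edge_closed, tol: float = 3.0) -> bool:
    """Compare the detected drawer-edge coordinates with the stored
    closed-state coordinates; any mismatch beyond `tol` means open."""
    return any(abs(a - b) > tol for a, b in zip(edge_now, edge_closed))

# e.g. upper-left edge detected at (105, 90), stored closed value (105, 75)
print(drawer_is_open((105, 90), (105, 75)))   # True -> drawer open
```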
FIG. 7 is a schematic diagram of the access action recognition process provided by some embodiments of the present application. As shown in FIG. 7, the process includes:
S701: Determine a first target ROI containing a hand region in a collected first target RGB image, and a second target ROI containing the hand region in a second target RGB image collected after the collection time of the first target RGB image.
S702: If the first target ROI is identified as being located in a food material storage area, determine the first area of the first target ROI in the first target RGB image and the second area of the second target ROI in the second target RGB image.
S703: Determine the access action according to the sizes of the first area and the second area.
In a possible implementation, determining the target ROI containing the hand region in a collected target RGB image includes:
inputting the target RGB image into a trained network model, and receiving an RGB image output by the network model in which information identifying the target ROI is marked.
In a possible implementation, identifying that the first target ROI is located at a food material storage location includes:
if it is determined that a drawer of the smart refrigerator is open, determining the first region of the drawer in the first target RGB image;
if the first target ROI is located within the first region, determining that the first target ROI is located in the food material storage area.
In a possible implementation, identifying that the first target ROI is located at a food material storage location includes:
judging, according to the pre-stored second region of the food material storage areas of the smart refrigerator other than the drawer, whether the first target ROI is located within the second region;
if so, determining that the first target ROI is located in the food material storage area.
In a possible implementation, determining the access action according to the sizes of the first and second areas includes:
if the first area is greater than the second area, determining that the current action is a storing action;
if the first area is smaller than the second area, determining that the current action is a taking-out action.
In a possible implementation, the training process of the network model includes:
acquiring any sample RGB image in a training set, where first information of an ROI is pre-annotated in the sample RGB image;
inputting the sample RGB image into the network model, and outputting second information of the ROI in the sample RGB image;
training the network model according to the first information and the second information.
In a possible implementation, determining that the drawer is open includes:
determining first position information of the drawer edge in the first target RGB image;
if the first position information is inconsistent with pre-stored second position information of the drawer edge in the closed state, determining that the drawer is open.
In an optional embodiment, to recognize the user's access operation information on the smart refrigerator more accurately, a sensor may be arranged in the image acquisition blind area of the acquisition unit. Images collected by the acquisition unit determine the visual detection result (i.e., the detection result obtained by the action check described above), the sensor determines the blind-area detection information, and the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator is determined from both.
In another optional embodiment, a gravity sensor may also be arranged on a refrigerator shelf; image information of the food material is acquired through the acquisition unit, and weight information through the gravity sensor, and the type information of the food material is determined from the image information and the weight information.
In other embodiments, consider that to help the user better manage the food materials in the refrigerator, the refrigerator needs to automatically identify the food materials and their storage positions. One related-art scheme is a static computer vision scheme, i.e., taking static pictures of the refrigerator interior and analyzing the food material types. However, since refrigerators are often packed full, food materials stacked at the front occlude those stored behind them, making the recognition results incomplete. Another related-art scheme is a dynamic computer vision scheme, i.e., recognizing the user's dynamic pick-and-place actions and monitoring in real time what food materials the user puts in and takes out. The field of view of a dynamic vision scheme must cover the entire refrigerator space, but because the cabinet, door body, and freezer area span a large field of view, shooting with a single camera leaves blind areas at the first shelf layer and the door body; covering them purely with cameras requires stacking multiple cameras, and calibration across multiple cameras is a difficult problem. Moreover, visual object detection requires substantial computing resources and a demanding hardware platform, resulting in high overall hardware cost.
To improve the overall detection accuracy and save cost without requiring too many cameras, in the embodiments of the present application the smart refrigerator may include an acquisition unit for collecting images, and a sensor may be arranged in the image acquisition blind area of the acquisition unit. The visual detection result is determined from the images collected by the acquisition unit, the blind-area detection information is determined through the sensor, and the user's access action to the items in the smart refrigerator and/or the storage position of the items is determined from both. The user's access operation information on the smart refrigerator can thus be recognized, the overall detection accuracy is improved, and cost is saved, without requiring too many cameras.
Illustratively, recognition of food access actions and access positions across the entire refrigerator space is important for refrigerator food management, but the full space has a large field of view; covering the door body, cabinet, and freezer area purely by visual recognition requires multiple cameras, calibration between multiple cameras is a difficult problem, and large hardware resources are needed for the processing algorithms. In view of these problems and needs, the embodiments of the present application propose full-space access action and access position recognition for the refrigerator based on proximity sensors and three-dimensional computer vision, in particular blind-area recognition at the first shelf layer and the door body.
In the technical solution provided by the embodiments of the present application, a color camera and proximity sensors are introduced. As shown in FIG. 8, the acquisition unit may be referred to as a camera — for example a color camera, or a color camera plus a depth camera; the color camera is installed at the middle of the upper edge of the refrigerator cabinet, and the depth camera is optional and can be used to improve the accuracy of position detection. The sensor arranged in the image acquisition blind area of the acquisition unit may be a proximity sensor (for example an infrared sensor), installed at blind-area positions that the acquisition unit can hardly cover. Taking the first layer of the left and right door bodies as an example, the type of proximity sensor is not limited — infrared sensors, through-beam sensors, and the like — as long as it can detect an object entering or leaving the detection range; sensors of similar principles all fall within the scope of the embodiments of the present application. The infrared sensor is taken as an example below. As shown in FIG. 8, the region below the two dashed lines is the area that the field of view of the acquisition unit (a color camera, or a color camera combined with a depth camera) can cover, and the region above the two dashed lines is the image acquisition blind area, i.e., where infrared sensors need to be added.
One implementation obtains detection information through the infrared sensor in real time: when it detects an object approaching or leaving, the user can be considered to be entering or exiting at that position; this is then combined with the visual detection result (the detection result of action detection by the camera) to decide whether the access action is storing or taking out, finally yielding the detection results of the access action and access position. The flow is shown in FIG. 9, where the visual detection uses color data or depth data, and the detection methods include but are not limited to image processing methods, deep learning methods, and computer vision methods. This implementation is simple and direct.
Another implementation fuses the real-time detection results of the infrared sensor and of vision (detection by the camera). The specific process is as follows. First, the detection range of the infrared sensor, for example the camera's image acquisition blind area, is preset, and the infrared sensor's detection information is read in real time. Then, the moving target in the field of view is tracked with the depth camera or the color camera. If the centroid of the moving target is within the camera's field of view, the access action and access position are detected by the visual detection flow. If the centroid of the moving target is beyond the camera's field of view, whether an object enters or leaves the corresponding position is determined from the infrared sensor's detection information: an object entering the position can be regarded as an entering action, and an object leaving it as an exiting action. Combining this with the region recognition result of the color camera's region of interest (ROI), the food access information for the corresponding position (i.e., the information that the user stores or takes out food material) can be obtained. For example, the user stores food material on the first shelf layer, a blind area invisible to the camera; visual detection can usually only predict the result as either the first or the second shelf layer. If the sensor on the first layer detects a target entering, the operation is a storing action at the first shelf layer; if the sensor on the first layer detects no target entering, the operation is a storing action at the second shelf layer. The specific flow is shown in FIG. 10.
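This second implementation reduces to the small decision routine sketched below; the position labels are illustrative placeholders, not the application's actual identifiers.

```python
def access_position(centroid_in_view: bool, vision_position: str,
                    ir_layer1_triggered: bool) -> str:
    """Inside the camera view, trust the visual position; in the blind
    zone, let the layer-1 infrared sensor split layer 1 from layer 2."""
    if centroid_in_view:
        return vision_position
    return "shelf_layer_1" if ir_layer1_triggered else "shelf_layer_2"

# blind-zone access with the layer-1 sensor firing -> first shelf layer
print(access_position(False, "unknown", ir_layer1_triggered=True))
```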
A specific embodiment is further described in detail below.
Hardware part:
For example, as shown in FIG. 8, in this embodiment a camera module is installed at the top of the refrigerator. When the user opens the refrigerator door, the module pops out and is triggered to capture images of the food materials being accessed. When the refrigerator door is closed, the camera module retracts and image acquisition ends.
An infrared sensor is installed at a blind-area position of the camera's field of view to detect whether an object enters or exits that position — for example, the position near the first shelf layer (or compartment) of the refrigerator door shown in FIG. 8.
Detailed scheme:
1. Infrared sensor installation:
The infrared sensor can be installed according to the camera's actual blind-area positions. Taking the sensor installation in FIG. 8 as an example, the sensor is installed at the edge of the door compartment; the installation position can be adjusted as needed, e.g., inside the compartment or on its long or short edge. If installed inside the cabinet, it can be mounted on the left or right cabinet wall; if installed in a drawer, it can be placed on the side wall of the drawer's long or short edge. There may be one sensor or several; changes in sensor position or quantity do not affect the essence of the embodiments of the present application.
It should be noted that because the detection range of an infrared sensor is relatively large, it may cover other positions — for example, a sensor installed at the first cabinet shelf layer whose detection range covers the second layer may cause false triggering; its intensity and detection range can therefore be adjusted as needed.
2. Blind-area ROI setting:
As shown in FIGS. 11 to 14, since the camera's field of view can hardly cover the refrigerator's full space, the upper part of the refrigerator is a detection blind area, as shown by the lower rectangular frame in FIGS. 11 to 14. Here the hand accesses the first cabinet shelf layer, but since the image cannot cover it, it is hard to determine whether the specific position is the first shelf layer or the upper part of the second shelf layer; likewise, the door compartments have the same problem, as shown by the rectangular frames on both sides in FIGS. 11 to 14. Therefore, a blind-area ROI can be preset. Taking the position shown by the lower rectangular frame in FIGS. 11 to 14 as an example, when a moving hand enters this ROI, access may occur at the blind first shelf layer or at the second shelf layer, and the judgment must be combined with the sensor's detection information.
FIGS. 11 to 14 show the position changes of the moving hand during a taking-out action. The actual blind area can be set in the color image, in the depth image, or in both for fusion; the size of the ROI can be adjusted according to actual needs, and differences in the position or size of the blind-area ROI do not affect the essence of the technical solution provided by the embodiments of the present application.
3. Moving target detection and tracking:
This step detects and tracks the moving hand in the image. Detection and tracking of the hand can be implemented by training a classifier with machine learning or a deep neural network, and the judgment is made from the coordinates of the moving hand. If the moving hand is outside the above blind-area ROI, the access action and access position are detected directly by visual detection methods from the color image, the depth image, or their combination; if the moving hand is within the blind-area ROI, the detection information of the infrared sensor must be fused to decide the access action and access position.
4. Infrared sensor information fusion:
The infrared sensor data must be read in real time. When the infrared sensor detects an object entering, the judgment is made in combination with the visual detection information. For example, in FIG. 11, the visual algorithm detects a moving target entering the detection region and gradually approaching and entering the blind-area ROI shown by the lower rectangular frame. If the infrared sensor placed at the first shelf layer detects an object entering (i.e., the user places food material on the first shelf layer), the action is an entry and the access position is the first shelf layer; if the sensor detects no object entering, the action is an entry and the access position is the second shelf layer. The judgment of a hand-exit action is analogous to that of an entry action; a hand exit may be taking food material out of the refrigerator or the motion after putting food material in, which does not affect the implementation of the embodiments and is not repeated here.
Furthermore, since there is a certain spatial distance S between where the hand disappears from the image and where it actually enters the infrared sensor's detection region, the visual detection information can be set to wait for the infrared sensor's detection result for a time period T — that is, after the visual detection result is determined from the camera images (i.e., the user's access action is detected), the system waits up to T for the sensor's result. Setting the threshold T is important: if it is too short, the infrared sensor's result may not yet have arrived, causing missed detection; if too long, the action may have completed or even the next action may have begun while the result has not been updated. The embodiments of the present application therefore optimize the selection of T by estimating it from the hand's motion speed V in the image sequence frames and the spatial distance S, where S can be obtained directly from the actual positional relationship between the camera and the refrigerator. The information fusion step can then be optimized as follows: when the moving hand enters the upper edge of the ROI shown by the lower rectangular frame in FIGS. 11 to 14, the times T1, T2, ... it needs to enter the infrared detection region are computed in real time, while the infrared sensor's detection results are read in real time; when the hand enters the lower edge of that ROI, the mean or maximum of the previously computed T1, T2, ... is taken as the threshold T. For example, according to the actual positional relationship between the camera and the refrigerator, the distance S = 0.2 m, and the hand motion speed V computed by the frame-difference method is 1 m/s; the threshold is then set to T = S / V = 0.2 s. If the sensor detects a target within 0.2 s, the hand is judged to have entered the first layer; if not, the hand is considered to have entered the second layer instead.
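A sketch of the threshold estimate; per-frame speeds would in practice come from frame differencing, and the mean-vs-max choice follows the text above.

```python
import statistics

def wait_threshold(distance_s: float, speeds_mps: list,
                   use_max: bool = False) -> float:
    """T = S / V per tracked frame while the hand crosses the blind-zone
    ROI; the mean (or maximum) of the estimates becomes the threshold."""
    estimates = [distance_s / v for v in speeds_mps if v > 0]
    return max(estimates) if use_max else statistics.mean(estimates)

# Description's example: S = 0.2 m, V = 1 m/s  ->  T = 0.2 s
print(wait_threshold(0.2, [1.0]))
```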
It should be noted that the threshold T can also be preset, without real-time calculation from the speed of the user's hand movement; the specific implementation can be decided according to actual needs and is not limited by the embodiments of the present application.
5. Visual detection result fusion:
The visual detection result is the result obtained by capturing images with the camera and analyzing them. When the hand's "enter" and "exit" signals are detected from the images, the color camera crops the region the hand touches and uses this information for object recognition. When there is an object in the hand and an "enter" signal is detected, food material is being stored; when there is an object in the hand and an "exit" signal is detected, food material is being taken out.
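The mapping from the vision signals to the access result is a two-condition rule, sketched here; the return labels are illustrative.

```python
def access_result(hand_has_object: bool, signal: str) -> str:
    """'enter' with an object in hand stores food; 'exit' with an object
    in hand takes food out; everything else yields no access event."""
    if hand_has_object and signal == "enter":
        return "store_food"
    if hand_has_object and signal == "exit":
        return "take_out_food"
    return "no_access"

print(access_result(True, "enter"))   # store_food
```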
Through the above process, access action and access position recognition across the refrigerator's full space, including the blind areas, can be achieved.
In addition, the blind-area recognition proposed in the embodiments of the present application is only a preferred embodiment; in fact, access action and access position recognition at any spatial position in the refrigerator can be combined with sensors to improve recognition accuracy, and all such combinations fall within the scope of the embodiments of the present application.
In addition, sensor types include but are not limited to proximity sensors, photoelectric sensors, single-point TOF, and the like; sensors of similar principles all fall within the scope of the embodiments of the present application.
In other embodiments, referring to FIG. 15, an information identification method provided by an embodiment of the present application includes:
S1501. Determine a visual detection result through a camera installed on a household appliance for storing items, and determine blind-area detection information through a sensor installed on the household appliance, where the sensor is arranged in the image acquisition blind area of the camera;
S1502. Determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the household appliance and/or the storage position of the items in the household appliance.
Optionally, within a preset duration after a user access action to the items in the household appliance is determined, through the visual detection result, within the preset region of interest (ROI) of the image acquisition blind area, the method waits for blind-area detection information from the sensor, and determines the storage position of the item in the household appliance according to whether the blind-area detection information is obtained from the sensor within the preset duration.
Optionally, the preset duration is determined according to the speed of the user's access action, which is determined by detecting the images collected by the camera.
Optionally, the method further includes: when the user performs an access operation on the items in the household appliance outside the image acquisition blind area, determining the user's access action and/or the storage position of the items directly from the images collected by the camera.
Optionally, it is first judged whether the sensor detects the user; if so, the processor determines that the access position is the sensor's installation position and finally determines, in combination with the images collected by the camera, the user's access action to the items in the household appliance and/or the storage position of the items in the household appliance; otherwise, the processor determines that the access position is not the sensor's installation position and likewise finally determines the access action and/or storage position in combination with the collected images.
An embodiment of the present application provides an information identification method for detecting storage operations in each region of the refrigerator; for the camera's blind areas, access actions and access positions are recognized in combination with sensors, improving the overall detection accuracy and saving cost without requiring too many cameras.
In other embodiments, consider that the mainstream food-management refrigerators currently on the market are of two kinds: those based on RFID and those based on an RGB color camera. With the RFID scheme, every food material the user wishes to manage must carry an RFID tag for the management to work, which is relatively complicated to operate and costly for the user. The RGB color camera is usually paired with a speech recognition module: for food materials the color camera misrecognizes, the user corrects the type by voice, but frequent voice correction greatly degrades the experience of using the refrigerator. The embodiments of the present application provide a smart refrigerator and a food material identification method to improve the accuracy of food material identification. High-precision identification of refrigerator food materials is achieved through multi-sensor fusion, with low structural complexity and high recognition accuracy. Specifically, the results of a deep learning network model recognizing the food images collected by the RGB color camera are fused with the food weight data collected by the gravity sensors; a weight restriction condition is added to the food recognition result, which improves the recognition accuracy and the user experience.
Food material recognition with an RGB color camera has long been a technical challenge in industry and academia, mainly for the following reasons:
1) Diversity of food appearance: the same kind of food material may show different appearance features, easily confusing recognition. A typical example is the mango: mangoes come in green and yellow, and a yellow mango looks closer to a yellow papaya, so it is easily misrecognized as papaya, while an unripe papaya looks rather like a green mango (as shown in FIG. 16). For the same food material, depending on the angle at which the user holds it, its image in the camera may also show features similar to another class — typically, exposing only part of a red apple makes it resemble a tomato, causing misrecognition.
2) Illumination: illumination directly affects the imaging quality of the food material in the picture. The same food material shows different color features under different illumination. In addition, a smooth, reflective surface or uneven lighting can make parts of the surface too bright or too dark: under strong light the surface of a snow pear appears as a white glowing body, and under weak light red dates appear black. Different illumination causes food materials to be recognized as other classes or not recognized at all.
3) Similar food materials of different sizes: take cherry tomatoes and tomatoes. A single tomato and a cherry tomato look very similar but differ greatly in size/weight (as shown in FIG. 17); distinguishing the two purely from images collected by a color RGB camera inevitably yields many misrecognitions.
For the above three problems, the technical solution provided by the embodiments of the present application incorporates weight data into image recognition to distinguish food materials that are similar in shape but differ greatly in weight.
1. Hardware structure:
1.1 Hardware components:
Referring to FIG. 18, the hardware structure of the refrigerator provided by an embodiment of the present application includes, for example, the following parts:
1) Processor: also called the central processing unit, i.e., the refrigerator master control in FIG. 18. Located at the top of the refrigerator, it performs food recognition based on the deep learning network model, processing of the gravity-sensor data, fusion of the weights acquired by the gravity sensors with the image data, and the main logic of the whole system, including control of the indicator light bar, voice broadcast, and content display on the built-in large screen.
2) Acquisition unit: an RGB camera may be used; the color RGB camera collects food material images.
Optionally, in this embodiment of the present application, the RGB camera adopts a dynamic recognition scheme and uses a high-frame-rate (120 fps) RGB camera.
Dynamic recognition differs from static recognition: static recognition recognizes after the items have been put away, whereas dynamic recognition recognizes during the storing process.
3) Gravity sensors: gravity sensors are installed on every shelf layer of the refrigerating compartment and on every rack of the refrigerator door. Introducing the weight information of the food material into food type recognition improves the recognition accuracy.
In addition, increases and decreases in the weight on a shelf layer/rack can also determine the access and position of food materials, providing access action and access position information.
4) Built-in display screen: in this embodiment the display screen is embedded in the refrigerator door. When the user opens the door to access food materials, updates of the access position, type, action, and other information are easier to see; there is no need to close the refrigerator door to view the display outside it.
5) External indicator light bar: the external light bar prompts the user with food access information.
For example, when the speaker in the microphone array performs voice broadcast, the external light bar shows a breathing-light effect.
6) Microphone array: including a microphone for capturing speech and a speaker for playing speech, used to remind the user by voice of the position, type, action, and other information of the stored food materials.
For example, when an apple is put on the first layer, the microphone array announces "apple placed on the first layer". Functions such as voice weather queries can also be provided.
1.2 Overall hardware framework:
The overall hardware framework of the system provided by this embodiment is shown in FIG. 19. The perception layer, composed of sensors, perceives the user's operations and transmits the data to the decision layer. The decision layer processes the perception-layer data through its logic and sends the processing results to the output layer for the user to view intuitively. The perception layer includes the RGB camera, gravity sensors, a radio frequency identification (RFID) module, a barcode scanner, door open/close signal detection devices, and so on. The decision layer is the refrigerator's central processing unit, which has strong data processing capability. The output layer presents the processor's output intuitively to the user; its hardware includes the built-in display screen, the microphone array, the external indicator light bar, and so on.
Recognizing similar food materials is a difficult problem for image recognition, but for similar food materials of different sizes, introducing weight information distinguishes them well. For example, a single tomato and a cherry tomato look very similar but differ greatly in size/weight; distinguishing them purely from color RGB images inevitably yields many misrecognitions. The embodiments of the present application fuse the deep-learning recognition results on the food images collected by the RGB color camera with the food weight data collected by the gravity sensor, adding a weight restriction condition to the recognition result and improving the recognition accuracy.
The embodiments of the present application use deep learning for food recognition, including but not limited to one of, or a derivative model of, network structures such as (deep) neural networks, convolutional neural networks, deep belief networks, deep generative networks, and deep reinforcement learning. However, image recognition with a deep learning network is affected by illumination, occlusion, and the high appearance similarity of certain foods — limitations that can hardly be solved algorithmically. The embodiments of the present application fuse the weight data with the image recognition results, which well solves the difficulty of recognizing similar food materials.
Light, appearance, the model itself, camera imaging, and other factors can all cause erroneous image recognition results. For example, in one recognition pass (as shown in Table 1), for a snow pear the most likely results were, in order: Elizabeth melon, snow pear, yellow persimmon, yellow round pepper.
Table 1. Image-only recognition results for a snow pear in one pass (the original table was rendered as an image and its values are not fully recoverable; per the text, the ranking in descending image probability was: Elizabeth melon, snow pear, yellow persimmon, yellow round pepper).
The weight distribution of a food material resembles a Gaussian distribution, and the probability that the measured weight corresponds to the predicted food material being the correct one can be expressed with a Gaussian:

$P_{weight} = \exp\left(-\frac{(w_b - w_a)^2}{2\omega^2}\right)$ for $|w_b - w_a| > \sigma$, and $P_{weight} = 1$ otherwise    (1)

where w_a is the preset expected weight of the food material;
w_b is the food weight measured by the gravity sensor, i.e., the measured weight of the food material;
σ and ω are adjustment factors — preset constants that can be set from empirical values;
Pweight is the probability, predicted by weight, that the food material is the correct one, the weight recognition probability for short.
In the refrigerator food access scenario, for example, with the adjustment factors σ and ω set to 20 and 200 respectively, formula (1) becomes:

$P_{weight} = \exp\left(-\frac{(w_b - w_a)^2}{80000}\right)$ when $|w_b - w_a| > 20$
Table 2. Snow pear recognition: image results fused with the weight measured by the gravity sensor (the original table was rendered as an image; recoverable values, with measured weight 270 g: Elizabeth melon — image probability 91%, expected weight 1200 g, weight probability 0.002%; snow pear — image probability 82%, expected weight 350 g, weight probability 92.312%, final probability ≈ 0.76; the rows for yellow persimmon and yellow round pepper are not recoverable).
In this snow pear recognition pass in Table 2, the image recognition result is fused with the weight measured by the gravity sensor, which distinguishes similar food materials of different sizes well. The image recognition result gives a higher probability of Elizabeth melon (91%) than of snow pear (82%) — 9% higher — so relying on the image result alone would make this recognition wrong. But the measured weight output by the gravity sensor is 270 g, the expected weight of an Elizabeth melon is 1200 g, and that of a snow pear is 350 g; for a weight of 270 g, the probability of it being an Elizabeth melon is extremely small (0.002%), while the probability of a snow pear is high (92.312%). Fusing the image recognition result with the gravity-sensor data distinguishes the Elizabeth melon and the snow pear well.
That is, referring to FIG. 20, the final result in the last column of Table 2 is obtained by fusing the weight recognition probability Pweight with the image recognition probability Pimage; for example, Pweight and Pimage can be multiplied and the product taken as the final food recognition probability P. For the snow pear in Table 2, the final recognition result is P = 0.92312 × 0.82 ≈ 0.76.
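The fusion over the recoverable Table 2 rows can be reproduced in a few lines. The local Gaussian restates formula (1) above; its σ tolerance band is omitted here because both weight gaps (80 g and 930 g) exceed σ = 20, and the candidate dictionary holds only the two rows whose values survive in the text.

```python
import math

def weight_prob(w_b: float, w_a: float, omega: float = 200.0) -> float:
    return math.exp(-(w_b - w_a) ** 2 / (2.0 * omega ** 2))  # formula (1)

candidates = {                     # name: (Pimage, expected weight in g)
    "elizabeth_melon": (0.91, 1200.0),
    "snow_pear":       (0.82, 350.0),
}
measured = 270.0                   # gravity-sensor reading

fused = {name: p_img * weight_prob(measured, w_exp)
         for name, (p_img, w_exp) in candidates.items()}
best = max(fused, key=fused.get)
print(best, round(fused[best], 2))  # snow_pear 0.76
```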
In addition, the input layer, hidden layer, and output layer mentioned in FIG. 20 are standard terms of deep learning network models and are not explained further here.
In summary, referring to FIG. 21, the core functional point of the embodiments of the present application is food material management, i.e., informing the user in a more convenient way of which food materials are in the refrigerator. The result of RGB color camera image recognition is fused with the food weight information acquired by the gravity sensors to distinguish food materials that are similar in shape but differ greatly in size, improving recognition accuracy, better managing information such as the quantity and shelf life of refrigerator food materials, and recommending reasonable diets to the user based on the food currently in the refrigerator.
The built-in large screen makes dynamic food recognition more user-friendly in terms of perception. The user's operations on the refrigerator are displayed in real time, and the user can look straight at the refrigerator screen while accessing food materials, avoiding having to step to the side of the refrigerator to see the access information. In addition, the speaker below the screen can broadcast in real time the position and type of the food the user is accessing.
As shown in FIG. 22, the food material identification method provided in the embodiments of the present application specifically includes the following steps:
S201: Initialize the image recognition module and the gravity sensor module.
For example, the image recognition module initialization in this step specifically includes: pop-out of the visual sensing unit (for example the RGB camera in FIG. 18), camera parameter initialization, and loading of the deep learning network parameter weights. The camera parameters are used to correct image distortion; the weight loading mainly loads the weights of the deep learning network recognition model, which is implemented by the central processing unit. The gravity sensor module initialization includes turning on and zeroing the gravity sensors.
The image recognition module may include the RGB camera and the central processing unit, among others.
S202: While the user is accessing food materials, the RGB color camera captures images of the food being accessed and feeds them into the deep learning network recognition model in the central processing unit for food recognition; the model determines the food types most similar to the captured food and the image recognition probability Pimage of each type; meanwhile, the gravity sensor outputs the weight information of the current food material to the central processing unit.
S203: The central processing unit uses the weight information of the current food material to determine the weight recognition probability Pweight; for the specific calculation method, see formula (1) above.
S204: Fuse the weight recognition probability Pweight of the food material with the image recognition probability Pimage to obtain the final recognition result.
For example, this step may multiply Pweight and Pimage, or use another method such as weighted multiplication, to obtain the final food recognition probability P.
If the user continues to access food materials, step S202 is repeated.
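Steps S201 to S204 can be read as the loop skeleton below; all four callables are assumed interfaces standing in for the camera, the gravity sensor, the fused recognizer, and the screen/voice output, not the application's actual modules.

```python
def food_management_loop(capture_frame, read_weight, recognize, announce):
    """S202-S204 per access event: snapshot + weight -> fused label."""
    while True:
        frame = capture_frame()                 # S202: image of accessed food
        grams = read_weight()                   # S202: gravity-sensor output
        label, prob = recognize(frame, grams)   # S203 + S204 fusion
        announce(f"{label} ({prob:.0%})")       # screen / voice broadcast
```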
Referring to FIG. 23, a food material identification method provided by an embodiment of the present application includes:
S2301. Acquire image information of a food material and weight information of the food material;
S2302. Determine the type information of the food material according to the image information and the weight information.
In this embodiment of the present application, the image information and the weight information of the food material are acquired, and the type information of the food material is determined from both; since a weight restriction condition is added to the food recognition result, the accuracy of food material identification is further improved.
Optionally, determining the type information of the food material according to the image information and the weight information specifically includes:
performing food recognition according to the image information, and determining a food material set and the image recognition probability of each food material in the set, where each image recognition probability in the set is greater than a preset value;
within the food material set, performing food recognition according to the weight information, and determining the weight recognition probability of each food material in the set;
determining the food type information according to the image recognition probabilities and the weight recognition probabilities.
The food material set is, for example, the four food materials listed in Table 2. Image recognition first determines which food materials have an image recognition probability greater than 50%; for example, when recognizing a snow pear, the image recognition probabilities of the four food materials shown in Table 2 are all greater than 50%. Then, within the set, the weight recognition probability of each food material is determined by gravity-based recognition. Finally, for each food material in the set, the image recognition probability and the weight recognition probability of the same food material are multiplied, and the food material with the highest product is taken as the final recognition result.
Optionally, determining the food type information according to the image recognition probability and the weight recognition probability specifically includes:
multiplying (or weight-multiplying) the image recognition probability and the weight recognition probability, and using the resulting product as the final food recognition probability.
Optionally, performing food recognition according to the weight information and determining the weight recognition probability of the food material specifically includes:
determining the weight recognition probability Pweight of the food material by formula (1) above,
where w_a is the expected weight of the food material;
w_b is the measured weight of the food material;
σ and ω are both preset adjustment factors.
Optionally, the image information of the food material is acquired through an RGB camera.
Optionally, the weight information of the food material is acquired through a gravity sensor.
FIG. 24 is a schematic structural diagram of an electronic device provided by the present application. On the basis of the above embodiments, the present application further provides an electronic device, as shown in FIG. 24, including a processor 801, a communication interface 802, a memory 803, and a communication bus 804, where the processor 801, the communication interface 802, and the memory 803 communicate with one another through the communication bus 804.
A computer program is stored in the memory 803, and when the program is executed by the processor 801, the processor 801 performs the following steps:
determining a first target ROI containing a hand region in a collected first target RGB image, and a second target ROI containing the hand region in a second target RGB image collected after the collection time of the first target RGB image;
if the first target ROI is identified as being located in a food material storage area, determining a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image;
determining the access action according to the sizes of the first area and the second area.
In a possible implementation, determining the target ROI containing the hand region in a collected target RGB image includes:
inputting the target RGB image into a trained network model, and receiving an RGB image output by the network model in which information identifying the target ROI is marked.
In a possible implementation, identifying that the first target ROI is located at a food material storage location includes:
if it is determined that a drawer of the smart refrigerator is open, determining the first region of the drawer in the first target RGB image;
if the first target ROI is located within the first region, determining that the first target ROI is located in the food material storage area.
In a possible implementation, identifying that the first target ROI is located at a food material storage location includes:
judging, according to the pre-stored second region of the food material storage areas of the smart refrigerator other than the drawer, whether the first target ROI is located within the second region;
if so, determining that the first target ROI is located in the food material storage area.
In a possible implementation, determining the access action according to the sizes of the first and second areas includes:
if the first area is greater than the second area, determining that the current action is a storing action;
if the first area is smaller than the second area, determining that the current action is a taking-out action.
In a possible implementation, the training process of the network model includes:
acquiring any sample RGB image in a training set, where first information of an ROI is pre-annotated in the sample RGB image;
inputting the sample RGB image into the network model, and outputting second information of the ROI in the sample RGB image;
training the network model according to the first information and the second information.
In a possible implementation, determining that the drawer is open includes:
determining first position information of the drawer edge in the first target RGB image;
if the first position information is inconsistent with pre-stored second position information of the drawer edge in the closed state, determining that the drawer is open.
Since the principle by which the above electronic device solves the problem is similar to that of the access action recognition method, its implementation may refer to the above embodiments, and repeated description is omitted.
The communication bus mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus can be divided into an address bus, a data bus, a control bus, and so on; for ease of presentation, only one thick line is used in the figure, but this does not mean there is only one bus or one type of bus. The communication interface 802 is used for communication between the above electronic device and other devices. The memory may include random access memory (RAM) and may also include non-volatile memory (NVM), such as at least one disk storage; optionally, the memory may also be at least one storage device located remotely from the aforementioned processor. The above processor may be a general-purpose processor, including a central processing unit, a network processor (NP), and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In other embodiments, the processor 801 may be configured to call the program instructions stored in the memory 803 and, according to the obtained program, execute:
acquiring image information of a food material and weight information of the food material;
determining the type information of the food material according to the image information and the weight information.
Optionally, determining the type information of the food material according to the image information and the weight information specifically includes:
performing food recognition according to the image information, and determining a food material set and the image recognition probability of each food material in the set, where each image recognition probability in the set is greater than a preset value;
within the food material set, performing food recognition according to the weight information, and determining the weight recognition probability of each food material in the set;
determining the food type information according to the image recognition probabilities and the weight recognition probabilities.
Optionally, determining the food type information according to the image recognition probability and the weight recognition probability specifically includes:
multiplying (or weight-multiplying) the image recognition probability and the weight recognition probability, and using the resulting product as the final food recognition probability.
Optionally, performing food recognition according to the weight information and determining the weight recognition probability of the food material specifically includes:
determining the weight recognition probability Pweight of the food material by formula (1) above,
where w_a is the expected weight of the food material;
w_b is the measured weight of the food material;
σ and ω are both preset adjustment factors.
Optionally, the image information of the food material is acquired through an RGB camera.
Optionally, the weight information of the food material is acquired through a gravity sensor.
In some embodiments, a smart refrigerator provided by the embodiments of the present application includes the above electronic device, for example the refrigerator master control in FIG. 18.
Optionally, the smart refrigerator further includes:
an image acquisition device located on the top of the refrigerator body, for example the RGB camera in FIG. 18;
gravity sensors located on the refrigerator shelves; optionally, one or more gravity sensors may be provided on each shelf layer and each door rack.
Optionally, the smart refrigerator further includes one of, or a combination of, the following devices:
a built-in display screen on the inside of the refrigerator door;
a microphone array on the inside of the refrigerator door;
an indicator light bar on the inside of the refrigerator door.
The built-in display screen, microphone array, indicator light bar, and related content have been described in detail above and are not repeated here.
In addition to the above devices, the smart refrigerator provided by the embodiments of the present application also includes some devices commonly used in refrigerators, which are not repeated here.
In other embodiments, referring to FIG. 25, an embodiment of the present application further provides an electronic device, which may be a smart refrigerator, including:
an acquisition unit 11 for collecting images;
a sensor 12 arranged in the image acquisition blind area of the acquisition unit;
a processor 13 configured to determine a visual detection result from the images collected by the acquisition unit, determine blind-area detection information through the sensor, and determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
In the above smart refrigerator provided by the embodiments of the present application, images are collected by the acquisition unit to determine a visual detection result, and blind-area detection information is determined by the sensor disposed in the image acquisition blind area, so that the user's access action to the items in the smart refrigerator and/or the storage position of the items is determined from both. The user's access operation information on the smart refrigerator used for storing items is thereby recognized, and the overall detection accuracy is improved and cost is saved without requiring too many cameras.
Optionally, when the processor 13 determines, through the visual detection result, a user access action to the items within the preset ROI of the image acquisition blind area, it further combines the sensor's blind-area detection information to determine the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
Optionally, within a preset duration after the processor 13 determines, through the visual detection result, a user access action to the items in the smart refrigerator within the preset ROI of the image acquisition blind area, it waits for blind-area detection information from the sensor, and determines the storage position of the item in the smart refrigerator according to whether the blind-area detection information is obtained from the sensor within the preset duration.
Optionally, the preset duration is determined according to the speed of the user's access action, which is determined by detecting the images collected by the acquisition unit.
Optionally, the processor 13 is further configured to: when the user performs an access operation on the items in the smart refrigerator outside the image acquisition blind area, determine the user's access action to the items in the smart refrigerator and/or the storage position of the items directly from the images collected by the acquisition unit.
Optionally, the processor 13 first judges whether the sensor detects the user; if so, the processor determines that the access position is the sensor's installation position and finally determines, in combination with the images collected by the acquisition unit, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator; otherwise, the processor determines that the access position is not the sensor's installation position and likewise finally determines the access action and/or storage position in combination with the collected images.
Optionally, the acquisition unit is installed on the upper part of the smart refrigerator and starts collecting images of the user accessing food materials when the user opens the refrigerator door; the sensor is installed on the edge of a door compartment of the smart refrigerator.
Referring to FIG. 26, an embodiment of the present application further provides an information identification apparatus, including:
a first unit 21, configured to determine a visual detection result through an acquisition unit installed on a smart refrigerator for storing items, and to determine blind-area detection information through a sensor installed on the smart refrigerator, where the sensor is arranged in the image acquisition blind area of the acquisition unit;
a second unit 22, configured to determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
Optionally, within a preset duration after the second unit 22 determines, through the visual detection result, a user access action to the items within the preset ROI of the image acquisition blind area, it waits for blind-area detection information from the sensor, and determines the storage position of the item in the smart refrigerator according to whether the blind-area detection information is obtained from the sensor within the preset duration.
Optionally, the preset duration is determined according to the speed of the user's access action, which is determined by detecting the images collected by the acquisition unit.
Optionally, the second unit 22 is further configured to: when the user performs an access operation on the items in the household appliance outside the image acquisition blind area, determine the user's access action to the items in the smart refrigerator and/or the storage position of the items directly from the images collected by the acquisition unit.
Optionally, the second unit 22 first judges whether the sensor detects the user; if so, it determines that the access position is the sensor's installation position and finally determines, in combination with the collected images, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator; otherwise, it determines that the access position is not the sensor's installation position and likewise finally determines the access action and/or storage position in combination with the collected images.
In other embodiments, the first unit 21 may be used to acquire image information of a food material and weight information of the food material, and the second unit 22 may be used to determine the type information of the food material according to the image information and the weight information.
Optionally, determining the type information of the food material according to the image information and the weight information specifically includes:
performing food recognition according to the image information, and determining a food material set and the image recognition probability of each food material in the set, where each image recognition probability in the set is greater than a preset value;
within the food material set, performing food recognition according to the weight information, and determining the weight recognition probability of each food material in the set;
determining the food type information according to the image recognition probabilities and the weight recognition probabilities.
Optionally, determining the food type information according to the image recognition probability and the weight recognition probability specifically includes:
multiplying (or weight-multiplying) the image recognition probability and the weight recognition probability, and using the resulting product as the final food recognition probability.
Optionally, performing food recognition according to the weight information and determining the weight recognition probability of the food material specifically includes:
determining the weight recognition probability Pweight of the food material by formula (1) above,
where w_a is the expected weight of the food material;
w_b is the measured weight of the food material;
σ and ω are both preset adjustment factors.
Optionally, the image information of the food material is acquired through an RGB camera.
Optionally, the weight information of the food material is acquired through a gravity sensor.
On the basis of the above embodiments, the present application further provides a computer-readable storage medium storing a computer program executable by a processor; when the program runs on the processor, the processor implements the following steps:
determining a first target ROI containing a hand region in a collected first target RGB image, and a second target ROI containing the hand region in a second target RGB image collected after the collection time of the first target RGB image;
if the first target ROI is identified as being located in a food material storage area, determining a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image;
determining the access action according to the sizes of the first area and the second area.
In a possible implementation, determining the target ROI containing the hand region in a collected target RGB image includes:
inputting the target RGB image into a trained network model, and receiving an RGB image output by the network model in which information identifying the target ROI is marked.
In a possible implementation, identifying that the first target ROI is located at a food material storage location includes:
if it is determined that a drawer of the smart refrigerator is open, determining the first region of the drawer in the first target RGB image;
if the first target ROI is located within the first region, determining that the first target ROI is located in the food material storage area.
In a possible implementation, identifying that the first target ROI is located at a food material storage location includes:
judging, according to the pre-stored second region of the food material storage areas of the smart refrigerator other than the drawer, whether the first target ROI is located within the second region;
if so, determining that the first target ROI is located in the food material storage area.
In a possible implementation, determining the access action according to the sizes of the first and second areas includes:
if the first area is greater than the second area, determining that the current action is a storing action;
if the first area is smaller than the second area, determining that the current action is a taking-out action.
In a possible implementation, the training process of the network model includes:
acquiring any sample RGB image in a training set, where first information of an ROI is pre-annotated in the sample RGB image;
inputting the sample RGB image into the network model, and outputting second information of the ROI in the sample RGB image;
training the network model according to the first information and the second information.
In a possible implementation, determining that the drawer is open includes:
determining first position information of the drawer edge in the first target RGB image;
if the first position information is inconsistent with pre-stored second position information of the drawer edge in the closed state, determining that the drawer is open.
Since the principle by which the above computer-readable medium solves the problem is similar to that of the access action recognition method, the steps implemented after the processor executes the computer program in the medium may refer to the above embodiments, and repeated description is omitted.

Claims (28)

  1. A smart refrigerator, comprising:
    an acquisition unit configured to collect a first target RGB image and a second target RGB image;
    a control unit configured to:
    determine a first target region of interest (ROI) containing a hand region in the collected first target RGB image, and a second target ROI containing the hand region in the second target RGB image collected after the collection time of the first target RGB image;
    if the first target ROI is identified as being located in a food material storage area, determine a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image, and determine the access action according to the sizes of the first area and the second area.
  2. The smart refrigerator according to claim 1, wherein the control unit is specifically configured to:
    input a target RGB image into a trained network model, and receive an RGB image output by the network model in which information identifying the target ROI is marked.
  3. The smart refrigerator according to claim 1, wherein the control unit is specifically configured to:
    if it is determined that a drawer of the smart refrigerator is open, determine a first region of the drawer in the first target RGB image;
    if the first target ROI is located within the first region, determine that the first target ROI is located in the food material storage area.
  4. The smart refrigerator according to claim 1, wherein the control unit is specifically configured to:
    judge, according to a pre-stored second region of the food material storage areas of the smart refrigerator other than the drawer, whether the first target ROI is located within the second region;
    if so, determine that the first target ROI is located in the food material storage area.
  5. The smart refrigerator according to claim 1, wherein the control unit is specifically configured to:
    if the first area is greater than the second area, determine that the current action is a storing action;
    if the first area is smaller than the second area, determine that the current action is a taking-out action.
  6. The smart refrigerator according to claim 2, wherein the control unit is specifically configured to:
    acquire any sample RGB image in a training set, wherein first information of an ROI is pre-annotated in the sample RGB image;
    input the sample RGB image into the network model, and output second information of the ROI in the sample RGB image;
    train the network model according to the first information and the second information.
  7. The smart refrigerator according to claim 1, wherein the control unit is specifically configured to:
    determine first position information of the drawer edge in the first target RGB image;
    if the first position information is inconsistent with pre-stored second position information of the drawer edge when the drawer is closed, determine that the drawer is open.
  8. The smart refrigerator according to any one of claims 1 to 7, further comprising a sensor arranged in the image acquisition blind area of the acquisition unit;
    wherein the control unit is further configured to determine a visual detection result from the images collected by the acquisition unit, determine blind-area detection information through the sensor, and determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
  9. The smart refrigerator according to any one of claims 1 to 7, further comprising a gravity sensor located on a refrigerator shelf;
    wherein the control unit is further configured to acquire image information of a food material through the acquisition unit and weight information of the food material through the gravity sensor;
    and determine the type information of the food material according to the image information and the weight information.
  10. An access action recognition method, comprising:
    determining a first target region of interest (ROI) containing a hand region in a collected first target RGB image, and a second target ROI containing the hand region in a second target RGB image collected after the collection time of the first target RGB image;
    if the first target ROI is identified as being located in a food material storage area, determining a first area of the first target ROI in the first target RGB image and a second area of the second target ROI in the second target RGB image;
    determining the access action according to the sizes of the first area and the second area.
  11. The method according to claim 10, further comprising:
    determining a visual detection result from images collected by an acquisition unit installed on the smart refrigerator, and determining blind-area detection information through a sensor installed on the smart refrigerator, wherein the sensor is arranged in the image acquisition blind area of the acquisition unit;
    determining, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
  12. The method according to claim 10, further comprising:
    acquiring image information of a food material through the acquisition unit, and weight information of the food material through a gravity sensor located on a refrigerator shelf;
    determining the type information of the food material according to the image information and the weight information.
  13. A smart refrigerator, comprising:
    an acquisition unit for collecting images;
    a sensor arranged in the image acquisition blind area of the acquisition unit;
    a processor configured to determine a visual detection result from the images collected by the acquisition unit, determine blind-area detection information through the sensor, and determine, according to the visual detection result and the blind-area detection information, the user's access action to the items in the household appliance and/or the storage position of the items in the household appliance.
  14. The smart refrigerator according to claim 13, wherein when the processor determines, through the visual detection result, a user access action to the items in the household appliance within the preset region of interest (ROI) of the image acquisition blind area, the processor further combines the sensor's blind-area detection information to determine the user's access action to the items in the household appliance and/or the storage position of the items in the household appliance.
  15. The smart refrigerator according to claim 14, wherein within a preset duration after the processor determines, through the visual detection result, a user access action to the items within the preset ROI of the image acquisition blind area, the processor waits for blind-area detection information from the sensor, and determines the storage position of the item in the smart refrigerator according to whether the blind-area detection information is obtained from the sensor within the preset duration.
  16. The smart refrigerator according to claim 15, wherein the preset duration is determined according to the speed of the user's access action, and the speed of the user's access action is determined by detecting the images collected by the acquisition unit.
  17. The smart refrigerator according to claim 13, wherein the processor is further configured to: when the user performs an access operation on the items in the household appliance outside the image acquisition blind area, determine the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator directly from the images collected by the acquisition unit.
  18. The smart refrigerator according to claim 13, wherein the processor first judges whether the sensor detects the user; if so, the processor determines that the access position is the sensor's installation position and finally determines, in combination with the images collected by the acquisition unit, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator; otherwise, the processor determines that the access position is not the sensor's installation position and finally determines, in combination with the images collected by the acquisition unit, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
  19. The smart refrigerator according to any one of claims 13 to 18, wherein the acquisition unit is installed on the upper part of the smart refrigerator and starts collecting images of the user accessing food materials when the user opens the refrigerator door; and the sensor is installed on the edge of a door compartment of the refrigerator.
  20. An information identification method, comprising:
    determining a visual detection result through an acquisition unit installed on a smart refrigerator for storing items, and determining blind-area detection information through a sensor installed on the smart refrigerator, wherein the sensor is arranged in the image acquisition blind area of the acquisition unit;
    determining, according to the visual detection result and the blind-area detection information, the user's access action to the items in the smart refrigerator and/or the storage position of the items in the smart refrigerator.
  21. A smart refrigerator, comprising:
    an acquisition unit located on the top of the refrigerator body;
    a gravity sensor located on a refrigerator shelf;
    a processor configured to acquire image information of a food material through the acquisition unit and weight information of the food material through the gravity sensor, and to determine the type information of the food material according to the image information and the weight information.
  22. The smart refrigerator according to claim 21, further comprising one of, or a combination of, the following devices:
    a built-in display screen on the inside of the refrigerator door;
    a microphone array on the inside of the refrigerator door;
    an indicator light bar on the inside of the refrigerator door.
  23. A food material identification method, comprising:
    acquiring image information of a food material and weight information of the food material;
    determining the type information of the food material according to the image information and the weight information.
  24. The method according to claim 23, wherein determining the type information of the food material according to the image information and the weight information specifically comprises:
    performing food material recognition according to the image information, and determining a food material set and the image recognition probability of each food material in the set, wherein each image recognition probability in the set is greater than a preset value;
    within the food material set, performing food material recognition according to the weight information, and determining the weight recognition probability of each food material in the set;
    determining the food material type information according to the image recognition probabilities and the weight recognition probabilities.
  25. The method according to claim 24, wherein determining the food material type information according to the image recognition probability and the weight recognition probability specifically comprises:
    multiplying (or weight-multiplying) the image recognition probability and the weight recognition probability, and using the resulting product as the final food material recognition probability.
  26. The method according to claim 24, wherein performing food material recognition according to the weight information and determining the weight recognition probability of the food material specifically comprises:
    determining the weight recognition probability Pweight of the food material by the following formula:
    $P_{weight} = \exp\left(-\frac{(w_b - w_a)^2}{2\omega^2}\right)$ for $|w_b - w_a| > \sigma$, and $P_{weight} = 1$ otherwise,
    wherein w_a is the expected weight of the food material;
    w_b is the measured weight of the food material;
    σ and ω are both preset adjustment factors.
  27. An electronic device, comprising at least a processor and a memory, wherein the processor is configured to implement the steps of the method according to any one of claims 10 to 12, the method according to claim 20, or the method according to any one of claims 23 to 26 when executing a computer program stored in the memory.
  28. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 10 to 12, the method according to claim 20, or the method according to any one of claims 23 to 26.
PCT/CN2021/139238 2021-03-31 2021-12-17 Smart refrigerator, access action recognition method, device, and medium WO2022206043A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202110348540.9 2021-03-31
CN202110348540.9A CN115147916A (zh) 2021-03-31 2021-03-31 家用电器、信息识别方法及装置
CN202110719382.3A CN115601827A (zh) 2021-06-28 2021-06-28 一种智能冰箱、存取动作识别方法、设备及介质
CN202110719382.3 2021-06-28
CN202111084767.3 2021-09-16
CN202111084767.3A CN115830589A (zh) 2021-09-16 2021-09-16 一种食材识别方法及设备、智能设备

Publications (1)

Publication Number Publication Date
WO2022206043A1 true WO2022206043A1 (zh) 2022-10-06

Family

ID=83457872

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/139238 WO2022206043A1 (zh) 2021-03-31 2021-12-17 智能冰箱、存取动作识别方法、设备及介质

Country Status (1)

Country Link
WO (1) WO2022206043A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222509A (zh) * 2015-09-30 2016-01-06 沈阳海尔电冰箱有限公司 冰箱的间室照明控制方法与冰箱
US20170284733A1 (en) * 2016-03-29 2017-10-05 Teco Electric & Machinery Co., Ltd. Remote food management system
CN107886028A (zh) * 2016-09-29 2018-04-06 九阳股份有限公司 一种冰箱的食材录入方法及食材录入装置
CN110443946A (zh) * 2018-05-03 2019-11-12 北京京东尚科信息技术有限公司 售货机、物品种类的识别方法和装置
CN110674789A (zh) * 2019-10-12 2020-01-10 海信集团有限公司 食材管理方法和冰箱
CN111476194A (zh) * 2020-04-20 2020-07-31 海信集团有限公司 一种感知模组工作状态检测方法及冰箱
CN111503991A (zh) * 2020-04-15 2020-08-07 海信集团有限公司 一种识别冰箱食材存取位置的方法及冰箱

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222509A (zh) * 2015-09-30 2016-01-06 沈阳海尔电冰箱有限公司 冰箱的间室照明控制方法与冰箱
US20170284733A1 (en) * 2016-03-29 2017-10-05 Teco Electric & Machinery Co., Ltd. Remote food management system
CN107886028A (zh) * 2016-09-29 2018-04-06 九阳股份有限公司 一种冰箱的食材录入方法及食材录入装置
CN110443946A (zh) * 2018-05-03 2019-11-12 北京京东尚科信息技术有限公司 售货机、物品种类的识别方法和装置
CN110674789A (zh) * 2019-10-12 2020-01-10 海信集团有限公司 食材管理方法和冰箱
CN111503991A (zh) * 2020-04-15 2020-08-07 海信集团有限公司 一种识别冰箱食材存取位置的方法及冰箱
CN111476194A (zh) * 2020-04-20 2020-07-31 海信集团有限公司 一种感知模组工作状态检测方法及冰箱

Similar Documents

Publication Publication Date Title
CN106802113B (zh) 基于多弹孔模式识别算法的智能报靶***和方法
CN111340126B (zh) 物品识别方法、装置、计算机设备和存储介质
CN111899131B (zh) 物品配送方法、装置、机器人和介质
CN110689560B (zh) 食材管理方法和设备
US20190370551A1 (en) Object detection and tracking delay reduction in video analytics
CN107477971A (zh) 一种对冰箱内食物的管理方法和设备
CN111476194B (zh) 一种感知模组工作状态检测方法及冰箱
CN105426828A (zh) 人脸检测方法、装置及***
CN109492577A (zh) 一种手势识别方法、装置及电子设备
CN106597463A (zh) 基于动态视觉传感器芯片的光电式接近传感器及探测方法
CN113139402B (zh) 一种冰箱
CN108596187A (zh) 商品纯净度检测方法及展示柜
CN111582257A (zh) 用于对待检测对象进行检测的方法、装置及***
CN110619266B (zh) 目标物识别方法、装置和冰箱
CN111080493A (zh) 一种菜品信息识别方法、装置及菜品自助结算***
CN111126133A (zh) 一种基于深度学习的智能冰箱存取动作识别方法
WO2022206043A1 (zh) 智能冰箱、存取动作识别方法、设备及介质
US20220325946A1 (en) Selective image capture using a plurality of cameras in a refrigerator appliance
CN112184751A (zh) 物体识别方法及***、电子设备
CN114359973A (zh) 基于视频的商品状态识别方法、设备及计算机可读介质
CN104268534A (zh) 触发式视频监控的智能箱控制***及其控制方法
CN108241876A (zh) 实现酒柜内酒品定位的方法和装置
CN113494803B (zh) 智能冰箱及冰箱门体内储物的存取操作检测方法
CN113124635B (zh) 冰箱
CN105737477B (zh) 一种冰箱

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21934679

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21934679

Country of ref document: EP

Kind code of ref document: A1