WO2022002129A1 - Method for identifying the hygiene status of an object and related electronic device (识别物体的卫生状况方法及相关电子设备) - Google Patents

Method for identifying the hygiene status of an object and related electronic device (识别物体的卫生状况方法及相关电子设备) Download PDF

Info

Publication number
WO2022002129A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
camera
image
type
bacteria
Prior art date
Application number
PCT/CN2021/103541
Other languages
English (en)
French (fr)
Inventor
戴同武
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP21834643.5A priority Critical patent/EP4167127A4/en
Priority to CN202180045358.4A priority patent/CN115867948A/zh
Priority to US18/003,853 priority patent/US20230316480A1/en
Publication of WO2022002129A1 publication Critical patent/WO2022002129A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/94 Hardware or software architectures specially adapted for image or video understanding
    • G06V 10/945 User interactive design; Environments; Toolboxes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/698 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10056 Microscopic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30242 Counting objects in image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 Static hand or arm
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Definitions

  • the present application relates to the field of artificial intelligence and the corresponding sub-field of computer vision, and in particular, to a method for recognizing the hygiene status of an object and related electronic equipment.
  • the microscopic world usually refers to the material world at the particle level such as molecules and atoms. There are many microorganisms in the microscopic world, which are closely related to our life. The microscopic world is difficult to observe with the naked eye. If we can observe the types and density distribution of bacteria in our lives, it can help us better understand our living environment.
  • to identify bacteria, the following aspects should be covered. First, observe the individual morphology of the bacteria under a traditional microscope, including: performing Gram staining to distinguish whether it is a Gram-positive bacterium (G+ bacterium) or a Gram-negative bacterium (G- bacterium), and observing its shape, size, presence or absence of spores and their location, etc. under a traditional microscope; then observe the colony morphology of the bacteria under a traditional microscope, mainly its shape, size, edge condition, elevation, transparency, color, smell and other characteristics; then perform a motility test on the bacteria to see whether the bacteria can move and the type of flagella attachment (polar or peritrichous); finally, subject the bacteria to physiological and biochemical reactions and serological reaction experiments.
  • the bacterial species was determined by referring to the microbial classification key table.
  • traditional microscopes are expensive and bulky, making them difficult to use in daily life, and ordinary people rarely have the experimental conditions and expertise needed to identify bacteria.
  • the embodiments of the present application provide a method for identifying the sanitary status of an object and related electronic equipment, which can determine the sanitary status of the object through the electronic equipment and give intelligent prompts.
  • the "object” referred to in this article can be a certain part of the human body (such as hands, feet, etc.), or it can be any object or part of any object other than a human (such as a certain food such as fruit, Another example can be a dinner plate), and so on.
  • the present application provides a method for recognizing the sanitary status of an object, including: determining a type of a first object by an electronic device; collecting, by the electronic device, a first image of the first object through a first camera, and the first image is a microscopic image; the electronic device outputs first prompt information according to the type of the first object and the first image, and the first prompt information is used to indicate the sanitary condition of the first object.
  • the electronic device determines the type of the first object; this may be determined by the electronic device itself, or the electronic device may obtain the determined type of the first object from another device (e.g., a server), or the electronic device may determine the type of the first object according to user-input information indicating the type of the first object, and so on.
  • the electronic device outputs first prompt information according to the type of the first object and the first image; it may be that the electronic device itself analyzes the type of the first object and the first image and outputs prompt information related to the analysis result, or the electronic device sends at least the first image to another device (such as a server), the other device performs the analysis and sends the analysis result back to the electronic device, and the electronic device then outputs prompt information related to the analysis result, etc.
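  • As a rough illustration of the claimed sequence, the following Python sketch chains the three steps with hypothetical stand-in functions (none of these names come from the patent):

```python
# A minimal sketch of the claimed flow. All names here are hypothetical
# stand-ins, not APIs from the patent.
def determine_object_type() -> str:
    return "hand"    # e.g. recognized from a macroscopic image or user input

def capture_microscopic_image():
    return object()  # placeholder for a frame from the first (microscopic) camera

def analyze_hygiene(object_type: str, micro_image) -> str:
    return "hygienic"  # performed locally or delegated to a server

def show_prompt(status: str) -> None:
    print(f"Hygiene status: {status}")

# Determine the type, collect the first (microscopic) image,
# then output the first prompt information.
show_prompt(analyze_hygiene(determine_object_type(), capture_microscopic_image()))
```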
  • the execution sequence of each step may have various possible implementation manners.
  • the step of determining the type of the first object by the electronic device may occur before, after, or at the same time as the step of collecting the first image of the first object through the first camera.
  • the first camera may be a built-in microscopic camera of the electronic device, or may be an external microscopic camera 1. In a possible implementation manner, the external microscopic camera 1 may have a communication connection with the electronic device.
  • the external microscope camera 1 can be installed on an electronic device, such as clipped on the side of the electronic device.
  • the electronic device acquires a microscopic image of the first object through the microscopic camera 1 (a microscopic image may refer to an image obtained by microscopically magnifying the object to be photographed), and determines the first information of the first object according to the microscopic image.
  • an external microscopic camera 2 may be installed on a built-in camera of the electronic device, and the electronic device may obtain the information of the first object through the external microscopic camera 2 and the built-in camera.
  • the electronic device determines the first information of the first object according to the microscopic image. There may be no communication connection between the external microscopic camera 2 and the electronic device; it may be only physically mounted on the surface of the built-in camera, changing the content in the external field of view of the built-in camera (the object to be photographed is magnified), so that the electronic device can obtain a microscopic image of the first object through the built-in camera. In this implementation, the electronic device uses the external microscopic camera 2 and the built-in camera together to obtain the microscopic image (first image) of the first object; in "the electronic device collects the first image of the first object through the first camera", the first camera can be understood as at least one of the external microscopic camera 2 and the built-in camera.
  • the electronic device performs comprehensive analysis based on the type of the first object and the first image of the first object (the first image can be understood as a microscopic image), determines the sanitary condition of the first object, and gives first prompt information.
  • the hygiene status described in the first prompt information can be expressed in the form of a score, where a higher score means the object is more sanitary; the hygiene status can also be expressed in the form of a text description, such as hygienic, unhygienic, very hygienic, etc. That is to say, users can conveniently observe microscopic images of objects in daily life by using electronic devices (such as mobile phones, tablet computers and other portable electronic devices), and can obtain hygiene advice for those objects. This method enables users to easily identify the types of bacteria in their daily life, helps users understand the microscopic world, judges the hygiene status of objects through electronic devices, and provides users with intelligent hygiene tips.
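  • For instance, a score-to-description mapping could look like the following sketch (the score bands are assumptions; the patent only states that a higher score means a more sanitary object):

```python
def describe_hygiene(score: int) -> str:
    # Hypothetical score bands; not specified in the patent.
    if score >= 80:
        return "very hygienic"
    if score >= 50:
        return "hygienic"
    return "unhygienic"

print(describe_hygiene(85))  # -> very hygienic
```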
  • the method further includes: the electronic device collects a second image of the first object through a second camera (the second image can be understood as a macroscopic image; different from a microscopic image, a macroscopic image may magnify the object by a certain factor, but not by a large one; in some example scenarios, a macroscopic image can also be understood as the image obtained when taking daily photos with the cameras currently common on mobile phones); the electronic device determining the type of the first object then specifically includes: the electronic device determining the type of the first object according to the second image.
  • the second camera may be a photographing camera (which may include one or more cameras), and the electronic device obtains the second image of the first object through the one or more photographing cameras, thereby determining the type of the first object.
  • the second image of the first object may include the first object, and may also include other objects.
  • the user can determine that the object of interest is the first object by clicking on the screen of the electronic device or by other methods.
  • the second image further includes a second object, and the above method further includes: the electronic device acquires a user operation on the display area of the second object, and outputs second prompt information indicating the sanitary condition of the second object.
  • the electronic device determines the type of the first object and the type of the second object according to the second image; when the electronic device acquires a user operation (for example, the user clicks in the display area of the first object on the second image), the electronic device collects the first image of the first object through the first camera and outputs, according to the type of the first object and the first image, first prompt information indicating the sanitary condition of the first object; when the electronic device acquires a user operation on the display area of the second object, the electronic device captures an image of the second object through the first camera.
  • the electronic device determining the type of the first object includes: the electronic device determining the type of the first object according to the detected user operation.
  • the user operation may be a user operation of inputting voice/text, a user operation of making a correction, a user operation of clicking an option, and the like.
  • when the electronic device detects voice information input by the user, the voice information is recognized to determine the type of the first object; when the electronic device detects text information input by the user, the text information is recognized to determine the type of the first object.
  • the user can assist the electronic device to correctly determine the type of the object.
  • determining the type of the first object by the electronic device specifically includes: determining the type of the first object by the electronic device according to an image of the first object collected by the first camera.
  • the first camera can be a microscopic camera, wherein the magnification of the microscopic camera can be adjusted.
  • when the magnification of the microscopic camera is adjusted to be relatively low, the electronic device can collect an image of the object through the microscopic camera to determine the type of the object; when the magnification of the microscopic camera is adjusted to be high enough to identify bacteria, the electronic device can determine the distribution of bacteria on the object through the microscopic images collected by the microscopic camera.
  • the method further includes: determining first information of the first object according to the first image, wherein there is a correlation between the first information of the first object and the sanitary condition of the first object, and the first information includes the type and number of bacteria. In this way, the hygienic condition of the first object is judged by analyzing the type and quantity of bacteria on the first object.
  • the first information may include at least one of texture, pores, and color information.
  • the freshness of the first object (such as fruits, vegetables, etc.) can be determined by analyzing at least one of the information on the first object, such as texture, pores, and color.
  • the first prompt information output by the electronic device may also be used to indicate the freshness of the first object.
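  • As a hedged illustration, freshness could be reduced to a single score from these cues (the normalization and equal weighting are assumptions; the patent only names texture, pores and color as possible inputs):

```python
def freshness_score(texture: float, pores: float, color: float) -> float:
    # Features assumed normalized to [0, 1]; equal weighting is illustrative.
    return (texture + pores + color) / 3.0

print(freshness_score(0.9, 0.8, 0.7))  # -> 0.8 (fresher is higher, by convention here)
```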
  • the first information includes the quantity of a first bacteria; when the quantity of the first bacteria is a first quantity, the first prompt information indicates that the sanitary condition of the first object is a first sanitary condition; when the quantity of the first bacteria is a second quantity, the first prompt information indicates that the sanitary condition of the first object is a second sanitary condition.
  • For example, if the first quantity does not exceed a first threshold, the first sanitary condition may be expressed as sanitary; if the second quantity exceeds the first threshold, the second sanitary condition may be expressed as unsanitary. Alternatively, if the first quantity exceeds the first threshold, the first sanitary condition may be expressed as unhygienic; if the second quantity exceeds a second threshold, the second sanitary condition may be expressed as very unhygienic, where the second threshold is greater than the first threshold. That is, the greater the number of the first bacteria, the greater the impact on the sanitary condition of the first object.
  • different bacteria have different effects on the hygiene of the first object. For example, when pathogenic bacteria exist on the first object, it can be directly confirmed that the first object is unsanitary; when common bacteria exist on the first object, it can be further determined whether the first object is unsanitary by the number of common bacteria.
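  • A minimal sketch of this decision logic (the species names and threshold value are assumptions; the patent does not specify them):

```python
FIRST_THRESHOLD = 100        # hypothetical count threshold
PATHOGENIC = {"salmonella"}  # hypothetical example of a pathogenic species

def hygiene_from_bacteria(counts: dict[str, int]) -> str:
    # Pathogenic bacteria present at all -> directly unsanitary.
    if any(counts.get(b, 0) > 0 for b in PATHOGENIC):
        return "unsanitary"
    # Otherwise judge by the number of common bacteria.
    common_total = sum(n for b, n in counts.items() if b not in PATHOGENIC)
    return "unsanitary" if common_total > FIRST_THRESHOLD else "sanitary"

print(hygiene_from_bacteria({"staphylococcus": 40}))  # -> sanitary
print(hygiene_from_bacteria({"salmonella": 1}))       # -> unsanitary
```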
  • the electronic device outputting the first prompt information includes: the electronic device displays a first image of the first object, and displays the first prompt information on the first image of the first object.
  • the microscopic camera can also be activated through application software corresponding to the microscopic camera, and the application software can be installed on the electronic device.
  • the prompt information includes suggestions for improving the hygiene of the first object.
  • the reasons causing the unsanitary conditions of the first object can be identified, and corresponding suggestions can be given.
  • the prompt information may be a recommendation to clean, a recommendation to heat at high temperature, a recommendation to discard, and so on.
  • the electronic device outputting the first prompt information according to the type of the first object and the first image includes: the electronic device determining the hygienic state of the first object according to the first image and a knowledge graph corresponding to the type of the first object, wherein the knowledge graph includes common bacterial species corresponding to the type of the first object.
  • the knowledge graph indicates the association rule between the hygiene status of the first object and the type of bacteria.
  • the association rule can be that when a certain type of bacteria exists, it means that the first object is unsanitary.
  • the association rule can also be that when the number of a certain type of bacteria exceeds a threshold, the first object is unsanitary, and so on.
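  • The two rule forms could be encoded as a small lookup, as in this hypothetical sketch (the structure, species and threshold are illustrative; the patent does not specify a concrete encoding):

```python
# Hypothetical knowledge-graph fragment: object type -> common bacterial
# species and association rules.
KNOWLEDGE_GRAPH = {
    "hand": {
        "unsanitary_if_present": ["salmonella"],      # rule: presence alone suffices
        "count_thresholds": {"staphylococcus": 500},  # rule: unsanitary above threshold
    },
}

def apply_rules(object_type: str, counts: dict[str, int]) -> str:
    rules = KNOWLEDGE_GRAPH.get(object_type, {})
    if any(counts.get(b, 0) > 0 for b in rules.get("unsanitary_if_present", [])):
        return "unsanitary"
    for bacteria, limit in rules.get("count_thresholds", {}).items():
        if counts.get(bacteria, 0) > limit:
            return "unsanitary"
    return "sanitary"

print(apply_rules("hand", {"staphylococcus": 600}))  # -> unsanitary
```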
  • the first camera is a microscopic camera; the second camera is a photographing camera; the electronic device is a mobile phone; and the type of the first object is a hand.
  • the present application provides an electronic device including one or more processors, one or more memories, and a touch screen.
  • the one or more memories are coupled to the one or more processors and are used for storing computer program code, the computer program code comprising computer instructions that, when executed by the one or more processors, cause the electronic device to perform the method for identifying the sanitary status of an object in any possible implementation of any of the above aspects.
  • the embodiments of the present application provide a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the method for identifying the sanitary status of an object in any possible implementation of any of the above aspects.
  • an embodiment of the present application provides a computer program product that, when the computer program product runs on a computer, causes the computer to execute the method for identifying the sanitary status of an object in any possible implementation manner of the first aspect above.
  • FIG. 1a is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 1b is a schematic structural diagram of another electronic device provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a group of interfaces provided in an embodiment of the present application.
  • FIG. 4a is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 4b is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 7a is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 7b is another set of interface schematic diagrams provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of an algorithm structure provided by an embodiment of the present application.
  • FIG. 12 is a structural diagram of a knowledge graph provided by an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of a method for identifying the sanitary status of an object provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of a software architecture provided by an embodiment of the present application.
  • “first” and “second” are only used for descriptive purposes, and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Therefore, features defined as “first” and “second” may explicitly or implicitly include one or more of those features. In the description of the embodiments of the present application, unless otherwise specified, “multiple” means two or more.
  • the embodiments of the present application provide a method for recognizing the sanitary status of an object, which can be applied to an electronic device having a microscopic camera and a photographing camera, and the microscopic camera and the photographing camera can work simultaneously or sequentially in a preset order.
  • the electronic device can collect the image of the object through the photographing camera, and recognize the scene in the image (for example, food, hand, dining table, etc.) (or, understood as recognizing the type of the object in the image).
  • Electronic devices can also collect images of the same object through a microscope camera, and identify the microscopic information in the image.
  • the microscopic information includes the type and quantity of bacteria; for example, for an image of an apple, the possible microscopic information includes yeasts, actinomycetes, edible fungi, etc.
  • the electronic device can determine the sanitation status of the object through scene information and microscopic information corresponding to the object. Through this method, the user can easily observe the distribution of microorganisms on the object in the living scene, so that the user can take corresponding sanitation treatment.
  • comprehensive analysis can be performed in combination with scene information and microscopic information corresponding to the object, to determine the sanitation status of the object, and to give intelligent prompts.
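  • The patent classifications above mention convolutional networks (G06N 3/0464) and fusion of extracted features (G06V 10/806). As one hedged illustration (an assumption, not the actual algorithm structure of FIG. 11), a two-branch network could encode the macroscopic image and the microscopic image separately and fuse their features for classification (requires PyTorch):

```python
import torch
import torch.nn as nn

class HygieneNet(nn.Module):
    """Illustrative two-branch network: one branch encodes the macroscopic
    image (scene/object type), one encodes the microscopic image (bacteria),
    and the extracted features are fused for classification."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.macro_branch = branch()
        self.micro_branch = branch()
        self.head = nn.Linear(64, num_classes)  # 32 + 32 fused features

    def forward(self, macro_img: torch.Tensor, micro_img: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.macro_branch(macro_img),
                           self.micro_branch(micro_img)], dim=1)
        return self.head(fused)  # e.g. hygienic / unhygienic / very unhygienic

model = HygieneNet()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 3])
```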
  • the smart prompt may include a description of the hygienic state of the object, a suggestion for improving the hygienic state of the object, a suggestion for how to dispose of the object, and so on.
  • the manner in which the electronic device gives a prompt is not limited to text, voice, vibration, and/or indicator lights, and the like.
  • the microscope camera may include a plan achromatic micro-objective lens, and the micro-objective lens may have an optical resolution of 2 ⁇ m, a magnification of about 20-400 times, and a field diameter of 5 mm.
  • FIG. 1a is an exemplary schematic structural diagram of an electronic device.
  • the back cover 10 side of the electronic device includes a photographing camera 11. The photographing camera 11 may include a plurality of cameras, including at least a microscopic camera 12 (currently common photographing cameras do not include a microscopic camera), and may also include currently common photographing cameras (cameras used for taking pictures on electronic devices), such as: a mid-focus camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a TOF (time of flight) depth camera, a movie camera, and/or a macro camera, etc. (in this implementation, the microscopic camera 12 is built into the electronic device and is one of the cameras of the photographing camera 11). Electronic devices can be equipped with dual cameras (two cameras), triple cameras (three cameras), quad cameras (four cameras), five cameras (five cameras) or even six cameras (six cameras), combining cameras for different functional requirements to improve photographing performance.
  • the magnification of the photographing camera 11 is between 0.2 and 20 times.
  • the basic parameters of the cameras are exemplarily introduced: for example, a mid-focus camera with a 50mm focal length and f/1.6 aperture; a telephoto camera with a 200mm focal length and f/3.4 aperture; a wide-angle camera with a 35mm focal length and f/2.2 aperture.
  • the microscopic camera 12, with a certain magnification, can observe bacteria. Generally speaking, the maximum magnification of the microscopic camera 12 is more than 200 times.
  • the microscopic camera 12 is set in the electronic device as one of the plurality of cameras of the photographing camera 11; when the electronic device collects images through one of the cameras of the photographing camera 11 that is not the microscopic camera 12, the scene in the image can be identified, and in some embodiments, the scene can be understood as the type of object (e.g., food, hands, dining table, etc.) in the captured image.
  • the electronic device collects the image through the microscopic camera 12, and can identify the microscopic information in the image, and the microscopic information includes the type and quantity of bacteria.
  • the microscopic camera 12 and the other cameras of the photographing camera 11 can work at the same time, so the user can conveniently observe the distribution of microorganisms in a living scene.
  • the embodiment of the present application does not limit the position of the microscopic camera 12. The microscopic camera 12 can be placed on the back cover side of the electronic device, or on the display screen side of the electronic device, or on the side opposite to the display screen of the electronic device, or on the side of a side screen of the electronic device.
  • FIG. 1b exemplarily shows a schematic structural diagram of another electronic device.
  • a side of the rear cover 10 of the electronic device includes a photographing camera 11, and a microscopic camera 12 is also installed on the rear cover (in this implementation, the microscopic camera 12 is not one of the cameras of the photographing camera 11).
  • the photographing camera 11 may include a plurality of cameras (such as a mid-focus camera, a telephoto camera, a wide-angle camera, etc.), and the microscopic camera 12, as an accessory of the electronic device, is installed on one of the plurality of cameras of the photographing camera 11.
  • the microscopic camera 12 can be attached to the surface of a camera to change the content in the external field of view of that camera (the object to be photographed is magnified), so that the electronic device can obtain the microscopic image of the first object through that camera (such a camera may be called the borrowed camera).
  • the electronic device can identify the borrowed camera, and use the other available cameras of the photographing camera 11 to perform daily shooting (microscopic shooting using a microscopic camera is different from current daily/regular shooting). The following exemplarily introduces possible ways in which the electronic device recognizes the borrowed camera.
  • Manner 1: after the microscopic camera 12 is installed on one of the cameras of the photographing camera 11, the electronic device captures images with each camera of the photographing camera 11 to determine which camera the microscopic camera 12 is installed on.
  • Manner 2: the electronic device receives information sent by the application software corresponding to the microscopic camera 12 and determines the camera on which the microscopic camera 12 is installed.
  • Manner 3: after receiving the user operation for starting the application software corresponding to the microscopic camera 12, the electronic device displays a first user interface, where the first user interface includes the cameras in the photographing camera 11 of the electronic device that support installation of the microscopic camera 12.
  • the first user interface may also include a camera that the electronic device recommends for the user to install the microscopic camera 12 on.
  • the electronic device determines the camera on which the microscope camera 12 is installed according to the received user operation.
  • For example, the electronic device can call the wide-angle camera to collect images and recognize the scene according to the images, or call the telephoto camera to collect images and recognize the scene according to the images. When the microscopic camera is installed on the mid-focus camera, the electronic device can call the wide-angle camera and/or the telephoto camera to collect images and recognize the scene according to the images; when the microscopic camera is installed on the telephoto camera, the electronic device can call the mid-focus camera and/or the wide-angle camera to collect images and recognize the scene according to the images.
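  • As a sketch of this fallback behavior (the table below is extrapolated from the examples above; the wide-angle row is an assumption inferred by symmetry, and all names are hypothetical):

```python
# When the microscopic camera occupies ("borrows") one lens,
# the remaining lenses are used for scene capture.
FALLBACK_CAMERAS = {
    "mid_focus": ["wide_angle", "telephoto"],
    "wide_angle": ["mid_focus", "telephoto"],  # assumed by symmetry
    "telephoto": ["mid_focus", "wide_angle"],
}

def scene_cameras(borrowed: str) -> list[str]:
    return FALLBACK_CAMERAS.get(borrowed, [])

print(scene_cameras("telephoto"))  # -> ['mid_focus', 'wide_angle']
```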
  • the electronic device can call the other cameras of the photographing camera 11 to perform conventional shooting, and identify the scene in the captured image.
  • the electronic device may be an electronic device such as a mobile phone, a tablet computer, a handheld computer, a wearable device, a virtual reality device, or a smart home device, or a functional module installed on or running on the above-mentioned electronic device, etc.
  • the microscopic camera 12 on the electronic device may be an external camera (installed outside the electronic device); the external microscopic camera 12 (a possible product form is, for example, Tipscope; in some embodiments, the interaction between the external microscopic camera and the electronic device can refer to the interaction between the Tipscope and a mobile phone) can include a miniature objective lens, and can also include other components.
  • the external microscopic camera 12 can be communicatively connected to the electronic device, wherein the connection between the microscopic camera 12 and the electronic device is not limited to wired or wireless (e.g., Bluetooth, Wi-Fi, etc.).
  • the image captured by the microscopic camera 12 is sent to the electronic device, and the electronic device acquires the image and acquires the microscopic information in the image.
  • the camera that plays a role in capturing microscopic images may be referred to as the first camera
  • the camera that plays a role in capturing macroscopic images may be referred to as the second camera.
  • In some embodiments, the first camera is the microscopic camera 12, and the second camera is one or more of the cameras in the photographing camera 11 other than the microscopic camera 12.
  • In other embodiments, the first camera can be understood as the camera in the photographing camera 11 on which the microscopic camera 12 is installed, and the second camera is one or more of the cameras in the photographing camera 11 on which the microscopic camera 12 is not installed.
  • the microscopic image may be referred to as the first image and the macroscopic image may be referred to as the second image.
  • the electronic device collects microscopic images through a first camera (which may include one or more cameras, whose performance may differ from each other), and recognizes the microscopic information in the microscopic images; the electronic device collects macroscopic images through a second camera (which may include one or more cameras, whose performance may differ from each other), and identifies the scene in the macroscopic images.
  • user operations include but are not limited to touch operations, voice operations, gesture operations, and the like.
  • the following describes in detail how the electronic device recognizes the hygiene status of the object from the application interface.
  • Manner 1: start the camera application of the electronic device, and obtain microscopic information through the first camera.
  • the camera application is the application software used by the electronic device for taking pictures.
  • the camera application is started, and the electronic device calls each camera to shoot.
  • the first camera may be configured in the camera application, and the first camera may be called through the camera application.
  • the display interface 202 in part a in FIG. 2 displays a plurality of application icons.
  • the display interface 202 includes an application icon of the camera 205 .
  • when the electronic device detects the user operation 206 acting on the application icon of the camera 205, the application interface provided by the camera application is displayed.
  • part b in FIG. 2 shows a user interface provided by one possible camera application.
  • the application interface of the camera 205 is shown in part b in FIG. 2 .
  • the application interface may include: a display area 30 , a flash icon 301 , a setting icon 302 , a mode selection area 303 , a gallery icon 304 , a confirmation icon 305 , and a switch icon 306 . If the user wants to acquire microscopic information, the application icon of the microscopic mode 303A in the mode selection area 303 can be triggered by the user operation 307 .
  • the display area 30 in part b in FIG. 2 displays a preview image of the data collected by the camera currently used by the electronic device.
  • the camera currently used by the electronic device can be the default camera set by the camera application, that is, when the camera application is opened, the display content of the display area 30 is always the preview image of the data collected by the second camera; the camera currently used by the electronic device can also be the camera that was in use when the camera application was last closed.
  • the flash icon 301 can be used to indicate the working status of the flash.
  • the flash icon 301 when the flash is on and off can be displayed in different forms, for example, when the flash is on, the flash icon is filled with white, and when the flash is off, the flash icon is filled with black.
  • the user can control the on and off of the flash by touching the flash icon 301 .
  • the flash is also turned on when the microscopic image is captured by the first camera, and the object to be photographed is illuminated by the flash.
  • the setting icon 302: when a user operation acting on the setting icon 302 is detected, in response to the operation, the electronic device can display other shortcut functions, such as adjusting the resolution, time-lapse shooting (the time to start taking the picture can be controlled), shooting mute, voice-activated photo, smile capture (when the camera detects a smile feature, it automatically focuses on the smile) and other functions.
  • the mode selection area 303 is used to provide different shooting modes. According to the different shooting modes selected by the user, the cameras and shooting parameters enabled by the electronic device are also different. It may include a micro mode 303A, a night scene mode 303B, a photo mode 303C, a video mode 303D, and more 303E.
  • the icon of the photographing mode 303C in part b of FIG. 2 is marked to remind the user that the current mode is the photographing mode. Among them:
  • Micro mode 303A: in this mode, the user can observe a microscopic image of the object.
  • the electronic device captures a micro image through the first camera.
  • the icon of the photographing mode 303C in the mode selection area 303 is no longer marked, and the micro mode 303A is marked (in FIG. 3 , the icon 303A is marked as gray), which is used to prompt the user that the current mode is the micro mode.
  • the electronic device acquires the image collected by the first camera, and the display content of the display area 30 is the image collected by the first camera.
  • the electronic device acquires microscopic information according to the image collected by the first camera, and the microscopic information includes the type and quantity of bacteria.
  • the night scene mode 303B can improve the detail rendering ability of bright and dark parts, control noise, and present more picture details.
  • the photographing mode 303C is suitable for most photographing scenes, and can automatically adjust photographing parameters according to the current environment.
  • Video mode 303D: used to shoot a video.
  • More 303E: when detecting a user operation acting on More 303E, in response to the operation, the electronic device may display other selection modes, such as panorama mode (to achieve automatic stitching, the electronic device stitches multiple continuously taken photos into one photo to achieve the effect of expanding the viewing angle of the picture), HDR mode (automatically and continuously shoots underexposed, normally exposed, and overexposed photos, and selects the best parts to combine into one photo) and so on.
  • the image displayed in the display area 30 is the image processed in the current mode.
  • the mode selection area 303 may include, for example, the micro mode 303A, the night scene mode 303B, the photo mode 303C, the video mode 303D, a panorama mode, an HDR mode, etc.
  • the mode icons in the mode selection area 303 are not limited to virtual icons, but can also be implemented as physical buttons.
  • the gallery icon 304 when a user operation acting on the gallery icon 304 is detected, in response to the operation, the electronic device can enter the gallery, and the photos and videos that have been taken are displayed in the gallery.
  • the gallery icon 304 may be displayed in different forms. For example, after the electronic device saves the image currently captured by the camera, the gallery icon 304 displays a thumbnail of the image.
  • Confirmation icon 305: when a user operation (such as a touch operation, voice operation, gesture operation, etc.) acting on the confirmation icon 305 is detected, in response to the operation, the electronic device acquires the image currently captured by the camera used in the current mode (or the processed image of the currently collected image corresponding to the currently used mode) and saves it in the gallery, which can be entered through the gallery icon 304.
  • the switch icon 306 can be used to switch between the front camera and the rear camera.
  • the front camera and the rear camera both belong to the photographing camera 11.
  • the shooting direction of the front camera is the same as the display direction of the screen of the electronic device used by the user, and the shooting direction of the rear camera is opposite to the display direction of the screen of the electronic device used by the user. If the display area 30 currently displays the image captured by the rear camera, when a user operation acting on the switch icon 306 is detected, the display area 30 displays the image captured by the front camera in response to the operation. If the display area 30 currently displays the image captured by the front camera, when a user operation acting on the switch icon 306 is detected, the display area 30 displays the image captured by the rear camera in response to the operation.
  • the electronic device detects a user operation 307 acting on the micro mode 303A, and in response to the user operation, the electronic device captures a micro image through the first camera.
  • the icon of the photographing mode 303C in the mode selection area 303 is no longer marked, but the micro mode 303A is marked, which is used to prompt the user that the current mode is the micro mode.
  • the electronic device acquires the image collected by the first camera, and the display content of the display area 30 is the image collected by the first camera.
  • part c in FIG. 2 exemplarily shows the application interface of the micro mode 303A.
  • the icon of the micro mode 303A in the mode selection area 303 is marked, indicating that the current mode is the micro mode.
  • the display area 30 in part c in FIG. 2 displays the image captured by the first camera, and the electronic device can acquire microscopic information according to the image captured by the first camera.
  • the electronic device refreshes/updates the switch icon 306 to the conversion icon 307 in response to a user operation acting on the micro mode 303A.
  • the conversion icon 307 in part c in FIG. 2 can be used for the conversion display of the macro image captured by the second camera and the micro image captured by the first camera in the display content of the display area 30 . If the display area 30 currently displays the image captured by the first camera, when a user operation acting on the conversion icon 307 is detected, the display area 30 displays the image captured by the second camera in response to the operation. If the display area 30 currently displays the image captured by the second camera, when a user operation acting on the conversion icon 307 is detected, the display area 30 displays the image captured by the first camera in response to the operation.
  • the second camera and the first camera can capture images at the same time; regardless of whether the display area 30 displays images captured by the second camera or the first camera, the electronic device can acquire scene information in the image (or, understood as acquiring the types of objects in the image) according to the macroscopic image captured by the second camera, and at the same time acquire microscopic information according to the microscopic image captured by the first camera.
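  • A minimal sketch of this concurrent behavior (all function parameters here are hypothetical stand-ins; the patent does not prescribe a threading model):

```python
import threading
import time

def analysis_loop(get_macro_frame, get_micro_frame, recognize_scene,
                  recognize_micro_info, update_prompt, stop: threading.Event):
    # Regardless of which preview is shown on screen, both streams are
    # analyzed: object type from the macro frame, bacteria type/quantity
    # from the micro frame.
    while not stop.is_set():
        scene = recognize_scene(get_macro_frame())
        micro_info = recognize_micro_info(get_micro_frame())
        update_prompt(scene, micro_info)
        time.sleep(0.1)  # throttle; a real device would be frame-driven
```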
  • the user operation 206 and the user operation 307 include, but are not limited to, user operations such as clicks, shortcut keys, gestures, floating touch, and voice commands.
  • Manner 2: start an application of the electronic device that is specifically aimed at acquiring microscopic images (e.g., an application named “micro mode”), and collect microscopic images through the first camera.
  • the micro mode application can be an application specifically for the microscopic camera, which can be downloaded from the Internet and installed on the electronic device.
  • the microscopic mode application is started, and the electronic device calls the first camera to shoot.
  • "micro-pattern" is just an example of one possible name, and other names are possible.
  • the display interface 202 in part a in FIG. 3 displays a plurality of application icons.
  • the display interface 202 includes application icons of the micro mode 207 . If the user wants to capture a microscopic image through the first camera, the application icon of the microscopic mode 207 is triggered through the user operation 208 .
  • the electronic device displays the application interface of the micro mode 207 in response to the user operation 208 .
  • part b in FIG. 3 exemplarily shows an application interface provided by a possible micro mode application.
  • the application interface may include: a display area 40 , a flash icon 401 , a setting icon 402 , a gallery icon 403 , a confirmation icon 404 , and a conversion icon 405 .
  • the display area 40 can display the image captured by the camera currently used by the electronic device.
  • the gallery icon 403 when a user operation acting on the gallery icon 403 is detected, in response to the operation, the electronic device may enter a microscopic image gallery, and the microscopic images and videos that have been taken are displayed in the microscopic image gallery.
  • the gallery icon 403 may be displayed in different forms. For example, after the electronic device saves the microscopic image currently captured by the first camera, the gallery icon 403 displays a thumbnail of the microscopic image.
  • the conversion icon 405 can be used for conversion display of the captured image of the second camera and the captured image of the first camera in the display content of the display area 40 .
  • If the display area 40 currently displays the image captured by the first camera, when a user operation 406 acting on the conversion icon 405 is detected, the display area 40 displays the image captured by the second camera in response to the operation. If the display area 40 currently displays the image captured by the second camera, when a user operation acting on the conversion icon 405 is detected, the display area 40 displays the image captured by the first camera in response to the operation.
  • the flash icon 401 reference may be made to the related description of the flash icon 301 in part b in FIG. 2 .
  • the setting icon 402 may refer to the related description of the setting icon 302 in part b in FIG. 2 .
  • the confirmation icon 404 can refer to the relevant description of the confirmation icon 305 in part b in FIG. 2 .
  • the display content of the display area 40 can also be converted by sliding.
  • the user operation 407 is a leftward sliding operation, which acts on the display area 40 .
  • the display area 40 of the electronic device initially displays a microscopic image (the image captured by the first camera); when the electronic device detects the operation 407 acting on the display area 40, it gradually displays the image captured by the second camera along with the operation 407, achieving the effect of switching the display content of the display area 40.
  • When the display area 40 of the electronic device starts by displaying the image captured by the second camera, the user can achieve the effect of changing the display content of the display area 40 by sliding to the right.
  • the second camera and the first camera can capture images at the same time; regardless of whether the display area 40 displays the images captured by the second camera or the first camera, the electronic device can acquire the scene in the image according to the image captured by the second camera, and simultaneously acquire the microscopic information in the image according to the image captured by the first camera.
  • the user operation 208 includes, but is not limited to, user operations such as clicks, shortcut keys, gestures, floating touch, and voice commands.
  • the display area 410 may display the image captured by the second camera in real time, and the display area 40 may display the image captured by the first camera in real time.
  • the images of the display area 40 and the display area 410 correspond to each other, that is, the display content of the display area 410 is a microscopic image of the display content of the display area 40 .
  • the images captured by the second camera and the first camera are also constantly changing, and the display contents of the display area 40 and the display area 410 are also constantly changing.
  • the application interface of the micro mode in part b in FIG. 5 may further include a control 411 .
  • the control 411 is used to trigger the electronic device to recognize the microscopic information of the object (further, optionally, it can also be used to trigger the electronic device to recognize the scene information corresponding to the object; alternatively, recognition of the scene information corresponding to the object may be triggered by other user operations rather than through the control 411) so as to determine the hygiene status of the object.
  • when the electronic device detects a user operation on the control 411 for the first time (for example, the user clicks and selects the control 411), the electronic device acquires the scene in the image according to the image captured by the second camera, and acquires the microscopic information of the object in the image according to the image collected by the first camera.
  • the electronic device determines the sanitation state of the object according to the scene in the image and the microscopic information, and the electronic device can output prompt information about the sanitation state of the object.
  • the electronic device when the electronic device detects a user operation on the control 411 for the second time (it may be that the user clicks and deselects the control 411), the electronic device may no longer recognize the scene information and microscopic information corresponding to the object, That is, prompt information about the hygiene status of the object is no longer output. In this case, the user can view the microscopic image and the macroscopic image of the object in the application interface of the microscopic mode.
  • the control 411 in part b in FIG. 5 can also be used to trigger the electronic device to collect a macro image through the second camera (in this case, the display area 410 in part b in FIG. 5, which may display a thumbnail of the macro image, may not appear at first, but appears after the user operates the control 411).
  • the electronic device detects a user operation on the control 411 for the first time, the electronic device can display the image (which can be a thumbnail) captured by the second camera in the display area 410 in real time.
  • the electronic device acquires the scene in the image according to the image captured by the second camera, and simultaneously acquires the microscopic information in the image according to the image captured by the first camera, so that the sanitary condition of the object can be determined, and the electronic device can output prompt information about the sanitary condition of the object.
  • the electronic device detects a user operation on the control 411 for the second time, the electronic device does not need to acquire a macro image, and the display area 410 can be hidden or a black screen can be displayed.
  • the electronic device no longer recognizes the scene information and microscopic information corresponding to the object, that is, no longer outputs prompt information about the hygiene status of the object.
  • the display contents of the display area 40 and the display area 410 can be switched to each other according to user operations.
  • the display area 410 displays the image captured by the second camera
  • the display area 40 displays the image captured by the first camera.
  • the display contents of the display area 40 and the display area 410 are switched with each other, that is, the display area 40 displays the image captured by the second camera, and the display area 410 displays the image captured by the first camera.
  • the user operation is not limited to a click operation on the display area 410, but may also be a click operation on the conversion icon 405, and may also be a drag operation, a double-click operation, a gesture operation, and the like on the display area 410 or the display area 40.
  • the size of the display area 410 may be different from that shown in FIG. 5 above.
  • the area covered by the display area 410 may be larger or smaller than the area covered by the display area 410 in FIG. 5 described above.
  • the shape, position and size of the display area 410 may be set by default by the system.
  • the system can set the display area 410 as a vertical rectangular interface in the lower right area of the display screen by default.
  • the shape, position and size of the display area 410 may also be determined in real time according to user operations.
  • the size and position of the display area 410 may be related to the position where the user stops sliding with two fingers on the display screen. For example, the larger the distance between the positions where the user stops sliding two fingers on the display screen, the larger the display area 410 is.
  • the area where the display area 410 is located may cover the track of the user's two-finger sliding.
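  • A simple sketch of deriving the floating window from the two finger end-points (the mapping is an assumed, minimal one; the patent only says a larger spread yields a larger area):

```python
def region_from_two_finger_slide(p1: tuple[int, int], p2: tuple[int, int]):
    # The floating display area 410 spans the two finger end-points;
    # a larger spread yields a larger window.
    x, y = min(p1[0], p2[0]), min(p1[1], p2[1])
    w, h = abs(p1[0] - p2[0]), abs(p1[1] - p2[1])
    return x, y, w, h

print(region_from_two_finger_slide((100, 200), (400, 600)))  # -> (100, 200, 300, 400)
```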
  • FIG. 6 includes a display area 41 and a display area 42 , and the display area 41 and the display area 42 are respectively displayed on the display screen of the electronic device in a split-screen manner.
  • the display area 41 can display the image collected by the second camera in real time
  • the display area 42 can display the image collected by the first camera in real time.
  • the images of the display area 41 and the display area 42 correspond to each other, that is, the display content of the display area 42 is a microscopic image of the display content of the display area 41.
  • as the images captured by the second camera and the first camera constantly change, the display contents of the display area 41 and the display area 42 also constantly change.
  • FIG. 6 also includes a position frame 51 , wherein the display content of the display area 42 is a microscopic image in the position frame 51 . As the images in the position frame 51 are different, the display content of the display area 42 also changes accordingly.
  • the sizes of the display area 41 and the display area 42 may be determined in real time according to user operations. For example, the user can drag the split-screen line between the display area 41 and the display area 42 up and down: when the user drags the split-screen line upward, the display area 41 becomes shorter and the display area 42 becomes longer; when the user drags the split-screen line downward, the display area 41 becomes longer and the display area 42 becomes shorter.
  • the display contents of the display area 41 and the display area 42 can be switched with each other according to user operations.
  • the above embodiment provides a possible application interface of the micro mode
  • the electronic device, in response to the user operation 307, collects a micro image through the first camera and displays the application interface of the micro mode 303A (as shown in part c in FIG. 2); or the electronic device collects a micro image through the first camera and displays the application interface of the micro mode 207 (as shown in part b in FIG. 3); or the electronic device displays the application interface of FIG. 5 or FIG. 6;
  • the electronic device can obtain the microscopic information of the first object through the first camera, and then infer the sanitary condition of the first object in combination with the type of the first object.
  • the following describes how the electronic device determines the type of the first object.
  • the electronic device collects the image of the first object through the second camera, and automatically detects the type of the first object in the collected image.
  • the display area 40 of the left drawing of Fig. 7a displays the image captured by the second camera.
  • Cursor 70 and cursor 71 respectively indicate objects (peaches and hands) in the image detected by the electronic device, where cursor 70 is displayed in the display area of the peach and cursor 71 is displayed in the display area of the hand.
  • the number of cursors depends on the number of objects in the image detected by the electronic device, and the description of the recognized object type is displayed near the cursor.
  • the text content near the cursor 70 is "peaches", and the text content near the cursor 71 is "hand".
  • if the object indicated by the cursor 70 is actually not a peach but an apple, the user can perform a click operation on the display area where "peaches" is displayed.
  • when detecting a user operation acting on the display area of the text content describing an object, the electronic device displays an input window on the application interface, prompting the user to input the object to be detected, thereby achieving the effect of correcting the type of object recognized by the electronic device.
  • the electronic device displays an input window 50 on the application interface, prompting the user to input the desired detected object.
  • the user can input the type of the object in the input window 50 .
  • the image captured by the second camera is an apple.
  • if the electronic device recognizes the type of the object in the image as a peach, the user can click on the display area with the text content "peach" and enter in the input window 50 that the type of the object is apple.
  • the electronic device receives the text input by the user and corrects the type of the object in the image to be an apple. At this time, the text content "apple" is displayed near the cursor 70 .
  • the input window 50 may further include function icons such as re-identification 501, voice input 502 and confirmation 503, wherein:
  • Re-identification 501: when a user operation acting on the re-identification 501 is detected, in response to the operation the electronic device recognizes the type of the object in the display area 40 again; the new recognition result differs from the previous one and is displayed near the cursor of the object in the image, prompting the user that the electronic device has detected the object and its type.
  • Voice input 502: when a user operation acting on the voice input 502 is detected, in response to the operation the electronic device acquires the audio input by the user and identifies its content; the object described in the audio is the type of the first object.
  • Confirmation 503: when a user operation acting on the confirmation 503 is detected, in response to the operation the electronic device saves the text input by the user in the input window 50; the text is the type of the first object.
  • the input window 50 provides a method for assisting the electronic device to determine the type of the first object.
  • when the recognition result is wrong, the user can correct it by clicking the display area where the text content of the object is displayed, which improves the accuracy of detecting the hygiene status of objects.
  • the present application also provides a way of determining the type of the first object. That is, the electronic device does not need to recognize the scene from the image captured by the second camera, but directly determines the type of the first object to be detected through text information, voice information, etc. input by the user.
  • the display area 40 of the left drawing of Fig. 7b displays the image captured by the first camera.
  • the application interface may further include: a manual input icon 73 , a gallery icon 701 , and a confirmation icon 702 .
  • the gallery icon 701 may refer to the related description of the gallery icon 403 in FIG. 3
  • the confirmation icon 702 may refer to the related description of the confirmation icon 404 in FIG. 3 .
  • the manual input icon 73 is used to input the type of the first object.
  • the electronic device displays an input window 51 on the application interface, prompting the user to input the object to be detected.
  • the user can input the type of the object in the input window 51 . For example, if the input text content received by the electronic device is "apple", the electronic device determines that the type of the first object is an apple.
  • the voice input icon and the confirmation icon in the figure on the right side of FIG. 7b may refer to the relevant description of the voice input 502 and the confirmation 503 in FIG. 7a.
  • the electronic device does not need to collect macro images through the second camera, nor to recognize the scene from collected images; it directly determines the type of the first object to be detected through the text information, voice information, etc. input by the user, which saves the resources of the electronic device and improves efficiency.
  • the above-mentioned embodiments provide a method for the electronic device to determine the type of the first object, including detecting the image captured by the second camera or receiving a user operation to determine the type of the first object.
  • the present application provides a method for identifying the sanitary status of an object.
  • An electronic device can obtain microscopic information of the first object through a first camera and infer the sanitary status of the first object in combination with the type of the first object, thereby outputting prompt information about the sanitary status of the first object.
  • when the user clicks the cursor 70 for the apple, the electronic device performs analysis and calculation in combination with the type (apple) and the apple's microscopic information, obtains the hygiene status of the apple, and outputs prompt information.
  • after determining the type of the first object and the microscopic information of the first object, the electronic device obtains the sanitary status of the first object, and outputs prompt information after receiving an instruction to obtain the sanitary status of the first object.
  • the electronic device outputs prompt information.
  • the prompt information may be used to prompt the user of the sanitary condition of the first object.
  • the prompt information can also be used to prompt the user how to improve the hygiene of the object.
  • the prompt information may also be used to prompt the user how to handle the object.
  • the manner in which the electronic device gives a prompt is not limited, and may be text, voice, vibration, an indicator light, and the like.
  • the electronic device may output prompt information in response to the received user operation. Users can choose to view the hygiene status of objects they want to know about. Referring to FIG. 8 , it exemplarily shows a manner in which the electronic device outputs prompt information after receiving a user operation.
  • the display area 40 of part a in FIG. 8 displays the image captured by the second camera, the cursor 70 and the cursor 71 respectively indicate the objects (apple and hand) in the image detected by the electronic device, wherein the cursor 70 is displayed in the display area of the apple, and the cursor 71 is displayed in the display area of the hand.
  • the electronic device outputs a prompt in the display area 40 ("click on the object to check the sanitary status") to inform the user that clicking on an object shows its sanitary status.
  • the display area 40 in part b in FIG. 8 displays the image captured by the first camera and also includes a prompt box 60.
  • the prompt content of the prompt box 60 includes the type and quantity of bacteria (800,000 rod-shaped bacteria and 100,000 Penicillium) and the hygienic condition of the object (the apple is not clean; cleaning is recommended).
  • the display area 40 in part d in FIG. 8 displays the image captured by the first camera and also includes a prompt box 61.
  • the prompt content of the prompt box 61 includes the type and quantity of bacteria (Escherichia coli 800,000, Staphylococcus 300,000, and influenza virus 50,000), and the hygiene status of the object (hands are not clean, it is recommended to wash).
  • after determining the sanitary condition of the first object, the electronic device directly outputs prompt information.
  • the user can know the hygiene status of the object in the fastest time.
  • FIG. 9 exemplarily shows a manner in which the electronic device directly outputs prompt information.
  • the display area 40 of part a in FIG. 9 displays the image captured by the second camera, the cursor 70 and the cursor 71 respectively indicate the objects (apple and hand) in the image detected by the electronic device, wherein the cursor 70 is displayed in the display area of the apple, and the cursor 71 is displayed in the display area of the hand.
  • Part a in FIG. 9 also includes a prompt area 60 and a prompt area 61 .
  • the prompt area 60 and the prompt area 61 respectively describe the hygiene conditions of the apple and the hand.
  • the prompt area 60 outputs the prompt message "The apple is not clean, it is recommended to clean it"
  • the prompt area 61 outputs the prompt message "The hand is not clean, it is recommended to clean it".
  • the number of prompt areas depends on the number of objects whose hygiene status the electronic device has detected in the image: when the electronic device detects the hygiene status of two objects, it outputs two prompt areas; when it detects the hygiene status of three objects, it outputs three prompt areas; and so on.
  • the display area 40 in part b in FIG. 9 displays the image captured by the first camera.
  • Part b in FIG. 9 also includes a prompt area 60 and a prompt area 61 . It will not be repeated here.
  • the output manner of the prompt information is not limited: it can be text (for example, the prompt area 60 in FIG. 8 and FIG. 9), an image, voice, vibration, an indicator light, etc., and the display color of the cursor and text can also indicate the hygiene status.
  • part a in FIG. 9 includes a cursor 70 indicating an apple and a cursor 71 indicating a hand. If the electronic device detects that the apple is unsanitary, the cursor 70 of the apple is displayed in red; if the electronic device detects that the hand is sanitary, the cursor 71 of the hand is displayed in green.
  • the user can learn the hygienic status of the objects in advance and intuitively find the unsanitary object among multiple objects, then check the unsanitary objects in detail and treat them hygienically.
  • the output content of the prompt information is not limited.
  • the output content of the prompt information can include a description of the hygienic state of the object (for example, the object is unsanitary, the object is unclean, the object has a low level of hygiene); suggestions for improving the hygienic state of the object (for example, cleaning, wiping, or heating is recommended); advice on how the object should be handled (for example, discarding is recommended); a description of the impact of bacterial species on sanitation (for example, if food is unsanitary due to excessive Escherichia coli, heating and sterilizing at 100°C is recommended); and the freshness of the object (for example, the apple is not fresh, the banana is spoiled); and so on.
  • the size of the prompting area 60 or the prompting area 61 may be different from those in the above-mentioned FIGS. 8 and 9 .
  • the area covered by the prompt area 60 or the prompt area 61 may be larger or smaller than the corresponding area shown in FIG. 8 and FIG. 9 described above.
  • the shape, position and size of the prompt area 60 or the prompt area 61 may be set by default by the system. It may also be determined in real time according to user operations.
  • the size and position of the prompt area 60 or the prompt area 61 may be related to the position where the user stops sliding with two fingers on the display screen. For example, the larger the distance between the positions where the user stops sliding two fingers on the display screen, the larger the prompt area 60 or the prompt area 61 is.
  • the area where the prompting area 60 or the prompting area 61 is located may cover the track of the user's two-finger sliding.
  • the above-mentioned embodiment provides a related manner for the electronic device to output prompt information of the sanitary condition of the first object, including the output content of the prompt information and the output form of the prompt information.
  • the display content of the display area in the application interface can be saved in response to the user operation received on the shooting control.
  • Users can view microscopic images of objects and prompt information through the gallery.
  • the photographing controls may be, for example, the confirmation icon 404 and the confirmation icon 305 .
  • the display area 90 exemplarily displays an image 81 , an image 82 and an image 83 .
  • the image 81 is the display content of the display area in FIG. 6 .
  • the electronic device detects the user operation on the confirmation icon 404 in FIG. 6 , acquires the display content in the display area in FIG. 6 , and saves it in the gallery.
  • Image 82 is the display content of the display area 40 in part b in FIG. 3
  • the electronic device detects the user operation on the confirmation icon 404 in part b in FIG. 3, obtains the display content of the display area 40 in part b in FIG. 3, and saves it in the gallery.
  • Image 83 is the display content (including the prompt area 60) of the display area 40 in the figure of part b in FIG. 8
  • the electronic device detects the user operation on the confirmation icon in part b in FIG. 8, obtains the display content of the display area 40 in that part, and stores it in the gallery.
  • the application interface shown in FIG. 10 can be entered through the gallery icon 403 .
  • the electronic device collects an image of an object through the second camera, and identifies a scene in the image, where the scene is the type of object in the collected image (eg food, hands, dining table, etc.).
  • the electronic device collects an image of the same object through the first camera, and identifies microscopic information in the image, where the microscopic information includes the type and quantity of bacteria.
  • the sanitation status of the scene can be judged, and intelligent prompts can be given.
  • the bacteria included in the microscopic information include yeast, actinomycetes, edible fungi, and the like.
  • the smart prompts given by the electronic device for the food include that the food is unsanitary, and cleaning is recommended; it also includes that the food is recommended to be heated at a high temperature, the food is recommended to be discarded, and the like.
  • the bacteria present on the first object may be Staphylococcus, Escherichia coli, influenza virus and the like.
  • the smart tips given by electronic devices for hands include unhygienic hands and recommended cleaning; unhygienic hands and hand sanitizers recommended for cleaning, etc.
  • the bacteria present on the first object may be Neisseria meningitidis, Mycobacterium tuberculosis, Hemolytic coccus, Bacillus diphtheriae, Bacillus pertussis, and the like.
  • the smart prompts given by electronic devices for air include poor air quality, and it is recommended to wear a mask; as well as poor air quality, it is recommended to wear a medical mask, etc.
  • image location scene category recognition is a type of image classification technology: a method of judging, from an image, the type of location where the image scene is situated.
  • an existing, mature network framework such as ResNet can be used for training.
  • Places365 is an open source dataset for scene classification, including Places365-standard and Places365-challenge.
  • the training set of Places365-standard has 365 scene categories, where each category has up to 5,000 images.
  • the training set of Places365-challenge has 620 scene categories, where each scene category has up to 40000 images.
  • a model for image location scene category recognition is trained by Places365. Convolutional neural networks trained on the Places365 database can be used for scene recognition as well as general deep scene features for visual recognition.
  • the type of the image scene can be determined from the second image through the image location scene category recognition technology.
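To make the scene-recognition step concrete, the following is a minimal Python sketch of Places365-style scene classification with a ResNet backbone. The checkpoint and category-file names are illustrative assumptions; published Places365 checkpoints may store weights under different key names and require renaming before loading.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Assumed local files (illustrative names): a ResNet-18 fine-tuned on
# Places365 (fc layer resized to 365 classes) and the category list.
model = models.resnet18(num_classes=365)
state = torch.load("resnet18_places365.pth", map_location="cpu")
model.load_state_dict(state)  # real checkpoints may need key renaming
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

with open("categories_places365.txt") as f:
    classes = [line.strip().split(" ")[0] for line in f]

img = preprocess(Image.open("scene.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]

top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{classes[int(idx)]}: {p.item():.3f}")  # top-5 scene categories
```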
  • Object detection is to find all objects of interest in the image and determine the location and size of the objects.
  • the identification process includes classification, location, detection, and segmentation.
  • Figure 11 shows the network structure of YOLO v3.
  • the structure of the YOLOv3 network specifically includes:
  • darknet-53 (without FC layer): the 53 in darknet-53 represents the number of convolutional layers plus the fully connected layer in the darknet network; "without FC layer" means that only the first 52 layers of darknet-53 are used, with the fully connected layer removed.
  • Input layer: 416*416*3 means that the input image is 416*416 pixels with 3 channels.
  • DBL: Darknetconv2d_BN_Leaky, the basic component of yolo_v3, namely convolution + BN + Leaky ReLU, used for feature extraction from images.
  • resn: n represents a number (res1, res2, ..., res8, etc.) indicating how many res_units are contained in this res_block. The input and output of a res_unit are generally consistent in dimension; apart from computing the residual addition, no other operations are performed.
  • Concat: tensor concatenation, which splices the upsampled darknet intermediate layer with a later layer. Concatenation differs from the add operation in the residual layer: concatenation expands the tensor dimension, while add sums directly without changing it.
  • Output layer: includes 3 prediction paths; the depths of y1, y2 and y3 are all 255, and their side lengths follow the ratio 13:26:52.
  • the above YOLOv3 network is further described below, taking a 416*416*3 input image as an example.
  • Y1 layer: the input is a 13*13 feature map with 1024 channels in total. After a series of convolution operations, the size of the feature map remains unchanged, but the number of channels is finally reduced to 75. The final output is a 13*13, 75-channel feature map, on which classification and position regression are performed.
  • Y2 layer: the 13*13, 512-channel feature map of the 79th layer is convolved to generate a 13*13, 256-channel feature map, which is then upsampled to generate a 26*26, 256-channel feature map; this is merged with the 26*26, 512-channel mesoscale feature map of the 61st layer. After a series of convolution operations, the size of the feature map remains unchanged, but the number of channels is finally reduced to 75. The final output is a 26*26, 75-channel feature map, on which classification and position regression are performed.
  • Y3 layer: the 26*26, 256-channel feature map of the 91st layer is convolved to generate a 26*26, 128-channel feature map, which is then upsampled to generate a 52*52, 128-channel feature map; this is merged with an earlier layer's 52*52, 256-channel mesoscale feature map. After a series of convolution operations, the size of the feature map remains unchanged, but the number of channels is finally reduced to 75. The final output is a 52*52, 75-channel feature map, on which classification and position regression are performed.
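As an illustration of the components named above, here is a minimal PyTorch sketch of the DBL block (convolution + BN + Leaky ReLU), a res_unit with its additive shortcut, and the concat operation that expands the tensor's channel dimension. It is a structural sketch only, not a complete YOLO v3 implementation.

```python
import torch
import torch.nn as nn

class DBL(nn.Module):
    """Darknetconv2d_BN_Leaky: convolution + batch norm + Leaky ReLU."""
    def __init__(self, c_in, c_out, k, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResUnit(nn.Module):
    """res_unit: 1x1 then 3x3 DBL with an additive shortcut."""
    def __init__(self, c):
        super().__init__()
        self.block = nn.Sequential(DBL(c, c // 2, 1), DBL(c // 2, c, 3))

    def forward(self, x):
        # add: sums directly, tensor dimensions are unchanged
        return x + self.block(x)

x = torch.randn(1, 256, 26, 26)
up = torch.randn(1, 128, 26, 26)    # e.g. an upsampled deeper feature map
merged = torch.cat([x, up], dim=1)  # concat: expands channels to 384
print(ResUnit(256)(x).shape, merged.shape)
```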
  • This application uses deep learning-based target detection technology: an ordinary camera performs scene recognition, and a microscopic camera performs microbial recognition.
  • the identification processes of the micro camera and the ordinary camera are the same.
  • the identification process mainly includes the following steps. First, information acquisition: optical information is converted into electrical information through the camera sensor; that is, the basic information of the photographed object is acquired and converted into information that machines can recognize. Second, preprocessing: operations such as denoising, smoothing, and transformation are performed on the captured image to enhance its important features. Third, feature extraction and selection: useful features are extracted and selected from the preprocessed image. Fourth, classifier design: a recognition rule is obtained through training, and a method of feature classification is derived from it. Fifth, classification decision: the recognized object is classified in the feature space, so that its specific category in the shooting scene is recognized.
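The five steps can be condensed into a short Python sketch; the `classifier` callable and the preprocessing parameters are assumptions standing in for the trained recognition rule described above.

```python
import numpy as np
import cv2  # assumed available for preprocessing

def recognize(frame_bgr, classifier, class_names):
    # 1. information acquisition: frame_bgr is the digitized sensor output
    img = frame_bgr
    # 2. preprocessing: denoise/smooth and normalize to enhance features
    img = cv2.GaussianBlur(img, (3, 3), 0)
    img = cv2.resize(img, (224, 224)).astype(np.float32) / 255.0
    # 3.+4. feature extraction and the trained classifier (recognition rule)
    scores = classifier(img)  # assumed callable returning per-class scores
    # 5. classification decision: pick the category in feature space
    return class_names[int(np.argmax(scores))]
```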
  • a knowledge graph can be understood as a meshed knowledge base formed by entities with attributes linked through relationships, including nodes and connecting lines. Nodes are entities, and connecting lines are association rules.
  • the knowledge graph connects various trivial and fragmented objective knowledge to support comprehensive knowledge retrieval, auxiliary decision-making and intelligent inference.
  • This application correlates macroscopic information and microscopic information in the form of a knowledge graph, which allows users of this function to quickly and conveniently understand the bacterial situation around macroscopic objects and, combined with the suggestions given by intelligent inference, to improve their self-protection awareness.
  • the practicability of electronic equipment is improved.
  • after acquiring the type of the first object, the electronic device acquires the knowledge graph of that type from the knowledge graph. For example, FIG. 12 shows a knowledge graph with the three nodes "unclean hands", "unclean food" and "unclean apples" as central nodes.
  • the bacteria that can cause unclean apples include Bacillus, Rhodotorula, Penicillium and other bacteria, where the connecting line represents an association rule.
  • the association rule can be that when bacteria exist, it means that the food is not clean, and the association rule can also be that when the number of bacteria exceeds a threshold, it means that the food is not clean, and so on.
  • when the electronic device obtains that the type of the first object is an apple, the hygienic status of the apple can be judged through the knowledge graph of the apple in combination with the types and quantities of microorganisms detected by the electronic device.
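One simple way to encode such association rules is a mapping from object type to per-bacterium thresholds, where exceeding any threshold marks the object as unclean. All species names and threshold values below are purely illustrative, not values from this application.

```python
# Hypothetical knowledge graph: object type -> {bacterium: count threshold}
KNOWLEDGE_GRAPH = {
    "apple": {"Bacillus": 500_000, "Rhodotorula": 200_000,
              "Penicillium": 100_000},
    "hand":  {"Staphylococcus": 200_000, "Escherichia coli": 500_000,
              "influenza virus": 10_000},
}

def is_unclean(object_type: str, counts: dict) -> bool:
    """Apply the association rule: any count above threshold => unclean."""
    rules = KNOWLEDGE_GRAPH.get(object_type, {})
    return any(counts.get(b, 0) > t for b, t in rules.items())

print(is_unclean("apple", {"Penicillium": 150_000}))  # True
```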
  • FIG. 13 is a schematic flowchart of a method for identifying the sanitary status of an object provided by an embodiment of the present application. As shown in FIG. 13 , the method for identifying the sanitary condition of an object may include the following steps.
  • Step S101A Capture a second image through a second camera.
  • the electronic device collects the second image through the second camera.
  • the second image is a macro image
  • the second camera is one or more cameras used to collect macro images
  • the second image includes the first object.
  • Step S102A Determine the type of the first object in the second image.
  • the electronic device determines the type of the first object according to the second image.
  • the type of the first object may be a broad category such as food and insects, or a fine category such as apples, bananas, grapes, bread, ants, and hands, or scenes such as air, rivers and seas.
  • the electronic device captures an image of an apple through the second camera, and according to the captured image, the electronic device determines that the image includes an object, and identifies it as an apple according to the image, that is, the type of the first object is an apple.
  • the electronic device captures an image of a hand holding an apple through the second camera, and according to the captured image, the electronic device determines that the image includes two objects, which are a first object and a second object, respectively.
  • the image is identified as an apple and a hand.
  • the type of the first object can be an apple, and the type of the first object can also be a hand; when the type of the first object is an apple, the type of the second object is a hand; when the type of the first object is a hand , then the type of the second object is apple.
  • the target object is the first object
  • the electronic device recognizes the type of the first object according to the image recognition technology
  • the first object can be determined according to a preset rule or a received user operation, and the type of the first object can be identified according to the image recognition technology.
  • a first object is determined from among a plurality of target objects according to a preset rule.
  • the preset rule can be that the target object occupying the largest proportion of the whole image is the first object; or that the target object occupying the largest proportion of the center of the image is the first object; or that all target objects in the center of the image are the first object; or that all target objects in the image frame are the first object; and so on.
  • the electronic device determines a first object among the multiple target objects in the second image according to a preset rule, and after determining the first object, identifies the type of the first object according to the image.
  • the electronic device collects a second image through the second camera, and the second image includes four target objects, namely, an apple, a hand, a banana, and a table.
  • if the preset rule is that the target object occupying the largest proportion of the whole image is the first object, the electronic device determines that the first object is the table; if the preset rule is that the target object occupying the largest proportion of the center of the image is the first object, the electronic device determines that the first object is the apple.
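The two preset rules used in this example can be expressed as selection functions over detected bounding boxes; the `Detection` structure and all pixel values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple  # (x, y, w, h) in pixels

def largest_object(dets):
    # rule 1: the target occupying the largest share of the whole image
    return max(dets, key=lambda d: d.box[2] * d.box[3])

def most_central_object(dets, img_w, img_h):
    # rule 2: the target whose box centre is nearest the image centre
    cx, cy = img_w / 2, img_h / 2
    def dist(d):
        x, y, w, h = d.box
        return ((x + w / 2 - cx) ** 2 + (y + h / 2 - cy) ** 2) ** 0.5
    return min(dets, key=dist)

dets = [Detection("apple", (400, 300, 180, 180)),
        Detection("table", (0, 200, 960, 520))]
print(largest_object(dets).label)                 # table
print(most_central_object(dets, 960, 720).label)  # apple
```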
  • the first object is determined according to the detected user operation.
  • the user operation includes a user operation of inputting voice/text/image.
  • the electronic device detects the user operation and determines the first object or the type of the first object.
  • the user can draw a preset graphic on the image and select the first object.
  • the image captured by the second camera is an image of a hand holding an apple
  • the user draws a closed figure on the image, indicating that the object within the range covered by the closed figure area is the first object.
  • if the object in the closed graphic drawn by the user includes an apple, the first object is the apple;
  • if the object in the closed graphic drawn by the user includes a hand, the first object is the hand;
  • if the objects in the closed graphic drawn by the user include both the apple and the hand, the first object and the second object are the apple and the hand.
  • the user can correct the type of the first object. For example, referring to Fig. 7a, in the left side of Fig. 7a, the user can change the type of the object by clicking on the display area of "peaches". When the electronic device receives a user operation for the display area of the "peaches", it means that the object "peaches" is modified. As shown in the right side of Figure 7a, the electronic device prompts the user to input the object to be detected, and the electronic device detects the "apple” input by the user in the text box, and determines that the type of the object is an apple.
  • the user may also modify the type of the first object to be an apple by means of voice input.
  • the electronic device detects the "apple" input by the user through the voice, it is determined that the type of the object is an apple.
  • the user can also trigger the re-identification function of the electronic device, making it re-identify the type of the object in the image and thereby change the type of the object.
  • when the electronic device receives a user operation on the display area of "peaches", it means that the object "peaches" is to be modified.
  • the electronic device detects the user operation on the re-identification icon and re-identifies "peaches"; the newly identified type is different from peach.
  • the method of determining the type of the first object is not limited to the above steps S101A and S102A, and the method of determining the type of the second object is similar to the method of determining the type of the first object.
  • the electronic device does not need to use the second camera to acquire macro images, but can determine the type of the first object according to the detected user operation.
  • User operations include text input, image input, and voice input. For example, referring to Fig. 7b, the electronic device detects "apple" input by the user in the text box, and determines that the type of the first object is an apple.
  • the electronic device may also obtain the type of the object in the image input by the user as the type of the first object according to image recognition according to the image input by the user, and the image may come from a gallery or a network.
  • the electronic device may also acquire, according to the voice input by the user, according to voice recognition, the type of the object in the voice input by the user as the type of the first object.
  • the electronic device does not need to use the second camera to acquire macro images, but can instead acquire the macro image of the first object using the micro camera, and determine the type of the first object from it.
  • the magnification of the microscope camera can be between 1 and 400 times. When the magnification is between 1 and 5 times, macroscopic images can be obtained; when the magnification is between 200 and 400 times, microscopic images can be obtained. By automatically converting the magnification, the microscopic camera collects both the macroscopic image and the microscopic image of the first object, and the electronic device identifies the type of the first object in the macroscopic image.
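A sketch of this automatic magnification conversion, assuming a hypothetical camera object with `set_magnification()` and `grab()` methods (not an API defined in this application):

```python
def capture_macro_and_micro(camera):
    """Collect a macroscopic and a microscopic image with one micro camera."""
    camera.set_magnification(2)    # roughly 1-5x: macroscopic image
    macro = camera.grab()          # used to identify the object type
    camera.set_magnification(300)  # roughly 200-400x: microscopic image
    micro = camera.grab()          # used to identify bacteria
    return macro, micro
```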
  • Step S101B Collect the first image through the first camera.
  • the electronic device collects the first image through the first camera.
  • the first image is a microscopic image
  • the first camera is one or more cameras that collect microscopic images
  • the first image includes bacteria present on the first object.
  • the electronic device displays a first user interface that includes a first icon; when the electronic device receives a user operation acting on the first icon, in response to that operation the electronic device captures the first image through the first camera.
  • the first user interface may be the interface of part b in FIG. 2
  • the first icon is the icon of the micro mode 303A.
  • the electronic device collects the first image through the first camera.
  • the application interface is shown in part c in FIG. 2 .
  • the electronic device displays a preview image of the data collected by the first camera in real time on the display interface.
  • the preview image is a microscopic image and shows bacteria existing on the photographed object.
  • the electronic device displays a main interface, and the main interface includes multiple application icons, wherein the multiple application icons include a first application icon; when the electronic device receives a user operation for the first application icon, the electronic device The device collects the first image through the first camera.
  • the main interface is the interface of part a in FIG. 3
  • the first application icon is the application icon of the micro mode 207 .
  • the application interface is shown in part b in FIG. 3
  • the electronic device displays the image captured by the first camera on the display interface, and the image is a microscopic image showing bacteria existing on the photographed object.
  • Step S102B Determine the first information of the first object in the first image.
  • the electronic device determines the first information of the first object according to the first image, wherein the first information includes the condition of bacteria existing on the first object, including the type and quantity of the bacteria.
  • the electronic device collects the first image of the first object through the first camera and determines the types and quantities of bacteria on the first object according to a target detection algorithm (e.g., the above-mentioned YOLO v3 algorithm). For example, according to the YOLO v3 algorithm, the electronic device determines that the bacteria on the first object in the first image include bacteria 1, bacteria 2, bacteria 3, and bacteria 4, as well as the quantity of each.
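Given per-detection outputs from a detector such as YOLO v3, the species counts in the first information can be obtained by tallying detections above a confidence threshold; the detection format here is an assumption for illustration.

```python
from collections import Counter

def count_bacteria(detections, conf_threshold=0.5):
    # detections: iterable of (species_name, confidence) pairs, e.g. one
    # pair per box predicted on the microscopic image (assumed format)
    return Counter(name for name, conf in detections if conf >= conf_threshold)

print(count_bacteria([("bacteria 1", 0.9), ("bacteria 1", 0.7),
                      ("bacteria 2", 0.6), ("bacteria 3", 0.4)]))
# Counter({'bacteria 1': 2, 'bacteria 2': 1}) - low-confidence box dropped
```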
  • the electronic device may determine the first information of the first object in the first image according to the knowledge graph of the first object (specifically, the knowledge graph corresponding to the type of the first object).
  • the knowledge graph of the first object includes the common bacterial species corresponding to the first object, so that it can serve as a reference when determining the bacterial species on the object from the object's microscopic image (for example, detected bacteria are preferentially compared against the object's common bacterial species), improving the efficiency of bacterial identification.
  • the electronic device recognizes that the type of the first object is a hand, and, combining the knowledge graph of the hand, obtains that common bacteria distributed on the hand include Escherichia coli, Streptococcus, and Pseudomonas aeruginosa.
  • the electronic device can preferentially compare the detected bacteria with common bacteria on the hand such as Escherichia coli, Streptococcus, and Pseudomonas aeruginosa, and can, for example, determine that the bacterial species is Escherichia coli without comparing against other, uncommon bacterial species, which improves the efficiency of identifying bacterial species.
  • the reference function of the knowledge graph can also be embodied in the following example: the electronic device recognizes that the type of the first object is the hand and recognizes the types of bacteria on the hand according to the target detection algorithm. Because some bacteria, such as Salmonella and Escherichia coli, are very similar in appearance, it is difficult for the electronic device to identify them accurately by appearance alone.
  • if the electronic device, combining the knowledge graph of the hand, concludes that the common bacteria distributed on the hand include Escherichia coli but not Salmonella, then when the probability of recognizing certain bacteria as Salmonella is similar to the probability of Escherichia coli (for example, 51% for Salmonella and 49% for Escherichia coli), Escherichia coli is identified preferentially, which improves the efficiency and accuracy of identifying bacterial species.
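One way to realize this preference is to down-weight species that the object's knowledge graph does not list as common before taking the most likely candidate; the penalty factor is an illustrative assumption.

```python
def rerank_with_prior(detector_probs: dict, common_on_object: set,
                      penalty: float = 0.5) -> str:
    """Break near-ties between look-alike species using the knowledge graph."""
    adjusted = {s: p * (1.0 if s in common_on_object else penalty)
                for s, p in detector_probs.items()}
    return max(adjusted, key=adjusted.get)

probs = {"Salmonella": 0.51, "Escherichia coli": 0.49}
print(rerank_with_prior(probs, {"Escherichia coli", "Streptococcus"}))
# -> Escherichia coli: the prior resolves the 51% vs 49% near-tie
```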
  • the electronic device may receive a bacterial species input by the user and identify and screen that species in a targeted manner. For example, if a user wants to detect the distribution of Escherichia coli on an apple, the user enters the bacterial species name Escherichia coli in the input box; the electronic device, in response to the user input, performs targeted identification of Escherichia coli in the microscopic image of the apple and obtains the quantity and distribution of Escherichia coli.
  • the first information of the first object includes the quantity and distribution of Escherichia coli.
  • since Escherichia coli is not a common bacterium on apples, if the electronic device only obtained the common bacteria on apples from the apple's knowledge graph and identified those common bacteria, it might not meet the user's requirements.
  • the specific bacterial species that the user is concerned about can be determined by receiving the user's input for targeted identification (whether that specific species is present on the object can be identified preferentially), so as not to be limited by the knowledge graph of the object and to better meet the individual needs of different users.
  • the reference function of the knowledge graph can also be embodied in the following examples:
  • the knowledge graph may only include the common bacterial species corresponding to an object; for some bacteria that are not strongly associated with objects, including newly emerged bacteria or a type of bacteria that has recently attracted public attention, the electronic device can determine the type of bacteria the user is concerned about according to the received user operation (options for newly emerging bacteria can be provided on the interface).
  • the bacteria that the user is particularly concerned about can be preferentially screened (further, the bacterial species to be screened may also include common bacteria on the object).
  • the prompt information output by the electronic device may include a prompt of whether the bacteria concerned by the user exist.
  • the electronic device recognizes that the type of the first object is the hand, and, combining the knowledge graph of the hand, obtains that common bacteria distributed on the hand include Escherichia coli, Streptococcus, Pseudomonas aeruginosa, and so on.
  • the name of the bacteria to be detected that the electronic device receives from the user is Salmonella. Since Salmonella and Escherichia coli are very similar in appearance, it is difficult for the electronic device to accurately identify the bacterial species on the hand during the identification process.
  • the electronic device can output probability information about the existence of Salmonella (further, it can also inform the user of the probability that the bacteria present on the object are Escherichia coli).
  • Step S103 Determine the sanitary condition of the first object according to the type of the first object and the first information of the first object.
  • the electronic device After acquiring the type of the first object and the first information of the first object, the electronic device determines the sanitary status of the first object according to the knowledge map of the first object and the bacteria existing on the first object.
  • the knowledge graph of the first object indicates the association relationship between at least one type of bacteria and the sanitary status of the type of the first object.
  • the bacteria existing on the first object may be yeast, actinomycetes, edible fungi, etc.
  • the bacteria existing on the first object It can be Staphylococcus, Escherichia coli, influenza virus, etc.
  • the bacteria present on the first object can be Neisseria meningitidis, Mycobacterium tuberculosis, Hemolytic coccus, Diphtheria bacillus, Pertussis Bacillus, etc.
  • the electronic device acquires the knowledge map of the hygiene status of the first object or an object of the same type as the first object (the second object) in the knowledge map.
  • the type of the first object obtained by the electronic device is "apple”
  • the knowledge graph of the unclean apple is obtained according to the type of the first object.
  • the knowledge graph of unclean apples indicates the relationship between unclean apples and bacteria.
  • the lower left part of FIG. 12 exemplarily shows the knowledge graph of apples.
  • the bacteria associated with unclean apples include Bacillus, Penicillium, Rhodotorula, etc.; combining the electronic device's identification of the bacteria on the apple, the sanitation status of the apple is determined according to the association rules.
  • the above association rules include: the number of the first bacteria exceeding a first threshold results in the object being unclean; the number of the second bacteria exceeding a second threshold results in the object being unclean; and so on.
  • the sanitation status can be determined from the final score by way of scoring statistics. For example, when the type of the first object is a hand holding an apple, in the knowledge graph of the hand the bacteria related to unclean hands include Staphylococcus, Escherichia coli, influenza virus, and so on. When the number of Staphylococcus exceeds the preset first threshold, it is determined that Staphylococcus causes the hands to be unclean, and a score of 5 points is recorded; when Escherichia coli likewise exceeds its threshold, another 5 points are recorded, making the statistical score 10 points. Scores are accumulated in turn for the bacteria related to unclean hands, and the hygiene status of the hands is determined according to the final score.
  • when the number of the first bacteria exceeds the first threshold, the greater the number of the first bacteria, the deeper its influence on the sanitary condition and the greater its calculation weight.
  • bacteria related to unclean hands include Staphylococcus, Escherichia coli, influenza virus, and the like.
  • when the number of staphylococci exceeds the preset first threshold, it is determined that Staphylococcus causes the hands to be unclean, and the score for unclean hands due to Staphylococcus is 5 points; when the number of staphylococci is even larger, the score attributed to Staphylococcus rises, for example to 10 points.
  • in the knowledge graph of vegetables, bacteria related to unclean vegetables include mold, rod-shaped bacteria, Salmonella, Shigella, Staphylococcus aureus, and the like. Since Salmonella, Shigella, and Staphylococcus aureus are pathogenic bacteria, pathogenic bacteria are given the highest priority: once a pathogenic bacterium is detected, the vegetables are judged unsanitary.
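The scoring statistics and the pathogen priority can be combined in one small routine; every threshold, score, and the doubling rule below is an illustrative assumption rather than a value from this application.

```python
# Hypothetical rule table: per-bacterium threshold, base score, pathogen flag.
RULES = {
    "Staphylococcus":   {"threshold": 200_000, "score": 5, "pathogenic": False},
    "Escherichia coli": {"threshold": 500_000, "score": 5, "pathogenic": False},
    "Salmonella":       {"threshold": 0,       "score": 10, "pathogenic": True},
}

def hygiene_status(counts: dict, unclean_at: int = 10) -> str:
    total = 0
    for name, n in counts.items():
        rule = RULES.get(name)
        if rule is None or n <= rule["threshold"]:
            continue
        if rule["pathogenic"]:
            return "unsanitary"  # pathogenic bacteria have the highest priority
        # the larger the count, the greater the weight (double past 2x threshold)
        weight = 2 if n > 2 * rule["threshold"] else 1
        total += rule["score"] * weight
    return "unsanitary" if total >= unclean_at else "sanitary"

print(hygiene_status({"Staphylococcus": 450_000, "Escherichia coli": 600_000}))
# -> unsanitary (5*2 + 5 = 15 points)
```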
  • after judging that the sanitary condition of the first object is unsanitary, the electronic device obtains the distribution of bacteria on the first object according to the first image of the first object, and then determines, according to the distribution of bacteria, the specific unsanitary areas of the first object. For example, in a possible implementation, the first object is a hand; the electronic device can divide the hand image into regions and evaluate the hygienic condition of the bacteria in each region, so as to locate the specific unsanitary areas of the hand. That is, when outputting the prompt information, the electronic device can indicate on the macroscopic image which areas are unsanitary.
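Region-wise evaluation can be sketched by binning detected bacteria positions into a grid over the macroscopic image and flagging cells whose counts exceed a per-cell limit; the grid size and limit are illustrative.

```python
import numpy as np

def unsanitary_regions(bacteria_xy, img_w, img_h, grid=3, per_cell=50):
    """Return (row, col) grid cells whose bacteria count exceeds per_cell."""
    counts = np.zeros((grid, grid), dtype=int)
    for x, y in bacteria_xy:
        row = min(int(y * grid / img_h), grid - 1)
        col = min(int(x * grid / img_w), grid - 1)
        counts[row, col] += 1
    return [(r, c) for r in range(grid) for c in range(grid)
            if counts[r, c] > per_cell]
```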
  • the output prompt content can also indicate the hygiene status of the object from different angles.
  • when the electronic device obtains that the type of the first object is an apple, it can acquire, according to that type, the knowledge graph of "the apple is not clean", the knowledge graph of "the apple is rotten", and the knowledge graph of "the apple is not fresh".
  • combining these three knowledge graphs with the bacteria the electronic device identifies as present on the apple, it can be determined, according to the association rules, whether the sanitation status of the apple is unclean, rotten, or not fresh. It will be appreciated that three is an exemplary number; in practice there may be more or fewer.
  • the electronic device acquires the type of the first object as an apple, and acquires the knowledge graph of "the apple is not clean", the knowledge graph of "the apple is rotten", and the knowledge graph of "the apple is not fresh" according to the type of the first object.
  • the bacteria existing on the first object include bacteria 1, bacteria 2, bacteria 3, bacteria 4, and so on.
  • the first information may further include information such as texture, pores, and color of the object.
  • the freshness of the object can be determined by analyzing information such as texture, pores, and color on the first object.
  • Step S104 Output prompt information to indicate the sanitary condition of the first object.
  • the electronic device After the electronic device determines the sanitary status of the first object, it outputs prompt information to indicate the sanitary status of the first object.
  • after determining the type of the first object and the first information of the first object, the electronic device obtains the sanitary status of the first object and outputs prompt information after receiving an instruction to obtain that status. Exemplarily, as shown in part a in FIG. 8, when the user clicks the cursor 70 for the apple, the electronic device outputs prompt information.
  • when the user clicks the cursor 70 for the apple, the electronic device performs analysis and calculation in combination with the type (apple) and the first information of the apple to obtain the hygiene status of the apple, and outputs prompt information.
  • the electronic device displays a macro image or a micro image of the first object, and the prompt information is displayed on the macro image or micro image of the first object.
  • the electronic device displays macroscopic images of the first object and the second object.
  • the electronic device acquires the user operation on the display area for the first object, it outputs the first prompt information indicating the sanitary condition of the first object;
  • the electronic device acquires the user operation on the display area for the second object, it outputs the indication Second prompt information about the sanitary status of the second object.
  • the electronic device receives a user operation (such as a click) for the apple cursor, and in response to the click operation, the display area in part b in FIG. 8 displays the image captured by the first camera , and output prompt information about the health status of apples.
  • the electronic device receives a user operation (such as a click) for the hand cursor, and in response to the click operation, the display area in part d in FIG. 8 displays the image captured by the first camera, and outputs Information about hand hygiene.
  • the output mode of the prompt information may also be a direct output mode.
  • the electronic device After determining the sanitary condition of the first object, the electronic device outputs prompt information on the image of the first object. Referring to FIG. 9 , the electronic device in part a in FIG. 9 outputs prompt information of the apple and the hand on the macro image of the first object, and the electronic device in part b in FIG. 9 outputs the apple and the hand on the microscopic image of the first object. Hand hints.
  • the prompting method of the prompt information may be text output (for example, the prompt area 60 or the prompt area 61 in FIG. 8 or FIG. 9), or images, voices, vibrations, indicator lights, etc.; the hygienic condition can also be indicated by the display color of the cursor and text.
  • part a in FIG. 9 includes a cursor 70 indicating an apple and a cursor 71 indicating a hand. If the electronic device detects that the apple is unsanitary, the cursor 70 of the apple is displayed in red; if the electronic device detects that the hand is sanitary, the cursor 71 of the hand is displayed in green.
  • the output content of the prompt information is not limited.
  • the output content of the prompt information can include a description of the hygienic state of the object, such as the object being unsanitary, unclean, or of a low degree of hygiene; suggestions for improving the hygienic state of the object, such as recommending cleaning, wiping, or heating; advice on how to handle the object, such as recommending discarding; a description of the impact of bacterial species on sanitation, such as food being unsanitary due to excessive Escherichia coli, with heating and sterilizing at 100 degrees Celsius recommended; and a description of the freshness of the object, such as the apple being not fresh or the banana being spoiled; and so on.
  • the electronic device may, in any application interface, respond to a user operation received on the photographing control, photograph and save the display content of the display area in the application interface. Users can view microscopic images of objects and prompt information through the gallery.
  • the electronic device collects a microscopic image through the first camera and displays it on the display screen, so that the user can view and photograph the microscopic world. Based on the collected microscopic image, the electronic device can identify the types of bacteria in it, showing the user the forms and names of the bacteria on the object. The user can also operate the electronic device so that it determines the name of the bacterial type the user wants to detect and identifies that type in a targeted manner. Finally, the electronic device can analyze the sanitation status of the object based on the identified species and quantities of bacteria, prompt the user of that status, and give corresponding sanitation suggestions.
  • FIG. 14 shows a schematic structural diagram of the electronic device 100 .
  • the electronic device 100 may have more or fewer components than those shown in the figures, may combine two or more components, or may have different component configurations.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • the processor 110 may be configured to determine the type of the first object and the first information of the first object, and to determine the health condition of the first object according to them. The hygiene status may be expressed in the form of scores, where a higher score means a more sanitary object; it may also be expressed as a text description, such as sanitary, unsanitary, or very sanitary. That is to say, the user can conveniently observe the microscopic image of an object in daily life and determine the distribution of microorganisms on the object through the microscopic image, so as to obtain hygiene advice for the object.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, they can be called directly from the memory. Repeated accesses are avoided and the waiting time of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt an interface connection manner different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video, and may include a mid-focus camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a TOF (time of flight) depth camera, a movie camera, a macro camera, etc.
  • Electronic devices can be equipped with dual cameras (two cameras), triple cameras (three cameras), quad cameras (four cameras), five cameras (five cameras) or even six cameras (six cameras) for different functional requirements.
  • an optical image of the object is generated through the lens and projected onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the camera 193 may further include a microscope camera.
  • the microscope camera is used to collect microscopic images.
  • the microscope camera has a magnification sufficient to observe bacteria.
  • the microscopic image of the object is collected by a microscopic camera, so as to obtain the type and quantity of bacteria existing on the object, and also obtain information such as the gloss, texture, and pores of the object. According to the analysis and calculation of the microscopic image, the hygienic status of the object is obtained.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 100 may implement audio functions, such as music playback and recording, through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
  • the keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state and power changes, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
  • FIG. 15 is a block diagram of a software structure of an electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • a floating launcher component may also be added to the application layer, which is used to display an application in the above-mentioned small window 30 as a default, and provides the user with an entrance to other applications.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, an activity manager, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the display screen, take a screenshot of the display screen, etc.
  • a FloatingWindow can be extended based on Android's native PhoneWindow and is dedicated to displaying the above-mentioned small window 30, so as to be distinguished from a common window; this window has the property of being displayed in the topmost layer of a series of windows.
  • the window size can be given an appropriate value according to the size of the actual screen and an optimal display algorithm.
  • the aspect ratio of the window may be the screen aspect ratio of a conventional mainstream mobile phone by default.
  • an additional close button and a minimize button can be drawn in the upper right corner.
  • some gesture operations of the user will be received, and if they match the operation gestures for the above small window, the window will be frozen and an animation effect of moving the small window will be played.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • a button view for closing, minimizing and other operations on the small window can be correspondingly added, and bound to the FloatingWindow in the above-mentioned window manager.
  • the phone manager is used to provide the communication function of the electronic device 100 .
  • for example, the management of call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications from applications running in the background, and notifications on the display in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • the activity manager is used to manage the activities running in the system, including process, application, service, task information and so on.
  • an activity task stack dedicated to managing the application activity displayed in the above-mentioned small window 30 can be added to the activity manager module, so as to ensure that the application activities and tasks in the small window do not conflict with the application displayed in full screen.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functional functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules, for example: an input manager, an input dispatcher, a surface manager, media libraries, a 3D graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL).
  • the input manager is responsible for obtaining event data from the underlying input driver, parsing and encapsulating it, and passing it to the input dispatch manager.
  • the input dispatch manager is used to store window information. After it receives an input event from the input manager, it looks for a suitable window among the windows it manages and dispatches the event to that window.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • when the touch sensor receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a tap operation and the control corresponding to the tap operation is the control of the camera application icon, the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
  • the camera 193 captures still images or video.
  • the software system shown in Figure 15 involves application presentation using micro-display capabilities (such as the gallery and the file manager), an instant sharing module that provides sharing capabilities, a content providing module that provides data storage and acquisition, an application framework layer that provides WLAN services and Bluetooth services, and a kernel and bottom layer that provide WLAN and Bluetooth capabilities and basic communication protocols.
  • Embodiments of the present application also provide a computer-readable storage medium. All or part of the processes in the above method embodiments may be completed by a computer program instructing relevant hardware, the program may be stored in the above computer storage medium, and when executed, the program may include the processes in the above method embodiments.
  • the computer-readable storage medium includes: read-only memory (ROM) or random access memory (random access memory, RAM), magnetic disk or optical disk and other media that can store program codes.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted over a computer-readable storage medium.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state disks (SSDs)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed are a method for identifying the hygiene status of an object and a related electronic device, relating to the field of artificial intelligence and to computer vision. The method comprises: an electronic device determines the type of a first object; the electronic device captures a first image of the first object by means of a first camera, the first image being a microscopic image; and the electronic device obtains the hygiene status of the first object according to the type of the first object and the first image. The electronic device can obtain, from the microscopic image of the first object, information such as the types and quantities of bacteria present on the first object, and can also obtain information such as the color, texture, and pores of the first object. In this way, the electronic device can perform a comprehensive analysis combining the type of the object with the microscopic image of the object, determine the hygiene status of the object, and output an intelligent prompt.

Description

Method for identifying the hygiene status of an object and related electronic device
This application claims priority to Chinese Patent Application No. 2020106154846, filed with the China National Intellectual Property Administration on June 30, 2020 and entitled "Method for Identifying the Hygiene Status of an Object and Related Electronic Device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of artificial intelligence and the corresponding subfield of computer vision, and in particular to a method for identifying the hygiene status of an object and a related electronic device.
Background
In natural science, the microscopic world usually refers to the material world at the level of particles such as molecules and atoms. The microscopic world contains many microorganisms that are closely related to our lives. The microscopic world is difficult to observe with the naked eye; if we could observe the types and density distribution of bacteria in daily life, it would help us better understand the environment we live in.
Generally speaking, identifying bacteria involves the following work. First, the individual morphology of the bacteria is observed under a traditional microscope, including Gram staining to distinguish Gram-positive (G+) from Gram-negative (G-) bacteria, and observing their shape, size, and the presence and position of spores. Then the colony morphology is observed under a traditional microscope, mainly its shape, size, edge condition, degree of elevation, transparency, color, and smell. Next, a motility test is performed to see whether the bacteria can move and what type of flagella they bear (polar or peritrichous). Finally, physiological-biochemical and serological reaction experiments are performed. Based on the results of the above experiments, the bacterial species is determined by consulting a microbiological classification key. However, traditional microscopes are expensive, bulky, and heavy, making them difficult to use in daily life, and ordinary people rarely have the experimental conditions and professional knowledge needed to identify bacteria.
Therefore, how to conveniently identify bacterial species in daily life and help users understand the microscopic world has become a problem to be solved.
Summary
Embodiments of this application provide a method for identifying the hygiene status of an object and a related electronic device, which can determine the hygiene status of an object by means of an electronic device and give an intelligent prompt. The "object" referred to herein may be a part of a person's body (such as a hand or a foot), or any object other than a person or a part of any object (for example, a food item such as a fruit, or a dinner plate), and so on.
It should be noted that, in the embodiments provided in this application, the execution order of the steps may have multiple possible implementations; some or all of the steps may be executed sequentially or in parallel.
According to a first aspect, this application provides a method for identifying the hygiene status of an object, including: an electronic device determines the type of a first object; the electronic device captures a first image of the first object through a first camera, the first image being a microscopic image; and the electronic device outputs first prompt information according to the type of the first object and the first image, the first prompt information being used to indicate the hygiene status of the first object. That the electronic device determines the type of the first object may mean that the electronic device itself determines the type, or that it obtains the determined type from another device (for example, a server), or that it determines the type according to information input by the user indicating the type of the first object, and so on. That the electronic device outputs the first prompt information according to the type of the first object and the first image may mean that the electronic device itself analyzes the type and the first image and outputs prompt information related to the analysis result, or that the electronic device sends at least the first image to another device (for example, a server), which performs the analysis and sends the result back to the electronic device, which then outputs prompt information related to the analysis result, and so on.
In the above embodiment, the execution order of the steps may have multiple possible implementations; for example, the step in which the electronic device determines the type of the first object may occur before, after, or at the same time as the step in which the electronic device captures the first image of the first object through the first camera.
In implementing the method provided by the first aspect, the first camera may be a microscope camera built into the electronic device, or an external microscope camera 1. In one possible implementation, the external microscope camera 1 may have a communication connection with the electronic device and may be mounted on the electronic device, for example clipped to its side. The electronic device obtains a microscopic image of the first object through microscope camera 1 (a microscopic image may refer to an image obtained by photographing the subject after microscopic magnification) and determines first information of the first object according to the microscopic image. In other implementations, an external microscope camera 2 may be mounted on one of the built-in camera lenses of the electronic device, and the electronic device may obtain the microscopic image of the first object through the external microscope camera 2 together with that built-in camera; there may be no communication connection between the external microscope camera 2 and the electronic device — it is merely physically attached to the surface of the built-in camera, changing the content of the built-in camera's external field of view (magnifying the subject), after which the electronic device can obtain the microscopic image of the first object through the built-in camera. In this implementation, the electronic device obtains the microscopic image (first image) of the first object by means of the external microscope camera 2 and the built-in camera, and the "first camera" in "the electronic device captures a first image of the first object through a first camera" may be understood as at least one of the external microscope camera 2 and the built-in camera.
The electronic device then performs a comprehensive analysis combining the type of the first object with the first image of the first object (the first image may be understood as a microscopic image), determines the hygiene status of the first object, and gives first prompt information. The hygiene status described in the first prompt information may be expressed as a score, where a higher score indicates a more hygienic object, or as a text description, using words such as hygienic, unhygienic, or very hygienic. That is, a user can use an electronic device (a portable electronic device such as a mobile phone or a tablet) to conveniently observe microscopic images of objects in daily life and obtain hygiene advice for those objects. This method allows users to conveniently identify bacterial species in daily life, helps them understand the microscopic world, and enables the electronic device to judge the hygiene status of objects and provide intelligent hygiene prompts.
With reference to the first aspect, in some embodiments, before the electronic device determines the type of the first object, the method further includes: the electronic device captures a second image of the first object through a second camera (the second image may be understood as a macroscopic image, as distinct from a microscopic image; a macroscopic image may magnify the object to some extent but not at a high magnification, and in some example scenarios may be understood as an image obtained by everyday photography with the camera lenses currently common on mobile phones). In this case, determining the type of the first object specifically includes: the electronic device determines the type of the first object according to the second image. Here, the second camera may be one or more camera lenses; the electronic device captures the second image of the first object through them, thereby determining the type of the first object. The second image may include the first object and may also include other objects; when the second image includes multiple objects, the user may determine the object of interest (the first object) by, for example, tapping on the screen of the electronic device.
In some embodiments, the second image further includes a second object, and the method further includes: the electronic device obtains a user operation on the display area of the second object, and outputs second prompt information indicating the hygiene status of the second object. Specifically, in one possible implementation, the electronic device determines the type of the first object and the type of the second object according to the second image. When the electronic device obtains a user operation on the display area of the first object (for example, the user taps within the display area of the first object in the second image), the electronic device captures a first image of the first object through the first camera and outputs, according to the type of the first object and the first image, first prompt information indicating the hygiene status of the first object; when the electronic device obtains a user operation on the display area of the second object, the electronic device captures a first image of the second object through the first camera and outputs, according to the type of the second object and its first image, second prompt information indicating the hygiene status of the second object. In this way, when the second image captured by the second camera includes two or more objects, the electronic device can display the prompt information related to one of the objects according to the user's selection operation, improving the user experience.
With reference to the first aspect, in some embodiments, determining the type of the first object includes: the electronic device determines the type of the first object according to a detected user operation. The user operation here may be a voice/text input, a correction, a tap on an option, and so on. For example, when the electronic device detects voice information input by the user, it recognizes the voice information to determine the type of the first object; when it detects text information input by the user, it recognizes the text information to determine the type. In some implementations, after the electronic device identifies the type of an object, if the user wants to correct that type, a user operation can assist the electronic device in correctly determining the type of the object.
With reference to the first aspect, in some embodiments, determining the type of the first object specifically includes: the electronic device determines the type of the first object according to a second image of the first object captured by the first camera. Here, the first camera may be a microscope camera whose magnification is adjustable. When the magnification of the microscope camera is small, the electronic device can determine the type of an object from an image captured by the microscope camera; when the magnification is adjusted high enough to recognize bacteria, the electronic device can determine the bacterial distribution on the object from the microscopic image captured by the microscope camera.
In some embodiments, the method further includes: determining first information of the first object according to the first image, where the first information of the first object is associated with the hygiene status of the first object, and the first information includes the types and quantities of bacteria. In this way, the hygiene status of the first object is judged by analyzing the types and quantities of bacteria on it.
In some embodiments, the first information may include at least one of texture, pore, and color information. By analyzing at least one of the texture, pores, color, and similar information of the first object, the freshness of the first object (such as a fruit or vegetable) can be judged. The first prompt information output by the electronic device may also be used to indicate the freshness of the first object.
With reference to the first aspect, in some embodiments, the first information includes the quantity of a first bacterium; when the quantity of the first bacterium is a first quantity, the first prompt information indicates that the hygiene status of the first object is a first hygiene status; when the quantity is a second quantity, the first prompt information indicates a second hygiene status. Specifically, if the first quantity does not exceed a first threshold, the first hygiene status may be expressed as hygienic, and if the second quantity exceeds the first threshold, the second hygiene status may be expressed as unhygienic. Alternatively, if the first quantity exceeds the first threshold, the first hygiene status may be expressed as unhygienic, and if the second quantity exceeds a second threshold greater than the first threshold, the second hygiene status may be expressed as very unhygienic. That is, the greater the quantity of the first bacterium, the greater its impact on the hygiene status of the first object.
In some embodiments, different bacteria affect the hygiene status of the first object to different degrees. For example, if a pathogenic bacterium is present on the first object, the first object can be directly confirmed as unhygienic; if an ordinary bacterium is present, whether the first object is unhygienic can be further judged from the quantity of that bacterium.
With reference to the first aspect, in some embodiments, outputting the first prompt information includes: the electronic device displays the first image of the first object and displays the first prompt information on the first image of the first object.
In some embodiments, the microscope camera may also be started through application software corresponding to the microscope camera, which may be installed on the electronic device.
In some embodiments, the prompt information includes a suggestion for improving the hygiene status of the first object. Here, depending on the bacterial species on the first object, the cause of its poor hygiene can be identified and a corresponding suggestion given, for example, a suggestion to wash, to heat at high temperature, to discard, and so on.
With reference to the first aspect, in some embodiments, outputting the first prompt information according to the type of the first object and the first image includes: the electronic device determines the hygiene status of the first object according to the knowledge graph corresponding to the type of the first object and the first image, where the knowledge graph includes the common bacterial species corresponding to the type of the first object. The knowledge graph indicates association rules between the hygiene status of the first object and bacterial species; an association rule may be that the presence of a certain bacterium indicates that the first object is unhygienic, or that the first object is unhygienic when the quantity of a certain bacterium exceeds a threshold, and so on.
In some embodiments, the first camera is a microscope camera, the second camera is a camera lens, the electronic device is a mobile phone, and the type of the first object is a hand.
According to a second aspect, this application provides an electronic device, including one or more processors, one or more memories, and a touchscreen. The one or more memories are coupled to the one or more processors and are configured to store computer program code comprising computer instructions which, when executed by the one or more processors, cause the electronic device to perform any of the methods in the first aspect and the embodiments related to the first aspect; for details, refer to the foregoing related content.
According to a third aspect, an embodiment of this application provides a computer storage medium including computer instructions which, when run on an electronic device, cause the electronic device to perform the method for identifying the hygiene status of an object in any possible implementation of any of the foregoing aspects.
According to a fourth aspect, an embodiment of this application provides a computer program product which, when run on a computer, causes the computer to perform the method for identifying the hygiene status of an object in any possible implementation of the first aspect.
Brief Description of the Drawings
Fig. 1a is a schematic structural diagram of an electronic device according to an embodiment of this application;
Fig. 1b is a schematic structural diagram of another electronic device according to an embodiment of this application;
Fig. 2 is a set of schematic interface diagrams according to an embodiment of this application;
Fig. 3 is another set of schematic interface diagrams according to an embodiment of this application;
Fig. 4a is another set of schematic interface diagrams according to an embodiment of this application;
Fig. 4b is another set of schematic interface diagrams according to an embodiment of this application;
Fig. 5 is another set of schematic interface diagrams according to an embodiment of this application;
Fig. 6 is another set of schematic interface diagrams according to an embodiment of this application;
Fig. 7a is another set of schematic interface diagrams according to an embodiment of this application;
Fig. 7b is another set of schematic interface diagrams according to an embodiment of this application;
Figs. 8-10 are further sets of schematic interface diagrams according to an embodiment of this application;
Fig. 11 is a schematic diagram of the principle of an algorithm structure according to an embodiment of this application;
Fig. 12 is a structural diagram of a knowledge graph according to an embodiment of this application;
Fig. 13 is a schematic flowchart of a method for identifying the hygiene status of an object according to an embodiment of this application;
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of this application;
Fig. 15 is a schematic diagram of a software architecture according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "multiple" means two or more.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more.
Embodiments of this application provide a method for identifying the hygiene status of an object, applicable to an electronic device having a microscope camera and camera lenses, where the microscope camera and the camera lenses may work simultaneously or one after another in a preset order. The electronic device can capture an image of an object through a camera lens and recognize the scene in the image (for example, food, a hand, a dining table, and so on) (this may be understood as recognizing the type of the object in the image). The electronic device can also capture an image of the same object through the microscope camera and recognize the microscopic information in the image, which includes the types and quantities of bacteria; for an image of an apple, for example, possible microscopic information includes yeasts, actinomycetes, edible fungi, and so on. The electronic device can then determine the hygiene status of the object from the scene information and microscopic information corresponding to the object. With this method, the user can conveniently observe the distribution of microorganisms on objects in everyday scenes and carry out corresponding hygiene treatment.
Embodiments of this application can perform a comprehensive analysis combining the scene information and microscopic information corresponding to an object, judge the hygiene status of the object, and give an intelligent prompt. The intelligent prompt may include a description of the object's hygiene status, suggestions for improving it, suggestions on how to handle the object, and so on. The manner in which the electronic device gives the prompt is not limited to text, voice, vibration, and/or an indicator light, and so on.
In one possible implementation, the microscope camera may include a plan achromatic miniature objective lens, which may have an optical resolution of 2 μm, a magnification of approximately 20-400x, and a field-of-view diameter of 5 mm.
First, the electronic device involved in the embodiments of this application is introduced.
Refer to Fig. 1a, which exemplarily shows a schematic structural diagram of an electronic device. As shown in Fig. 1a, one side of the rear cover 10 of the electronic device includes camera lenses 11.
The camera lenses 11 may include multiple cameras, at least including a microscope camera 12 (current typical camera lens assemblies do not include a microscope camera), and may also include the camera lenses currently common on electronic devices (cameras used for taking photos), such as a mid-focus camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a TOF (time of flight) depth camera, a movie camera, and/or a macro camera, and so on (in this implementation, the microscope camera 12 is built into the electronic device and is one of the camera lenses 11). For different functional requirements, an electronic device may be equipped with various camera combinations such as dual cameras (two), triple cameras (three), quad cameras (four), penta cameras (five), or even hexa cameras (six) to improve photographic performance. Generally, the magnification of the camera lenses 11 is between 0.2x and 20x. By way of example, basic camera parameters include: a mid-focus camera with a 50 mm focal length and f/1.6 aperture; a telephoto camera with a 200 mm focal length and f/3.4 aperture; and a wide-angle camera with a 35 mm focal length and f/2.2 aperture.
The microscope camera 12 has a certain magnification and can observe bacteria. Generally, the maximum magnification of the microscope camera 12 is above 200x.
In the embodiments of this application, the microscope camera 12 is provided in the electronic device as one of the multiple camera lenses 11. The electronic device captures an image through one of the camera lenses 11 other than the microscope camera 12 and can recognize the scene in that image; in some embodiments, the scene may be understood as the type of object in the captured image (for example, food, a hand, a dining table, and so on). The electronic device captures an image through the microscope camera 12 and can recognize the microscopic information in that image, including the types and quantities of bacteria. The microscope camera 12 and the other cameras among the camera lenses 11 can work simultaneously, so the user can conveniently observe the distribution of microorganisms in everyday scenes.
The embodiments of this application do not limit the position of the microscope camera 12; it may be placed on the rear-cover side of the electronic device, on the display side, on the side opposite the display, or on one side of a side screen of the electronic device.
Refer to Fig. 1b, which exemplarily shows a schematic structural diagram of another electronic device. As shown in Fig. 1b, one side of the rear cover 10 includes camera lenses 11, and a microscope camera 12 is also mounted on the rear cover (in this implementation the microscope camera 12 is not one of the camera lenses 11). The camera lenses 11 may include multiple cameras (such as a mid-focus camera, a telephoto camera, and a wide-angle camera), and the microscope camera 12 is mounted, as an accessory of the electronic device, on one of the camera lenses 11 (the microscope camera 12 may be attached to the surface of a camera lens, changing the content of that camera's external field of view (magnifying the subject); the electronic device can then obtain the microscopic image of the first object through that camera, which may be called the borrowed camera). The electronic device can identify the borrowed camera and use the other available cameras among the camera lenses 11 for everyday shooting (microscopic shooting with the microscope camera differs from today's everyday/conventional shooting). Possible ways for the electronic device to identify the borrowed camera are introduced below by way of example.
Way one: after the microscope camera 12 is mounted on one of the camera lenses 11, the electronic device captures an image with each of the camera lenses 11 and, by comparing and analyzing the images obtained from each camera, determines the camera on which the microscope camera 12 is mounted.
Way two: after the microscope camera 12 is mounted on one of the camera lenses 11, the electronic device receives information sent by the application software corresponding to the microscope camera 12 and determines the camera on which the microscope camera 12 is mounted.
Way three: after receiving a user operation to start the application software corresponding to the microscope camera 12, the electronic device displays a first user interface that includes the cameras among the camera lenses 11 that support mounting the microscope camera 12. The first user interface may also include the camera on which the electronic device recommends the user mount the microscope camera 12. The electronic device determines the camera on which the microscope camera 12 is mounted according to the received user operation.
Taking camera lenses consisting of only a telephoto camera and a wide-angle camera as an example: when the microscope camera is mounted on the telephoto camera, the electronic device can call the wide-angle camera to capture an image and recognize the scene from it; when the microscope camera is mounted on the wide-angle camera, the electronic device calls the telephoto camera to capture an image and recognize the scene from it.
Taking camera lenses consisting of only a mid-focus camera, a telephoto camera, and a wide-angle camera as an example: when the microscope camera is mounted on the mid-focus camera, the electronic device can call the wide-angle camera and/or the telephoto camera to capture an image and recognize the scene from it; when the microscope camera is mounted on the telephoto camera, the electronic device can call the mid-focus camera and/or the wide-angle camera to capture an image and recognize the scene from it.
That is, when the microscope camera 12 is mounted on one of the multiple camera lenses 11, the electronic device can call the other camera lenses 11 for conventional shooting and recognize the scene in the captured images.
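By way of illustration only (this sketch is not part of the original application), the following Python code shows one way "way one" above could be realized: one frame is taken from each built-in camera, and the camera whose frame is a sharpness outlier is treated as the borrowed camera, since a normal distant scene is severely defocused through an attached microscope objective. The focus measure and the dictionary-based interface are assumptions.

```python
import numpy as np

def find_borrowed_camera(frames):
    """frames: dict mapping camera_id -> 2-D grayscale numpy array, one
    frame per built-in camera. Returns the likely borrowed camera and the
    per-camera focus scores."""
    def sharpness(img):
        img = img.astype(np.float64)
        # simple Laplacian response as a focus measure: high-frequency
        # energy collapses when the external microscope covers the lens
        lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2]
               + img[1:-1, 2:] - 4.0 * img[1:-1, 1:-1])
        return lap.var()
    scores = {cam: sharpness(f) for cam, f in frames.items()}
    # the borrowed camera is the one with the lowest focus score
    return min(scores, key=scores.get), scores
```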
In the embodiments of this application, the electronic device may be a mobile phone, a tablet computer, a handheld computer, a wearable device, a virtual reality device, a smart home device, or the like, or a functional module installed on or running on such an electronic device.
In the embodiments of this application, the microscope camera 12 on the electronic device may be an external camera (mounted outside the electronic device). The external microscope camera 12 (one possible product form is Tipscope; in some embodiments, the way the external microscope camera interacts with the electronic device may refer to the way Tipscope interacts with a mobile phone) may include a miniature objective lens and other components. In one possible embodiment, the external microscope camera 12 may have a communication connection with the electronic device, where the connection is not limited to wired or wireless (for example, Bluetooth or Wi-Fi). The image captured by the microscope camera 12 is sent to the electronic device, which obtains the image and extracts the microscopic information in it.
For convenience of description, in the following embodiments, a camera that plays a role in capturing a microscopic image may be called the first camera, and a camera that plays a role in capturing a macroscopic image may be called the second camera. For example, in Fig. 1a, the first camera is the microscope camera 12, and the second camera is one or more of the camera lenses 11 other than the microscope camera 12; in Fig. 1b, the first camera may be understood as the microscope camera 12 and/or the camera among the camera lenses 11 on which the microscope camera 12 is mounted, and the second camera is one or more of the camera lenses 11 on which the microscope camera 12 is not mounted. A microscopic image may be called a first image, and a macroscopic image a second image.
In the embodiments of this application, the electronic device captures a microscopic image through the first camera (which may include one or more cameras, possibly of different performance) and recognizes the microscopic information in it; the electronic device captures a macroscopic image through the second camera (which may include one or more cameras, possibly of different performance) and recognizes the scene in it.
In the embodiments of this application, user operations include but are not limited to touch operations, voice operations, gesture operations, and so on.
The following describes in detail, from the perspective of the application interface, how the electronic device identifies the hygiene status of an object.
First, how the electronic device is triggered to obtain microscopic information through the first camera is introduced.
Way one: start the camera application of the electronic device and obtain microscopic information through the first camera.
The camera application is application software of the electronic device used for taking photos. When the user wants to capture an image or a video, the camera application is started and the electronic device calls the cameras to shoot. The first camera may be configured in the camera application and called through it. As shown in part a of Fig. 2, the display interface 202 presents multiple application icons, including the application icon of camera 205. When the electronic device detects a user operation 206 on the application icon of camera 205, it displays the application interface provided by the camera application.
Refer to part b of Fig. 2, which shows a possible user interface provided by the camera application. As shown in part b of Fig. 2, the application interface of camera 205 may include: a display area 30, a flash icon 301, a settings icon 302, a mode selection area 303, a gallery icon 304, a confirmation icon 305, and a switch icon 306. If the user wants to obtain microscopic information, the application icon of micro mode 303A in the mode selection area 303 can be triggered by user operation 307.
The display area 30 in part b of Fig. 2 displays a preview image of the data captured by the camera currently in use. The camera currently in use may be the default camera set by the camera application, i.e., when the camera application is opened, the display area 30 always shows a preview image of the data captured by the second camera; it may also be the camera that was in use when the camera application was last closed.
The flash icon 301 may indicate the working state of the flash. The icon may be displayed in different forms when the flash is on and off; for example, white-filled when on and black-filled when off. The user can turn the flash on and off by touching the flash icon 301. Generally, the flash is turned on while the first camera captures a microscopic image, illuminating the photographed object.
Settings icon 302: when a user operation on the settings icon 302 is detected, in response the electronic device may display other shortcut functions, such as adjusting the resolution, timed shooting (also called delayed shooting, which controls when the photo is taken), silent shooting, voice-controlled shooting, smile capture (when the camera detects a smiling face, it automatically focuses on it), and so on.
The mode selection area 303 provides different shooting modes; depending on the mode selected by the user, the cameras enabled and the shooting parameters differ. It may include micro mode 303A, night mode 303B, photo mode 303C, video mode 303D, and more 303E. In part b of Fig. 2, the icon of photo mode 303C is marked to indicate that the current mode is photo mode.
Micro mode 303A: in this mode, the user can observe microscopic images of objects. When a user operation on micro mode 303A is detected, in response the electronic device captures microscopic images through the first camera. The icon of photo mode 303C in the mode selection area 303 is no longer marked, and micro mode 303A is marked (as in Fig. 3, icon 303A is marked gray) to indicate that the current mode is micro mode. The electronic device then obtains the image captured by the first camera, and the display area 30 displays it. The electronic device obtains microscopic information, including the types and quantities of bacteria, from the image captured by the first camera.
Night mode 303B can improve the detail rendering of bright and dark areas, control noise, and present more picture details. Photo mode 303C suits most shooting scenes and can automatically adjust the shooting parameters according to the current environment. Video mode 303D is used to shoot a video. More 303E: when a user operation on more 303E is detected, in response the electronic device may display other selection modes, such as panorama mode (automatic stitching: the electronic device stitches multiple consecutively shot photos into one to widen the angle of view) and HDR mode (automatically shooting three photos — underexposed, normally exposed, and overexposed — and combining the best parts into one photo), and so on.
When a user operation on the application icon of any mode in the mode selection area 303 (for example micro mode 303A, night mode 303B, photo mode 303C, video mode 303D, panorama mode, HDR mode, and so on) is detected, in response the display area 30 displays the image processed in the current mode.
The mode icons in the mode selection area 303 are not limited to virtual icons and may also be implemented as physical buttons.
Gallery icon 304: when a user operation on the gallery icon 304 is detected, in response the electronic device can enter the gallery, which displays photos and videos that have been taken. The gallery icon 304 may be displayed in different forms; for example, after the electronic device saves the image currently captured by the camera, a thumbnail of that image is displayed in the gallery icon 304.
Confirmation icon 305: when a user operation on the confirmation icon 305 (for example a touch, voice, or gesture operation) is detected, in response the electronic device obtains the image currently captured by the camera in use in the current mode (or that image after the processing corresponding to the current mode) and saves it in the gallery, which can be entered through gallery icon 304.
Switch icon 306 can be used to switch between the front camera and the rear camera, both of which belong to the camera lenses 11. The shooting direction of the front camera is the same as the display direction of the screen of the electronic device in use, and that of the rear camera is opposite. If the display area 30 currently shows the image captured by the rear camera, when a user operation on switch icon 306 is detected, in response the display area 30 shows the image captured by the front camera, and vice versa.
In part b of Fig. 2, the electronic device detects user operation 307 on micro mode 303A and, in response, captures a microscopic image through the first camera. The icon of photo mode 303C in the mode selection area 303 is no longer marked, and micro mode 303A is marked to indicate that the current mode is micro mode. The electronic device obtains the image captured by the first camera, and the display area 30 displays it.
Exemplarily, part c of Fig. 2 shows the application interface of micro mode 303A. The icon of micro mode 303A in the mode selection area 303 is marked, indicating that the current mode is micro mode. The display area 30 in part c of Fig. 2 displays the image captured by the first camera, from which the electronic device can obtain microscopic information.
It should be noted that, in one implementation, in response to the user operation on micro mode 303A, the electronic device refreshes/updates switch icon 306 into conversion icon 307. The conversion icon 307 in part c of Fig. 2 can be used to switch the content of display area 30 between the macroscopic image captured by the second camera and the microscopic image captured by the first camera. If the display area 30 currently shows the image captured by the first camera, when a user operation on conversion icon 307 is detected, in response the display area 30 shows the image captured by the second camera, and vice versa.
In some implementations, in micro mode, the second camera and the first camera can capture images simultaneously. Regardless of whether the display area 30 shows the image captured by the second camera or the first camera, the electronic device can obtain the scene information in the image (or, in other words, the type of the object in the image) from the macroscopic image captured by the second camera while obtaining microscopic information from the microscopic image captured by the first camera.
In this application, user operations 206 and 307 include but are not limited to taps, shortcut keys, gestures, hovering touches, voice commands, and other user operations.
Way two: start an application of the electronic device dedicated to obtaining microscopic images (for example, an application named micro mode) and capture microscopic images through the first camera.
The micro mode application may be an application dedicated to the microscope camera, which can be downloaded from the network and installed on the electronic device. When the user wants to use the microscope camera to shoot images or videos, the micro mode application is started and the electronic device calls the first camera to shoot. "Micro mode" is only an example of a possible name; other names are possible. As shown in part a of Fig. 3, the display interface 202 presents multiple application icons, including the application icon of micro mode 207. If the user wants to capture microscopic images through the first camera, the application icon of micro mode 207 is triggered by user operation 208. In response to user operation 208, the electronic device displays the application interface of micro mode 207.
Refer to part b of Fig. 3, which exemplarily shows an application interface provided by a possible micro mode application. The interface may include: a display area 40, a flash icon 401, a settings icon 402, a gallery icon 403, a confirmation icon 404, and a conversion icon 405. The display area 40 may display the image captured by the camera currently in use.
Gallery icon 403: when a user operation on the gallery icon 403 is detected, in response the electronic device can enter the microscopic image gallery, which displays the microscopic photos and videos that have been taken. The gallery icon 403 may be displayed in different forms; for example, after the electronic device saves the microscopic image currently captured by the first camera, a thumbnail of it is displayed in the gallery icon 403.
Conversion icon 405 can be used to switch the content of display area 40 between the image captured by the second camera and the image captured by the first camera. As shown in Fig. 4a, if the display area 40 currently shows the image captured by the first camera, when a user operation 406 on conversion icon 405 is detected, in response the display area 40 shows the image captured by the second camera, and vice versa.
For flash icon 401, refer to the description of flash icon 301 in part b of Fig. 2; for settings icon 402, refer to settings icon 302; for confirmation icon 404, refer to confirmation icon 305.
In an optional implementation, the display content of the display area 40 can also be switched by sliding. Exemplarily, as shown in Fig. 4b, user operation 407 is a leftward slide acting on the display area 40. The display area 40 initially shows the microscopic image (captured by the first camera); when the electronic device detects operation 407 acting in the display area 40, it gradually displays the image captured by the second camera along with operation 407, achieving the effect of switching the display content of the display area 40. Similarly, if the display area 40 initially shows the image captured by the second camera, the user can switch the display content with a rightward slide.
In the application client of micro mode 207, the second camera and the first camera can capture images simultaneously. Regardless of whether the display area 40 shows the image captured by the second camera or the first camera, the electronic device can obtain the scene in the image from the image captured by the second camera while obtaining the microscopic information from the image captured by the first camera.
In this application, user operation 208 includes but is not limited to taps, shortcut keys, gestures, hovering touches, voice commands, and other user operations.
The above two ways introduced the different paths by which the electronic device captures microscopic images through the first camera, and the corresponding display interfaces. Part c of Fig. 2 and part b of Fig. 3 both exemplarily show the application interface of micro mode. Figs. 5 and 6 each provide another possible application interface of micro mode.
As shown in part a of Fig. 5, display area 410 can display the image captured by the second camera in real time, and display area 40 can display the image captured by the first camera in real time. The images in display area 40 and display area 410 correspond to each other, i.e., the content of display area 40 is the microscopic image of the content of display area 410. When the user keeps changing the shooting angle or subject, the images captured by the second and first cameras keep changing, and so do the display contents of display areas 40 and 410.
Compared with part a of Fig. 5, the micro mode application interface in part b of Fig. 5 may further include a control 411. Control 411 is used to trigger the electronic device to recognize the microscopic information of an object (further optionally, it may also be used to trigger the electronic device to recognize the scene information corresponding to the object; recognition of the scene information may instead be triggered by other user operations rather than through control 411), so as to determine the hygiene status of the object. In one possible implementation, when the electronic device detects a user operation on control 411 for the first time (for example, the user taps to select control 411), the electronic device obtains the scene in the image from the image captured by the second camera and obtains the microscopic information of the object from the image captured by the first camera. The electronic device judges the hygiene status of the object from the scene and the microscopic information and may output prompt information about the object's hygiene status. In one possible implementation, when the electronic device detects a user operation on control 411 for the second time (the user may tap to deselect control 411), the electronic device may stop recognizing the scene information and microscopic information corresponding to the object, i.e., it no longer outputs prompt information about the object's hygiene status; in this case, the user can view the microscopic and macroscopic images of the object in the micro mode application interface.
In a possible embodiment, control 411 in part b of Fig. 5 can also be used to trigger the electronic device to capture a macroscopic image through the second camera (in this case, display area 410 in part b of Fig. 5, which shows the macroscopic image (possibly a thumbnail), may not appear at first, appearing only after the user operates control 411). For example, when the electronic device detects a user operation on control 411 for the first time, it may display the image captured by the second camera (possibly a thumbnail) in display area 410 in real time, obtain the scene from the image captured by the second camera, and obtain the microscopic information from the image captured by the first camera, so as to determine the hygiene status of the object and output prompt information about it. When the electronic device detects a user operation on control 411 for the second time, it no longer needs to obtain the macroscopic image; display area 410 may be hidden or show a black screen. The electronic device stops recognizing the scene and microscopic information of the object, i.e., it no longer outputs prompt information about the object's hygiene status.
In a possible embodiment, the display contents of display area 40 and display area 410 can be switched with each other according to user operations. For example, display area 410 shows the image captured by the second camera and display area 40 shows the image captured by the first camera. When the electronic device detects a tap operation on display area 410, the display contents of the two areas are swapped, i.e., display area 40 shows the image captured by the second camera and display area 410 shows the image captured by the first camera. The user operation is not limited to a tap on display area 410; it may also be a tap on conversion icon 405, or a drag, double-tap, or gesture operation on display area 410 or display area 40, and so on.
In some embodiments of this application, the size of display area 410 may differ from that in Fig. 5. For example, the region covered by display area 410 may be larger or smaller than the region covered by display area 410 in Fig. 5.
In some optional embodiments, the shape, position, and size of display area 410 may be set by default by the system. For example, as shown in Fig. 5, the system may by default set display area 410 as a vertical rectangular interface in the lower-right region of the display.
In some optional embodiments, the shape, position, and size of display area 410 may also be determined in real time according to user operations. The size and position of display area 410 may be related to the positions where the user's two fingers stop sliding on the display. For example, the larger the distance between the positions where the two fingers stop sliding, the larger display area 410 is. As another example, the region of display area 410 may cover the trajectory of the two-finger slide.
As shown in Fig. 6, Fig. 6 includes display area 41 and display area 42, which are displayed on the screen of the electronic device in a split-screen manner. Display area 41 can display the image captured by the second camera in real time, and display area 42 can display the image captured by the first camera in real time. The images in the two areas correspond to each other, i.e., the content of display area 42 is the microscopic display image of display area 41. When the user keeps changing the shooting angle or subject, the images captured by the second and first cameras keep changing, and so do the display contents of display areas 41 and 42.
Fig. 6 also includes a position frame 51; the content of display area 42 is the microscopic image of the region within position frame 51. As the image within position frame 51 changes, the content of display area 42 changes accordingly.
In some optional embodiments, the sizes of display areas 41 and 42 may be determined in real time according to user operations. For example, the split line between them can be dragged up and down: when the user drags the split line upward, the length of display area 41 decreases and that of display area 42 increases; when the split line is dragged downward, display area 41 becomes larger and display area 42 smaller. Optionally, the display contents of the two areas can be switched with each other according to user operations.
The above embodiments provide possible application interfaces for micro mode. In response to user operation 307, the electronic device captures a microscopic image through the first camera and displays the application interface of micro mode 303A (as shown in part c of Fig. 2); or in response to user operation 208, it captures a microscopic image through the first camera and displays the application interface of micro mode 207 (as shown in part b of Fig. 3); or it displays the application interfaces of Figs. 5 and 6; and so on. In the above application interfaces, the electronic device can obtain the microscopic information of the first object through the first camera and then infer the hygiene status of the first object in combination with the type of the first object.
The following introduces how the electronic device determines the type of the first object.
The electronic device captures an image of the first object through the second camera and automatically detects the type of the first object in the captured image. As shown in the left drawing of Fig. 7a, display area 40 shows the image captured by the second camera. Cursor 70 and cursor 71 respectively indicate the objects detected in the image (a peach and a hand), with cursor 70 displayed in the display area of the peach and cursor 71 in that of the hand. The number of cursors depends on the number of objects the electronic device detects in the image, and a description of the recognized object type is displayed near each cursor: the text "peach" is displayed near cursor 70 and "hand" near cursor 71, prompting the user about the objects detected in the image and their types. Since the object indicated by cursor 70 is actually an apple rather than a peach, the user can tap the display area showing "peach".
When a user operation within the display area of the text describing an object is detected, the electronic device displays an input window on the application interface, prompting the user to enter the object to be detected, thereby providing the function of correcting the object type recognized by the electronic device. Exemplarily, when a tap on the display area of the text "peach" is detected, as shown in the right drawing of Fig. 7a, the electronic device displays input window 50 on the application interface, prompting the user to enter the object to be detected. The user can enter the object type in input window 50. For example, the image captured by the second camera is of an apple; if the electronic device recognizes the object in the image as a peach, the user can tap the display area with the text "peach" and enter "apple" as the object type in input window 50. The electronic device receives the text input by the user and corrects the object type in the image to apple. The text near cursor 70 then reads "apple".
As shown in the right drawing of Fig. 7a, input window 50 may further include function icons such as re-recognize 501, voice input 502, and confirm 503.
Re-recognize 501: when a user operation on re-recognize 501 is detected, in response the electronic device recognizes the types of the objects in display area 40 again; the recognition result differs from the previous one and is displayed near the cursor of the object in the image, prompting the user about the objects detected in the image and their types.
Voice input 502: when a user operation on voice input 502 is detected, in response the electronic device obtains the audio input by the user and recognizes its content; the object described in the audio is taken as the type of the first object.
Confirm 503: when a user operation on confirm 503 is detected, in response the electronic device saves the text entered by the user in manual input window 50; that text is taken as the type of the first object.
In the embodiments of this application, input window 50 provides a way to assist the electronic device in judging the type of the first object. When the object the user wants to detect does not match the object type recognized by the electronic device, the user can correct it by tapping the display area of the object's text, improving the accuracy of detecting the object's hygiene status.
In some possible embodiments, this application also provides another way to determine the type of the first object: the electronic device does not need to recognize the scene in the image captured by the second camera, but directly determines the type of the first object to be detected from text information, voice information, or the like input by the user. As shown in the left drawing of Fig. 7b, display area 40 shows the image captured by the first camera. The application interface may further include: a manual input icon 73, a gallery icon 701, and a confirmation icon 702. For gallery icon 701, refer to the description of gallery icon 403 in Fig. 3; for confirmation icon 702, refer to confirmation icon 404 in Fig. 3.
Manual input icon 73 is used to enter the type of the first object. As shown in the right drawing of Fig. 7b, in response to a user operation on manual input icon 73, the electronic device displays input window 51 on the application interface, prompting the user to enter the object to be detected. The user can enter the object type in input window 51. For example, if the electronic device receives the text "apple", it determines that the type of the first object is apple.
For the voice input icon and the confirmation icon in the right drawing of Fig. 7b, refer to the descriptions of voice input 502 and confirm 503 in Fig. 7a.
In this implementation, the electronic device does not need to capture a macroscopic image through the second camera or recognize the scene in a captured image; it directly determines the type of the first object from the text or voice information input by the user, saving the electronic device's resources and improving efficiency.
The above embodiments provide ways for the electronic device to determine the type of the first object, including detecting the image captured by the second camera or receiving a user operation. This application provides a method for identifying the hygiene status of an object: the electronic device can obtain the microscopic information of the first object through the first camera and infer the hygiene status of the first object in combination with its type, so as to output prompt information about its hygiene status.
Exemplarily, as in part a of Fig. 8, when the user taps cursor 70 for the apple, the electronic device performs analysis and calculation combining the apple and the apple's microscopic information, obtains the apple's hygiene status, and outputs prompt information.
Optionally, after determining the type of the first object and its microscopic information, the electronic device obtains the hygiene status of the first object and outputs prompt information upon receiving an instruction to obtain the hygiene status of the first object. Exemplarily, as in part a of Fig. 8, when the user taps cursor 70 for the apple, the electronic device outputs prompt information.
In the embodiments of this application, the prompt information may be used to prompt the user about the hygiene status of the first object. In some embodiments it may also be used to prompt the user how to improve the object's hygiene status. In some embodiments it may also be used to prompt the user how to handle the object. The manner in which the electronic device gives the prompt is not limited to text, voice, vibration, indicator lights, and so on.
The following introduces how the electronic device outputs the prompt information about the first object's hygiene status.
In some embodiments of this application, after determining the hygiene status of the first object, the electronic device may output the prompt information in response to a received user operation. The user can choose to view the hygiene status of the object of interest. Refer to Fig. 8, which exemplarily shows how the electronic device outputs prompt information after receiving a user operation.
As shown in Fig. 8, display area 40 in part a of Fig. 8 shows the image captured by the second camera; cursors 70 and 71 respectively indicate the objects detected in the image (an apple and a hand), with cursor 70 displayed in the display area of the apple and cursor 71 in that of the hand.
The electronic device outputs a prompt in display area 40 ("tap an object to view its hygiene status") to prompt the user that an object can be tapped to view its hygiene status. Specifically, as shown in part b of Fig. 8, if the user wants to view the hygiene status of the apple, cursor 70 can be tapped; in response to the tap, display area 40 in part b of Fig. 8 shows the image captured by the first camera, together with prompt box 60. The prompt content of box 60 includes the types and quantities of bacteria (800,000 rod-shaped bacteria, 100,000 Penicillium) and the object's hygiene status (the apple is not clean; washing is suggested).
As shown in part c of Fig. 8, if the user wants to view the hygiene status of the hand, cursor 71 can be tapped; in response to the tap, display area 40 in part d of Fig. 8 shows the image captured by the first camera, together with prompt box 61. The prompt content of box 61 includes the types and quantities of bacteria (800,000 E. coli, 300,000 staphylococci, 50,000 influenza viruses) and the object's hygiene status (the hand is not clean; washing is suggested).
In other embodiments of this application, after determining the hygiene status of the first object, the electronic device outputs the prompt information directly, so the user learns the object's hygiene status as quickly as possible. Refer to Fig. 9, which exemplarily shows how the electronic device outputs prompt information directly.
As shown in Fig. 9, display area 40 in part a of Fig. 9 shows the image captured by the second camera; cursors 70 and 71 respectively indicate the objects detected in the image (an apple and a hand), with cursor 70 displayed in the display area of the apple and cursor 71 in that of the hand.
Part a of Fig. 9 further includes prompt area 60 and prompt area 61, which respectively describe the hygiene status of the apple and the hand: prompt area 60 outputs the prompt "the apple is not clean; washing is suggested", and prompt area 61 outputs the prompt "the hand is not clean; washing is suggested". The number of prompt areas depends on the number of objects whose hygiene status the electronic device detects in the image: if the electronic device detects the hygiene status of two objects, two prompt areas are output; if three, three prompt areas are output; and so on.
When the electronic device detects user operation 602 on conversion icon 405, in response to user operation 602, display area 40 in part b of Fig. 9 shows the image captured by the first camera. Part b of Fig. 9 also includes prompt areas 60 and 61, which are not described again here.
In the embodiments of this application, the output manner of the prompt information is not limited. It may be output as text (for example, the way prompt area 60 is displayed in Figs. 8 and 9), as images, voice, vibration, indicator lights, and so on, or the hygiene status may be indicated by the display color of a cursor or text. For example, part a of Fig. 9 includes cursor 70 indicating the apple and cursor 71 indicating the hand; if the electronic device detects that the apple is unhygienic, cursor 70 is displayed in red; if it detects that the hand is hygienic, cursor 71 is displayed in green. Through the different cursor colors, the user can learn the hygiene status of objects in advance and more intuitively spot unhygienic objects among several, then examine the unhygienic objects in detail and carry out hygiene treatment on them.
In the embodiments of this application, the output content of the prompt information is not limited. It may include a description of the object's hygiene status (for example, the object is unhygienic, not clean, or of a low hygiene level), suggestions for improving the object's hygiene status (for example, washing, wiping, or heating is suggested), suggestions on how to handle the object (for example, discarding is suggested), a description of the influence of bacterial species on the hygiene status (for example, the food is unhygienic because of excessive E. coli; sterilization by heating at 100 degrees is suggested), and the object's freshness (for example, the apple is not fresh, the banana has spoiled), and so on.
In some embodiments of this application, the size of prompt area 60 or 61 may differ from that in Figs. 8 and 9. For example, the region covered by prompt area 60 or 61 may be larger or smaller than the region it covers in Figs. 8 and 9.
In some optional embodiments, the shape, position, and size of prompt area 60 or 61 may be set by default by the system, or may be determined in real time according to user operations. Their size and position may be related to the positions where the user's two fingers stop sliding on the display: for example, the larger the distance between those positions, the larger the prompt area; as another example, the prompt area may cover the trajectory of the two-finger slide.
The above embodiments provide the relevant ways in which the electronic device outputs the prompt information about the first object's hygiene status, including the output content and the output form of the prompt information.
In the embodiments of this application, after entering micro mode, the electronic device can, in any application interface, save the display content of the display area in response to a user operation received on the shooting control. The user can view the microscopic images of objects and the prompt information through the gallery. The shooting control may be, for example, confirmation icon 404 or confirmation icon 305. As shown in Fig. 10, display area 90 exemplarily shows image 81, image 82, and image 83. Image 81 is the display content of the display area in Fig. 6: the electronic device detects a user operation on confirmation icon 404 in Fig. 6, obtains the display content of the display area in Fig. 6, and saves it in the gallery. Image 82 is the display content of display area 40 in part b of Fig. 3: the electronic device detects a user operation on confirmation icon 404 in part b of Fig. 3, obtains that content, and saves it in the gallery. Image 83 is the display content (including prompt area 60) of display area 40 in part b of Fig. 8: the electronic device detects a user operation on the confirmation icon in part b of Fig. 8, obtains that content, and saves it in the gallery.
The application interface shown in Fig. 10 can be entered through gallery icon 403.
In this application, the electronic device captures an image of an object through the second camera and recognizes the scene in the image, the scene being the type of object in the captured image (for example, food, a hand, a dining table, and so on). The electronic device captures an image of the same object through the first camera and recognizes the microscopic information in it, which includes the types and quantities of bacteria.
A comprehensive analysis combining the scene information and the microscopic information can judge the hygiene status of the scene and give intelligent prompts. For example, when the type of the first object in the scene information is food, the bacteria included in the microscopic information include yeasts, actinomycetes, edible fungi, and so on; the intelligent prompts the electronic device gives for the food include that the food is unhygienic and washing is suggested, that heating at high temperature is suggested, that discarding is suggested, and so on. When the type of the first object is a hand, the bacteria present on it may be staphylococci, E. coli, influenza viruses, and so on; the intelligent prompts for the hand include that the hand is unhygienic and washing is suggested, or that washing with hand sanitizer is suggested, and so on. When the type of the first object is air, the bacteria present may be Neisseria meningitidis, Mycobacterium tuberculosis, hemolytic cocci, Corynebacterium diphtheriae, Bordetella pertussis, and so on; the intelligent prompts for air include that the air quality is poor and wearing a mask is suggested, or that wearing a medical mask is suggested, and so on.
The technical principles related to this solution are introduced below.
(1) Image place/scene category recognition (Places CNN)
This is a type of image classification that judges, from an image, the type of place where the image scene is located. Using existing mature network frameworks (for example ResNet), relatively high-precision recognition of images and places can be achieved: the scenes and objects in an image are detected, and the detected scene and object names are returned together with the corresponding confidence (accuracy). Places365 is an open-source dataset for scene classification, including Places365-standard and Places365-challenge. The training set of Places365-standard has 365 scene categories, each with up to 5,000 images. The training set of Places365-challenge has 620 scene categories, each with up to 40,000 images. The model for image place/scene category recognition is trained on Places365. Convolutional neural networks trained on the Places365 database can be used for scene recognition and as generic deep scene features for visual recognition.
In this application, after the electronic device obtains the second image, it can judge the type of the image scene from the second image through image place/scene category recognition technology.
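For illustration only (this sketch is not part of the original application), the following minimal PyTorch sketch shows Places365-style scene classification as just described. The checkpoint file "resnet18_places365.pth", the class-list file "categories_places365.txt", and the input file name are assumptions; any ResNet variant trained on Places365 could be substituted.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumed local files: a ResNet-18 checkpoint trained on Places365 and a
# text file listing the 365 class names, one per line.
model = models.resnet18(num_classes=365)
model.load_state_dict(torch.load("resnet18_places365.pth", map_location="cpu"))
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

with open("categories_places365.txt") as f:
    classes = [line.strip() for line in f]

img = preprocess(Image.open("second_image.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.nn.functional.softmax(model(img)[0], dim=0)

top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    print(f"{classes[idx]}: {p:.2f}")  # scene label with its confidence
```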
(2) A deep learning-based object detection algorithm (YOLO v3)
Object detection means finding all objects of interest in an image and determining their positions and sizes. The recognition process includes classification, location, detection, and segmentation. As shown in Fig. 11, Fig. 11 shows the network structure of YOLO v3, which specifically includes:
darknet-53 without FC layer: 53 denotes the number of convolutional plus fully connected layers in the darknet network; darknet-53 without FC layer denotes the first 52 layers of darknet-53, with no fully connected layer.
Input layer: 416×416×3 means the input image is 416×416 pixels with 3 channels.
DBL: Darknetconv2d_BN_Leaky, the basic component of yolo_v3, namely convolution + BN + Leaky ReLU, used for feature extraction from the image.
Resn: n is a number (res1, res2, ..., res8, and so on) indicating how many res_units the res_block contains. Its input and output generally remain consistent, and no other operation is performed; only the difference is taken.
Concat: tensor concatenation. The upsampling of a darknet intermediate layer is concatenated with a later layer. Concatenation differs from the residual layer's add operation: concatenation expands the dimensions of the tensor, while add is direct addition that does not change the tensor's dimensions.
Output layer: includes three prediction paths; the depths of y1, y2, and y3 are all 255, and the side lengths follow the pattern 13:26:52. YOLO v3 has each grid cell predict 3 boxes, so each box needs the five basic parameters (x, y, w, h, confidence) plus the probabilities of 80 classes. Hence 3×(5+80)=255.
The above YOLOv3 network is further explained below using a 416×416×3 input image as an example.
Y1 layer: the input is a 13×13 feature map with 1024 channels. After a series of convolution operations, the size of the feature map stays unchanged, but the number of channels is finally reduced to 75. The final output is a 13×13 feature map with 75 channels, on which classification and position regression are performed.
Y2 layer: the 13×13, 512-channel feature map of layer 79 is convolved to generate a 13×13, 256-channel feature map, which is then upsampled to generate a 26×26, 256-channel feature map and merged with the 26×26, 512-channel mid-scale feature map of layer 61. After another series of convolution operations, the size of the feature map stays unchanged, but the number of channels is finally reduced to 75. The final output is a 26×26 feature map with 75 channels, on which classification and position regression are performed.
Y3 layer: the 26×26, 256-channel feature map of layer 91 is convolved to generate a 26×26, 128-channel feature map, which is then upsampled to generate a 52×52, 128-channel feature map and merged with the 52×52, 256-channel mid-scale feature map of layer 36. After another series of convolution operations, the size of the feature map stays unchanged, but the number of channels is finally reduced to 75. The final output is a 52×52 feature map with 75 channels, on which classification and position regression are performed.
In summary, object detection at three different scales is completed.
This application uses the deep learning-based object detection algorithm to detect and classify bacterial species and obtain the types and quantities of bacteria. The general flow is as follows. First, bacterial image information is acquired through the microscope camera. Second, the image is preprocessed with operations such as smoothing and transformation, so as to strengthen the important features of the image. Third, feature extraction and selection: features are extracted and selected from the preprocessed image; recognition relies on the image's own features, and useful features are extracted. Fourth, classifier design: a recognition rule is obtained through training, and a feature classification scheme is obtained through that recognition rule. Fifth, classification decision: the recognized object is classified in the feature space, so as to identify the specific category of the recognized object in the shooting scene.
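As a worked illustration of the 3×(5+80)=255 layout described above (this sketch is not part of the original application), the following Python code decodes the 255-vector predicted for a single grid cell into its three candidate boxes; anchor scaling and non-maximum suppression are omitted.

```python
import numpy as np

def decode_cell(cell, num_classes=80, boxes_per_cell=3):
    """Decode one YOLO v3 output cell. `cell` is the 255-vector predicted
    for one grid position: 3 boxes x (x, y, w, h, confidence + 80 class
    probabilities). This sketches the tensor layout only, not a full decoder."""
    assert cell.size == boxes_per_cell * (5 + num_classes)  # 3*(5+80) = 255
    preds = cell.reshape(boxes_per_cell, 5 + num_classes)
    results = []
    for box in preds:
        x, y, w, h, objectness = box[:5]
        class_probs = box[5:]
        results.append({
            "box": (x, y, w, h),
            "confidence": float(objectness),
            "class_id": int(np.argmax(class_probs)),
        })
    return results

cell = np.random.rand(255)   # stand-in for one cell of the y1 prediction head
print(decode_cell(cell)[0])
```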
(3) Knowledge graph
A knowledge graph can be understood as a networked knowledge base formed by linking entities with attributes through relationships; it includes nodes and connecting lines, where the nodes are entities and the connecting lines are association rules. A knowledge graph interconnects all kinds of fragmented, scattered objective knowledge to support comprehensive knowledge retrieval, decision support, and intelligent inference.
This application associates macroscopic and microscopic information in the form of a knowledge graph, allowing users of this function to learn promptly and quickly about the bacterial situation around macroscopic objects and, combined with the suggestions given by intelligent inference, to raise their awareness of self-protection, while also improving the practicality of the electronic device. After the electronic device obtains the type of the first object, it obtains the knowledge graph of the type of the first object from the knowledge graph. For example, Fig. 12 shows knowledge graphs centered respectively on the three nodes "hand not clean", "food not clean", and "apple not clean". As can be seen from Fig. 12, the bacteria that can make an apple not clean include rod-shaped bacteria, Rhodotorula, Penicillium, and other bacteria, where the connecting lines represent association rules. An association rule may be that when a bacterium is present, the food is not clean, or that when the quantity of a bacterium exceeds a threshold, the food is not clean, and so on. When the electronic device obtains that the type of the first object is apple, it can judge the apple's hygiene status through the apple's knowledge graph, combined with the types and quantities of microorganisms detected by the electronic device.
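For illustration only (this sketch is not part of the original application), the following Python code mirrors the knowledge graph just described: each edge links a hygiene conclusion to a bacterium together with an association rule ("present", or a count threshold). The species names and threshold values are illustrative stand-ins for the entries of Fig. 12.

```python
# Illustrative graph: conclusion node -> {bacterium -> association rule}.
HYGIENE_GRAPH = {
    "apple not clean": {
        "rod_shaped_bacteria": {"rule": "count_over", "threshold": 500_000},
        "rhodotorula":         {"rule": "count_over", "threshold": 200_000},
        "penicillium":         {"rule": "present"},
    },
    "hand not clean": {
        "e_coli":         {"rule": "count_over", "threshold": 300_000},
        "staphylococcus": {"rule": "count_over", "threshold": 300_000},
    },
}

def triggered_rules(conclusion, observed_counts):
    """Return the bacteria whose association rule fires for a conclusion.
    observed_counts maps bacterium name -> detected quantity."""
    fired = []
    for bacterium, rule in HYGIENE_GRAPH.get(conclusion, {}).items():
        n = observed_counts.get(bacterium, 0)
        if rule["rule"] == "present" and n > 0:
            fired.append(bacterium)
        elif rule["rule"] == "count_over" and n > rule["threshold"]:
            fired.append(bacterium)
    return fired

print(triggered_rules("apple not clean",
                      {"rod_shaped_bacteria": 800_000, "penicillium": 100_000}))
```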
Based on the above technical principles, the process of the method for identifying the hygiene status of an object is introduced below with examples. Refer to Fig. 13, which is a schematic flowchart of a method for identifying the hygiene status of an object according to an embodiment of this application. As shown in Fig. 13, the method may include the following steps.
Step S101A: capture a second image through the second camera.
The electronic device captures a second image through the second camera. The second image is a macroscopic image, the second camera is one or more cameras used to capture macroscopic images, and the second image includes the first object.
Step S102A: determine the type of the first object in the second image.
The electronic device determines the type of the first object according to the second image. The type of the first object may be a broad category such as food or insect, a fine category such as apple, banana, grape, bread, ant, or hand, or a scene such as air, river, or sea. For example, the electronic device captures an image of an apple through the second camera; from the captured image, the electronic device determines that the image includes one object, recognized from the image as an apple, i.e., the type of the first object is apple. As another example, the electronic device captures an image of a hand holding an apple through the second camera; from the captured image, the electronic device determines that the image includes two objects, a first object and a second object, recognized from the image as an apple and a hand. The type of the first object may be apple or hand; when the type of the first object is apple, the type of the second object is hand, and when the type of the first object is hand, the type of the second object is apple.
It can be understood that when there is only one target object in the second image, that target object is the first object, and the electronic device recognizes its type by image recognition technology; when there are two or more target objects in the second image, the first object can be determined according to preset rules or received user operations, and its type recognized by image recognition technology. Several methods of determining the first object in the second image are exemplified below.
Method one: determine the first object among multiple target objects according to preset rules.
The preset rule may be that the target object occupying the largest proportion of the whole image frame serves as the first object; or that the target object occupying the largest proportion of the center of the frame serves as the first object; or that any target object occupying the center of the frame may serve as the first object; or that all target objects in the frame may serve as the first object; and so on. The electronic device determines the first object among the multiple target objects in the second image according to the preset rule, and after determining the first object, recognizes its type from the image.
For example, the electronic device captures a second image through the second camera, and the second image includes four target objects: an apple, a hand, a banana, and a table. If the preset rule is that the target object occupying the largest proportion of the whole frame serves as the first object, the electronic device determines the first object to be the table; if the preset rule is that the target object occupying the largest proportion of the center of the frame serves as the first object, the electronic device determines the first object to be the apple.
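For illustration only (this sketch is not part of the original application), the following Python code implements the two preset rules from the example above on hypothetical detector output. The bounding boxes and the precise definition of the "center" rule (largest fraction of the object inside the middle 50% of the frame) are assumptions.

```python
def pick_first_object(detections, frame_w, frame_h, rule="largest"):
    """detections: list of (label, (x, y, w, h)) boxes from the detector."""
    def area(box):
        x, y, w, h = box
        return w * h
    if rule == "largest":
        # rule 1: the object occupying the largest share of the whole frame
        return max(detections, key=lambda d: area(d[1]))[0]
    if rule == "center":
        # rule 2: the object with the largest fraction of itself inside the
        # central window (here taken as the middle 50% of the frame)
        wx0, wy0, wx1, wy1 = (frame_w * 0.25, frame_h * 0.25,
                              frame_w * 0.75, frame_h * 0.75)
        def center_fraction(d):
            _, (x, y, w, h) = d
            ix = max(0.0, min(x + w, wx1) - max(x, wx0))
            iy = max(0.0, min(y + h, wy1) - max(y, wy0))
            return (ix * iy) / (w * h)
        return max(detections, key=center_fraction)[0]
    raise ValueError(f"unknown rule: {rule}")

dets = [("apple", (420, 300, 180, 180)), ("hand", (380, 360, 260, 240)),
        ("banana", (60, 80, 150, 90)), ("table", (0, 0, 1080, 720))]
print(pick_first_object(dets, 1080, 720, rule="largest"))  # -> table
print(pick_first_object(dets, 1080, 720, rule="center"))   # -> apple
```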
Method two: determine the first object according to a detected user operation.
User operations include operations that input voice/text/images. The electronic device detects the user operation and determines the first object or the type of the first object.
For example, the user may draw a preset figure on the image to select the first object. Specifically, if the image captured by the second camera is of a hand holding an apple, the user draws a closed figure on the image, indicating that the object within the area covered by the closed figure is the first object. If the object in the closed figure drawn by the user includes the apple, the first object is the apple; if it includes the hand, the first object is the hand; if it includes both the apple and the hand, the first object and the second object are the apple and the hand.
In some possible embodiments, if the type of the first object detected by the electronic device does not match the first object, the user can correct the type of the first object. For example, referring to Fig. 7a, in the left drawing of Fig. 7a, the user can change the object type by tapping the display area of "peach". When the electronic device receives a user operation on the display area of "peach", it indicates that the object "peach" is to be modified. As shown in the right drawing of Fig. 7a, the electronic device prompts the user to enter the object to be detected; when it detects "apple" entered by the user in the text box, it determines the object's type to be apple.
Optionally, the user can also correct the type of the first object to apple by voice input. When the electronic device detects "apple" in the user's voice input, it determines the object's type to be apple.
Optionally, the user can also trigger the electronic device to re-recognize the types of the objects in the image. In the left drawing of Fig. 7a, the user can change the object type by tapping the display area of "peach"; when the electronic device receives a user operation on that display area, it indicates that the object "peach" is to be modified. As shown in the right drawing of Fig. 7a, the electronic device detects the user operation on the re-recognize icon, re-recognizes the "peach", and the recognized type differs from peach.
Optionally, the way of determining the type of the first object is not limited to steps S101A and S102A above, and the type of the second object is determined in the same way as the type of the first object. In some possible embodiments, the electronic device does not need to use the second camera to obtain a macroscopic image, but can determine the type of the first object according to a detected user operation, including text input, image input, voice input, and so on. For example, referring to Fig. 7b, the electronic device detects "apple" entered by the user in the text box and determines the type of the first object to be apple. The electronic device may also, from an image input by the user, obtain by image recognition the type of the object in that image as the type of the first object; the image may come from the gallery or the network. The electronic device may also, from voice input by the user, obtain by voice recognition the type of the object described in that voice as the type of the first object.
In some possible embodiments, the electronic device does not need to use the second camera to obtain a macroscopic image; instead, it can use the microscope camera (for example, microscope camera 12 in Fig. 1a) to obtain the macroscopic image and determine the type of the first object from the macroscopic image obtained by the microscope camera. Specifically, the magnification of the microscope camera may be between 1x and 400x: at a magnification of 1-5x it can obtain macroscopic images, and at a magnification of 200-400x it can obtain microscopic images. By automatically switching the magnification of the microscope camera, the microscope camera captures both the macroscopic image and the microscopic image of the first object, and the electronic device recognizes the type of the first object in the macroscopic image.
Step S101B: capture a first image through the first camera.
The electronic device captures a first image through the first camera. The first image is a microscopic image, the first camera is one or more cameras that capture microscopic images, and the first image includes the bacteria present on the first object.
In some possible embodiments, the electronic device displays a first user interface; when the electronic device receives a user operation on a first icon and responds to it, the electronic device captures the first image through the first camera, the first user interface including the first icon. Taking Fig. 2 as an example, the first user interface may be the interface in part b of Fig. 2, and the first icon is the icon of micro mode 303A; when the electronic device receives a user operation (for example a tap) on the icon of micro mode 303A, the electronic device captures the first image through the first camera. The application interface is then as shown in part c of Fig. 2: the electronic device displays in real time on the display interface a preview image of the data captured by the first camera; the preview image is a microscopic image showing the bacteria present on the photographed object.
In some possible embodiments, the electronic device displays a home screen including multiple application icons, among which is a first application icon; when the electronic device receives a user operation on the first application icon, the electronic device captures the first image through the first camera. Referring to Fig. 3, the home screen is the interface in part a of Fig. 3, and the first application icon is the application icon of micro mode 207. When the electronic device receives a user operation (for example a tap) on the icon of micro mode 207, it captures the first image through the first camera. The application interface is then as shown in part b of Fig. 3: the electronic device displays on the display interface the image captured by the first camera; the image is a microscopic image showing the bacteria present on the photographed object.
Step S102B: determine first information of the first object in the first image.
The electronic device determines the first information of the first object according to the first image, where the first information includes the situation of the bacteria present on the first object, including the types and quantities of bacteria.
The electronic device captures the first image of the first object through the first camera and determines the types and quantities of bacteria on the first object according to an object detection algorithm (for example the YOLO v3 algorithm above). For example, the electronic device captures the first image of the first object through the first camera and, according to the YOLO v3 algorithm, determines that the bacteria on the first object in the first image include bacterium 1, bacterium 2, bacterium 3, and bacterium 4, as well as the quantities of bacterium 1, bacterium 2, bacterium 3, and bacterium 4.
In one possible implementation, the electronic device may determine the first information of the first object in the first image according to the knowledge graph of the first object (specifically, the knowledge graph corresponding to the type of the first object). The knowledge graph of the first object includes the common bacterial species corresponding to the first object, which can serve as a reference when determining the bacterial species on the object from its microscopic image (for example, preferentially checking whether a detected bacterium is one of the species common on that object), improving the efficiency of bacteria recognition. For example, the electronic device recognizes the type of the first object as hand; combined with the hand's knowledge graph, the electronic device can conclude that the common bacteria distributed on hands include E. coli, streptococci, and Pseudomonas aeruginosa. In recognizing the bacterial species on the hand, the electronic device can preferentially compare against these common hand bacteria; when the similarity between a bacterium and E. coli reaches a threshold, the bacterium can be determined to be E. coli, with no need to compare against other species (those uncommon on hands), improving the efficiency of species recognition.
Optionally, the reference role of the knowledge graph can also be embodied in the following example. The electronic device recognizes the type of the first object as hand; in recognizing the bacterial species on the hand with the object detection algorithm, some bacteria look very similar in appearance, for example Salmonella and E. coli, and are difficult for the electronic device to recognize accurately by appearance. In this case, combined with the hand's knowledge graph, the electronic device can conclude that the common bacteria on hands include E. coli but not Salmonella; then, when the probability that a bacterium is Salmonella is close to the probability that it is E. coli (for example 51% Salmonella and 49% E. coli), the electronic device preferentially recognizes it as E. coli, improving the efficiency and accuracy of species recognition.
In one possible implementation, the electronic device may receive a bacterial species entered by the user and recognize and screen for that species in a targeted manner. For example, the user wants to detect the distribution of E. coli on an apple and enters the species name E. coli in the input box; in response, the electronic device performs targeted recognition of E. coli in the microscopic image of the apple and obtains the quantity and distribution of E. coli. The first information of the first object then includes the quantity and distribution of E. coli. In one possible scenario, suppose E. coli is not a common bacterium on apples; if the electronic device only obtained the common bacteria on apples from the apple's knowledge graph and recognized those, it might not meet the user's needs. In this case, the specific species the user cares about can be determined by receiving user input, and targeted recognition performed (for example, preferentially recognizing whether that specific species is present on the object), so that the method is not limited by the object's knowledge graph and can better meet the personalized needs of different users.
Optionally, based on the above implementation, the reference role of the knowledge graph can also be embodied in the following example. The knowledge graph may include only the common bacterial species corresponding to an object; for bacteria not in the object's knowledge graph (bacteria not strongly associated with the object), including newly emerged bacteria or a bacterium of recent public concern, the electronic device can determine the species the user cares about according to received user operations (the interface may offer newly emerged bacteria or a bacterium of recent public concern for the user to select for focused screening in the microscopic image, or provide an input box to receive the name of a bacterium the user wishes to screen for). In determining bacterial species from the microscopic image, the bacteria of particular concern to the user can then be screened preferentially (further, the screened species may also include the common bacteria on that object). The prompt information output by the electronic device may include a prompt on whether the bacteria of concern are present. For example, the electronic device recognizes the type of the first object as hand and, from the hand's knowledge graph, concludes that common hand bacteria include E. coli, streptococci, Pseudomonas aeruginosa, and so on. The electronic device receives Salmonella as the species the user wants to detect; assuming Salmonella and E. coli look very similar, the electronic device has difficulty recognizing them accurately by appearance. If the probability that a bacterium is Salmonella is close to the probability that it is E. coli (for example 51% versus 49%), the electronic device may output probability information about the presence of Salmonella (and may further prompt the user with the probability that the bacterium on the object is E. coli).
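For illustration only (this sketch is not part of the original application), the following Python code shows one way the knowledge-graph prior and the user's watchlist could be combined, as in the Salmonella/E. coli example above: the detector's raw class probabilities are reweighted by a per-object prior, while watchlisted species are always surfaced. The prior values and species names are assumptions.

```python
def resolve_species(raw_probs, common_on_object, user_watchlist=()):
    """raw_probs: dict species -> detector probability for one detection.
    common_on_object: species listed in the object's knowledge graph.
    user_watchlist: species the user asked to screen for."""
    # knowledge-graph prior: species common on this object keep full weight,
    # others are down-weighted (0.5 is an illustrative value)
    prior = {s: (1.0 if s in common_on_object else 0.5) for s in raw_probs}
    weighted = {s: p * prior[s] for s, p in raw_probs.items()}
    total = sum(weighted.values())
    posterior = {s: w / total for s, w in weighted.items()}
    label = max(posterior, key=posterior.get)
    # watchlisted species are reported with their raw probability even when
    # they lose the tie-break, so the user still sees them
    alerts = {s: raw_probs[s] for s in user_watchlist if s in raw_probs}
    return label, posterior, alerts

raw = {"salmonella": 0.51, "e_coli": 0.49}
label, post, alerts = resolve_species(
    raw, common_on_object={"e_coli", "streptococcus"},
    user_watchlist={"salmonella"})
print(label)   # -> e_coli after the prior reweighting
print(alerts)  # raw salmonella probability is still surfaced
```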
Step S103: determine the hygiene status of the first object according to the type of the first object and the first information of the first object.
After obtaining the type and first information of the first object, the electronic device determines the hygiene status of the first object according to the first object's knowledge graph and the situation of the bacteria present on the first object. The knowledge graph of the first object indicates the association between at least one class of bacteria and the hygiene status of the type of the first object. For example, when the type of the first object is food, the bacteria present on the first object may be yeasts, actinomycetes, edible fungi, and so on; when the type is hand, the bacteria may be staphylococci, E. coli, influenza viruses, and so on; when the type is air, the bacteria may be Neisseria meningitidis, Mycobacterium tuberculosis, hemolytic cocci, Corynebacterium diphtheriae, Bordetella pertussis, and so on.
That is, after obtaining the type of the first object, the electronic device obtains from the knowledge graph the hygiene-status knowledge graph of the first object or of an object of the same type as the first object (a second object). For example, the electronic device obtains the type of the first object as "apple" and, according to that type, obtains the "apple not clean" knowledge graph, which indicates the association between "apple not clean" and bacteria. The lower left of Fig. 8 exemplarily shows the apple's knowledge graph; the bacteria associated with an unclean apple include rod-shaped bacteria, Penicillium, Rhodotorula, and so on. Combined with the bacteria the electronic device recognizes on the apple, the apple's hygiene status is determined according to the association rules.
The above association rules include: the quantity of a first bacterium exceeding a first threshold makes the hand not clean; the quantity of a second bacterium exceeding a second threshold makes the hand not clean; and so on. Embodiments of this application may use score-based statistics and determine the hygiene status from the final score. For example, when the type of the first object is hand, in the hand's knowledge graph the bacteria associated with an unclean hand include staphylococci, E. coli, influenza viruses, and so on. When the quantity of staphylococci exceeds the preset first threshold, it is judged that this bacterium makes the hand not clean; using scoring, the score for the hand being not clean due to staphylococci is 5 points. When the quantity of E. coli exceeds the preset second threshold, it is judged that E. coli makes the hand not clean; using scoring, the score due to E. coli is 5 points, for a cumulative score of 10 points. The bacteria associated with an unclean hand are scored in turn, and the hand's hygiene status is determined from the final score.
Optionally, when the quantity of the first bacterium exceeds the first threshold, the greater the quantity of the first bacterium, the deeper its influence on the hygiene status / the greater its calculation weight. For example, when the type of the first object is hand, in the hand's knowledge graph the bacteria associated with an unclean hand include staphylococci, E. coli, influenza viruses, and so on. When the quantity of staphylococci exceeds the preset first threshold, it is judged that the staphylococci make the hand not clean; using scoring, the score for the hand being not clean is 5 points. When the quantity of staphylococci, already above the first threshold, also exceeds a second threshold (greater than the first), the score for the hand being not clean due to staphylococci becomes 10 points.
Optionally, different bacteria influence the hygiene status to different degrees / have different calculation weights. For example, when the type of the first object is vegetable, in the vegetable's knowledge graph the bacteria associated with unclean vegetables include molds, rod-shaped bacteria, Salmonella, Shigella, Staphylococcus aureus, and so on. Since Salmonella, Shigella, and Staphylococcus aureus are pathogenic bacteria, pathogens are given the highest priority: if the electronic device recognizes any of the above pathogens on the vegetable (regardless of whether the quantity exceeds a threshold), the vegetable is judged unhygienic.
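For illustration only (this sketch is not part of the original application), the following Python code combines the scoring rules just described: pathogens override the score outright, quantities above a first threshold score 5 points, quantities above a higher second threshold score 10 points, and the total maps to a status. All thresholds, weights, and the score-to-status mapping are illustrative values, not values from this application.

```python
RULES = {
    "hand": {
        "staphylococcus": {"t1": 300_000, "t2": 600_000, "pathogen": False},
        "e_coli":         {"t1": 300_000, "t2": 600_000, "pathogen": False},
    },
    "vegetable": {
        "salmonella": {"pathogen": True},
        "mold":       {"t1": 200_000, "t2": 500_000, "pathogen": False},
    },
}

def hygiene_score(object_type, counts):
    """counts: dict bacterium -> detected quantity. Returns (status, score)."""
    score = 0
    for bacterium, n in counts.items():
        rule = RULES.get(object_type, {}).get(bacterium)
        if rule is None:
            continue
        if rule["pathogen"] and n > 0:
            # pathogens have the highest priority, regardless of quantity
            return "unhygienic (pathogen detected)", None
        if n > rule.get("t2", float("inf")):
            score += 10        # heavier weight far above the threshold
        elif n > rule.get("t1", float("inf")):
            score += 5
    status = ("unhygienic" if score >= 10 else
              "hygienic" if score == 0 else "borderline")
    return status, score

print(hygiene_score("hand", {"staphylococcus": 400_000, "e_coli": 800_000}))
```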
In some possible implementations, after judging that the hygiene status of the first object is unhygienic, the electronic device obtains the distribution of bacteria on the first object from the first image of the first object and then, from that distribution, determines the specific unhygienic regions of the first object. For example, in one possible implementation, the first object is a hand; the electronic device can divide the hand image into regions and assess the hygiene status of the bacterial situation in each region, thereby obtaining the specific unhygienic regions of the hand. That is, when outputting the prompt information, the electronic device can indicate on the macroscopic image exactly which region is unhygienic.
In some possible implementations, as the knowledge graph is continuously improved, the output prompt content can also indicate the object's hygiene status from different angles. For example, the electronic device obtains the type of the first object as apple and, according to that type, obtains the "apple not clean" knowledge graph, and may also obtain the "apple spoiled" knowledge graph and the "apple not fresh" knowledge graph. Based on these three knowledge graphs, combined with the bacteria the electronic device recognizes on the apple, the association rules can determine whether the apple's hygiene status is not clean, spoiled, or not fresh. It can be understood that three is only an example; in practice there may be more or fewer cases.
In some possible implementations, the electronic device obtains the type of the first object as apple and, according to that type, obtains the "apple not clean", "apple spoiled", and "apple not fresh" knowledge graphs. It determines from the first image that the bacteria present on the first object include bacterium 1, bacterium 2, bacterium 3, bacterium 4, and so on. If the quantity of bacterium 1 can cause "apple not clean", the inference "apple not clean" scores 5 points; if the quantity of bacterium 2 can cause "apple spoiled" and "apple not fresh", the two inferences "apple spoiled" and "apple not fresh" each score 5 points; if the quantity of bacterium 3 can cause "apple spoiled" and bacterium 3 has a large weight, reaching a weight of 100, the inference "apple spoiled" scores 100 points; and so on. From the final scores, the inference "apple spoiled" can be drawn.
In some possible implementations, the first information may further include information such as the object's texture, pores, and color. By analyzing the texture, pores, color, and similar information on the first object, the freshness of the object can be judged.
Step S104: output prompt information to indicate the hygiene status of the first object.
After determining the hygiene status of the first object, the electronic device outputs prompt information to indicate the hygiene status of the first object.
After determining the type of the first object and the first information of the first object, the electronic device obtains the hygiene status of the first object and outputs prompt information upon receiving an instruction to obtain the hygiene status of the first object. Exemplarily, as in part a of Fig. 8, when the user taps cursor 70 for the apple, the electronic device outputs prompt information.
In some possible implementations, as in part a of Fig. 8, when the user taps cursor 70 for the apple, the electronic device then performs analysis and calculation combining the apple and the apple's first information, obtains the apple's hygiene status, and outputs prompt information.
Optionally, the electronic device displays the macroscopic or microscopic image of the first object, with the prompt information displayed on the macroscopic or microscopic image of the first object.
In some possible implementations, the electronic device displays the macroscopic images of the first object and the second object. When the electronic device obtains a user operation on the display area of the first object, it outputs first prompt information indicating the hygiene status of the first object; when it obtains a user operation on the display area of the second object, it outputs second prompt information indicating the hygiene status of the second object. Referring to Fig. 8: in part a of Fig. 8, the electronic device receives a user operation (for example a tap) on the apple's cursor; in response to the tap, the display area in part b of Fig. 8 shows the image captured by the first camera and outputs prompt information about the apple's hygiene status. In part c of Fig. 8, the electronic device receives a user operation (for example a tap) on the hand's cursor; in response to the tap, the display area in part d of Fig. 8 shows the image captured by the first camera and outputs prompt information about the hand's hygiene status.
The prompt information may also be output directly. After determining the hygiene status of the first object, the electronic device outputs the prompt information on the image of the first object. Referring to Fig. 9: in part a of Fig. 9, the electronic device outputs the prompt information for the apple and the hand on the macroscopic image of the first object; in part b of Fig. 9, the electronic device outputs the prompt information for the apple and the hand on the microscopic image of the first object.
In some possible implementations, the prompt may take the form of output text (for example the way prompt area 60 or 61 is displayed in Fig. 8 or Fig. 9), or of images, voice, vibration, indicator lights, and so on, or the hygiene status may be indicated by the display color of a cursor or text. For example, part a of Fig. 9 includes cursor 70 indicating the apple and cursor 71 indicating the hand; if the electronic device detects that the apple is unhygienic, cursor 70 is displayed in red; if it detects that the hand is hygienic, cursor 71 is displayed in green.
In some possible implementations, the output content of the prompt information is not limited. It may include a description of the object's hygiene status, for example the object is unhygienic, not clean, or of a low hygiene level; suggestions for improving the object's hygiene status, for example washing, wiping, or heating is suggested; suggestions on how to handle the object, for example discarding is suggested; a description of the influence of bacterial species on the hygiene status, for example the food is unhygienic because of excessive E. coli and sterilization by heating at 100 degrees is suggested; and a description of the object's freshness, for example the apple is not fresh or the banana has spoiled; and so on.
In the embodiments of this application, the electronic device can, in any application interface, shoot and save the display content of the display area in response to a user operation received on the shooting control. The user can view the microscopic images of objects and the prompt information through the gallery.
In the embodiments of this application, the electronic device captures a microscopic image through the first camera and displays it on the display of the electronic device, enabling the user to view and photograph the microscopic world; the electronic device can recognize the bacterial species in the captured microscopic image and show the user the bacterial forms and names present on the object; the user can also perform operations on the electronic device so that the electronic device determines the name of the bacterial species the user wants to detect, and the electronic device can detect and recognize that species in a targeted manner; and the electronic device can analyze the object's hygiene status based on the recognized bacterial species and quantities, prompt the user about the object's hygiene status, and give corresponding hygiene suggestions.
图14示出了电子设备100的结构示意图。
下面以电子设备100为例对实施例进行具体说明。应该理解的是,电子设备100可以具有比图中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In some other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU), and so on. The different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller may generate operation control signals based on instruction operation codes and timing signals, and complete the control of fetching and executing instructions.
In this application, the processor 110 may be configured to determine the type of the first object and the first information of the first object, and to determine the hygiene status of the first object based on the type of the first object and the first information of the first object. The hygiene status may be expressed as a score, with a higher score indicating a more hygienic object; it may also be expressed as a textual description, for example hygienic, unhygienic, or very hygienic. In other words, the user can conveniently observe micro images of everyday objects, determine the distribution of microorganisms on an object from the micro image, and thereby obtain hygiene suggestions for that object.
A memory may further be arranged in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. The cache may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, and the like.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are merely illustrative and do not constitute a limitation on the structure of the electronic device 100. In some other embodiments of this application, the electronic device 100 may use an interface connection manner different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired-charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless-charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and so on. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance). In some other embodiments, the power management module 141 may alternatively be arranged in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may alternatively be arranged in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be configured to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of the wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 may provide solutions for wireless communication including 2G/3G/4G/5G applied on the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (low noise amplifier, LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the result to the modem processor for demodulation. The mobile communication module 150 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be arranged in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be arranged in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal and then transmit the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, and the like), or displays an image or video through the display 194. In some embodiments, the modem processor may be an independent device. In some other embodiments, the modem processor may be independent of the processor 110 and arranged in the same device as the mobile communication module 150 or another functional module.
The wireless communication module 160 may provide solutions for wireless communication applied on the electronic device 100, including wireless local area networks (wireless local area networks, WLAN) (such as wireless fidelity (wireless fidelity, Wi-Fi) networks), Bluetooth (bluetooth, BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), infrared (infrared, IR), and so on. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation through the antenna 2.
The electronic device 100 implements the display function through the GPU, the display 194, the application processor, and so on. The GPU is a microprocessor for image processing and connects the display 194 and the application processor. The GPU is configured to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 194 is configured to display images, videos, and the like. The display 194 includes a display panel. The display panel may use a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (quantum dot light emitting diodes, QLED), and so on. In some embodiments, the electronic device 100 may include 1 or N displays 194, where N is a positive integer greater than 1.
The electronic device 100 may implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and so on.
The ISP is configured to process data fed back by the camera 193. For example, when a photograph is taken, the shutter opens, light is transmitted through the camera to the photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP may also perform algorithmic optimization on the noise, brightness, and skin tone of the image, and may optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be arranged in the camera 193.
The camera 193 is configured to capture static images or video, and includes a mid-focus camera, a telephoto camera, a wide-angle camera, an ultra-wide-angle camera, a TOF (time of flight) depth camera, a movie camera, a macro camera, and so on. For different functional requirements, the electronic device may carry camera combinations such as dual (two cameras), triple (three cameras), quadruple (four cameras), quintuple (five cameras), or even sextuple (six cameras) to improve photographing performance. An object generates an optical image through the camera that is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
In embodiments of this application, the camera 193 may further include a microscopic camera, configured to capture micro images. The microscopic camera has a certain magnification and can observe bacteria. By capturing micro images of an object through the microscopic camera, the types and quantities of bacteria present on the object can be obtained, as can information such as the object's luster, texture, and pores. The hygiene status of the object is derived from analysis and calculation on the micro image.
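One way to perform the "analysis and calculation on the micro image" is to classify candidate bacteria with a trained convolutional network and feed the resulting counts into a scoring function such as assess_hygiene() above. The following sketch assumes a hypothetical pre-trained model file and label set, and elides the patch-extraction step that would produce the crops; this application does not specify a particular network or format.

```python
# Hedged sketch: classify crops from the micro image with a trained CNN,
# then count each bacterial species. Model file and labels are hypothetical.
from collections import Counter
import torch

LABELS = ["staphylococcus", "e_coli", "influenza_virus"]

model = torch.jit.load("bacteria_classifier.pt")  # hypothetical trained model
model.eval()

def count_bacteria(patches: torch.Tensor) -> dict:
    """patches: N x 3 x H x W crops of candidate bacteria from the micro image."""
    with torch.no_grad():
        preds = model(patches).argmax(dim=1)  # one class index per patch
    return Counter(LABELS[i] for i in preds.tolist())
```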
The digital signal processor is configured to process digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device 100 performs frequency selection, the digital signal processor is configured to perform Fourier transform and the like on the frequency energy.
The video codec is configured to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that the electronic device 100 can play or record videos in multiple encoding formats, for example moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (neural-network, NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer pattern between neurons in the human brain, it processes input information rapidly and can also learn continuously. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example image recognition, face recognition, speech recognition, and text understanding.
The external memory interface 120 may be configured to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and videos in the external memory card.
The internal memory 121 may be configured to store computer-executable program code, where the executable program code includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and applications required for at least one function (such as a sound playing function and an image playing function). The data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS).
The electronic device 100 may implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.
The audio module 170 is configured to convert digital audio information into an analog audio signal for output, and also to convert analog audio input into a digital audio signal. The audio module 170 may also be configured to encode and decode audio signals. In some embodiments, the audio module 170 may be arranged in the processor 110, or some functional modules of the audio module 170 may be arranged in the processor 110.
The button 190 includes a power button, a volume button, and so on. The button 190 may be a mechanical button or a touch button. The electronic device 100 may receive button input and generate key signal input related to user settings and function control of the electronic device 100.
The indicator 192 may be an indicator light and may be used to indicate charging status and battery level changes, and may also be used to indicate messages, missed calls, notifications, and so on.
The software system of the electronic device 100 may use a layered architecture, an event-driven architecture, a microkernel architecture, a microservices architecture, or a cloud architecture. The embodiments of the present invention use the Android system with a layered architecture as an example to describe the software structure of the electronic device 100.
FIG. 15 is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present invention.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with one another through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime (Android runtime) and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 15, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
In this application, a floating window launch component (floating launcher) may also be added to the application layer, serving as the default application displayed in the aforementioned small window 30 and providing the user with an entry into other applications.
The application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the applications of the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 15, the application framework layer may include a window manager (window manager), a content provider, a view system, a phone manager, a resource manager, a notification manager, an activity manager (activity manager), and so on.
The window manager is configured to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the display, capture the display, and so on. In this application, a FloatingWindow may be extended from the Android-native PhoneWindow, dedicated to displaying the aforementioned small window 30 so as to distinguish it from ordinary windows; this window has the property of being displayed floating at the top of the series of windows. In some optional embodiments, a suitable size for this window may be derived from the actual screen size by an optimal display algorithm. In some possible embodiments, the aspect ratio of this window may default to the screen aspect ratio of conventional mainstream mobile phones. At the same time, to make it convenient for the user to close, exit, or hide the small window, a close button and a minimize button may additionally be drawn in the upper-right corner. In addition, the window management module receives certain gesture operations from the user; if a gesture matches the operation gestures of the aforementioned small window, the window is frozen and an animation of the small window's movement is played.
The content provider is configured to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, a phone book, and so on.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including an SMS notification icon may include a view displaying text and a view displaying pictures. In this application, button views for operations such as closing and minimizing the small window may be added accordingly and bound to the FloatingWindow in the aforementioned window manager.
The phone manager is configured to provide the communication functions of the electronic device 100, for example management of the call state (including connected, hung up, and so on).
The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that remain briefly and then disappear automatically without user interaction, for example to announce a completed download or remind of a message. The notification manager may also present notifications in the form of charts or scrolling-bar text in the status bar at the top of the system, for example notifications of applications running in the background, or notifications appearing on the display in the form of a dialog window. Examples include prompting text information in the status bar, emitting a prompt tone, vibrating the electronic device, and blinking the indicator light.
The activity manager is configured to manage the activities running in the system, including information on processes (process), applications, services (service), and tasks (task). In this application, an activity task stack dedicated to managing the Activity of the application displayed in the aforementioned small window 30 may be added to the activity manager module, to ensure that the application activity and task in the small window do not conflict with the application displayed full-screen on the screen.
The Android Runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
The core libraries consist of two parts: one part is the functional functions that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is configured to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example an input manager (input manager), an input dispatcher (input dispatcher), a surface manager (surface manager), media libraries (Media Libraries), a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The input manager is responsible for obtaining event data from the underlying input driver, parsing and encapsulating it, and passing it to the input dispatcher.
The input dispatcher is configured to keep window information; after receiving an input event from the input manager, it looks for a suitable window among the windows it keeps and dispatches the event to that window.
The surface manager is configured to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of multiple common audio and video formats, as well as static image files and so on. The media libraries may support multiple audio and video encoding formats, for example MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is configured to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and so on.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following exemplarily describes the working process of the software and hardware of the electronic device 100 with reference to a photographing scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking the touch operation being a tap operation whose corresponding control is the camera application icon as an example: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer and captures a static image or video through the camera 193.
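The chain from raw input event to application launch can be illustrated with a deliberately simplified toy sketch; it mimics the flow described above and does not use Android's actual interfaces.

```python
# Toy sketch of the dispatch chain above (not Android's real APIs):
# a raw input event from the kernel layer is resolved to a control, and a
# tap on the camera icon ends with the camera driver capturing an image.
from dataclasses import dataclass
import time

@dataclass
class RawInputEvent:        # produced by the kernel layer from the interrupt
    x: int
    y: int
    timestamp: float

def control_at(x: int, y: int) -> str:
    # Framework layer: map touch coordinates to a control (stubbed here).
    return "camera_app_icon" if (0 <= x < 200 and 0 <= y < 200) else "none"

def dispatch(event: RawInputEvent) -> None:
    if control_at(event.x, event.y) == "camera_app_icon":
        # Start the camera app, which asks the kernel layer to start the
        # camera driver and capture through camera 193.
        print("camera app started; capturing via camera 193")

dispatch(RawInputEvent(x=120, y=80, timestamp=time.time()))
```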
The software system shown in FIG. 15 involves application presentation that uses the microscopic display capability (such as Gallery and File Manager), an instant sharing module providing the sharing capability, a content provider module for storing and retrieving data, the application framework layer providing WLAN and Bluetooth services, and the kernel and lower layers providing the WLAN/Bluetooth capability and basic communication protocols.
Embodiments of this application further provide a computer-readable storage medium. All or part of the procedures in the foregoing method embodiments may be completed by a computer program instructing the related hardware. The program may be stored in the foregoing computer storage medium, and when executed, the program may include the procedures of the foregoing method embodiments. The computer-readable storage medium includes any medium that can store program code, such as a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.
The foregoing embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are generated wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted through the computer-readable storage medium. The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (solid state disk, SSD)), or the like.
The steps in the methods of the embodiments of this application may be reordered, combined, or deleted according to actual needs. "Based on" and "through" in this application may be understood as "at least based on" and "at least through".
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some of their technical features, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (13)

  1. A method for identifying the hygiene status of an object, comprising:
    an electronic device determining a type of a first object;
    the electronic device capturing a first image of the first object through a first camera, wherein the first image is a micro image; and
    the electronic device outputting first prompt information based on the type of the first object and the first image, wherein the first prompt information is used to indicate the hygiene status of the first object.
  2. The method according to claim 1, wherein before the electronic device determines the type of the first object, the method further comprises: the electronic device capturing a second image of the first object through a second camera; and
    the electronic device determining the type of the first object specifically comprises: the electronic device determining the type of the first object based on the second image.
  3. The method according to claim 1, wherein the electronic device determining the type of the first object comprises: the electronic device determining the type of the first object based on a detected user operation.
  4. The method according to claim 1, wherein the electronic device determining the type of the first object comprises: the electronic device determining the type of the first object based on a second image of the first object captured by the first camera.
  5. The method according to any one of claims 1 to 4, further comprising: determining first information of the first object based on the first image, wherein the first information of the first object is associated with the hygiene status of the first object, and the first information comprises types and quantities of bacteria.
  6. The method according to claim 5, wherein the first information comprises a quantity of a first bacterium; when the quantity of the first bacterium is a first quantity, the first prompt information indicates that the hygiene status of the first object is a first hygiene status; and when the quantity of the first bacterium is a second quantity, the first prompt information indicates that the hygiene status of the first object is a second hygiene status.
  7. The method according to any one of claims 1 to 5, wherein the electronic device outputting the first prompt information comprises:
    the electronic device displaying the first image of the first object and displaying the first prompt information on the first image of the first object.
  8. The method according to any one of claims 1 to 7, wherein the first prompt information comprises a suggestion for improving the hygiene status of the first object.
  9. The method according to any one of claims 1 to 8, wherein the electronic device outputting the first prompt information based on the type of the first object and the first image comprises: the electronic device determining the hygiene status of the first object based on the first image and a knowledge graph corresponding to the type of the first object, wherein the knowledge graph comprises the bacterial species commonly associated with the type of the first object.
  10. The method according to any one of claims 2 and 5 to 9, wherein the second image further includes a second object, and the method further comprises:
    the electronic device detecting a user operation on a display region of the second object and outputting second prompt information indicating the hygiene status of the second object.
  11. The method according to any one of claims 1 to 10, wherein the first camera is a microscopic camera; the second camera is a photographic camera; the electronic device is a mobile phone; and the type of the first object is a hand.
  12. An electronic device, comprising a touchscreen, a memory, and one or more processors, wherein the memory stores one or more programs, and when the one or more processors execute the one or more programs, the electronic device is caused to implement the method according to any one of claims 1 to 11.
  13. A computer storage medium, comprising computer instructions, wherein when the computer instructions are run on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 11.
PCT/CN2021/103541 2020-06-30 2021-06-30 Method for identifying hygiene status of object and related electronic device WO2022002129A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP21834643.5A EP4167127A4 (en) 2020-06-30 2021-06-30 METHOD FOR IDENTIFYING THE HEALTH STATUS OF AN OBJECT, AND ASSOCIATED ELECTRONIC DEVICE
CN202180045358.4A Priority patent/CN115867948A/zh 2020-06-30 2021-06-30 Method for identifying hygiene status of object and related electronic device
US18/003,853 US20230316480A1 (en) 2020-06-30 2021-06-30 Method for Identifying Hygiene Status of Object and Related Electronic Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010615484.6A CN113869087A (zh) 2020-06-30 2020-06-30 Method for identifying hygiene status of object and related electronic device
CN202010615484.6 2020-06-30

Publications (1)

Publication Number Publication Date
WO2022002129A1 true WO2022002129A1 (zh) 2022-01-06

Family

ID=74232821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103541 WO2022002129A1 (zh) 2020-06-30 2021-06-30 识别物体的卫生状况方法及相关电子设备

Country Status (4)

Country Link
US (1) US20230316480A1 (zh)
EP (1) EP4167127A4 (zh)
CN (3) CN112257508B (zh)
WO (1) WO2022002129A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257508B (zh) * 2020-06-30 2022-03-11 Huawei Technologies Co., Ltd. Method for identifying hygiene status of object and related electronic device
CN113052005B (zh) * 2021-02-08 2024-02-02 Hunan University of Technology Garbage sorting method and garbage sorting apparatus for household services
CN114216222A (zh) * 2021-11-09 2022-03-22 Qingdao Haier Air Conditioner General Corp., Ltd. Control method, control system, electronic device, and medium for air-conditioner bacteria visualization
CN116205982B (zh) * 2023-04-28 2023-06-30 深圳零一生命科技有限责任公司 Microorganism counting method, apparatus, device, and storage medium based on image analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2781406Y (zh) * 2005-01-12 2006-05-17 游学珪 Pocket projection-type multifunctional hygiene display instrument
US20130245417A1 (en) * 2012-03-19 2013-09-19 Donald Spector System and method for diagnosing and treating disease
CN106415239A (zh) * 2014-05-20 2017-02-15 格里姆萨勒克健康服务和计算机产品工业贸易有限公司 Mobile microscopic imaging device capable of capturing images at different wavelengths (multispectral)
CN110879999A (zh) * 2019-11-14 2020-03-13 武汉兰丁医学高科技有限公司 Mobile-phone-based miniature microscopic image acquisition apparatus and image stitching and recognition method
CN112257508A (zh) * 2020-06-30 2021-01-22 Huawei Technologies Co., Ltd. Method for identifying hygiene status of object and related electronic device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2924442B2 (ja) * 1992-04-28 1999-07-26 Matsushita Electric Industrial Co., Ltd. Pattern recognition apparatus
JP5393310B2 (ja) * 2009-07-13 2014-01-22 Media Co., Ltd. Intraoral bacteria map creation system
US8996098B2 (en) * 2012-03-19 2015-03-31 Donald Spector System and method for diagnosing and treating disease
AU2015306692B2 (en) * 2014-08-25 2021-07-22 Creatv Microtech, Inc. Use of circulating cell biomarkers in the blood for detection and diagnosis of diseases and methods of isolating them
CN104568932B (zh) * 2014-12-24 2018-05-15 深圳市久怡科技有限公司 Substance detection method and mobile terminal
WO2017223412A1 (en) * 2016-06-24 2017-12-28 Beckman Coulter, Inc. Image atlas systems and methods
CN108303420A (zh) * 2017-12-30 2018-07-20 上饶市中科院云计算中心大数据研究院 Household-type *** quality detection method based on big data and the mobile Internet
CN108548770B (zh) * 2018-03-20 2020-10-16 合肥亨纳生物科技有限公司 Particle counter based on a portable smartphone microscope and counting method
US11892299B2 (en) * 2018-09-30 2024-02-06 Huawei Technologies Co., Ltd. Information prompt method and electronic device
CN110895968B (zh) * 2019-04-24 2023-12-15 苏州图灵微生物科技有限公司 Artificial intelligence automatic diagnosis system and method for medical images
CN111260677B (zh) * 2020-02-20 2023-03-03 Tencent Healthcare (Shenzhen) Co., Ltd. Cell analysis method, apparatus, and device based on microscopic images, and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4167127A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115249339A (zh) * 2022-06-10 2022-10-28 广州中科云图智能科技有限公司 River floating object recognition system, method, device, and storage medium
CN115249339B (zh) * 2022-06-10 2024-05-28 广州中科云图智能科技有限公司 River floating object recognition system, method, device, and storage medium

Also Published As

Publication number Publication date
US20230316480A1 (en) 2023-10-05
EP4167127A4 (en) 2023-11-22
CN113869087A (zh) 2021-12-31
CN112257508A (zh) 2021-01-22
EP4167127A1 (en) 2023-04-19
CN112257508B (zh) 2022-03-11
CN115867948A (zh) 2023-03-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21834643; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021834643; Country of ref document: EP; Effective date: 20230111)
NENP Non-entry into the national phase (Ref country code: DE)