CN112678373A - Garbage classification method, system, device, control equipment and storage medium - Google Patents

Garbage classification method, system, device, control equipment and storage medium

Info

Publication number
CN112678373A
CN112678373A (application number CN202011600238.XA)
Authority
CN
China
Prior art keywords
garbage
information
target object
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011600238.XA
Other languages
Chinese (zh)
Other versions
CN112678373B (en)
Inventor
冯大航
陈孝良
常乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd filed Critical Beijing SoundAI Technology Co Ltd
Priority to CN202011600238.XA
Publication of CN112678373A
Application granted
Publication of CN112678373B
Legal status: Active

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides a garbage classification method, system, device, control equipment and storage medium, belonging to the technical field of environmental protection equipment. The method includes: displaying a first virtual image in response to a target object appearing within a target range of a target garbage station to throw garbage; acquiring to-be-identified information of the garbage held by the target object, and identifying that information to obtain the garbage category of the garbage; and outputting first prompt information to the target object through the first virtual image, the first prompt information including the garbage category and prompting the target object to put the garbage into the target garbage can corresponding to that category. Because the first virtual image is displayed within the target range of the target garbage station and interacts with the target object to indicate the category of the garbage it holds, no garbage classification supervisor needs to be arranged beside the garbage station, which saves manpower and material resources in the garbage classification process.

Description

Garbage classification method, system, device, control equipment and storage medium
Technical Field
The present disclosure relates to the field of environmental protection equipment technologies, and in particular, to a garbage classification method, system, device, control device, and storage medium.
Background
With the rapid development of the economy, environmental pollution and ecological damage have become increasingly serious. Classifying garbage allows part of it to be recycled, which effectively reduces environmental pollution. Therefore, in order to improve the environment, garbage classification has been widely promoted in recent years.
In the related art, a garbage classification supervisor is arranged beside a garbage station, and the garbage classification supervisor guides a user who throws garbage to classify the garbage so as to remind the user to throw the garbage into a garbage bin of a corresponding category in the garbage station.
In the above related art, a garbage classification supervisor needs to be arranged beside the garbage station, which consumes a lot of manpower.
Disclosure of Invention
The embodiments of the disclosure provide a garbage classification method, a garbage classification system, a garbage classification device, a control device and a storage medium, which can save manpower and material resources in the garbage classification process. The technical scheme is as follows:
in one aspect, a garbage classification method is provided, and the method includes:
responding to a target object for throwing garbage in a target range of a target garbage station, and displaying a first virtual image;
acquiring information to be identified of the garbage held by the target object, and identifying the information to be identified to obtain the garbage category of the garbage;
and outputting first prompt information to the target object through the first virtual image, wherein the first prompt information comprises the garbage category and is used for prompting the target object to put the garbage into a target garbage can corresponding to the garbage category.
In some embodiments, the displaying a first virtual image includes:
acquiring image characteristics of the target object, and determining a first virtual image matched with the image characteristics based on the image characteristics;
and displaying the first virtual image matched with the image characteristics.
In some embodiments, the outputting of the first prompt message to the target object through the first avatar includes:
determining, based on the image characteristics, timbre information matched with the image characteristics;
and controlling the first virtual image to output the first prompt information to the target object in the matched timbre.
In some embodiments, the obtaining the image characteristics of the target object includes:
receiving a first voice signal input by the target object, performing feature extraction on the first voice signal to obtain an audio feature of the first voice signal, and using the audio feature as the image characteristic; or,
acquiring a first image of the target object, performing feature extraction on the first image to obtain an image feature of the first image, and using the image feature as the image characteristic.
In some embodiments, the obtaining information to be identified of the garbage held by the target object includes:
receiving a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage;
and taking the second voice signal as the information to be recognized.
In some embodiments, the identifying the information to be identified to obtain the garbage category of the garbage includes:
performing semantic recognition on the second voice signal to obtain semantic information of the second voice signal;
extracting the garbage name of the garbage from the semantic information;
determining a garbage category of the garbage based on the garbage name.
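As an illustration of the name-to-category step above, a minimal lookup sketch in Python (the table entries and category names are invented for illustration; the patent does not specify them):

```python
# Illustrative name-to-category lookup; the actual categories and table
# used by the patented system are not specified in the disclosure.
GARBAGE_CATEGORIES = {
    "banana peel": "kitchen waste",
    "battery": "hazardous waste",
    "plastic bottle": "recyclable waste",
    "tissue": "other waste",
}

def classify_by_name(semantic_info):
    """Scan recognized semantic text for a known garbage name and map it
    to its category; return None when no known name is found."""
    for name, category in GARBAGE_CATEGORIES.items():
        if name in semantic_info:
            return category
    return None

print(classify_by_name("what kind of garbage is a banana peel"))  # kitchen waste
```

In practice the lookup table would be far larger, or replaced by a classifier, but the extract-name-then-look-up flow matches the steps described above.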
In some embodiments, the obtaining information to be identified of the garbage held by the target object includes:
receiving a second image, wherein the second image is an image containing the garbage;
and taking the second image as the information to be identified.
In some embodiments, the identifying the information to be identified to obtain the garbage category of the garbage includes:
performing feature extraction on the second image to obtain garbage features of the garbage;
determining a garbage category of the garbage based on the garbage features.
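One hedged sketch of the image-based step above, assuming the extracted garbage feature is a numeric vector compared against per-category reference vectors (all numbers are invented; a real system would use a trained classifier):

```python
import math

# Hypothetical per-category reference feature vectors; all values here
# are invented for illustration only.
REFERENCE_FEATURES = {
    "recyclable waste": [0.9, 0.1, 0.2],
    "kitchen waste": [0.1, 0.8, 0.3],
    "hazardous waste": [0.2, 0.2, 0.9],
}

def classify_by_feature(feature):
    """Return the category whose reference vector is nearest (Euclidean)
    to the feature vector extracted from the second image."""
    return min(
        REFERENCE_FEATURES,
        key=lambda category: math.dist(feature, REFERENCE_FEATURES[category]),
    )

print(classify_by_feature([0.85, 0.15, 0.25]))  # recyclable waste
```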
In some embodiments, the outputting of the first prompt message to the target object through the first avatar includes:
generating a second avatar of the target object;
simulating a dialog scene with the target object through the first avatar and the second avatar;
and outputting the first prompt message in the conversation scene.
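The simulated dialog scene described above could be sketched as follows (the avatar names and dialogue lines are purely illustrative assumptions):

```python
def simulate_dialog(first_avatar, second_avatar, garbage_category):
    """Return a simulated exchange in which the second avatar (standing
    in for the target object) asks about the garbage and the first avatar
    outputs the first prompt information, i.e. the garbage category."""
    return [
        (second_avatar, "What kind of garbage am I holding?"),
        (first_avatar, f"It is {garbage_category}; please put it into the "
                       f"garbage can for that category."),
    ]

for speaker, line in simulate_dialog("guide", "visitor", "recyclable waste"):
    print(f"{speaker}: {line}")
```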
In some embodiments, before the obtaining of the information to be identified of the garbage held by the target object, the method further includes:
and outputting second prompt information through the first virtual image, wherein the second prompt information is used for prompting the target object to input the information to be identified.
In some embodiments, the method further comprises:
generating alarm information in response to the garbage being thrown into an area outside the target garbage can;
and outputting the alarm information through the first virtual image.
In some embodiments, after the outputting of the first prompt message to the target object through the first avatar, the method further includes:
opening a box cover of the target garbage box;
closing a lid of the target trash bin in response to a target condition being triggered;
the target condition includes: the opening duration of the lid of the target garbage can exceeds a preset duration; or, the garbage has been thrown into the target garbage can.
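The lid-closing conditions above can be sketched as a small state check (the preset duration, class, and method names are assumptions for illustration, not from the patent):

```python
class TrashBinLid:
    """Minimal model of the lid-control logic: the lid should close once
    the open duration exceeds a preset duration or garbage is detected."""

    def __init__(self, preset_duration):
        self.preset_duration = preset_duration
        self.opened_at = None  # timestamp when the lid was opened

    def open(self, now):
        self.opened_at = now

    def should_close(self, now, garbage_detected):
        if self.opened_at is None:
            return False  # lid is not open
        timed_out = now - self.opened_at > self.preset_duration
        return timed_out or garbage_detected

lid = TrashBinLid(preset_duration=10.0)
lid.open(now=0.0)
print(lid.should_close(now=5.0, garbage_detected=False))   # False
print(lid.should_close(now=5.0, garbage_detected=True))    # True: garbage thrown in
print(lid.should_close(now=12.0, garbage_detected=False))  # True: duration exceeded
```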
In another aspect, a waste classification system is provided, the waste classification system at least comprising: the system comprises a control device, a display device and an information acquisition device;
the control equipment is electrically connected with the display equipment and the information acquisition equipment respectively;
the display device is used for responding to a target object for putting garbage in a target range of a target garbage station and displaying a first virtual image;
the information acquisition equipment is used for acquiring information to be identified of the garbage held by the target object and sending the information to be identified to the control equipment;
the control equipment is used for identifying the information to be identified to obtain the garbage category of the garbage;
the display device is further configured to output first prompt information to the target object through the first avatar, where the first prompt information includes the garbage category and is used to prompt the target object to put the garbage into a target garbage bin corresponding to the garbage category.
In some embodiments, the information acquisition device comprises a signal acquisition device;
the signal acquisition equipment is used for receiving a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage, and the second voice signal is used as the information to be identified.
In some embodiments, the information acquisition device comprises an image acquisition device;
the image acquisition device is used for acquiring a second image, wherein the second image is an image containing the garbage and is used as the information to be identified.
In some embodiments, the system further comprises: a signal playing device;
the signal playing equipment is electrically connected with the control equipment;
and the signal playing device is used for playing the first prompt message.
In another aspect, there is provided a waste sorting apparatus, the apparatus comprising:
the display module is configured to respond to the target object for putting garbage in the target range of the target garbage station and display the first virtual image;
the identification module is configured to acquire to-be-identified information of the garbage held by the target object, and identify the to-be-identified information to obtain the garbage category of the garbage;
and the output module is configured to output first prompt information to the target object through the first avatar, wherein the first prompt information comprises the garbage category and is used for prompting the target object to put the garbage into a target garbage box corresponding to the garbage category.
In some embodiments, the display module comprises:
a first determination unit configured to acquire an avatar feature of the target object, and determine a first avatar matching the avatar feature based on the avatar feature;
a presentation unit configured to present a first avatar matching the avatar characteristics.
In some embodiments, the output module comprises:
a second determination unit configured to determine, based on the character feature, timbre information matching the character feature;
an output unit configured to control the first avatar to output the first prompt information to the target object in the matched timbre.
In some embodiments, the first determining unit is configured to receive a first voice signal input by the target object, perform feature extraction on the first voice signal to obtain an audio feature of the first voice signal, and use the audio feature as the character feature; or,
the first determining unit is configured to acquire a first image of the target object, perform feature extraction on the first image to obtain an image feature of the first image, and use the image feature as the character feature.
In some embodiments, the identification module comprises:
a first receiving unit configured to receive a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage;
a third determination unit configured to take the second voice signal as the information to be recognized.
In some embodiments, the identification module comprises:
the recognition unit is configured to perform semantic recognition on the second voice signal to obtain semantic information of the second voice signal;
a first extraction unit configured to extract a garbage name of the garbage from the semantic information;
a fourth determination unit configured to determine a garbage category of the garbage based on the garbage name.
In some embodiments, the identification module comprises:
a second receiving unit configured to receive a second image, the second image being an image containing the garbage;
a fifth determination unit configured to take the second image as the information to be recognized.
In some embodiments, the identification module comprises:
a second extraction unit configured to perform feature extraction on the second image to obtain a spam feature of the spam;
a sixth determining unit configured to determine a garbage category of the garbage based on the garbage feature.
In some embodiments, the output module comprises:
a generating unit configured to generate a second avatar of the target object;
a simulation unit configured to simulate a dialog scene with the target object through the first avatar and the second avatar;
the output unit is configured to output the first prompt information in the dialog scene.
In some embodiments, the output module is further configured to output, through the first avatar, second prompt information for prompting the target object to input the information to be recognized.
In some embodiments, the apparatus further comprises:
an alarm module configured to generate alarm information in response to the garbage being thrown to an area outside the target garbage bin;
the output module is further configured to output the alarm information through the first avatar.
In some embodiments, the apparatus further comprises:
an opening module configured to open a lid of the target trash bin;
a closing module configured to close a lid of the target trash bin in response to a target condition being triggered;
the target condition includes: the opening duration of the lid of the target trash bin exceeds a preset duration; or, the trash has been thrown into the target trash bin.
In another aspect, a control device is provided, which includes a processor and a memory, where at least one program code is stored, and the program code is loaded by the processor and executed to implement the operations performed by the garbage classification method according to any one of the above-mentioned implementations.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, the program code being loaded and executed by a processor to implement the operations performed by the garbage classification method according to any of the above-mentioned implementations.
In another aspect, a computer program product is provided, which stores at least one program code, which is loaded and executed by a processor to implement the garbage classification method of the above aspect.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
in the embodiment of the disclosure, the first virtual image is displayed in the target range on the target garbage station, and the information interaction is performed between the first virtual image and the target object to prompt the garbage category of the garbage held by the target object, so that the target object throws the held garbage into a correct garbage can, and therefore, garbage classification supervisors do not need to be arranged beside the garbage station, and manpower and material resources in the garbage classification process are saved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present disclosure; for those skilled in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of a garbage classification system provided by an embodiment of the present disclosure;
fig. 2 is a flowchart of a garbage classification method provided by an embodiment of the present disclosure;
fig. 3 is a flowchart of a garbage classification method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of an avatar provided by an embodiment of the present disclosure;
fig. 5 is a block diagram of a garbage classification apparatus provided in an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a control device provided in an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a garbage classification system according to an embodiment of the present disclosure. The garbage classification system can be applied to places with a large number of people streams, such as residential districts, schools, stations, airports, hospitals and the like.
Referring to fig. 1, the garbage classification system includes: a control device 101, a display device 102 and an information acquisition device 103;
the control device 101 is electrically connected to the display device 102 and the information collecting device 103 respectively;
the display device 102 is configured to display a first avatar in response to a target object for delivering garbage appearing within a target range of a target garbage station;
the information acquisition device 103 is configured to acquire to-be-identified information of the garbage held by the target object, and send the to-be-identified information to the control device 101;
the control device 101 is configured to identify the information to be identified, and obtain a garbage category of the garbage;
the display device 102 is further configured to output first prompt information to the target object through the first avatar, where the first prompt information includes the garbage category and is used to prompt the target object to put the garbage into a target garbage bin 104 corresponding to the garbage category.
The control device 101 is any device capable of receiving, transmitting, and processing information, for example, a computer. The display device 102 is used to present the first avatar for interacting with the target object. In some embodiments, the display device 102 is a display screen, for example, an OLED (Organic Light-Emitting Diode) display or an LCD (Liquid Crystal Display). In some embodiments, the display device 102 is a holographic projection device or the like. In the embodiments of the present disclosure, the type of the display device 102 is not particularly limited.
It should be noted that in some embodiments, the first avatar is always presented on the display device 102 after the garbage classification system is turned on. In some embodiments, the garbage classification system detects whether a target object for putting garbage appears in a target range of the target garbage station after being started, and displays the first virtual image only in response to the appearance of the target object.
In the embodiment of the disclosure, the first avatar is displayed in the target range of the target garbage station, and the information interaction is performed between the first avatar and the target object to prompt the garbage category of the garbage held by the target object, so that the target object throws the held garbage into the correct garbage can 104, and therefore, garbage classification supervisors do not need to be arranged beside the garbage station, and manpower and material resources in the garbage classification process are saved.
In some embodiments, the information to be recognized is voice information, and accordingly, the information collecting device 103 includes a signal collecting device; the signal acquisition equipment is used for receiving a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage, and the second voice signal is used as the information to be identified.
The signal acquisition device is used to acquire voice signals; for example, it is a microphone. In this implementation, the voice signal input by the target object is acquired through the signal acquisition device, the garbage category of the garbage held by the target object is determined from the voice signal, and information interaction with the target object is realized through the signal acquisition device, which improves the user experience. In addition, the garbage category can be determined directly from the second voice signal input by the target object, which improves the efficiency of recognizing the garbage category.
In some embodiments, the information to be identified is image information, and in response, the information acquisition device 103 comprises an image acquisition device; the image acquisition equipment is used for acquiring a second image, wherein the second image is an image containing the garbage and is used as the information to be identified.
The image acquisition device is used to acquire images; for example, it is a camera. In this implementation, the image acquisition device collects the second image containing the garbage, so that the garbage category is identified from the second image, which improves the accuracy of determining the garbage category.
In some embodiments, if the first prompt message is a text message, the garbage classification system displays the text content corresponding to the first prompt message through the display device 102, so as to output the first prompt message to the target object. In some embodiments, if the first prompt message is a voice message, the garbage classification system further includes: a signal playing device; the signal playing device is electrically connected with the control device 101; the signal playing device is used for playing the first prompt message.
In the implementation mode, the first prompt information is played through the playing device, so that the first prompt information is displayed to the target object more intuitively, and the target object can acquire the first prompt information more directly.
In some embodiments, the information acquisition device 103 is further configured to capture a first voice signal or a first image of the target object, determine an avatar characteristic of the target object based on that signal or image, and send the avatar characteristic to the control device 101; the control device 101 then determines, based on the avatar characteristic, the first avatar that interacts with the target object. For example, if the avatar characteristic of the target object indicates that the target object is a child, the first avatar is determined to be a cartoon character or the like.
In some embodiments, the garbage classification system further includes a plurality of garbage bins 104, wherein the plurality of garbage bins 104 correspond to different garbage categories respectively.
In some embodiments, the plurality of bins 104 are all in an open state. The target object directly throws the garbage into the garbage bin 104 corresponding to the garbage category of the garbage according to the first prompt message. In some embodiments, the plurality of trash bins 104 are each provided with an open button, and the trash bin 104 corresponding to the open button is opened in response to the open button corresponding to the trash bin 104 being triggered.
In some embodiments, the plurality of trash bins 104 are communicatively connected to the control device 101, and the control device 101 is further configured to send an opening instruction to the trash bin 104 corresponding to the trash category based on the identified trash category, where the opening instruction is used to instruct the trash bin 104 to open, and then the target trash bin 104 receiving the opening instruction is opened.
In this implementation, the opening of the trash can 104 is controlled by the control device 101, preventing the target object from throwing trash into the wrong trash can 104.
It should be noted that, in implementations in which the control device 101 controls the opening of the target trash can 104, the control device 101 also controls its closing. Accordingly, in some implementations, the control device 101 counts the opening duration of the target trash can 104 and, in response to the opening duration exceeding a preset duration, sends a first closing instruction to the target trash can 104, which closes based on that instruction. In some implementations, the garbage classification system further includes a plurality of sensors electrically connected to the control device 101, with at least one sensor disposed at the inlet of each garbage can 104 to sense whether garbage enters the garbage can 104. In response to garbage entering the target garbage can 104, the sensor sends a signal indicating this to the control device 101; the control device 101 generates a second closing instruction based on the signal and sends it to the target garbage can 104, which closes based on that instruction.
The sensor is an infrared sensor, an acoustic wave sensor, or the like, and in the embodiment of the present disclosure, the type of the sensor is not particularly limited. In this implementation, by detecting whether there is garbage thrown into the target garbage can 104, when there is garbage thrown into the target garbage can 104, the target garbage can 104 is closed in time, so as to prevent other types of garbage from being thrown into the target garbage can 104, thereby improving the accuracy of garbage classification.
Fig. 2 is a flowchart of a garbage classification method provided in an embodiment of the present disclosure. Referring to fig. 2, the embodiment includes:
step 201: and displaying the first virtual image in response to the target object for putting the garbage appearing in the target range of the target garbage station.
Step 202: and acquiring to-be-identified information of the garbage held by the target object, and identifying the to-be-identified information to obtain the garbage category of the garbage.
Step 203: and outputting first prompt information to the target object through the first virtual image, wherein the first prompt information comprises the garbage category and is used for prompting the target object to put the garbage into a target garbage can corresponding to the garbage category.
In some embodiments, the presenting the first avatar includes:
acquiring the image characteristics of the target object, and determining a first virtual image matched with the image characteristics based on the image characteristics;
and displaying the first virtual image matched with the image characteristic.
In some embodiments, the outputting of the first prompt to the target object through the first avatar includes:
determining, based on the image characteristics, timbre information matched with the image characteristics;
and controlling the first virtual image to output the first prompt information to the target object in the matched timbre.
In some embodiments, the obtaining the image characteristics of the target object includes:
receiving a first voice signal input by the target object, performing feature extraction on the first voice signal to obtain an audio feature of the first voice signal, and using the audio feature as the image characteristic; or,
collecting a first image of the target object, performing feature extraction on the first image to obtain an image feature of the first image, and using the image feature as the image characteristic.
In some embodiments, the obtaining information to be identified of the garbage held by the target object includes:
receiving a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage;
and taking the second voice signal as the information to be recognized.
In some embodiments, the identifying the information to be identified to obtain the garbage category of the garbage includes:
performing semantic recognition on the second voice signal to obtain semantic information of the second voice signal;
extracting the garbage name of the garbage from the semantic information;
based on the garbage name, a garbage category of the garbage is determined.
In some embodiments, the obtaining information to be identified of the garbage held by the target object includes:
receiving a second image, wherein the second image is an image containing the garbage;
and taking the second image as the information to be identified.
In some embodiments, the identifying the information to be identified to obtain the garbage category of the garbage includes:
extracting the features of the second image to obtain the garbage features of the garbage;
based on the garbage features, a garbage category of the garbage is determined.
In some embodiments, the outputting of the first prompt to the target object through the first avatar includes:
generating a second avatar for the target object;
simulating a dialog scene with the target object through the first avatar and the second avatar;
and outputting the first prompt message in the dialog scene.
In some embodiments, before the obtaining the information to be identified of the garbage held by the target object, the method further includes:
and outputting second prompt information through the first virtual image, wherein the second prompt information is used for prompting the target object to input the information to be identified.
In some embodiments, the method further comprises:
generating alarm information in response to the garbage being thrown into an area outside the target garbage can;
and outputting the alarm information through the first virtual image.
In some embodiments, after the outputting of the first prompt message to the target object via the first avatar, the method further comprises:
opening the box cover of the target dustbin;
closing a lid of the target trash bin in response to a target condition being triggered;
the target conditions include: the opening time of the box cover of the target dustbin exceeds the preset time; alternatively, the trash is dropped to the target trash bin.
In the embodiment of the disclosure, the first avatar is displayed when the target object appears within the target range of the target garbage station, and the avatar interacts with the target object to indicate the garbage category of the garbage the target object is holding, so that the target object puts the garbage into the correct garbage can. As a result, no garbage classification supervisor needs to be stationed beside the garbage station, which saves manpower and material resources in the garbage classification process.
Fig. 3 is a flowchart of a garbage classification method provided in an embodiment of the present disclosure. Referring to fig. 3, the embodiment includes:
step 301: and responding to the target object for throwing the garbage in the target range of the target garbage station, and acquiring the image characteristics of the target object by the control equipment.
The target garbage station is a garbage station equipped with the garbage classification system provided in the scheme. The garbage station comprises a plurality of garbage cans, and different garbage cans correspond to different garbage categories. The target range is a preset range where the garbage station is located. For example, the target range is a range within a circular area centered on the trash station with a target length as a radius. In the embodiments of the present disclosure, the target range is not particularly limited.
The target object is the person disposing of the garbage. In some embodiments, the character feature is an audio feature of the target object; in this step, the character feature of the target object is determined by recognizing a voice signal input by the target object. The process is as follows: the control device receives a first voice signal input by the target object, performs feature extraction on the first voice signal to obtain an audio feature of the first voice signal, and takes the audio feature as the character feature.
The first voice signal is a voice signal input by a target object. In some embodiments, the first voice signal is any voice signal emitted by the target object. In some embodiments, the first voice signal includes a target wake-up word, e.g., "hello," "hey," etc. In the embodiments of the present disclosure, this is not particularly limited.
In some embodiments, this process is implemented by a speech recognition model. Correspondingly, before this step, a speech recognition model to be trained is trained to obtain a speech recognition model for classification. The model is trained according to the preset categories of character features. For example, if the character-feature categories are the 5 categories "young man", "young woman", "old man", "old woman", and "child", the speech recognition model is trained on a large amount of speech training data so that, when it processes the first voice signal, it can classify the signal into one of the above five categories.
In this implementation, the first voice signal of the target object is recognized and the character feature of the target object is determined from it, which improves the efficiency of acquiring the character feature.
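The voice-based character-feature step can be sketched as follows. The patent does not specify the feature extraction or the classifier, so everything below is an illustrative assumption: the toy rules stand in for a trained speech recognition model that maps an audio feature vector to one of the preset categories.

```python
# Hypothetical sketch of: first voice signal -> audio feature -> character feature.
# The thresholds are placeholders for a trained classifier's decision function.

CATEGORIES = ["young man", "young woman", "old man", "old woman", "child"]

def extract_audio_feature(voice_signal):
    # Placeholder feature extraction; a real system would compute
    # acoustic features such as MFCCs from the raw waveform.
    return {"pitch_hz": voice_signal["pitch_hz"], "energy": voice_signal["energy"]}

def classify_character_feature(feature):
    # Toy decision rules standing in for the trained model's output.
    if feature["pitch_hz"] > 300:
        return "child"
    if feature["pitch_hz"] > 200:
        return "young woman" if feature["energy"] > 0.5 else "old woman"
    return "young man" if feature["energy"] > 0.5 else "old man"

def character_feature_from_voice(voice_signal):
    return classify_character_feature(extract_audio_feature(voice_signal))
```

In a deployed system both functions would be replaced by the trained speech recognition model described above; only the overall shape of the pipeline is meant to match the text.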
In some embodiments, the character feature is an appearance feature of the target object, such as height or dress. In this step, the character feature of the target object is determined by recognizing an acquired first image containing the target object. The process is as follows: the control device collects a first image of the target object, performs feature extraction on the first image to obtain image features of the first image, and takes the image features as the character feature.
The first image is an image acquired by the information acquisition device after the target object enters the target range. In some embodiments, this process is implemented by an image recognition model. Correspondingly, before this step, an image recognition model to be trained is trained, according to the preset categories of character features, to obtain an image recognition model for classification. The model training process is similar to that of the speech recognition model and is not repeated here.
In this implementation, the first image containing the target object is collected and recognized to determine the character feature of the target object, which improves the accuracy of acquiring the character feature.
It should be noted that the speech recognition model or the image recognition model is a model trained in advance, and the control device directly calls the trained speech recognition model or image recognition model in this step.
It should be noted that, in the embodiment of the present disclosure, the speech recognition model and the image recognition model may be trained by the control device itself, or by another electronic device; this is not particularly limited. When a model is trained by another electronic device, the control device sends an acquisition request for the speech recognition model or the image recognition model to that device. The other electronic device receives the request, retrieves the requested model based on the request, and sends it to the control device, which receives it.
Step 302: the control device determines, based on the character features, a first avatar matching those features.
The first avatar is an avatar to be displayed on a display device. In some embodiments, the control device stores a correspondence between character features and avatars. In this step, the control device looks up, in this correspondence, the first avatar matching the character feature of the target object.
It should be noted that the character features and the avatars may be in a many-to-one correspondence. For example, referring to fig. 4, the character features include "young man", "young woman", "old man", "old woman", and "child"; the first avatar corresponding to "young man" and "young woman" is set as a "man" avatar; the first avatar corresponding to "old man" and "old woman" is set as a "woman" avatar; and the first avatar corresponding to "child" is set as a cartoon avatar.
Alternatively, the character features and the avatars may be in one-to-one correspondence. For example, the character features include "man", "woman", and "child"; the first avatar corresponding to "man" is set as a man avatar, the first avatar corresponding to "woman" is set as a woman avatar, and the first avatar corresponding to "child" is set as a cartoon avatar.
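The stored character-feature-to-avatar correspondence is essentially a lookup table. A minimal sketch, using the many-to-one mapping from the fig. 4 example (the default avatar for unknown features is an added assumption):

```python
# Many-to-one character feature -> avatar lookup, mirroring the fig. 4 example.
FEATURE_TO_AVATAR = {
    "young man": "man avatar",
    "young woman": "man avatar",
    "old man": "woman avatar",
    "old woman": "woman avatar",
    "child": "cartoon avatar",
}

def select_first_avatar(character_feature, default="cartoon avatar"):
    # Fall back to a default avatar when the feature is unrecognized
    # (the fallback behavior is not specified in the text).
    return FEATURE_TO_AVATAR.get(character_feature, default)
```

A one-to-one correspondence is the same structure with distinct values per key.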
Step 303: the control device presents a first avatar matching the avatar characteristics.
In some embodiments, the control device directly presents the first avatar matching the avatar characteristic upon detecting that the target object comes within the target range. In some embodiments, the control device always presents the avatar, and in response to detecting the target object, changes the presented avatar to the first avatar matching the avatar characteristics of the target object.
It should be noted that the control device can actively interact with the target object while presenting the first avatar. Correspondingly, when displaying the first avatar, the control device also outputs third prompt information, which indicates to the target object that the first avatar is activated and ready to interact. For example, the third prompt message is "hello", "what garbage do you want to throw", or "good child". The third prompt message is a text message or a voice message. If it is a voice message, the control device determines a tone color matching the first avatar and plays the message with that tone color. If it is a text message, the text is displayed in the area surrounding the first avatar. In some embodiments, a speech bubble is presented beside the first avatar and the text is shown in the bubble; the form and position of the speech bubble can be set as desired, which is not particularly limited in the embodiments of the present disclosure.
In this implementation, the control device has the first avatar actively interact with the target object, which improves the user experience of the target object.
Step 304: the control device acquires the information to be identified of the garbage held by the target object.
The information to be identified is the characteristic information of the garbage. For example, the feature information is information including features such as the name, material, and shape of the trash. In some embodiments, the information to be recognized is voice information, and in this step, the control device determines the information to be recognized based on the received voice signal. The process is realized by the following steps (a1) - (a2), including:
(A1) the control device receives a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage.
(A2) The control device takes the second voice signal as the information to be recognized.
It should be noted that the second voice signal and the first voice signal may be the same voice signal or different voice signals. When they are the same voice signal, that signal already contains the information to be recognized of the garbage, for example "what garbage is banana peel" or "what garbage is chewing gum". Accordingly, after receiving the first voice signal, the control device determines the character feature of the target object based on the audio feature of the signal, takes the same signal as the information to be recognized, and identifies the garbage category of the garbage mentioned in it.
In this implementation, the control device directly takes the collected first voice signal as the second voice signal and identifies the garbage category of the garbage mentioned in it, which simplifies the garbage disposal flow.
In some embodiments, the information to be identified is image information. Then in this step the control device determines the information to be identified based on the received second image. The process is realized by the following steps (B1) - (B2), including:
(B1) the control device receives a second image, which is an image containing the spam.
In this step, the target object acquires the second image through an image acquisition device connected to the control device.
(B2) The control device takes the second image as the information to be identified.
In this implementation, the second image containing the garbage is collected and taken as the information to be identified, so the control device can directly analyze the image features of the garbage, which improves the accuracy of determining the garbage category.
It should be noted that, in some embodiments, the control device directly receives the information to be recognized input by the target object after the first avatar is displayed. In other embodiments, before acquiring the second voice signal, the control device outputs, through the first avatar, second prompt information prompting the target object to input the information to be recognized. For example, the second prompt message is "what garbage do you want to throw" or "please show the garbage to me".
Accordingly, after the control device outputs the second prompt message, it receives the information to be identified input by the target object. For example, if the second prompt message is "what garbage do you want to throw", the information to be identified is the name of the garbage spoken by the target object, or a picture of the garbage.
In this implementation, the user is prompted by the second prompt information to input the information to be identified, which improves the information interaction between the first avatar and the target object and the user experience.
Step 305: the control device identifies the information to be identified to obtain the garbage category of the garbage.
In this step, the control device extracts the feature of the trash from the information to be identified, and determines the category of the trash based on the feature of the trash. The control equipment identifies the information to be identified based on different categories of the information to be identified. In some embodiments, the information to be recognized is a second voice signal, and the control device determines the garbage category of the garbage based on the following steps (a1) - (A3).
(A1) The control device performs semantic recognition on the second voice signal to obtain semantic information of the second voice signal.
Specifically, the control device performs keyword extraction on the second voice signal to obtain at least one keyword. For example, if the second voice signal is "what garbage is banana peel", the extracted keywords include "banana peel", "is", "what", and "garbage".
(A2) The control device extracts the garbage name of the garbage from the semantic information.
The control device traverses the semantic information, finds the keyword that is a garbage name, and takes it as the garbage name. For example, it traverses the keywords "banana peel", "is", "what", and "garbage", and extracts the garbage name "banana peel".
(A3) The control device determines a garbage category of the garbage based on the garbage name.
The control device stores a correspondence between garbage names and garbage categories, and determines the garbage category corresponding to the identified garbage name from this correspondence.
In this implementation, the garbage category is determined by recognizing the semantics of the voice signal, which enriches the garbage recognition modes and simplifies the recognition process.
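Steps (A1)-(A3) can be sketched end to end. The keyword splitter and the name-to-category table below are illustrative assumptions; a real system would use a trained semantic-recognition model and a much larger stored correspondence.

```python
# Hypothetical sketch of (A1) keyword extraction, (A2) garbage-name
# lookup, and (A3) name -> category mapping.

NAME_TO_CATEGORY = {
    "banana peel": "kitchen waste",
    "chewing gum": "other waste",
    "battery": "hazardous waste",
}

def extract_keywords(utterance):
    # Crude whitespace split standing in for real semantic recognition (A1).
    return utterance.lower().replace("?", "").split()

def extract_garbage_name(keywords):
    # Traverse the semantic information for a known garbage name (A2).
    text = " ".join(keywords)
    for name in NAME_TO_CATEGORY:
        if name in text:
            return name
    return None

def garbage_category_from_speech(utterance):
    # (A3): look the extracted name up in the stored correspondence.
    name = extract_garbage_name(extract_keywords(utterance))
    return NAME_TO_CATEGORY.get(name) if name else None
```

For the example in the text, `garbage_category_from_speech("What garbage is banana peel?")` extracts "banana peel" and returns its stored category.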
In some implementations, the information to be identified is a second image, and the control device determines a garbage category of the garbage based on the following steps (B1) - (B2).
(B1) The control device performs feature extraction on the second image to obtain the garbage features of the garbage.
Specifically, the feature extraction is performed through a feature extraction model, yielding the garbage features of the garbage in the second image.
(B2) The control device determines a garbage category of the garbage based on the garbage feature.
In this step, the control device classifies the garbage based on the garbage feature to obtain the garbage category of the garbage.
In this implementation, the garbage category is determined by identifying the garbage features in the image, which improves the accuracy of determining the garbage category.
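Steps (B1)-(B2) have the same extract-then-classify shape. Both functions below are stand-ins invented for illustration; a deployed system would use a trained CNN feature extractor and classifier rather than a brightness heuristic.

```python
# Hypothetical sketch of (B1) image feature extraction and (B2)
# classification of the extracted garbage features.

GARBAGE_CATEGORIES = ["recyclable", "kitchen waste", "hazardous waste", "other waste"]

def extract_garbage_features(image_pixels):
    # Toy "feature": mean brightness of a grayscale pixel grid (B1).
    flat = [p for row in image_pixels for p in row]
    return sum(flat) / len(flat)

def classify_garbage(feature):
    # Toy thresholds standing in for the trained classifier (B2).
    if feature > 192:
        return "recyclable"
    if feature > 128:
        return "kitchen waste"
    if feature > 64:
        return "hazardous waste"
    return "other waste"

def garbage_category_from_image(image_pixels):
    return classify_garbage(extract_garbage_features(image_pixels))
```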
Step 306: the control device outputs first prompt information to the target object through the first virtual image.
The first prompt message includes the garbage category and prompts the target object to put the garbage into the target garbage can corresponding to that category. In some embodiments, the first prompt message is a text message; in this step, the control device displays it on the display device where the first avatar is shown. In other embodiments, the first prompt message is a voice message, and the control device plays it. The control device plays the first prompt message either with default tone color information, or with tone color information matched to the character feature. The latter process is as follows: the control device determines, based on the character feature, tone color information matching that feature, and then controls the first avatar to output the first prompt information with that tone color.
In this implementation, outputting the first prompt information with different tone color information matches the output voice to the first avatar, which improves the user experience.
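The tone-color matching step is another lookup, this time from character feature to timbre parameters. The voice names and parameters below are hypothetical placeholders; the text only specifies that a default is used when no match applies.

```python
# Hypothetical character feature -> tone color (timbre) lookup with a
# default fallback, as described for playing the first prompt message.

TIMBRE_BY_FEATURE = {
    "child": {"voice": "cartoon_voice", "pitch_shift": 4},
    "young man": {"voice": "man_voice", "pitch_shift": 0},
    "old woman": {"voice": "woman_voice", "pitch_shift": -2},
}

DEFAULT_TIMBRE = {"voice": "neutral_voice", "pitch_shift": 0}

def timbre_for(character_feature):
    return TIMBRE_BY_FEATURE.get(character_feature, DEFAULT_TIMBRE)

def speak_prompt(prompt_text, character_feature):
    # A real system would hand the text and timbre to a TTS engine;
    # here we just return the rendering request.
    timbre = timbre_for(character_feature)
    return {"text": prompt_text, **timbre}
```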
It should be noted that, in some implementations, the control device can further generate a second avatar based on the character feature, the second avatar being an avatar corresponding to the target object. Actions and other information produced by the target object during the interaction are mapped onto the second avatar, and the first avatar outputs the first prompt information to the second avatar, realizing information interaction between the two avatars. The process is: generate a second avatar for the target object; simulate a dialog scene with the target object through the first avatar and the second avatar; and output the first prompt message in the dialog scene.
In some embodiments, the control device also monitors whether the target object throws garbage outside the garbage can. Accordingly, before this step, the control device generates a standard scene based on the current scene, the standard scene being a scene in which no garbage is present within the target range of the target garbage station. The control device then acquires pictures of the target range and compares them with the standard scene to determine whether garbage has been thrown into an area outside the target garbage can. If so, the control device generates alarm information and outputs it through the first avatar; if not, it continues to monitor the target range. The process of outputting the alarm information through the first avatar is similar to that of outputting the first prompt information and is not repeated here.
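The standard-scene comparison can be sketched as a frame difference against the stored reference. The pixel-diff threshold and grayscale-grid frames are simplifying assumptions; the patent only says the current picture is compared with the standard scene.

```python
# Hypothetical littering monitor: compare the current frame against the
# stored "standard scene" and raise an alarm when the difference is large.

def frame_diff(standard_frame, current_frame):
    # Fraction of pixels that differ noticeably between the two frames.
    changed = 0
    total = 0
    for row_s, row_c in zip(standard_frame, current_frame):
        for ps, pc in zip(row_s, row_c):
            total += 1
            if abs(ps - pc) > 20:
                changed += 1
    return changed / total

def littering_detected(standard_frame, current_frame, threshold=0.05):
    return frame_diff(standard_frame, current_frame) > threshold

def make_alarm(avatar_name):
    # Per the text, the first avatar voices the alarm information.
    return f"[{avatar_name}] Please pick up the garbage and put it in the bin."
```

A production system would more likely use background subtraction (e.g., OpenCV's `createBackgroundSubtractorMOG2`) than a raw pixel diff, but the control flow is the same.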
In some embodiments, the control device also controls the lid of the target garbage can. Correspondingly, the control device opens the lid of the target garbage can and closes it in response to a target condition being triggered, where the target condition includes: the lid of the target garbage can has been open longer than a preset duration; or, the garbage has been dropped into the target garbage can.
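The open/close logic above amounts to a small state machine with two close triggers. A minimal sketch (the 30-second default timeout is an assumption; time is passed in explicitly to keep the logic testable):

```python
# Hypothetical lid controller: close when the open duration exceeds the
# preset time, or when garbage is detected dropping into the bin.

class LidController:
    def __init__(self, timeout_s=30):
        self.timeout_s = timeout_s
        self.opened_at = None  # None means the lid is closed

    def open_lid(self, now_s):
        self.opened_at = now_s

    def should_close(self, now_s, garbage_dropped):
        if self.opened_at is None:
            return False
        timed_out = (now_s - self.opened_at) >= self.timeout_s
        return timed_out or garbage_dropped

    def maybe_close(self, now_s, garbage_dropped):
        # Returns True if the lid was closed on this check.
        if self.should_close(now_s, garbage_dropped):
            self.opened_at = None
            return True
        return False
```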
In the embodiment of the disclosure, the first avatar is displayed when the target object appears within the target range of the target garbage station, and the avatar interacts with the target object to indicate the garbage category of the garbage the target object is holding, so that the target object puts the garbage into the correct garbage can. As a result, no garbage classification supervisor needs to be stationed beside the garbage station, which saves manpower and material resources in the garbage classification process.
Fig. 5 is a block diagram of a garbage classification apparatus provided in an embodiment of the present disclosure. Referring to fig. 5, the apparatus includes:
a display module 501 configured to display a first avatar in response to a target object appearing within the target range of a target garbage station to dispose of garbage;
an identifying module 502, configured to obtain to-be-identified information of the garbage held by the target object, and identify the to-be-identified information to obtain a garbage category of the garbage;
an output module 503 configured to output first prompt information to the target object through the first avatar, where the first prompt information includes the garbage category and is used to prompt the target object to put the garbage into a target garbage bin corresponding to the garbage category.
In some embodiments, the display module 501 includes:
a first determination unit configured to acquire an avatar feature of the target object, and determine a first avatar matching the avatar feature based on the avatar feature;
and the display unit is configured to display the first virtual image matched with the image characteristic.
In some embodiments, the output module 503 includes:
a second determination unit configured to determine tone color information matching the character feature based on the character feature;
an output unit configured to control the first avatar to output the first prompt information to the target object through the tone information.
In some embodiments, the first determining unit is configured to receive a first voice signal input by the target object, perform feature extraction on the first voice signal to obtain an audio feature of the first voice signal, and use the audio feature as the character feature; or,
the first determining unit is configured to acquire a first image of the target object, perform feature extraction on the first image to obtain an image feature of the first image, and use the image feature as the character feature.
In some embodiments, the identification module 502 includes:
a first receiving unit configured to receive a second voice signal input by the target object, the second voice signal being a voice signal for inquiring about a garbage category of the garbage;
a third determining unit configured to take the second voice signal as the information to be recognized.
In some embodiments, the identification module 502 includes:
the recognition unit is configured to perform semantic recognition on the second voice signal to obtain semantic information of the second voice signal;
a first extraction unit configured to extract a garbage name of the garbage from the semantic information;
a fourth determination unit configured to determine a garbage category of the garbage based on the garbage name.
In some embodiments, the identification module 502 includes:
a second receiving unit configured to receive a second image, the second image being an image containing the garbage;
a fifth determination unit configured to take the second image as the information to be recognized.
In some embodiments, the identification module 502 includes:
a second extraction unit configured to perform feature extraction on the second image to obtain a spam feature of the spam;
a sixth determining unit configured to determine a garbage category of the garbage based on the garbage feature.
In some embodiments, the output module 503 includes:
a generating unit configured to generate a second avatar of the target object;
a simulation unit configured to simulate a dialog scene with the target object through the first avatar and the second avatar;
the output unit is configured to output the first prompt information in the dialog scene.
In some embodiments, the output module 503 is further configured to output a second prompt message through the first avatar, where the second prompt message is used to prompt the target object to input the information to be recognized.
In some embodiments, the apparatus further comprises:
an alarm module configured to generate alarm information in response to the garbage being thrown to an area outside the target garbage can;
the output module 503 is further configured to output the alarm information through the first avatar.
In some embodiments, the apparatus further comprises:
an opening module configured to open a lid of the target trash can;
a closing module configured to close a lid of the target trash bin in response to a target condition being triggered;
the target conditions include: the opening time of the box cover of the target dustbin exceeds the preset time; alternatively, the trash is dropped to the target trash bin.
In the embodiment of the disclosure, the first avatar is displayed when the target object appears within the target range of the target garbage station, and the avatar interacts with the target object to indicate the garbage category of the garbage the target object is holding, so that the target object puts the garbage into the correct garbage can. As a result, no garbage classification supervisor needs to be stationed beside the garbage station, which saves manpower and material resources in the garbage classification process.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the garbage classification device provided in the above embodiment, only the division of each function module is exemplified when performing garbage classification, and in practical applications, the function distribution may be completed by different function modules according to needs, that is, the internal structure of the device is divided into different function modules to complete all or part of the functions described above. In addition, the garbage classification device provided by the above embodiment and the garbage classification method embodiment belong to the same concept, and specific implementation processes thereof are described in the method embodiment and are not described herein again.
Fig. 6 shows a block diagram of a control device 600 according to an exemplary embodiment of the present disclosure. The control device 600 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
In general, the control device 600 includes: a processor 601 and a memory 602.
The processor 601 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 601 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 601 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 601 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, processor 601 may also include an AI (Artificial Intelligence) processor for processing computational operations related to machine learning.
The memory 602 may include one or more computer-readable storage media, which may be non-transitory. The memory 602 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 602 is used to store at least one instruction for execution by processor 601 to implement the garbage classification method provided by the method embodiments herein.
In some embodiments, the control device 600 may further optionally include: a peripheral interface 603 and at least one peripheral. The processor 601, memory 602, and peripheral interface 603 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 603 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 604, a touch screen display 605, a camera assembly 606, an audio circuit 607, a positioning component 608, and a power supply 609.
The peripheral interface 603 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 601 and the memory 602. In some embodiments, the processor 601, memory 602, and peripheral interface 603 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 601, the memory 602, and the peripheral interface 603 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 604 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 604 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 604 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 604 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 604 may communicate with other electronic devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 604 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 605 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 605 is a touch display screen, the display screen 605 also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 601 as a control signal for processing. In this case, the display screen 605 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 605, disposed on the front panel of the control device 600; in other embodiments, there may be at least two display screens 605, disposed on different surfaces of the control device 600 or in a folded design; in still other embodiments, the display screen 605 may be a flexible display disposed on a curved or folding surface of the control device 600. The display screen 605 may even be arranged in an irregular, non-rectangular shape, i.e., a shaped screen. The display screen 605 may be an LCD (Liquid Crystal Display) screen, an OLED (Organic Light-Emitting Diode) screen, or the like.
The camera assembly 606 is used to capture images or video. Optionally, the camera assembly 606 includes a front camera and a rear camera. Generally, the front camera is disposed on a front panel of the electronic device, and the rear camera is disposed on a rear surface of the electronic device. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, the main camera and the wide-angle camera are fused to realize panoramic and VR (Virtual Reality) shooting functions, or other fused shooting functions are realized. In some embodiments, the camera assembly 606 may also include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 607 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert the sound waves into electrical signals, and input the electrical signals to the processor 601 for processing or to the radio frequency circuit 604 to realize voice communication. For stereo capture or noise reduction purposes, there may be multiple microphones, disposed at different locations of the control device 600. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 601 or the radio frequency circuit 604 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into a sound wave audible to humans, or convert an electrical signal into a sound wave inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 607 may also include a headphone jack.
The positioning component 608 is used to determine the current geographic location of the control device 600 for navigation or LBS (Location Based Service). The positioning component 608 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 609 is used to supply power to the various components in the control device 600. The power supply 609 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 609 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, the control device 600 also includes one or more sensors 610. The one or more sensors 610 include, but are not limited to: acceleration sensor 611, gyro sensor 612, pressure sensor 613, fingerprint sensor 614, optical sensor 615, and proximity sensor 616.
The acceleration sensor 611 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with respect to the control device 600. For example, the acceleration sensor 611 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 601 may control the touch display screen 605 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 611. The acceleration sensor 611 may also be used to collect motion data of a game or of the user.
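The landscape/portrait decision described above can be illustrated with a minimal sketch. The function name, axis convention, and tie-breaking rule below are illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch of the orientation decision in the paragraph above:
# compare the gravity components on the device's x and y axes (axis names
# and the tie-breaking rule are assumptions for illustration).
def choose_orientation(gx: float, gy: float) -> str:
    """Return 'portrait' when gravity lies mainly along the y axis,
    'landscape' when it lies mainly along the x axis."""
    if abs(gy) >= abs(gx):
        return "portrait"
    return "landscape"

print(choose_orientation(0.1, -9.8))  # device upright -> portrait
print(choose_orientation(9.8, 0.2))   # device on its side -> landscape
```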
The gyro sensor 612 may detect a body direction and a rotation angle of the control apparatus 600, and the gyro sensor 612 may cooperate with the acceleration sensor 611 to acquire a 3D motion of the user on the control apparatus 600. The processor 601 may implement the following functions according to the data collected by the gyro sensor 612: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 613 may be disposed on a side bezel of the control device 600 and/or on a lower layer of the touch display screen 605. When the pressure sensor 613 is disposed on the side bezel of the control device 600, a user's grip signal on the control device 600 can be detected, and the processor 601 performs left-hand/right-hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 613. When the pressure sensor 613 is disposed on the lower layer of the touch display screen 605, the processor 601 controls an operability control on the UI according to the user's pressure operation on the touch display screen 605. The operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 614 is used to collect a user's fingerprint, and the processor 601 identifies the user's identity from the fingerprint collected by the fingerprint sensor 614, or the fingerprint sensor 614 itself identifies the user's identity from the collected fingerprint. Upon identifying the user's identity as a trusted identity, the processor 601 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 614 may be disposed on the front, back, or side of the control device 600. When a physical button or a vendor logo is provided on the control device 600, the fingerprint sensor 614 may be integrated with the physical button or the vendor logo.
The optical sensor 615 is used to collect the ambient light intensity. In one embodiment, processor 601 may control the display brightness of touch display 605 based on the ambient light intensity collected by optical sensor 615. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 605 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 605 is turned down. In another embodiment, the processor 601 may also dynamically adjust the shooting parameters of the camera assembly 606 according to the ambient light intensity collected by the optical sensor 615.
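The brightness adjustment described above amounts to mapping ambient light intensity onto a display brightness level. A minimal sketch, with an assumed linear mapping and invented parameter names and constants:

```python
def adjust_brightness(ambient_lux: float,
                      min_brightness: float = 0.1,
                      max_brightness: float = 1.0,
                      max_lux: float = 1000.0) -> float:
    """Map ambient light linearly onto a brightness level clamped to
    [min_brightness, max_brightness]; all constants are illustrative."""
    level = ambient_lux / max_lux
    return max(min_brightness, min(max_brightness, level))

print(adjust_brightness(500.0))   # moderate light -> 0.5
```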
The proximity sensor 616, also called a distance sensor, is typically disposed on the front panel of the control device 600. The proximity sensor 616 is used to capture the distance between the user and the front face of the control device 600. In one embodiment, when the proximity sensor 616 detects that the distance between the user and the front face of the control device 600 gradually decreases, the processor 601 controls the touch display screen 605 to switch from the bright-screen state to the dark-screen state; when the proximity sensor 616 detects that the distance between the user and the front face of the control device 600 gradually increases, the processor 601 controls the touch display screen 605 to switch from the dark-screen state to the bright-screen state.
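The proximity-based screen switching can be sketched as a small state update. The state names and the comparison of consecutive distance readings are assumptions for illustration:

```python
def next_screen_state(prev_distance_cm: float,
                      curr_distance_cm: float,
                      current_state: str) -> str:
    """Switch to the dark-screen state as the user approaches the front
    face, and back to the bright-screen state as the user moves away;
    an unchanged distance keeps the current state."""
    if curr_distance_cm < prev_distance_cm:
        return "dark"
    if curr_distance_cm > prev_distance_cm:
        return "bright"
    return current_state

print(next_screen_state(30.0, 10.0, "bright"))  # approaching -> dark
```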
Those skilled in the art will appreciate that the configuration shown in fig. 6 does not constitute a limitation of the control device 600, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions executable by a processor in an electronic device to perform the garbage classification method in the above embodiments, is also provided. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is merely exemplary of the present disclosure and is not intended to limit the present disclosure; any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (19)

1. A method of sorting waste, the method comprising:
in response to a target object throwing garbage within a target range of a target garbage station, displaying a first virtual image;
acquiring information to be identified of the garbage held by the target object, and identifying the information to be identified to obtain the garbage category of the garbage;
and outputting first prompt information to the target object through the first virtual image, wherein the first prompt information comprises the garbage category and is used for prompting the target object to put the garbage into a target garbage can corresponding to the garbage category.
2. The method of claim 1, wherein said presenting a first avatar comprises:
acquiring image characteristics of the target object, and determining a first virtual image matched with the image characteristics based on the image characteristics;
and displaying the first virtual image matched with the image characteristics.
3. The method of claim 2, wherein outputting first prompt information to the target object via the first avatar comprises:
determining tone color information matched with the image features based on the image features;
and controlling the first virtual image, and outputting the first prompt information to the target object through the tone information.
4. The method of claim 2, wherein the obtaining of the image characteristics of the target object comprises:
receiving a first voice signal input by the target object, performing feature extraction on the first voice signal to obtain an audio feature of the first voice signal, and taking the audio feature as the image characteristic; or,
acquiring a first image of the target object, performing feature extraction on the first image to obtain an image feature of the first image, and taking the image feature of the first image as the image characteristic of the target object.
5. The method according to claim 1, wherein the obtaining information to be identified of the garbage held by the target object comprises:
receiving a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage;
and taking the second voice signal as the information to be recognized.
6. The method according to claim 5, wherein the identifying the information to be identified to obtain the garbage category of the garbage comprises:
performing semantic recognition on the second voice signal to obtain semantic information of the second voice signal;
extracting the garbage name of the garbage from the semantic information;
determining a garbage category of the garbage based on the garbage name.
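Claims 5 and 6 describe recognizing a spoken query, extracting a garbage name, and determining the category from the name. A minimal sketch of the name-to-category step; the category table and garbage names are invented for illustration, and the speech and semantic recognition are assumed to happen upstream:

```python
from typing import Optional

# Illustrative category table; entries are examples, not from the patent.
GARBAGE_CATEGORIES = {
    "banana peel": "kitchen waste",
    "battery": "hazardous waste",
    "plastic bottle": "recyclable waste",
}

def classify_from_query(semantic_text: str) -> Optional[str]:
    """Scan recognized query text for a known garbage name and return
    its category; None when no known name appears."""
    text = semantic_text.lower()
    for name, category in GARBAGE_CATEGORIES.items():
        if name in text:
            return category
    return None

print(classify_from_query("Which bin does a battery go in?"))  # hazardous waste
```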
7. The method according to claim 1, wherein the obtaining information to be identified of the garbage held by the target object comprises:
receiving a second image, wherein the second image is an image containing the garbage;
and taking the second image as the information to be identified.
8. The method according to claim 7, wherein the identifying the information to be identified to obtain the garbage category of the garbage comprises:
performing feature extraction on the second image to obtain garbage features of the garbage;
determining a garbage category of the garbage based on the garbage features.
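Claim 8's image path (feature extraction, then category determination from the features) can be sketched as a nearest-reference comparison. The two-dimensional feature vectors and reference values below are toy stand-ins for a real feature extractor and classifier:

```python
import math

# Toy per-category reference feature vectors (all values invented).
CATEGORY_FEATURES = {
    "recyclable waste": [1.0, 0.0],
    "kitchen waste": [0.0, 1.0],
}

def classify_features(feature_vec):
    """Return the category whose reference vector is closest to the
    extracted garbage features (Euclidean distance)."""
    return min(CATEGORY_FEATURES,
               key=lambda c: math.dist(feature_vec, CATEGORY_FEATURES[c]))

print(classify_features([0.9, 0.1]))  # recyclable waste
```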
9. The method according to any one of claims 1-8, wherein said outputting a first prompt to said target object via said first avatar comprises:
generating a second avatar of the target object;
simulating a dialog scene with the target object through the first avatar and the second avatar;
and outputting the first prompt message in the conversation scene.
10. The method according to any one of claims 1 to 8, wherein before the obtaining of the information to be identified of the garbage held by the target object, the method further comprises:
and outputting second prompt information through the first virtual image, wherein the second prompt information is used for prompting the target object to input the information to be identified.
11. The method of claim 1, further comprising:
generating alarm information in response to the garbage being thrown into an area outside the target garbage can;
and outputting the alarm information through the first virtual image.
12. The method of claim 1, wherein after outputting the first prompt message to the target object via the first avatar, the method further comprises:
opening a lid of the target garbage can;
closing the lid of the target garbage can in response to a target condition being triggered;
wherein the target condition comprises: an open duration of the lid of the target garbage can exceeding a preset duration; or, the garbage having been thrown into the target garbage can.
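The target condition in claim 12 combines a timeout with a deposit event. A minimal sketch, with an assumed preset duration:

```python
def should_close_lid(seconds_open: float,
                     garbage_deposited: bool,
                     preset_seconds: float = 30.0) -> bool:
    """Close the lid once the garbage has been thrown in, or when the
    lid has stayed open longer than the preset duration (value assumed)."""
    return garbage_deposited or seconds_open > preset_seconds
```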
13. A waste classification system, characterized in that it comprises at least: the system comprises a control device, a display device and an information acquisition device;
the control equipment is electrically connected with the display equipment and the information acquisition equipment respectively;
the display device is used for displaying a first virtual image in response to a target object throwing garbage within a target range of a target garbage station;
the information acquisition equipment is used for acquiring information to be identified of the garbage held by the target object and sending the information to be identified to the control equipment;
the control equipment is used for identifying the information to be identified to obtain the garbage category of the garbage;
the display device is further configured to output first prompt information to the target object through the first avatar, where the first prompt information includes the garbage category and is used to prompt the target object to put the garbage into a target garbage bin corresponding to the garbage category.
14. The system of claim 13, wherein the information-gathering device comprises a signal-gathering device;
the signal acquisition equipment is used for receiving a second voice signal input by the target object, wherein the second voice signal is a voice signal for inquiring the garbage category of the garbage, and the second voice signal is used as the information to be identified.
15. The system of claim 13, wherein the information-gathering device comprises an image-gathering device;
the image acquisition device is used for acquiring a second image, wherein the second image is an image containing the garbage and is used as the information to be identified.
16. The system of claim 13, further comprising: a signal playing device;
the signal playing equipment is electrically connected with the control equipment;
and the signal playing device is used for playing the first prompt message.
17. A waste sorting device, characterized in that the device comprises:
the display module is configured to display a first virtual image in response to a target object throwing garbage within a target range of a target garbage station;
the identification module is configured to acquire to-be-identified information of the garbage held by the target object, and identify the to-be-identified information to obtain the garbage category of the garbage;
and the output module is configured to output first prompt information to the target object through the first avatar, wherein the first prompt information comprises the garbage category and is used for prompting the target object to put the garbage into a target garbage box corresponding to the garbage category.
18. A control device, characterized in that it comprises a processor and a memory in which at least one program code is stored, which is loaded and executed by the processor to implement the operations performed by the garbage classification method according to any one of claims 1 to 12.
19. A computer-readable storage medium having stored therein at least one program code, the program code being loaded into and executed by a processor to perform operations performed by the method of garbage classification of any one of claims 1 to 12.
CN202011600238.XA 2020-12-30 2020-12-30 Garbage classification method, system, device, control equipment and storage medium Active CN112678373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011600238.XA CN112678373B (en) 2020-12-30 2020-12-30 Garbage classification method, system, device, control equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112678373A true CN112678373A (en) 2021-04-20
CN112678373B CN112678373B (en) 2022-07-15

Family

ID=75454410


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113830459A (en) * 2021-09-24 2021-12-24 北京声智科技有限公司 Garbage can control method and device and electronic equipment
CN113844797A (en) * 2021-09-22 2021-12-28 成都鲁易科技有限公司 Control method and device for intelligent classification dustbin and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109051405A (en) * 2018-08-31 2018-12-21 深圳市研本品牌设计有限公司 A kind of intelligent dustbin and storage medium
CN109250353A (en) * 2018-08-30 2019-01-22 深圳市研本品牌设计有限公司 A kind of dustbin and storage medium with image identification function
WO2019061947A1 (en) * 2017-09-30 2019-04-04 深圳利万联科技有限公司 Intelligent collection system for classified garbage and method therefor
CN110482072A (en) * 2019-07-02 2019-11-22 上海净收智能科技有限公司 Refuse classification method, system, medium, garbage containing device and cloud platform
CN110822648A (en) * 2019-11-25 2020-02-21 广东美的制冷设备有限公司 Air conditioner, control method thereof, and computer-readable storage medium
CN111232483A (en) * 2020-01-16 2020-06-05 上海思依暄机器人科技股份有限公司 Garbage classification method and device and garbage can
CN111776536A (en) * 2020-07-07 2020-10-16 云知声智能科技股份有限公司 Intelligent garbage classification putting system and method
CN111907959A (en) * 2020-08-12 2020-11-10 山西全云平台大数据有限公司 Intelligent garbage recycling system based on artificial intelligence and big data technology
CN111959995A (en) * 2020-08-10 2020-11-20 昆明理工大学 Garbage classification voice interaction system based on ROS




Similar Documents

Publication Publication Date Title
CN112911182B (en) Game interaction method, device, terminal and storage medium
CN110659542B (en) Monitoring method and device
CN108920059A (en) Message treatment method and mobile terminal
CN111359209B (en) Video playing method and device and terminal
CN110708630B (en) Method, device and equipment for controlling earphone and storage medium
CN110933468A (en) Playing method, playing device, electronic equipment and medium
CN112678373B (en) Garbage classification method, system, device, control equipment and storage medium
CN109215683A (en) A kind of reminding method and terminal
CN111028566A (en) Live broadcast teaching method, device, terminal and storage medium
CN111027490A (en) Face attribute recognition method and device and storage medium
CN108920572A (en) bus information processing method and mobile terminal
CN111613213A (en) Method, device, equipment and storage medium for audio classification
CN109949809A (en) A kind of sound control method and terminal device
CN114093360A (en) Calling method, calling device, electronic equipment and storage medium
CN109451158A (en) A kind of based reminding method and device
CN112990038A (en) Escalator safety reminding method and device and computer storage medium
CN110933454B (en) Method, device, equipment and storage medium for processing live broadcast budding gift
CN111986700A (en) Method, device, equipment and storage medium for triggering non-contact operation
CN111341317A (en) Method and device for evaluating awakening audio data, electronic equipment and medium
CN110992954A (en) Method, device, equipment and storage medium for voice recognition
CN112866470A (en) Incoming call processing method and device, electronic equipment and medium
CN114595019A (en) Theme setting method, device and equipment of application program and storage medium
CN114594751A (en) Vehicle function testing method, device, equipment and computer readable storage medium
CN113408809A (en) Automobile design scheme evaluation method and device and computer storage medium
CN110798572A (en) Method, device, electronic equipment and medium for lighting screen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant