CN117284663B - Garden garbage treatment system and method

Info

Publication number
CN117284663B
Authority
CN
China
Prior art keywords
garbage
unmanned aerial vehicle
image
acquisition
Legal status: Active
Application number
CN202311460628.5A
Other languages
Chinese (zh)
Other versions
CN117284663A (en)
Inventor
武鸿源
朱文
Current Assignee
Beijing Anling Ecological Construction Co., Ltd.
Original Assignee
Beijing Anling Ecological Construction Co., Ltd.
Application filed by Beijing Anling Ecological Construction Co., Ltd.
Priority to CN202311460628.5A
Publication of CN117284663A
Application granted
Publication of CN117284663B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B65 CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65F GATHERING OR REMOVAL OF DOMESTIC OR LIKE REFUSE
    • B65F1/00 Refuse receptacles; Accessories therefor
    • B65F1/14 Other constructional features; Accessories
    • B65F1/1484 Accessories relating to the adaptation of receptacles to carry identification means
    • B65F2210/00 Equipment of refuse receptacles
    • B65F2210/138 Identification means
    • B65F2210/152 Material detecting means
    • B65F2210/165 Remote controls
    • E FIXED CONSTRUCTIONS
    • E01 CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01H STREET CLEANING; CLEANING OF PERMANENT WAYS; CLEANING BEACHES; DISPERSING OR PREVENTING FOG IN GENERAL; CLEANING STREET OR RAILWAY FURNITURE OR TUNNEL WALLS
    • E01H1/00 Removing undesirable matter from roads or like surfaces, with or without moistening of the surface
    • E01H1/02 Brushing apparatus, e.g. with auxiliary instruments for mechanically loosening dirt
    • E01H1/04 Brushing apparatus taking up the sweepings, e.g. for collecting, for loading


Abstract

The invention discloses a garden garbage treatment system and method. The system comprises garbage cans, a first unmanned aerial vehicle parking apron, unmanned aerial vehicles, a cleaning end, a cloud server, and a garbage monitoring platform and a second unmanned aerial vehicle parking apron arranged on each garbage can. The cloud server preprocesses a garden route map to generate a total patrol route; it counts the roads to which all garbage cans awaiting treatment belong and plans a garbage-cleaning initial route; it removes the garbage-cleaning initial route from the total patrol route to generate an unmanned aerial vehicle acquisition route and sends a video acquisition instruction to the unmanned aerial vehicles; and it receives the acquired video transmitted by the Internet of Things module, preprocesses it to identify garbage in the images, generates a garbage-cleaning final route by combining the roads on which garbage was identified with the garbage-cleaning initial route, and sends a cleaning task to the cleaning end. The invention provides an integrated garden garbage treatment system that raises the automation and intelligence of garden garbage cleaning.

Description

Garden garbage treatment system and method
[ Field of technology ]
The invention relates to the technical field of gardens, and in particular to a garden garbage treatment system and method.
[ Background Art ]
With the acceleration of urban landscaping, the construction of urban gardens has received increasing attention. Garbage disposal in gardens is consequently of great value in the intelligent-garden field and plays an important role in building intelligent green cities and safeguarding citizens' quality of life.
The traditional garden garbage disposal mode relies mainly on manual inspection and has the following problems:
Manual inspection is inefficient, time-consuming and labor-intensive, and cannot cover large garden areas;
The filling state of garbage cans cannot be monitored in real time, so garbage is cleaned late or overflows and scatters;
Garbage cleaning efficiency is low and resources are wasted.
[ Invention ]
In view of the above, embodiments of the invention provide a garden garbage treatment system and method.
In a first aspect, an embodiment of the present invention provides a garden garbage treatment system, comprising garbage cans, a first unmanned aerial vehicle parking apron, unmanned aerial vehicles, a cleaning end, a cloud server, and a garbage monitoring platform and a second unmanned aerial vehicle parking apron arranged on each garbage can;
the garbage cans are arranged beside the garden road;
The first unmanned aerial vehicle parking apron comprises a plurality of parking places for parking and charging of the unmanned aerial vehicle;
The garbage monitoring platform is fixed on the garbage can and comprises a box body and a processor, a sensor, a GPS module, an Internet of Things module and a power module arranged in the box body; the Internet of Things module is further used for receiving video data shared by an unmanned aerial vehicle parked on the second unmanned aerial vehicle parking apron, preprocessing the video data and forwarding it to the cloud server;
The unmanned aerial vehicle is used for executing cloud-server instructions to take off from and land on the first or second unmanned aerial vehicle parking apron, to complete road video acquisition along a preset unmanned aerial vehicle inspection route, and to share the acquired video with the Internet of Things module;
the second unmanned aerial vehicle parking apron is fixed above the box body, is electrically connected with the power module, and is provided with a parking place for parking and charging the unmanned aerial vehicle;
The cloud server is used for preprocessing a garden route map to generate a total patrol route; when the garbage height in a target garbage can reaches a first height threshold, taking that garbage can as a garbage can to be treated, and when the number of garbage cans to be treated reaches a first number threshold, or the garbage height in a garbage can to be treated is detected to reach a second height threshold, counting the roads to which all garbage cans to be treated belong and planning a garbage-cleaning initial route; removing the garbage-cleaning initial route from the total patrol route to generate an unmanned aerial vehicle acquisition route, and sending a video acquisition instruction to the unmanned aerial vehicle; and receiving the acquired video transmitted by the Internet of Things module, preprocessing it to identify garbage in the images, generating a garbage-cleaning final route by combining the roads on which target garbage was identified with the garbage-cleaning initial route, and sending a cleaning task to the cleaning end;
The cleaning end is used for acquiring the filling-height data of all garbage cans and the cleaning task sent by the cloud server, and cleaning the road garbage and the garbage cans according to the garbage-cleaning final route.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which the sensor includes an infrared sensor and a smoke sensor: the infrared sensor is used for measuring the filling height of garbage in the garbage can, and the smoke sensor is used for detecting fire in the garbage can; the processor is further used for receiving the data transmitted by the smoke sensor, processing it and sending it to the cloud server through the Internet of Things module; and the cloud server is further used for: generating a fire alarm when a fire is detected and sending the fire alarm, which comprises the location of the garbage can and the fire detection time, to the cleaning end; judging whether an unmanned aerial vehicle is parked on the second unmanned aerial vehicle parking apron above the alarming garbage can, and if so, sending the unmanned aerial vehicle a photographing instruction and a photo-sending instruction aimed at the alarming garbage can together with a return instruction for returning to the first unmanned aerial vehicle parking apron, then acquiring the photo sent by the unmanned aerial vehicle, preprocessing it and forwarding it to the cleaning end; and, when the fire is detected to be extinguished, generating fire-extinguished information and sending it to the cleaning end.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which acquiring the photo sent by the unmanned aerial vehicle, preprocessing it and forwarding it to the cleaning end specifically includes:
The unmanned aerial vehicle preprocesses the shot images of the fire alarm dustbin to generate difference data and complete image data of a change area, and transmits the difference data to the cloud server through a mobile network after compression;
The specific method for generating the difference data and the complete image data of the change area is as follows:
Dividing the image into different areas by adopting a region-based image segmentation algorithm;
identifying, based on change-detection technology, dynamic areas in which color, brightness or motion characteristics have changed;
carrying out data coding on the identified dynamic region, and capturing the change information of the region to generate difference data;
Encoding the dynamic region by adopting a compression encoding technology based on a motion vector, and combining the encoded dynamic region with other parts of the original image to reconstruct a complete image;
The cloud server transmits the difference data to the cleaning end;
After a preset time period has elapsed since transmitting the difference data of the change area to the cloud server, the unmanned aerial vehicle transmits the compressed complete image data to the cloud server through the mobile network;
and when the cloud server receives the complete image data request of the cleaning end, transmitting the complete image data to the cleaning end.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which sending a video acquisition instruction to the unmanned aerial vehicle specifically includes:
If most of the unmanned aerial vehicles are parked on the first unmanned aerial vehicle parking apron, sending an outbound acquisition instruction to the unmanned aerial vehicles, the outbound acquisition instruction causing an unmanned aerial vehicle to take off from the first unmanned aerial vehicle parking apron, complete road video acquisition along the unmanned aerial vehicle acquisition route, and land on a second unmanned aerial vehicle parking apron;
If most of the unmanned aerial vehicles are parked on second unmanned aerial vehicle parking aprons, sending a return acquisition instruction to the unmanned aerial vehicles, the return acquisition instruction causing an unmanned aerial vehicle to take off from a second unmanned aerial vehicle parking apron, complete road video acquisition along the unmanned aerial vehicle acquisition route, and land on the first unmanned aerial vehicle parking apron.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which sending a video acquisition instruction to the unmanned aerial vehicle specifically includes:
judging whether the video acquisition instruction is an outbound acquisition instruction or a return acquisition instruction, counting the number M₁ of unmanned aerial vehicles available for acquisition, calculating the flight index of each unmanned aerial vehicle, and arranging the unmanned aerial vehicles in order of flight index from high to low to generate an acquisition unmanned aerial vehicle set;
the calculation formula of the unmanned aerial vehicle flight index is:
F = w₁·B − w₂·D,
wherein F represents the unmanned aerial vehicle flight index, B represents the remaining battery capacity, D represents the mileage already flown, and w₁ and w₂ represent adjustment coefficients with w₁ + w₂ = 1;
Acquiring the data of the roads on which garbage cans to be treated are located, taking each independent road as one acquisition subtask and merging connected roads into one acquisition subtask, counting the number M₂ of acquisition subtasks, calculating the coverage flight distance of each acquisition subtask, and arranging the subtasks from the longest distance to the shortest to generate an acquisition task set;
When the number of unmanned aerial vehicles M₁ is greater than or equal to the number of acquisition subtasks M₂ and the video acquisition instruction is an outbound acquisition instruction, selecting M₂ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning the video acquisition tasks according to the acquisition task set, acquiring the positions of all garbage cans on the roads to be treated, and selecting for each unmanned aerial vehicle executing an acquisition subtask the second unmanned aerial vehicle parking apron of the last garbage can on its route as its landing point, M₂ landing points in total;
When M₁ is greater than or equal to M₂ and the video acquisition instruction is a return acquisition instruction, selecting M₂ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning the video acquisition tasks according to the acquisition task set, and sending a direct-return instruction to the remaining unmanned aerial vehicles, all unmanned aerial vehicles taking the first unmanned aerial vehicle parking apron as their landing point;
When M₁ is smaller than M₂ and the video acquisition instruction is an outbound acquisition instruction, selecting the M₁ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning M₁ video acquisition tasks according to the acquisition task set, then assigning the remaining M₂ − M₁ acquisition subtasks again according to task distance until all are assigned, acquiring the positions of all garbage cans on the roads to be treated, and selecting for each unmanned aerial vehicle, after it completes all tasks on its task list, the second unmanned aerial vehicle parking apron of the last garbage can on its route as its landing point, M₁ landing points in total;
When M₁ is smaller than M₂ and the video acquisition instruction is a return acquisition instruction, selecting the M₁ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning M₁ video acquisition tasks according to the acquisition task set, then assigning the remaining M₂ − M₁ acquisition subtasks again according to task distance until all are assigned, all unmanned aerial vehicles taking the first unmanned aerial vehicle parking apron as their landing point.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which the garbage monitoring platform further includes an RFID reader, an intelligent door lock and status indicator lights, each electrically connected to the processor; the RFID reader is disposed on the outside of the garbage can and is configured to sense an RFID tag presented by the cleaning end; the status indicator lights include a disposal-prohibited indicator light and a disposal-permitted indicator light; and the processor is further configured to control the intelligent door lock to close the garbage can and light the disposal-prohibited indicator light when detecting that the garbage height of a garbage can to be treated reaches the second height threshold, and to control the intelligent door lock to open the garbage can and light the disposal-permitted indicator light when sensing the target RFID tag.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which the preprocessing and image garbage identification specifically include:
Processing the acquired video into a video frame sequence, and carrying out denoising, brightness adjustment and stabilization treatment on each frame;
analyzing the motion trail of each pixel point in the video by using a dense optical flow algorithm, dividing the image into a plurality of motion areas according to the motion characteristics of the pixel points, and identifying the relatively stable area as a road area;
Extracting color features in each movement region, and combining adjacent regions with similar colors according to the color features to form a combined road region;
Carrying out geometric feature analysis on the combined road areas, and eliminating the misrecognized areas according to the geometric features;
Identifying changes on the boundary of the road area from consecutive frame images, and adjusting the road-area boundary in real time according to those changes;
acquiring a road area image after road identification and segmentation;
labeling the road area images with the unmanned aerial vehicle's acquisition position to form a road area image sequence;
converting each frame of the road area image sequence into a gray image, and calculating gray gradients by applying a Sobel operator to each frame of gray image;
Performing motion detection on the gray-gradient image, analyzing the motion trail of each pixel with an optical flow algorithm, and calculating the image motion blur from the motion speed and direction of the pixels;
Performing a two-dimensional Fourier transform on each frame's grayscale image to transform it into the frequency domain and obtain its frequency-domain energy value;
calculating a blur evaluation value V(x, y) and comparing it with a set blur threshold V₀; if V(x, y) > V₀, the image is determined to be blurred at coordinates (x, y);
the calculation formula of the blur evaluation value V(x, y) is:
V(x, y) = αG + βM + γE,
wherein V(x, y) represents the blur of the image at coordinates (x, y), G represents the image gradient, M represents the image motion blur, E represents the frequency-domain energy value, and α, β and γ represent the respective weights;
Determining the blurred region and marking it with a region-based image segmentation algorithm;
Fusing the road images of the blurred region to optimize image sharpness;
Judging whether the sharpness of the optimized image meets a preset sharpness requirement; if not, acquiring image frames of the same location at different times from a historical image database and comparing them to judge whether a foreign object exists, and if a foreign object exists, regarding it as garbage;
If the sharpness requirement is met, inputting the optimized image together with the other road area image sequences into a pre-trained convolutional neural network model for foreign object identification, to judge whether a foreign object exists;
If a foreign object exists, inputting the foreign-object image into a preset garbage image classifier to identify the specific type of the garbage;
Taking garbage of the specified type as the target garbage.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which fusing the road images of the blurred region to optimize image sharpness specifically includes:
Acquiring a plurality of image frames covering the blurred region, selecting the two image frames with the lowest blur, and extracting the feature points of each image with the SIFT algorithm;
confirming matching pairs within the blurred region through the feature descriptors, and selecting an optimal homography matrix from the matching points as follows;
Randomly selecting 4 matching points to compute a homography, judging whether the remaining matching points satisfy the homography and, for each point that does, incrementing a count value marking the number of times it has been satisfied; calculating the sum of registration errors Fᵢ, taking the matched stable feature points as the inlier set, and updating the homography with the inlier set;
if the count value of a feature point is greater than a preset multiple n of the average count value of all feature points, marking the feature point as invalid and updating the inlier set, Nᵢ being the total number of feature points; otherwise returning to the step "randomly selecting 4 matching points to compute a homography" for another iteration;
if the average count value of all matching points is greater than a preset m, exiting the iteration; otherwise judging whether the number of matching points multiplied by a preset adjustment factor is greater than Nᵢ, and if so, adding the homography to the candidate homography set, otherwise discarding it;
After a plurality of candidate homography matrices are obtained, calculating the Euclidean distance Dᵢ between the feature points of each candidate homography matrix, and selecting the optimal homography matrix according to max(Dᵢ/Nᵢ) and min(Fᵢ/Nᵢ);
Completing the fusion of the two image frames through the optimal homography matrix with a multi-level fusion strategy: setting a first Euclidean distance threshold D₁ between feature points and performing weighted-average fusion of the pixel values of feature points whose distance is smaller than D₁ to form a local fused image; setting a second Euclidean distance threshold D₂ and performing weighted-average fusion of the pixel values of feature points whose distance is smaller than D₂ and greater than D₁ to form a wider-range fused image; and setting a third Euclidean distance threshold D₃ and performing weighted-average fusion of the pixel values of feature points whose distance is smaller than D₃ and greater than D₂ to form an overall fused image;
and sharpening the overall fused image based on the Laplacian operator to complete the optimization of image sharpness.
With reference to the above aspect and any possible implementation thereof, an implementation is further provided in which inputting the optimized image and the other road area image sequences into a pre-trained convolutional neural network model for foreign object identification specifically includes:
acquiring image samples of road area images with foreign objects and road area images without foreign objects, labeling the image samples, and marking their classification attributes;
Dividing the image samples into a training set, a testing set and a verification set, and matching each data set with its corresponding labels;
constructing a convolutional neural network model:
Calculating the convolution layer to obtain the feature map indexed by (i, j):
Cₚ(i, j) = Σ_(u,v) I(i+u, j+v) · kₚ(u, v) + bₚ,
wherein I represents the input image, k represents the convolution kernel, b represents the bias, p represents the number of feature maps created by the convolution layer, and u and v represent the block sizes in the row and column directions of the feature map, respectively; the standard deviation of the Gaussian distribution used to initialize the convolution kernels is calculated from the layer dimensions;
Calculating the pooling layer, the pooling size being 12 × 12;
Vectorizing each 24 × 24 matrix and concatenating the 6 resulting vectors to form one long vector of length 24 × 24 × 6;
Calculating the output of the fully connected layer as ŷ = W·f, wherein f represents the input vector and W represents the weight matrix;
Calculating the loss function L(y, ŷ), wherein y represents the true value;
Training the convolutional neural network model through the training set, testing and verifying the trained convolutional neural network model through the testing set and the verifying set, and after meeting the requirements, utilizing the trained convolutional neural network model to conduct foreign matter identification to judge whether foreign matters exist.
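To make the pipeline concrete, the following minimal PyTorch sketch wires up a network with the dimensions named above (6 feature maps, a 12 × 12 pooling window, 24 × 24 pooled maps, and one fully connected layer over the concatenated vector). The 288 × 288 input size, kernel size, Gaussian standard deviation and two-class output are illustrative assumptions, not values fixed by the invention:

```python
import torch
import torch.nn as nn

class ForeignObjectNet(nn.Module):
    """Toy network with the dimensions named above: 6 feature maps,
    a 12x12 pooling window, 24x24 pooled maps, and one fully
    connected layer over the 6*24*24 concatenated vector."""
    def __init__(self):
        super().__init__()
        # Convolution producing p = 6 feature maps; weights drawn from a
        # Gaussian distribution (std value assumed for illustration).
        self.conv = nn.Conv2d(1, 6, kernel_size=5, padding=2)
        nn.init.normal_(self.conv.weight, mean=0.0, std=0.05)
        self.pool = nn.AvgPool2d(12)            # 12 x 12 pooling window
        self.fc = nn.Linear(6 * 24 * 24, 2)     # foreign object / clean road

    def forward(self, x):                       # x: (N, 1, 288, 288) grayscale
        x = torch.relu(self.conv(x))            # -> (N, 6, 288, 288)
        x = self.pool(x)                        # -> (N, 6, 24, 24)
        x = x.flatten(1)                        # vectorize and concatenate
        return self.fc(x)                       # class scores

model = ForeignObjectNet()
loss_fn = nn.CrossEntropyLoss()                 # stands in for the loss L(y, y_hat)
scores = model(torch.randn(4, 1, 288, 288))     # dummy training batch
loss = loss_fn(scores, torch.tensor([0, 1, 0, 1]))
loss.backward()
```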
In a second aspect, an embodiment of the present invention provides a method for treating garden garbage with the above garden garbage treatment system, comprising the following steps:
Preprocessing the garden route map to generate a total patrol route;
Acquiring the data sent by the Internet of Things module; when the garbage height in a target garbage can is judged to reach the first height threshold, taking that garbage can as a garbage can to be treated, and when the number of garbage cans to be treated reaches the first number threshold, or the garbage height in a garbage can to be treated is detected to reach the second height threshold, counting the roads to which all garbage cans to be treated belong and planning a garbage-cleaning initial route;
Removing the garbage-cleaning initial route from the total patrol route to generate the unmanned aerial vehicle acquisition route, and sending a video acquisition instruction to the unmanned aerial vehicle;
Receiving the acquired video transmitted by the Internet of Things module, preprocessing it to identify garbage in the images, generating a garbage-cleaning final route by combining the roads on which garbage was identified with the garbage-cleaning initial route, and sending a cleaning task to the cleaning end.
The above technical solutions have the following beneficial effects:
The embodiments of the invention provide a garden garbage treatment system and method. The system monitors the filling state of the garbage cans in real time and improves garbage-cleaning efficiency; unmanned aerial vehicles are used for road inspection and video acquisition, providing accurate data support on road garbage; the cloud server intelligently plans the cleaning route, optimizing the cleaning workload and reducing cleaning cost; and the integrated garden garbage treatment system raises the automation and intelligence of garden garbage cleaning.
[ Description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. The drawings described below are obviously only some embodiments of the present invention, and a person skilled in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a block diagram of a garden garbage treatment system according to an embodiment of the present invention;
Fig. 2 is a flow chart of a method for treating garden garbage by using the system for treating garden garbage according to the embodiment of the invention.
[ Detailed description ] of the invention
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to specific embodiments and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Please refer to fig. 1, which is a block diagram of the garden garbage treatment system according to an embodiment of the present invention. As shown in fig. 1, the system comprises: garbage cans, a first unmanned aerial vehicle parking apron, unmanned aerial vehicles, a cleaning end, a cloud server, and a garbage monitoring platform and a second unmanned aerial vehicle parking apron arranged on each garbage can;
the garbage cans are arranged beside the garden road;
The first unmanned aerial vehicle parking apron comprises a plurality of parking places for parking and charging of the unmanned aerial vehicle;
The garbage monitoring platform is fixed on the garbage can and comprises a box body and a processor, a sensor, a GPS module, an Internet of Things module and a power module arranged in the box body; the Internet of Things module is further used for receiving video data shared by an unmanned aerial vehicle parked on the second unmanned aerial vehicle parking apron, preprocessing the video data and forwarding it to the cloud server;
The unmanned aerial vehicle is used for executing cloud-server instructions to take off from and land on the first or second unmanned aerial vehicle parking apron, to complete road video acquisition along a preset unmanned aerial vehicle inspection route, and to share the acquired video with the Internet of Things module;
the second unmanned aerial vehicle parking apron is fixed above the box body, is electrically connected with the power module, and is provided with a parking place for parking and charging the unmanned aerial vehicle;
The cloud server is used for preprocessing a garden route map to generate a total patrol route; when the garbage height in a target garbage can reaches a first height threshold, taking that garbage can as a garbage can to be treated, and when the number of garbage cans to be treated reaches a first number threshold, or the garbage height in a garbage can to be treated is detected to reach a second height threshold, counting the roads to which all garbage cans to be treated belong and planning a garbage-cleaning initial route; removing the garbage-cleaning initial route from the total patrol route to generate an unmanned aerial vehicle acquisition route, and sending a video acquisition instruction to the unmanned aerial vehicle; and receiving the acquired video transmitted by the Internet of Things module, preprocessing it to identify garbage in the images, generating a garbage-cleaning final route by combining the roads on which target garbage was identified with the garbage-cleaning initial route, and sending a cleaning task to the cleaning end;
The cleaning end is used for acquiring the filling-height data of all garbage cans and the cleaning task sent by the cloud server, and cleaning the road garbage and the garbage cans according to the garbage-cleaning final route.
The intelligent garden garbage treatment system provided by the invention makes full use of the cooperative operation of the unmanned aerial vehicles, the garbage monitoring platform, the cloud server and the cleaning end to realize intelligent monitoring and efficient cleaning of garden garbage. The system monitors the filling state of the garbage cans in real time; when the garbage height in a can reaches the preset threshold, the system automatically generates an intelligent cleaning route and guides the unmanned aerial vehicles to inspect the garden area and collect video. Garbage positions are accurately identified through image garbage-identification technology, the garbage-cleaning route is further optimized, and finally a cleaning task is sent to the cleaning end to complete the cleaning. The system thus realizes automatic monitoring, positioning and cleaning of garden garbage, improves garbage-disposal efficiency, reduces labor cost, improves the garden environment, and provides an efficient and feasible solution for urban garden management.
In a preferred embodiment of the present invention, the sensor includes an infrared sensor and a smoke sensor: the infrared sensor is used for measuring the filling height of garbage in the garbage can, and the smoke sensor is used for detecting fire in the garbage can; the processor is further used for receiving the data transmitted by the smoke sensor, processing it and sending it to the cloud server through the Internet of Things module; and the cloud server is further used for: generating a fire alarm when a fire is detected and sending the fire alarm, which comprises the location of the garbage can and the fire detection time, to the cleaning end; judging whether an unmanned aerial vehicle is parked on the second unmanned aerial vehicle parking apron above the alarming garbage can, and if so, sending the unmanned aerial vehicle a photographing instruction and a photo-sending instruction aimed at the alarming garbage can together with a return instruction for returning to the first unmanned aerial vehicle parking apron, then acquiring the photo sent by the unmanned aerial vehicle, preprocessing it and forwarding it to the cleaning end; and, when the fire is detected to be extinguished, generating fire-extinguished information and sending it to the cleaning end.
The garbage can is provided with both an infrared sensor and a smoke sensor, giving multi-level monitoring assurance. The infrared sensor accurately measures the filling height of the garbage in the can, realizing intelligent monitoring of the can's state and providing the system with accurate filling information. The smoke sensor gives the system fire-monitoring capability: when a fire is detected inside a garbage can, the system quickly generates a fire alarm, including the location of the can and the fire detection time, so that the fire event can be responded to rapidly. When a fire occurs, the system not only sends a fire alarm to the cleaning end but also intelligently judges whether an unmanned aerial vehicle is parked on the second unmanned aerial vehicle parking apron above the can. If so, the system sends the unmanned aerial vehicle a photographing instruction and a photo-sending instruction, acquires a photo of the fire scene, preprocesses it and transmits it to the cleaning end; at the same time, the system sends the unmanned aerial vehicle a return instruction for the first unmanned aerial vehicle parking apron, ensuring its safe return. When the fire has been extinguished, the system also generates fire-extinguished information and sends it to the cleaning end so that the situation is known in time. By introducing the infrared sensor and the smoke sensor, the invention realizes comprehensive monitoring of the garbage can's state, gives the system rapid response and handling capability in emergencies, and improves its safety and reliability.
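As a concrete illustration, the following minimal Python sketch mirrors this fire-handling flow on the cloud-server side. All names here (handle_smoke_report and the callback parameters) are hypothetical stand-ins, not interfaces defined by the invention:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FireAlarm:
    bin_id: str
    gps: tuple                 # (lat, lon) reported by the bin's GPS module
    detected_at: datetime

def handle_smoke_report(bin_id, gps, smoke_detected,
                        drone_on_bin_apron, notify_cleaning_end, command_drone):
    """Cloud-server reaction to a smoke report, mirroring the flow above.
    The three callables are hypothetical stand-ins for platform interfaces."""
    if not smoke_detected:
        return None
    alarm = FireAlarm(bin_id, gps, datetime.now(timezone.utc))
    notify_cleaning_end(alarm)                 # alarm carries location and time
    if drone_on_bin_apron(bin_id):             # drone parked on the 2nd apron?
        command_drone(bin_id, "photograph_and_send_fire_bin")
        command_drone(bin_id, "return_to_first_apron")
    return alarm
```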
In a preferred embodiment of the present invention, acquiring the photo sent by the unmanned aerial vehicle, preprocessing it and forwarding it to the cleaning end specifically includes:
The unmanned aerial vehicle preprocesses the shot images of the fire alarm dustbin to generate difference data and complete image data of a change area, and transmits the difference data to the cloud server through a mobile network after compression;
The specific method for generating the difference data and the complete image data of the change area is as follows:
Dividing the image into different areas by adopting a region-based image segmentation algorithm;
identifying, based on change-detection technology, dynamic areas in which color, brightness or motion characteristics have changed;
carrying out data coding on the identified dynamic region, and capturing the change information of the region to generate difference data;
Encoding the dynamic region by adopting a compression encoding technology based on a motion vector, and combining the encoded dynamic region with other parts of the original image to reconstruct a complete image;
The cloud server transmits the difference data to the cleaning end;
After a preset time period has elapsed since transmitting the difference data of the change area to the cloud server, the unmanned aerial vehicle transmits the compressed complete image data to the cloud server through the mobile network;
and when the cloud server receives the complete image data request of the cleaning end, transmitting the complete image data to the cleaning end.
The image shot by the unmanned aerial vehicle is first divided into different regions by the region-based segmentation algorithm; a change-detection technique then lets the system identify dynamic regions in which color, brightness or motion characteristics have changed. For the identified dynamic regions, the system performs data coding, capturing each region's change information to generate difference data; this difference data contains the change information of the dynamic regions and is an efficient representation of the changed part of the fire-alarm garbage-can image. The system encodes the dynamic regions with a motion-vector-based compression coding technique, so that the encoded dynamic-region data can be combined with the rest of the original image to reconstruct a complete image; this preserves the important change information while achieving high data compression and improving transmission efficiency. The generated difference data is transmitted to the cleaning end through the cloud server; after a certain period of time, the unmanned aerial vehicle transmits the compressed complete image data to the cloud server, which forwards it to the cleaning end upon receiving the cleaning end's request for the complete image. In this way, the system can efficiently preprocess and transmit the fire-alarm garbage-can images shot by the unmanned aerial vehicle to the cleaning end, realizing rapid response to and handling of fire events, improving the emergency-response capability of the whole system, and ensuring complete subsequent investigation and evidence collection.
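The following Python/OpenCV sketch illustrates one way to detect changed regions between two frames and package only those regions as difference data. The thresholds and the JPEG crop encoding are simplifying assumptions standing in for the motion-vector coding described above:

```python
import cv2
import numpy as np

def encode_change_regions(prev_frame, curr_frame, min_area=100):
    """Detect changed regions between two frames and encode only those
    regions as 'difference data' (JPEG crops plus their positions)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)               # brightness change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    packets = []
    for c in contours:
        if cv2.contourArea(c) < min_area:                  # ignore small noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        ok, jpeg = cv2.imencode(".jpg", curr_frame[y:y+h, x:x+w])
        if ok:
            packets.append({"xywh": (x, y, w, h), "jpeg": jpeg.tobytes()})
    return packets  # send these first; the compressed full frame follows later
```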
In a preferred embodiment of the present invention, sending a video acquisition instruction to the unmanned aerial vehicle specifically includes:
If most of the unmanned aerial vehicles are parked on the first unmanned aerial vehicle parking apron, sending an outbound acquisition instruction to the unmanned aerial vehicles, the outbound acquisition instruction causing an unmanned aerial vehicle to take off from the first unmanned aerial vehicle parking apron, complete road video acquisition along the unmanned aerial vehicle acquisition route, and land on a second unmanned aerial vehicle parking apron;
If most of the unmanned aerial vehicles are parked on second unmanned aerial vehicle parking aprons, sending a return acquisition instruction to the unmanned aerial vehicles, the return acquisition instruction causing an unmanned aerial vehicle to take off from a second unmanned aerial vehicle parking apron, complete road video acquisition along the unmanned aerial vehicle acquisition route, and land on the first unmanned aerial vehicle parking apron.
In this preferred embodiment, through an intelligent judging and decision mechanism, the system sends video acquisition instructions to the unmanned aerial vehicles according to their distribution across the first and second unmanned aerial vehicle parking aprons, achieving more efficient and flexible inspection and data acquisition.
In a preferred embodiment of the present invention, sending a video acquisition instruction to the unmanned aerial vehicle specifically includes:
judging whether the video acquisition instruction is an outbound acquisition instruction or a return acquisition instruction, counting the number M₁ of unmanned aerial vehicles available for acquisition, calculating the flight index of each unmanned aerial vehicle, and arranging the unmanned aerial vehicles in order of flight index from high to low to generate an acquisition unmanned aerial vehicle set;
the calculation formula of the unmanned aerial vehicle flight index is:
F = w₁·B − w₂·D,
wherein F represents the unmanned aerial vehicle flight index, B represents the remaining battery capacity, D represents the mileage already flown, and w₁ and w₂ represent adjustment coefficients with w₁ + w₂ = 1;
Acquiring the data of the roads on which garbage cans to be treated are located, taking each independent road as one acquisition subtask and merging connected roads into one acquisition subtask, counting the number M₂ of acquisition subtasks, calculating the coverage flight distance of each acquisition subtask, and arranging the subtasks from the longest distance to the shortest to generate an acquisition task set;
When the number of unmanned aerial vehicles M₁ is greater than or equal to the number of acquisition subtasks M₂ and the video acquisition instruction is an outbound acquisition instruction, selecting M₂ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning the video acquisition tasks according to the acquisition task set, acquiring the positions of all garbage cans on the roads to be treated, and selecting for each unmanned aerial vehicle executing an acquisition subtask the second unmanned aerial vehicle parking apron of the last garbage can on its route as its landing point, M₂ landing points in total;
When M₁ is greater than or equal to M₂ and the video acquisition instruction is a return acquisition instruction, selecting M₂ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning the video acquisition tasks according to the acquisition task set, and sending a direct-return instruction to the remaining unmanned aerial vehicles, all unmanned aerial vehicles taking the first unmanned aerial vehicle parking apron as their landing point;
When M₁ is smaller than M₂ and the video acquisition instruction is an outbound acquisition instruction, selecting the M₁ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning M₁ video acquisition tasks according to the acquisition task set, then assigning the remaining M₂ − M₁ acquisition subtasks again according to task distance until all are assigned, acquiring the positions of all garbage cans on the roads to be treated, and selecting for each unmanned aerial vehicle, after it completes all tasks on its task list, the second unmanned aerial vehicle parking apron of the last garbage can on its route as its landing point, M₁ landing points in total;
When M₁ is smaller than M₂ and the video acquisition instruction is a return acquisition instruction, selecting the M₁ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, assigning M₁ video acquisition tasks according to the acquisition task set, then assigning the remaining M₂ − M₁ acquisition subtasks again according to task distance until all are assigned, all unmanned aerial vehicles taking the first unmanned aerial vehicle parking apron as their landing point.
Through the comprehensive evaluation of each unmanned aerial vehicle's remaining battery capacity and mileage already flown, the system calculates a flight index and thereby determines the most suitable acquisition arrangement; this intelligent judgment mechanism ensures efficient use of the unmanned aerial vehicles and improves the inspection efficiency of the garbage treatment system. Video acquisition tasks are intelligently assigned to the unmanned aerial vehicles according to the number and distance of the acquisition subtasks. When enough unmanned aerial vehicles are available, the system selects the appropriate number according to the task count and assigns the tasks so that each can be executed effectively; when too few are available, the system reassigns the remaining tasks according to task distance and the number of remaining unmanned aerial vehicles to ensure every task is completed. The system also intelligently selects the most suitable landing point according to the positions of the garbage cans on each task list; this ensures that an unmanned aerial vehicle can land efficiently and safely after completing its tasks and stand ready for the next round. Through these intelligent decisions and operations, the system achieves efficient and accurate garbage inspection, improving overall performance and response speed and providing a reliable guarantee for garden garbage treatment.
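The following Python sketch illustrates one way to realize this scoring and assignment. The sign convention of the flight index (remaining battery raises it, flown mileage lowers it), the placeholder weights w₁ = 0.7 and w₂ = 0.3, and the round-robin re-deal for the M₁ < M₂ case are assumptions made for illustration:

```python
def flight_index(battery, mileage, w1=0.7, w2=0.3):
    # Assumed reconstruction of the flight-index formula: more remaining
    # battery raises F, more mileage already flown lowers it (w1 + w2 = 1).
    return w1 * battery - w2 * mileage

def assign_tasks(drones, subtasks):
    """drones: [(drone_id, battery, mileage)], subtasks: [(task_id, distance)].
    The longest tasks go to the highest-index drones; if tasks outnumber
    drones (M1 < M2), the remainder is dealt out again by task distance."""
    fleet = sorted(drones, key=lambda d: flight_index(d[1], d[2]), reverse=True)
    tasks = sorted(subtasks, key=lambda t: t[1], reverse=True)
    plan = {d[0]: [] for d in fleet}
    for i, task in enumerate(tasks):          # round robin models the re-deal
        plan[fleet[i % len(fleet)][0]].append(task[0])
    return plan

plan = assign_tasks([("uav-1", 0.9, 0.2), ("uav-2", 0.6, 0.5)],
                    [("road-A", 3.1), ("road-B", 2.4), ("road-C", 1.0)])
```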
In a preferred embodiment of the invention, the garbage monitoring platform further comprises an RFID reader, an intelligent door lock and status indicator lights, each electrically connected to the processor. The RFID reader is arranged on the outside of the garbage can and is used for sensing the RFID tag presented by the cleaning end; the status indicator lights comprise a disposal-prohibited indicator light and a disposal-permitted indicator light; and the processor is further used for controlling the intelligent door lock to close the garbage can and lighting the disposal-prohibited indicator light when detecting that the garbage height of a garbage can to be treated reaches the second height threshold, and for controlling the intelligent door lock to open the garbage can and lighting the disposal-permitted indicator light when sensing the target RFID tag.
Through the RFID reader, the system senses the RFID tag presented by the cleaning end. When the garbage height of a garbage can to be treated reaches the second height threshold, the system automatically controls the intelligent door lock to close the can, effectively avoiding garbage overflow and ensuring safe use and environmental hygiene. When the intelligent door lock is closed, the disposal-prohibited indicator light is lit, reminding users that the can is full and disposal is prohibited; when the target RFID tag is sensed, the system controls the intelligent door lock to open the can and lights the disposal-permitted indicator light, indicating that garbage may be disposed of. This intelligent indication of the disposal state makes the garbage can more convenient and efficient to use. The application of RFID technology also lets the system track each can's usage: through tag identification, the system accurately records the disposal and cleaning history of every garbage can, providing a basis for reasonable management and realizing intelligent monitoring, control and tracking that improve the cans' usage efficiency and management level.
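A minimal sketch of the processor-side control logic follows, assuming hypothetical driver objects for the lock and the two indicator lights (the real hardware interfaces are not specified by the invention):

```python
class BinDoorController:
    """Sketch of the processor logic described above; lock, red_light and
    green_light are hypothetical driver stubs exposing open/close/on/off."""
    def __init__(self, lock, red_light, green_light, full_threshold):
        self.lock, self.red, self.green = lock, red_light, green_light
        self.full_threshold = full_threshold   # the 'second height threshold'

    def on_fill_height(self, height):
        if height >= self.full_threshold:      # bin full: close and warn
            self.lock.close()
            self.red.on()                      # disposal prohibited
            self.green.off()

    def on_rfid(self, tag_id, target_tags):
        if tag_id in target_tags:              # cleaning end's tag sensed
            self.lock.open()
            self.green.on()                    # disposal permitted
            self.red.off()
```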
In a preferred embodiment of the present invention, the preprocessing and image garbage identification specifically include:
Processing the acquired video into a video frame sequence, and carrying out denoising, brightness adjustment and stabilization treatment on each frame;
analyzing the motion trail of each pixel point in the video by using a dense optical flow algorithm, dividing the image into a plurality of motion areas according to the motion characteristics of the pixel points, and identifying the relatively stable area as a road area;
Extracting color features in each movement region, and combining adjacent regions with similar colors according to the color features to form a combined road region;
Carrying out geometric feature analysis on the combined road areas, and eliminating the misrecognized areas according to the geometric features;
Identifying changes on the boundary of the road area from consecutive frame images, and adjusting the road-area boundary in real time according to those changes;
acquiring a road area image after road identification and segmentation;
labeling the road area images with the unmanned aerial vehicle's acquisition position to form a road area image sequence;
converting each frame of the road area image sequence into a gray image, and calculating gray gradients by applying a Sobel operator to each frame of gray image;
Performing motion detection on the gray-gradient image, analyzing the motion trail of each pixel with an optical flow algorithm, and calculating the image motion blur from the motion speed and direction of the pixels;
Performing a two-dimensional Fourier transform on each frame's grayscale image to transform it into the frequency domain and obtain its frequency-domain energy value;
calculating a blur evaluation value V(x, y) and comparing it with a set blur threshold V₀; if V(x, y) > V₀, the image is determined to be blurred at coordinates (x, y);
the calculation formula of the blur evaluation value V(x, y) is:
V(x, y) = αG + βM + γE,
wherein V(x, y) represents the blur of the image at coordinates (x, y), G represents the image gradient, M represents the image motion blur, E represents the frequency-domain energy value, and α, β and γ represent the respective weights;
Determining the blurred region and marking it with a region-based image segmentation algorithm;
Fusing the road images of the blurred region to optimize image sharpness;
Judging whether the sharpness of the optimized image meets a preset sharpness requirement; if not, acquiring image frames of the same location at different times from a historical image database and comparing them to judge whether a foreign object exists, and if a foreign object exists, regarding it as garbage;
If the sharpness requirement is met, inputting the optimized image together with the other road area image sequences into a pre-trained convolutional neural network model for foreign object identification, to judge whether a foreign object exists;
If a foreign object exists, inputting the foreign-object image into a preset garbage image classifier to identify the specific type of the garbage;
Taking garbage of the specified type as the target garbage.
The motion trails of the pixels in the video are analyzed by the dense optical flow algorithm, and the relatively stable region is taken as the road region; color-feature extraction and merging further stabilize the road region, and mis-identified regions are eliminated. The road-area boundary is adjusted in real time according to the changes between consecutive frames, and this real-time adjustment guarantees the accuracy and precision of the road area. Identification and sharpness optimization of blurred regions are achieved through the blur-evaluation and image-fusion techniques: blur evaluation lets the system judge whether a blurred region exists in the image, and image fusion raises the sharpness of that region, guaranteeing image quality. Through the pre-trained convolutional neural network model, the system identifies whether a foreign object is present in the image and further classifies the garbage; this image processing and recognition pipeline ensures accurate identification and classification of the target garbage. Through these image-processing algorithms and models, the invention achieves accurate road-area identification, sharpness optimization of blurred regions, foreign-object identification and garbage classification, providing efficient and accurate visual support for garbage cleaning.
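The per-pixel blur evaluation can be sketched with OpenCV and NumPy as below. The normalization of G, M and E to [0, 1], the inversion of the gradient term (low gradient read as more blur), and the weight and threshold values are illustrative assumptions; the text fixes only the form V = αG + βM + γE:

```python
import cv2
import numpy as np

def blur_map(prev_gray, curr_gray, alpha=0.5, beta=0.3, gamma=0.2, v0=0.6):
    """Per-pixel blur score V = alpha*G + beta*M + gamma*E for a grayscale
    frame pair, plus the mask of pixels whose score exceeds the threshold."""
    # G: Sobel gradient magnitude, inverted so that low gradient -> high blur
    gx = cv2.Sobel(curr_gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(curr_gray, cv2.CV_64F, 0, 1)
    g = 1.0 - cv2.normalize(np.abs(gx) + np.abs(gy), None, 0, 1, cv2.NORM_MINMAX)
    # M: motion magnitude from dense optical flow between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    m = cv2.normalize(np.linalg.norm(flow, axis=2), None, 0, 1, cv2.NORM_MINMAX)
    # E: share of spectral energy in the low-frequency band of the 2-D FFT
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(curr_gray)))
    h, w = curr_gray.shape
    low = spectrum[h//2 - h//8:h//2 + h//8, w//2 - w//8:w//2 + w//8].sum()
    e = np.full_like(g, low / spectrum.sum())  # one value broadcast per pixel
    v = alpha * g + beta * m + gamma * e
    return v, v > v0                           # score map + blurred-pixel mask
```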
In a preferred embodiment of the present invention, the road images of the blurred region are fused and the image definition is optimized, which specifically includes:
acquiring a plurality of image frames covering the blurred region, selecting the two image frames with the lowest blur degree, and extracting the feature points of the images by using the SIFT algorithm;
confirming matching pairs in the blurred region through the feature descriptors, and selecting an optimal homography matrix from the matching points;
randomly selecting 4 matching points to compute a homography, and judging whether the remaining matching points satisfy it; if so, incrementing a count value kept for each point to mark the number of times it has been satisfied, calculating the registration error sum Fᵢ, taking the stably matched feature points as an inlier set, and updating the homography with the inlier set;
if the count value of a feature point is greater than a preset multiple N of the average count value over all feature points, the feature point is marked invalid, Nᵢ is set to the total number of feature points and the inlier set is updated; otherwise, the process returns to the step of randomly selecting 4 matching points to compute a homography and iterates;
if the average count value over all matching points is greater than a preset m, the iteration exits; otherwise, it is judged whether the number of all matching points multiplied by a preset adjustment factor is greater than Nᵢ; if so, the homography is added to the candidate set, otherwise it is discarded;
after a plurality of candidate homography matrices are obtained, the Euclidean distance Dᵢ between the feature points is calculated for each candidate homography matrix, and the optimal homography matrix is selected according to max(Dᵢ/Nᵢ) and min(Fᵢ/Nᵢ);
The fusion of the two image frames is completed through the optimal homography matrix using a multi-level fusion strategy: a first Euclidean distance threshold D₁ between feature points is set, and the pixel values of feature points whose distance is smaller than D₁ are fused by weighted averaging to form a local fusion image; a second Euclidean distance threshold D₂ is set, and the pixel values of feature points whose distance is smaller than D₂ but larger than D₁ are fused by weighted averaging to form a wider-range fusion image; a third Euclidean distance threshold D₃ is set, and the pixel values of feature points whose distance is smaller than D₃ but larger than D₂ are fused by weighted averaging to form an overall fusion image;
and sharpening the overall fusion image based on the Laplacian operator to complete the optimization of image definition.
According to the invention, the feature points of the blurred region in the plurality of image frames are extracted through the SIFT algorithm and the optimal homography matrix is selected from the matching points, which ensures accurate matching of the feature points and provides a reliable basis for the subsequent fusion; the homography matrix is updated iteratively according to the stability and matching degree of the feature points, and the optimal matrix is selected from the candidate homography matrices, a process that ensures the accuracy and stability of the fusion; the multi-level fusion strategy performs weighted-average fusion of the pixel values of the feature points according to the Euclidean distances between them, and the different thresholds make the fusion flexible enough to adapt to blurred regions of different extents; after the fusion is completed, Laplacian-based sharpening improves the definition of the overall fused image, so that the final optimized image carries more detail and clarity.
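A compact sketch of the fusion and sharpening steps follows. It assumes OpenCV; the patent's count-based iterative selection of the optimal homography and the three-band multi-level fusion are simplified here to OpenCV's stock RANSAC estimator and a single weighted average, so this shows the shape of the pipeline rather than the exact embodiment:

```python
# Sketch: SIFT matching, a homography estimate, blending, Laplacian sharpening.
import cv2
import numpy as np

def fuse_and_sharpen(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)

    # Match feature descriptors and keep the stronger pairs.
    matches = sorted(cv2.BFMatcher().match(des_a, des_b),
                     key=lambda m: m.distance)[:200]
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography from the inlier set (stand-in for the custom iteration).
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(frame_a, H,
                                 (frame_b.shape[1], frame_b.shape[0]))

    # Single weighted average as a stand-in for the multi-level strategy
    # with distance bands D1 < D2 < D3.
    fused = cv2.addWeighted(warped, 0.5, frame_b, 0.5, 0)

    # Laplacian-based sharpening of the fused result.
    lap = cv2.Laplacian(fused, cv2.CV_64F)
    sharp = np.clip(fused.astype(np.float64) - 0.7 * lap, 0, 255)
    return sharp.astype(np.uint8)
```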
In a preferred embodiment of the present invention, the optimized image and other road area image sequences are input into a pre-trained convolutional neural network model to perform foreign object recognition, and the method for determining whether the foreign object exists specifically includes:
acquiring an image sample of a road area image with foreign matters and a road area image without foreign matters, marking the image sample, and marking the image classification attribute;
Dividing the labeled image samples into a training set, a test set and a verification set, and matching each data set with its corresponding labels;
constructing a convolutional neural network model (the formulas for this step appear only as images in the source and are not reproduced here):
calculating the convolution layer and the standard deviation of the Gaussian distribution, producing the feature map for index set (i, j), wherein I represents the input image, k represents the convolution kernel, b represents the offset, p represents the number of feature maps created by the convolution layer, and u and v represent the block sizes in the row and column directions of the feature maps, respectively;
calculating the pooling layer, the size of the pooling layer being 12 × 12;
performing vectorization and concatenation: each 24 × 24 matrix is vectorized, and the 6 resulting vectors are connected to form a long vector of length 24 × 24 × 6;
calculating the expected value of the fully connected layer, wherein f represents the input vector and W represents the weight matrix;
calculating the loss function, wherein y represents the true value;
Training the convolutional neural network model on the training set, testing and verifying the trained model on the test set and the verification set, and, once the requirements are met, using the trained convolutional neural network model for foreign matter identification to judge whether a foreign matter exists.
By constructing and training the convolutional neural network, the system realizes automatic identification of the foreign matters in the road area image, improves the accuracy and efficiency of the foreign matter identification, and provides more accurate target positioning for garbage cleaning.
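As one concrete reading of the network just described, the following PyTorch sketch wires a convolution producing six feature maps, a pooling stage, vectorization of six 24 × 24 maps into a length 24 × 24 × 6 vector, and a fully connected layer. The input size, activation and loss are assumptions, since the source formulas are not reproduced:

```python
# Hedged sketch of the described CNN; layer sizes chosen to be consistent
# with the six 24x24 feature maps and the 24*24*6 long vector in the text.
import torch
import torch.nn as nn

class ForeignObjectNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 6 feature maps
        self.pool = nn.MaxPool2d(2)              # 48x48 -> 24x24 per map
        self.fc = nn.Linear(24 * 24 * 6, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv(x))
        x = self.pool(x)
        x = x.flatten(1)                         # six 24x24 maps -> 3456 vector
        return self.fc(x)

# Training skeleton on labelled road-area samples (assumed loss form):
model = ForeignObjectNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 1, 48, 48)               # placeholder batch
labels = torch.randint(0, 2, (8,))               # 1 = foreign object present
loss = loss_fn(model(images), labels)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```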
Fig. 2 is a schematic flow chart of a method for processing garden garbage by using the system for processing garden garbage according to an embodiment of the present invention. As shown in fig. 2, the method comprises the steps of:
Generating a total patrol line by preprocessing the garden line map;
Acquiring data sent by an Internet of things module, when judging that the height of garbage in a target garbage can reaches a first height threshold, taking the target garbage can as a garbage can to be processed, and when the number of the garbage cans to be processed reaches a first number threshold or when detecting that the height of garbage in the garbage can to be processed reaches a second height threshold, counting roads to which all the garbage cans to be processed belong and planning to generate a garbage cleaning initial line;
Removing the garbage cleaning initial line from the total patrol line to generate an unmanned aerial vehicle acquisition line, and sending a video acquisition instruction to the unmanned aerial vehicle;
receiving the collected video transmitted by the Internet of things module, preprocessing it to identify garbage in the images, generating a garbage cleaning final line by combining the roads of the identified garbage with the garbage cleaning initial line, and sending a cleaning task to the cleaning end.
According to the garden garbage treatment method using the garden garbage treatment system of the embodiment of the invention, the system preprocesses the garden line map to generate a total patrol line, realizing intelligent patrol and coverage of the garden area; when the height of the garbage in a target garbage can reaches the preset threshold, the system accurately identifies it as a garbage can to be treated, while also judging the number of garbage cans to be treated and the garbage heights for subsequent processing; the system generates the unmanned aerial vehicle acquisition line by removing the garbage cleaning initial line from the total patrol line, an intelligent strategy that makes the cleaning process more efficient and avoids repeated or missed cleaning; according to the recognition results, the system combines the identified garbage roads with the garbage cleaning initial line to form a garbage cleaning final line, improving the accuracy and timeliness of garbage cleaning; and once the garbage cleaning final line is formed, the system sends cleaning tasks to the cleaning end, realizing intelligent management and execution of the tasks. Through these technical features, the garbage treatment method of the invention makes garden garbage treatment intelligent, efficient and precise, and improves the efficiency and quality of garden garbage treatment.
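The dispatch trigger described above can be illustrated with the following minimal sketch; the threshold values, data shapes and function names are hypothetical, chosen only to show the two trigger conditions (count of pending bins, or any bin passing the second height threshold):

```python
# Hedged sketch of the bin-trigger logic; H1, H2, N1 are illustrative.
from dataclasses import dataclass

H1, H2 = 0.6, 0.9          # first/second fill-height thresholds (fractions)
N1 = 5                     # first number threshold

@dataclass
class Bin:
    bin_id: str
    road: str
    fill: float            # fill height fraction reported over the IoT link

def plan_initial_route(bins: list[Bin]) -> list[str] | None:
    # A bin becomes "to be processed" once it reaches the first threshold.
    pending = [b for b in bins if b.fill >= H1]
    # Planning triggers on the pending count or on any bin hitting H2.
    if len(pending) >= N1 or any(b.fill >= H2 for b in pending):
        # Collect the roads of all pending bins; ordering/optimisation of
        # the cleaning route is left out of this sketch.
        return sorted({b.road for b in pending})
    return None            # no cleaning route needed yet

# Example: five bins past H1 on two roads trigger planning.
bins = [Bin(f"b{i}", "road-A" if i < 3 else "road-B", 0.7) for i in range(5)]
print(plan_initial_route(bins))   # ['road-A', 'road-B']
```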
On the basis of the above, there is also provided a computer readable storage medium on which a computer program is stored which, when run, implements the above method.
It should be appreciated that the systems and modules thereof shown above may be implemented in a variety of ways. For example, in some embodiments, the system and its modules may be implemented in hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may then be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or special purpose design hardware. Those skilled in the art will appreciate that the methods and systems described above may be implemented using computer executable instructions and/or embodied in processor control code, such as provided on a carrier medium such as a magnetic disk, CD or DVD-ROM, a programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The system of the present application and its modules may be implemented not only with hardware circuitry such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, etc., or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also with software executed by various types of processors, for example, and with a combination of the above hardware circuitry and software (e.g., firmware).
It should be noted that, the advantages that may be generated by different embodiments may be different, and in different embodiments, the advantages that may be generated may be any one or a combination of several of the above, or any other possible advantages that may be obtained.
While the basic concepts have been described above, it will be apparent to those skilled in the art that the foregoing detailed disclosure is by way of example only and is not intended to be limiting. Although not explicitly described herein, various modifications, improvements and adaptations of the application may occur to those skilled in the art. Such modifications, improvements and adaptations are suggested by this disclosure and are therefore intended to remain within the spirit and scope of the exemplary embodiments of this disclosure.
Meanwhile, the present application uses specific words to describe embodiments of the present application. Reference to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic is associated with at least one embodiment of the application. Thus, it should be emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various positions in this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the application may be combined as suitable.
Furthermore, those skilled in the art will appreciate that the various aspects of the application are illustrated and described in the context of a number of patentable categories or circumstances, including any novel and useful procedures, machines, products, or materials, or any novel and useful modifications thereof. Accordingly, aspects of the application may be performed entirely by hardware, entirely by software (including firmware, resident software, micro-code, etc.) or by a combination of hardware and software. The above hardware or software may be referred to as a "data block," module, "" engine, "" unit, "" component, "or" system. Furthermore, aspects of the application may take the form of a computer product, comprising computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may contain a propagated data signal with the computer program code embodied therein, for example, on a baseband or as part of a carrier wave. The propagated signal may take on a variety of forms, including electro-magnetic, optical, etc., or any suitable combination thereof. A computer storage medium may be any computer readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated through any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or a combination of any of the foregoing.
The computer program code necessary for operation of portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET or Python, a conventional procedural programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP or ABAP, a dynamic programming language such as Python, Ruby or Groovy, or other programming languages. The program code may execute entirely on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), the connection may be made to an external computer (for example, through the Internet), or a service such as software as a service (SaaS) may be used in a cloud computing environment.
Furthermore, the order in which the elements and sequences are presented, the use of numerical letters, or other designations are used in the application is not intended to limit the sequence of the processes and methods unless specifically recited in the claims. While certain presently useful inventive embodiments have been discussed in the foregoing disclosure, by way of example, it is to be understood that such details are merely illustrative and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements included within the spirit and scope of the embodiments of the application. For example, while the system components described above may be implemented by hardware devices, they may also be implemented solely by software solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in order to simplify the description of the present disclosure and thereby aid in understanding one or more inventive embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof. This method of disclosure, however, does not imply that the claimed subject matter requires more features than are recited in the claims. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.
In some embodiments, numbers are used to describe quantities of components and attributes; it should be understood that such numbers used in the description of the embodiments are modified in some examples by the terms "about," "approximately," or "substantially." Unless otherwise indicated, "about," "approximately," or "substantially" indicates that the number allows for adaptive variation. Accordingly, in some embodiments, the numerical parameters set forth in the specification and claims are approximations that may vary depending upon the desired properties sought by the individual embodiment. In some embodiments, numerical parameters should take into account the specified significant digits and employ a general digit-preserving method. Although the numerical ranges and parameters used to define the breadth of the ranges are approximations in some embodiments, in specific embodiments such numerical values are set as precisely as practicable.
Each patent, patent application publication, and other material, such as articles, books, specifications, publications, and documents, cited herein is hereby incorporated by reference in its entirety. Application history documents that are inconsistent or conflicting with the content of this disclosure are excluded, as are documents (currently or later attached to this disclosure) that would limit the broadest scope of the claims of this disclosure. It is noted that if there is any inconsistency or conflict between the description, definition, and/or use of a term in the materials accompanying this application and that set forth herein, the description, definition, and/or use of the term in this application controls.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present application. Other variations are also possible within the scope of the application. Thus, by way of example, and not limitation, alternative configurations of embodiments of the application may be considered in keeping with the teachings of the application. Accordingly, the embodiments of the present application are not limited to the embodiments explicitly described and depicted herein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (9)

1. The garden garbage treatment system is characterized by comprising a garbage can, a first unmanned aerial vehicle parking apron, an unmanned aerial vehicle, a cleaning end, a cloud server, a garbage monitoring platform and a second unmanned aerial vehicle parking apron, wherein the garbage monitoring platform and the second unmanned aerial vehicle parking apron are arranged on the garbage can;
the garbage cans are arranged beside the garden road;
The first unmanned aerial vehicle parking apron comprises a plurality of parking places for parking and charging of the unmanned aerial vehicle;
The garbage monitoring platform is fixed on the garbage can and comprises a can body and a sensor, a processor, a GPS module, an Internet of things module and a power module which are arranged in the can body; the Internet of things module is further used for receiving video data shared by the unmanned aerial vehicle parked on the second unmanned aerial vehicle parking apron, preprocessing the video data and forwarding the video data to the cloud server;
The unmanned aerial vehicle is used for acquiring cloud server instructions to complete take-off and landing at the first unmanned aerial vehicle parking apron or the second unmanned aerial vehicle parking apron, to complete road video acquisition along a preset unmanned aerial vehicle patrol route, and to share the acquired video with the Internet of things module;
the second unmanned aerial vehicle parking apron is fixed above the box body, is electrically connected with the power module, and is provided with a parking place for parking and charging the unmanned aerial vehicle;
The cloud server is used for generating a total patrol line by preprocessing a garden line map; when the height of the garbage in a target garbage can reaches a first height threshold, taking the target garbage can as a garbage can to be treated, and when the number of garbage cans to be treated reaches a first number threshold or the height of the garbage in a garbage can to be treated is detected to reach a second height threshold, counting the roads to which all the garbage cans to be treated belong and planning a garbage cleaning initial line; removing the garbage cleaning initial line from the total patrol line to generate an unmanned aerial vehicle acquisition line, and sending a video acquisition instruction to the unmanned aerial vehicle; receiving the collected video transmitted by the Internet of things module, preprocessing it to identify garbage in the images, generating a garbage cleaning final line by combining the roads of the identified target garbage with the garbage cleaning initial line, and sending a cleaning task to the cleaning end;
the cleaning end is used for acquiring the filling height data of all garbage cans and the cleaning tasks sent by the cloud server, and cleaning the garbage of the roads and the garbage cans according to the garbage cleaning final line;
The preprocessing for image garbage identification specifically comprises the following steps:
Processing the acquired video into a video frame sequence, and carrying out denoising, brightness adjustment and stabilization treatment on each frame;
Analyzing the motion trail of each pixel point in the video by using a dense optical flow algorithm, dividing the image into a plurality of motion areas according to the motion characteristics of the pixel points, and recognizing the relatively stable area as a road area;
Extracting color features in each movement region, and combining adjacent regions with similar colors according to the color features to form a combined road region;
Carrying out geometric feature analysis on the combined road areas, and eliminating the misrecognized areas according to the geometric features;
Identifying the change on the boundary of the road area according to the change of the continuous frame images, and adjusting the boundary of the road area in real time according to the change on the boundary;
acquiring a road area image after road identification and segmentation;
labeling the road area image according to the acquisition and positioning of the unmanned aerial vehicle to form a road area image sequence;
converting each frame of the road area image sequence into a gray image, and calculating gray gradients by applying a Sobel operator to each frame of gray image;
performing motion detection on the gray gradient image, analyzing the motion trail of each pixel point in the image by using an optical flow algorithm, and calculating the image motion blur according to the motion speed and direction of the pixel points;
performing a two-dimensional Fourier transform on each frame of gray image to obtain a frequency domain energy value;
calculating a blur evaluation value V(x, y), comparing it with a set blur threshold V₀, and if V(x, y) > V₀, determining that the image is blurred at coordinates (x, y);
the blur evaluation value V(x, y) is calculated as:
V(x, y) = αG + βM + γE,
wherein V(x, y) represents the blur degree of the image at coordinates (x, y), G represents the image gradient, M represents the image motion blur, E represents the frequency domain energy value, and α, β and γ represent the respective weights;
Determining a blurred region, and marking the blurred region by adopting a region-based image segmentation algorithm;
fusing the road images of the blurred region to optimize image definition;
judging whether the optimized image definition meets a preset definition requirement; if not, acquiring image frames of the same position at different times from a historical image database and comparing them to judge whether a foreign matter exists, any detected foreign matter being treated as garbage;
if the definition requirement is met, inputting the optimized image and the other road area image sequences into a pre-trained convolutional neural network model for foreign matter identification, and judging whether a foreign matter exists;
If the foreign matter exists, inputting the foreign matter image into a preset garbage image classifier, and identifying the specific type of garbage;
And taking the garbage of the specified type as target garbage.
2. The system of claim 1, wherein the sensor comprises an infrared sensor and a smoke sensor, the infrared sensor is used for measuring the filling height of the garbage in the garbage can, the smoke sensor is used for detecting fire in the garbage can, the processor is further used for receiving data transmitted by the smoke sensor and transmitting the data to the cloud server through the internet of things module after processing, and the cloud server is further used for: generating a fire alarm when a fire is detected, and sending the fire alarm to a cleaning end, wherein the fire alarm comprises the positioning of a garbage can and the fire detection time; judging whether an unmanned aerial vehicle is parked on a second unmanned aerial vehicle parking apron above the fire alarm garbage can, if so, sending a photographing instruction and a photo sending instruction aiming at the fire alarm garbage can to the unmanned aerial vehicle, sending a return instruction returning to the first unmanned aerial vehicle parking apron to the unmanned aerial vehicle, acquiring a photo sent by the unmanned aerial vehicle, preprocessing the photo, and forwarding the photo to a cleaning end; when fire is detected to be eliminated, fire elimination information is generated, and the fire elimination information is sent to the cleaning end.
3. The system for processing garden garbage according to claim 2, wherein the obtained photo sent by the unmanned aerial vehicle is preprocessed and then forwarded to the cleaning end, and specifically comprises:
The unmanned aerial vehicle preprocesses the shot images of the fire alarm dustbin to generate difference data and complete image data of a change area, and transmits the difference data to the cloud server through a mobile network after compression;
The specific method for generating the difference data and the complete image data of the change area is as follows:
dividing the image into different regions by adopting a region-based image segmentation algorithm;
based on the change detection technology, identifying dynamic areas with changed colors, brightness and motion characteristics;
carrying out data coding on the identified dynamic region, and capturing the change information of the region to generate difference data;
Encoding the dynamic region by adopting a compression encoding technology based on a motion vector, and combining the encoded dynamic region with other parts of the original image to reconstruct a complete image;
The cloud server transmits the difference data to the cleaning end;
After the unmanned aerial vehicle transmits the difference data of the change area to the cloud server for a preset time period, the unmanned aerial vehicle transmits the compressed complete image data to the cloud server through a mobile network;
and when the cloud server receives the complete image data request of the cleaning end, transmitting the complete image data to the cleaning end.
4. The system for processing garden garbage according to claim 1 or claim 3, wherein the sending of a video acquisition instruction to the unmanned aerial vehicle specifically comprises:
if the unmanned aerial vehicles are mostly parked on the first unmanned aerial vehicle parking apron, sending an outbound acquisition instruction to the unmanned aerial vehicle, the outbound acquisition instruction comprising: causing the unmanned aerial vehicle to complete take-off at the first unmanned aerial vehicle parking apron, road video acquisition along the unmanned aerial vehicle acquisition line, and landing at a second unmanned aerial vehicle parking apron;
if the unmanned aerial vehicles are mostly parked on second unmanned aerial vehicle parking aprons, sending a return acquisition instruction to the unmanned aerial vehicle, the return acquisition instruction comprising: causing the unmanned aerial vehicle to complete take-off at the second unmanned aerial vehicle parking apron, road video acquisition along the unmanned aerial vehicle acquisition line, and landing at the first unmanned aerial vehicle parking apron.
5. The system for processing garden garbage according to claim 4, wherein the sending a video acquisition command to the unmanned aerial vehicle specifically comprises:
judging whether the video acquisition instruction is an outbound acquisition instruction or a return acquisition instruction, counting the number M₁ of unmanned aerial vehicles available for acquisition, calculating the flight index of each unmanned aerial vehicle, and arranging the unmanned aerial vehicles in order of flight index from high to low to generate an acquisition unmanned aerial vehicle set;
the unmanned aerial vehicle flight index is calculated by the following formula (given in the source as an image and not reproduced here):
wherein F represents the unmanned aerial vehicle flight index, B represents the remaining battery capacity, D represents the mileage already flown, w₁ and w₂ represent adjustment coefficients, and w₁ + w₂ = 1;
acquiring the data of the roads provided with garbage cans to be processed, taking each independent road, or a combination of roads, as an acquisition subtask, counting the number M₂ of acquisition subtasks, calculating the coverage flight distance of each acquisition subtask, and arranging them from longest to shortest to generate an acquisition task set;
when the number M₁ of unmanned aerial vehicles is greater than or equal to the number M₂ of acquisition subtasks and the video acquisition instruction is an outbound acquisition instruction, selecting M₂ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, distributing the video acquisition tasks correspondingly according to the acquisition task set, acquiring the positions of all garbage cans on the roads with garbage cans to be processed, and selecting, for each unmanned aerial vehicle executing an acquisition subtask, the second unmanned aerial vehicle parking apron of the last garbage can on its flight as its landing point, M₂ landing points being selected in total;
when the number M₁ of unmanned aerial vehicles is greater than or equal to the number M₂ of acquisition subtasks and the video acquisition instruction is a return acquisition instruction, selecting M₂ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, distributing the video acquisition tasks correspondingly according to the acquisition task set, and sending a direct return instruction to the remaining unmanned aerial vehicles, all unmanned aerial vehicles taking the first unmanned aerial vehicle parking apron as the landing point;
when the number M₁ of unmanned aerial vehicles is smaller than the number M₂ of acquisition subtasks and the video acquisition instruction is an outbound acquisition instruction, selecting the M₁ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, distributing video acquisition tasks correspondingly according to the acquisition task set, then redistributing tasks according to task distance until the remaining M₂ − M₁ acquisition subtasks are all distributed, acquiring the positions of all garbage cans on the roads with garbage cans to be processed, and selecting, for each unmanned aerial vehicle after it completes all the tasks on its task list, the second unmanned aerial vehicle parking apron of the last garbage can on its flight as its landing point, M₁ landing points being selected in total;
when the number M₁ of unmanned aerial vehicles is smaller than the number M₂ of acquisition subtasks and the video acquisition instruction is a return acquisition instruction, selecting the M₁ unmanned aerial vehicles from the acquisition unmanned aerial vehicle set in order, distributing video acquisition tasks correspondingly according to the acquisition task set, then redistributing tasks according to task distance until the remaining M₂ − M₁ acquisition subtasks are all distributed, all unmanned aerial vehicles taking the first unmanned aerial vehicle parking apron as the landing point.
6. The system of claim 1, wherein the garbage monitoring platform further comprises an RFID reader, an intelligent door lock, and a status indicator light electrically connected to the processor, respectively, the RFID reader being disposed outside the garbage can for sensing the RFID tag presented by the cleaning end, the status indicator light comprising a release prohibition indicator light and a release permission indicator light, the processor further being configured to control the intelligent door lock to close the garbage can and control the release prohibition indicator light to light when detecting that the garbage height of the garbage can to be treated reaches the second height threshold, and to control the intelligent door lock to open the garbage can and control the release permission indicator light to light when sensing the target RFID tag.
7. The system for processing garden garbage according to claim 1, wherein the fusing of the road images of the blurred region to optimize the image definition specifically comprises:
acquiring a plurality of image frames covering the blurred region, selecting the two image frames with the lowest blur degree, and extracting the feature points of the images by using the SIFT algorithm;
confirming matching pairs in the blurred region through the feature descriptors, and selecting an optimal homography matrix from the matching points;
randomly selecting 4 matching points to compute a homography, and judging whether the remaining matching points satisfy it; if so, incrementing a count value kept for each point to mark the number of times it has been satisfied, calculating the registration error sum Fᵢ, taking the stably matched feature points as an inlier set, and updating the homography with the inlier set;
if the count value of a feature point is greater than a preset multiple N of the average count value over all feature points, the feature point is marked invalid, Nᵢ is set to the total number of feature points and the inlier set is updated; otherwise, the process returns to the step of randomly selecting 4 matching points to compute a homography and iterates;
if the average count value over all matching points is greater than a preset m, the iteration exits; otherwise, it is judged whether the number of all matching points multiplied by a preset adjustment factor is greater than Nᵢ; if so, the homography is added to the candidate set, otherwise it is discarded;
after a plurality of candidate homography matrices are obtained, the Euclidean distance Dᵢ between the feature points is calculated for each candidate homography matrix, and the optimal homography matrix is selected according to max(Dᵢ/Nᵢ) and min(Fᵢ/Nᵢ);
the fusion of the two image frames is completed through the optimal homography matrix using a multi-level fusion strategy: a first Euclidean distance threshold D₁ between feature points is set, and the pixel values of feature points whose distance is smaller than D₁ are fused by weighted averaging to form a local fusion image; a second Euclidean distance threshold D₂ is set, and the pixel values of feature points whose distance is smaller than D₂ but larger than D₁ are fused by weighted averaging to form a wider-range fusion image; a third Euclidean distance threshold D₃ is set, and the pixel values of feature points whose distance is smaller than D₃ but larger than D₂ are fused by weighted averaging to form an overall fusion image;
and sharpening the overall fusion image based on the Laplacian operator to complete the optimization of image definition.
8. The system according to claim 1 or 7, wherein the inputting the optimized image and the other road area image sequence into the pre-trained convolutional neural network model for foreign object recognition, and determining whether the foreign object exists, specifically comprises:
acquiring an image sample of a road area image with foreign matters and a road area image without foreign matters, marking the image sample, and marking the image classification attribute;
dividing the labeled image samples into a training set, a test set and a verification set, and matching each data set with its corresponding labels;
constructing a convolutional neural network model (the formulas for this step appear only as images in the source and are not reproduced here):
calculating the convolution layer and the standard deviation of the Gaussian distribution, producing the feature map for index set (i, j), wherein I represents the input image, k represents the convolution kernel, b represents the offset, p represents the number of feature maps created by the convolution layer, and u and v represent the block sizes in the row and column directions of the feature maps, respectively;
calculating the pooling layer, the size of the pooling layer being 12 × 12;
performing vectorization and concatenation: each 24 × 24 matrix is vectorized, and the 6 resulting vectors are connected to form a long vector of length 24 × 24 × 6;
calculating the expected value of the fully connected layer, wherein f represents the input vector and W represents the weight matrix;
calculating the loss function, wherein y represents the true value;
training the convolutional neural network model on the training set, testing and verifying the trained model on the test set and the verification set, and, once the requirements are met, using the trained convolutional neural network model for foreign matter identification to judge whether a foreign matter exists.
9. A method for treating garden waste using the system for treating garden waste according to claim 8, comprising the steps of:
generating a total patrol line by preprocessing the garden line map;
Acquiring data sent by an Internet of things module, when judging that the height of garbage in a target garbage can reaches a first height threshold, taking the target garbage can as a garbage can to be processed, and when the number of the garbage cans to be processed reaches a first number threshold or when detecting that the height of garbage in the garbage can to be processed reaches a second height threshold, counting roads to which all the garbage cans to be processed belong and planning to generate a garbage cleaning initial line;
removing the garbage cleaning initial line from the total patrol line to generate an unmanned aerial vehicle acquisition line, and sending a video acquisition instruction to the unmanned aerial vehicle;
receiving the collected video transmitted by the Internet of things module, preprocessing it to identify garbage in the images, generating a garbage cleaning final line by combining the roads of the identified garbage with the garbage cleaning initial line, and sending a cleaning task to the cleaning end.
CN202311460628.5A 2023-11-04 2023-11-04 Garden garbage treatment system and method Active CN117284663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311460628.5A CN117284663B (en) 2023-11-04 2023-11-04 Garden garbage treatment system and method


Publications (2)

Publication Number Publication Date
CN117284663A (en) 2023-12-26
CN117284663B (en) 2024-05-24

Family

ID=89240949

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311460628.5A Active CN117284663B (en) 2023-11-04 2023-11-04 Garden garbage treatment system and method

Country Status (1)

Country Link
CN (1) CN117284663B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952967B (en) * 2024-03-26 2024-06-28 广东先知大数据股份有限公司 Dustbin region abnormality detection method, electronic equipment and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3673239A4 (en) * 2017-08-25 2021-06-23 Nordsense ApS Storage and collection systems and methods for use

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106081426A (en) * 2016-08-04 2016-11-09 苏州健雄职业技术学院 A kind of community based on unmanned plane intelligent garbage recovery system and method for work thereof
CN110498154A (en) * 2019-08-23 2019-11-26 中国科学院自动化研究所 Garbage cleaning device and rubbish clear up system
CN110550351A (en) * 2019-09-25 2019-12-10 哈尔滨哈工大机器人集团嘉利通科技股份有限公司 Sanitation management system of smart city
CN110641881A (en) * 2019-09-29 2020-01-03 北京智行者科技有限公司 Driverless garbage classification cleaning method
CN111591637A (en) * 2020-04-30 2020-08-28 苏州科技大学 Intelligent comprehensive management system for park garbage
WO2021218449A1 (en) * 2020-04-30 2021-11-04 苏州科技大学 Intelligent comprehensive management system for park garbage
CN114313697A (en) * 2022-01-07 2022-04-12 海南康泰旅游股份有限公司 Garbage management system for tourist attraction
CN115034415A (en) * 2022-06-30 2022-09-09 浙江立石工业互联科技有限公司 Intelligent kitchen garbage recycling Internet of things system
CN116835172A (en) * 2022-11-29 2023-10-03 马冲 Smart city garbage disposal system
CN115796804A (en) * 2023-02-07 2023-03-14 知鱼智联科技股份有限公司 Intelligent monitoring and management method and system for multidimensional linkage environmental data

Also Published As

Publication number Publication date
CN117284663A (en) 2023-12-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant