CN113378706A - Drawing system for assisting children in observing plants and learning biological diversity - Google Patents
- Publication number
- CN113378706A (application CN202110645869.1A)
- Authority
- CN
- China
- Prior art keywords
- plant
- knowledge
- children
- learning
- biodiversity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Electrically Operated Instructional Devices (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a drawing system for assisting children in observing plants and learning biodiversity, which comprises a plant recognition learning system and a children's sketching system. The plant recognition learning system recognizes photographed plants, displays the plant name and related biodiversity knowledge in a set area of the display interface, and plays them back by voice. The children's sketching system converts the photographed plant into a contour map, compares the colors of the plant photo and the drawing area while the child draws the biological features, and displays feedback information on the interface. The plant recognition learning system comprises a plant image recognition module and a plant knowledge retrieval module; the children's sketching system comprises a plant contour map generation module and an intelligent auxiliary drawing module. With this system, learning about plant biodiversity can be integrated into the process of children sketching plants from life, and guidance is provided according to the drawing object as the child draws.
Description
Technical Field
The invention belongs to the technical field of artificial intelligence application, and particularly relates to a drawing system for assisting children in observing plants and learning biodiversity.
Background
Repeated observations of nature by children can increase their knowledge and understanding of biodiversity. Sketching is one of the important ways to encourage repeated observations by children. Sketching is a quick drawing method that shows interesting features of what is being observed, encouraging participants to repeat observations and comparisons.
However, sketching has a high threshold for children, because children often cannot correctly grasp the morphology and color of the drawn object. There is also no existing tool or system that helps children draw from life with the goal of learning biodiversity.
With the development of artificial intelligence, image recognition technology and knowledge or expert systems are increasingly widely applied. Many systems for plant or biological identification have emerged; they use neural networks to learn from large numbers of labeled pictures, build a classification/regression output layer, and map its results into an expert-system database.
For example, Chinese patent publication No. CN103902996A discloses a diversified plant mobile-phone APP identification method based on image identification technology. A plant expert database, a pattern identification module, and a link between the two are constructed in a mobile-phone APP; image information of a plant is collected by the phone camera, identified by the pattern identification module, and queried in the plant expert database by plant name. Images that the pattern identification module cannot identify are published by the APP to await an expert's answer.
Chinese patent publication No. CN110555416A discloses a plant identification method and device, including: step S1, acquiring a plant image containing a target plant to be identified; step S2, determining the attribute of the target plant according to the attribute information of the plant image; step S3, calling a plant identification model associated with the attribute of the target plant from a plurality of plant identification models associated with different attributes of the plant to identify the plant image, and determining the plant information of the target plant, wherein the plant identification model is a neural network model.
However, these applications and methods neither incorporate the learning of plant biodiversity into the process of children's plant drawing, nor provide drawing guidance based on the drawing object while children draw from life.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a drawing system for assisting children in observing plants and learning biodiversity, which integrates learning about plant biodiversity into the process of children sketching plants from life and provides guidance according to the drawing object while the child draws.
A drawing system for assisting children in observing plants and learning biological diversity comprises a plant recognition learning system and a children sketching drawing system;
the plant recognition learning system is used for recognizing the shot plant photos, displaying the plant names and the related biodiversity knowledge in a set area of a display interface and playing the plant names and the related biodiversity knowledge through voice;
the children painting-from-life drawing system is used for converting a shot plant photo into a contour map, comparing colors of the plant photo and a drawing area when the children draw biological characteristics, and feeding back and displaying information on an interface.
Furthermore, the plant identification learning system comprises a plant image identification module and a plant knowledge retrieval module; the children sketching drawing system comprises a plant outline drawing generation module and an intelligent auxiliary drawing module;
the plant image identification module comprises a convolutional neural network and is used for carrying out feature identification and classification on the shot plant photos;
the plant knowledge retrieval module is used for retrieving the plant names identified by the plant image recognition module to obtain related biodiversity encyclopedia knowledge, and organizing the plant knowledge base using a knowledge graph;
the plant contour map generation module comprises a generative adversarial network and a generation policy network, wherein the generative adversarial network is used for extracting the contour of the plant photo and generating a contour map in a simple-stroke style;
the intelligent auxiliary drawing module is used for comparing colors of the plant image and the interface canvas area, analyzing color deviation by utilizing a machine vision technology and outputting deviation data.
Further, the plant image identification module is constructed as follows:
a1, collecting data of different plant images to form a plant image data set;
a2, constructing a plant image recognition model based on a convolutional neural network, wherein the convolutional neural network adopts the MobileNet-v3 model, reduces the computation of the model through lightweight network design, model pruning and knowledge distillation, and adjusts the output-layer architecture to a Gaussian mixture model with Softmax logistic regression;
step a3, transfer learning is performed on the convolutional neural network with the plant image data set, and a data enhancement method is adopted to avoid overfitting during training.
In step a3, the data enhancement method includes:
carrying out random scaling on the size of the plant image, and expanding the size of a data set of model training;
and blurring the plant images with an erosion algorithm to increase the noise of the data set and thereby improve the robustness of model training.
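The two augmentations above can be sketched in plain NumPy. This is a minimal illustration, not the patent's implementation: the function names are hypothetical, scaling is done by nearest-neighbour index sampling, and the erosion is a simple minimum filter standing in for the morphological erosion the text mentions.

```python
import numpy as np

def random_scale(img, rng, lo=0.8, hi=1.2):
    """Randomly rescale a grayscale image by nearest-neighbour sampling."""
    h, w = img.shape[:2]
    s = rng.uniform(lo, hi)
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    rows = np.arange(nh) * h // nh   # source row index for each output row
    cols = np.arange(nw) * w // nw
    return img[rows][:, cols]

def erode(img, k=3):
    """Grayscale morphological erosion (k x k minimum filter) used as a blurring/noising step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out
```

In practice a library routine such as OpenCV's erosion would replace the explicit loops; the sketch only shows the operations the training pipeline applies.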
Further, the construction process of the plant knowledge retrieval module is as follows:
b1, carrying out encyclopedic data collection on the plant image data in the plant image identification module to obtain a plant knowledge data set;
b2, constructing a knowledge graph based on the plant knowledge data set, and establishing a knowledge data graph for each plant through Google's Knowledge Graph interface;
and b3, establishing a knowledge graph-based retrieval system, and retrieving the plant knowledge data by using a semi-supervised community discovery algorithm.
The specific process of the step b3 is as follows:
step b3-1, establishing the transfer matrix formula:

T_ij = e^(-d_ij^2 / σ^2)

where d_ij is the Euclidean distance between two data points, σ is a parameter to be updated after random initialization, and e is the base of the natural logarithm.
Step b3-2, classifying the plant knowledge data set collected in step b 1: initializing unlabeled sample data randomly; reserving a label for the labeled sample data;
b3-3, calculating the state quantity by using a transfer matrix formula;
step b3-4, updating and normalizing the transition matrix by using the state quantity;
and step b3-5, repeating the step b3-3 and the step b3-4 until convergence.
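Steps b3-1 through b3-5 describe a standard label-propagation scheme. The following is a minimal sketch under that reading (function name and the uniform initialization of unlabeled rows are illustrative assumptions, and -1 marks unlabeled samples):

```python
import numpy as np

def label_propagation(X, y, sigma=1.0, iters=200):
    """Semi-supervised label propagation: X is (n, d) features,
    y is (n,) labels with -1 for unlabeled samples."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances
    T = np.exp(-d2 / (sigma ** 2))                       # transfer (affinity) matrix
    T = T / T.sum(axis=1, keepdims=True)                 # row-normalise
    classes = np.unique(y[y >= 0])
    F = np.full((n, len(classes)), 1.0 / len(classes))   # init for unlabeled rows
    labeled = y >= 0
    F[labeled] = 0.0
    F[labeled, np.searchsorted(classes, y[labeled])] = 1.0  # one-hot known labels
    for _ in range(iters):
        F = T @ F                                        # propagate state quantities
        F[labeled] = 0.0                                 # clamp (reserve) known labels
        F[labeled, np.searchsorted(classes, y[labeled])] = 1.0
        F = F / F.sum(axis=1, keepdims=True)             # normalise
    return classes[F.argmax(axis=1)]
```

A fixed iteration count stands in for the convergence test of step b3-5; a production version would stop when F changes by less than a tolerance.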
Further, the plant contour map generating module is constructed as follows:
step c1, using Amazon Mechanical Turk to collect sketch strokes drawn by users, wherein the database contains pictures in a number of different categories, 5 pictures are selected per category, and the resulting contour sketch pictures form the contour sketch data set;
step c2, constructing a contour map generation model based on the generated countermeasure network, optimizing and generating a cGAN framework of the countermeasure network and modifying a loss function according to the characteristics of the sparsity of the contour map;
and c3, training the generative adversarial network with the contour sketch data set.
In step c2, optimizing the cGAN framework of the generative adversarial network comprises optimizing via the following expression:

L_cGAN(x, y) = min_G max_D E_(x,y)[log D(x, y)] + E_x[log(1 - D(x, G(x)))]

where x is the condition, y is the mapped picture, z is the originally input noise (omitted here), G denotes the generator, D denotes the discriminator, G(x) denotes a sample generated by the generator under condition x, D(x, y) denotes the probability that the discriminator judges the sample (x, y) to be real, and E denotes the expectation.
Modifying the loss function according to the sparsity of the contour map means that one input has several reasonable output contour maps, and the loss function is defined as:

L(x_i) = (1/M) Σ_(j=1..M) L_cGAN(x_i, y_j) + λ · min_j L_cGAN(x_i, y_j)

where x is the condition, y_j is a mapped output contour map, M is the total number of output samples, λ is a ratio hyperparameter (initialized to 0.01 during training), and L_cGAN(x_i, y_j) denotes the generative-adversarial-network loss of sample y_j generated under condition x_i. The first term of the loss function is an average, so that the generative adversarial network treats all target contour maps equally; the second term is a minimum, which lets the network tend to produce the most effective picture and prevents blurring of the picture.
Further, the construction process of the intelligent auxiliary drawing module is as follows:
step d1, constructing the drawing system with JavaScript and p5.js;
step d2, comparing the user drawing content with the photo, comparing the hexadecimal color values of the photo and the interface drawing area by using OpenCV, and calculating the Euclidean distance;
and d3, providing a prompt for correcting errors and drawing instructions, and presenting the comparative data analyzed in the step d2 in an interface.
The specific process of the step d2 is as follows:
step d2-1, selecting a minimum 16 × 16 calculation matrix, and extracting the average value of the hexadecimal color of each unit by using OpenCV;
d2-2, calculating the Euclidean distance between each unit photo and the interface drawing area;
and d2-3, comparing with the threshold value, wherein the Euclidean distance takes 0.2 as the threshold value, and the cells of the calculation matrix smaller than 0.2 represent similar colors, otherwise, the colors are different.
Compared with the prior art, the invention has the following beneficial effects:
the system provided by the invention combines plant image recognition, plant knowledge retrieval, plant outline map generation and intelligent auxiliary drawing, can facilitate children to recognize different plants in nature, converts photos into outline maps, observes and draws plants ahead of sight on the basis of the outline maps, learns related biodiversity knowledge and learns during observation.
Drawings
FIG. 1 is a flow chart of the operation of the various modules of the painting system of the present invention for assisting children in observing plants and learning biodiversity;
FIG. 2 is a flow chart of the plant identification learning system of the present invention;
FIG. 3 is a flow chart of the operation of the children sketching system according to the present invention;
FIG. 4 is a structural diagram of the generative adversarial network of the plant contour map generation module according to the present invention;
FIG. 5 is an exemplary diagram of a result of generating a profile map according to an embodiment of the present invention;
FIG. 6 is an initial interface diagram of the painting system of the present invention;
FIG. 7 is a diagram of a guidance interface of the painting system of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
As shown in FIG. 1, a painting system for assisting children in observing plants and learning biodiversity comprises a plant recognition learning system and a children painting system.
The plant identification learning system comprises a plant image identification module and a plant knowledge retrieval module; the children sketching drawing system comprises a plant outline drawing generation module and an intelligent auxiliary drawing module.
The plant image recognition module identifies the biological characteristics of a plant and classifies plant images using a convolutional neural network with logistic regression.
The plant knowledge retrieval module retrieves biodiversity encyclopedia knowledge of the plant and organizes the plant knowledge base using a knowledge graph.
The plant contour map generation module generates a plant contour map from the plant image, extracting the image contour with a generative adversarial network to produce a contour map in a simple-stroke style.
And the intelligent auxiliary drawing module is used for comparing colors of the plant image and the interface canvas area, analyzing color deviation by utilizing a machine vision technology and outputting deviation data.
As shown in fig. 2, the plant recognition learning system matches the plant image recognition module with the plant knowledge retrieval module for recognizing the photographed plant photos, presenting the plant names and the related biodiversity knowledge in the setting area of the display interface and playing them by voice.
As shown in fig. 3, the children sketching drawing system matches the plant outline generating module with the intelligent auxiliary drawing module, and is used for converting the shot plant photos into the outline, comparing the colors of the plant photos and the drawing area when the children draw the biological characteristics, and feeding back and displaying information on the interface.
Specifically, when constructing the plant image recognition module, the method includes:
step a1, collection of the plant image dataset: images of 98 plant species were collected from the iNaturalist website using the urllib and selenium libraries in Python, for a total of 7028 pictures.
Step a2, constructing a convolutional-neural-network-based plant image recognition algorithm: following the MobileNet-v3 model proposed by Google, the computation of the model is reduced through lightweight network design, model pruning and knowledge distillation, and the output-layer architecture is adjusted to a Gaussian mixture model with Softmax logistic regression.
The lightweight network design means that the network calculation amount is reduced by using technologies such as Group convolution, 1x1 convolution and the like, and meanwhile, the accuracy of the network is ensured as much as possible; model pruning means that a large network always has certain redundancy, and the calculated amount of the network is reduced by pruning out the redundant part; knowledge distillation refers to the fact that a large model is used for helping a small model to learn, and the accuracy of a learning model is improved.
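The knowledge-distillation idea described above (a large teacher model guiding a small student) is commonly implemented as a blended loss. This NumPy sketch is an assumption about the form used, not the patent's code; the temperature T, blend weight alpha, and function names are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax, numerically stabilised."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend of soft-target cross-entropy (teacher -> student, scaled by T^2)
    and ordinary hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_t * log_p_s).sum(-1).mean() * (T ** 2)
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(labels)), labels].mean()
    return alpha * soft + (1 - alpha) * hard
```

The student minimises this combined loss, so it learns both the true labels and the teacher's softened class probabilities.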
Step a3, transfer learning is performed on the convolutional neural network with the plant image data set, and a data enhancement method is adopted to avoid overfitting during training.
When constructing the plant knowledge retrieval module, the method comprises the following steps:
step b1, collecting the plant knowledge data set: encyclopedia data were crawled from Wikipedia for the names of the 98 plant species using the urllib and selenium libraries in Python.
And b2, constructing a knowledge graph based on the plant knowledge data set, and establishing knowledge data graphs for the 98 plants through Google's Knowledge Graph interface.
And b3, establishing a knowledge graph-based retrieval system, and retrieving the plant knowledge data by using a semi-supervised community discovery algorithm.
When the plant contour map generation module is constructed, the method comprises the following steps:
step c1, collecting the contour sketch stroke data set: Amazon Mechanical Turk was used to collect sketch strokes drawn by users; the database contains pictures in 1000 different categories, 5 pictures are selected per category, giving 5000 contour sketch pictures in total;
and c2, constructing a contour map generation algorithm based on a generative adversarial network, optimizing the cGAN framework and modifying the loss function according to the sparsity of the contour map.
Fig. 4 shows the structure of the generative adversarial network employed in the present invention. cGAN is an improvement on GAN: a conditional generative model is obtained by adding extra condition information to the generator and discriminator of the original GAN.
Optimizing the cGAN framework of the generative adversarial network means optimizing via the following expression:

L_cGAN(x, y, z) = min_G max_D E_(x,y)[log D(x, y)] + E_(x,z)[log(1 - D(x, G(x, z)))]
where x is the condition, y is the mapped picture, and z is the originally input noise; in general z can be neglected, i.e.:

L_cGAN(x, y) = min_G max_D E_(x,y)[log D(x, y)] + E_x[log(1 - D(x, G(x)))]
Modifying the loss function according to the sparsity of the contour map means that one input has several reasonable output contour maps, and the loss function is defined as:

L(x_i) = (1/M) Σ_(j=1..M) L_cGAN(x_i, y_j) + λ · min_j L_cGAN(x_i, y_j)

The first term is an average, so that the generative adversarial network treats all target contour maps equally; the second term is a minimum, which lets the network tend to produce the most effective picture and prevents blurring of the picture.
Step c3, training the generative adversarial network with the contour sketch data set, specifically:

step c3-1, building the GAN-based contour sketch generation model using the PyTorch framework;
step c3-2, the model has 4,166,135 trainable parameters in total;
step c3-3, selecting 4400 training set samples, 300 verification set samples and 300 test set samples;
step c3-4, the batch size was set to 100 and 8000 batches were trained in total, with an average time of 2.39 s per batch;

step c3-5, during training, each batch counts as one step, and testing and validation are performed every 200 steps.
Constructing the intelligent auxiliary drawing module includes:

Step d1, building the drawing system using JavaScript and p5.js.
Step d2, comparing the user drawing content with the photo, comparing the hexadecimal color values of the photo and the interface drawing area by using OpenCV, and calculating the Euclidean distance; the method specifically comprises the following steps:
step d2-1, selecting a minimum 16 × 16 calculation matrix, and extracting the average value of the hexadecimal color of each unit by using OpenCV;
d2-2, calculating the Euclidean distance between each unit photo and the interface drawing area;
and d2-3, comparing with the threshold value, wherein the Euclidean distance takes 0.2 as the threshold value, and the cells of the calculation matrix smaller than 0.2 represent similar colors, otherwise, the colors are different.
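Steps d2-1 through d2-3 can be sketched as a grid-wise colour comparison. This NumPy illustration stands in for the OpenCV-based implementation; images are assumed to be RGB arrays with channel values normalised to [0, 1], and the function name is hypothetical:

```python
import numpy as np

def color_feedback(photo, canvas, cells=16, threshold=0.2):
    """Split both RGB images into a cells x cells grid, compare the mean
    colour of each corresponding cell by Euclidean distance, and flag
    cells whose distance is below the threshold as similar."""
    h, w, _ = photo.shape
    ch, cw = h // cells, w // cells
    dist = np.zeros((cells, cells))
    similar = np.zeros((cells, cells), dtype=bool)
    for i in range(cells):
        for j in range(cells):
            a = photo[i*ch:(i+1)*ch, j*cw:(j+1)*cw].reshape(-1, 3).mean(0)
            b = canvas[i*ch:(i+1)*ch, j*cw:(j+1)*cw].reshape(-1, 3).mean(0)
            dist[i, j] = np.linalg.norm(a - b)        # Euclidean colour distance
            similar[i, j] = dist[i, j] < threshold    # step d2-3 comparison
    return dist, similar
```

The returned mask drives the interface feedback: similar cells can be outlined as correctly coloured, the rest prompt the child to adjust the colour.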
And d3, providing a prompt for correcting errors and drawing instructions, and presenting the comparative data analyzed in the step d2 in an interface.
Fig. 5 is an exemplary diagram of a contour diagram generation result in an embodiment of the present invention, where the left diagram is an original picture, and the right diagram is a contour diagram generation result.
As shown in FIG. 6, which is an initial interface diagram of the painting system of the present invention, a child can select a painting tool in the painting tool area 1 and adjust a color in the color selection area 2. The child may then perform a sketching drawing in drawing area 3. The initial interface will display the outline 4 generated from the picture of the plant taken by the child, the information box 5 on the left of the interface displays the guide text for the drawing step, and the information box 6 on the bottom of the interface displays the biodiversity knowledge of the drawn plant.
Fig. 7 is a guidance interface diagram of the drawing system according to the present invention. The system compares the hexadecimal color values of the photograph and the drawing area and calculates the Euclidean distance between the photograph and the drawing area. The areas smaller than the threshold are outlined by a dashed box 7 on the interface and the correct color is indicated on the color wheel 2.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.
Claims (10)
1. A drawing system for assisting children in observing plants and learning biodiversity is characterized by comprising a plant recognition learning system and a children sketching drawing system;
the plant recognition learning system is used for recognizing the shot plant photos, displaying the plant names and the related biodiversity knowledge in a set area of a display interface and playing the plant names and the related biodiversity knowledge through voice;
the children painting-from-life drawing system is used for converting a shot plant photo into a contour map, comparing colors of the plant photo and a drawing area when the children draw biological characteristics, and feeding back and displaying information on an interface.
2. The system for assisting children in observing plants and learning biodiversity according to claim 1, wherein the plant recognition learning system comprises a plant image recognition module and a plant knowledge retrieval module; the children sketching drawing system comprises a plant outline drawing generation module and an intelligent auxiliary drawing module;
the plant image identification module comprises a convolutional neural network and is used for carrying out feature identification and classification on the shot plant photos;
the plant knowledge retrieval module is used for retrieving the plant names identified by the plant image recognition module to obtain related biodiversity encyclopedia knowledge, and organizing the plant knowledge base using a knowledge graph;
the plant contour map generation module comprises a generative adversarial network and a generation policy network, wherein the generative adversarial network is used for extracting the contour of the plant photo and generating a contour map in a simple-stroke style;
the intelligent auxiliary drawing module is used for comparing colors of the plant image and the interface canvas area, analyzing color deviation by utilizing a machine vision technology and outputting deviation data.
3. The system for assisting children in observing plants and learning biodiversity according to claim 2, wherein the plant image recognition module is constructed as follows:
a1, collecting data of different plant images to form a plant image data set;
a2, constructing a plant image recognition model based on a convolutional neural network, wherein the convolutional neural network adopts the MobileNet-v3 model, reduces the computation of the model through lightweight network design, model pruning and knowledge distillation, and adjusts the output-layer architecture to a Gaussian mixture model with Softmax logistic regression;
step a3, transfer learning is performed on the convolutional neural network with the plant image data set, and a data enhancement method is adopted to avoid overfitting during training.
4. The system for assisting children in observing plants and learning biodiversity as claimed in claim 3, wherein the step a3, the data enhancement method comprises:
carrying out random scaling on the size of the plant image, and expanding the size of a data set of model training;
and blurring the plant images with an erosion algorithm to increase the noise of the data set and thereby improve the robustness of model training.
5. The system for assisting children in observing plants and learning biodiversity according to claim 2, wherein the plant knowledge retrieval module is constructed as follows:
b1, carrying out encyclopedic data collection on the plant image data in the plant image identification module to obtain a plant knowledge data set;
b2, constructing a knowledge graph based on the plant knowledge data set, and establishing a knowledge data graph for each plant through Google's Knowledge Graph interface;
and b3, establishing a knowledge graph-based retrieval system, and retrieving the plant knowledge data by using a semi-supervised community discovery algorithm.
6. The system for assisting children in observing plants and learning biodiversity as claimed in claim 5, wherein the step b3 is performed by the following steps:
step b3-1, establishing the transfer matrix formula:

T_ij = e^(-d_ij^2 / σ^2)

where d_ij is the Euclidean distance between two data points, σ is a parameter to be updated after random initialization, and e is the base of the natural logarithm;
step b3-2, classifying the plant knowledge data set collected in step b 1: initializing unlabeled sample data randomly; reserving a label for the labeled sample data;
b3-3, calculating the state quantity by using a transfer matrix formula;
step b3-4, updating and normalizing the transition matrix by using the state quantity;
and step b3-5, repeating the step b3-3 and the step b3-4 until convergence.
7. The system for assisting children in observing plants and learning biodiversity according to claim 2, wherein the plant outline generating module is constructed as follows:
step c1, using Amazon Mechanical Turk to collect sketch strokes drawn by users, wherein the database contains pictures in a number of different categories, 5 pictures are selected per category, and the resulting contour sketch pictures form the contour sketch data set;
step c2, constructing a contour map generation model based on the generated countermeasure network, optimizing and generating a cGAN framework of the countermeasure network and modifying a loss function according to the characteristics of the sparsity of the contour map;
and c3, training the generative adversarial network with the contour sketch data set.
8. The system for assisting children in observing plants and learning biodiversity as claimed in claim 7, wherein in step c2, optimizing the cGAN framework of the generative adversarial network comprises optimizing via the following expression:

L_cGAN(x, y) = min_G max_D E_(x,y)[log D(x, y)] + E_x[log(1 - D(x, G(x)))]

where x is the condition, y is the mapped picture, z is the originally input noise (omitted here), G denotes the generator, D denotes the discriminator, G(x) denotes a sample generated by the generator under condition x, D(x, y) denotes the probability that the discriminator judges the sample (x, y) to be real, and E denotes the expectation;
modifying the loss function according to the sparsity of the contour map means that one input has several reasonable output contour maps, and the loss function is defined as:

L(x_i) = (1/M) Σ_(j=1..M) L_cGAN(x_i, y_j) + λ · min_j L_cGAN(x_i, y_j)

where x is the condition, y_j is a mapped output contour map, M is the total number of output samples, λ is a ratio hyperparameter, and L_cGAN(x_i, y_j) denotes the generative-adversarial-network loss of sample y_j generated under condition x_i; the first term of the loss function is an average, so that the generative adversarial network treats all target contour maps equally; the second term is a minimum, which lets the network tend to produce the most effective picture and prevents blurring of the picture.
9. The system for assisting children in observing plants and learning biodiversity according to claim 2, wherein the intelligent assisted painting module is constructed as follows:
step d1, building the drawing system with JavaScript and p5.js;
step d2, comparing the user's drawing with the photo: using OpenCV to compare the hexadecimal color values of the photo and of the interface drawing area, and calculating their Euclidean distance;
and d3, providing error-correction prompts and drawing instructions, and presenting the comparison data analyzed in step d2 in the interface.
10. The system for assisting children in observing plants and learning biodiversity as claimed in claim 9, wherein the step d2 comprises the following steps:
step d2-1, dividing the image into a minimum 16 × 16 calculation matrix and extracting the mean hexadecimal color of each cell with OpenCV;
step d2-2, calculating the Euclidean distance between each cell of the photo and the corresponding cell of the interface drawing area;
and d2-3, comparing against a threshold: the Euclidean distance is thresholded at 0.2, cells of the calculation matrix with a distance smaller than 0.2 represent similar colors, and otherwise the colors are regarded as different.
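A self-contained sketch of steps d2-1 to d2-3 in plain Python (the claims use OpenCV; the grid-splitting helper, the (r, g, b)-tuple image layout, and the normalization of the Euclidean distance to [0, 1] before applying the 0.2 threshold are assumptions made here for illustration):

```python
import math

def cell_means(pixels, rows=16, cols=16):
    # Step d2-1: split a 2-D list of (r, g, b) tuples into a 16 x 16
    # calculation matrix and return the mean color of each cell.
    h, w = len(pixels), len(pixels[0])
    means = []
    for i in range(rows):
        for j in range(cols):
            block = [pixels[y][x]
                     for y in range(i * h // rows, (i + 1) * h // rows)
                     for x in range(j * w // cols, (j + 1) * w // cols)]
            means.append(tuple(sum(c[k] for c in block) / len(block)
                               for k in range(3)))
    return means

def similar(c1, c2, threshold=0.2):
    # Steps d2-2 and d2-3: Euclidean distance between two mean colors,
    # normalized by the maximum possible RGB distance, then thresholded.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
    return d / math.sqrt(3 * 255 ** 2) < threshold
```

The photo and the drawing area would be gridded the same way and their cells compared pairwise; cells whose normalized distance falls below 0.2 are reported to the child as matching colors.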
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110645869.1A CN113378706B (en) | 2021-06-10 | 2021-06-10 | Drawing system for assisting children in observing plants and learning biological diversity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113378706A true CN113378706A (en) | 2021-09-10 |
CN113378706B CN113378706B (en) | 2022-08-23 |
Family
ID=77573688
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110645869.1A Active CN113378706B (en) | 2021-06-10 | 2021-06-10 | Drawing system for assisting children in observing plants and learning biological diversity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113378706B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114783242A (en) * | 2022-02-28 | 2022-07-22 | 杭州小伴熊科技有限公司 | Drawing teaching method and device for online education |
CN115101047A (en) * | 2022-08-24 | 2022-09-23 | 深圳市人马互动科技有限公司 | Voice interaction method, device, system, interaction equipment and storage medium |
CN115686222A (en) * | 2022-11-18 | 2023-02-03 | 青岛酒店管理职业技术学院 | Virtual portrait drawing system based on mixed reality technology |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103473550A (en) * | 2013-09-23 | 2013-12-25 | 广州中医药大学 | Plant blade image segmentation method based on Lab space and local area dynamic threshold |
CN106355973A (en) * | 2016-10-28 | 2017-01-25 | 厦门优莱柏网络科技有限公司 | Method and device for guiding drawing |
CN107016713A (en) * | 2017-03-03 | 2017-08-04 | 北京光年无限科技有限公司 | Towards the vision data treating method and apparatus of intelligent robot |
CN107392238A (en) * | 2017-07-12 | 2017-11-24 | 华中师范大学 | Outdoor knowledge of plants based on moving-vision search expands learning system |
CN110515531A (en) * | 2019-08-28 | 2019-11-29 | 广东工业大学 | A kind of intelligence auxiliary painting system |
US20200401883A1 (en) * | 2019-06-24 | 2020-12-24 | X Development Llc | Individual plant recognition and localization |
Also Published As
Publication number | Publication date |
---|---|
CN113378706B (en) | 2022-08-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113378706B (en) | Drawing system for assisting children in observing plants and learning biological diversity | |
CN112232241B (en) | Pedestrian re-identification method and device, electronic equipment and readable storage medium | |
CN112308158A (en) | Multi-source field self-adaptive model and method based on partial feature alignment | |
CN111611924B (en) | Mushroom identification method based on deep migration learning model | |
CN110222718B (en) | Image processing method and device | |
CN106407986A (en) | Synthetic aperture radar image target identification method based on depth model | |
CN112215119B (en) | Small target identification method, device and medium based on super-resolution reconstruction | |
CN110866530A (en) | Character image recognition method and device and electronic equipment | |
CN109033107A (en) | Image search method and device, computer equipment and storage medium | |
CN112016601B (en) | Network model construction method based on knowledge graph enhanced small sample visual classification | |
CN113127737B (en) | Personalized search method and search system integrating attention mechanism | |
CN114463675B (en) | Underwater fish group activity intensity identification method and device | |
CN113159067A (en) | Fine-grained image identification method and device based on multi-grained local feature soft association aggregation | |
CN109165698A (en) | A kind of image classification recognition methods and its storage medium towards wisdom traffic | |
CN114419468A (en) | Paddy field segmentation method combining attention mechanism and spatial feature fusion algorithm | |
CN111126155B (en) | Pedestrian re-identification method for generating countermeasure network based on semantic constraint | |
CN113205103A (en) | Lightweight tattoo detection method | |
CN115331284A (en) | Self-healing mechanism-based facial expression recognition method and system in real scene | |
CN114972952A (en) | Industrial part defect identification method based on model lightweight | |
CN111310837A (en) | Vehicle refitting recognition method, device, system, medium and equipment | |
CN114492634A (en) | Fine-grained equipment image classification and identification method and system | |
CN116994295B (en) | Wild animal category identification method based on gray sample self-adaptive selection gate | |
CN117173677A (en) | Gesture recognition method, device, equipment and storage medium | |
CN115457366A (en) | Chinese herbal medicine multi-label recognition model based on graph convolution neural network | |
CN115630361A (en) | Attention distillation-based federal learning backdoor defense method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||