CN112906698B - Alfalfa plant identification method and device - Google Patents

Alfalfa plant identification method and device

Info

Publication number
CN112906698B
CN112906698B (application CN201911224021.0A)
Authority
CN
China
Prior art keywords
sub
image information
condition
information
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911224021.0A
Other languages
Chinese (zh)
Other versions
CN112906698A (en)
Inventor
徐丽君
辛晓平
杨桂霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Agricultural Resources and Regional Planning of CAAS
Original Assignee
Institute of Agricultural Resources and Regional Planning of CAAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Agricultural Resources and Regional Planning of CAAS filed Critical Institute of Agricultural Resources and Regional Planning of CAAS
Priority to CN201911224021.0A
Publication of CN112906698A
Application granted
Publication of CN112906698B
Active legal status (current)
Anticipated expiration of legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/02: Agriculture; Fishing; Forestry; Mining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Agronomy & Crop Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Animal Husbandry (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and a device for alfalfa plant identification, comprising the following steps: acquiring image information of a plant and condition information corresponding to the image information; obtaining sub-image information by proportionally cutting the image information of the plant; analyzing the sub-image information by using the condition information and a first condition to obtain a first label for the sub-image information; and determining a second condition for each piece of sub-image information from the first label, the second condition being used to analyze the sub-image information and determine whether the plant belongs to an alfalfa plant. Acquiring the condition information and cutting the whole image into sub-image information reduces the difficulty and workload of image analysis and improves accuracy; setting the condition information objectively removes a large amount of noise and provides a reference for selecting subsequent identification points; and the sub-image information makes it possible to select focused recognition regions, improving accuracy while reducing the image-processing workload.

Description

Alfalfa plant identification method and device
Technical Field
The present specification relates to a method and apparatus for alfalfa plant identification.
Background
Alfalfa is an ancient crop, commonly known as lucerne, and is a perennial flowering plant. Among its species, alfalfa (Medicago sativa) is the best known as a pasture crop. Alfalfa covers a wide range of herbaceous species, most of which grow wild. Relatively few alfalfa types are cultivated in China, with three main types in use. Alfalfa has high nutritional value and is traditionally credited with clearing the spleen and stomach, benefiting the large and small intestines, and removing bladder stones.
Plant identification is increasingly important, whether for data statistics, disease prevention, or distinguishing plant species. The type of a plant is generally identified from its outline, epidermis, and leaves. However, manual identification is extremely inefficient, consumes a great deal of manpower, material, and financial resources, and relies heavily on individual expertise.
With the development of computer technology, it has become possible to apply it to the identification of individual species. At present, features are generally segmented using computer image-processing techniques, technical parameters are derived from the segmented features, and the plant type is inferred from those parameters. This approach simply acquires and processes picture information; it does not make good use of the characteristics of the relevant plant types, such as alfalfa, to screen and discriminate the relevant information. This not only reduces recognition accuracy but also increases the difficulty of recognition.
Disclosure of Invention
In one aspect, the present disclosure provides a method of alfalfa plant identification, comprising:
acquiring image information of plants and condition information corresponding to the image information;
the sub-image information is obtained by proportional cutting from the image information of the plant;
analyzing the sub-image information by using the condition information and the first condition to obtain a first label of the sub-image information;
a second condition for each piece of sub-image information is determined from the first label, the second condition being used to analyze the sub-image information and determine whether the plant belongs to an alfalfa plant. Acquiring the condition information and cutting the whole image into sub-image information reduces the difficulty and workload of image analysis and improves accuracy; setting the condition information objectively removes a large amount of noise and provides a reference for selecting subsequent identification points; and the sub-image information makes it possible to select focused recognition regions, improving accuracy while reducing the image-processing workload. Compared with oat, alfalfa has fewer varieties in China, so the condition information can be used in advance to judge the growth stage of the alfalfa, improving processing efficiency.
Preferably, the condition information includes a generation time of the image information and a geographical location.
Preferably, the geographic location is a latitude and longitude coordinate.
Preferably, the proportional cutting is either equal-proportion cutting or cutting according to a predetermined scheme in the height direction.
Preferably, the first condition is performed by the following method: acquire the sub-image information, perform feature extraction on it with a convolution-pooling layer to obtain a feature image, and calculate the percentage of the sub-image area occupied by the feature image; if the percentage exceeds a specified threshold, mark the feature represented by the feature image as a first label. If the percentage does not exceed the specified threshold, delete the first-level feature image, continue extracting features from the remaining sub-image information to obtain the next-level feature image, calculate the percentage of the sub-image area it occupies, and add this percentage to the previously obtained ones; repeat these steps until the sum of all obtained percentages exceeds the specified threshold, at which point the features represented by all feature images contributing to the accumulated percentage are marked as first labels. By processing each piece of sub-image information in this way, the information each piece can represent is obtained quickly, and this representative information stands in for the originally acquired picture, so image processing finishes as soon as possible. Note that some information may be lost during processing because of the cutting and label selection; in practice, however, accuracy remains relatively high at a very fast recognition rate, so the slight loss of information is within an acceptable range, and if the loss needs to be reduced, this can be achieved by adjusting the proportional-cutting scheme.
Preferably, the first label comprises a leaf part, a stem part and a flower part.
Preferably, the second condition is performed as follows: the second condition consists of a plurality of second sub-methods; the second sub-methods correspond one-to-one with the first labels, and the order in which they are applied is predetermined according to the condition information.
Preferably, each second sub-method selects the sub-image information whose label is exactly the corresponding first label; if the proportion of that sub-image information recognized as belonging to an alfalfa plant exceeds a specified threshold, recognition is complete. Otherwise, all sub-image information whose labels contain that first label is selected, and if the proportion recognized as belonging to an alfalfa plant exceeds the specified threshold, recognition is complete.
Preferably, the second condition is performed by the following method: first select the sub-image information whose label is the leaf part; if the proportion recognized as belonging to an alfalfa plant exceeds a specified threshold, recognition is complete. Otherwise, select all sub-image information whose labels contain the leaf part; if the proportion recognized as belonging to an alfalfa plant exceeds the specified threshold, recognition is complete. Otherwise, select the stem part and then the flower part in turn and repeat these steps until a conclusion is reached; if no conclusion can finally be reached, output that the plant cannot be identified.
In another aspect, there is also provided an apparatus for alfalfa plant identification, comprising:
the acquisition module is used for acquiring image information of the plants and condition information corresponding to the image information;
the cutting module is used for proportionally cutting the image information of the plants to obtain sub-image information;
the label module is used for analyzing the sub-image information by utilizing the first condition to obtain a first label of the sub-image information;
and the identification module is used for determining a second condition of each piece of sub-image information through the condition information and the first label, and the second condition is used for analyzing the sub-image information to acquire whether the plant category belongs to the alfalfa plant.
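To make the division of labour between the four modules just listed concrete, the following is a minimal Python wiring sketch. All names (ConditionInfo, SubImage, AcquisitionModule, CuttingModule, LabelModule, IdentificationModule, run_pipeline) are illustrative assumptions rather than names taken from the patent, and the module bodies are left as stubs that the detailed description below fills in.

```python
# Illustrative wiring of the four modules; names and interfaces are assumptions,
# not taken from the patent, and the module bodies are stubs.
from dataclasses import dataclass, field
from typing import Any, List, Tuple


@dataclass
class ConditionInfo:
    capture_time: str   # generation time of the image information
    latitude: float     # geographic location as latitude/longitude
    longitude: float


@dataclass
class SubImage:
    pixels: Any                                       # pixel crop of the original image
    labels: List[str] = field(default_factory=list)   # first labels, e.g. "leaf"


class AcquisitionModule:
    def acquire(self, source: str) -> Tuple[Any, ConditionInfo]:
        """Acquire the image information and its condition information."""
        raise NotImplementedError


class CuttingModule:
    def cut(self, image: Any) -> List[SubImage]:
        """Proportionally cut the image into sub-image information."""
        raise NotImplementedError


class LabelModule:
    def label(self, subs: List[SubImage], cond: ConditionInfo) -> List[SubImage]:
        """Apply the first condition to attach first labels to each sub-image."""
        raise NotImplementedError


class IdentificationModule:
    def identify(self, subs: List[SubImage], cond: ConditionInfo) -> bool:
        """Apply the second condition to decide whether the plant is alfalfa."""
        raise NotImplementedError


def run_pipeline(source: str) -> bool:
    """Acquire -> cut -> label (first condition) -> identify (second condition)."""
    image, cond = AcquisitionModule().acquire(source)
    subs = LabelModule().label(CuttingModule().cut(image), cond)
    return IdentificationModule().identify(subs, cond)
```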
The application can bring the following beneficial effects:
1. compared with oat, alfalfa has fewer varieties in China, so the condition information can be used in advance to judge the growth stage of the alfalfa, improving processing efficiency;
2. acquiring the condition information and cutting the whole image into sub-image information reduces the difficulty and workload of image analysis and improves accuracy; setting the condition information objectively removes a large amount of noise and provides a reference for selecting subsequent identification points; the sub-image information makes it possible to select focused recognition regions, improving accuracy while reducing the image-processing workload;
3. by processing each piece of sub-image information, the information each piece can represent is obtained quickly, and this representative information stands in for the originally acquired picture, so image processing finishes as soon as possible. Note that some information may be lost during processing because of the cutting and label selection; in practice, however, accuracy remains relatively high at a very fast recognition rate, so the slight loss of information is within an acceptable range, and if the loss needs to be reduced, this can be achieved by adjusting the proportional-cutting scheme.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate and explain the exemplary embodiments of the present specification and their description, are not intended to limit the specification unduly. In the drawings:
FIG. 1 is a diagram illustrating the operation of an embodiment of the present disclosure;
FIG. 2 shows an apparatus for alfalfa plant identification provided in an embodiment of the present specification.
Detailed Description
The alfalfa plant identification method provided by this specification is executed by a computing device with computing capability. The image information of the plant and the corresponding condition information are input into the computing device (either manually or by other means, such as online transmission), and the computing device applies a certain algorithm to determine whether the plant belongs to an alfalfa plant. Any computing device with the above functions, such as a personal computer (PC), a mobile phone, a tablet computer, or a server, falls within the scope of the present application.
In order to enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the drawings in one or more embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
FIG. 1 shows an alfalfa plant identification process provided in an embodiment of this specification, which specifically comprises the following steps:
S101: acquiring image information of a plant and condition information corresponding to the image information;
The condition information is mainly used to narrow down the candidates. For alfalfa plants, it can at least establish the likely broad class and preliminary information such as whether the plant is in the seedling stage, growth stage, or maturity stage. The condition information includes the generation time of the image information and the geographic location.
S102: the sub-image information is obtained by proportional cutting from the image information of the plant; the proportional cutting is equal proportional cutting or cutting is carried out according to a preset scheme in the height direction;
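One possible reading of S102 in code is given below: the image array is cut into horizontal bands either in equal proportions or according to a caller-supplied height scheme. The function name cut_by_height and the example ratios are assumptions used only to illustrate the idea, not values specified by the patent.

```python
# Hedged sketch of S102: cut an image into horizontal bands, either in equal
# proportions or according to a predetermined height scheme.
from typing import List, Optional

import numpy as np


def cut_by_height(image: np.ndarray, ratios: Optional[List[float]] = None) -> List[np.ndarray]:
    ratios = ratios or [0.25, 0.25, 0.25, 0.25]    # default: equal-proportion cutting
    assert abs(sum(ratios) - 1.0) < 1e-6, "height proportions must sum to 1"
    bounds = (np.cumsum([0.0] + list(ratios)) * image.shape[0]).astype(int)
    return [image[bounds[i]:bounds[i + 1]] for i in range(len(ratios))]


# Example: a dummy 400 x 300 RGB image cut so the top band (more likely to hold
# flowers) is smaller than the middle bands (stems and leaves) -- an assumed scheme.
img = np.zeros((400, 300, 3), dtype=np.uint8)
print([band.shape for band in cut_by_height(img, ratios=[0.2, 0.3, 0.3, 0.2])])
```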
s103: analyzing the sub-image information by using the condition information and the first condition to obtain a first label of the sub-image information;
the first condition is carried out by the following method: acquiring sub-image information, carrying out feature extraction on the sub-image information by utilizing a convolution-pooling layer to obtain a feature image, calculating the percentage of the area of the sub-image information occupied by the feature image, and marking the feature represented by the feature image as a first label if the percentage exceeds a specified threshold; if the percentage does not exceed the specified threshold, continuing to extract the features of the remaining sub-image information to obtain the next-level feature image on the premise of deleting the first-level feature image, continuing to calculate the percentage of the area of the sub-image information occupied by the feature image obtained at the moment, accumulating the percentage and the percentage obtained before, and repeating the steps until the sum of all the obtained percentages exceeds the specified threshold, wherein the features represented by the feature images corresponding to the percentages to be accumulated are all marked as first labels. According to the method and the device, the information which can be represented by each piece of sub-picture information is obtained by processing each piece of sub-picture information, so that the real representative information of each piece of sub-picture information is obtained rapidly, the representative information is the representative information of the picture which is obtained originally, the picture processing is completed as soon as possible, and it is noted that in the processing process, partial information is possibly lost due to cutting and label selection, but in the using process, the accuracy is relatively high under the quite rapid recognition rate, the slight loss of the information is still within an acceptable range, and if the loss needs to be reduced, the information defect reduction purpose can be achieved by adjusting the proportion cutting mode. The first label comprises a leaf part, a rod part and a flower part.
If the threshold is 50%, when the calculated area is 45% in the analysis of the leaf, the analysis of the stem is continued, and the stem accounts for 10% of the total area, and if the sum is 55%, the label at the moment is the leaf and the stem exceeding the threshold; when the leaf is analyzed, the calculated area is 55%, and the label at this time is the leaf when the threshold is exceeded.
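The worked example above can be expressed as a short routine. In the sketch below the convolution-pooling extractor is abstracted as something that yields (feature_name, area_fraction) pairs in extraction order; that interface and the 0.50 default threshold are assumptions used to illustrate the accumulation logic, not details fixed by the patent.

```python
# Sketch of the first-condition labelling: accumulate feature-area fractions
# until the threshold is exceeded; every contributing feature becomes a label.
from typing import Iterable, List, Tuple


def assign_first_labels(features: Iterable[Tuple[str, float]],
                        threshold: float = 0.50) -> List[str]:
    labels, total = [], 0.0
    for name, fraction in features:        # in the order the extractor produces them
        labels.append(name)
        total += fraction
        if total > threshold:              # threshold exceeded: stop accumulating
            return labels
    return labels                          # threshold never reached; return what was found


# Reproduces the example above: leaf 45% alone is below 50%, so the stem's 10%
# is added; 55% exceeds the threshold and the labels are leaf and stem.
print(assign_first_labels([("leaf", 0.45), ("stem", 0.10), ("flower", 0.05)]))
# A sub-image whose leaf alone covers 55% gets the single label "leaf".
print(assign_first_labels([("leaf", 0.55)]))
```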
S104: a second condition for each piece of sub-image information is determined from the first label, the second condition being used to analyze the sub-image information and determine whether the plant belongs to an alfalfa plant. The second condition is performed according to the following method: the second condition consists of a plurality of second sub-methods; the second sub-methods correspond one-to-one with the first labels, and the order in which they are applied is predetermined according to the condition information. Each second sub-method selects the sub-image information whose label is exactly the corresponding first label; if the proportion recognized as belonging to an alfalfa plant exceeds a specified threshold, recognition is complete. Otherwise, all sub-image information whose labels contain that first label is selected, and if the proportion recognized as belonging to an alfalfa plant exceeds the specified threshold, recognition is complete.
For example, the method may first select the sub-image information whose label is the leaf; if the proportion recognized as belonging to an alfalfa plant exceeds a specified threshold, recognition is complete. Otherwise, it selects all sub-image information whose labels contain the leaf; if the proportion recognized as belonging to an alfalfa plant exceeds the specified threshold, recognition is complete. Otherwise, it selects the stem and then the flower in turn and repeats these steps until a conclusion is reached; if no conclusion can finally be reached, an unidentified conclusion is output.
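A sketch of that second-condition loop is shown below. The classifier is abstracted as a callable returning the proportion of a set of sub-images recognised as alfalfa, and the label order derived from the condition information is supplied by the caller; all names (second_condition, alfalfa_fraction, label_order) are illustrative assumptions rather than details given in the patent.

```python
# Sketch of the second condition: try each first label in the order fixed by the
# condition information; first test sub-images labelled exactly with that label,
# then all sub-images whose label set contains it.
from typing import Callable, List, Optional, Sequence


def second_condition(subs: Sequence["SubImage"],
                     label_order: Sequence[str],
                     alfalfa_fraction: Callable[[List["SubImage"]], float],
                     threshold: float = 0.50) -> Optional[bool]:
    for label in label_order:
        exact = [s for s in subs if s.labels == [label]]
        if exact and alfalfa_fraction(exact) > threshold:
            return True                    # recognition complete on this label alone
        containing = [s for s in subs if label in s.labels]
        if containing and alfalfa_fraction(containing) > threshold:
            return True                    # recognition complete on the wider set
    return None                            # no conclusion: output "cannot be identified"


# Usage sketch: in the growth stage the condition information might rank leaves first.
# result = second_condition(subs, ["leaf", "stem", "flower"], my_classifier)
```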
S105, finally, judging whether the plant is an alfalfa plant.
It is understood that the geographic location is a latitude and longitude coordinate.
In a second embodiment, as shown in FIG. 2, an apparatus for alfalfa plant identification comprises: an acquisition module 201, configured to acquire image information of a plant and condition information corresponding to the image information; a cutting module 202, configured to proportionally cut the image information of the plant to obtain sub-image information; a tag module 203, configured to analyze the sub-image information using a first condition to obtain a first label for the sub-image information; and an identification module 204, configured to determine a second condition for each piece of sub-image information from the condition information and the first label, the second condition being used to analyze the sub-image information and determine whether the plant belongs to an alfalfa plant.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many improvements to method flows can now be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit, so it cannot be said that an improvement to a method flow cannot be implemented with hardware entity modules. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD without requiring a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, such programming is now mostly implemented with "logic compiler" software rather than by manually fabricating integrated circuit chips; this software is similar to the compilers used in program development, and the source code to be compiled must be written in a particular programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can easily be obtained merely by slightly logic-programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by that (micro)processor, logic gates, switches, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, it is entirely possible to logic-program the method steps so that the controller implements the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing various functions may also be regarded as structures within the hardware component. Indeed, means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to one or more embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to across embodiments, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely one or more embodiments of the present description and is not intended to limit the present description. Various modifications and alterations to one or more embodiments of this description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like, which is within the spirit and principles of one or more embodiments of the present description, is intended to be included within the scope of the claims of the present description.

Claims (4)

1. A method of alfalfa plant identification, comprising:
acquiring image information of plants and condition information corresponding to the image information;
the sub-image information is obtained by proportional cutting from the image information of the plant;
analyzing the sub-image information by using the condition information and the first condition to obtain a first label of the sub-image information;
determining a second condition for each piece of sub-image information from the first tag, the second condition being used to analyze the sub-image information to determine whether the plant category belongs to an alfalfa plant;
the first condition is carried out by the following method: acquiring the sub-image information, performing feature extraction on the sub-image information with a convolution-pooling layer to obtain a feature image, calculating the percentage of the sub-image area occupied by the feature image, and marking the feature represented by the feature image as a first label if the percentage exceeds a specified threshold; if the percentage does not exceed the specified threshold, deleting the first-level feature image, continuing to extract features from the remaining sub-image information to obtain a next-level feature image, calculating the percentage of the sub-image area occupied by that feature image, accumulating this percentage with the previously obtained percentages, and repeating these steps until the sum of all obtained percentages exceeds the specified threshold, whereupon the features represented by all feature images contributing to the accumulated percentages are marked as first labels;
the first label comprises a leaf part, a stem part, and a flower part;
the second condition is performed according to the following method: the second condition consists of a plurality of second sub-methods, the second sub-methods correspond one-to-one with the first labels, and the order in which the second sub-methods are applied is predetermined according to the condition information;
each second sub-method selects the sub-image information whose label is the corresponding first label and, if the proportion recognized as belonging to the alfalfa plant exceeds a specified threshold, completes recognition; otherwise it selects all sub-image information whose labels contain that first label and, if the proportion recognized as belonging to the alfalfa plant exceeds the specified threshold, completes recognition;
the sub-image information whose label is the leaf part is selected first and, if the proportion recognized as belonging to the alfalfa plant exceeds a specified threshold, recognition is complete; otherwise all sub-image information whose labels contain the leaf part is selected and, if the proportion recognized as belonging to the alfalfa plant exceeds the specified threshold, recognition is complete; otherwise the stem part and the flower part are selected in turn and the steps are repeated until a conclusion is reached, and if no conclusion can finally be reached, an unidentified conclusion is output.
2. The method of claim 1, wherein the condition information includes a generation time of image information and a geographic location.
3. The method of claim 2, wherein the geographic location is a latitude and longitude coordinate.
4. The method of claim 1, wherein the proportional cutting is an equal proportional cutting or a cutting according to a predetermined scheme in a height direction.
CN201911224021.0A 2019-12-04 2019-12-04 Alfalfa plant identification method and device Active CN112906698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911224021.0A CN112906698B (en) 2019-12-04 2019-12-04 Alfalfa plant identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911224021.0A CN112906698B (en) 2019-12-04 2019-12-04 Alfalfa plant identification method and device

Publications (2)

Publication Number Publication Date
CN112906698A CN112906698A (en) 2021-06-04
CN112906698B (en) 2023-12-29

Family

ID=76104487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911224021.0A Active CN112906698B (en) 2019-12-04 2019-12-04 Alfalfa plant identification method and device

Country Status (1)

Country Link
CN (1) CN112906698B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049722B (en) * 2022-06-13 2023-03-07 中国热带农业科学院农业机械研究所 Intelligent identification method for machine-harvested sugarcane impurities

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140006440U (en) * 2013-06-19 2014-12-30 임완중 System and method for plant idendification using both images and taxonomic characters
CN107909072A (en) * 2017-09-29 2018-04-13 广东数相智能科技有限公司 A kind of vegetation type recognition methods, electronic equipment, storage medium and device
CN108256568A (en) * 2018-01-12 2018-07-06 宁夏智启连山科技有限公司 A kind of plant species identification method and device
CN208737487U (en) * 2018-08-24 2019-04-12 四川农业大学 A kind of Plant identification
CN110070101A (en) * 2019-03-12 2019-07-30 平安科技(深圳)有限公司 Floristic recognition methods and device, storage medium, computer equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10713484B2 (en) * 2018-05-24 2020-07-14 Blue River Technology Inc. Semantic segmentation to identify and treat plants in a field and verify the plant treatments

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140006440U (en) * 2013-06-19 2014-12-30 임완중 System and method for plant idendification using both images and taxonomic characters
CN107909072A (en) * 2017-09-29 2018-04-13 广东数相智能科技有限公司 A kind of vegetation type recognition methods, electronic equipment, storage medium and device
CN108256568A (en) * 2018-01-12 2018-07-06 宁夏智启连山科技有限公司 A kind of plant species identification method and device
CN208737487U (en) * 2018-08-24 2019-04-12 四川农业大学 A kind of Plant identification
CN110070101A (en) * 2019-03-12 2019-07-30 平安科技(深圳)有限公司 Floristic recognition methods and device, storage medium, computer equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Rapid recognition of space plant images based on a fully convolutional neural network; 樊帅, 王鑫, 阎镇; Computer Systems & Applications, No. 11; full text *
Research on plant leaf recognition based on a hierarchical convolutional deep learning system; 张帅, 淮永建; Journal of Beijing Forestry University, No. 9; full text *
Recognition method for key tomato organs based on dual-convolutional-chain Fast R-CNN; 周云成, 许童羽, 邓寒冰, 苗腾; Journal of Shenyang Agricultural University, No. 1; full text *
Plant image recognition method for complex backgrounds based on effective region screening; 宋晓宇, 金莉婷, 赵阳, 孙越, 刘童; Laser & Optoelectronics Progress, No. 4; full text *

Also Published As

Publication number Publication date
CN112906698A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN113095124B (en) Face living body detection method and device and electronic equipment
CN109086961A (en) A kind of Information Risk monitoring method and device
CN117010571A (en) Traffic prediction method, device and equipment
CN112784857B (en) Model training and image processing method and device
CN115618964B (en) Model training method and device, storage medium and electronic equipment
CN113887608B (en) Model training method, image detection method and device
CN116343314B (en) Expression recognition method and device, storage medium and electronic equipment
CN112906698B (en) Alfalfa plant identification method and device
CN115210773A (en) Method for detecting object in real time by using object real-time detection model and optimization method
CN109886804B (en) Task processing method and device
CN116994007B (en) Commodity texture detection processing method and device
CN117113174A (en) Model training method and device, storage medium and electronic equipment
CN108804563A (en) A kind of data mask method, device and equipment
CN114926706B (en) Data processing method, device and equipment
CN115082803B (en) Cultivated land abandoned land monitoring method and device based on vegetation season change and storage medium
CN112906437B (en) Oat plant identification method and device
CN116805393A (en) Hyperspectral image classification method and system based on 3DUnet spectrum-space information fusion
CN114841955A (en) Biological species identification method, device, equipment and storage medium
CN111242195B (en) Model, insurance wind control model training method and device and electronic equipment
CN114359935A (en) Model training and form recognition method and device
CN117011718B (en) Plant leaf fine granularity identification method and system based on multiple loss fusion
CN111291645A (en) Identity recognition method and device
CN113673601B (en) Behavior recognition method and device, storage medium and electronic equipment
CN117455015B (en) Model optimization method and device, storage medium and electronic equipment
CN117058492B (en) Two-stage training disease identification method and system based on learning decoupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant