CN109584258B - Grassland boundary identification method and intelligent mowing device applying same - Google Patents

Grassland boundary identification method and intelligent mowing device applying same

Info

Publication number
CN109584258B
Authority
CN
China
Prior art keywords
pixel
type
external scene
image
grassland
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811484761.3A
Other languages
Chinese (zh)
Other versions
CN109584258A (en)
Inventor
陶思含
周国扬
刘楷
郑鑫
Current Assignee
Nanjing Sumec Intelligent Technology Co Ltd
Original Assignee
Nanjing Sumec Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Sumec Intelligent Technology Co Ltd filed Critical Nanjing Sumec Intelligent Technology Co Ltd
Priority to CN201811484761.3A priority Critical patent/CN109584258B/en
Publication of CN109584258A publication Critical patent/CN109584258A/en
Application granted granted Critical
Publication of CN109584258B publication Critical patent/CN109584258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/92Dynamic range modification of images or parts thereof based on global image properties

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A grassland boundary identification method and an intelligent mowing device using the same are disclosed, wherein a characteristic value range of the grassland is determined according to the brightness of the working environment, pixels in an acquired external scene image are screened against that characteristic value range, and the screened pixels connected to the lower edge of the external scene image are counted to judge whether the intelligent mowing device is within the grassland boundary. The method realizes identification and judgment of the grassland boundary by simply screening and counting image pixel values, and the operations performed on the image are relatively simple, so the hardware cost and computational overhead of the intelligent mowing device can be effectively reduced. The invention improves the efficiency of grassland boundary identification while ensuring identification accuracy.

Description

Grassland boundary identification method and intelligent mowing device applying same
Technical Field
The invention relates to a garden tool, in particular to an intelligent mowing device.
Background
Intelligent mowing equipment mainly comprises a mowing unit, a control unit and a self-walking unit. When the intelligent mowing device works, a device for sensing the external environment collects relevant data about the surroundings of the intelligent mowing device, and the control unit uses these data to control the self-walking unit to switch between different self-walking modes, so as to realize braking, turning, cruising and the like, and to control the mowing unit to operate correspondingly.
At present, most common intelligent mowing equipment or mowing robots on the market limit their working area either by means of electromagnetic boundary lines or by means of positioning technologies such as D-GPS and UWB. The electromagnetic boundary must be laid manually by the user in advance, making installation and use complex; in addition, the electromagnetic boundary line is costly, wears easily, and needs regular maintenance and replacement. Manually setting the working range of the mowing robot by means of positioning technologies such as D-GPS and UWB likewise requires an additional positioning device, such as a positioning base station, so the cost and difficulty of use offer little advantage.
In view of the above deficiencies, an increasing number of solutions applying computer vision techniques to grassland edge identification have emerged in recent years. However, most of these schemes involve complex algorithms with high demands on hardware computing capacity, making them difficult to popularize on a large scale. Moreover, because existing computer image recognition techniques are sensitive to illumination conditions, the success rate of lawn color recognition varies greatly under different lighting, so such techniques cannot yet be put into commercial application.
Therefore, there is a need for a technique that can adapt to the computing power of the existing hardware, does not depend on the external positioning device, and can quickly and accurately determine the lawn edge, so as to solve the above problems.
Disclosure of Invention
In order to solve the defects in the prior art, the invention aims to provide a grassland boundary identification method and intelligent mowing equipment applying the method.
Firstly, in order to achieve the above object, a grassland boundary identification method is provided, comprising the steps of: acquiring an external scene image and converting it into a color space; judging whether each pixel value in the external scene image falls within the range of characteristic values of the grassland in the color space, and if so, converting the pixel value into a first type pixel value, otherwise converting it into a second type pixel value; counting the first type pixel values connected to the lower edge of the image, and if the counting result exceeds a preset threshold, judging that the device is within the grassland boundary; otherwise, judging that it is outside the grassland boundary.
In parallel with the above method, the invention also provides another grassland boundary identification method, comprising the steps of: acquiring an external scene image and converting it into a color space; judging whether each pixel in the external scene image falls within the range of characteristic values of the grassland in the color space, and if so, marking the pixel as a first type pixel, otherwise marking it as a second type pixel; counting the number of first type pixels connected to the lower edge of the image, and if the counted number exceeds a preset threshold, judging that the device is within the grassland boundary; otherwise, judging that it is outside the grassland boundary.
In parallel with the above methods, the invention also provides a further grassland boundary identification method, comprising the steps of: acquiring an external scene image and converting it into a color space; judging whether each pixel in the external scene image falls within the range of characteristic values of the grassland in the color space, and if so, marking the pixel as a first type pixel, otherwise marking it as a second type pixel; counting the proportion of first type pixels connected to the lower edge of the image among all pixels, and if the counted proportion exceeds a preset threshold, judging that the device is within the grassland boundary; otherwise, judging that it is outside the grassland boundary.
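The three parallel methods above share one pipeline: screen each pixel against a grass feature-value range, then count the screened pixels connected to the lower edge. The following minimal Python sketch (not part of the patent disclosure) illustrates that pipeline under simplifying assumptions: the image is a 2-D list of single-channel values already in the chosen color space, the bounds `lo`/`hi` stand in for the grass feature-value range, and 4-connectivity and a proportion threshold are assumed.

```python
# Illustrative sketch of the shared pipeline: screen pixels against a
# grass feature-value range, then count first-type pixels connected to
# the image's lower edge. All parameter values are assumptions.
from collections import deque

def classify(img, lo, hi):
    """Mark pixels whose value falls in [lo, hi] as 1 (first type), else 0."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in img]

def count_lower_edge_connected(mask):
    """Count type-1 pixels 4-connected to the bottom row (BFS flood fill)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    q = deque((h - 1, x) for x in range(w) if mask[h - 1][x] == 1)
    for y, x in q:                      # mark the bottom-row seeds
        seen[y][x] = True
    count = 0
    while q:
        y, x = q.popleft()
        count += 1
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and not seen[ny][nx]:
                seen[ny][nx] = True
                q.append((ny, nx))
    return count

def within_boundary(img, lo, hi, threshold_ratio=0.5):
    """Judge 'within the grassland boundary' by the connected-pixel ratio."""
    mask = classify(img, lo, hi)
    total = len(img) * len(img[0])
    return count_lower_edge_connected(mask) / total > threshold_ratio
```

The number-based variant of the method would compare `count_lower_edge_connected` directly against a fixed count instead of a ratio.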
Optionally, in the above method, the resolution of the external scene image is not higher than 1280 × 960.
Optionally, in the foregoing method, the range of feature values of the grassland in the color space includes at least 2 groups, and each group of the range of feature values corresponds to a different luminance range of the external scene.
Optionally, in the above method, the range of the feature values of the grass in the color space is determined by the following steps: measuring the hue and/or saturation of the grass at different intensities; calculating the hue range and/or saturation range of the grassland under different brightness; and determining the characteristic value range as the hue range and/or the saturation range of the grassland under the brightness according to the brightness of the external scene.
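The brightness-dependent lookup described above might be sketched as follows. The brightness bands and the hue/saturation ranges are illustrative assumptions for the sketch, not values disclosed in the patent; in practice they would come from the pre-measurement step.

```python
# Sketch: select a grass hue/saturation range group according to the
# measured scene brightness. Bands and ranges are assumed placeholders.
GRASS_RANGES = [
    # (brightness_lo, brightness_hi, (hue_lo, hue_hi), (sat_lo, sat_hi))
    (0.0, 0.3, (70, 160), (0.15, 1.0)),   # dim: shade, cloudy weather
    (0.3, 0.7, (75, 150), (0.25, 1.0)),   # normal daylight
    (0.7, 1.0, (80, 140), (0.20, 0.9)),   # direct sunlight
]

def grass_range_for_brightness(i):
    """Return the (hue range, saturation range) group for intensity i in [0, 1]."""
    for lo, hi, hue, sat in GRASS_RANGES:
        if lo <= i <= hi:
            return hue, sat
    raise ValueError("brightness out of range: %r" % i)
```

With at least two such groups, the method satisfies the "at least 2 groups, each corresponding to a different luminance range" condition stated above.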
Optionally, in the foregoing method, the first type of pixel value includes white, black, or a fixed gray value, and the second type of pixel value is different from the first type of pixel value.
Optionally, in the foregoing method, the preset threshold is: the proportion of the first type pixel values communicated with the lower edge of the image to all the pixel values, or the number of the first type pixel values communicated with the lower edge of the image.
Optionally, in the foregoing method, before determining whether each pixel value in the external scene image conforms to the range of the feature value of the grassland in the color space, the method further includes preprocessing the external scene image; the pretreatment comprises the following steps: adjusting a resolution of the external scene image; and/or adjusting the pixel value of a pixel point in the external scene image according to the brightness range of the external scene.
Optionally, in the foregoing method, the color space includes one or more of an HSI color space, an RGB color space, an HSV color space, a YCbCr color space, and a YUV color space.
Optionally, in the foregoing method, before counting the first type of pixel values connected to the lower edge of the image, the method further includes the following steps: and traversing all pixel points in the external scene image, and if the pixel value of at least one pixel point adjacent to any pixel point is the second-type pixel value, converting the pixel value of the pixel point into the second-type pixel value.
Optionally, in the foregoing method, the first type of pixel values communicated with the lower edge of the image are counted, specifically including the following steps: counting the number of pixel points from the lower edge of the external scene image to the first appearance of the second type of pixel values; or counting the proportion of all the pixel points between the pixel points from the lower edge of the external scene image to the first appearance of the second type of pixel values.
Optionally, in the foregoing method, counting the first type pixel values connected to the lower edge of the image specifically includes the following steps: traversing all pixel points in the external scene image, and if the pixel value of at least one pixel point adjacent to any pixel point is a second-type pixel value, converting the pixel values of that pixel point and all pixel points above it into the second-type pixel value; and then counting all first-type pixel values in the converted external scene image.
Secondly, to achieve the above object, an intelligent mowing device is also proposed, wherein its control unit is configured to execute the grassland boundary identification method.
Optionally, the intelligent mowing device further comprises a brightness sensing unit, wherein the brightness sensing unit is connected with the control unit and used for sensing a brightness range of an external scene; in the control unit, the range of the eigenvalues of the grassland in the color space comprises at least 2 groups, and each group of the range of the eigenvalues corresponds to a different luminance range of the external scene.
Advantageous effects
The invention judges the grassland boundary automatically through image recognition, so no positioning device needs to be additionally arranged at the grassland boundary, nor installed and maintained. In addition, the grassland boundary identification method adopted by the invention has a limited calculation amount, saving the system the computational cost of processing the image signal and removing the need for a high-performance control unit. Meanwhile, the recognition effect of the invention depends relatively little on the quality of the image acquired by image capturing units such as cameras, so a camera with lower resolution can be selected as the image capturing unit. The hardware and use costs of the intelligent mowing device can therefore be effectively controlled, making it easy to popularize and apply.
On the other hand, the identification method is realized mainly by screening and counting image pixel values, and the operations performed on the image pixels are relatively simple. Combined with the low-resolution, limited-pixel external scene image acquired by the image capturing unit, the grassland boundary identification speed of the invention has obvious advantages over the prior art. In particular, compared with prior schemes that can only obtain an identification result after processing the image with wavelet transformation, image mechanism analysis, machine learning and the like, the invention fully ensures the real-time performance and high accuracy of grassland boundary identification; its recognition is efficient, and its control of the intelligent mowing device is accurate and timely.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic structural view of an intelligent mowing device according to the invention;
FIG. 2 is a flow chart of intelligent lawn mowing device identifying grass boundaries in accordance with the present invention;
FIG. 3 is an image of an external scene of the intelligent lawn mower captured by an image capturing unit in the intelligent lawn mower;
FIG. 4 is a schematic diagram of pixel points at an edge boundary during identification of a grass boundary;
fig. 5 is a schematic diagram of a marking manner of pixel points at an edge boundary in a grassland boundary identification process.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Fig. 1 shows an intelligent mowing device according to the invention, which comprises an image capturing unit 1 for acquiring an image of the external scene of the intelligent mower; and a control unit 2 connected with the image capturing unit, which analyzes the external scene image acquired by the image capturing unit and, according to that image, outputs control signals to the mowing unit 3 and the self-walking unit 4, the control signals respectively controlling the operation of the mowing unit and the self-walking unit.
The mowing unit mainly comprises a motor and a mowing device driven by the motor, such as a mowing blade. The self-walking unit mainly comprises self-walking motors and wheels driven by them. To facilitate turning of the intelligent mowing device, two or more self-walking motors are generally used; they are connected to different wheels and controlled independently by the control unit so as to generate a speed difference when necessary, through which the intelligent mowing device switches between self-walking modes such as turning, braking and cruising.
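The speed-difference steering described above follows the standard differential-drive model, which can be sketched as follows; the function name and the track-width value are illustrative assumptions, not details from the patent.

```python
# Sketch: differential-drive kinematics behind speed-difference steering.
# Wheel speeds in m/s; the track width is an assumed constant.
TRACK_WIDTH = 0.3  # metres between the two driven wheels (assumption)

def body_motion(v_left, v_right, track=TRACK_WIDTH):
    """Return (linear velocity, angular velocity) of the mower body."""
    v = (v_left + v_right) / 2.0        # forward speed
    omega = (v_right - v_left) / track  # positive = turn toward the left wheel
    return v, omega
```

Equal wheel speeds give pure cruising (omega = 0); unequal speeds produce a turn, and opposite speeds rotate the machine in place.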
Referring to fig. 2, in the present embodiment, the control unit is configured to perform the following steps, so as to realize the identification of the grass boundary:
a first step of receiving an external scene image similar to that shown in fig. 3, and converting the external scene image into a color space;
secondly, judging whether the value of each pixel in the external scene image conforms to the range of characteristic values of the grassland in the color space; if so, the pixel is marked as a first type pixel, e.g. the areas A, B, C and D in fig. 3; otherwise, the pixel is marked as a second type pixel, e.g. the white areas marked in fig. 3;
thirdly, counting all first-type pixels in the external scene image that are connected to the lower edge of the image; if the counting result exceeds a threshold, it is judged that the intelligent mowing device is within the grassland boundary, otherwise that it is outside the grassland boundary. In the example shown in fig. 3, the first type pixels connected to the lower edge of the image are specifically the pixels of area A.
The area A can be determined as follows: traverse all pixel points in the external scene image, and if at least one pixel point adjacent to any pixel point is a second-type pixel, mark that pixel point as a second-type pixel; the pixel points from the lower edge of the external scene image (or part of that lower edge) up to the first second-type pixel encountered are then all the first-type pixels connected to the lower edge of the image, i.e. the area A identified in fig. 3. To further avoid misjudging the region, the parts of the image above pixel points marked as second-type pixels, i.e. regions B, C and D in fig. 3, may be uniformly marked as second-type pixels and removed as background so that they do not distort the statistics.
Therefore, when the control unit judges that the intelligent mowing device is in the lawn boundary according to the steps, a control signal is output to the self-walking unit to keep the current self-walking mode; and when the control unit judges that the intelligent mowing device is outside the grassland boundary according to the steps, outputting a control signal to the self-walking unit to adjust the current self-walking mode. Therefore, the intelligent mowing equipment disclosed by the invention can automatically perform mowing operation within the lawn boundary without an additional preset positioning device.
In another embodiment of the invention, the mowing device adopts a monocular camera as an image capturing unit of the mowing device, and the monocular camera acquires a two-dimensional image of a target area. The two-dimensional image is processed by a vision processing module to obtain the data information required by the control unit. The vision processing module includes: the device comprises a pixel adjusting unit, a color characteristic processing unit, a black and white image processing unit and a grassland boundary judging unit.
The pixel adjusting unit adjusts the two-dimensional image collected by the monocular camera into a low-resolution image suitable for visual processing, for example an image with a resolution of 1280 × 960;
The color characteristic processing unit converts the low-resolution image into different color channels, extracts the corresponding color characteristic values, converts pixels that conform to the preset grassland color characteristic values into black and pixels that do not into white, obtaining a processed black-and-white image. Specifically, this embodiment converts the color space of the image into the HSI color space. The intensity value I of the HSI color space effectively represents the brightness of the current image, and under different intensities I, the hue H and saturation S values of the same grassland also change correspondingly. Therefore, the hue H and saturation S value ranges of the grassland are measured in advance under different intensities I; when an image is actually processed, the corresponding grassland hue and saturation thresholds are automatically selected according to the intensity measured at that moment, and the image undergoes binarization segmentation: pixel points that conform to the preset grassland color characteristics are converted into black and regarded as image content, while pixel points that do not are converted into white and regarded as image background, thereby segmenting the image into content that does and does not conform to the grassland color characteristics. Selecting the hue H and saturation S thresholds according to the change in intensity effectively reduces the influence on identification of different illumination conditions such as cloudy weather, shadow and direct sunlight.
The black-and-white image processing unit optimizes the black-and-white image: it traverses all pixel points at the black-white boundary of the image and converts black pixel points whose eight-neighborhood contains one or more white pixels into white; only the black pixel area connected to the lower edge of the image is retained. By eliminating areas not connected to the lower edge, the method reduces the influence on the recognition result of non-grassland objects in the picture whose colors resemble grass. As shown in fig. 3, area A is grass while areas B, C and D are non-grassland objects that match the color characteristics of grass, such as trees and vehicles; area A is retained and the pixels of areas B, C and D are converted into white. The method therefore needs no extra, computationally heavy algorithm steps such as texture or gradient identification, and can effectively improve operation speed.
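The eight-neighborhood conversion just described might be sketched as below, with black encoded as 1 and white as 0. Border handling is an assumption of the sketch: out-of-image neighbours are simply ignored, since the patent does not specify how image borders are treated.

```python
# Sketch of the black-and-white optimization step: a black (1) pixel
# with at least one white (0) pixel among its eight neighbours is
# turned white. Out-of-image neighbours are ignored (an assumption).
def peel_boundary(mask):
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]          # write to a copy, read the original
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 1:
                continue
            neighbours = [mask[ny][nx]
                          for ny in range(max(0, y - 1), min(h, y + 2))
                          for nx in range(max(0, x - 1), min(w, x + 2))
                          if (ny, nx) != (y, x)]
            if 0 in neighbours:
                out[y][x] = 0
    return out
```

Reading from the original mask while writing to a copy keeps the pass order-independent, matching the idea of processing all boundary pixels of the same input image.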
The grassland boundary judging unit counts the total number of black pixels in the image optimized by the black-and-white image processing unit; if the proportion of black pixels among all pixels of the image is higher than a preset value, the mowing robot is judged to be within the boundary, and if it is lower than the preset value, the mowing robot is judged to be outside the boundary.
The pixel adjusting unit processes the external scene image acquired by the image capturing unit as follows: the resolution of the two-dimensional image is adjusted by averaging each pixel point with its 8 adjacent pixel points and taking the mean as the value of that pixel point.
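The average-sampling step might look like the following sketch: each pixel is replaced by the mean of itself and its in-bounds 8-neighbourhood. Averaging only the available neighbours at the image border is an assumption of the sketch.

```python
# Sketch: smooth a single-channel image by replacing each pixel with the
# mean of itself and its available 8-neighbourhood.
def neighbourhood_mean(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```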
The color feature processing unit specifically converts a color space preset by the low-resolution image block, such as RGB, HSV, YCbCr, or YUV, to obtain a color feature value of the image. Then, all pixels of the image are compared one by one, the color of the pixel points which accord with the preset grassland color characteristics is converted into black to be regarded as image content, and the color of the pixel points which do not accord with the grassland color channel characteristics is converted into white to be regarded as an image background; to segment image content that is both grass-color compliant and non-grass-color compliant, as shown in fig. 3.
Then, the black-and-white image processing unit performs optimization processing on the black-and-white image obtained by the color feature processing unit to reduce segmentation interference caused by the proximity of color features of trees, sky and the like to the color features of the grassland. The method specifically comprises the following steps:
Step S41, the pixels at the black-white boundary of the segmented image are processed: if one or more of the eight pixels adjacent to a certain black pixel are white (as on the left side of fig. 5), the color of that pixel is converted into white (as on the right side of fig. 5);
In step S41, the binarized image is operated on, and in order to increase processing speed only pixels at the boundary between black and white are traversed. A pixel at a black-white boundary is defined as follows: if any of the 8 pixels adjacent to a pixel has a color different from that pixel, it is a boundary pixel. Fig. 4 takes the boundary within the dashed-line frame of fig. 3 and marks the boundary pixels as "X" through the above determination step;
Step S42, only the black pixel region connected to the lower edge of the image is retained, and other black pixels not connected to the lower edge are converted into white; as shown in fig. 3, area A is retained and the pixels of areas B, C and D are converted into white. Here, the black pixel region connected to the lower edge is determined by a conventional image-processing method, the Two-Pass connected-component labelling method.
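A minimal two-pass labelling, specialized to keeping only components that touch the bottom row, might look as follows. Four-connectivity and union-find equivalence resolution are assumptions of the sketch; the patent does not state which connectivity its Two-Pass variant uses.

```python
# Sketch of two-pass connected-component labelling (4-connectivity
# assumed), keeping only black (1) regions touching the lower edge.
def two_pass_keep_lower_edge(mask):
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # union-find array; index 0 unused

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    next_label = 1
    # First pass: assign provisional labels, recording equivalences.
    for y in range(h):
        for x in range(w):
            if mask[y][x] != 1:
                continue
            up = labels[y - 1][x] if y > 0 else 0
            left = labels[y][x - 1] if x > 0 else 0
            if up and left:
                ru, rl = find(up), find(left)
                labels[y][x] = min(ru, rl)
                parent[max(ru, rl)] = min(ru, rl)  # merge the two runs
            elif up or left:
                labels[y][x] = find(up or left)
            else:
                parent.append(next_label)          # new component
                labels[y][x] = next_label
                next_label += 1
    # Second pass: resolve equivalences, keep labels seen on the bottom row.
    keep = {find(labels[h - 1][x]) for x in range(w) if labels[h - 1][x]}
    return [[1 if labels[y][x] and find(labels[y][x]) in keep else 0
             for x in range(w)] for y in range(h)]
```

A simple flood fill from the bottom row would give the same result here; two-pass labelling is shown because the embodiment names it explicitly.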
The technical scheme of the invention differs from the prior art in that it does not need to perform Fourier transform, Gabor transform or Haar-like template processing on the acquired image signals, so the computational overhead of processing the image signals can be saved. The method can therefore be applied directly to low-cost control hardware, without depending on expensive, high-performance control units such as DSP chips dedicated to image processing. Besides, unlike existing inductive grassland boundary identification methods, no electromagnetic boundary line or positioning device needs to be additionally introduced. The cost of the intelligent mowing robot can thus be effectively controlled; and because the scheme is simple and fast to implement, it effectively overcomes the problems of traditional methods, such as heavy computation, demanding hardware requirements, and recognition strongly affected by illumination, making it easier to popularize and apply.
Furthermore, because the identification of the grass boundaries by the present invention does not need to rely on the identification of the texture of grass images, the identification effect of the present invention is also relatively less dependent on the quality of the images acquired by an image capture unit such as a camera. The invention can select the camera with lower resolution as the image capturing unit, thereby further controlling the cost of the whole machine.
Furthermore, the identification method is realized mainly by screening and counting image pixel values, and the operations performed on the image pixels are relatively simple. Combined with low-resolution external scene images containing a limited number of pixels, the grassland boundary identification speed of the invention has obvious advantages over the prior art. In particular, compared with prior schemes that can only obtain an identification result after processing the image with wavelet transformation, image mechanism analysis, machine learning and the like, the invention fully ensures the real-time performance and high accuracy of grassland boundary identification; its recognition is more efficient, and its control of the intelligent mowing device is more accurate and timely.
Those of ordinary skill in the art will understand that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (13)

1. A grassland boundary identification method is characterized by comprising the following steps:
acquiring an external scene image, and converting the external scene image into a color space;
judging whether each pixel value in the external scene image accords with the range of the characteristic value of the grassland in the color space, and if so, converting the pixel value into a first type of pixel value; otherwise, converting the pixel value into a second type pixel value;
traversing all pixel points in the external scene image, and if the pixel value of at least one pixel point adjacent to any pixel point is a second-type pixel value, converting the pixel value of the pixel point into the second-type pixel value;
counting the first type pixel values connected to the lower edge of the image, and if the counting result exceeds a preset threshold, judging that the device is within the grassland boundary; otherwise, judging that it is outside the grassland boundary;
the preset threshold value is as follows: the proportion of the first type pixel values communicated with the lower edge of the image to all the pixel values or the number of the first type pixel values communicated with the lower edge of the image.
2. The method of grassland boundary identification of claim 1 wherein the resolution of the external scene image is no higher than 1280 x 960.
3. The method of identifying grass boundaries of claim 1 wherein the range of values characteristic of grass in the color space comprises at least 2 sets, each set of the range of values corresponding to a different range of intensities of the external scene.
4. The grassland boundary identification method according to claim 3, wherein the characteristic value range of grassland in the color space is determined by the following steps:
measuring the hue and/or saturation of the grassland under different brightness levels;
calculating the hue range and/or saturation range of the grassland under each brightness level;
and, according to the brightness of the external scene, determining the characteristic value range as the hue range and/or saturation range of the grassland under that brightness.
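One way to realise these calibration steps is to bin labelled grass samples by brightness and record the hue/saturation span per bin. The bin edges and the use of min/max as the range statistic are assumptions for illustration only:

```python
import numpy as np

def grass_ranges_by_brightness(samples, n_bins=3):
    """Derive per-brightness hue/saturation ranges from measured grass samples.

    samples: array of (hue, saturation, brightness) rows measured on grassland.
    Returns one dict per non-empty brightness bin with the (min, max) spans."""
    h, s, v = samples[:, 0], samples[:, 1], samples[:, 2]
    edges = np.linspace(v.min(), v.max() + 1e-9, n_bins + 1)
    ranges = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (v >= lo) & (v < hi)
        if in_bin.any():
            ranges.append({
                "brightness": (lo, hi),
                "hue": (h[in_bin].min(), h[in_bin].max()),
                "saturation": (s[in_bin].min(), s[in_bin].max()),
            })
    return ranges
```

At run time the method would look up the bin matching the sensed scene brightness and use its hue/saturation span as the characteristic value range.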
5. The grassland boundary identification method according to claim 1, wherein the first-type pixel value is white, black, or a fixed gray value, and the second-type pixel value is distinct from the first-type pixel value.
6. The grassland boundary identification method according to claim 5, further comprising preprocessing the external scene image before judging whether the pixel values in the external scene image fall within the characteristic value range of grassland in the color space;
the preprocessing comprises: adjusting the resolution of the external scene image; and/or adjusting the pixel values of pixel points in the external scene image according to the brightness range of the external scene.
7. The grassland boundary identification method according to claim 3, wherein the color space comprises one or more of the HSI color space, RGB color space, HSV color space, YCbCr color space, and YUV color space.
8. The grassland boundary identification method according to claim 7, wherein the first-type pixel values connected to the lower edge of the image are counted by the following steps:
counting the number of pixel points from the lower edge of the external scene image up to the first second-type pixel value;
or counting the proportion, among all pixel points, of the pixel points from the lower edge of the external scene image up to the first second-type pixel value.
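Both counting variants of claim 8 can be computed in one column-wise pass over a binary grass mask (the per-column scan is an assumed realisation; the claim does not specify the traversal order):

```python
import numpy as np

def lower_edge_run_count(grass_mask):
    """Per column, count pixels from the lower edge up to the first
    non-grass pixel; return the total count and its share of all pixels."""
    flipped = grass_mask[::-1, :]        # row 0 = lower edge of the image
    runs = np.cumprod(flipped, axis=0)   # stays 1 until the first non-grass pixel
    total = int(runs.sum())
    return total, total / grass_mask.size
```

The first return value corresponds to the "number" variant and the second to the "proportion" variant of the claim.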
9. The grassland boundary identification method according to claim 7, wherein the first-type pixel values connected to the lower edge of the image are counted by the following steps:
traversing all pixel points in the external scene image, and if the pixel value of at least one pixel point adjacent to any pixel point is a second-type pixel value, converting the pixel values of that pixel point and all pixel points above it into the second-type pixel value;
and counting all the first-type pixel values in the converted external scene image.
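A vectorised sketch of this claim-9 variant, assuming a single pass computed on the original mask values and 4-neighbour adjacency (both assumptions, not fixed by the claim):

```python
import numpy as np

def count_after_masking_above(grass_mask):
    """Wherever a pixel touches a non-grass pixel, that pixel and every
    pixel above it become non-grass; count the grass pixels that remain
    (only grass regions reaching up from the lower edge survive)."""
    p = np.pad(grass_mask, 1, mode='edge')
    # A pixel is kept only if it and all its 4-neighbours are grass.
    keep = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])
    # Removing "all pixels above" a dropped pixel means: in each column a
    # pixel survives only if every pixel below it (inclusive) was kept.
    survive = np.cumprod(keep[::-1, :], axis=0)[::-1, :]
    return int(survive.sum())
```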
10. A grassland boundary identification method, characterized by comprising the following steps:
acquiring an external scene image and converting it into a color space;
judging whether each pixel in the external scene image falls within the characteristic value range of grassland in the color space; if so, marking the pixel as a first-type pixel, otherwise marking it as a second-type pixel;
traversing all pixel points in the external scene image, and if the pixel value of at least one pixel point adjacent to any pixel point is a second-type pixel value, converting the pixel value of that pixel point into the second-type pixel value;
counting the number of first-type pixels connected to the lower edge of the image; if the counted number exceeds a preset threshold value, judging that the current position is within the grassland boundary; otherwise, judging that it is outside the grassland boundary.
11. A grassland boundary identification method, characterized by comprising the following steps:
acquiring an external scene image and converting it into a color space;
judging whether each pixel in the external scene image falls within the characteristic value range of grassland in the color space; if so, marking the pixel as a first-type pixel, otherwise marking it as a second-type pixel;
traversing all pixel points in the external scene image, and if the pixel value of at least one pixel point adjacent to any pixel point is a second-type pixel value, converting the pixel value of that pixel point into the second-type pixel value;
counting the proportion of first-type pixels connected to the lower edge of the image among all pixels; if the counted proportion exceeds a preset threshold value, judging that the current position is within the grassland boundary; otherwise, judging that it is outside the grassland boundary.
12. An intelligent mowing device, comprising a control unit, wherein the control unit is configured to perform the method of any one of claims 1 to 11.
13. The intelligent mowing device according to claim 12, further comprising a brightness sensing unit, wherein the brightness sensing unit is connected with the control unit and configured to sense the brightness range of the external scene;
in the control unit, the characteristic value range of grassland in the color space comprises at least 2 sets, each set of the characteristic value range corresponding to a different brightness range of the external scene.
CN201811484761.3A 2018-12-06 2018-12-06 Grassland boundary identification method and intelligent mowing device applying same Active CN109584258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811484761.3A CN109584258B (en) 2018-12-06 2018-12-06 Grassland boundary identification method and intelligent mowing device applying same

Publications (2)

Publication Number Publication Date
CN109584258A CN109584258A (en) 2019-04-05
CN109584258B true CN109584258B (en) 2021-10-15

Family

ID=65926127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811484761.3A Active CN109584258B (en) 2018-12-06 2018-12-06 Grassland boundary identification method and intelligent mowing device applying same

Country Status (1)

Country Link
CN (1) CN109584258B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116619377A (en) * 2020-01-09 2023-08-22 上海山科机器人有限公司 Walking robot, method of controlling walking robot, and walking robot system
CN113496146A (en) * 2020-03-19 2021-10-12 苏州科瓴精密机械科技有限公司 Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium
CN113495553A (en) * 2020-03-19 2021-10-12 苏州科瓴精密机械科技有限公司 Automatic work system, automatic walking device, control method thereof, and computer-readable storage medium
CN113256776B (en) * 2021-06-21 2021-10-01 炫我信息技术(北京)有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN117095015A (en) * 2022-05-12 2023-11-21 苏州科瓴精密机械科技有限公司 Image segmentation method, device, computer equipment and readable storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US8224516B2 (en) * 2009-12-17 2012-07-17 Deere & Company System and method for area coverage using sector decomposition
CN103891464A (en) * 2012-12-28 2014-07-02 苏州宝时得电动工具有限公司 Automatic mowing system
CN105512689A (en) * 2014-09-23 2016-04-20 苏州宝时得电动工具有限公司 Lawn identification method based on images, and lawn maintenance robot
CN105785986A (en) * 2014-12-23 2016-07-20 苏州宝时得电动工具有限公司 Automatic working equipment
CN106647765A (en) * 2017-01-13 2017-05-10 深圳拓邦股份有限公司 Planning platform based on mowing robot

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR101922852B1 (en) * 2017-01-10 2018-11-28 (주)베라시스 Method for Detecting Border of Grassland Using Image-Based Color Information

Similar Documents

Publication Publication Date Title
CN109584258B (en) Grassland boundary identification method and intelligent mowing device applying same
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
US10592754B2 (en) Shadow removing method for color image and application
CN106548463B (en) Sea fog image automatic defogging method and system based on dark and Retinex
CN109635758B (en) Intelligent building site video-based safety belt wearing detection method for aerial work personnel
CN102364496B (en) Method and system for identifying automobile license plates automatically based on image analysis
CN111753577B (en) Apple identification and positioning method in automatic picking robot
CN102385753B (en) Illumination-classification-based adaptive image segmentation method
CN102938057B (en) A kind of method for eliminating vehicle shadow and device
CN108428239A (en) Intelligent grass-removing Boundary Recognition method based on image texture characteristic extraction
CN107705254B (en) City environment assessment method based on street view
CN111460903B (en) System and method for monitoring growth of field broccoli based on deep learning
CN112561899A (en) Electric power inspection image identification method
CN112070717B (en) Power transmission line icing thickness detection method based on image processing
CN107545550B (en) Cell image color cast correction method
CN116309607A (en) Ship type intelligent water rescue platform based on machine vision
CN110223253B (en) Defogging method based on image enhancement
CN112348018A (en) Digital display type instrument reading identification method based on inspection robot
CN116524196A (en) Intelligent power transmission line detection system based on image recognition technology
CN110263778A (en) A kind of meter register method and device based on image recognition
CN111612797B (en) Rice image information processing system
CN112598674A (en) Image processing method and device for vehicle and vehicle
CN115601690B (en) Edible fungus environment detection method based on intelligent agriculture
CN111985436A (en) Workshop ground mark line identification fitting method based on LSD
Yuming et al. Traffic signal light detection and recognition based on canny operator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant