CN108268811B - Image processing method, image processing apparatus, and computer-readable storage medium - Google Patents

Image processing method, image processing apparatus, and computer-readable storage medium

Info

Publication number
CN108268811B
CN108268811B (application CN201810034642.1A)
Authority
CN
China
Prior art keywords
area
region
detection pattern
preset
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810034642.1A
Other languages
Chinese (zh)
Other versions
CN108268811A (en)
Inventor
刘新
宋朝忠
陆振波
周洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Echiev Autonomous Driving Technology Co ltd
Original Assignee
Shenzhen Echiev Autonomous Driving Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Echiev Autonomous Driving Technology Co ltd filed Critical Shenzhen Echiev Autonomous Driving Technology Co ltd
Priority to CN201810034642.1A
Publication of CN108268811A
Application granted
Publication of CN108268811B
Current legal status: Active
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1439 Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/146 Methods for optical code recognition the method including quality enhancement steps
    • G06K7/1491 Methods for optical code recognition the method including quality enhancement steps the method including a reconstruction step, e.g. stitching two pieces of bar code together to derive the full bar code

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method in which a detection pattern is arranged on the top of an AGV and an image acquisition device is arranged at a preset position. The image processing method comprises the following steps: acquiring an image of a preset area by using the image acquisition device; performing connected domain filtering on the acquired image according to a preset filtering condition to obtain the detection pattern; and obtaining the coding information and the moving state information of the AGV according to the detection pattern. The invention also discloses an image processing apparatus and a computer-readable storage medium. The invention solves the problems in existing AGV guidance technology that the ground marks need regular maintenance and that the vehicle-mounted camera loosens easily, which affects guidance precision.

Description

Image processing method, image processing apparatus, and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and a computer-readable storage medium.
Background
In recent years, with the increasing degree of automation in manufacturing, research on indoor mobile robots (AGVs) for material transportation has attracted wide attention. As an intelligent mobile robot, an AGV integrates external environment perception, intelligent decision-making, and motion control, and undertakes material handling and conveying tasks in intelligent logistics. Conventional AGV guidance generally adopts fixed-path guidance, in which a camera is mounted on the vehicle and the heading is determined by capturing ground marks through the camera. In this approach the ground marks must be maintained regularly, and the vehicle-mounted camera is prone to loosening, which degrades guidance precision.
Disclosure of Invention
The main object of the present invention is to provide an image processing method, an image processing apparatus, and a computer-readable storage medium, so as to solve the problems in existing AGV guidance technology that ground marks need regular maintenance and that the vehicle-mounted camera loosens easily, affecting guidance precision.
In order to achieve the above object, the present invention provides an image processing method in which a detection pattern is arranged on the top of an AGV and image acquisition equipment is arranged at a preset position, the image processing method comprising:
acquiring an image of a preset area by using the image acquisition equipment;
performing connected domain filtering on the acquired image according to a preset filtering condition to obtain the detection pattern;
and determining the coding information and the moving state information of the AGV according to the detection pattern.
Preferably, the detection pattern includes at least an outer layer isolation portion, a pattern detection body, an inner layer isolation portion, and a coding region.
Preferably, before the step of performing connected domain filtering on the acquired image according to a preset filtering condition to obtain the detection pattern, the method includes:
carrying out binarization processing on the obtained image to obtain a binarized image;
the step of filtering the connected domain of the acquired image according to the preset filtering condition to obtain the detection pattern comprises:
and filtering the connected domain of the obtained binary image according to the preset filtering condition to obtain the detection pattern.
Preferably, the step of performing connected domain filtering on the obtained binarized image according to the preset filtering condition to obtain the detection pattern includes:
acquiring a connected domain matched with a first preset gray value in the binary image to obtain a first connected domain set;
filtering the first connected domain set according to a preset prior condition to obtain a second connected domain set, wherein the preset prior condition at least comprises a matching degree verification condition with a preset shape;
determining the detection pattern from the second set of connected components.
Preferably, the step of determining the detection pattern from the second set of connected components comprises:
determining a first region of each connected domain according to the shape characteristics of each connected domain in the second connected domain set, and determining a first area of a region of which the gray value is matched with a second preset gray value in the first region, wherein each connected domain is in the corresponding first region;
determining a corresponding second region according to a preset width value and the first region, and determining a second area of a region of which the gray value is matched with a second preset gray value in the second region, wherein the first region is in the second region;
filtering the second connected domain set according to the ratio of the difference value of the first area and the second area to obtain a third connected domain set;
determining the detection pattern from the third set of connected components.
Preferably, the step of determining the detection pattern from the third set of connected components comprises:
determining a third region corresponding to the inner layer isolation portion in the first region according to the positional relationship between the inner layer isolation portion and the pattern detection main body;
determining a third area of a region, which is matched with the second preset gray value, in the third region;
and determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set.
Preferably, after the step of determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set, the method further comprises:
and determining the course of the AGV or the position of the AGV according to the boundary of the connected domain in the detection pattern.
Preferably, after the step of determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set, the method further comprises:
determining the coding area and the coding sequence of the AGV according to the detection pattern;
determining the gray value information of the coding area and the coding sequence to determine the coding information of the AGV.
In order to achieve the above object, the present invention also provides an image processing apparatus comprising: a memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program, when executed by the processor, implementing the steps of the image processing method as described above.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method as described above.
The invention provides an image processing method, an image processing apparatus, and a computer-readable storage medium. In the method, a detection pattern is arranged on the top of the AGV and an image acquisition device is arranged at a preset position, and the image processing method comprises the following steps: acquiring an image of a preset area by using the image acquisition device; performing connected domain filtering on the acquired image according to a preset filtering condition to obtain the detection pattern; and obtaining the coding information and the moving state information of the AGV according to the detection pattern. In this way, the image acquisition device is fixedly installed at the preset position and the detection pattern is arranged on the AGV; the image of the preset area is captured by the camera, the detection pattern is extracted from the image, and the heading information and coding information of the AGV are obtained from the detection pattern, thereby solving the problems in the prior art that the ground marks need regular maintenance and that the vehicle-mounted camera loosens easily.
Drawings
Fig. 1 is a schematic structural diagram of a terminal to which an image processing apparatus belongs in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of an image processing method according to the present invention;
FIG. 3 is a flowchart illustrating a second embodiment of an image processing method according to the present invention;
FIG. 4 is a flowchart illustrating a third embodiment of an image processing method according to the present invention;
FIG. 5 is a flowchart illustrating a fourth embodiment of an image processing method according to the present invention;
FIG. 6 is a flowchart illustrating an image processing method according to a fifth embodiment of the present invention;
FIG. 7 is a flowchart illustrating an image processing method according to a sixth embodiment of the present invention;
FIG. 8 is a flowchart illustrating an image processing method according to a seventh embodiment of the present invention;
FIG. 9 is a diagram illustrating an exemplary design of a detection pattern according to an embodiment of the image processing method of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The existing fixed-path guidance technology generally determines the heading by mounting a camera on the vehicle and capturing ground marks through the camera. In this approach the ground marks must be maintained regularly, and the vehicle-mounted camera is prone to loosening, which affects guidance precision.
To solve this technical problem, the present invention provides an image processing method in which a detection pattern is arranged on the AGV and an image acquisition device is arranged at a preset position. An image of a preset area is acquired by the image acquisition device, connected domain filtering is then performed on the acquired image according to preset filtering conditions to obtain the detection pattern, and the coding information and moving state information of the AGV are obtained according to the detection pattern. This solves the problems in existing AGV guidance technology that ground marks need regular maintenance and that the vehicle-mounted camera loosens easily, affecting guidance precision.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the present invention may be a PC, or a mobile terminal device with a display function, such as a smart phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a portable computer, and the like.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The sensors include, for example, light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor that turns off the display screen and/or the backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and detect the magnitude and direction of gravity when the terminal is stationary, and can be used for applications that recognize the attitude of the mobile terminal (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer and tapping). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and an image processing program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call up an image processing program stored in the memory 1005 and perform the following operations:
acquiring an image of a preset area by using the image acquisition equipment;
performing connected domain filtering on the acquired image according to a preset filtering condition to obtain the detection pattern;
and determining the coding information and the moving state information of the AGV according to the detection pattern.
Further, the processor 1001 may call an image processing program stored in the memory 1005, and also perform the following operations:
carrying out binarization processing on the obtained image to obtain a binarized image;
the step of filtering the connected domain of the acquired image according to the preset filtering condition to obtain the detection pattern comprises:
and filtering the connected domain of the obtained binary image according to the preset filtering condition to obtain the detection pattern.
Further, the processor 1001 may call an image processing program stored in the memory 1005, and also perform the following operations:
acquiring a connected domain matched with a first preset gray value in the binary image to obtain a first connected domain set;
filtering the first connected domain set according to a preset prior condition to obtain a second connected domain set, wherein the preset prior condition at least comprises a matching degree verification condition with a preset shape;
determining the detection pattern from the second set of connected components.
Further, the processor 1001 may call an image processing program stored in the memory 1005, and also perform the following operations:
determining a first region of each connected domain according to the shape characteristics of each connected domain in the second connected domain set, and determining a first area of a region of which the gray value is matched with a second preset gray value in the first region, wherein each connected domain is in the corresponding first region;
determining a corresponding second region according to a preset width value and the first region, and determining a second area of a region of which the gray value is matched with a second preset gray value in the second region, wherein the first region is in the second region;
filtering the second connected domain set according to the ratio of the difference value of the first area and the second area to obtain a third connected domain set;
determining the detection pattern from the third set of connected components.
Further, the processor 1001 may call an image processing program stored in the memory 1005, and also perform the following operations:
determining a third region corresponding to the inner layer isolation portion in the first region according to the positional relationship between the inner layer isolation portion and the pattern detection main body;
determining a third area of a region, which is matched with the second preset gray value, in the third region;
and determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set.
Further, the processor 1001 may call an image processing program stored in the memory 1005, and also perform the following operations:
and determining the course of the AGV or the position of the AGV according to the boundary of the connected domain in the detection pattern.
Further, the processor 1001 may call an image processing program stored in the memory 1005, and also perform the following operations:
determining the coding area and the coding sequence of the AGV according to the detection pattern;
determining the gray value information of the coding area and the coding sequence to determine the coding information of the AGV.
Based on the above hardware structure, an embodiment of the image processing method of the present invention is provided.
Referring to fig. 2, fig. 2 is a flowchart illustrating a first embodiment of an image processing method according to the present invention. In this embodiment, a detection pattern is arranged on the top of an AGV and an image acquisition device is arranged at a preset position, where the detection pattern at least includes an outer layer isolation portion, a pattern detection main body, an inner layer isolation portion, and a coding region. The image processing method of the embodiment of the present invention is applied to the technical field of image processing, for example in the guidance system of an indoor mobile robot (AGV) used for material transportation. In recent years, with the increasing degree of automation in manufacturing, research on indoor mobile robots (AGVs) for material transportation has attracted wide attention; as an intelligent mobile robot, an AGV integrates external environment perception, intelligent decision-making, and motion control, and undertakes material handling and conveying tasks in intelligent logistics. Conventional AGV guidance generally adopts fixed-path guidance, in which a camera is mounted on the vehicle and the heading is determined by capturing ground marks through the camera; in this approach the ground marks must be maintained regularly, which consumes manpower and material resources, and the vehicle-mounted camera is prone to loosening, which affects guidance precision. In this method, a detection pattern is arranged on the AGV and an image acquisition device is arranged at a preset position, and the heading of the AGV is determined by capturing the pattern on the AGV, which solves the problems in the prior art that manpower and material resources are consumed and that the precision is easily affected.

The detection pattern of this embodiment may carry the coding information and heading information of the AGV. Specifically, the detection pattern may be designed as shown in fig. 9. In fig. 9, the detection pattern is composed of parts A, B, C, and D. The outermost layer A is white and serves as the outer isolation portion, which isolates interference from the surrounding environment and improves robustness. The second layer B is the pattern detection main body, a black U-shaped symbol used to determine the heading of the AGV and to locate the AGV. The inner layer C is a white U-shaped symbol used to isolate the pattern detection main body from the innermost coding region and to add detection features to the detection pattern. The innermost layer D consists of four small rectangular lattices, each white or black, used to encode the vehicle. The size of each part may be designed as shown in the figure or chosen otherwise as needed; in fig. 9, the detection pattern measures 68 cm by 48 cm, the width of the outer isolation portion is 4 cm, and each coding rectangle measures 10 cm by 8 cm.

In this embodiment, the image acquisition device may be a camera or another device; the camera is fixedly installed at a preset position, preferably on the ceiling, so as to better capture the position of the AGV. The specific implementation of this embodiment is as follows.
Step S10, acquiring an image of a preset area by using the image acquisition equipment;
in this embodiment, an image of the AGV vehicle activity area is obtained by the image capturing device. The preset area of the embodiment is an area where the trolley moves. The image acquisition equipment of the embodiment can be equipment such as a camera, and the camera can be installed on the ceiling in order to better acquire the global picture of the moving area of the trolley.
Step S20, performing connected domain filtering on the acquired image according to preset filtering conditions to obtain the detection pattern;
Based on the above step, after the image of the AGV moving region is obtained, binarization processing can be performed on the obtained image, connected domain analysis is then performed on the image, the connected domains are filtered according to preset conditions, and the detection pattern of the AGV is finally obtained from the image. The preset conditions are mainly used to filter out other content in the image, such as patterns similar to the detection pattern, so as to obtain the detection pattern accurately. Specifically, in this embodiment, the image of the detection pattern shown in fig. 9 may be binarized in advance; the portion B in fig. 9 is the black pattern detection main body and its gray value after binarization is 0. In this embodiment, an image of the preset region is acquired by the camera and then binarized to obtain a binarized image of the preset region, and connected domains with a gray value of 0, that is, black connected domains, are extracted from the binarized image. Because other objects or patterns may exist in the preset region and also appear as black connected domains in the binarized image, the obtained black connected domains can be screened using rectangularity, aspect ratio, or area as preset prior conditions, so as to retain the black connected domains whose shape, aspect ratio, and area match the detection pattern. The filtered connected domains can then be filtered further according to the feature that a white outer layer isolation portion and a white inner layer isolation portion exist around the pattern detection main body. Specifically, a rectangular region containing the connected domain may be determined according to the longest side of the connected domain and the side perpendicular to it, and whether the white area within a preset range outside the rectangular region and the white area inside the rectangular region satisfy preset conditions may be checked to determine the detection pattern.
And step S30, according to the detection pattern, the coding information and the moving state information of the AGV are obtained.
Based on the above steps, after the detection pattern is determined, the coding information and the moving state information of the AGV are obtained from the detection pattern, where the moving state information of this embodiment refers to the heading information and position information of the AGV. After the detection pattern is determined, the black U-shaped detection main body in the detection pattern is located; for each side of the black U-shaped detection main body, the largest black rectangular connected domain having that side as its base is obtained, and the areas of these rectangles are compared. The side L1 whose rectangle has the smallest area corresponds to the rear of the AGV, the side L2 opposite to L1 corresponds to the front of the AGV, and the direction from L1 toward L2 is the heading of the AGV. The position of the AGV is determined by the center of the detection pattern: when the detection pattern is arranged, the center of the black U-shaped symbol coincides with the center of the AGV, so the position of the AGV is obtained by detecting the center of the black U-shaped symbol. After the detection pattern is determined, the corresponding coding region can also be determined. Based on the gray value of each rectangular lattice in the coding region, a black lattice with a gray value of 0 is assigned the code value 1 and a white lattice with a gray value of 255 is assigned the code value 0, and coding starts with the lattice closest to L1 as the first bit; for example, if the colors of the rectangular lattices in fig. 9 are black, white, black, and white, respectively, the corresponding code is 1010. Because of deformation of the image captured by the camera or other imaging interference, part of a black lattice may appear white after binarization; in this embodiment, a lattice whose black area exceeds a preset proportion may be determined to be a black lattice, and its code is set to 1.
In this embodiment, an image of a preset area is acquired by the image acquisition device; connected domain filtering is performed on the acquired image according to a preset filtering condition to obtain the detection pattern; and the coding information and moving state information of the AGV are determined according to the detection pattern. In this way, the image acquisition device is fixedly installed at the preset position and the detection pattern is arranged on the AGV; the image of the preset area is captured by the camera, the detection pattern is extracted from the image, and the heading information, position information, and coding information of the AGV are obtained from the detection pattern, thereby solving the problems in the prior art that the ground marks need regular maintenance and that the vehicle-mounted camera loosens easily.
Further, referring to fig. 3, fig. 3 is a flowchart illustrating a second embodiment of the image processing method according to the present invention; based on the image processing method provided by the present invention, the second embodiment of the present invention is proposed.
Based on the above embodiment, in the present embodiment, step S20 is preceded by:
step S40, carrying out binarization processing on the acquired image to obtain a binarized image;
step S20 includes:
and step S50, performing connected domain filtering on the acquired binary image according to the preset filtering condition to acquire the detection pattern.
Based on the above embodiment, in this embodiment, after the image of the preset region is acquired by the camera, the acquired image may be binarized to obtain a binarized image of the preset region. The image obtained by the camera may contain, besides the detection pattern, images of other articles in the area, background, and noise. To extract the detection pattern directly from the multi-valued digital image, a global threshold T may be set and the image data divided by T into two parts: the group of pixels whose values are greater than T and the group of pixels whose values are less than T. The pixel values of the first group are set to white (or black) and those of the second group to black (or white), so that the gray value of every pixel becomes 0 or 255 and the binarized image of the preset area is obtained. Connected domain filtering is then performed on the binarized image according to the preset conditions, as in the above embodiment, to obtain the detection pattern, and the coding information and moving state information of the AGV are obtained according to the detection pattern.
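For illustration only, the following minimal Python sketch (not part of the original disclosure) shows one way the global-threshold binarization described above could be implemented; it assumes OpenCV is available and uses an arbitrary example threshold of 128.

    import cv2

    # Minimal sketch of the global-threshold binarization described above.
    # Assumptions: OpenCV is available, the camera image can be read as an
    # 8-bit grayscale image, and T = 128 is only an illustrative threshold.
    def binarize(image_path, threshold=128):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if gray is None:
            raise FileNotFoundError(image_path)
        # Pixels above T are set to 255 (white); the rest are set to 0 (black).
        _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        return binary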
In this embodiment, binarization processing is performed on the acquired image to obtain a binarized image, and connected domain filtering is performed on the obtained binarized image according to the preset filtering condition to obtain the detection pattern. In this way, the image acquired by the image acquisition device is binarized, which simplifies subsequent image processing, increases the processing speed, and highlights the main image features.
Further, a third embodiment of the present invention is proposed with reference to fig. 4 based on the image processing method presented by the present invention.
Based on the above embodiment, in the present embodiment, step S50 includes:
step S60, acquiring a connected domain matched with a first preset gray value in the binary image to acquire a first connected domain set;
step S70, filtering the first connected domain set according to a preset prior condition to obtain a second connected domain set, wherein the preset prior condition at least comprises a matching degree verification condition with a preset shape;
step S80, determining the detection pattern according to the second connected domain set.
Based on the above embodiment, in this embodiment, after the binarized image of the preset region is obtained, the connected domains matching a first preset gray value are extracted from the binarized image to obtain the first connected domain set. The first preset gray value of this embodiment is determined from the gray value of the pattern detection main body of the detection pattern after binarization; based on the above embodiment, when the detection pattern is as shown in fig. 9, the first preset gray value is 0, corresponding to black. In this embodiment, the connected domains with a gray value of 0, that is, the black connected domains, are obtained from the binarized image. Since other objects or patterns may exist in the preset region and also appear as black connected domains in the binarized image, the obtained black connected domains may be screened using rectangularity, aspect ratio, or area as preset prior conditions, so as to keep the black connected domains whose shape or edge length ratios match the detection pattern, for example U-shaped connected domains with rectangular features. Specifically, in this embodiment each interior angle of the pattern detection main body is 90° or 270°, so the main body has rectangular features, and the preset shape matching degree verification condition of this embodiment refers to a rectangularity verification condition. After the black connected domains are obtained from the image, circular, elliptical, or other irregular patterns can be discarded by checking the rectangularity, so that connected domains with rectangular features are obtained. After the connected domains with rectangular features are obtained, the ratios of their sides can be checked; for example, filtering may be performed by checking whether the ratio of the side perpendicular to the longest side matches the ratio of the corresponding sides in the detection pattern, so as to obtain candidate connected domains closer to the pattern detection main body of the detection pattern. The second connected domain set in this embodiment is the set of black connected domains with rectangular features obtained through the preset prior conditions. The connected domain matching the pattern detection main body is then determined from the second connected domain set, and the detection pattern is thereby determined.
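As an illustration of the prior-condition screening described above, a possible Python sketch is given below; the library calls assume OpenCV, and the rectangularity, aspect-ratio, and area bounds are placeholder values, not values taken from the patent.

    import cv2

    # Sketch of the prior-condition screening described above. Assumptions:
    # `binary` is the binarized image (white = 255), the pattern detection
    # main body is black, and the rectangularity / aspect-ratio / area bounds
    # are illustrative values only.
    def filter_black_components(binary, min_area=500,
                                min_rectangularity=0.3,
                                max_aspect=3.0):
        black = cv2.bitwise_not(binary)  # make the black regions the foreground
        contours, _ = cv2.findContours(black, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for contour in contours:
            area = cv2.contourArea(contour)
            if area < min_area:
                continue
            (_, _), (w, h), _ = cv2.minAreaRect(contour)
            if min(w, h) < 1:
                continue
            rectangularity = area / (w * h)   # how well the blob fills its box
            aspect = max(w, h) / min(w, h)
            if rectangularity >= min_rectangularity and aspect <= max_aspect:
                candidates.append(contour)
        return candidates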
In this embodiment, the connected domains matching the first preset gray value in the binarized image are obtained to form the first connected domain set; the first connected domain set is filtered according to a preset prior condition to obtain the second connected domain set, where the preset prior condition at least includes a matching degree verification condition with a preset shape; and the detection pattern is determined from the second connected domain set. In this way, the set of candidate black connected domains is narrowed.
Further, referring to fig. 5, a fourth embodiment of the image processing method of the present invention is proposed. In the present embodiment, step S80 includes:
step S90, determining a first area of each connected domain according to the shape features of each connected domain in the second connected domain set, and determining a first area of an area in the first area, wherein the gray value of the area is matched with a second preset gray value, and each connected domain is in the corresponding first area;
step S100, determining a corresponding second region according to a preset width value and the first region, and determining a second area of a region of which the gray value is matched with a second preset gray value in the second region, wherein the first region is in the second region;
step S110, filtering the second connected domain set according to the ratio of the difference value of the first area and the second area to obtain a third connected domain set;
step S120, determining the detection pattern according to the third connected domain set.
Based on the above embodiments, in the present embodiment, after the second connected domain set is obtained, since the connected domains in the set all have rectangular features, a rectangular region, i.e. the first region, may be determined for each connected domain from its longest boundary L1 and the longest boundary L3 perpendicular to L1, with each connected domain lying inside its corresponding first region. The first area, i.e. the area of the region within the first region whose gray value matches the second preset gray value, is then determined; based on the above embodiment, this is the area of the white region (gray value 255) within the first region, which can be obtained as the number of white pixels and is denoted S1 in this embodiment. Each side of the first region is then moved outward by a preset width d, and the enlarged rectangle is taken as the second region; the area S2 of the white region within the second region is determined, and the difference S2 - S1 is the actual white area between the boundaries of the first region and the second region. The preset width of this embodiment may be set according to the width of the outer layer isolation portion of the detection pattern; it may be set equal to or smaller than that width as actually needed. The length and width of the first region can be obtained; if they are w and h respectively, the total area of the ring between the first region boundary and the second region boundary is S3 = (w + 2d) × (h + 2d) - w × h. If the connected domain is the pattern detection main body of the detection pattern, then, since the layer outside the pattern detection main body in the actual detection pattern is the white outer isolation portion, ideally S2 - S1 = S3; in practice, because of interference in the captured image or obstacles in the environment, S2 - S1 is slightly smaller than S3 and not strictly equal to it. This embodiment may therefore set a ratio threshold, for example 0.7, keep the connected domains for which (S2 - S1) / S3 is greater than the threshold, and discard those below it, thereby filtering the second connected domain set further to obtain the third connected domain set. The detection pattern is then determined from the third connected domain set according to other preset conditions.
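A minimal Python sketch of this outer-isolation check is given below for illustration; it assumes an axis-aligned first region given as a bounding box and NumPy arrays, and the 0.7 threshold is the example value mentioned in the text.

    import numpy as np

    # Sketch of the outer-isolation check described above. Assumptions:
    # `binary` is the binarized image with white = 255, `rect` = (x, y, w, h)
    # is the axis-aligned first region of a candidate connected domain, `d` is
    # the preset width, and 0.7 is the example ratio threshold from the text.
    def passes_outer_isolation(binary, rect, d, ratio_threshold=0.7):
        x, y, w, h = rect
        img_h, img_w = binary.shape
        # S1: white area inside the first region.
        s1 = int(np.count_nonzero(binary[y:y + h, x:x + w] == 255))
        # Second region: the first region expanded outward by d on every side.
        x0, y0 = max(x - d, 0), max(y - d, 0)
        x1, y1 = min(x + w + d, img_w), min(y + h + d, img_h)
        s2 = int(np.count_nonzero(binary[y0:y1, x0:x1] == 255))
        # S3: total area of the ring between the two region boundaries.
        s3 = (w + 2 * d) * (h + 2 * d) - w * h
        return (s2 - s1) / s3 >= ratio_threshold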
In this embodiment, a first region of each connected domain is determined according to the shape feature of each connected domain in the second connected domain set, and a first area of a region in the first region, where a gray value of the region matches a second preset gray value, is determined, where each connected domain is in the corresponding first region; determining a corresponding second region according to a preset width value and the first region, and determining a second area of a region of which the gray value is matched with a second preset gray value in the second region, wherein the first region is in the second region; and filtering the second connected domain set according to the ratio of the difference value of the first area and the second area to obtain a third connected domain set. Determining the detection pattern from the third set of connected components. In this manner, the second set of connected domains is further filtered.
Further, referring to fig. 6, a fifth embodiment of the image processing method of the present invention is proposed. Based on the above embodiment, in the present embodiment, step S120 includes:
step S130, determining a third region corresponding to the inner layer isolation portion in the first region according to the positional relationship between the inner layer isolation portion and the pattern detection main body;
step S140, determining a third area of a region in the third region, which is matched with the second preset grayscale value;
step S150, determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set.
As shown in fig. 9, inside the pattern detection main body there is a white U-shaped inner layer isolation portion, and the position of the inner layer isolation portion within the detection pattern can be determined from the edge lengths of the layers of the detection pattern. In this embodiment, for each connected domain in the third connected domain set, the third region, which is likewise a U-shaped region, corresponding to the position of the inner layer isolation portion within the first region can be determined. The third area S4, that is, the area of the region within the third region whose gray value matches the second preset gray value (the white area), is then determined, and the ratio S4 / (w × h) of this area to the area of the first region is obtained. In this embodiment, the corresponding ratio k for the ideal detection pattern may be calculated in advance; in the ideal case the value k × S4 / (w × h) equals 1, but because of image error factors the ratio threshold may be set to, for example, 0.7. When k × S4 / (w × h) is greater than 0.7, the third region is judged to match the inner layer of the detection pattern, the corresponding connected domain is determined to be the pattern detection main body of the detection pattern, and the detection pattern corresponding to that connected domain is determined.
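A corresponding sketch of the inner-isolation check, for illustration only: it assumes the expected third region is supplied as a precomputed mask and that k is the ratio precomputed from the ideal pattern as described above.

    import numpy as np

    # Sketch of the inner-isolation check described above. Assumptions:
    # `binary` uses white = 255, `rect` = (x, y, w, h) is the first region,
    # `inner_mask` is an h-by-w boolean mask marking where the U-shaped inner
    # isolation portion (the third region) is expected, and `k` is the ratio
    # precomputed from the ideal detection pattern as described in the text.
    def passes_inner_isolation(binary, rect, inner_mask, k, ratio_threshold=0.7):
        x, y, w, h = rect
        roi = binary[y:y + h, x:x + w]
        # S4: white area inside the expected third region.
        s4 = int(np.count_nonzero((roi == 255) & inner_mask))
        return k * s4 / (w * h) >= ratio_threshold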
In this embodiment, the third region corresponding to the inner layer isolation portion within the first region is determined according to the positional relationship between the inner layer isolation portion and the pattern detection main body; the third area, i.e. the area of the region within the third region that matches the second preset gray value, is determined; and the detection pattern is determined according to the ratio of the third area to the area of the first region and the third connected domain set. In this way, the detection pattern is obtained accurately.
Further, referring to fig. 7, a sixth embodiment of the image processing method of the present invention is proposed. Based on the above embodiment, in the present embodiment, step S150 is followed by:
and step S160, determining the course of the AGV or the position of the AGV according to the boundary of the connected domain in the detection pattern.
Based on the above-described embodiment, after the detection pattern is obtained, the heading can be obtained by detecting the orientation of the detection pattern. Specifically, the black U-shaped detection main body in the detection pattern is located; for each side of the black U-shaped detection main body, the largest black rectangular connected domain having that side as its base is obtained, and the areas of these rectangles are compared. The side L1 whose rectangle has the smallest area corresponds to the rear of the AGV, the side L2 opposite to L1 corresponds to the front of the AGV, and the direction from L1 toward L2 is the heading of the AGV. The position of the AGV is determined by the center of the detection pattern: when the detection pattern is arranged, the center of the black U-shaped symbol coincides with the center of the AGV, so the position of the AGV is obtained by detecting the center of the black U-shaped symbol.
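The heading determination can be illustrated with the simplified Python sketch below; it approximates the per-side maximal-rectangle comparison by counting black pixels in thin strips along each side and assumes an axis-aligned mask, so it is a rough stand-in for the step described above rather than the patented method itself.

    import numpy as np

    # Simplified sketch of the heading determination described above.
    # Assumptions: `black_mask` is a boolean mask of the U-shaped detection
    # main body inside its axis-aligned first region; the comparison of the
    # maximal black rectangle per side is approximated here by counting black
    # pixels in a thin strip along each side, with the strip width an
    # illustrative value.
    def heading_from_u_mask(black_mask, strip=5):
        h, w = black_mask.shape
        sides = {
            "top": black_mask[:strip, :].sum(),
            "bottom": black_mask[-strip:, :].sum(),
            "left": black_mask[:, :strip].sum(),
            "right": black_mask[:, -strip:].sum(),
        }
        rear = min(sides, key=sides.get)  # side L1 (least black) -> rear of the AGV
        mids = {"top": (w / 2.0, 0.0), "bottom": (w / 2.0, float(h)),
                "left": (0.0, h / 2.0), "right": (float(w), h / 2.0)}
        opposite = {"top": "bottom", "bottom": "top", "left": "right", "right": "left"}
        x1, y1 = mids[rear]
        x2, y2 = mids[opposite[rear]]
        heading = np.array([x2 - x1, y2 - y1])
        return heading / np.linalg.norm(heading)  # unit vector from L1 toward L2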
In this embodiment, the heading of the AGV or the position of the AGV is determined according to the boundary of the connected domain in the detection pattern. Through the method, the course of the AGV is determined according to the boundary characteristics of the connected domain from the detection pattern.
Further, referring to fig. 8, a seventh embodiment of the image processing method of the present invention is proposed. Based on the foregoing embodiment, in this embodiment, after step S150, the method further includes:
step S170, determining a coding area and a coding sequence of the AGV according to the detection pattern;
and step S180, determining the gray value information of the coding area and the coding sequence to determine the coding information of the AGV.
In this embodiment, after the detection pattern and the heading are determined, the corresponding coding region can be further determined. According to the gray value of each rectangular lattice in the coding region, a black lattice with a gray value of 0 is assigned the code value 1 and a white lattice with a gray value of 255 is assigned the code value 0, and coding starts with the lattice closest to L1 as the first bit; for example, if the colors of the rectangular lattices in fig. 9 are black, white, black, and white, respectively, the corresponding binary code is 1010, which is then converted into the corresponding decimal code. Because the image captured by the camera may be deformed or interfered with by other imaging factors, part of a black lattice may appear white after binarization; in this embodiment, a lattice whose black area exceeds a preset proportion may be determined to be a black lattice, and its code is set to 1.
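A short Python sketch of this decoding step, for illustration: it assumes the four coding lattices have already been cut out of the binarized image in the order described above, and the 0.5 black-area ratio is an assumed stand-in for the preset proportion mentioned in the text.

    import numpy as np

    # Sketch of the decoding step described above. Assumptions: `cells` is a
    # list of the four coding-lattice sub-images cut from the binarized image
    # (black = 0), already ordered starting from the lattice closest to L1,
    # and 0.5 stands in for the preset black-area proportion mentioned above.
    def decode_cells(cells, black_ratio=0.5):
        bits = ""
        for cell in cells:
            frac_black = np.count_nonzero(cell == 0) / cell.size
            bits += "1" if frac_black >= black_ratio else "0"
        return bits, int(bits, 2)  # e.g. "1010" -> decimal 10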
In the embodiment, the coding area and the coding sequence of the AGV are determined according to the detection pattern; determining the gray value information of the coding area and the coding sequence to determine the coding information of the AGV. By the mode, the AGV can be encoded, detected and identified.
In addition, the embodiment of the invention also provides a computer readable storage medium.
The computer-readable storage medium of the present invention has stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method as described above.
For the methods implemented when the image processing program running on the processor is executed, reference may be made to the embodiments of the image processing method of the present invention, which are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (7)

1. An image processing method applied to an automated guided vehicle (AGV) control system, characterized in that a detection pattern is arranged on the top of the AGV and image acquisition equipment is arranged at a preset position, the image processing method comprising the following steps:
acquiring an image of a preset area by using the image acquisition equipment;
carrying out binarization processing on the obtained image to obtain a binarized image;
performing connected domain filtering on the acquired image according to a preset filtering condition to obtain the detection pattern;
acquiring coding information and moving state information of the AGV according to the detection pattern;
wherein, the step of filtering the connected domain of the acquired image according to the preset filtering condition to obtain the detection pattern comprises:
acquiring a connected domain matched with a first preset gray value in the binary image to obtain a first connected domain set;
filtering the first connected domain set according to a preset prior condition to obtain a second connected domain set, wherein the preset prior condition at least comprises a matching degree verification condition with a preset shape;
determining a first region of each connected domain according to the shape characteristics of each connected domain in the second connected domain set, and determining a first area of a region of which the gray value is matched with a second preset gray value in the first region, wherein each connected domain is in the corresponding first region;
determining a corresponding second region according to a preset width value and the first region, and determining a second area of a region of which the gray value is matched with a second preset gray value in the second region, wherein the first region is in the second region;
filtering the second connected domain set according to the ratio of the difference value of the first area and the second area to obtain a third connected domain set;
determining the detection pattern from the third set of connected components.
2. The image processing method of claim 1, wherein the detection pattern includes at least an outer layer isolation portion, a pattern detection main body, an inner layer isolation portion, and a coding region.
3. The image processing method of claim 2, wherein the step of determining the detection pattern from the third set of connected components comprises:
determining a third region corresponding to the inner layer isolation portion in the first region according to the positional relationship between the inner layer isolation portion and the pattern detection main body;
determining a third area of a region, which is matched with the second preset gray value, in the third region;
and determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set.
4. The image processing method according to claim 3, wherein the step of determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set is followed by:
and determining the course of the AGV or the position of the AGV according to the boundary of the connected domain in the detection pattern.
5. The image processing method according to claim 3, wherein after the step of determining the detection pattern according to the ratio of the third area to the area of the first region and the third connected domain set, the method further comprises:
determining the coding area and the coding sequence of the AGV according to the detection pattern;
determining the gray value information of the coding area and the coding sequence to determine the coding information of the AGV.
6. An image processing apparatus characterized by comprising: memory, a processor and an image processing program stored on the memory and executable on the processor, the image processing program when executed by the processor implementing the steps of the image processing method according to any one of claims 1 to 5.
7. A computer-readable storage medium, characterized in that an image processing program is stored thereon, which when executed by a processor implements the steps of the image processing method according to any one of claims 1 to 5.
CN201810034642.1A 2018-01-15 2018-01-15 Image processing method, image processing apparatus, and computer-readable storage medium Active CN108268811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810034642.1A CN108268811B (en) 2018-01-15 2018-01-15 Image processing method, image processing apparatus, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810034642.1A CN108268811B (en) 2018-01-15 2018-01-15 Image processing method, image processing apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN108268811A CN108268811A (en) 2018-07-10
CN108268811B true CN108268811B (en) 2021-02-02

Family

ID=62775608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810034642.1A Active CN108268811B (en) 2018-01-15 2018-01-15 Image processing method, image processing apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN108268811B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109141451B (en) * 2018-07-13 2023-02-10 京东方科技集团股份有限公司 Shopping positioning system and method, intelligent shopping cart and electronic equipment
CN109048072B (en) * 2018-08-21 2020-07-17 深圳市创客工场科技有限公司 Laser processing method, apparatus, device and computer readable storage medium
CN109359645B (en) * 2018-08-29 2022-02-22 深圳市易成自动驾驶技术有限公司 AGV encoding marker, detection method and computer readable storage medium
CN109061610A (en) * 2018-09-11 2018-12-21 杭州电子科技大学 A kind of combined calibrating method of camera and radar
CN109858304B (en) * 2019-01-04 2022-02-01 广州广电研究院有限公司 Method and device for detecting two-dimensional code position detection graph and storage medium
CN111986378B (en) * 2020-07-30 2022-06-28 长城信息股份有限公司 Bill color fiber yarn detection method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077384A (en) * 2013-01-10 2013-05-01 北京万集科技股份有限公司 Method and system for positioning and recognizing vehicle logo
CN104517089A (en) * 2013-09-29 2015-04-15 北大方正集团有限公司 Two-dimensional code decoding system and method
CN104794421A (en) * 2015-04-29 2015-07-22 华中科技大学 QR (quick response) code positioning and recognizing methods
CN206312215U (en) * 2016-10-09 2017-07-07 浙江国自机器人技术有限公司 A kind of mobile unit and stock article management system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9323249B2 (en) * 2011-03-31 2016-04-26 King Abdulaziz City for Science & Technology Matrix code symbols for accurate robot tracking


Also Published As

Publication number Publication date
CN108268811A (en) 2018-07-10

Similar Documents

Publication Publication Date Title
CN108268811B (en) Image processing method, image processing apparatus, and computer-readable storage medium
CN108009543B (en) License plate recognition method and device
US10600197B2 (en) Electronic device and method for recognizing object by using plurality of sensors
US11709282B2 (en) Asset tracking systems
US8588466B2 (en) Object area detection system, device, method, and program for detecting an object
CN107944450B (en) License plate recognition method and device
US8126264B2 (en) Device and method for identification of objects using color coding
CN106874906B (en) Image binarization method and device and terminal
EP3547253B1 (en) Image analysis method and device
CN111325141B (en) Interactive relationship identification method, device, equipment and storage medium
US9928429B2 (en) Image processing apparatus and image processing method
CN110414649B (en) DM code positioning method, device, terminal and storage medium
CN108846336B (en) Target detection method, device and computer readable storage medium
CN109901754B (en) Data self-calibration method and related device
CN109635700B (en) Obstacle recognition method, device, system and storage medium
CN110659548A (en) Vehicle and target detection method and device thereof
CN107223265B (en) Stripe set searching method, device and system
CN107967437B (en) Image processing method and device and computer readable storage medium
CN109816628B (en) Face evaluation method and related product
CN108881846B (en) Information fusion method and device and computer readable storage medium
CN108230680B (en) Vehicle behavior information acquisition method and device and terminal
CN117115774B (en) Lawn boundary identification method, device, equipment and storage medium
CN108268866B (en) Vehicle detection method and system
CN112822471A (en) Projection control method, intelligent robot and related products
CN110458004B (en) Target object identification method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Image processing method, device and computer readable storage medium

Effective date of registration: 20220623

Granted publication date: 20210202

Pledgee: Industrial and Commercial Bank of China Limited Shenzhen Fuyong sub branch

Pledgor: SHENZHEN ECHIEV AUTONOMOUS DRIVING TECHNOLOGY Co.,Ltd.

Registration number: Y2022980008778

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230818

Granted publication date: 20210202

Pledgee: Industrial and Commercial Bank of China Limited Shenzhen Fuyong sub branch

Pledgor: SHENZHEN ECHIEV AUTONOMOUS DRIVING TECHNOLOGY Co.,Ltd.

Registration number: Y2022980008778
