CN111191557A - Mark identification positioning method, mark identification positioning device and intelligent equipment

Mark identification positioning method, mark identification positioning device and intelligent equipment

Info

Publication number
CN111191557A
CN111191557A
Authority
CN
China
Prior art keywords
mark
target
candidate
screened
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911354811.0A
Other languages
Chinese (zh)
Other versions
CN111191557B (en)
Inventor
郭奎
庞建新
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201911354811.0A
Publication of CN111191557A
Application granted
Publication of CN111191557B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a mark identification and positioning method, a mark identification and positioning device, an intelligent device and a computer-readable storage medium. The method comprises: collecting a real-time environment image; screening the real-time environment image for one or more candidate marks according to a preset boundary outer frame and a preset anchor point shape; identifying the mark ID of each candidate mark; determining a target mark among the candidate marks according to the mark IDs; and positioning according to the target mark. With this scheme, an intelligent device can visually locate the special mark in real time and rapidly determine its current position.

Description

Mark identification positioning method, mark identification positioning device and intelligent equipment
Technical Field
The present application belongs to the field of visual positioning technology, and in particular, relates to a mark identification and positioning method, a mark identification and positioning apparatus, an intelligent device, and a computer-readable storage medium.
Background
At present, marks with a visual positioning function are mainly two-dimensional codes and ArUco codes. In the field of robotics, however, most robots run on embedded hardware platforms, and real-time detection and positioning of two-dimensional codes or ArUco codes demands more computing power than such platforms typically provide.
Disclosure of Invention
In view of this, the present application provides a mark identification and positioning method, a mark identification and positioning apparatus, an intelligent device and a computer-readable storage medium, which enable an intelligent device to visually locate a special mark in real time and quickly determine its current position.
A first aspect of the present application provides a method for identifying and positioning a mark, including:
collecting a real-time environment image;
screening the real-time environment image for one or more candidate marks according to a preset boundary outer frame and a preset anchor point shape;
identifying the mark ID of each candidate mark;
determining a target mark in the candidate marks according to the mark ID of each candidate mark;
and positioning according to the target mark.
A second aspect of the present application provides a marker identification and positioning device, including:
an acquisition unit, configured to collect a real-time environment image;
a screening unit, configured to screen the real-time environment image for one or more candidate marks according to a preset boundary outer frame and a preset anchor point shape;
an identification unit, configured to identify the mark ID of each candidate mark;
a determination unit, configured to determine a target mark among the candidate marks according to the mark ID of each candidate mark;
and a positioning unit, configured to perform positioning according to the target mark.
A third aspect of the present application provides a smart device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the computer program.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect as described above.
A fifth aspect of the application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method as described in the first aspect above.
In order to identify the special mark, a real-time environment image is first collected; one or more candidate marks are then screened out of the real-time environment image according to a preset boundary outer frame and a preset anchor point shape; the mark ID of each candidate mark is identified; finally, a target mark is determined among the candidate marks according to the mark IDs, and positioning is carried out according to the target mark. This identification process does not require the embedded hardware platform to have high computing power, so real-time visual positioning of the special mark by the intelligent device can be achieved without increasing cost, and the current position of the intelligent device can be quickly determined from that positioning.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for identifying and positioning a mark provided in an embodiment of the present application;
FIG. 2 is a schematic illustration of a special mark provided by an embodiment of the present application;
fig. 3 is a schematic flowchart illustrating a specific implementation procedure of step 105 in the method for identifying and positioning a mark according to the embodiment of the present application;
FIG. 4 is a block diagram of a mark identification and positioning device according to an embodiment of the present application;
fig. 5 is a block diagram of a positioning unit in the marker identification and positioning apparatus according to the embodiment of the present application;
fig. 6 is a schematic diagram of an intelligent device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The mark identification and positioning method and device provided by the present application can be applied to intelligent devices capable of autonomously controlled movement, such as robots, unmanned vehicles and indoor unmanned aerial vehicles. To explain the technical solution proposed in the present application, specific examples are described below.
Example one
The following embodiment describes the mark identification and positioning method provided in the embodiments of the present application, taking a robot as the example of an intelligent device. Referring to fig. 1, the mark identification and positioning method in the embodiment of the present application includes:
Step 101, collecting a real-time environment image;
in the embodiment of the application, one or more cameras may be mounted on the robot in advance, and based on this, a real-time environment image of an environment where the robot is located may be acquired by the camera mounted on the robot. Alternatively, it may be detected first whether the robot is in a moving state, considering that it may be necessary to confirm its own environment only when the robot is moving; if the robot is in a moving state, a real-time environment image can be acquired through a camera carried by the robot; if the robot is in a static state, the camera does not need to be started. Further, the moving speed of the robot is not too high, the displacement of the robot in a short time period is small, and based on the fact that real-time environment images can be collected periodically through a camera carried by the robot, the problem that the camera is in a working state for a long time and wastes of robot system resources are caused is avoided.
Step 102, screening the real-time environment image for one or more candidate marks according to a preset boundary outer frame and a preset anchor point shape;
in the embodiment of the application, a novel special mark is provided. To better illustrate the steps of the embodiments of the present application, the specific mark is described below, as shown in fig. 2: the symbol has four circles including three big circles and one small circle, and the four circles are set to different colors in practical applications, for example, the big circle at the upper left corner can be set to red, the big circle at the lower left corner can be set to blue, and the big circle at the upper right corner can be set to green to highlight the symbol. The three large circles are main positioning circles and are used for determining the original point of the mark and the direction of a coordinate axis; the small circle is an auxiliary positioning circle, and the auxiliary positioning circle and the three large circles form four positioning anchor points together, so that the relation between the mark coordinate system and the camera coordinate system is determined jointly. Specifically, in the index coordinate system, the great circle at the upper left corner is used as the origin of the index coordinate system, the X-axis of the index coordinate system is constructed by the great circle at the upper left corner and the great circle at the upper right corner, the Y-axis of the index coordinate system is constructed by the great circle at the upper left corner and the great circle at the lower left corner, and the Z-axis is determined by the right-hand coordinate system. Further, a marker ID is provided in the middle of the marker, and is indicated by character a in fig. 2, but the marker ID may be any other character, and is not limited herein. The boundary outer frame of the mark is in a circular arc-shaped frame mode, the boundary outer frame can be set to be gray, and the mark can be quickly positioned through the boundary outer frame; the four positioning anchor points inside the mark approximately form a rectangle, and the shape of the positioning anchor points can be deformed when the robot observes the mark from different angles, so that the shape of the anchor points can be set to be a parallelogram.
In order to accurately identify the marks present in the environment, after obtaining the real-time environment image, the robot may screen out one or more candidate marks according to the preset boundary outer frame and the preset anchor point shape; that is, the candidate marks obtained after screening all have the same or similar structure. Further, step 102 may be implemented as follows:
a1, performing target recognition on the real-time environment image to obtain more than one pattern to be screened;
after acquiring the real-time environment image, the robot may perform target identification on the real-time environment image to obtain all targets existing in the real-time environment image, that is, the pattern to be screened. Optionally, if any pattern to be screened cannot be obtained in the real-time environment image through target recognition, it may be considered that the robot has no content related to the mark in the current environment, and the frame of real-time environment image may be discarded first, and the screening and recognition operations may be performed after the next frame of real-time environment image is acquired.
A2, extracting the contour of each pattern to be screened;
after more than one pattern to be screened is obtained, the contour of each pattern to be screened can be further continuously obtained, specifically, each pattern to be screened can be divided through a self-adaptive threshold value respectively to obtain a preliminary contour of each pattern to be screened, and then in order to obtain a smoother contour line, contour filtering processing can be performed on the preliminary contour of each pattern to be screened again to remove noise points and obtain a smoother contour of each pattern to be screened.
A3, matching the contour of each pattern to be screened against the boundary outer frame;
as can be seen from the description of the special mark provided in the embodiment of the present application, the special mark has its specific boundary outer frame, which is represented by a circular arc-shaped frame pattern, and based on this, the outlines of the patterns to be screened can be respectively matched with the preset boundary outer frame.
A4, if the mark to be screened exists, detecting whether the mark to be screened meets the preset screening condition;
if the outline of the pattern to be screened can be matched with the preset border outer frame, the pattern to be screened can be preliminarily determined as the mark to be screened, so that the screening operation can be further carried out subsequently. Specifically, the positioning anchor point included in the mark to be screened may be obtained again, and the mark to be screened is detected by a preset screening condition, where the screening condition is: the number of the positioning anchor points contained in the mark to be screened is preset, and the shape of the anchor points can be constructed by connecting the centers of the positioning anchor points. As can be seen from fig. 2, the predetermined number is 4; the anchor point is in a parallelogram shape; that is, when a certain to-be-screened mark includes four positioning anchors, and the four positioning anchors can form a parallelogram, the to-be-screened mark can be determined as a candidate mark. On the contrary, if the outline of the pattern to be screened cannot be matched with the preset boundary outer frame, the pattern to be screened is considered to be not the special mark concerned by the robot actually, and the pattern to be screened can be screened at the moment without performing subsequent operation on the pattern to be screened; or if the mark to be screened does not meet the preset screening condition, the mark to be screened is considered to be actually not the special mark concerned by the robot, and the mark to be screened can be screened out at the moment without carrying out subsequent operation on the mark to be screened;
A5, determining the marks to be screened that meet the screening condition as candidate marks.
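The parallelogram condition of step A4 can be tested geometrically: opposite sides of the quadrilateral formed by the four anchor centres should be roughly parallel and of roughly equal length. The sketch below assumes the centres are already ordered around the quadrilateral and uses an assumed tolerance.

```python
import numpy as np

def is_parallelogram(centers, tol=0.15):
    """centers: four (x, y) anchor centres ordered around the quadrilateral
    (e.g. upper-left, upper-right, lower-right, lower-left)."""
    p = np.asarray(centers, dtype=np.float64)
    if p.shape != (4, 2):
        return False
    pairs = [(p[1] - p[0], p[2] - p[3]),   # top side vs bottom side
             (p[3] - p[0], p[2] - p[1])]   # left side vs right side
    for u, v in pairs:
        lu, lv = np.linalg.norm(u), np.linalg.norm(v)
        if min(lu, lv) == 0.0:
            return False
        cross = abs(u[0] * v[1] - u[1] * v[0])  # proportional to |sin| of angle
        if abs(lu - lv) / max(lu, lv) > tol or cross / (lu * lv) > tol:
            return False
    return True
```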
Step 103, identifying the mark ID of each candidate mark;
in the embodiment of the present application, after more than one candidate mark is obtained by screening, the mark IDs of the candidate marks can be continuously identified. The tag IDs of different tags are different, and it is considered that in one environment, a tag ID may uniquely refer to a particular tag. That is, the tag IDs of different special tags in the same environment are not duplicated. Optionally, the step 103 includes:
b1, positioning the target candidate mark based on the positioning anchor point in the target candidate mark;
since the processing flow is the same when the tag ID is identified for each candidate tag in the present embodiment, in order to describe the operation of step 103 more clearly, any candidate tag is determined as the target candidate tag to explain and describe the specific implementation flow of step 103. That is, each candidate flag may be determined as a target candidate flag and the operations of steps B1 through B3 are performed. Specifically, the positioning anchors refer to three large circles and one small circle in fig. 2, that is, the three large circles and one small circle in the mark constitute the positioning anchor of the candidate mark.
B2, performing perspective transformation on the target candidate mark according to the position of the positioning anchor point in the target candidate mark in the real-time environment image to obtain a front view of the target candidate mark;
the target candidate mark is preset, so that the robot actually knows the physical size of the target candidate mark in real life in advance; that is, the actual physical coordinates of the four localization anchor points (i.e., the four circles in fig. 2) on the target candidate markers are known to the robot. In step B2, the robot has further obtained the positions of the anchor points in the target candidate mark in the real-time environment image, that is, the corresponding pixel coordinates of the four anchor points in the real-time environment image, so that the robot can obtain the front view of the target candidate mark according to the perspective projection model of the camera.
B3, performing image recognition on the front view to determine the mark ID of the target candidate mark.
After the front view of the target candidate mark is obtained, image recognition may be performed on it directly; specifically, the character string contained in the front view is recognized to obtain the mark ID of the target candidate mark.
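The text does not fix a particular recognizer, so the sketch below stands in with normalized template matching against a small dictionary of known ID characters; the acceptance threshold and the templates dictionary are assumptions, and any classifier (template matching, OCR, a small neural network) could take this role.

```python
import cv2

def identify_marker_id(front_view_gray, templates, min_score=0.6):
    """Step B3 sketch: match the rectified view against known ID templates.
    templates: dict mapping an ID string to a grayscale template image."""
    h, w = front_view_gray.shape[:2]
    best_id, best_score = None, -1.0
    for marker_id, tmpl in templates.items():
        resized = cv2.resize(tmpl, (w, h))
        # Equal-size inputs make matchTemplate return a single 1x1 score.
        score = cv2.matchTemplate(
            front_view_gray, resized, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_id, best_score = marker_id, score
    return best_id if best_score >= min_score else None
```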
Step 104, determining a target mark among the candidate marks according to the mark ID of each candidate mark;
in the embodiment of the application, the marker IDs of the candidate markers can be sequentially detected through a trained model carried in the robot to determine which of the candidate markers have legitimate marker IDs and which of the candidate markers have illegitimate marker IDs, and through the process, a candidate marker with a legitimate marker ID can be determined and obtained, and the candidate marker is a target marker.
Step 105, positioning according to the target mark.
In the embodiment of the present application, because the target mark is a special mark, the four positioning anchor points it contains can be used for positioning; thus, after the target mark is determined, the positioning operation can be performed based on it. Optionally, referring to fig. 3, step 105 specifically includes:
Step 1051, acquiring camera parameters and distortion coefficients;
in the embodiment of the application, camera parameters of a camera mounted on a smart device such as a robot may be obtained, where the camera parameters specifically include camera external parameters and camera internal parameters, and the distortion coefficients include a radial distortion coefficient and a tangential distortion coefficient.
Step 1052, acquiring the image coordinates and the corresponding space physical coordinates of the target mark;
in the embodiment of the present application, similarly to the step B2, since the target mark is preset, the robot actually knows the physical size of the target mark in real life in advance; that is, the actual physical coordinates of the four positioning anchor points (i.e., the four circles in fig. 2) on the target mark are known to the robot, and based on this, the image coordinates and the corresponding spatial physical coordinates of the target mark can be determined.
Step 1053, based on the camera parameters, the distortion coefficients, the image coordinates of the target mark and the corresponding space physical coordinates, obtaining the coordinate transformation relation between the mark coordinate system and the camera coordinate system;
in this embodiment, according to the Perspective projection model of the camera, the position relationship between the marker coordinate system and the camera coordinate system, that is, the coordinate transformation relationship between the marker coordinate system and the camera coordinate system, can be solved by using a Perspective N-point (pnp), the known camera parameters, the known distortion coefficients, the image coordinates of the target marker, and the corresponding spatial physical coordinates.
Step 1054, determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relationship.
In the embodiment of the present application, the device to be positioned is the intelligent device to which the scheme of this embodiment is applied, such as a robot, an unmanned vehicle or an indoor unmanned aerial vehicle. Once the coordinate transformation relationship between the marker coordinate system and the camera coordinate system is known, the positional relationship between the target mark and the camera is obtained; since the camera is mounted on the intelligent device, this positional relationship can be taken as the positional relationship between the target mark and the intelligent device, from which the relative position of the two can be determined. Furthermore, once the relative position of the intelligent device and the target mark is determined, and since the position of the target mark in the global map is known, the position of the intelligent device in the global map can be determined; the navigation route can then be updated according to this position and the destination of the current movement, so that the intelligent device achieves autonomous navigation.
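As a sketch of how the relative pose feeds the global-map step, the marker-to-camera transform can be inverted and composed with the marker's known pose in the map; T_map_marker is an assumed 4x4 homogeneous pose taken from the map database.

```python
import numpy as np

def camera_pose_in_map(R, tvec, T_map_marker):
    """Step 1054 sketch: place the camera (and hence the robot) in the map.
    R, tvec: marker-to-camera rotation and translation from the PnP step."""
    T_cam_marker = np.eye(4)
    T_cam_marker[:3, :3] = R
    T_cam_marker[:3, 3] = np.ravel(tvec)
    T_marker_cam = np.linalg.inv(T_cam_marker)  # camera pose in marker frame
    return T_map_marker @ T_marker_cam          # camera pose in the map frame
```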
It can thus be seen that the embodiment of the present application provides a novel special mark. Intelligent devices such as robots, unmanned vehicles and indoor unmanned aerial vehicles can identify the target mark while moving, through operations such as contour extraction and mark ID identification, thereby recognizing and locating the special mark and, on that basis, achieving autonomous navigation. The identification process no longer requires the embedded hardware platform to have high computing power, and real-time visual positioning of the special mark by the intelligent device can be achieved without increasing cost.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Example two
Embodiment two of the present application provides a mark identification and positioning apparatus, which can be integrated into intelligent devices capable of autonomously controlled movement, such as robots, unmanned vehicles and indoor unmanned aerial vehicles. As shown in fig. 4, the mark identification and positioning apparatus 400 in this embodiment includes:
an acquisition unit 401, configured to acquire a real-time environment image;
a screening unit 402, configured to screen, in the real-time environment image, to obtain at least one candidate mark according to a preset boundary outer frame and a preset anchor point shape;
an identification unit 403, configured to identify the mark ID of each candidate mark;
a determination unit 404, configured to determine a target mark among the candidate marks according to the mark ID of each candidate mark;
and a positioning unit 405, configured to perform positioning according to the target mark.
Optionally, referring to fig. 5, the positioning unit 405 includes:
a camera parameter obtaining subunit 4051, configured to obtain a camera parameter and a distortion coefficient;
a coordinate parameter obtaining subunit 4052, configured to obtain image coordinates of the target mark and corresponding spatial physical coordinates;
a transformation relation determining subunit 4053, configured to obtain a coordinate transformation relation between the marker coordinate system and the camera coordinate system based on the camera parameter, the distortion coefficient, the image coordinate of the target marker, and the corresponding spatial physical coordinate;
the relative position determining subunit 4054 is configured to determine, according to the coordinate transformation relationship, a relative position between the device to be positioned and the target mark.
Optionally, the above-mentioned mark identification and positioning apparatus 400 further includes:
a map position determining unit, configured to determine a position of the device to be positioned in a global map according to a relative position between the device to be positioned and the target mark;
and the navigation route updating unit is used for updating the navigation route according to the position of the device to be positioned in the global map and the destination of the action.
Optionally, the screening unit 402 includes:
a target identification subunit, configured to perform target identification on the real-time environment image to obtain one or more patterns to be screened;
a contour extraction subunit, configured to extract the contour of each pattern to be screened;
a contour matching subunit, configured to match the contour of each pattern to be screened against the boundary outer frame;
an anchor point detection subunit, configured to detect, if a mark to be screened exists, whether the mark to be screened meets a preset screening condition, where the mark to be screened is a pattern to be screened whose contour is successfully matched with the boundary outer frame, and the screening condition is: the mark to be screened contains a preset number of positioning anchor points, and the anchor point shape can be constructed by connecting the centers of the positioning anchor points;
and a candidate determination subunit, configured to determine the marks to be screened that meet the screening condition as candidate marks.
Optionally, the contour extraction subunit includes:
a segmentation subunit, configured to segment each pattern to be screened by an adaptive threshold to obtain a preliminary contour of each pattern to be screened;
and a filtering subunit, configured to perform contour filtering on the preliminary contour of each pattern to be screened to obtain the contour of each pattern to be screened.
Optionally, the identifying unit 403 includes:
an anchor point positioning subunit, configured to locate the positioning anchor points in a target candidate mark, where the target candidate mark is any one of the candidate marks;
a perspective transformation subunit, configured to perform perspective transformation on the target candidate marker according to a position of a positioning anchor point in the target candidate marker in the real-time environment image, so as to obtain a front view of the target candidate marker;
and a marker ID determination subunit, configured to perform image recognition on the front view to determine a marker ID of the target candidate marker.
It can thus be seen that the embodiment of the present application provides a novel special mark. Intelligent devices such as robots, unmanned vehicles and indoor unmanned aerial vehicles can identify the target mark while moving, through operations such as contour extraction and mark ID identification, thereby recognizing and locating the special mark and, on that basis, achieving autonomous navigation. The identification process no longer requires the embedded hardware platform to have high computing power, and real-time visual positioning of the special mark by the intelligent device can be achieved without increasing cost.
Example three
Embodiment three of the present application provides an intelligent device, which may be a robot, an unmanned vehicle, an indoor unmanned aerial vehicle or the like; this is not limited here. Referring to fig. 6, the intelligent device 6 in the embodiment of the present application includes: a memory 601, one or more processors 602 (only one is shown in fig. 6), and a computer program stored in the memory 601 and executable on the processors. The memory 601 stores software programs and modules; the processor 602 executes various functional applications and performs data processing by running the software programs and units stored in the memory 601, so as to acquire the resources corresponding to preset events. Specifically, the processor 602 implements the following steps by running the computer program stored in the memory 601:
collecting a real-time environment image;
screening the real-time environment image for one or more candidate marks according to a preset boundary outer frame and a preset anchor point shape;
identifying the mark ID of each candidate mark;
determining a target mark in the candidate marks according to the mark ID of each candidate mark;
and positioning according to the target mark.
Assuming that the above is a first possible implementation, in a second possible implementation provided on the basis of the first, the positioning according to the target mark includes:
acquiring camera parameters and distortion coefficients;
acquiring the image coordinates and corresponding space physical coordinates of the target mark;
obtaining a coordinate transformation relation between a mark coordinate system and a camera coordinate system based on the camera parameters, the distortion coefficients, the image coordinates of the target mark and the corresponding space physical coordinates;
and determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation.
In a third possible implementation provided on the basis of the second, after the relative position of the device to be positioned and the target mark is determined according to the coordinate transformation relationship, the processor 602 further implements the following steps by running the computer program stored in the memory 601:
determining the position of the device to be positioned in a global map according to the relative position of the device to be positioned and the target mark;
and updating the navigation route according to the position of the device to be positioned in the global map and the destination of the action.
In a fourth possible implementation provided on the basis of the first, the second or the third possible implementation, the screening of the real-time environment image for one or more candidate marks according to a preset boundary outer frame and a preset anchor point shape includes:
performing target identification on the real-time environment image to obtain one or more patterns to be screened;
extracting the contour of each pattern to be screened;
matching the contour of each pattern to be screened against the boundary outer frame;
if the mark to be screened exists, detecting whether the mark to be screened meets a preset screening condition, wherein the mark to be screened is a pattern to be screened, the outline of which is successfully matched with the boundary outer frame, and the screening condition is as follows: the mark to be screened comprises a preset number of positioning anchor points, and the shape of the anchor points can be constructed by connecting the centers of the positioning anchor points;
and determining the mark to be screened meeting the screening condition as a candidate mark.
In a fifth possible implementation provided on the basis of the fourth, the extracting of the contour of each pattern to be screened includes:
respectively segmenting each pattern to be screened through a self-adaptive threshold value to obtain a preliminary outline of each pattern to be screened;
and carrying out contour filtering treatment on the preliminary contour of each pattern to be screened to obtain the contour of each pattern to be screened.
In a sixth possible implementation provided on the basis of the first, the second or the third possible implementation, the identifying of the mark ID of each candidate mark includes:
positioning a positioning anchor point in a target candidate mark, wherein the target candidate mark is any candidate mark;
performing perspective transformation on the target candidate mark according to the position of a positioning anchor point in the target candidate mark in the real-time environment image to obtain a front view of the target candidate mark;
and performing image recognition on the front view to determine the mark ID of the target candidate mark.
It should be understood that in the embodiments of the present application, the processor 602 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
Memory 601 may include both read-only memory and random-access memory, and provides instructions and data to processor 602. Some or all of memory 601 may also include non-volatile random access memory. For example, the memory 601 may also store device class information.
Therefore, the embodiment of the present application provides a novel special mark; the intelligent device can identify the target mark while moving, through operations such as contour extraction and mark ID identification, thereby recognizing and locating the special mark and, on that basis, achieving autonomous navigation. The identification process no longer requires the embedded hardware platform to have high computing power, and real-time visual positioning of the special mark by the intelligent device can be achieved without increasing cost.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether such functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow in the methods of the above embodiments can be realized by a computer program, which can be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable storage medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable storage media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A mark identification and positioning method is characterized by comprising the following steps:
collecting a real-time environment image;
screening to obtain one or more candidate marks in the real-time environment image according to a preset boundary outer frame and a preset anchor point shape;
identifying the mark ID of each candidate mark;
determining a target mark in the candidate marks according to the mark ID of each candidate mark; and
and positioning according to the target mark.
2. The mark identification and positioning method according to claim 1, wherein the positioning according to the target mark comprises:
acquiring camera parameters and distortion coefficients;
acquiring the image coordinates and the corresponding space physical coordinates of the target mark;
obtaining a coordinate transformation relation between a mark coordinate system and a camera coordinate system based on the camera parameters, the distortion coefficients, the image coordinates of the target mark and the corresponding space physical coordinates;
and determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation.
3. The mark identification and positioning method according to claim 2, further comprising, after determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relationship:
determining the position of the device to be positioned in a global map according to the relative position of the device to be positioned and the target mark;
and updating a navigation route according to the position of the device to be positioned in the global map and the destination of the action.
4. The mark identification and positioning method according to any one of claims 1 to 3, wherein the screening to obtain one or more candidate marks in the real-time environment image according to a preset boundary outer frame and a preset anchor point shape comprises:
performing target identification on the real-time environment image to obtain more than one pattern to be screened;
extracting to obtain the outline of each pattern to be screened;
respectively matching the outline of each pattern to be screened with the border outer frame;
if the mark to be screened exists, detecting whether the mark to be screened meets a preset screening condition, wherein the mark to be screened is a pattern to be screened, the outline of which is successfully matched with the boundary outer frame, and the screening condition is as follows: the mark to be screened comprises a preset number of positioning anchor points, and the shape of the anchor points can be constructed by connecting lines between the centers of the positioning anchor points;
and determining the mark to be screened meeting the screening condition as a candidate mark.
5. The mark identification and positioning method according to claim 4, wherein the extracting of the outline of each pattern to be screened comprises:
respectively segmenting each pattern to be screened through a self-adaptive threshold value to obtain a preliminary outline of each pattern to be screened;
and carrying out contour filtering treatment on the preliminary contour of each pattern to be screened to obtain the contour of each pattern to be screened.
6. The mark identification and positioning method according to any one of claims 1 to 3, wherein the identifying of the mark ID of each candidate mark comprises:
positioning a positioning anchor point in a target candidate mark, wherein the target candidate mark is any candidate mark;
carrying out perspective transformation on the target candidate mark according to the position of a positioning anchor point in the target candidate mark in the real-time environment image to obtain a front view of the target candidate mark;
image recognition is performed on the front view to determine a marker ID of the target candidate marker.
7. A mark identification and positioning device, characterized by comprising:
an acquisition unit, configured to collect a real-time environment image;
a screening unit, configured to screen the real-time environment image for one or more candidate marks according to a preset boundary outer frame and a preset anchor point shape;
an identification unit, configured to identify the mark ID of each candidate mark;
a determination unit, configured to determine a target mark among the candidate marks according to the mark ID of each candidate mark;
and a positioning unit, configured to perform positioning according to the target mark.
8. The mark identification and positioning device according to claim 7, wherein the positioning unit comprises:
the camera parameter acquisition subunit is used for acquiring camera parameters and distortion coefficients;
the coordinate parameter acquisition subunit is used for acquiring the image coordinates of the target mark and the corresponding space physical coordinates;
the transformation relation determining subunit is used for obtaining a coordinate transformation relation between a mark coordinate system and a camera coordinate system based on the camera parameters, the distortion coefficients, the image coordinates of the target mark and the corresponding space physical coordinates;
and the relative position determining subunit is used for determining the relative position of the device to be positioned and the target mark according to the coordinate transformation relation.
9. An intelligent device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the steps of the method according to any one of claims 1 to 6 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN201911354811.0A 2019-12-25 2019-12-25 Mark identification positioning method, mark identification positioning device and intelligent equipment Active CN111191557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911354811.0A CN111191557B (en) 2019-12-25 2019-12-25 Mark identification positioning method, mark identification positioning device and intelligent equipment


Publications (2)

Publication Number Publication Date
CN111191557A (en) 2020-05-22
CN111191557B CN111191557B (en) 2023-12-05

Family

ID=70709348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911354811.0A Active CN111191557B (en) 2019-12-25 2019-12-25 Mark identification positioning method, mark identification positioning device and intelligent equipment

Country Status (1)

Country Link
CN (1) CN111191557B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009035697A1 (en) * 2007-09-13 2009-03-19 Cognex Corporation System and method for traffic sign recognition
US20090190831A1 (en) * 2008-01-25 2009-07-30 Intermec Ip Corp. System and method for locating a target region in an image
CN103020632A (en) * 2012-11-20 2013-04-03 北京航空航天大学 Fast recognition method for positioning mark point of mobile robot in indoor environment
CN109271937A (en) * 2018-09-19 2019-01-25 深圳市赢世体育科技有限公司 Athletic ground Marker Identity method and system based on image procossing
CN109977935A (en) * 2019-02-27 2019-07-05 平安科技(深圳)有限公司 A kind of text recognition method and device
CN109993790A (en) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 Marker, the forming method of marker, localization method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686355A (en) * 2021-01-12 2021-04-20 树根互联技术有限公司 Image processing method and device, electronic equipment and readable storage medium
CN112686355B (en) * 2021-01-12 2024-01-05 树根互联股份有限公司 Image processing method and device, electronic equipment and readable storage medium
WO2024012463A1 (en) * 2022-07-11 2024-01-18 杭州海康机器人股份有限公司 Positioning method and apparatus
CN115457144A (en) * 2022-09-07 2022-12-09 梅卡曼德(北京)机器人科技有限公司 Calibration pattern recognition method, calibration device and electronic equipment
CN115457144B (en) * 2022-09-07 2023-08-15 梅卡曼德(北京)机器人科技有限公司 Calibration pattern recognition method, calibration device and electronic equipment

Also Published As

Publication number Publication date
CN111191557B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN107507167B (en) Cargo tray detection method and system based on point cloud plane contour matching
Yenikaya et al. Keeping the vehicle on the road: A survey on on-road lane detection systems
Li et al. Springrobot: A prototype autonomous vehicle and its algorithms for lane detection
CN111191557A (en) Mark identification positioning method, mark identification positioning device and intelligent equipment
CN110084243B (en) File identification and positioning method based on two-dimensional code and monocular camera
CN108182383B (en) Vehicle window detection method and device
JP2009055139A (en) Person tracking system, apparatus, and program
CN111414826A (en) Method, device and storage medium for identifying landmark arrow
CN109363770B (en) Automatic identification and positioning method for marker points of surgical navigation robot
CN112598922B (en) Parking space detection method, device, equipment and storage medium
CN111814752A (en) Indoor positioning implementation method, server, intelligent mobile device and storage medium
CN111295666A (en) Lane line detection method, device, control equipment and storage medium
CN113894799B (en) Robot and marker identification method and device for assisting environment positioning
Chang et al. An efficient method for lane-mark extraction in complex conditions
CN114037966A (en) High-precision map feature extraction method, device, medium and electronic equipment
Chen et al. Embedded vision-based nighttime driver assistance system
CN111199198B (en) Image target positioning method, image target positioning device and mobile robot
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
FAN et al. Robust lane detection and tracking based on machine vision
CN113029185B (en) Road marking change detection method and system in crowdsourcing type high-precision map updating
CN104766331A (en) Imaging processing method and electronic device
CN111157012B (en) Robot navigation method and device, readable storage medium and robot
CN110751163B (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN114518106B (en) Method, system, medium and equipment for detecting update of vertical elements of high-precision map
CN115249407B (en) Indicator light state identification method and device, electronic equipment, storage medium and product

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant