CN113657164A - Method and device for calibrating target object, cleaning equipment and storage medium - Google Patents

Method and device for calibrating target object, cleaning equipment and storage medium

Info

Publication number
CN113657164A
CN113657164A
Authority
CN
China
Prior art keywords
feature point
target
image
frame
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110801537.8A
Other languages
Chinese (zh)
Inventor
张阳阳
韩冲
章丁盛
张志鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Robozone Technology Co Ltd
Original Assignee
Midea Robozone Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Robozone Technology Co Ltd filed Critical Midea Robozone Technology Co Ltd
Priority to CN202110801537.8A priority Critical patent/CN113657164A/en
Publication of CN113657164A publication Critical patent/CN113657164A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the application discloses a method for calibrating a target object, comprising the following steps: when a cleaning device performs a cleaning operation, acquiring ordered multi-frame images that include a target object and are collected by the cleaning device through a top-view image collector; analyzing adjacent frames in the ordered multi-frame images to obtain feature point motion information of the target feature points of the target object; determining identification information of the target object based on the feature point motion information and the acquired collector motion information of the top-view image collector; and calibrating the area where the target object is located on the cleaning track of the cleaning device based on the identification information. Embodiments of the application also disclose a device for calibrating a target object, a cleaning device, and a storage medium.

Description

Method and device for calibrating target object, cleaning equipment and storage medium
Technical Field
The present application relates to the field of smart home technologies, and in particular, to a method for calibrating a target object, a device for calibrating a target object, a cleaning apparatus, and a storage medium.
Background
With the rapid development of intelligent cleaning technology and smart home products, cleaning devices integrated with cameras or camera modules have entered homes, shopping malls, factories, and other scenes to perform cleaning work, monitoring the indoor environment while cleaning. However, cleaning devices in the related art do not use scene information in the indoor environment for positioning during the cleaning process. A new method for calibrating a target object is therefore needed.
Disclosure of Invention
Embodiments of the present application provide a method for calibrating a target object, a device for calibrating a target object, a cleaning device, and a storage medium, to solve the problem that cleaning devices in the related art do not use scene information in the indoor environment for positioning during the cleaning process.
The technical scheme of the application is realized as follows:
a method of calibrating a target object, the method comprising:
when cleaning equipment performs cleaning operation, acquiring ordered multi-frame images including a target object, which are acquired by the cleaning equipment through a top-view image collector;
analyzing adjacent frames in the ordered multi-frame image to acquire feature point motion information of a target feature point of the target object;
determining identification information of the target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector;
and calibrating the area where the target object is located on the cleaning track of the cleaning equipment based on the identification information.
An apparatus for calibrating a target object, the apparatus comprising:
the device comprises an acquisition module, a storage module, and a processing module, wherein the acquisition module is configured to acquire, when the cleaning device performs a cleaning operation, ordered multi-frame images including a target object that are collected by the cleaning device through a top-view image collector;
the processing module is used for analyzing adjacent frames in the ordered multi-frame image and acquiring the characteristic point motion information of the target characteristic point of the target object;
the processing module is further configured to determine identification information of the target object based on the feature point motion information and the acquired collector motion information of the top-view image collector;
the processing module is further configured to calibrate an area where the target object is located on the cleaning track of the cleaning device based on the identification information.
A cleaning apparatus, the cleaning apparatus comprising:
a memory for storing executable instructions;
a processor for executing the executable instructions stored in the memory to implement the above-described method.
A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the above-described method.
According to the method for calibrating a target object, the device for calibrating a target object, the cleaning device, and the storage medium provided in the embodiments of the application, when the cleaning device performs a cleaning operation, it acquires ordered multi-frame images including the target object collected through a top-view image collector; analyzes adjacent frames in the ordered multi-frame images to obtain feature point motion information of the target feature points of the target object; determines identification information of the target object based on the feature point motion information and the acquired collector motion information of the top-view image collector; and calibrates the area where the target object is located on the cleaning track of the cleaning device based on the identification information. In other words, the identification information of the target object is determined from the feature point motion information obtained by analyzing adjacent frames and from the collector motion information, so that the cleaning device can mark the area where the target object is located on its cleaning track based on the identification information; that area is thereby clearly and accurately positioned by combining images of the target object acquired in the actual cleaning scene.
Drawings
Fig. 1 is an alternative schematic flow chart of a method for calibrating a target object according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating an alternative method for calibrating a target object according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart illustrating an alternative method for calibrating a target object according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating an alternative method for calibrating a target object according to an embodiment of the present disclosure;
fig. 5 is a schematic view of motion information of feature points when the cleaning device provided in the embodiment of the present application moves straight;
fig. 6 is a schematic view illustrating motion information of feature points when the cleaning device rotates according to the embodiment of the present application;
FIG. 7 is an alternative schematic flow chart diagram of a method for calibrating a target object according to an embodiment of the present application;
fig. 8 is a schematic diagram of a cleaning device provided in an embodiment of the present application, which determines a part of target feature points whose feature point moving speeds satisfy a condition;
fig. 9 is a schematic diagram after marking a region where a target object is located according to an embodiment of the present application;
FIG. 10 is a schematic flow chart illustrating an alternative method for calibrating a target object according to an embodiment of the present disclosure;
fig. 11 is an alternative structural schematic diagram of the apparatus for calibrating a target object according to the embodiment of the present application;
fig. 12 is an alternative structural schematic diagram of the cleaning device provided in the embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
An embodiment of the present application provides a method for calibrating a target object, which is applied to a cleaning device, and as shown in fig. 1, the method for calibrating the target object includes the following steps:
step 101, when cleaning equipment executes cleaning operation, acquiring ordered multi-frame images including a target object, which are acquired by the cleaning equipment through a top-view image acquisition device.
In embodiments of the application, the top-view image collector may be a camera or a camera module arranged on the top of the cleaning device, so that as the cleaning device moves, the camera can collect images of the space above the device's position. The camera or camera module may also be arranged on the side of the cleaning device with a rotating function, so that during the device's movement it can collect images within a certain angular range above, below, and to either side of the device's position.
In the embodiment of the present application, the target object may be a furniture object equipped in a cleaning scene in which the cleaning device performs a cleaning operation, where the furniture object may be, for example, a sofa, a bed, a table, a television cabinet, a tea table, a bookcase, a dresser, or the like.
Here, the cleaning device is a device capable of performing cleaning work such as sweeping, dust collection, and floor wiping; for example, it may be a sweeping robot or a vacuum cleaner. The cleaning device is a smart home appliance that can automatically complete cleaning work in a cleaning scene once it enters the cleaning stage. Taking a sweeping robot as an example, it generally adopts a brushing-and-vacuum approach, first drawing debris on the floor into its garbage storage box, thereby completing the cleaning of the floor and similar surfaces.
In embodiments of the application, when the cleaning device performs a cleaning operation in a cleaning scene, its top-view image collector collects an initial image sequence in that scene. After acquiring the initial image sequence, the cleaning device uses an image recognition algorithm to identify whether the sequence includes a target object; if it determines that the sequence includes a target object, it screens out the ordered multi-frame images including the target object from the initial image sequence.
And 102, analyzing adjacent frames in the ordered multi-frame image to acquire the characteristic point motion information of the target characteristic point of the target object.
In embodiments of the application, the target feature points of the target object are those feature points, among the feature points of the target object in each frame of the ordered multi-frame images, that satisfy a feature point screening condition. A feature point of the target object may be the pixel with the largest local gray gradient in a frame, or a pixel whose local gradient magnitude and rate of change of gradient direction satisfy a change rate condition. It should be noted that the feature points of the target object are also called corner points of the target object.
In this embodiment, the feature point motion information may include a position offset of the target feature point, a feature point moving speed of the target feature point, and the feature point motion information may further include a moving direction of the target feature point.
In the embodiment of the application, when the cleaning equipment performs the cleaning operation, the cleaning equipment analyzes each frame of the ordered multi-frame image under the condition that the cleaning equipment acquires the ordered multi-frame image including the target object, which is acquired by the top-view image acquisition device, so as to determine the characteristic point of the target object in each frame. And the cleaning equipment screens out the characteristic points meeting the conditions from all the characteristic points as target characteristic points. Further, the cleaning equipment analyzes adjacent frames in the ordered multi-frame image to obtain the characteristic point motion information of the target characteristic point of the target object.
It should be noted that the feature points of the target object extracted by the cleaning device from each frame can be obtained using a feature detection algorithm. Feature detection, also called corner detection, is a method used in computer vision systems to obtain image features, and is widely applied in motion detection, image matching, video tracking, three-dimensional modeling, target recognition, and other fields. Corner detection algorithms include the Harris, Moravec, FAST, and Shi-Tomasi corner detection algorithms; the present application is not limited to a particular one.
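As an illustration of the corner detection mentioned above, the following is a minimal Harris-style sketch in plain Python, not the patent's implementation: the window size, the constant k, and the synthetic test image are all assumptions made here for demonstration.

```python
# Illustrative sketch (not the patent's implementation): a minimal
# Harris-style corner metric computed on a list-of-lists grayscale image.
# Window half-size and constant k are hypothetical choices.

def gradients(img):
    """Central-difference gradients of a 2-D grayscale image."""
    h, w = len(img), len(img[0])
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return gx, gy

def harris_response(img, y, x, half=1, k=0.04):
    """Harris response R = det(M) - k * trace(M)^2 for the window at (y, x),
    where M sums products of gradients over the window."""
    gx, gy = gradients(img)
    sxx = sxy = syy = 0.0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            ix, iy = gx[y + dy][x + dx], gy[y + dy][x + dx]
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# A bright square on a dark background: the response at the square's corner
# exceeds the response on one of its straight edges.
img = [[0.0] * 8 for _ in range(8)]
for y in range(3, 7):
    for x in range(3, 7):
        img[y][x] = 10.0
corner_score = harris_response(img, 3, 3)  # at the square's corner
edge_score = harris_response(img, 5, 3)    # on a vertical edge
```

In practice a library routine such as OpenCV's Shi-Tomasi-based `goodFeaturesToTrack` would replace this hand-rolled metric.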
And 103, determining identification information of the target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector.
In the embodiment of the application, the identification information is information with a specific identification preset by the target object.
In this embodiment, the collector motion information may include a collector position offset of the top view image collector, a collector moving speed of the top view image collector, and the collector motion information may further include a collector moving direction of the top view image collector.
In the embodiment of the application, the cleaning equipment analyzes adjacent frames of the ordered multi-frame images, and can also acquire collector motion information of the top-view image collector under the condition of acquiring the feature point motion information of the target feature point of the target object, so that the identification information with the feature identification, which is preset for the target object, is determined based on the feature point motion information and the collector motion information.
It should be noted that the step in which the cleaning device acquires the collector motion information of the top-view image collector may be performed before step 101 (acquiring the ordered multi-frame images including the target object collected through the top-view image collector), after step 101, after step 102, or simultaneously with step 101 or step 102.
And 104, calibrating the area where the target object is located on the cleaning track of the cleaning equipment based on the identification information.
In the embodiment of the application, the cleaning track is a cleaning route planned for the cleaning equipment in a cleaning environment where the cleaning equipment is located; in general, the cleaning apparatus performs cleaning along a planned cleaning track, and a plurality of cleaning tracks are planned for each cleaning environment, and the plurality of cleaning tracks constitute a cleaning map. The algorithm for determining the cleaning track includes, but is not limited to, a random coverage algorithm and a path planning algorithm, wherein the cleaning efficiency of the path planning algorithm is better than that of the random coverage algorithm. Here, in the process of determining the cleaning track based on the path planning algorithm, the cleaning track may be planned by using image displacement positioning, laser radar triangulation, or the like.
In the embodiment of the application, after the cleaning equipment determines the identification information of the target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector, the area where the target object is located is calibrated based on the identification information on the cleaning track of the cleaning equipment, so that the area where the target object is located on the cleaning track can be clearly and accurately positioned.
According to the method for calibrating a target object provided above, when the cleaning device performs a cleaning operation, it acquires ordered multi-frame images including the target object collected through the top-view image collector; analyzes adjacent frames in the ordered multi-frame images to obtain feature point motion information of the target feature points of the target object; determines identification information of the target object based on the feature point motion information and the acquired collector motion information of the top-view image collector; and calibrates the area where the target object is located on the cleaning track based on the identification information. In this way, the cleaning device can mark the area where the target object is located on its cleaning track, so that the area is clearly and accurately positioned by combining images of the target object acquired in the actual cleaning scene.
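The four steps of the first embodiment can be sketched as one loop. This is a heavily simplified toy, not the patent's implementation: every function body, the scalar "speed" stand-in for motion information, and the `"near_object"`/`"far_object"` labels are placeholders invented here to show the control flow only.

```python
# Toy sketch of steps 101-104; all names and logic below are placeholders.

def analyse_adjacent(prev, cur):
    # step 102 placeholder: real code would track target feature points
    return {"speed": cur - prev}

def identify_target(motions, collector_speed):
    # step 103 placeholder: compare feature-point motion with collector motion
    avg = sum(m["speed"] for m in motions) / len(motions)
    return "near_object" if avg > collector_speed else "far_object"

def calibrate_target(frames, collector_speed, clean_track):
    # step 101 supplies `frames`; analyse each pair of adjacent frames
    motions = [analyse_adjacent(p, c) for p, c in zip(frames, frames[1:])]
    ident = identify_target(motions, collector_speed)
    clean_track = dict(clean_track)
    clean_track["marked"] = ident  # step 104: mark the region on the track
    return clean_track

track = calibrate_target([0.0, 2.0, 4.0], collector_speed=1.0, clean_track={})
```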
An embodiment of the present application provides a method for calibrating a target object, which is applied to a cleaning device, and as shown in fig. 2, the method for calibrating the target object includes the following steps:
step 201, when the cleaning device performs a cleaning operation, acquiring an ordered multi-frame image including a target object, which is acquired by the cleaning device through a top view image collector.
Step 202, calling a window function to process each frame of image in the ordered multi-frame images, and screening out a plurality of target feature points in each frame of image.
In the embodiment of the application, the window function is used for determining the change information of a pixel point when a window corresponding to the window function moves in any direction by taking a certain pixel point as a center in an image.
In the embodiment of the present application, referring to fig. 3, step 202 calls a window function to process each frame of image in an ordered multiple frames of images, and screens out a plurality of target feature points in each frame of image, which can also be implemented by the following steps:
step 2021, calling a window function to process each frame of image, and obtaining a plurality of reference feature points in each frame of image.
In embodiments of the application, the plurality of reference feature points are feature points that satisfy the screening condition. Illustratively, if the cleaning device obtains feature points of other objects, such as a wall edge, in a frame, the wall-edge feature points are removed so as to obtain the reference feature points of the target object in that frame.
In embodiments of the application, the cleaning device calls the window function and determines, as the window corresponding to the window function moves in any direction over each frame of the ordered multi-frame images, which pixels show a gray-level change between before and after the movement that satisfies the gray-level change condition; the pixels satisfying that condition are determined to be the reference feature points, of which there are several.
Step 2022, determining a metric value of each reference feature point based on the window local gradient magnitude and the change rate of the gradient direction corresponding to the window function.
In embodiments of the application, the metric value measures, for each reference feature point, the change in the local gradient magnitude and gradient direction of the window corresponding to the window function before and after the window moves.
Step 2023, determining one reference feature point, of the N-th regions included in each frame of image, whose metric satisfies the metric screening condition as a target feature point in the N-th region.
Wherein N is a positive integer greater than or equal to 1 and less than or equal to N, and N is the total number of the contained areas of each frame of image.
In this embodiment, the metric value screening condition includes screening the maximum metric value in the nth region.
In embodiments of the application, once the cleaning device has determined the metric value of each reference feature point based on the window's local gradient magnitude and rate of change of gradient direction, it divides each frame into N regions of equal size, obtains the reference feature point with the maximum metric value in the n-th of those N regions, and determines that point to be the target feature point of the n-th region. As a result, the target feature points the cleaning device extracts from each frame are uniformly distributed.
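The per-region selection described above can be sketched as follows; the grid shape, point format, and function name are assumptions for illustration, since the text specifies only "N regions of the same size" and "maximum metric value per region".

```python
# Sketch (assumed details): split a frame into a rows x cols grid of equal
# cells and keep, per cell, the single reference feature point with the
# largest metric value, so the selected target feature points are spread
# uniformly over the frame.

def select_per_cell(points, frame_w, frame_h, cols, rows):
    """points: list of (x, y, metric). Returns at most one point per cell."""
    cell_w = frame_w / cols
    cell_h = frame_h / rows
    best = {}  # (cell_col, cell_row) -> (x, y, metric)
    for x, y, metric in points:
        cell = (min(int(x // cell_w), cols - 1),
                min(int(y // cell_h), rows - 1))
        if cell not in best or metric > best[cell][2]:
            best[cell] = (x, y, metric)
    return list(best.values())

pts = [(10, 10, 0.5), (12, 14, 0.9), (100, 20, 0.3), (110, 90, 0.7)]
targets = select_per_cell(pts, frame_w=160, frame_h=120, cols=2, rows=2)
# two candidates fall in the top-left cell; only the higher-metric one is kept
```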
And step 203, acquiring a first position of a kth target feature point in an ith frame image in the ordered multi-frame image.
Wherein i is a positive integer which is greater than or equal to 1 and less than the total number of frames of the multi-frame images, and k is a positive integer which is greater than or equal to 1 and less than the total number of the target feature points in the ith frame image.
And 204, if the kth target feature point exists in the (i + 1) th frame of image in the ordered multi-frame image, acquiring a second position where the kth target feature point is located in the (i + 1) th frame of image.
In embodiments of the application, the cleaning device acquires the first position of the k-th target feature point in the i-th frame of the ordered multi-frame images. It then tracks that point using an optical flow tracking algorithm: when it determines that the k-th target feature point exists in the (i+1)-th frame, it acquires the second position of the point in that frame; when it determines that the point does not exist in the (i+1)-th frame, it stops tracking it. In this way, while tracking target feature points, the cleaning device keeps only the successfully tracked points and eliminates those whose tracking failed.
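The keep-only-successfully-tracked bookkeeping can be sketched as below. A real tracker (e.g. pyramidal Lucas-Kanade optical flow, such as OpenCV's `calcOpticalFlowPyrLK`) returns per-point status flags; the toy `track()` here merely stands in for it, and its "lost near the border" rule is invented for the example.

```python
# Illustrative sketch of the bookkeeping only: after the optical flow
# tracker returns per-point status flags, keep only successfully tracked
# feature points and drop the rest.

def track(prev_points):
    """Stand-in tracker: shifts each point and flags the ones it 'lost'."""
    results = []
    for (x, y) in prev_points:
        lost = x > 100  # pretend points near the frame border fail tracking
        results.append(((x + 2.0, y + 1.0), 0 if lost else 1))
    return results

def keep_tracked(prev_points):
    """Pair each first position with its second position; discard failures."""
    kept = []
    for old, (new, status) in zip(prev_points, track(prev_points)):
        if status == 1:
            kept.append((old, new))
    return kept

pairs = keep_tracked([(10.0, 20.0), (50.0, 60.0), (120.0, 30.0)])
# the third point fails tracking and is eliminated; two pairs remain
```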
And step 205, determining the characteristic point motion information of the kth target characteristic point based on the first position and the second position.
In the embodiment of the application, after the cleaning equipment acquires a first position where a kth target feature point in an ith frame image in an ordered multi-frame image is located and a second position where the kth target feature point in an (i + 1) th frame image is located, feature point motion information of the kth target feature point is determined based on the first position and the second position; the feature point motion information comprises the position offset of the kth target feature point, the feature point moving speed of the kth target feature point and the moving direction of the kth target feature point.
In this embodiment of the application, referring to fig. 4, the step 205 of determining the feature point motion information of the kth target feature point based on the first position and the second position may be implemented by the following steps:
and step 2051, acquiring the t & ltth & gt time stamp of the ith frame of image and the t & lt +1 & gt time stamp of the (i & lt +1 & gt) th frame of image.
In the embodiment of the present application, the t-th timestamp refers to the acquisition time t of the ith frame image, and the t + 1-th timestamp refers to the acquisition time t +1 of the (i + 1) -th frame image.
Step 2052, determine the position offset of the kth target feature point from the first position to the second position.
In the embodiment of the application, the cleaning equipment tracks the kth target feature point based on an optical flow tracking algorithm so as to obtain the position offset of the kth target feature point based on the first position of the kth target feature point in the ith frame image and the second position of the kth target feature point in the (i + 1) th frame image.
And step 2053, determining the characteristic point moving speed of the kth target characteristic point based on the time interval between the t +1 th time stamp and the t-th time stamp and the position offset.
The characteristic point motion information comprises characteristic point moving speeds of all target characteristic points in the ith frame of image; the feature point motion information further includes the feature point movement directions of all the target feature points in the ith frame image.
In embodiments of the application, the cleaning device calculates the time interval T between the (t+1)-th timestamp and the t-th timestamp, determines the feature point moving speed and moving direction of the k-th target feature point based on T and the position offset, and thereby obtains the moving speeds and moving directions of all target feature points contained in the i-th frame image.
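Steps 2051-2053 amount to a small calculation, sketched below; representing the direction as an angle in radians is a choice made here, not something the text prescribes.

```python
# Minimal sketch of steps 2051-2053: derive the position offset, moving
# speed, and moving direction of one target feature point from its
# positions in frames i and i+1 and the two timestamps.
import math

def feature_motion(first_pos, second_pos, t_stamp, t1_stamp):
    """Return (offset, speed, direction) between two timestamped positions.
    offset is in pixels, speed in pixels/second, direction in radians."""
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    offset = math.hypot(dx, dy)         # position offset
    interval = t1_stamp - t_stamp       # time interval T
    speed = offset / interval           # feature point moving speed
    direction = math.atan2(dy, dx)      # feature point moving direction
    return offset, speed, direction

offset, speed, direction = feature_motion((10.0, 10.0), (13.0, 14.0), 0.0, 0.1)
# a 3-4-5 displacement over 0.1 s: offset 5.0 px, speed 50.0 px/s
```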
Here, as shown in fig. 5, when the cleaning device moves straight, target feature points at different depths of field share the same feature point moving direction but have different feature point moving speeds. Illustratively, when the target object is a ceiling, the images collected by the top-view image collector during the cleaning operation are ordered multi-frame images of the ceiling; because the collected scene is far away, the feature point moving speeds that the optical flow tracking algorithm determines for the ceiling's target feature points in adjacent frames are low, while the moving directions of all target feature points are consistent. As another example, when the target object is furniture such as a sofa and the cleaning device moves under it during the cleaning operation, the collected images are ordered multi-frame images of the sofa's underside; because the underside is close, the depths of field of the target feature points in adjacent frames are small, the feature point moving speeds determined by the optical flow tracking algorithm are high, and the moving directions of all target feature points are again consistent.
It should be noted that, as shown in fig. 6, when the cleaning device rotates, the feature point moving directions of the target feature points are not consistent with one another; therefore, the cleaning device does not process the images collected by the top-view image collector while it is rotating.
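One possible way to apply this criterion — the patent states only that inconsistent moving directions indicate rotation, so the angular-spread test and tolerance below are assumptions — is to check whether all feature point moving directions agree within a tolerance before processing a frame:

```python
import math

def directions_consistent(directions_rad, max_spread_rad=0.3):
    """True when all feature point moving directions agree within a
    tolerance; False (frame skipped) when they diverge, as during rotation."""
    # Average on the unit circle to avoid wrap-around at +/- pi.
    cx = sum(math.cos(a) for a in directions_rad) / len(directions_rad)
    cy = sum(math.sin(a) for a in directions_rad) / len(directions_rad)
    mean = math.atan2(cy, cx)
    spread = max(abs(math.atan2(math.sin(a - mean), math.cos(a - mean)))
                 for a in directions_rad)
    return spread <= max_spread_rad
```

During straight-line motion the directions cluster tightly and the frame is processed; during rotation the points on opposite sides of the image move in opposite directions and the frame is discarded.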
And step 206, determining the identification information of the target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector.
In this embodiment of the application, referring to fig. 7, in step 206, based on the feature point motion information and the acquired collector motion information of the top view image collector, the identification information of the target object is determined, which may be implemented by the following steps:
step 2061, screening out the feature point moving speed of the part of the target feature points of which the feature point moving speed is smaller than the speed threshold value from the feature point moving speeds of all the target feature points corresponding to the ith frame image.
Step 2062, calculating the average value of the moving speeds of the feature points of the partial target feature points to obtain the average moving speed.
In the embodiment of the present application, referring to fig. 8, the cleaning device obtains the feature point moving speeds of all target feature points corresponding to the i-th frame image and screens out, from these, the moving speeds of the partial target feature points whose speed is smaller than a speed threshold. The cleaning device then averages the moving speeds of these partial target feature points to obtain an average moving speed. In this way, the cleaning device discards the largest of the feature point moving speeds and computes the average from the remaining ones, so the resulting average moving speed is more accurate, and on the basis of this accurate average the area where the target object is located can be positioned on the cleaning track more clearly and accurately.
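Steps 2061 and 2062 can be sketched as below; this is an illustrative sketch (function name and threshold handling are assumptions, and the behaviour when every speed exceeds the threshold is not specified by the patent):

```python
def average_moving_speed(speeds, speed_threshold):
    """Keep only feature point speeds below the threshold, then average
    them, so that outlier (fastest) points do not distort the estimate."""
    kept = [s for s in speeds if s < speed_threshold]
    if not kept:  # every point moved too fast; no reliable estimate
        return None
    return sum(kept) / len(kept)
```

For example, with speeds `[10, 12, 11, 90]` and a threshold of 50, the outlier 90 is discarded and the average of the remaining three is returned.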
Step 2063, based on the average moving speed and the collector moving speed, determining the identification information of the target object.
Wherein, the collector motion information comprises the collector moving speed.
In this embodiment of the application, step 2063 is to determine the identification information of the target object based on the average moving speed and the moving speed of the collector, and may be specifically implemented by the following steps:
Step 1, calculating the ratio of the average moving speed to the collector moving speed.
Step 2, determining the identification information matched with the ratio as the identification information of the target object.
In the embodiment of the application, different ratios match the identification information of different target objects. It should be noted that the images collected by the top-view image collector contain target feature points with different depths of field, so the ratio reflects how close the collector is to the target object.
In the embodiment of the application, the cleaning device obtains the ratio of the average moving speed to the collector moving speed and, by querying a preset matching relation table for the identification information matched with that ratio, determines the matched identification information as the identification information of the target object. In this way, based on the ratio of the feature point moving speed to its collector moving speed, the cleaning device judges whether it is positioned at the bottom of the target object and determines the identification information of the target object, so that different target objects can be distinguished according to their identification information.
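The ratio lookup in the preset matching relation table might be sketched as follows. The patent does not disclose the table's contents, so the ratio boundaries and labels below are invented for illustration only:

```python
def identify_target(avg_speed, collector_speed, table):
    """Map the speed ratio to identification information via a table of
    ((low, high), label) entries; returns None if no entry matches."""
    ratio = avg_speed / collector_speed
    for (low, high), label in table:
        if low <= ratio < high:
            return label
    return None

# Hypothetical matching relation table: far scenes (ceiling) produce slow
# apparent flow relative to the device; near scenes (furniture bottom) fast.
MATCH_TABLE = [
    ((0.0, 0.5), "ceiling"),
    ((0.5, float("inf")), "furniture bottom"),
]
```

With these invented boundaries, a device moving at 30 units whose feature points average 6 units would be identified as under the ceiling, while one whose feature points average 60 units would be identified as under furniture.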
And step 207, calibrating the area where the target object is located on the cleaning track of the cleaning equipment based on the identification information.
In an implementation scenario, referring to fig. 9, in the case that the cleaning device determines the identification information matched with the ratio, and the identification information is the identification information of the target object, on the cleaning track of the cleaning device, the area where the target object is located is calibrated based on the identification information, so that the area where the target object is located on the cleaning track can be clearly and accurately located.
Firstly, the cleaning device acquires the ordered multi-frame image including the target object that it collects through the top-view image collector; secondly, the cleaning device calls a window function to process each frame of image in the ordered multi-frame image and screens out a plurality of target feature points in each frame of image; thirdly, the cleaning device determines the feature point motion information of the k-th target feature point based on the first position of the k-th target feature point in the i-th frame image and its second position in the (i+1)-th frame image of the ordered multi-frame image; then, the cleaning device determines the identification information of the target object based on the feature point motion information and the acquired collector motion information of the top-view image collector, and calibrates the area where the target object is located on the cleaning track based on the identification information. In this way, the target feature points extracted from each frame of image collected in the actual cleaning scene are uniformly distributed; meanwhile, during the tracking of the target feature points, the cleaning device keeps only the target feature points that are tracked successfully and eliminates the feature points whose tracking fails, so that the area where the target object is located on the cleaning track is positioned clearly and accurately.
An embodiment of the present application provides a method for calibrating a target object, which is applied to a cleaning device, and as shown in fig. 10, the method for calibrating the target object includes the following steps:
Step 301, when the cleaning device performs a cleaning operation, acquiring an ordered multi-frame image including a target object, which is collected by the cleaning device through a top-view image collector.
Step 302, calling a window function to process each frame of image in the ordered multi-frame images, and screening out a plurality of target feature points in each frame of image.
And 303, acquiring a first position of a kth target feature point in an ith frame image in the ordered multi-frame image.
Wherein i is a positive integer which is greater than or equal to 1 and less than the total number of frames of the multi-frame images, and k is a positive integer which is greater than or equal to 1 and less than the total number of the target feature points in the ith frame image.
And 304, if the kth target feature point exists in the (i + 1) th frame of image in the ordered multi-frame image, acquiring a second position where the kth target feature point is located in the (i + 1) th frame of image.
And 305, acquiring a third position of the g-th target feature point in the p-th frame image in the ordered multi-frame image.
Wherein p is a positive integer which is greater than or equal to 1 and less than the total number of frames of the multi-frame images, p is different from i, and g is a positive integer which is greater than or equal to 1 and less than the total number of the target feature points in the p-th frame image.
And step 306, if the g-th target feature point exists in the p + 1-th frame image in the multiple ordered frames of images, acquiring a fourth position where the g-th target feature point exists in the p + 1-th frame image.
And 307, determining the characteristic point motion information of the kth target characteristic point based on the first position, the second position, the third position and the fourth position.
In this embodiment of the application, the step 307 is to determine the feature point motion information of the kth target feature point based on the first position, the second position, the third position, and the fourth position, and may be implemented by the following steps:
Step A1, acquiring the t-th timestamp of the i-th frame image and the (t+1)-th timestamp of the (i+1)-th frame image.
Step A2, determining a first position offset of the k-th target feature point from the first position to the second position.
Step A3, acquiring the u-th timestamp of the p-th frame image and the (u+1)-th timestamp of the (p+1)-th frame image.
Step A4, determining a second position offset of the g-th target feature point from the third position to the fourth position.
Step A5, determining the feature point moving speed of the k-th target feature point based on the first time interval between the (t+1)-th timestamp and the t-th timestamp, the first position offset, the second time interval between the (u+1)-th timestamp and the u-th timestamp, and the second position offset.
The characteristic point motion information comprises the characteristic point moving speed of all target characteristic points in the ith frame image.
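One possible reading of Step A5 — the patent does not give the combining formula, so averaging the two per-pair speed measurements is an assumption made here purely for illustration — is:

```python
def calibrated_speed(first_offset, first_interval,
                     second_offset, second_interval):
    """Combine the speed measured between frames i and i+1 with the speed
    measured between frames p and p+1, so the second measurement calibrates
    the first (simple mean; the actual combination rule is unspecified)."""
    v_i = first_offset / first_interval    # speed from frames i, i+1
    v_p = second_offset / second_interval  # speed from frames p, p+1
    return (v_i + v_p) / 2.0
```

Whatever the exact combination rule, using a second frame pair gives the i-th frame's speed estimate some robustness against a single noisy optical flow measurement, which is the benefit the summary paragraph below describes.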
And 308, determining identification information of the target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector.
And 309, calibrating the area where the target object is located on the cleaning track of the cleaning equipment based on the identification information.
As can be seen from the above, the cleaning device determines the feature point moving speed of the k-th target feature point based on the first position offset of the k-th target feature point of the i-th frame image from the first position to the second position, the second position offset of the g-th target feature point of the p-th frame image from the third position to the fourth position, the first time interval between the (t+1)-th timestamp and the t-th timestamp, and the second time interval between the (u+1)-th timestamp and the u-th timestamp, and further acquires the feature point moving speeds of all target feature points in the i-th frame image. In this way, the motion information of the g-th target feature point of the p-th frame image is used to calibrate the motion information of all target feature points in the i-th frame image, so that this motion information is more accurate and the area where the target object is located on the cleaning track is positioned clearly and accurately.
It should be noted that, for the descriptions of the same steps and the same contents in this embodiment as those in other embodiments, reference may be made to the descriptions in other embodiments, which are not described herein again.
Embodiments of the present application provide an apparatus for calibrating a target object, which may be used to implement a method for calibrating a target object provided in embodiments corresponding to fig. 1 to 4, 7 and 10, and as shown in fig. 11, the apparatus 11 for calibrating a target object includes:
the acquisition module 1101 is used for acquiring ordered multi-frame images including a target object, which are acquired by a top-view image acquisition device, of the cleaning equipment when the cleaning equipment executes cleaning operation;
the processing module 1102 is configured to analyze adjacent frames in the ordered multi-frame image and obtain feature point motion information of a target feature point of the target object;
the processing module 1102 is further configured to determine identification information of the target object based on the feature point motion information and the acquired collector motion information of the top view image collector;
the processing module 1102 is further configured to calibrate an area where the target object is located on the cleaning track of the cleaning device based on the identification information.
In other embodiments of the present application, the processing module 1102 is further configured to call a window function to process each frame of image in the ordered multiple frames of images, and screen out a plurality of target feature points in each frame of image; the obtaining module 1101 is further configured to obtain a first position where a kth target feature point in an ith frame image in the ordered multiple frame images is located, where i is a positive integer which is greater than or equal to 1 and smaller than a total number of frames of the multiple frame images, and k is a positive integer which is greater than or equal to 1 and smaller than a total number of the target feature points in the ith frame image; if a kth target feature point exists in an i +1 th frame image in the ordered multi-frame image, acquiring a second position where the kth target feature point exists in the i +1 th frame image; the processing module 1102 is further configured to determine feature point motion information of the kth target feature point based on the first position and the second position.
In other embodiments of the present application, the processing module 1102 is further configured to call a window function to process each frame of image, so as to obtain a plurality of reference feature points in each frame of image; determining the measurement value of each reference characteristic point based on the window local gradient amplitude value corresponding to the window function and the change rate of the gradient direction; and determining a reference feature point of which the metric value meets the metric value screening condition in the nth region of the N regions contained in each frame of image as a target feature point in the nth region, wherein N is a positive integer which is greater than or equal to 1 and less than or equal to N, and N is the total number of the regions contained in each frame of image.
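The per-region screening described in this module might be sketched as follows. This is an assumption-laden illustration: the metric computation from the window's local gradient magnitude and gradient-direction change rate is not reproduced here (pre-computed metric values are taken as input), the screening condition is simplified to "largest metric in the region", and a regular grid of regions is assumed:

```python
def screen_target_points(points, n_regions_x, n_regions_y, width, height):
    """points: list of (x, y, metric) reference feature points.
    Splits the frame into n_regions_x * n_regions_y regions and keeps, per
    region, the reference point with the largest metric value as that
    region's target feature point, yielding a uniform spatial spread."""
    best = {}
    for x, y, metric in points:
        rx = min(int(x * n_regions_x / width), n_regions_x - 1)
        ry = min(int(y * n_regions_y / height), n_regions_y - 1)
        key = (rx, ry)
        if key not in best or metric > best[key][2]:
            best[key] = (x, y, metric)
    return sorted(best.values())
```

Selecting at most one point per region is what gives the "uniformly distributed" target feature points that the summary paragraphs credit for accurate positioning.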
In other embodiments of the present application, the obtaining module 1101 is further configured to obtain a t-th time stamp of the ith frame image and a t + 1-th time stamp of the (i + 1) -th frame image; the processing module 1102 is further configured to determine a position offset of the kth target feature point from the first position to the second position; and determining the characteristic point moving speed of the kth target characteristic point based on the time interval between the t +1 th time stamp and the t-th time stamp and the position offset, wherein the characteristic point motion information comprises the characteristic point moving speeds of all target characteristic points in the ith frame image.
In other embodiments of the present application, the processing module 1102 is further configured to screen out, from the feature point moving speeds of all target feature points corresponding to the i-th frame image, the feature point moving speeds of the partial target feature points whose feature point moving speed is smaller than a speed threshold; calculate the mean value of the feature point moving speeds of the partial target feature points to obtain an average moving speed; and determine the identification information of the target object based on the average moving speed and the collector moving speed, wherein the collector motion information comprises the collector moving speed.
In other embodiments of the present application, the processing module 1102 is further configured to calculate a ratio between the average moving speed and the moving speed of the collector; and determining the identification information matched with the ratio, which is the identification information of the target object.
In other embodiments of the present application, the obtaining module 1101 is further configured to obtain a third position where a g-th target feature point in a p-th frame image in the ordered multiple frame images is located, where p is a positive integer which is greater than or equal to 1 and smaller than a total number of frames of the multiple frame images, p is different from i, and g is a positive integer which is greater than or equal to 1 and smaller than a total number of target feature points in the p-th frame image; if the g-th target feature point exists in the p + 1-th frame image in the ordered multi-frame image, acquiring a fourth position where the g-th target feature point exists in the p + 1-th frame image; the processing module 1102 is further configured to determine feature point motion information of a kth target feature point based on the first position, the second position, the third position, and the fourth position.
Embodiments of the present application provide a cleaning device, which may be used to implement the method for calibrating a target object provided by the embodiments corresponding to fig. 1 to 4, 7 and 10. As shown in fig. 12, the cleaning device 12 (which corresponds to the apparatus 11 for calibrating a target object in fig. 11) includes: a memory 1201 and a processor 1202. The processor 1202 is configured to execute the program for calibrating the target object stored in the memory 1201, and the cleaning device 12 implements the following steps through the processor 1202:
when the cleaning equipment executes cleaning operation, acquiring an ordered multi-frame image which is acquired by the cleaning equipment through a top view image collector and comprises a target object;
analyzing adjacent frames in the ordered multi-frame image to obtain the characteristic point motion information of the target characteristic point of the target object;
determining identification information of a target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector;
and calibrating the area where the target object is located on the cleaning track of the cleaning equipment based on the identification information.
In other embodiments of the present application, the processor 1202 is configured to execute a program for calibrating a target object stored in the memory 1201, so as to implement the following steps:
calling a window function to process each frame of image in the ordered multi-frame images, and screening out a plurality of target feature points in each frame of image; acquiring a first position where a kth target feature point in an ith frame image in an ordered multi-frame image is located, wherein i is a positive integer which is greater than or equal to 1 and smaller than the total frame number of the multi-frame image, and k is a positive integer which is greater than or equal to 1 and smaller than the total number of the target feature points in the ith frame image; if a kth target feature point exists in an i +1 th frame image in the ordered multi-frame image, acquiring a second position where the kth target feature point exists in the i +1 th frame image; and determining the characteristic point motion information of the k-th target characteristic point based on the first position and the second position.
In other embodiments of the present application, the processor 1202 is configured to execute a program for calibrating a target object stored in the memory 1201, so as to implement the following steps:
calling a window function to process each frame of image to obtain a plurality of reference feature points in each frame of image; determining the measurement value of each reference characteristic point based on the window local gradient amplitude value corresponding to the window function and the change rate of the gradient direction; and determining a reference feature point of which the metric value meets the metric value screening condition in the nth region of the N regions contained in each frame of image as a target feature point in the nth region, wherein N is a positive integer which is greater than or equal to 1 and less than or equal to N, and N is the total number of the regions contained in each frame of image.
In other embodiments of the present application, the processor 1202 is configured to execute a program for calibrating a target object stored in the memory 1201, so as to implement the following steps:
acquiring the t time stamp of the ith frame of image and the t +1 time stamp of the (i + 1) th frame of image; determining the position offset of the kth target characteristic point from the first position to the second position; and determining the characteristic point moving speed of the kth target characteristic point based on the time interval between the t +1 th time stamp and the t-th time stamp and the position offset, wherein the characteristic point motion information comprises the characteristic point moving speeds of all target characteristic points in the ith frame image.
In other embodiments of the present application, the processor 1202 is configured to execute a program for calibrating a target object stored in the memory 1201, so as to implement the following steps:
screening out the characteristic point moving speeds of partial target characteristic points of which the characteristic point moving speeds are smaller than a speed threshold value from the characteristic point moving speeds of all target characteristic points corresponding to the ith frame of image; calculating the mean value of the characteristic point moving speeds of part of target characteristic points to obtain an average moving speed; and determining identification information of the target object based on the average moving speed and the collector moving speed, wherein the collector motion information comprises the collector moving speed.
In other embodiments of the present application, the processor 1202 is configured to execute a program for calibrating a target object stored in the memory 1201, so as to implement the following steps:
calculating the ratio of the average moving speed to the moving speed of the collector; and determining the identification information matched with the ratio, which is the identification information of the target object.
In other embodiments of the present application, the processor 1202 is configured to execute a program for calibrating a target object stored in the memory 1201, so as to implement the following steps:
acquiring a third position where a g-th target feature point in a p-th frame image in the ordered multi-frame images is located, wherein p is a positive integer which is greater than or equal to 1 and smaller than the total frame number of the multi-frame images, p is different from i, and g is a positive integer which is greater than or equal to 1 and smaller than the total number of the target feature points in the p-th frame image; if the g-th target feature point exists in the p + 1-th frame image in the ordered multi-frame image, acquiring a fourth position where the g-th target feature point exists in the p + 1-th frame image; and determining the characteristic point motion information of the kth target characteristic point based on the first position, the second position, the third position and the fourth position.
Embodiments of the application provide a computer readable storage medium storing one or more programs executable by one or more processors to perform the steps of:
when the cleaning equipment executes cleaning operation, acquiring an ordered multi-frame image which is acquired by the cleaning equipment through a top view image collector and comprises a target object;
analyzing adjacent frames in the ordered multi-frame image to obtain the characteristic point motion information of the target characteristic point of the target object;
determining identification information of a target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector;
and calibrating the area where the target object is located on the cleaning track of the cleaning equipment based on the identification information.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to perform the steps of:
calling a window function to process each frame of image in the ordered multi-frame images, and screening out a plurality of target feature points in each frame of image; acquiring a first position where a kth target feature point in an ith frame image in an ordered multi-frame image is located, wherein i is a positive integer which is greater than or equal to 1 and smaller than the total frame number of the multi-frame image, and k is a positive integer which is greater than or equal to 1 and smaller than the total number of the target feature points in the ith frame image; if a kth target feature point exists in an i +1 th frame image in the ordered multi-frame image, acquiring a second position where the kth target feature point exists in the i +1 th frame image; and determining the characteristic point motion information of the k-th target characteristic point based on the first position and the second position.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to perform the steps of:
calling a window function to process each frame of image to obtain a plurality of reference feature points in each frame of image; determining the measurement value of each reference characteristic point based on the window local gradient amplitude value corresponding to the window function and the change rate of the gradient direction; and determining a reference feature point of which the metric value meets the metric value screening condition in the nth region of the N regions contained in each frame of image as a target feature point in the nth region, wherein N is a positive integer which is greater than or equal to 1 and less than or equal to N, and N is the total number of the regions contained in each frame of image.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to perform the steps of:
acquiring the t time stamp of the ith frame of image and the t +1 time stamp of the (i + 1) th frame of image; determining the position offset of the kth target characteristic point from the first position to the second position; and determining the characteristic point moving speed of the kth target characteristic point based on the time interval between the t +1 th time stamp and the t-th time stamp and the position offset, wherein the characteristic point motion information comprises the characteristic point moving speeds of all target characteristic points in the ith frame image.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to perform the steps of:
screening out the characteristic point moving speeds of partial target characteristic points of which the characteristic point moving speeds are smaller than a speed threshold value from the characteristic point moving speeds of all target characteristic points corresponding to the ith frame of image; calculating the mean value of the characteristic point moving speeds of part of target characteristic points to obtain an average moving speed; and determining identification information of the target object based on the average moving speed and the collector moving speed, wherein the collector motion information comprises the collector moving speed.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to perform the steps of:
calculating the ratio of the average moving speed to the moving speed of the collector; and determining the identification information matched with the ratio, which is the identification information of the target object.
In other embodiments of the present application, the one or more programs are executable by the one or more processors to perform the steps of:
acquiring a third position where a g-th target feature point in a p-th frame image in the ordered multi-frame images is located, wherein p is a positive integer which is greater than or equal to 1 and smaller than the total frame number of the multi-frame images, p is different from i, and g is a positive integer which is greater than or equal to 1 and smaller than the total number of the target feature points in the p-th frame image; if the g-th target feature point exists in the p + 1-th frame image in the ordered multi-frame image, acquiring a fourth position where the g-th target feature point exists in the p + 1-th frame image; and determining the characteristic point motion information of the kth target characteristic point based on the first position, the second position, the third position and the fourth position.
Here, it should be noted that: the above description of the storage medium and device embodiments is similar to the description of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and apparatus of the present application, reference is made to the description of the embodiments of the method of the present application for understanding.
The computer storage medium/memory may be a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); it may also be one of various terminals that include one or any combination of the above-mentioned memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment of the present application" or "a previous embodiment" or "some embodiments" or "some implementations" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "an embodiment of the present application" or "the preceding embodiments" or "some implementations" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, each unit may stand alone as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware under the control of program instructions. The program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; the storage medium includes various media that can store program code, such as a removable memory device, a Read-Only Memory (ROM), a magnetic disk, or an optical disc.
Alternatively, if the integrated units described above are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. On this understanding, the technical solutions of the embodiments of the present application, or the portions thereof contributing over the related art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods of the embodiments of the present application; the storage medium includes a removable storage device, a ROM, a magnetic disk, an optical disc, or other media that can store program code.
It should be noted that the drawings in the embodiments of the present application only illustrate schematic positions of the devices on the terminal device and do not represent their actual positions; the actual positions of the devices or areas may change or shift according to actual conditions (for example, the structure of the terminal device), and the scale of different parts of the terminal device in the drawings does not represent their actual scale.
The above description covers only specific embodiments of the present application, but the scope of protection of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall fall within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of calibrating a target object, the method comprising:
when cleaning equipment performs a cleaning operation, acquiring an ordered multi-frame image including a target object, captured by the cleaning equipment through a top-view image collector;
analyzing adjacent frames in the ordered multi-frame image to acquire feature point motion information of a target feature point of the target object;
determining identification information of the target object based on the characteristic point motion information and the acquired collector motion information of the top-view image collector;
and calibrating the area where the target object is located on the cleaning track of the cleaning equipment based on the identification information.
2. The method according to claim 1, wherein the analyzing adjacent frames in the ordered multi-frame image to obtain feature point motion information of a target feature point of the target object comprises:
calling a window function to process each frame of image in the ordered multi-frame images, and screening out a plurality of target feature points in each frame of image;
acquiring a first position where a kth target feature point in an ith frame image in the ordered multi-frame image is located, wherein i is a positive integer which is greater than or equal to 1 and smaller than the total frame number of the multi-frame image, and k is a positive integer which is greater than or equal to 1 and smaller than the total number of the target feature points in the ith frame image;
if the kth target feature point exists in the (i + 1) th frame image in the ordered multi-frame image, acquiring a second position where the kth target feature point is located in the (i + 1) th frame image;
determining the feature point motion information of the k-th target feature point based on the first position and the second position.
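The adjacent-frame tracking of claim 2 can be sketched as a minimal block-matching search, a simple stand-in for the pyramidal optical-flow trackers commonly used for this step. The function name, patch and search sizes, and the representation of frames as grayscale 2-D arrays are illustrative assumptions, not taken from the claims.

```python
import numpy as np

def track_feature(frame_i, frame_i1, pt, patch=3, search=5):
    """Locate in frame i+1 the feature point found at `pt` in frame i.

    Compares the patch around `pt` (sum of squared differences) against
    patches at nearby candidate positions in the next frame and returns
    the best-matching position, or None when the patch falls outside the
    image. A production tracker would also threshold the residual to
    decide whether the point still "exists" in frame i+1.
    """
    y, x = pt
    h, w = frame_i.shape
    if y - patch < 0 or x - patch < 0 or y + patch >= h or x + patch >= w:
        return None
    template = frame_i[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_pos = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if ny - patch < 0 or nx - patch < 0 or ny + patch >= h or nx + patch >= w:
                continue
            cand = frame_i1[ny - patch:ny + patch + 1, nx - patch:nx + patch + 1]
            ssd = float(((template - cand) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (ny, nx)
    return best_pos
```

The first position of the kth feature point is `pt`; the returned tuple plays the role of the second position in claim 2.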
3. The method of claim 2, wherein the invoking the window function processes each frame of the ordered multiple frames of images to filter out a plurality of target feature points in the each frame of images, comprising:
calling the window function to process each frame of image to obtain a plurality of reference feature points in each frame of image;
determining a metric value of each reference characteristic point based on a window local gradient amplitude value and a change rate of a gradient direction corresponding to the window function;
and determining, for an nth region of N regions included in each frame of image, that the reference feature point whose metric value meets a metric-value screening condition is the target feature point of the nth region, wherein n is a positive integer greater than or equal to 1 and less than or equal to N, and N is the total number of regions included in each frame of image.
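A metric built from the window-local gradient magnitude and the rate of change of gradient direction, as in claim 3, is closely related to the classic structure-tensor (Shi-Tomasi) corner score, so a sketch under that assumption is given below. The grid and window sizes, and the "highest score per region" screening condition, are illustrative choices.

```python
import numpy as np

def cornerness(patch):
    """Shi-Tomasi-style corner score for one window.

    Builds the structure tensor sum([[Ix^2, IxIy], [IxIy, Iy^2]]) from
    the window's local gradients and returns its smaller eigenvalue:
    large only where the gradient is strong in two directions.
    """
    iy, ix = np.gradient(patch.astype(float))  # row- and column-direction gradients
    a, b, c = (ix * ix).sum(), (ix * iy).sum(), (iy * iy).sum()
    tr, det = a + c, a * c - b * b            # eigenvalues of [[a, b], [b, c]]
    disc = max(tr * tr / 4 - det, 0.0)
    return tr / 2 - disc ** 0.5               # smaller eigenvalue

def best_per_region(image, grid=2, win=7):
    """Keep one winning window position per region (claim 3's screening)."""
    h, w = image.shape
    winners = []
    for r in range(grid):
        for c in range(grid):
            region = image[r * h // grid:(r + 1) * h // grid,
                           c * w // grid:(c + 1) * w // grid]
            best, best_rc = -1.0, None
            for y in range(0, region.shape[0] - win, win):
                for x in range(0, region.shape[1] - win, win):
                    s = cornerness(region[y:y + win, x:x + win])
                    if s > best:
                        best, best_rc = s, (y, x)
            winners.append(best_rc)
    return winners
```

A flat window or a straight edge scores near zero, while a corner scores high, which is why this family of metrics is a standard choice for selecting trackable feature points.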
4. The method according to claim 2, wherein the determining the feature point motion information of the k-th target feature point based on the first position and the second position comprises:
acquiring a t-th timestamp of the ith frame image and a (t+1)-th timestamp of the (i+1)-th frame image;
determining a position offset of the kth target feature point from the first position to the second position;
determining a feature point moving speed of the kth target feature point based on the position offset and a time interval between the (t+1)-th timestamp and the t-th timestamp, wherein the feature point motion information comprises the feature point moving speeds of all target feature points in the ith frame image.
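The speed computation of claim 4 reduces to dividing the position offset by the timestamp interval. A minimal sketch, assuming (row, col) pixel coordinates and timestamps in seconds (neither unit is fixed by the claim):

```python
def feature_point_speed(pos_i, pos_i1, t_i, t_i1):
    """Per-feature-point speed in pixels/second between consecutive frames.

    pos_i / pos_i1: the first and second positions of the kth target
    feature point; t_i / t_i1: the frames' timestamps.
    """
    dy = pos_i1[0] - pos_i[0]
    dx = pos_i1[1] - pos_i[1]
    dt = t_i1 - t_i
    if dt <= 0:
        raise ValueError("timestamps must be strictly increasing")
    return (dx * dx + dy * dy) ** 0.5 / dt  # Euclidean offset / time interval
```

For example, an offset of 5 pixels over a 0.5 s interval gives a speed of 10 pixels/second.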
5. The method according to claim 4, wherein the determining the identification information of the target object based on the feature point motion information and the acquired collector motion information of the top-view image collector comprises:
screening out, from the feature point moving speeds of all the target feature points corresponding to the ith frame image, the feature point moving speeds of those target feature points whose feature point moving speeds are smaller than a speed threshold;
calculating the mean value of the feature point moving speeds of the screened-out target feature points to obtain an average moving speed;
and determining identification information of the target object based on the average moving speed and the collector moving speed, wherein the collector motion information comprises the collector moving speed.
6. The method of claim 5, wherein the determining the identification information of the target object based on the average moving speed and the collector moving speed comprises:
calculating the ratio of the average moving speed to the moving speed of the collector;
and determining the identification information that matches the ratio as the identification information of the target object.
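Claims 5 and 6 together can be sketched as: keep only the feature points slower than the threshold, average their speeds, and map the ratio of that average to the collector's own speed onto identification information. The claims do not fix the ratio-to-label mapping, so the rule below (ratio near 1.0 means the apparent motion is explained by the robot's own movement, i.e. a static object; otherwise a moving one) is an illustrative assumption, as are the function name and the tolerance parameter.

```python
def identify(speeds, speed_threshold, collector_speed, tol=0.2):
    """Classify the target via the ratio of image motion to robot motion.

    speeds: feature point moving speeds for one frame (pixels/second,
    already scaled to be comparable with collector_speed);
    speed_threshold / collector_speed: as in claims 5 and 6.
    """
    kept = [s for s in speeds if s < speed_threshold]   # claim 5 screening
    if not kept or collector_speed == 0:
        return "unknown"
    avg = sum(kept) / len(kept)                         # average moving speed
    ratio = avg / collector_speed                       # claim 6 ratio
    return "static" if abs(ratio - 1.0) <= tol else "moving"
```

Discarding fast outliers before averaging makes the estimate robust to a few mistracked feature points, which is presumably the purpose of the screening step.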
7. The method according to claim 2, wherein the determining the feature point motion information of the k-th target feature point based on the first position and the second position comprises:
acquiring a third position where a g target feature point in a p frame image in the ordered multi-frame images is located, wherein p is a positive integer which is greater than or equal to 1 and smaller than the total number of frames of the multi-frame images, p is different from i, and g is a positive integer which is greater than or equal to 1 and smaller than the total number of the target feature points in the p frame image;
if the g-th target feature point exists in the (p+1)-th frame image in the ordered multi-frame image, acquiring a fourth position where the g-th target feature point is located in the (p+1)-th frame image;
determining the feature point motion information of the kth target feature point based on the first position, the second position, the third position, and the fourth position.
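Claim 7 adds a second adjacent-frame pair (the third and fourth positions) to the motion estimate. The claim does not specify how the four positions are fused, so simple averaging of the per-pair speeds, which smooths out single-frame tracking noise, is shown below as an assumption.

```python
def multi_pair_motion(offsets_and_dts):
    """Fuse motion estimates from several adjacent-frame pairs.

    offsets_and_dts: list of ((dy, dx), dt) entries, one per tracked
    pair of frames; returns the mean per-pair speed.
    """
    speeds = [((dy * dy + dx * dx) ** 0.5) / dt
              for (dy, dx), dt in offsets_and_dts]
    return sum(speeds) / len(speeds)
```

With the first/second positions yielding one (offset, interval) entry and the third/fourth positions another, the fused value plays the role of the kth feature point's motion information.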
8. An apparatus for calibrating a target object, comprising:
an acquisition module, configured to acquire, when cleaning equipment performs a cleaning operation, an ordered multi-frame image including a target object, captured by the cleaning equipment through a top-view image collector;
a processing module, configured to analyze adjacent frames in the ordered multi-frame image to acquire feature point motion information of a target feature point of the target object;
the processing module is further configured to determine identification information of the target object based on the feature point motion information and the acquired collector motion information of the top-view image collector;
the processing module is further configured to calibrate an area where the target object is located on the cleaning track of the cleaning device based on the identification information.
9. A cleaning device, characterized in that it comprises:
a memory for storing executable instructions;
a processor for executing executable instructions stored in the memory to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the method of any one of claims 1 to 7.
CN202110801537.8A 2021-07-15 2021-07-15 Method and device for calibrating target object, cleaning equipment and storage medium Pending CN113657164A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110801537.8A CN113657164A (en) 2021-07-15 2021-07-15 Method and device for calibrating target object, cleaning equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113657164A true CN113657164A (en) 2021-11-16

Family

ID=78489495

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110801537.8A Pending CN113657164A (en) 2021-07-15 2021-07-15 Method and device for calibrating target object, cleaning equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113657164A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115994911A (en) * 2023-03-24 2023-04-21 山东上水环境科技集团有限公司 Natatorium target detection method based on multi-mode visual information fusion

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170106274A (en) * 2017-09-11 2017-09-20 엘지전자 주식회사 Robot cleaner and controlling method of the same
CN107610108A (en) * 2017-09-04 2018-01-19 腾讯科技(深圳)有限公司 Image processing method and device
CN108108748A (en) * 2017-12-08 2018-06-01 联想(北京)有限公司 A kind of information processing method and electronic equipment
US20190146517A1 (en) * 2017-11-15 2019-05-16 Samsung Electronics Co., Ltd. Moving apparatus for cleaning and control method thereof
CN110645986A (en) * 2019-09-27 2020-01-03 Oppo广东移动通信有限公司 Positioning method and device, terminal and storage medium
US20200159246A1 (en) * 2018-11-19 2020-05-21 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Methods and systems for mapping, localization, navigation and control and mobile robot
US20200160539A1 (en) * 2018-11-16 2020-05-21 National Applied Research Laboratories Moving object detection system and method
CN111374607A (en) * 2018-12-29 2020-07-07 尚科宁家(中国)科技有限公司 Target identification method and device based on sweeping robot, equipment and medium
WO2020259360A1 (en) * 2019-06-28 2020-12-30 Oppo广东移动通信有限公司 Locating method and device, terminal, and storage medium
US20210103299A1 (en) * 2017-12-29 2021-04-08 SZ DJI Technology Co., Ltd. Obstacle avoidance method and device and movable platform
CN112783147A (en) * 2019-11-11 2021-05-11 科沃斯机器人股份有限公司 Trajectory planning method and device, robot and storage medium
CN112799400A (en) * 2020-12-28 2021-05-14 深兰人工智能(深圳)有限公司 Cleaning track planning method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RICHARD BORMANN ET AL.: "New Brooms Sweep Clean - an Autonomous Robotic Cleaning Assistant for Professional Office Cleaning", 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 4470 - 4477 *
DENG BIN: "Research on Artificial Intelligence Algorithms Empowering Vision-Navigation Cleaning Robots" (人工智能算法赋能视觉导航清洁机器人的研究), Information Technology and Informatization (《信息技术与信息化》), pages 238 - 240


Similar Documents

Publication Publication Date Title
JP5487298B2 (en) 3D image generation
CN109076198B (en) Video-based object tracking occlusion detection system, method and equipment
US6529613B1 (en) Motion tracking using image-texture templates
US10163256B2 (en) Method and system for generating a three-dimensional model
US6741725B2 (en) Motion tracking using image-texture templates
Golightly et al. Corner detection and matching for visual tracking during power line inspection
KR101643672B1 (en) Optical flow tracking method and apparatus
EP1355274A2 (en) 3D reconstruction of multiple views with altering search path and occlusion modeling
EP1355273A2 (en) Method and system for 3D smoothing within the bound of error regions of matching curves
CN109977466B (en) Three-dimensional scanning viewpoint planning method and device and computer readable storage medium
CN111784728B (en) Track processing method, device, equipment and storage medium
US10096114B1 (en) Determining multiple camera positions from multiple videos
Kaczmarek Stereo vision with Equal Baseline Multiple Camera Set (EBMCS) for obtaining depth maps of plants
EP3593322B1 (en) Method of detecting moving objects from a temporal sequence of images
Guomundsson et al. ToF imaging in smart room environments towards improved people tracking
CN113657164A (en) Method and device for calibrating target object, cleaning equipment and storage medium
CN111899279A (en) Method and device for detecting motion speed of target object
CN114529566B (en) Image processing method, device, equipment and storage medium
CN110315538B (en) Method and device for displaying barrier on electronic map and robot
CN111780744A (en) Mobile robot hybrid navigation method, equipment and storage device
CN105451009B (en) A kind of information processing method and electronic equipment
CN109284707A (en) Moving target detection method and device
Hemmat et al. Improved ICP-based pose estimation by distance-aware 3D mapping
CN111684489B (en) Image processing method and device
Spurlock et al. Dynamic subset selection for multi-camera tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination