Visual map positioning method and system
Technical Field
The invention belongs to the technical field of visual maps, relates to a positioning method, and particularly relates to a visual map positioning method and a visual map positioning system.
Background
After matching a photo against the visual map, the position and orientation of the camera that took the photo can be obtained. A common form of visual localization map is a collection of 3d points with descriptors. Such a map stores, for each 3d point, its position together with a plurality of descriptors. A descriptor is a small piece of data, such as 32 bytes or 32 floats, which represents the appearance of the feature point corresponding to this 3d point in a photograph.
The calculation of visual descriptors is illustrated in fig. 1: each arrow in the right-hand figure may be represented by a vector, and the vectors represented by all the arrows are combined to form the descriptor of a feature point.
Positioning based on a visual map refers to calculating the position and orientation of a photograph relative to the map. From a photo, a number of corner points can be extracted, and a descriptor can be extracted at each corner point; these descriptors store the pattern features near the corner point. These descriptors can then be matched against the descriptors of the 3d points in the map; that is, by comparing descriptor differences, one finds which 3d points in the map and which 2d points in the photo are the same points.
Through the matching of descriptors, the correspondence between the 2d points of a plurality of images and the 3d points of the positioning map can be established. Using this correspondence, the shooting position and orientation of each photo can be calculated.
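The pose computation from such 2d-3d correspondences can be sketched as follows. This is an illustrative simplification: a linear (DLT) pose estimate in normalized image coordinates with no outlier handling, whereas production systems typically use a calibrated PnP solver combined with RANSAC; the function name is an assumption.

```python
import numpy as np

def dlt_pose(X, x):
    """Estimate a 3x4 camera matrix P with x ~ P @ [X; 1] from n >= 6
    2d-3d correspondences, using the direct linear transform (DLT).
    X: (n, 3) map points; x: (n, 2) normalized image points."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    P = Vt[-1].reshape(3, 4)
    P /= np.linalg.norm(P[2, :3])        # fix the scale of the solution
    Xh = np.hstack([X, np.ones((len(X), 1))])
    if np.median(Xh @ P[2]) < 0:         # fix the sign: depths must be positive
        P = -P
    return P                             # P = [R | t] in normalized coordinates
```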
However, 3d-to-2d matching is generally computationally intensive, since the number of 3d points in the map is much greater than the number of 2d feature points in a single photo. The method of calculating the camera position and orientation by 3d-to-2d matching therefore cannot be run at too high a frequency. In addition, some areas are not covered by the map, and in those areas this method cannot provide positioning.
The usual remedy is a visual odometer, which calculates the position and orientation of each picture relative to the previous picture in a sequence of pictures. The visual odometer only needs to match consecutive pictures, which is faster than 3d-to-2d matching. Suppose the matrix T_w_i represents the position and orientation of the previous photo, and T_i_j is the pose of the next picture relative to the previous picture as calculated by the visual odometer. Then the pose of the next picture relative to the map can be obtained by T_w_j = T_w_i × T_i_j; i.e. the position of the next photo relative to the map can be obtained without performing the 3d-to-2d matching task.
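The recurrence T_w_j = T_w_i × T_i_j can be illustrated with 4x4 homogeneous transformation matrices; the numeric values below are illustrative only.

```python
import numpy as np

def make_transform(R, t):
    """Pack a 3x3 rotation R and a translation t into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_w_i: pose of the previous photo relative to the map (from 3d-to-2d matching).
T_w_i = make_transform(np.eye(3), np.array([1.0, 0.0, 0.0]))
# T_i_j: pose of the next photo relative to the previous one (from the visual odometer).
T_i_j = make_transform(np.eye(3), np.array([0.5, 0.0, 0.0]))
# The recurrence: pose of the next photo relative to the map, with no 3d-to-2d matching.
T_w_j = T_w_i @ T_i_j
```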
Each triangle in fig. 2 represents the position and orientation of the camera at a different time. A successful match to the map at time t4 yields a position that differs from the position obtained by the visual odometer recurrence, so at time t4 the accumulated error of the visual odometer is corrected by one 3d-to-2d match.
However, this has the disadvantage that the position of the previous picture relative to the map must already have been calculated. If the 3d-to-2d matching takes a long time, for example 1 second to produce a result, the whole positioning system will have a correspondingly long delay.
The existing visual map positioning mode is mainly based on a Kalman filtering method, and has the following defects:
(1) Information between image sequences is not used, so positioning quality is poor in areas without map coverage. In contrast, the present method fuses the inter-image information obtained by the visual odometer with the image-to-map matching information, and can therefore obtain a high-precision positioning result from the odometer information alone in areas where map quality is poor.
(2) Matching a picture to a map is time-consuming, so not every picture is matched to the map. In the Kalman-filtering-based method, the positions of pictures that are not matched to the map can be obtained quickly through the filter's prediction, but the positioning results that do require map matching suffer a large delay. For example, if matching a picture to the map takes 1 s, the positioning delay of that picture is 1 s.
In view of this, there is an urgent need to design a visual map positioning method so as to overcome the above-mentioned drawbacks of the existing visual map positioning method.
Disclosure of Invention
The invention provides a visual map positioning method and a visual map positioning system, which can obtain an estimated value of the current camera position with very low delay and, after the long-running 3d-to-2d matching finishes, update the camera position to a more accurate value, thereby reducing the positioning delay.
In order to solve the technical problems, according to one aspect of the present invention, the following technical scheme is adopted:
a visual map positioning method, the visual map positioning method comprising:
calculating a transformation matrix of the local map relative to the global map through matching descriptors of the 3d points of the local map and the 3d points of the global map;
acquiring the position and the orientation of the set photo relative to the reference photo;
and transforming the position and orientation of the acquired set photo relative to the reference photo into a global map by using the transformation matrix.
As one embodiment of the present invention, the process of calculating a transformation matrix of a local map with respect to a global map includes:
step S1, acquiring a reference photo and a photo to be matched, and extracting feature points and descriptors of the reference photo and the photo to be matched;
step S2, calculating the positions and orientations of the feature points of the photo to be matched relative to the feature points of the reference photo; the 3d points of known position constitute a local map;
step S3, matching descriptors from the 3d points of the local map to the 3d points of the global map to find the matching relationship between the local map and the global map;
step S4, calculating the transformation matrix from the local map points to the global map.
In step S4, as an embodiment of the present invention, the global transformation is composed of a rotation R and a translation t; each matched pair of points gives the linear equation p2 = R × p1 + t;
a corresponding number of linear equations is listed according to the number of (p1, p2) pairs, and solving this system of linear equations yields the unknowns R and t.
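Although the text only states that a system of linear equations is solved, one common way to recover R and t from the (p1, p2) pairs is the SVD-based (Kabsch) solution, sketched below under the assumption of noise-free rigid correspondences; the function name is illustrative.

```python
import numpy as np

def solve_rigid_transform(p1, p2):
    """Solve p2 = R @ p1 + t for the rotation R and translation t, given
    paired 3d points p1 (local map) and p2 (global map), both (N, 3)."""
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)          # centroids
    H = (p1 - c1).T @ (p2 - c2)                        # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    # The sign correction keeps R a proper rotation (det = +1), not a reflection.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c2 - R @ c1
    return R, t
```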
As one embodiment of the present invention, the process of obtaining the position and orientation of the set photo with respect to the reference photo includes step S5: the position and orientation of the corresponding picture relative to the reference picture is calculated using a visual odometer.
As one embodiment of the present invention, the process of transforming the position and orientation of the acquired set photo with respect to the reference photo into the global map includes:
step S6, obtaining the position of the corresponding picture relative to the map by using the transformation matrix;
step S7, after receiving a subsequent picture, obtaining the position of the current camera relative to the reference photo through the visual odometer, and then obtaining the position of the current camera relative to the map.
As one embodiment of the present invention, the process of transforming the position and orientation of the acquired set photo relative to the reference photo into the global map further includes step S8, in which the subsequent pictures and the existing pictures together create more 3d points of the local map; these 3d points of the local map can be matched with the global map to obtain a more accurate transformation matrix.
As an embodiment of the present invention, in step S1, for each descriptor of the reference photo, the most similar descriptor is found in the photo to be matched; extracting N descriptors from the reference photo thus gives N pairs of feature points; the matches are sorted by descriptor difference, and a set number of matches with the smallest differences is retained.
In step S1, as an embodiment of the present invention, feature points are locations in the picture where the brightness changes sharply; FAST corner points are used as feature points. Descriptors represent the visual features around a feature point; ORB descriptors are used.
As one embodiment of the present invention, in step S2, the process of calculating the 3d point position includes:
step S21, obtaining the transformation matrix of the photo to be matched relative to the reference photo using the five-point method;
step S22, obtaining the positions of the 3d points relative to the reference photo using triangulation.
As an embodiment of the present invention, in step S3, the matching method includes:
step S31, descriptor matching: for each descriptor of the reference photo, the most similar descriptor is found in the photo to be matched; extracting N descriptors from the reference photo gives N pairs of feature points; the matches are sorted by descriptor difference, and the M matching pairs with the smallest differences are retained;
and step S32, screening out the wrong pairings using the RANSAC algorithm to obtain the final pairings.
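The sort-and-keep rule of step S31 can be sketched as brute-force Hamming matching of binary (ORB-style) descriptors; this helper is an illustrative sketch and omits the RANSAC screening of step S32.

```python
import numpy as np

def match_descriptors(desc_ref, desc_qry, keep_m):
    """For each reference descriptor (rows of uint8 bytes), find the most
    similar query descriptor by Hamming distance, then keep the keep_m
    pairs with the smallest difference."""
    # XOR plus bit-count gives the Hamming distance between byte descriptors.
    xor = desc_ref[:, None, :] ^ desc_qry[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)      # (n_ref, n_qry) distances
    best = dist.argmin(axis=1)                         # best match per reference row
    best_dist = dist[np.arange(len(desc_ref)), best]
    order = np.argsort(best_dist)[:keep_m]             # sort and keep the M smallest
    return [(int(i), int(best[i])) for i in order]
```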
As an embodiment of the present invention, in step S5, the position t_1_3 and the orientation R_1_3 of the third picture relative to the first picture are calculated using a visual odometer;
the transformation matrix of the local map relative to the global map is denoted T_w_1; in step S6, the position of the third picture relative to the map is obtained using T_w_1: T_w_3 = T_w_1 × T_1_3.
In step S7, as soon as a subsequent photo is received, the position T_1_n of the current camera relative to the reference photo is obtained by the visual odometer; the position of the current camera relative to the map is then obtained through T_w_n = T_w_1 × T_1_n.
According to another aspect of the invention, the following technical scheme is adopted: a visual map positioning system, the visual map positioning system comprising:
the transformation matrix acquisition module is used for calculating a transformation matrix of the local map relative to the global map through descriptor matching of the 3d point of the local map and the 3d point of the global map;
the relative position and orientation acquisition module is used for acquiring the position and orientation of the set photo relative to the reference photo;
and the data transformation module is used for transforming the position and the orientation obtained by the visual odometer into the global map by utilizing the transformation matrix.
As one embodiment of the present invention, the transformation matrix acquisition module includes:
-a feature and description extraction unit for obtaining a reference photo, a photo to be matched, extracting feature points and descriptors of the reference photo and the photo to be matched;
-a position and orientation acquisition unit for calculating the position and orientation of the feature points of the photo to be matched relative to the feature points of the reference photo; the 3d points of the known locations constitute a local map;
-a matching relationship obtaining unit for finding a matching relationship of the local map and the global map by descriptor matching of the local map 3d point to the global map 3d point;
-a transformation matrix calculation unit for calculating a transformation matrix from the local map points to the global map.
As an embodiment of the present invention, the transformation matrix calculation unit calculates the following:
the global transformation consists of a rotation R and a translation t; each matched pair of points gives the linear equation p2 = R × p1 + t;
a corresponding number of linear equations is listed according to the number of (p1, p2) pairs, and solving this system of linear equations yields the unknowns R and t.
As one embodiment of the present invention, the relative position and orientation acquisition module includes a visual odometer; the data transformation module includes:
-a relative map position acquisition unit to obtain the position of the corresponding picture relative to the map using the transformation matrix;
-a camera relative position acquisition unit for obtaining the position of the current camera relative to the reference picture by means of a visual odometer after receiving the subsequent picture, and then obtaining the position of the current camera relative to the map.
As an embodiment of the present invention, for each descriptor of the reference photo, the feature and description extracting unit finds the most similar descriptor in the photo to be matched; extracting N descriptors from the reference photo gives N pairs of feature points; the matches are sorted by descriptor difference, and a set number of matches with the smallest differences is retained.
As one embodiment of the present invention, the position and orientation obtaining unit obtains a transformation matrix of the photo to be matched with respect to the reference photo using a five-point method, and obtains a position of the 3d point with respect to the reference photo using a triangulation method.
As one embodiment of the present invention, the matching relationship acquisition unit includes:
-a descriptor matching subunit for finding, for each descriptor of the reference photo, the most similar descriptor in the photo to be matched; extracting N descriptors from the reference photo gives N pairs of feature points; the matches are sorted by descriptor difference, and the M matching pairs with the smallest differences are retained;
-a pairing subunit for screening out the wrong pairings using the RANSAC algorithm to obtain the final pairings.
As one embodiment of the present invention, the relative position and orientation obtaining module calculates the position t_1_3 and the orientation R_1_3 of the third picture relative to the first picture; the transformation matrix of the local map relative to the global map is denoted T_w_1;
the relative map position obtaining unit obtains the position of the third picture relative to the map using T_w_1: T_w_3 = T_w_1 × T_1_3.
As one embodiment of the present invention, the camera relative position obtaining unit obtains the position of the current camera relative to the map: T_w_n = T_w_1 × T_1_n.
The invention has the beneficial effects that: the visual map positioning method and system can obtain an estimated value of the current camera position with very low delay and, after the long-running 3d-to-2d matching finishes, update the camera position to a more accurate value, thereby reducing the positioning delay.
In the invention, the visual odometer does not depend on the map matching result, so the positions of all pictures can be obtained rapidly through the visual odometer. After the map matching, which runs in parallel, is finished, the position of the latest picture is updated using the map matching result. Compared with the Kalman-filtering-based method, the present method has better parallelism. The invention also makes it possible to use future map matching methods that perform better but run more slowly.
Drawings
Fig. 1 is a schematic diagram showing 3d information in a conventional visual positioning map.
Fig. 2 is a schematic diagram of a conventional visual positioning method using a visual odometer for positioning.
FIG. 3 is a flow chart of a visual map positioning method according to an embodiment of the invention.
Fig. 4 is a flowchart of a visual map positioning method according to an embodiment of the invention.
Fig. 5 is a flowchart of a visual map positioning method according to an embodiment of the invention.
Fig. 6 is a schematic diagram of acquiring a partial map according to an embodiment of the invention.
FIG. 7 is a schematic diagram of computing a transformation matrix from local map points to a global map in accordance with an embodiment of the present invention.
FIG. 8 is a diagram of pairing a local map with a global map according to an embodiment of the present invention.
Fig. 9 is a schematic diagram of a visual map positioning system according to an embodiment of the invention.
Fig. 10 is a schematic diagram of a visual map positioning system according to an embodiment of the invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
For a further understanding of the present invention, preferred embodiments of the invention are described below in conjunction with the examples, but it should be understood that these descriptions are merely intended to illustrate further features and advantages of the invention, and are not limiting of the claims of the invention.
The description of this section is intended to be illustrative of only a few exemplary embodiments and the invention is not to be limited in scope by the description of the embodiments. It is also within the scope of the description and claims of the invention to interchange some of the technical features of the embodiments with other technical features of the same or similar prior art.
The invention discloses a visual map positioning method, which comprises the following steps: calculating a transformation matrix of the local map relative to the global map through matching descriptors of the 3d points of the local map and the 3d points of the global map; acquiring the position and the orientation of the set photo relative to the reference photo; and transforming the position and orientation of the acquired set photo relative to the reference photo into a global map by using the transformation matrix.
FIG. 3 is a flow chart of a visual map positioning method according to an embodiment of the invention; referring to fig. 3, in an embodiment of the present invention, the visual map positioning method includes:
step one, calculating a transformation matrix of a local map relative to a global map through matching descriptors of the 3d points of the local map and the 3d points of the global map;
step two, acquiring the position and orientation of the set photo relative to the reference photo;
and thirdly, transforming the position and orientation of the acquired set photo relative to the reference photo into a global map by utilizing the transformation matrix.
Step one and step two have no required order and do not directly affect each other.
FIG. 4 is a flow chart of a visual map positioning method according to an embodiment of the invention; referring to fig. 4, in an embodiment of the present invention, the first step includes:
step S1, acquiring a reference photo and a photo to be matched, and extracting feature points and descriptors of the reference photo and the photo to be matched;
step S2, calculating the positions and orientations of the feature points of the photo to be matched relative to the feature points of the reference photo; the 3d points of known position constitute a local map;
step S3, matching descriptors from the 3d points of the local map to the 3d points of the global map to find the matching relationship between the local map and the global map;
step S4, calculating the transformation matrix from the local map points to the global map.
In one embodiment of the present invention, in step S4, the global transformation is set to be composed of a rotation R and a translation t (in an embodiment of the present invention, the global transformation may also include other components); each matched pair of points gives the linear equation p2 = R × p1 + t. Since there are many (p1, p2) pairs, a corresponding number of linear equations can be listed, and solving this system of linear equations yields the unknowns R and t.
With continued reference to fig. 4, in an embodiment of the present invention, the second step includes step S5 of calculating the position and orientation of the corresponding picture relative to the reference picture using a visual odometer.
With continued reference to fig. 4, in an embodiment of the present invention, the third step includes:
step S6, obtaining the position of the corresponding picture relative to the map by using the transformation matrix;
step S7, after receiving a subsequent picture, obtaining the position of the current camera relative to the reference photo through the visual odometer, and then obtaining the position of the current camera relative to the map.
In an embodiment of the present invention, the third step further includes step S8, in which the subsequent pictures and the existing pictures together establish more 3d points of the local map; these 3d points of the local map can be matched with the global map to obtain a more accurate transformation matrix.
The visual positioning method provided by the invention allows the visual odometer and the 3d-to-2d matching to run simultaneously; that is, the position of the next camera relative to the map can be obtained from the low-delay visual odometer result alone, even if the position of the previous image relative to the map has not yet been calculated.
The 3d-to-2d matching is also called matching of the local map and the global map. The map composed of feature points whose 3d positions are known, obtained by matching between successive pictures, is called the local map. The localization map is also called the global map.
The principle of the invention is to first calculate the transformation matrix of the local map relative to the global map, and then transform the position and orientation obtained by the visual odometer into the global map using this matrix. The existing method calculates the position of a given photo based on 3d-to-2d matching, whereas the method provided by the invention calculates the transformation of the local map relative to the global map through matching of local-map 3d points with global-map 3d points. Because the visual odometer does not rely on the position and orientation of the previous picture relative to the map, it can calculate the current camera position in real time.
In an embodiment of the present invention, in step S1, for each descriptor of the reference photo, the most similar descriptor is found in the photo to be matched; extracting N descriptors from the reference photo gives N pairs of feature points; the matches are sorted by descriptor difference, and a set number of matches with the smallest differences is retained. Feature points are locations in the picture where the brightness changes sharply; FAST corner points are used as feature points. Descriptors represent the visual features around a feature point; ORB descriptors are used.
In one embodiment of the present invention, in step S2, the process of calculating the 3d point position includes:
-step S21, obtaining a transformation matrix of the photo to be matched with respect to the reference photo using a five-point method;
step S22, obtaining the position of the 3d point relative to the reference photograph using a triangulation method.
In an embodiment of the present invention, in step S3, the matching method includes:
-step S31, descriptor matching: for each descriptor of the reference photo, the most similar descriptor is found in the photo to be matched; extracting N descriptors from the reference photo gives N pairs of feature points; the matches are sorted by descriptor difference, and the M matching pairs with the smallest differences are retained;
step S32, screening out the wrong pairings using the RANSAC algorithm to obtain the final pairings.
In an embodiment of the present invention, in step S5, the position t_1_3 and the orientation R_1_3 of the third picture relative to the first picture are calculated using the visual odometer. The transformation matrix of the local map relative to the global map is denoted T_w_1; in step S6, the position of the third picture relative to the map is obtained using T_w_1: T_w_3 = T_w_1 × T_1_3.
In one embodiment of the present invention, in step S7, after a subsequent photo is received, the position T_1_n of the current camera relative to the reference photo is obtained immediately through the visual odometer; the position of the current camera relative to the map is then obtained through T_w_n = T_w_1 × T_1_n.
FIG. 5 is a flow chart of a visual map positioning method according to an embodiment of the invention; referring to fig. 5, in an embodiment of the invention, the visual map positioning method includes the following steps:
step 1, assume that there are two consecutive photos: a first photograph t1 and a second photograph t2. Feature points and descriptors of t1 and t2 are extracted first. Then for each descriptor of t1, the most similar one is found in t2. Assuming that N descriptors are extracted from t1, then N pairs of feature points are matched at this time. Finally, sorting by descriptor difference (using hamming distance to calculate difference), keeping the first 200 matches with minimum difference.
A feature point is a location in the picture where the brightness changes sharply; here FAST corner points are used as feature points. A descriptor represents the visual features around a feature point; here ORB descriptors are used.
Step 2, calculating the positions and orientations of the feature points relative to the first picture. These 3d points of known position constitute a local map, as illustrated in fig. 6. The process of calculating the 3d point positions includes:
(1) Obtaining the transformation matrix of t2 relative to t1 using the five-point method; the input is the feature point pairs obtained in the previous step.
(2) Obtaining the positions of the 3d points relative to t1 by triangulation; the input is the transformation matrix of t2 relative to t1 obtained in the previous step.
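The five-point relative pose is usually obtained from a library solver, so only the triangulation step is sketched here, as a linear (DLT) triangulation of a single point from two views in normalized coordinates; the function name is an assumption.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3d point from two views.
    P1, P2: 3x4 projection matrices (e.g. [I|0] for t1 and [R|t] for t2);
    x1, x2: the point's normalized image coordinates in each view."""
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize to a 3d point
```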
Step 3, matching descriptors from 3d points to 3d points to find the matching relationship between the local map and the global map (the positioning map).
The matching method comprises the following steps:
(1) Descriptor matching: the method is the same as the descriptor matching between t1 and t2 described above; M matched pairs of 3d points are obtained.
(2) The wrong pairings are screened out using RANSAC to obtain the final pairings.
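The RANSAC screening can be sketched as follows: repeatedly fit a rigid transform to a random minimal sample of three 3d-3d pairings and keep the largest consensus set. The iteration count, inlier threshold, and function names are illustrative choices, not values from the patent.

```python
import numpy as np

def fit_rigid(p1, p2):
    """Minimal SVD-based (Kabsch) fit of p2 ~ R @ p1 + t."""
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    U, _, Vt = np.linalg.svd((p1 - c1).T @ (p2 - c2))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c2 - R @ c1

def ransac_pairs(p1, p2, iters=200, thresh=0.1, seed=0):
    """Screen out wrong 3d-3d pairings: fit a rigid transform to random
    minimal samples (3 pairs) and keep the largest set of consistent pairs."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p1), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p1), size=3, replace=False)
        R, t = fit_rigid(p1[idx], p2[idx])
        err = np.linalg.norm(p1 @ R.T + t - p2, axis=1)   # residual per pairing
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```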
Step 4, the transformation matrix T_w_1 from the local map points to the global map is calculated, as shown in fig. 7. The input is the pairing between local-map and global-map 3d points obtained in the previous step, as shown in fig. 8.
Step 5, the position t_1_3 and the orientation R_1_3 of the third picture relative to the first picture are calculated using a visual odometer.
Step 6, obtaining the position of the third picture relative to the map using the transformation matrix T_w_1: T_w_3 = T_w_1 × T_1_3.
Step 7, when a subsequent picture is received, the position of the current camera relative to the first image, T_1_n, can be obtained immediately through the visual odometer; the position of the current camera relative to the map is then obtained through T_w_n = T_w_1 × T_1_n.
Step 8, at the same time, the subsequent pictures and the previous pictures together establish more 3d points of the local map. These 3d points can be matched with the global map to obtain a more accurate T_w_1.
The invention also discloses a visual map positioning system, and FIG. 9 is a schematic diagram of the composition of the visual map positioning system according to an embodiment of the invention; referring to fig. 9, in an embodiment of the present invention, the visual map positioning system includes: a transformation matrix acquisition module 1, a relative position and orientation acquisition module 3 and a data transformation module 5. The transformation matrix acquisition module 1 is used for calculating a transformation matrix of the local map relative to the global map through descriptor matching of the 3d point of the local map and the 3d point of the global map; the relative position and orientation obtaining module 3 is used for obtaining the position and orientation of the set photo relative to the reference photo; the data transformation module 5 is used to transform the position and orientation of the visual odometer into a global map using the transformation matrix.
FIG. 10 is a schematic diagram of a visual map positioning system according to an embodiment of the present invention; referring to fig. 10, in an embodiment of the present invention, the transformation matrix acquisition module 1 includes: the feature and description extracting unit 11, the position and orientation acquiring unit 13, the matching relation acquiring unit 15, and the transformation matrix calculating unit 17. The feature and description extraction unit 11 is configured to obtain a reference photo, a photo to be matched, and extract feature points and descriptors of the reference photo and the photo to be matched. The position and orientation obtaining unit 13 is configured to calculate a position and orientation of a feature point of a photo to be matched relative to a feature point of a reference photo; the 3d points of the known locations constitute a local map. The matching relationship acquiring unit 15 is configured to find a matching relationship between the local map and the global map by descriptor matching from the point of the local map 3d to the point of the global map 3 d. The transformation matrix calculation unit 17 is used to calculate a transformation matrix from the local map points to the global map.
In an embodiment of the present invention, the calculation mode of the transformation matrix calculation unit is as follows: setting a global transformation consisting of a rotation amount R and a translation amount t (in an embodiment of the present invention, the global transformation also includes others); the following linear equation is listed: p2=r×p1+t; listing a corresponding number of linear equations according to the logarithm of (p 1, p 2); since there are many pairs (p 1, p 2), the unknowns R and t are obtained by solving a solution of a system of linear equations.
With continued reference to fig. 10, in an embodiment of the present invention, the data transformation module 5 includes: a relative map position acquisition unit 51, a camera relative position acquisition unit 53. The relative map position obtaining unit 51 is configured to obtain a position of the corresponding picture relative to the map using the transformation matrix; the camera relative position obtaining unit 53 is configured to obtain, after receiving the subsequent picture, a position of the current camera relative to the reference picture through the visual odometer, and then obtain a position of the current camera relative to the map.
In an embodiment of the present invention, for each descriptor of the reference photo, the feature and description extraction unit 11 finds the most similar descriptor in the photo to be matched; extracting N descriptors from the reference photo gives N pairs of feature points; the matches are sorted by descriptor difference, and a set number of matches with the smallest differences is retained. The position and orientation obtaining unit 13 obtains the transformation matrix of the photo to be matched relative to the reference photo by the five-point method, and obtains the positions of the 3d points relative to the reference photo by triangulation.
In an embodiment of the present invention, the matching relationship obtaining unit 15 includes a descriptor matching subunit and a pairing subunit. The descriptor matching subunit is configured to find, for each descriptor of the reference photo, the most similar descriptor in the photo to be matched; extracting N descriptors from the reference photo gives N pairs of feature points; the M matching pairs with the smallest differences are retained according to the descriptor difference ordering. The pairing subunit uses the RANSAC algorithm to screen out the wrong pairings and obtain the final pairings.
In one embodiment of the invention, the relative position and orientation acquisition module 3 comprises a visual odometer; the visual odometer calculates the position t_1_3 and the orientation R_1_3 of the third picture relative to the first picture; the transformation matrix of the local map relative to the global map is denoted T_w_1. The relative map position obtaining unit 51 obtains the position of the third picture relative to the map using T_w_1: T_w_3 = T_w_1 × T_1_3. The camera relative position obtaining unit 53 obtains the position of the current camera relative to the map: T_w_n = T_w_1 × T_1_n.
In summary, the visual map positioning method and system provided by the invention can obtain an estimated value of the current camera position with very low delay and, after the long-running 3d-to-2d matching finishes, update the camera position to a more accurate value, thereby reducing the positioning delay.
In the invention, the visual odometer does not depend on the map matching result, so the positions of all pictures can be obtained rapidly through the visual odometer. After the map matching, which runs in parallel, is finished, the position of the latest picture is updated using the map matching result. Compared with the Kalman-filtering-based method, the present method has better parallelism. The invention also makes it possible to use future map matching methods that perform better but run more slowly.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The description and applications of the present invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternatives and equivalents of the various components of the embodiments are known to those of ordinary skill in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other assemblies, materials, and components, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.