Disclosure of Invention
To this end, the present invention provides a new vehicle driving assistance solution in an attempt to solve, or at least alleviate, at least one of the problems presented above.
According to an aspect of the present invention, there is provided a driving assistance method including the steps of: acquiring road data within a predetermined range, wherein the road data comprises static and/or dynamic information of each object within the predetermined range; identifying, among the objects, one or more vehicles and their vehicle motion information based on the road data; determining driving-related information of each identified vehicle based on the road data and the vehicle motion information; and transmitting the driving-related information to the identified vehicle through a predetermined communication means.
Optionally, in the driving assistance method according to the present invention, the step of acquiring road data within a predetermined range includes: acquiring static information about the predetermined range that is stored in advance; obtaining static and/or dynamic information of each object in the predetermined range by using each sensor in roadside sensing equipment deployed in the predetermined range; and generating the road data by combining the pre-stored static information and the information obtained by the respective sensors.
Optionally, in the driving assistance method according to the present invention, the step of acquiring road data within a predetermined range further includes: receiving, through a predetermined communication means, vehicle travel information sent by a vehicle within the predetermined range; and combining the pre-stored static information, the information obtained by the respective sensors, and the received vehicle travel information to generate the road data.
Optionally, in the driving assistance method according to the present invention, the step of acquiring static information about the predetermined range includes: determining the geographical position of the roadside sensing equipment; and obtaining, from a server, static information within a predetermined range of the geographical position.
Optionally, in the driving assistance method according to the present invention, identifying one or more vehicles and their motion information among the objects based on the road data includes: determining vehicle objects and their motion information based on the motion characteristics of the objects; and determining the identity of each vehicle object.
Optionally, in the driving assistance method according to the present invention, the communication means includes one or more of: V2X, 5G, 4G and 3G communications.
Optionally, in the driving assistance method according to the present invention, the objects include one or more of the following: lane lines, guardrails, isolation strips, vehicles, pedestrians, and spilled objects; and the static and/or dynamic information includes one or more of the following: position, distance, velocity, angular velocity, license plate, type, and size.
Optionally, in the driving assistance method according to the present invention, the sensor in the roadside sensing device includes one or more of: millimeter wave radar, laser radar, camera, infrared probe.
Optionally, in the driving assistance method according to the present invention, the vehicle travel information includes one or more of the following: the current time, size, velocity, acceleration, angular velocity, and position of the vehicle.
Optionally, in the driving assistance method according to the present invention, the driving-related information includes a potential collision risk, and the step of determining the driving-related information of the identified vehicle based on the road data and the vehicle motion information includes: determining a potential collision risk for the identified vehicle based on the road data and the vehicle motion information by means of modeling or deep learning.
Optionally, in the driving assistance method according to the present invention, the step of determining the driving-related information of the identified vehicle based on the road data includes: receiving a scene request sent by a vehicle within the predetermined range; and determining driving-related information corresponding to the requested scene based on the road data.
Optionally, the driving assistance method according to the present invention is adapted to be executed in a roadside sensing device disposed in the predetermined range or on a cloud server coupled to the roadside sensing device.
According to another aspect of the present invention, there is provided a driving assistance method performed in a vehicle that travels on a road on which a roadside sensing device is disposed, the method including the steps of: receiving driving-related information from the roadside sensing device through a predetermined communication means, wherein the driving-related information is generated by the roadside sensing device according to road data within a predetermined range; and outputting the received driving-related information in the vehicle.
According to still another aspect of the present invention, there is provided a roadside sensing apparatus including: a sensor group adapted to obtain static and/or dynamic information of each object within a predetermined range thereof; a storage unit adapted to store the road data, the road data including static and/or dynamic information of each object within a predetermined range; and a calculation unit adapted to perform the driving assistance method according to the present invention.
According to still another aspect of the present invention, there is provided a driving assistance system including: a roadside sensing device deployed at a side of a road; and a vehicle that travels on the road and performs the driving assistance method according to the present invention.
According to still another aspect of the present invention, there is also provided a computing device. The computing device includes at least one processor and a memory storing program instructions, wherein the program instructions are configured to be adapted to be executed by the at least one processor and include instructions for performing the driving assistance method described above.
According to still another aspect of the present invention, there is also provided a readable storage medium storing program instructions that, when read and executed by a computing device, cause the computing device to execute the driving assistance method described above.
According to the driving assistance scheme of the present invention, the sensing capability of the roadside sensing equipment is fully utilized, so that the requirement on vehicle-mounted sensors is remarkably reduced and various driving assistance capabilities can be obtained even if no additional sensor is installed on the vehicle.
In addition, various driving-related information is obtained by analyzing the perception data and is sent to the vehicle, so that more efficient and safer driving assistance capabilities can be provided for the vehicle, breaking through the limitations of conventional driving assistance systems.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 shows a schematic view of a driving assistance system 100 according to an embodiment of the invention. As shown in fig. 1, the driving assistance system 100 includes a vehicle 110 and a roadside sensing device 200. Vehicle 110 is traveling on road 140. Roadway 140 includes a plurality of lanes 150. While traveling on the road 140, the vehicle 110 may switch between different lanes 150 according to the road condition and its driving target. The roadside sensing device 200 is disposed at the periphery of the road, and collects, using its various sensors, various information within a predetermined range around the roadside sensing device 200, particularly road data related to the road.
The roadside sensing device 200 has a predetermined coverage. According to the coverage range and the road condition of each roadside sensing device 200, a sufficient number of roadside sensing devices 200 can be deployed on both sides of the road to fully cover the entire road. Of course, according to an embodiment, instead of fully covering the entire road, the roadside sensing devices 200 may be deployed only at characteristic points of each road (such as curves, intersections, and merge/diversion points) to obtain the characteristic data of the road. The present invention is not limited by the specific number of roadside sensing devices 200 or the coverage of the road.
When the roadside sensing devices 200 are deployed, the positions of the sensing devices 200 to be deployed are calculated according to the coverage area of a single roadside sensing device 200 and the condition of the road 140. The coverage area of the roadside sensing device 200 depends on at least the arrangement height of the sensing device 200, the effective distance sensed by the sensors in the sensing device 200, and the like. And the condition of road 140 includes road length, number of lanes 150, road curvature and grade, etc. The deployment location of the perceiving device 200 may be calculated in any manner known in the art.
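The deployment calculation described above can be sketched as follows. This is a minimal illustrative example, not the calculation method of the patent: it assumes a straight road, ignores curvature and grade, and derives the ground coverage radius from the mounting height and effective sensing distance by simple geometry.

```python
import math

def coverage_radius(mount_height_m: float, sensor_range_m: float) -> float:
    """Ground-projected coverage radius of one roadside sensing device.

    The effective sensing distance is measured from the sensor head, so
    the radius on the road surface shrinks with mounting height.
    """
    if sensor_range_m <= mount_height_m:
        return 0.0
    return math.sqrt(sensor_range_m ** 2 - mount_height_m ** 2)

def deployment_spacing(mount_height_m: float, sensor_range_m: float,
                       overlap_m: float = 20.0) -> float:
    """Distance between adjacent devices for full coverage along a
    straight segment, keeping `overlap_m` of overlap between zones."""
    r = coverage_radius(mount_height_m, sensor_range_m)
    return max(2 * r - overlap_m, 0.0)

def device_positions(road_length_m: float, mount_height_m: float,
                     sensor_range_m: float) -> list[float]:
    """Longitudinal positions (metres from road start) giving full coverage."""
    r = coverage_radius(mount_height_m, sensor_range_m)
    spacing = deployment_spacing(mount_height_m, sensor_range_m)
    positions, pos = [], r  # first device covers the road start
    while pos - r < road_length_m:
        positions.append(pos)
        pos += spacing
    return positions
```

For example, a device mounted at 5 m with a 13 m effective sensing distance covers a 12 m ground radius around its base.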
After the deployment location is determined, the roadside sensing device 200 is deployed at the determined location. Since the data that the roadside sensing device 200 needs to sense includes motion data of a large number of objects, clock synchronization of the roadside sensing device 200 is performed, that is, the time of each sensing device 200 is kept consistent with the time of the vehicle 110 and the cloud platform.
Subsequently, the position of each deployed roadside sensing device 200 is determined. Since the sensing device 200 is to provide driving assistance functions for vehicles 110 traveling at high speed on the road 140, the absolute position of the sensing device 200 must be determined with high accuracy. There are a number of ways to calculate the high-accuracy absolute position of the sensing device 200. According to one embodiment, a Global Navigation Satellite System (GNSS) may be utilized to determine the high-accuracy position.
The roadside sensing device 200 collects and senses the static conditions (lane lines 120, guardrails, isolation strips, and the like) and the dynamic conditions (running vehicles 110, pedestrians 130, and spilled objects) of the road within its coverage area using its sensors, and fuses the sensing data of the different sensors to form the road data of that section of the road. The road data comprises static and dynamic information of all objects within the coverage area of the sensing device 200, in particular within the road-related area. The roadside sensing device 200 may then calculate driving-related information for each vehicle based on the road data, such as whether the vehicle has a potential collision risk, or traffic conditions outside the field of view of the vehicle (such as the road condition after a curve, or the road condition ahead of a preceding vehicle).
A vehicle 110 entering the coverage area of a roadside sensing device 200 may communicate with the roadside sensing device 200. A typical communication method is V2X. Of course, the vehicle may also communicate with the roadside sensing device 200 via the mobile Internet provided by a mobile communication service provider, using mobile communication means such as 5G, 4G, and 3G. Considering that the vehicle runs at high speed and the communication delay should be as short as possible, the V2X communication method is adopted in the general embodiment of the present invention. However, any communication means that can meet the delay requirements of the present invention is within the scope of the present invention.
The vehicle 110 may receive driving-related information related to the vehicle 110 from the roadside sensing device 200 and assist the vehicle driving using the driving-related information.
Optionally, the driving assistance system 100 further comprises a server 160. Although only one server 160 is shown in fig. 1, it should be understood that the server 160 may be a cloud service platform consisting of a plurality of servers. Each roadside sensing device 200 transmits the sensed road data to the server 160. The server 160 may combine the road data based on the location of each roadside sensing device 200 to form road data for the entire road. The server 160 may also perform further processing on the road data to form driving-related information, such as traffic conditions, accident sections, and expected transit times for the entire road.
The server 160 may transmit the road data and the driving related information of the formed whole road to each roadside sensing device 200, or may transmit the road related data and the driving related information of a section of road corresponding to several roadside sensing devices 200 adjacent to a certain roadside sensing device 200 to the roadside sensing device 200. In this way, the vehicle 110 may obtain a greater range of driving-related information from the roadside sensing device 200. Of course, the vehicle 110 may obtain the driving-related information and the road data directly from the server 160 without passing through the roadside sensing device 200.
If roadside sensing devices 200 are deployed on all roads within an area and the roadside sensing devices 200 transmit road data to the server 160, navigation instructions for road traffic within the area may be formed at the server 160. Vehicle 110 may receive the navigation instructions from server 160 and navigate accordingly.
FIG. 2 shows a schematic diagram of a roadside sensing device 200 according to one embodiment of the invention. As shown in fig. 2, the roadside sensing device 200 includes a communication unit 210, a sensor group 220, a storage unit 230, and a calculation unit 240.
The roadside sensing device 200 is to communicate with each vehicle 110 entering its coverage area to provide driving-related information to the vehicle 110 and to receive vehicle travel information from the vehicle 110. At the same time, the roadside sensing device 200 also needs to communicate with the server 160. The communication unit 210 provides these communication functions for the roadside sensing device 200. The communication unit 210 may employ various communication methods including, but not limited to, Ethernet, V2X, and 5G, 4G, and 3G mobile communication, as long as data communication can be completed with as little delay as possible. In one embodiment, the roadside sensing device 200 may communicate with vehicles 110 entering its coverage area using V2X, while communicating with the server 160 using, for example, a high-speed Internet connection.
The sensor group 220 includes various sensors, for example, radar sensors such as a millimeter wave radar 222 and a laser radar 224, and image sensors such as a camera 226 and an infrared probe 228 having a light supplement function. For the same object, various sensors can obtain different properties of the object, for example, radar sensors can make object velocity and acceleration measurements, while image sensors can obtain object shape, relative angle, etc.
The sensor group 220 collects and senses static conditions (lane lines 120, guardrails, isolation strips, etc.) and dynamic conditions (running vehicles 110, pedestrians 130, and spilled objects) of the road in the coverage area using the respective sensors, and stores the data collected and sensed by the respective sensors in the storage unit 230.
The computing unit 240 fuses the data sensed by the sensors to form road data for the road segment and also stores the road data in the storage unit 230. In addition, the computing unit 240 may further perform data analysis based on the road data, identify one or more vehicles and their motion information therein, and further determine driving-related information for the vehicle 110. Such data and information may be stored in the storage unit 230 for transmission to the vehicle 110 or the server 160 via the communication unit 210.
In addition, various calculation models, such as a collision detection model, a license plate recognition model, and the like, may be stored in the storage unit 230. These computational models may be used by the computational unit 240 to implement the corresponding steps in the method 300 described below with reference to fig. 3.
Fig. 3 shows a schematic representation of a driving assistance method 300 according to an embodiment of the invention. The driving assistance method 300 is suitable for being executed in the roadside sensing device 200 shown in fig. 2 and is also suitable for being executed in the server 160 of fig. 1. When executed in server 160, all relevant data acquired by roadside sensing devices 200 may be sent to server 160 for execution in server 160.
As shown in fig. 3, the driving assistance method 300 starts at step S310.
In step S310, road data within a predetermined range of road positions is acquired. As described above with reference to fig. 1, the roadside sensing device 200 is generally fixedly disposed near a certain road, and thus has a corresponding road position. In addition, the roadside sensing device 200 has a predetermined coverage area depending on at least the arrangement height of the sensing device 200, the effective distance for sensing by the sensors in the sensing device 200, and the like. Once the roadside sensing device 200 is deployed at a side of a certain road, a predetermined range of the road that can be covered by the sensing device can be determined according to the specific positions, heights and effective sensing distances of the sensing device and the road.
The roadside sensing device 200 collects and/or senses the static conditions (lane lines 120, guardrails, isolation strips, etc.) and dynamic conditions (running vehicles 110, pedestrians 130, and spilled objects) of the road in the coverage area using its various sensors to obtain and store various sensor data.
As described above, the roadside sensing device 200 includes various sensors, for example, radar sensors such as the millimeter wave radar 222 and the laser radar 224, and image sensors such as the camera 226 and the infrared probe 228 having a light supplement function, and the like. For the same object, various sensors can obtain different properties of the object, for example, a radar sensor can perform object velocity and acceleration measurements, and an image sensor can obtain the shape and relative angle of the object.
In step S310, processing and fusion may be performed on the obtained raw sensor data, thereby forming unified road data. In one embodiment, step S310 may further include a substep S312. In step S312, static information about the predetermined range of the road position, which is stored in advance, is acquired. After the roadside sensing device is deployed at a certain position of a road, the range of the road covered by the sensing device is fixed, and static information of that predetermined range, such as road width, number of lanes, and turning radius, may be obtained. There are a number of ways to obtain the static information of a road. In one embodiment, this static information may be pre-stored in the sensing device at the time of its deployment. In another embodiment, the location information of the sensing device may be obtained first, and then a request containing the location information may be sent to the server 160, so that the server 160 returns the static information of the relevant road range according to the request.
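The request-based variant of step S312 can be sketched as below. The request fields, the `query` interface, and the `FakeServer` stand-in are all illustrative assumptions; the patent does not specify a wire format.

```python
def fetch_static_info(server, latitude: float, longitude: float,
                      radius_m: float = 300.0) -> dict:
    """Ask the server for pre-stored static road information around the
    sensing device's geographical position. `server` is any object
    exposing a `query(request: dict) -> dict` call (hypothetical API)."""
    request = {
        "type": "static_road_info",
        "position": {"lat": latitude, "lon": longitude},
        "radius_m": radius_m,
    }
    return server.query(request)

class FakeServer:
    """Stand-in for the cloud service platform (server 160)."""
    def query(self, request: dict) -> dict:
        # A real server would look these values up from a map database.
        return {
            "road_width_m": 15.0,
            "lane_count": 4,
            "turning_radius_m": 250.0,
            "covered_range_m": request["radius_m"],
        }
```

In deployment, `FakeServer` would be replaced by an actual network client; the call pattern stays the same.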
Subsequently, in step S314, the raw sensor data is processed per sensor to form sensing data such as distance measurements, speed measurements, type identification, and size identification. Next, in step S316, the static road data obtained in step S312 is used as a reference to calibrate the data of the different sensors against one another, finally forming unified road data.
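The fusion of step S316 can be illustrated with a minimal nearest-neighbour association between two sensor modalities. This is a simplified sketch under assumed field names ("pos", "speed_mps", "type"), not the calibration procedure of the patent: radar detections carry position and speed, camera detections carry position and type, and the two are merged within a gating distance.

```python
import math

def fuse_detections(radar_objs: list, camera_objs: list,
                    gate_m: float = 2.5) -> list:
    """Associate radar detections (position + speed) with camera
    detections (position + type) by nearest neighbour within `gate_m`,
    using the radar position as the reference. Unmatched detections are
    kept so that no object is silently dropped."""
    fused, used = [], set()
    for r in radar_objs:
        best, best_d = None, gate_m
        for i, c in enumerate(camera_objs):
            if i in used:
                continue
            d = math.dist(r["pos"], c["pos"])
            if d < best_d:
                best, best_d = i, d
        entry = {"pos": r["pos"], "speed_mps": r["speed_mps"], "type": None}
        if best is not None:
            used.add(best)
            entry["type"] = camera_objs[best]["type"]
        fused.append(entry)
    # Camera-only detections, e.g. stationary debris the radar missed.
    for i, c in enumerate(camera_objs):
        if i not in used:
            fused.append({"pos": c["pos"], "speed_mps": 0.0,
                          "type": c["type"]})
    return fused
```

A production system would use a proper assignment algorithm (e.g. Hungarian matching) and track objects over time; the greedy loop here only shows the idea of cross-sensor calibration.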
Steps S312-S316 describe one way to obtain road data. The invention is not limited to the particular manner in which the data of the various sensors is fused to form the road data; any such approach is within the scope of the present invention as long as the road data contains static and dynamic information for the objects within the predetermined range of the road position.
According to one embodiment, each vehicle 110 entering the coverage area of the roadside sensing device 200 actively communicates with the sensing device 200 through various communication means (e.g., V2X). In step S318, the vehicle 110 transmits its vehicle travel information to the sensing device 200. The vehicle travel information includes, for example, the current time at which the information is generated, and the size, speed, acceleration, angular velocity, and position of the vehicle. Step S310 may further include a step S319, in which the vehicle travel information obtained in step S318 is further fused with the road data formed in step S316 to form new road data.
Next, in step S320, one or more vehicles within the coverage of the sensing device and the motion information of those vehicles are identified based on the road data obtained in step S310. The identification in step S320 includes two aspects. One aspect is vehicle identification, i.e., identifying which objects in the road data are vehicle objects. Vehicle objects have distinctive motion characteristics, such as a relatively high speed, traveling along a lane in one direction, and generally not colliding with other objects. A conventional classification detection model or a deep-learning-based model may be constructed based on these motion characteristics and applied to the road data, thereby determining the vehicle objects in the road data and their motion characteristics, such as motion trajectories.
Another aspect is determining a vehicle identification. For each recognized vehicle object, its vehicle identification is further determined. One way to determine the identification is to determine the unique license plate of the vehicle, for example by means of image recognition. When the license plate cannot be identified, another way is to generate a unique identifier of the vehicle by combining the size, type, position information, driving speed, and the like of the vehicle object. The vehicle identification is the unique identifier of the vehicle object within the road section and is used to distinguish it from other vehicle objects. It is used in subsequent data transmission and is shared among the different roadside sensing devices along the road to facilitate overall analysis.
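The two identification paths can be sketched together. This is an illustrative fallback scheme with hypothetical field names, not the patented identification method: prefer the recognized license plate, otherwise hash the combination of size, type, position, and speed into an anonymous identifier.

```python
import hashlib

def vehicle_identity(obj: dict) -> str:
    """Return a unique identifier for a recognised vehicle object.

    Prefer the license plate read by image recognition; when no plate is
    available, fall back to a fingerprint combining size, type, position,
    and speed. All field names here are illustrative."""
    plate = obj.get("plate")
    if plate:
        return f"plate:{plate}"
    # Deterministic anonymous identifier from the object's attributes.
    fingerprint = "|".join(str(obj.get(k)) for k in
                           ("size_m", "type", "pos", "speed_mps"))
    digest = hashlib.sha1(fingerprint.encode("utf-8")).hexdigest()[:12]
    return f"anon:{digest}"
```

Because the fingerprint is deterministic, adjacent roadside sensing devices that observe the same attributes derive the same identifier, which is what allows the identification to be passed along the road for overall analysis.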
Subsequently, in step S330, based on the road data obtained in step S310 and the vehicle object and its motion information recognized in step S320, data analysis is performed to determine driving-related information of the vehicle.
The present invention includes a variety of driving-related information and thus has a plurality of analysis models for analysis of the driving-related information.
According to one embodiment, data analysis is actively performed in step S330 to determine the driving-related information. In this case, if the driving-related information is, for example, a potential collision risk, then in step S330 vehicles in the road having a potential collision risk are detected, where the collision may include a forward collision, an overtaking collision, a lane change collision, and the like. Potential collision detection may be performed in various ways. One way is to use a collision detection model to detect vehicles with a possibility of collision from the road data. Another way is to use deep learning, determining vehicles with a possibility of collision by analyzing a large number of actual road collision examples. The invention is not limited to the particular manner in which potential collision detection is performed.
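As one concrete stand-in for such a collision detection model, a time-to-collision (TTC) check for the forward collision case can be written in a few lines. This is a minimal sketch under assumed field names, not the model of the patent; real systems would also consider lane geometry, acceleration, and lane-change and overtaking cases.

```python
def forward_collision_risk(rear: dict, front: dict,
                           ttc_threshold_s: float = 3.0) -> bool:
    """Flag a forward collision risk between two same-lane vehicles.

    `rear` and `front` are road-data vehicle objects with longitudinal
    position `pos_m` and speed `speed_mps` (illustrative field names).
    Risk is flagged when the time to collision falls below the threshold.
    """
    gap_m = front["pos_m"] - rear["pos_m"]
    closing_mps = rear["speed_mps"] - front["speed_mps"]
    if gap_m <= 0 or closing_mps <= 0:
        return False  # already passed, or the gap is not shrinking
    return gap_m / closing_mps < ttc_threshold_s
```

For instance, a 25 m gap closing at 10 m/s gives a TTC of 2.5 s, below the 3 s threshold, so a warning would be issued.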
According to another embodiment, in step S330, data analysis may be performed to determine driving-related information of the vehicle 110 according to the request of the vehicle. In this case, the driving-related information is, for example, scene information related to a scene requested by the vehicle.
Each scene, and the information corresponding to it, may be defined in advance. For example, when the scene is night-vision assistance, the driving-related information includes information on the road and vehicles within a certain range in front of the vehicle; when the scene is a 360-degree panoramic view, the driving-related information comprises all information within a certain range around the vehicle; and when the scene is beyond-line-of-sight, the driving-related information comprises all information within the range that is occluded from the vehicle's view.
For this, step S330 may further include step S332 and step S334. In step S332, a scene request transmitted from the vehicle 110 is received, and in step S334, driving-related information corresponding to the scene is determined based on the road data and the identification and motion information of the vehicle 110. Since the identity of the vehicle 110 is known, dynamic and static information of other vehicles and environments around the vehicle 110 may be determined from the road data, so that driving-related information corresponding to the requested scene may be provided.
Whether the data analysis in step S330 is performed actively or passively at the request of a vehicle, vehicle matching is required, i.e., determining which vehicle object within the coverage of the current sensing device 200 corresponds to the vehicle 110 that is to receive the analysis results. For example, if potential collision detection is performed in step S330, then after a vehicle with a higher collision possibility is determined, the vehicle identification and the communication means matching that vehicle need to be determined. If the driving-related information is scene-related information, then after a scene request is received from the vehicle 110, the requesting vehicle needs to be matched against the vehicles within the coverage area so as to determine which vehicle object made the scene request and perform the data analysis for it.
Vehicle matching may be performed through one or a combination of various matching modes, such as license plate matching, driving speed and type matching, and fuzzy matching on position information. According to one embodiment, the vehicle 110 may bind its license plate information through V2X or application verification, and the license plate information may then be matched against the vehicle data of the corresponding license plate in the roadside sensing device and the server, thereby implementing license plate matching.
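The plate-first matching with a fuzzy fallback can be sketched as below. Field names and tolerances are illustrative assumptions, not values from the patent.

```python
def match_vehicle(request: dict, perceived: list):
    """Match a requesting vehicle 110 to a perceived vehicle object.

    Try exact license-plate matching first; fall back to fuzzy matching
    on position and speed. `pos_m`, `speed_mps`, and the 5 m / 2 m/s
    tolerances are illustrative. Returns None when nothing matches."""
    plate = request.get("plate")
    if plate:
        for obj in perceived:
            if obj.get("plate") == plate:
                return obj
    # Fuzzy fallback: closest object within both tolerances.
    best, best_err = None, float("inf")
    for obj in perceived:
        dpos = abs(obj["pos_m"] - request["pos_m"])
        dspd = abs(obj["speed_mps"] - request["speed_mps"])
        if dpos <= 5.0 and dspd <= 2.0 and dpos + dspd < best_err:
            best, best_err = obj, dpos + dspd
    return best
```

Returning `None` lets the caller fall back to, e.g., requesting the vehicle to re-send its travel information rather than delivering results to the wrong vehicle.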
After the driving-related information is determined in step S330, the determined driving-related information is transmitted to the corresponding vehicle 110 through a predetermined communication means in step S340. In step S340, the communication means associated with the matched vehicle 110 is determined, and the driving-related information is transmitted to the corresponding vehicle using that communication means. Optionally, the communication means is generally V2X or a mobile communication means such as 5G, 4G, or 3G.
After receiving the driving-related information, the vehicle 110 may perform different processes according to the attributes of that information. For example, if it is scene-related data, the driving-related information is displayed, according to the scene definition, on a display screen or in an application such as the in-vehicle central control screen, an intelligent instrument panel, or navigation software.
If the driving-related information is warning information, such as a collision warning, it can be presented to the vehicle owner in various ways, such as display, voice, alarm, and vibration, according to the type and urgency of the warning.
Fig. 4 shows a schematic representation of a driving assistance method 400 according to another embodiment of the invention. The driving assistance method 400 is adapted to be executed in a vehicle 110 that runs on a road on which the roadside sensing device 200 is disposed. The method 400 includes step S410. In step S410, driving-related information is received from a nearby roadside sensing device 200 through a predetermined communication means. Step S410 corresponds to step S340 in the method 300 described above with reference to fig. 3, so the driving-related information is generated by the roadside sensing device according to road data within a predetermined range of its road position; the processing is not described again here.
Subsequently, in step S420, the received driving-related information is output in the vehicle 110. In step S420, the output manner may be determined according to the attribute of the driving-related information.
If the driving-related information is warning information such as a collision warning, then in addition to presenting the warning information within the vehicle in a conventional manner, the method 400 may include step S430, in which the driver or owner of the vehicle is notified of the potential collision risk; for example, the warning information may be presented in a number of different manners, such as display, voice, alarm, and vibration, depending on the type and urgency of the warning. In addition, the method 400 may further include step S440, in which the warning information is converted into vehicle control to directly control the vehicle or to provide various driving assistance capabilities, including forward collision warning, overtaking warning, lane change warning, blind zone warning, and rear vehicle protection, so as to reduce the possibility of collision, thereby forming more efficient, safer, and more direct driving assistance.
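The escalation from presentation to direct vehicle control can be sketched as a mapping from warning type and urgency to output channels. The urgency levels, thresholds, and channel names are illustrative assumptions; the patent leaves the concrete policy open.

```python
def warning_outputs(warning_type: str, urgency: str) -> list:
    """Map a warning's type and urgency to in-vehicle output channels,
    sketching the escalation display -> voice -> alarm -> vibration ->
    direct vehicle control. All thresholds are illustrative."""
    channels = ["display"]  # every warning is at least shown (step S420)
    if urgency in ("medium", "high", "critical"):
        channels.append("voice")
    if urgency in ("high", "critical"):
        channels += ["alarm", "vibration"]  # step S430 notification
    if urgency == "critical" and warning_type in (
            "forward_collision", "lane_change_collision"):
        channels.append("vehicle_control")  # step S440, e.g. auto braking
    return channels
```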
If the driving-related information is scene-related data, as described above with reference to fig. 3, the method 400 may further include a corresponding step S450. In step S450, a scene request is sent to the roadside sensing device through a predetermined communication means, and in step S420 the driving-related information is displayed, according to the scene definition, on a display screen or in an application such as the in-vehicle central control screen, an intelligent instrument panel, or navigation software.
In addition, optionally, in order to better construct the road data, the method 400 may further include step S460, in which vehicle travel information is transmitted to the roadside sensing device through a predetermined communication means. The processing in step S460 corresponds to step S318 and is not described again here.
According to the driving assistance scheme of the present invention, the perception capability of the roadside unit can be fully utilized, and the perceived data is further analyzed and processed before being provided to the vehicle, so that efficient driving assistance can be achieved.
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules of the device in an embodiment may be adaptively changed and disposed in one or more devices different from those of the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract, and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments and not other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments described herein are described as a method or a combination of method elements that can be performed by a processor of a computer system or by other means of carrying out the described functions. A processor having the necessary instructions for carrying out such a method or method element thus forms a means for carrying out the method or method element. Further, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.