Disclosure of Invention
To this end, the present invention provides a driving state detection method, a mobile terminal, and a storage medium in an effort to solve or at least alleviate at least one of the problems presented above.
According to an aspect of the present invention, there is provided a driving state detection method implemented in a mobile terminal, the mobile terminal including an infrared camera module adapted to continuously capture facial images of a user and generate a sequence of video frames, the method including: when a video frame is acquired, locating the eyes in the video frame and extracting an eye pattern; performing eyelid segmentation and iris segmentation on the extracted eye pattern; calculating the eye opening degree from the segmentation result, wherein the eye opening degree is the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer-contour curve to the area of the figure enclosed by the iris outer-contour curve; judging whether the eyes are closed according to the eye opening degree; and judging whether the user is driving while fatigued according to the proportion of video frames in which the eyes are in the closed state among a predetermined number of consecutive video frames including the current video frame.
This scheme judges the closed state of the eyes from a two-dimensional relative opening degree, so it is not affected by an individual's naturally small eye opening, and combining judgments over consecutive frames improves the accuracy of fatigue state detection.
Optionally, the method further comprises: calculating the magnitude of a viewing angle based on the position of the center of the pupil's reflected infrared light spot relative to the center of the iris, wherein the direction of the viewing angle is the direction from the center of the reflected light spot toward the center of the iris; judging whether the eyes are looking straight ahead according to the viewing angle; and judging whether the user's attention is focused according to the proportion of video frames in which the eyes are not looking straight ahead among a predetermined number of consecutive video frames including the current video frame.
Optionally, the method further comprises: storing the eye closure state and the viewing angle determined from the video frames in a circular linked list, so as to count the proportion of video frames in which the eyes are closed and/or the proportion of video frames in which the eyes are not looking straight ahead.
Optionally, the method further comprises: after judging that the user is driving while fatigued, issuing first early warning information to prompt the user that he or she is currently in a fatigue driving state; and/or after judging that the user's attention is not focused, issuing second early warning information to prompt the user to concentrate.
Optionally, the position of the user's eyes is determined based on the position of the pupil's reflected infrared light spot in the video frame; and an eye pattern is extracted from the video frame according to the determined eye position.
Optionally, a saturated region of the reflected light spot is determined by binarizing the video frame based on the gray-scale distribution of the reflected light spot and the pupil-iris region; the saturated regions are filtered with horizontal and vertical gradient filters to determine the position of the reflected light spot; and the position of the user's eye is determined based on the position of the reflected spot.
Optionally, the upper and lower eyelid curves are fitted by polynomials based on the vertical-direction gradient between the upper and lower eyelids and the sclera; and the iris outer-contour curve is fitted by a circle based on the horizontal-direction gradient between the left and right sides of the iris and the sclera.
Optionally, the method further comprises: acquiring the eye opening degree of the user in a non-fatigued state as an initial opening degree.
Optionally, based on the initial opening degree, determining a predetermined threshold value, the predetermined threshold value being a product of the initial opening degree and a predetermined coefficient; and when the eye opening degree is smaller than a preset threshold value, judging that the eyes are in a closed state.
Optionally, the viewing angle is calculated based on a distance between a center of the reflected spot and a center of the iris and a radius of the iris.
Optionally, the viewing angle is calculated by the following formula:

α = arcsin(d / R)

wherein α is the viewing angle, R is the iris radius, and d is the distance between the center of the reflected light spot and the center of the iris.
According to another aspect of the present invention, there is provided a mobile terminal comprising one or more processors; and a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing a driving state detection method.
According to another aspect of the present invention, there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a mobile terminal, cause the mobile terminal to perform a driving state detection method.
According to this scheme, an infrared camera module collects a continuous sequence of video frames, face recognition and eye positioning are performed, eyelid segmentation and iris segmentation are performed on the eye pattern to determine the upper and lower eyelid curves and the iris outer-contour curve, then whether the eyes are closed and whether they are looking straight ahead are judged from the eye opening degree and the viewing angle, the proportions of eye-closed frames and non-forward-looking frames are comprehensively analyzed, the driving state is judged, and early warning information is given. When calculating the eye opening degree, individual differences between people are taken into account: the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer-contour curve to the area of the figure enclosed by the iris outer-contour curve is used as the eye opening degree. The method requires no additional device; only an application program installed on the mobile terminal is needed to execute the corresponding instructions. The driving state detection method is therefore general and simple, and can improve the accuracy of driving state detection.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Common fatigue monitoring systems infer the fatigue state of a driver from facial features, eye signals, head movements and the like, and issue early warning prompts. The present invention performs driving state detection with a mobile terminal that collects infrared images, for example a mobile phone with active infrared illumination, to obtain an image of the eye region. The ratio of the area of the iris region between the upper and lower eyelids to the whole iris area is used as the eye opening degree, so the method is not affected by individual differences in eye opening, can conveniently and effectively analyze the driver's state, and reminds a driver who is in a fatigued state.
Fig. 1 illustrates a block diagram of a mobile terminal 100 according to an embodiment of the present invention. The mobile terminal 100 may include a memory interface 102, one or more data processors, image processors and/or central processing units 104, a display screen (not shown in fig. 1), and a peripheral interface 106.
The memory interface 102, the one or more processors 104, and/or the peripherals interface 106 can be discrete components or can be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices, and subsystems can be coupled to peripheral interface 106 to facilitate a variety of functions.
For example, a motion sensor 110, a light sensor 112, and a distance sensor 114 may be coupled to the peripheral interface 106 to facilitate directional, lighting, and ranging functions. Other sensors 116 may also be coupled to the peripheral interface 106, such as a positioning system (e.g., a GPS receiver), a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functions.
The camera subsystem 120 and optical sensor 122, which may be, for example, a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) optical sensor, may be used to facilitate camera functions such as recording photographs and video clips. Communication functions may be facilitated by one or more wireless communication subsystems 124, which may include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The particular design and implementation of the wireless communication subsystem 124 may depend on the one or more communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include communication subsystems 124 designed to support LTE, 3G, GSM networks, GPRS networks, EDGE networks, Wi-Fi or WiMax networks, and Bluetooth™ networks.
The audio subsystem 126 may be coupled to a speaker 128 and a microphone 130 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions. The I/O subsystem 140 may include a touch screen controller 142 and/or one or more other input controllers 144. The touch screen controller 142 may be coupled to a touch screen 146. For example, the touch screen 146 and touch screen controller 142 may detect contact and movement or pauses made therewith using any of a variety of touch sensing technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies. One or more other input controllers 144 may be coupled to other input/control devices 148 such as one or more buttons, rocker switches, thumbwheels, infrared ports, USB ports, and/or pointing devices such as styluses. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the speaker 128 and/or microphone 130.
The memory interface 102 may be coupled with a memory 150. The memory 150 may include high speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 150 may store an operating system 172, such as an operating system like Android, iOS or Windows Phone. The operating system 172 may include instructions for handling basic system services and for performing hardware dependent tasks. The memory 150 may also store one or more programs 174. While the mobile device is running, the operating system 172 is loaded from the memory 150 and executed by the processor 104. The program 174 is also loaded from the memory 150 and executed by the processor 104 when running. The program 174 runs on top of the operating system, and implements various user-desired functions, such as instant messaging, web browsing, picture management, and the like, using interfaces provided by the operating system and underlying hardware. The program 174 may be provided separately from the operating system or may be self-contained. In addition, when the program 174 is installed in the mobile terminal 100, a driver module may be added to the operating system. The program 174 may be arranged to execute the relevant instructions on the operating system by the one or more processors 104. In some embodiments, the mobile terminal 100 is configured to perform a driving state detection method according to the present invention. Among other things, the one or more programs 174 of the mobile terminal 100 include instructions for performing a driving state detection method according to the present invention.
The mobile terminal 100 may be a portable electronic device such as a smart phone or a tablet computer, but is not limited thereto. Specifically, the camera subsystem 120 and the optical sensor 122 in the mobile terminal 100 form an infrared camera module that can continuously capture infrared images of the user's face to generate a sequence of video frames. The infrared camera module comprises an active infrared emitting device, and its capture frame rate is generally not lower than 15 frames per second. The mobile terminal can be fixed directly in front of the driver so that face images can be collected, and video frames in which no face is captured are discarded based on the overall gray level of the image. If no face image is acquired for multiple consecutive frames, a corresponding prompt can be given. The information for judging the driving state comes from the continuous facial video frames collected during driving by a mobile terminal with an infrared camera module.
Fig. 2 shows a schematic flow diagram of a driving state detection method according to an embodiment of the invention. As shown in fig. 2, in step S200, for each video frame acquired by the infrared camera module, the position of the eye is located from the video frame and the eye diagram is extracted.
Different parts of the eye reflect and refract infrared light differently. According to the corneal reflection principle, infrared light reflected from the front surface of the cornea forms a small bright area on the image, the reflected light spot. In a facial gray-scale image, the pupil is the darkest region and the reflected light spot corresponds to the brightest point in the eye image, with the gray values of the remaining parts lying between the two. The position of the light spot can therefore be located from the gray-level distribution pattern of the pupil relative to its surroundings, and the eye position can be located indirectly.
The position of the user's eyes can be determined based on the position of the pupil's reflected infrared light spot in the video frame, and an eye pattern can be extracted from the video frame according to the determined eye position.
As can be seen in the video frame, the gray value of the reflected light spot region has a large gradient relative to the pupil-iris region, and the gray value at the center of the spot is essentially saturated; that is, the pixel values of the reflected light spot in the image are saturated or nearly saturated, while the surrounding pixel values are much smaller.
According to one embodiment of the invention, the saturated region of the reflected light spot can be determined by binarizing the image based on the gray-scale distribution of the reflected light spot and the surrounding pupil-iris region. For example, a gray-scale image is binarized by taking a certain gray level as the dividing boundary: pixels at or above the boundary are set to 255 (full white) and pixels below it to 0, so that each pixel in the image takes one of two values. Since noise interference may produce several saturated regions, these regions need to be filtered.
For example, the spot position can be located by using vertical and horizontal filters to keep saturated regions whose edge gradient is larger than a given threshold and whose area and aspect ratio lie within a reasonable range. The contour of the reflected light spot can be located by edge detection with a pixel gradient filter: edge detection examines each pixel and its neighborhood to determine whether the pixel lies on the boundary of an object. Edge detection algorithms are primarily based on first and second derivatives of image intensity, but derivatives are very sensitive to noise, so filters can be convolved with the image to improve noise robustness. For example, the aspect ratio of the light spot is determined by extracting its upper and lower contours with a vertical filter and its left and right contours with a horizontal filter. Small interfering spots that may appear at the eye corners or on the eyelashes can be filtered out by spot area and aspect ratio. Once the spot is located, the center position of the eye is essentially determined, and the eye pattern is then extracted for eyelid segmentation and iris segmentation.
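The spot-localization step above can be sketched as follows. This is a minimal illustrative implementation, not the patented method itself: it binarizes near-saturated pixels, groups them into connected regions, and filters the regions by area and aspect ratio, as the text describes. All names and the threshold values (`sat_thresh`, `min_area`, `max_area`, aspect-ratio bounds) are hypothetical choices for the sketch.

```python
from collections import deque

def locate_glint(img, sat_thresh=250, min_area=2, max_area=50,
                 min_ar=0.5, max_ar=2.0):
    """Locate the corneal reflection (glint) in a grayscale frame.

    Binarize near-saturated pixels, flood-fill them into connected
    regions, discard regions whose area or aspect ratio is implausible
    for a glint (eye-corner/eyelash specks), and return the centroid
    (row, col) of the largest surviving region, or None.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    best = None
    for y in range(h):
        for x in range(w):
            if img[y][x] < sat_thresh or seen[y][x]:
                continue
            # BFS flood fill over one saturated region
            region = []
            q = deque([(y, x)])
            seen[y][x] = True
            while q:
                cy, cx = q.popleft()
                region.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                            and img[ny][nx] >= sat_thresh):
                        seen[ny][nx] = True
                        q.append((ny, nx))
            ys = [p[0] for p in region]
            xs = [p[1] for p in region]
            aspect = (max(xs) - min(xs) + 1) / (max(ys) - min(ys) + 1)
            if min_area <= len(region) <= max_area and min_ar <= aspect <= max_ar:
                if best is None or len(region) > best[0]:
                    best = (len(region),
                            (sum(ys) / len(region), sum(xs) / len(region)))
    return best[1] if best else None
```

A single-pixel noise speck is rejected by `min_area`, while a compact 2×2 glint passes both the area and aspect-ratio checks.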
In step S210, eyelid segmentation and iris segmentation may be performed on the extracted eye pattern.
Because there are large gradients between the eyelids and the sclera and between the left and right sides of the iris and the sclera, the upper and lower eyelid curves can be fitted by polynomials based on the vertical-direction gradient between the eyelids and the sclera, and the iris outer-contour curve can be determined by fitting a circle with the least-squares method based on the horizontal-direction gradient between the left and right sides of the iris and the sclera.
For example, a gradient filter is used to localize discrete, noisy upper-eyelid curve segments; since this localization is only local, curve fitting is required to obtain a complete eyelid curve. Geometrically, a polynomial fit seeks the curve that minimizes the sum of squared distances to the given points. For example, segments with a large gradient in the vertical direction are determined and connected. A quadratic function matches the actual eyelid shape well, and, to avoid a higher-order function overfitting to noise, the curve formed by the segments can be fitted with a downward-opening quadratic function. The lower eyelid curve is fitted similarly and is not described in detail here.
Because the iris outline is occluded by the upper and lower eyelids, its upper and lower contours are sometimes difficult to locate accurately. Discrete iris-contour curve segments can instead be located with a gradient filter in the horizontal direction: for example, segments with a large horizontal gradient at the left and right boundaries of the iris are determined and connected, the parameters are estimated by the least-squares method, and the curve segments are fitted with a circle to determine the outer contour of the iris. Other curve-fitting approaches can also be used; this scheme is not limited in this respect.
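The two least-squares fits described above can be sketched as follows. The quadratic eyelid fit solves the usual normal equations for y = c0 + c1·x + c2·x², and the circle fit uses the standard linear (Kåsa) formulation x² + y² = a·x + b·y + c, whose solution gives center (a/2, b/2) and radius √(c + a²/4 + b²/4). This is an illustrative sketch with stdlib Python only; the patent does not prescribe a particular solver, and all function names are hypothetical.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(pts):
    """Least-squares eyelid fit y = c0 + c1*x + c2*x^2 (normal equations)."""
    S = [sum(x ** k for x, _ in pts) for k in range(5)]
    T = [sum(y * x ** k for x, y in pts) for k in range(3)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    return solve3(A, T)

def fit_circle(pts):
    """Least-squares circle fit for the iris contour: x^2+y^2 = a*x+b*y+c."""
    A = [[sum(x * x for x, _ in pts), sum(x * y for x, y in pts), sum(x for x, _ in pts)],
         [sum(x * y for x, y in pts), sum(y * y for _, y in pts), sum(y for _, y in pts)],
         [sum(x for x, _ in pts), sum(y for _, y in pts), float(len(pts))]]
    b = [sum(x * (x * x + y * y) for x, y in pts),
         sum(y * (x * x + y * y) for x, y in pts),
         sum(x * x + y * y for x, y in pts)]
    a, bb, c = solve3(A, b)
    cx, cy = a / 2, bb / 2
    return cx, cy, (c + cx * cx + cy * cy) ** 0.5
```

Feeding `fit_circle` points sampled from the left and right iris boundaries recovers the iris center and radius used later for the opening degree and the viewing angle.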
Figure 3 shows a schematic diagram of upper and lower eyelid curves and iris curves, in accordance with one embodiment of the present invention.
In step S220, the eye opening degree may be calculated from the segmentation result, wherein the eye opening degree is the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer-contour curve to the area of the figure enclosed by the iris outer-contour curve. As shown in fig. 3, the hatched area is the overlap between the figure enclosed by the upper and lower eyelid curves and the figure enclosed by the iris outer-contour curve.
According to an embodiment of the present invention, the opening degree of the eyes of the user in the non-fatigue state may be acquired as the initial opening degree.
The opening degree describes how far the eyes are open; for an individual, the eyes open wide in a non-fatigued state, i.e., the shaded area is large. Before driving state detection is carried out, the area of the overlap between the figure enclosed by the upper and lower eyelids of the user's normally opened eyes in the non-fatigued state and the figure enclosed by the iris outer contour can be initialized as S0, and the area of the figure enclosed by the iris outer contour as S; the initial opening degree a0 = S0 / S then serves as the reference value for judging the eye-closed state.
In step S230, it may be determined whether the eyes are closed according to the eye opening.
According to an embodiment of the present invention, a predetermined threshold value may be determined based on the initial opening degree, the predetermined threshold value being a product of the initial opening degree and a predetermined coefficient; and when the eye opening degree is smaller than a preset threshold value, judging that the eyes are in a closed state.
From the eye pattern extracted at a certain time t, calculate the area St of the overlap between the figure formed by the upper and lower eyelid curves and the figure formed by the iris outer-contour curve, and the ratio at = St / S of this area to the area S of the figure formed by the iris outer-contour curve. The predetermined coefficient can be adjusted to actual conditions; empirically it can be 0.5. If at < 0.5 * a0, the eyes can be considered closed; otherwise they are considered open.
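The opening-degree computation and the threshold test can be sketched numerically: the overlap area St is estimated by sampling the iris disc on a grid and counting the samples that also lie between the two eyelid curves, which avoids an analytic intersection of a circle with two quadratics. This is an illustrative sketch; the eyelid curves are passed as arbitrary callables, and image coordinates are assumed (y grows downward, so "between the lids" means upper(x) ≤ y ≤ lower(x)).

```python
def eye_openness(upper, lower, cx, cy, r, n=200):
    """Estimate a = St / S by grid sampling the iris disc.

    upper, lower: callables x -> y giving the fitted eyelid curves.
    (cx, cy, r): fitted iris circle. Returns the fraction of iris-disc
    samples that fall between the upper and lower eyelid curves.
    """
    in_iris = 0
    overlap = 0
    for i in range(n):
        for j in range(n):
            # midpoint sampling over the disc's bounding box
            x = cx - r + (2 * r) * (i + 0.5) / n
            y = cy - r + (2 * r) * (j + 0.5) / n
            if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                in_iris += 1
                if upper(x) <= y <= lower(x):
                    overlap += 1
    return overlap / in_iris

def eyes_closed(a_t, a0, coeff=0.5):
    """Closed-eye decision: current openness below coeff * a0."""
    return a_t < coeff * a0
```

For instance, an upper eyelid lying exactly on the horizontal diameter of the iris yields an openness near 0.5, since the lids then expose only the lower half of the iris disc.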
The eye closure states determined from the video frames may be saved in a circular linked list to count the occupancy of the video frames in which the eyes are in the closure state.
In step S240, it may be determined whether the user is driving fatigue according to a proportion of video frames with eyes in a closed state among a predetermined number of consecutive video frames including the current video frame.
For example, the eye closure state of the last 30 frames is counted. By traversing the 30-frame history, if the number of frames in which the opening degree is below 0.5 * a0 (i.e., the eyes are closed) exceeds 15, the driver is judged to be currently in a fatigue driving state. After the user is judged to be driving while fatigued, first early warning information can be issued to prompt the user that he or she is currently fatigued and needs to stop and rest.
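The sliding-window fatigue judgment can be sketched as follows. The text describes a circular linked list holding per-frame states; a fixed-size `deque` gives the same ring-buffer behavior in Python. The class name and parameter defaults (window of 30 frames, more than 15 closed frames) follow the example figures in the text; they are illustrative, not prescribed limits.

```python
from collections import deque

class FatigueMonitor:
    """Judge fatigue driving over the most recent video frames.

    Keeps the eye-closed flag of the last `window` frames in a
    ring buffer and flags fatigue when the count of closed-eye
    frames exceeds `closed_limit`.
    """
    def __init__(self, window=30, closed_limit=15):
        self.states = deque(maxlen=window)  # oldest frame drops out automatically
        self.closed_limit = closed_limit

    def update(self, eye_closed):
        """Record one frame's state; return True if fatigue is detected."""
        self.states.append(bool(eye_closed))
        return sum(self.states) > self.closed_limit
```

Each decoded frame calls `update()` once, so the judgment always reflects the latest 30 frames including the current one.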
According to an embodiment of the present invention, the viewing angle may be calculated based on the relative position of the center of the reflected light spot of the pupil to the infrared light and the center of the iris, and the direction of the viewing angle is a direction in which the center of the reflected light spot points to the center of the iris.
Figure 4 shows a schematic view of the location of the center of the reflected spot relative to the center of the iris according to one embodiment of the present invention. As shown in fig. 4, O is the center of the iris, P is the center of the reflected light spot, and the viewing direction is the direction in which the center P of the reflected light spot points to the center O of the iris.
With the head stationary, the relative position of the pupil center and the spot center changes as the eyeball moves, and the gaze direction and the change of the fixation point can be obtained from this relative position. In the eye gray-scale image, the pupil is darkest, the reflected light spot corresponds to the brightest point, and the gray values of the remaining parts lie between the two. The gray-scale image is binarized, the largest spot in the binary image is extracted, and its position is recorded; the geometric center of the spot is the center of the reflected light spot. Because the pupil image has low gray values, the iris image has higher gray values, and the gray value changes sharply near their boundary, iris boundary points can be extracted with a gradient-filter-based edge detection method, and the outer contour of the iris is then determined by a least-squares circle fit, which yields the iris center point.
When a person looks straight ahead, the light spot falls at the center of the iris; when the person looks to one side, the reflected spot falls on the part of the iris opposite to the gaze direction, and the larger the squint angle, the closer the reflected spot is to the contour boundary on that opposite side of the iris.
Thus, the viewing angle can be calculated based on the distance between the center of the reflected spot and the center of the iris and the radius of the iris:

α = arcsin(d / R)

where α is the viewing angle, R is the iris radius, and d is the distance between the center P of the reflected spot and the center O of the iris (as shown in fig. 4). If line-of-sight deflection in the horizontal direction is being tested, d is the distance in the horizontal direction. More attention is paid here to viewing-angle deflection in the horizontal direction, because with deflection in the vertical direction, i.e., when looking up or down, the eye opening degree is smaller than normal and the frame can be treated as indicating a fatigue state.
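The viewing-angle formula can be sketched directly. Note that the relation α = arcsin(d / R) is reconstructed here from the surrounding description (d = 0 with the spot at the iris center means looking straight ahead, and the angle grows as the spot approaches the iris boundary, d → R); the sketch assumes it, and clamps d to R so segmentation noise cannot push `asin` out of its domain.

```python
import math

def viewing_angle(spot_center, iris_center, iris_radius):
    """Viewing-angle magnitude (degrees) from the spot-to-iris offset.

    spot_center, iris_center: (x, y) of the reflected light spot P
    and the iris center O from the fits; iris_radius: fitted R.
    Implements alpha = arcsin(d / R) with d = |OP|.
    """
    dx = iris_center[0] - spot_center[0]
    dy = iris_center[1] - spot_center[1]
    d = math.hypot(dx, dy)
    d = min(d, iris_radius)  # clamp fit noise so asin stays defined
    return math.degrees(math.asin(d / iris_radius))
```

The viewing direction itself is the unit vector from P toward O, as the text states; only the magnitude is computed here.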
The viewing angle determined from each video frame may be stored in a history record that uses a circular linked list to hold the viewing angles of a predetermined number of video frames, so as to count the proportion of frames in which the eyes are not looking straight ahead. The circular linked list structure makes list processing more convenient and flexible: only the linking of the list needs to be changed slightly, without increasing storage.
Whether the eyes are looking straight ahead can be judged from the viewing angle, and whether the user's attention is focused can be judged from the proportion of frames in which the eyes are not looking straight ahead among a predetermined number of consecutive video frames including the current frame. If the viewing angle is larger than zero, the eyes are not looking straight ahead. If, by traversing the viewing angles of the latest 30 frames in the circular linked list, the number of non-forward-looking frames exceeds a preset threshold, for example 10 frames, the user may not be driving attentively and may be driving while fatigued, and early warning information is issued to prompt the user to concentrate.
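The attention judgment mirrors the fatigue judgment, only over viewing angles instead of closure flags. A minimal sketch, again using a fixed-size `deque` in place of the circular linked list; the small `angle_eps` tolerance is an added assumption, since a fitted angle is rarely exactly zero even when the driver looks straight ahead.

```python
from collections import deque

class AttentionMonitor:
    """Count non-forward-looking frames in the recent window.

    Stores the viewing angle (degrees) of the last `window` frames
    and flags inattention when more than `off_limit` of them are
    non-zero (eyes not looking straight ahead).
    """
    def __init__(self, window=30, off_limit=10, angle_eps=1e-6):
        self.angles = deque(maxlen=window)
        self.off_limit = off_limit
        self.angle_eps = angle_eps

    def update(self, angle):
        """Record one frame's viewing angle; return True on inattention."""
        self.angles.append(angle)
        off = sum(1 for a in self.angles if a > self.angle_eps)
        return off > self.off_limit
```

With the defaults, the eleventh consecutive off-axis frame within the 30-frame window triggers the second early warning.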
Fig. 5 shows a schematic flow diagram of driving state detection according to an embodiment of the invention. The video stream collected by the infrared camera module is first read, and it is judged whether a face image has been collected. Eye positioning is performed on video frames containing a face image, followed by eyelid segmentation and iris segmentation, and the upper and lower eyelid curves and the iris outer-contour curve are obtained by fitting. Whether the eyes are closed is judged by calculating the eye opening degree and comparing it with the initial opening degree, where the eye opening degree is the ratio of the area of the overlap between the figure enclosed by the upper and lower eyelid curves and the iris outer contour to the area of the figure enclosed by the iris outer contour, and the viewing angle of the eye is calculated from the relative position of the center of the reflected light spot and the center of the iris. Then, among the predetermined number of consecutive video frames including the current frame, the proportion of frames in which the eyes are not looking straight ahead and the proportion of frames in which the eyes are closed are comprehensively counted, and the driving state is judged; different driving states such as an awake state, a light fatigue driving state, and a heavy fatigue driving state can be determined according to the different proportions and indices, and different early warning prompt information is issued for the different states. The history record in the circular linked list is updated at a preset time or after a preset number of frames.
Fig. 6 is a schematic view illustrating a monitoring state of a mobile terminal using a driving state detection method according to an embodiment of the present invention. As shown in fig. 6, the mobile terminal can recognize a human face, determine the current driving state based on the positioning and analysis of the eyes, issue a warning when determining fatigue driving, prompt the user to avoid fatigue driving, display the duration of driving, count the driving state within the duration of driving, and provide personalized services through user settings.
According to the scheme of the invention, an infrared camera module collects a continuous sequence of video frames for face recognition and eye positioning, eyelid segmentation and iris segmentation are performed on the eye pattern to determine the upper and lower eyelid curves and the iris outer-contour curve, then whether the eyes are closed and whether they are looking straight ahead are judged from the eye opening degree and the viewing angle, the proportions of eye-closed frames and non-forward-looking frames are comprehensively analyzed, the driving state is judged, and early warning information is given. Individual differences between people are taken into account when calculating the eye opening degree, and whether the eyes are closed is judged against each individual's own reference value of eye opening. The method requires no additional device; only an application program installed on the mobile terminal is needed to execute the corresponding instructions. The driving state detection method is therefore general and simple, and can improve the accuracy of driving state detection.
A7, the method as in A6, wherein the step of judging whether the eyes are closed according to the eye opening degree comprises: determining a predetermined threshold based on the initial opening degree, wherein the predetermined threshold is the product of the initial opening degree and a predetermined coefficient; and when the eye opening degree is smaller than the predetermined threshold, judging that the eyes are in a closed state.
A8, the method as in A2, wherein the step of calculating the viewing angle based on the position of the center of the pupil's reflected infrared light spot relative to the center of the iris comprises: calculating the viewing angle based on the distance between the center of the reflected light spot and the center of the iris and the radius of the iris.
A9, the method of A8, wherein the viewing angle is calculated by the formula:

α = arcsin(d / R)

wherein α is the viewing angle, R is the iris radius, and d is the distance between the center of the reflected light spot and the center of the iris.
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.