Disclosure of Invention
In view of this, the present disclosure provides a positioning result visualization method and apparatus based on a virtual intelligent medical platform.
According to one aspect of the disclosure, a positioning result visualization method based on a virtual intelligent medical platform is provided, which includes:
obtaining a three-dimensional visual virtual image according to the target object data;
carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
combining an accelerator beam three-dimensional model in the virtual image and the registration result, and rendering to obtain a positioning result;
and displaying the positioning result during the radiotherapy positioning process.
In a possible implementation manner, the obtaining a three-dimensional visualized virtual image according to target object data includes:
acquiring DICOM RT (Digital Imaging and Communications in Medicine, Radiation Therapy) data of a target object through a DICOM network;
extracting the target object data according to the DICOM RT data;
establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data;
and obtaining the three-dimensional visual virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
In one possible implementation, the establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data includes:
analyzing the target object data to obtain radiotherapy related data;
establishing corresponding three-dimensional model data according to the radiotherapy-related data;
and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and a target object related three-dimensional model.
In a possible implementation manner, the obtaining a registration result by performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene includes:
acquiring a real-time picture of the real positioning scene;
obtaining the characteristic points of the real positioning scene according to the real-time picture;
and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the characteristic points to obtain a registration result.
In one possible implementation, the feature points correspond to position markers added to the skin of the target object during a Computed Tomography (CT) scan.
In one possible implementation, the displaying the positioning result during the radiotherapy positioning process includes:
determining at least one target position according to the position and the visual angle of the target object in the real radiotherapy scene;
and displaying the positioning result at the target position through a display device.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, plan information, structure set information and dosage information;
the target object-related three-dimensional model comprises: a target area three-dimensional model, an ROI three-dimensional model and a dose distribution three-dimensional model.
According to another aspect of the present disclosure, there is provided a positioning result visualization apparatus based on a virtual intelligent medical platform, including:
the virtual image construction module is used for obtaining a three-dimensional visual virtual image according to the target object data;
the virtual-real registration module is used for carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
the rendering module is used for combining the accelerator beam three-dimensional model in the virtual image and the registration result, and rendering to obtain a positioning result;
and the display module is used for displaying the positioning result in the radiotherapy positioning process.
According to another aspect of the present disclosure, there is provided a positioning result visualization apparatus based on a virtual intelligent medical platform, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, by combining a mixed reality technology, information such as tumors and rays is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of positioning results. The target object can thus observe the positioning result more intuitively and efficiently, clearly understand the positioning condition, and confirm the degree of completion of the positioning, reducing positioning errors. Meanwhile, the displayed information can be used to assist communication, improving positioning efficiency. In addition, the doctor can correct the positioning result through the displayed information, improving positioning accuracy.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
With the change of the disease spectrum, malignant tumors have become the leading threat to human health; over the course of tumor progression, about two thirds of patients will receive radiation therapy, whether for radical treatment, palliative symptom reduction or other purposes.
At present, in actual clinical radiotherapy positioning, the positioning result is conveyed to the patient orally by a technician. However, because the tumor, the normal tissues and the radiation in the human body are invisible to the naked eye, and most patients have no medical background, such oral explanation cannot make the patient clearly understand the specific situation, nor can it reduce the patient's fear of the tumor and of radiotherapy. As a result, patients exhibit many physical and psychological symptoms and adverse psychological reactions during the radiation therapy stage, which seriously affects their quality of life and treatment compliance, and may even compromise the treatment effect.
Therefore, a technical scheme for visualizing the positioning result is provided. By combining a mixed reality technology, information such as tumors and rays is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of the positioning result. The patient can thus observe the positioning result more intuitively and efficiently, clearly understand the positioning condition, confirm the degree of completion of the positioning, and reduce positioning errors. Meanwhile, the displayed information can be used to assist communication and improve positioning efficiency. In addition, the doctor can correct the positioning result through the displayed information, improving positioning accuracy.
Fig. 1 shows a flowchart of a positioning result visualization method based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 1, the method may include:
step 10, obtaining a three-dimensional visual virtual image according to target object data;
step 20, carrying out virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene to obtain a registration result;
step 30, combining the accelerator beam three-dimensional model in the virtual image and the registration result, and rendering to obtain a positioning result;
and step 40, displaying the positioning result in the radiotherapy positioning process.
The Virtual Intelligent (VI) medical platform is constructed by combining artificial intelligence, big data analysis and other methods with holographic technologies such as virtual reality, augmented reality and mixed reality. It is used for assisting and guiding invasive, minimally invasive and noninvasive clinical diagnosis and treatment processes, assists the diagnosis and education of patients, and can be applied to fields such as surgery, internal medicine, radiotherapy and interventional medicine. The positioning result is the result obtained in the positioning process of radiotherapy: a doctor first delineates the tumor on the images in the planning system so as to determine the central coordinate of the patient's tumor, and a physicist and an operator then place the patient's tumor center on the treatment center (including the isocenter) of the radiotherapy equipment according to that central coordinate.
Therefore, based on the virtual intelligent medical platform, the existing target object data information of the hospital is analyzed and converted into a three-dimensional visual virtual image, the virtual image is matched with a real scene through a virtual intelligent technology and is displayed on a display terminal, so that three-dimensional holographic display of the medical image, three-dimensional display of a radiotherapy plan and visual display of a positioning result are realized, the positioning result can be observed more visually and more efficiently by the target object, the positioning error is reduced, and the positioning efficiency is improved.
The positioning result visualization scheme based on the virtual intelligent medical platform is exemplified below with reference to fig. 2 and 3.
Fig. 2 shows a connection diagram of an apparatus for visualizing a positioning result according to an embodiment of the present disclosure, and as shown in fig. 2, the apparatus for visualizing a positioning result may include: the system comprises image acquisition equipment (namely a camera 01, a camera 02 and a camera 03 in the figure), display equipment (namely display equipment 01 and display equipment 02 in the figure), processing equipment PC, a server and an in-hospital information system; fig. 3 shows a schematic view of a radiotherapy positioning result visualization scenario according to an embodiment of the present disclosure, as shown in fig. 3, in the scenario, including: image acquisition equipment (namely a camera 01, a camera 02 and a camera 03 in the figure), display equipment (namely a display in the figure), a PC (personal computer), a server, an in-hospital information system and an accelerator.
In fig. 2 and fig. 3 above, the image acquisition device is configured to acquire pictures of the real positioning scene in real time and transmit them to the PC in a wired or wireless manner. The PC and the server acquire the target object data and the like through the in-hospital information system, perform positioning result visualization processing on the data, such as data extraction, three-dimensional reconstruction and virtual-real registration, and transmit the obtained processing result to the display device in real time for terminal display. It should be noted that the number, installation positions, connection modes and the like of devices such as the image acquisition devices and the display devices in fig. 2 and 3 may be set according to actual needs, which is not limited by the disclosure.
In a possible implementation manner, in step 10, the obtaining a three-dimensional visualized virtual image according to the target object data may include: obtaining DICOM RT data through a DICOM network; extracting the target object data according to the DICOM RT data; establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and obtaining the three-dimensional visual virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
DICOM RT data is related data obtained from a hospital DICOM network; DICOM is an international standard for medical images and related information (ISO 12052). The DICOM RT data may be acquired through the in-hospital information system and may include CT image data, RT Plan information, RT Structure Set information and RT Dose information; illustratively, the CT image data of the target object may be obtained through CT scanning, and the related Plan information, Structure Set information and Dose information may be further obtained according to the CT image data. Then, according to information such as the identity of the target object undergoing radiotherapy, data extraction may be performed on the obtained DICOM RT data to obtain corresponding target object data, which may include related information such as target object basic information, CT image data, plan information, structure set information and dosage information. Further, the target object data may be segmented and modeled to create a plurality of three-dimensional models, which may include target object related three-dimensional models such as a target area three-dimensional model, a region of interest (ROI) three-dimensional model and a dose distribution three-dimensional model, as well as an accelerator beam three-dimensional model. Furthermore, these three-dimensional models may be combined according to the established spatial relative position relationship among them to obtain the three-dimensional visual virtual image.
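By way of illustration only (this sketch is not part of the claimed subject matter), the extraction of target object data from a set of DICOM RT records might be organized as follows. The record structure and field names here are assumptions for demonstration, not actual DICOM attribute tags; only the modality codes (CT, RTPLAN, RTSTRUCT, RTDOSE) follow the DICOM standard.

```python
# Hypothetical sketch: bucket the DICOM RT records belonging to one target
# object into the data categories named above. Record layout is illustrative.

def extract_target_object_data(records, patient_id):
    """Filter records by patient identity and group them by modality."""
    data = {"CT": [], "RTPLAN": [], "RTSTRUCT": [], "RTDOSE": []}
    for rec in records:
        if rec["patient_id"] != patient_id:
            continue  # keep only the target object undergoing radiotherapy
        modality = rec["modality"]
        if modality in data:
            data[modality].append(rec)
    return data

records = [
    {"patient_id": "P001", "modality": "CT", "payload": "slice-1"},
    {"patient_id": "P001", "modality": "RTPLAN", "payload": "plan"},
    {"patient_id": "P002", "modality": "CT", "payload": "other-patient"},
    {"patient_id": "P001", "modality": "RTDOSE", "payload": "dose-grid"},
]
target_data = extract_target_object_data(records, "P001")
```

In practice the records would be parsed from DICOM files (for example with a DICOM toolkit) rather than plain dictionaries; the grouping logic stays the same.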
For example, the server in fig. 2 or fig. 3 may interface with the DICOM network in the hospital and provide a C-STORE network service (backed by a relational database for quick query) to receive DICOM RT data, such as CT image data, RT Plan information, RT Structure Set information and RT Dose information, transmitted through the DICOM protocol. The DICOM RT data can then be analyzed to extract target object data such as target object basic information, target object CT image data, plan information, structure set information and dosage information; further, a plurality of three-dimensional models are established according to the target object data, and finally the three-dimensional visual virtual image is obtained.
In one possible implementation, the establishing an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data includes: analyzing the target object data to obtain radiotherapy related data; establishing corresponding three-dimensional model data according to the relevant data of the radiotherapy; and converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and a target object related three-dimensional model.
For example, the extracted target object data may be analyzed to obtain radiotherapy-related data, and a Json file (that is, a file saved in the Json data format) describing the data information may be generated. The radiotherapy-related data (that is, the Json file) is imported into the medical image processing software 3D Slicer, and the software's Segment Editor and Model Maker modules may be driven in the Python language to segment and model the CT image data, the structure set information and the dose information, covering the target region, the region of interest, the dose distribution, the accelerator beam and other parts of the target object's planning information, so as to obtain a plurality of corresponding three-dimensional models such as a target region three-dimensional model, a region-of-interest three-dimensional model, a dose distribution three-dimensional model and an accelerator beam three-dimensional model. Finally, these three-dimensional models are stored as model data files in the OBJ format, and Json files describing the model data files are generated for subsequent processing. Thus, the three-dimensional models are obtained by modeling the target object data based on the 3D Slicer software, realizing automatic, batch processing of the data; meanwhile, through the three-dimensional visual virtual image obtained by three-dimensional reconstruction of the CT image data, the target object can intuitively and efficiently grasp the positioning condition, make a subjective judgment, confirm the degree of completion of the positioning, and reduce errors.
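As a minimal sketch of the last step above, the Json file describing the exported OBJ model data files might look as follows. The schema (key names, file names) is an assumption chosen for illustration; the disclosure does not specify the Json layout.

```python
import json

# Hypothetical sketch: generate the Json description of the exported
# OBJ model data files for subsequent processing.

def describe_models(model_files):
    """Build a Json description listing the exported three-dimensional models."""
    description = {
        "format": "OBJ",
        "models": [
            {"name": name, "file": path} for name, path in model_files.items()
        ],
    }
    return json.dumps(description, indent=2)

files = {
    "target_region": "target_region.obj",
    "roi": "roi.obj",
    "dose_distribution": "dose_distribution.obj",
    "accelerator_beam": "accelerator_beam.obj",
}
json_text = describe_models(files)
```

Downstream components (the PC performing registration, for instance) can then load this file to locate the model data without re-parsing the DICOM RT data.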
In one possible implementation manner, in step 20, the obtaining a registration result by performing virtual-real registration on the three-dimensional model related to the target object in the virtual image and the real positioning scene includes: acquiring a real-time picture of the real positioning scene; obtaining the characteristic points of the real positioning scene according to the real-time picture; and matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the characteristic points to obtain a registration result.
In the embodiment of the present disclosure, the pictures of the real positioning scene may be acquired in real time through one or more image acquisition devices arranged in the real positioning scene. For example, when the number of image acquisition devices is greater than one, the real-time pictures obtained by the image acquisition devices may be fused, and then the fused picture and the target object related three-dimensional models in the obtained three-dimensional visual virtual image may be subjected to virtual-real registration to obtain the registration result.
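One simple form such fusion could take (purely illustrative; real fusion would account for camera calibration and per-camera confidence) is coordinate-wise averaging of the position estimates that the multiple cameras report for a single feature point:

```python
import statistics

# Hypothetical sketch: fuse per-camera estimates of one feature point's
# spatial position by averaging each coordinate across cameras.

def fuse_estimates(estimates):
    """Average the (x, y, z) estimates reported by multiple cameras."""
    return tuple(statistics.fmean(axis) for axis in zip(*estimates))

camera_estimates = [
    (10.2, 5.1, 0.9),   # camera 01
    (10.0, 5.0, 1.1),   # camera 02
    (9.8, 4.9, 1.0),    # camera 03
]
fused = fuse_estimates(camera_estimates)
```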
Wherein the feature points correspond to position markers added on the target subject's skin during a computed tomography (CT) scan. Illustratively, during the CT scan, marks may be added at particular locations on the skin, such as the middle and both sides of the chest, the middle and both sides of the abdomen and the like, and the marks may be generated in the form of two-dimensional codes. The marks correspond to the spatial positions of the feature points of the real scene obtained through the real-time picture, and at the same time, the relative position between the virtual image reconstructed from the CT scanning data and the marks remains unchanged. Furthermore, according to the relative relation between the feature points obtained from the pictures transmitted by the cameras in real time and the established three-dimensional visual virtual image, the three-dimensional models related to the target object in the virtual image are matched into the real-time picture, thereby realizing the virtual-real registration.
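Since the markers' positions are fixed relative to the CT-reconstructed virtual image, matching the virtual models into the real scene amounts to recovering a rigid transform between the two marker sets. One standard way to do this, shown here only as an illustrative sketch (the disclosure does not prescribe a registration algorithm), is the Kabsch least-squares method:

```python
import numpy as np

# Hypothetical sketch: recover the rigid transform (rotation R, translation t)
# mapping the CT-space marker positions onto the marker positions detected in
# the real positioning scene, via the Kabsch algorithm.

def kabsch(src, dst):
    """Least-squares rigid transform so that R @ p + t maps src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# CT-space marker positions (chest/abdomen midline and sides, illustrative)
ct_markers = np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0], [0, 0, 5]], float)
angle = np.pi / 6                                # known test rotation
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, 2.0, 3.0])
scene_markers = ct_markers @ R_true.T + t_true   # simulated detections
R, t = kabsch(ct_markers, scene_markers)         # R ≈ R_true, t ≈ t_true
```

Applying the recovered transform to every vertex of the target object related three-dimensional models places them at the corresponding positions in the real positioning scene.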
For example, as shown in fig. 3, when performing virtual-real registration, 3 cameras, a PC and other devices may be used. The 3 cameras installed at different positions acquire multi-angle real-time pictures of the real positioning scene and transmit them to the PC in real time, and the PC computes the feature points of the real positioning scene in the pictures. The PC then acquires the patient data information through the background and performs matching according to the Json file information extracted for the patient; the corresponding three-dimensionally reconstructed model data files are called from the server, and the target object related three-dimensional models that the patient, the technician and the doctor are interested in, such as the target area and the ROI, are matched to the corresponding positions of the real positioning scene according to the relative relation between the feature points obtained from the pictures transmitted by the cameras in real time and the established three-dimensional visual virtual image.
In a possible implementation manner, in step 30, the combining the accelerator beam three-dimensional model in the virtual image and the registration result, and rendering to obtain a positioning result, may include: determining the position of the accelerator beam three-dimensional model (a portal model) according to the registration result, for example, based on the coincidence of the isocenter of the target object related three-dimensional model and that of the accelerator beam three-dimensional model, and rendering the accelerator beam three-dimensional model and the registration result to obtain the positioning result. It should be noted that, in the embodiment of the present disclosure, the number of accelerator beam three-dimensional models may be one or more, that is, the positioning result may include a plurality of accelerator beam three-dimensional models with different angles and different shapes.
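The isocenter-coincidence placement described above reduces to translating the beam model by the offset between the two isocenters. A minimal sketch, with toy vertex data (the actual beam geometry would come from the RT Plan information):

```python
import numpy as np

# Hypothetical sketch: place the accelerator beam three-dimensional model by
# shifting its vertices so that its isocenter coincides with the isocenter
# of the registered target object related model.

def align_beam_to_isocenter(beam_vertices, beam_isocenter, target_isocenter):
    """Translate every beam vertex by the isocenter offset."""
    offset = np.asarray(target_isocenter) - np.asarray(beam_isocenter)
    return np.asarray(beam_vertices) + offset

# Toy beam axis: source at the origin, isocenter 10 units along z
beam_vertices = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 10.0]])
aligned = align_beam_to_isocenter(beam_vertices,
                                  beam_isocenter=[0.0, 0.0, 10.0],
                                  target_isocenter=[4.0, 2.0, 1.0])
```

With several beams, the same offset logic is applied to each beam model; their differing gantry angles and field shapes are preserved because only a translation is involved.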
In one possible implementation, in step 40, the displaying the positioning result during the radiotherapy positioning process includes: determining at least one target position according to the position and the visual angle of the target object in the real radiotherapy scene; and displaying the positioning result at the target position through a display device.
In the embodiment of the disclosure, the number of target positions can be set according to the position and viewing angle of the target object, the actual environment and other factors to obtain one or more target positions, so that the registration result can be displayed visually. Illustratively, the display device can distinguish the components in the registration result by assigning different colors or different color depths to different areas, so that the target object can grasp the registration result more intuitively and efficiently, conveniently observe and confirm the positioning result, and improve the positioning efficiency.
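The color-coded display could be as simple as a lookup from component name to display color. The RGBA values below are arbitrary illustrative choices, not part of the disclosure:

```python
# Hypothetical sketch: assign distinct display colors to the components of
# the registration result so a viewer can tell them apart at a glance.

COMPONENT_COLORS = {
    "target_region": (255, 0, 0, 200),       # opaque red
    "roi": (0, 255, 0, 120),                 # translucent green
    "dose_distribution": (0, 0, 255, 90),    # faint blue
    "accelerator_beam": (255, 255, 0, 150),  # yellow
}

def color_for(component):
    """Return the RGBA color for a component; grey for unknown components."""
    return COMPONENT_COLORS.get(component, (128, 128, 128, 255))
```

Varying the alpha channel (the fourth value) realizes the "different color depths" mentioned above, letting overlapping structures such as the dose distribution remain visible through one another.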
For example, as shown in fig. 3, the virtual-real registration result may be displayed by a display device (a projector, a display, or the like), and a plurality of display devices may be added at different positions and angles according to the position and viewing angle of the patient in the radiotherapy scene. For example, for a lying patient, the result can be projected, or a display device placed, directly above the patient, so that the patient can conveniently observe and confirm the positioning result. Thus, the target object can intuitively and conveniently observe the registration result and, combining it with the patient's own condition, make a subjective judgment of the positioning condition so that it can be corrected; meanwhile, a doctor can correct the positioning result by observing the display device, improving positioning accuracy.
It should be noted that, although the above embodiments are described as examples of a method for visualizing a positioning result based on a virtual intelligent medical platform, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Therefore, by combining the mixed reality technology, information such as tumors and rays is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of positioning results. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning condition, make subjective judgments, participate in confirming the degree of completion of the positioning, and reduce positioning errors. Meanwhile, the method can be used to assist doctor-patient communication and improve positioning efficiency, relieving the psychological pressure of patients, easing their fear, keeping them in a healthy psychological state with good immune function, encouraging active cooperation with treatment, reducing treatment errors, and positively influencing the treatment of tumor radiotherapy patients. In addition, the doctor can correct the positioning result through the three-dimensional image, improving positioning accuracy.
Fig. 4 shows a block diagram of a positioning result visualization apparatus based on a virtual intelligent medical platform according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus may include: a virtual image construction module 41, configured to obtain a three-dimensional visual virtual image according to the target object data; a virtual-real registration module 42, configured to perform virtual-real registration on the virtual image and the real positioning scene to obtain a registration result; a rendering module 43, configured to combine the accelerator beam three-dimensional model in the virtual image and the registration result, and render to obtain a positioning result; and the display module 44 is used for displaying the positioning result in the radiotherapy positioning process.
In a possible implementation manner, the virtual image constructing module 41 may include: a DICOM RT data acquisition unit for acquiring DICOM RT data of the target object through a DICOM network; a target object data extraction unit, configured to extract the target object data according to the DICOM RT data; the three-dimensional model building unit is used for building an accelerator beam three-dimensional model and a target object related three-dimensional model according to the target object data; and the virtual image acquisition unit is used for obtaining the three-dimensional visual virtual image according to the accelerator beam three-dimensional model and the target object related three-dimensional model.
In a possible implementation manner, the three-dimensional model building unit may include: the data analysis subunit is used for analyzing and processing the target object data to obtain radiotherapy related data; the model data construction subunit is used for establishing corresponding three-dimensional model data according to the radiotherapy related data; and the format conversion subunit is used for converting the three-dimensional model data into a specified format to obtain the accelerator beam three-dimensional model and the target object related three-dimensional model.
In one possible implementation, the virtual-real registration module 42 may include: the real-time picture acquisition unit is used for acquiring a real-time picture of the real positioning scene; the characteristic point calculating unit is used for obtaining the characteristic points of the real positioning scene according to the real-time picture; and the virtual-real registration unit is used for matching the three-dimensional model related to the target object in the virtual image to the corresponding position in the real positioning scene according to the characteristic points to obtain a registration result.
In one possible implementation, the feature points correspond to position markers added on the skin of the target subject during a computed tomography CT scan.
In a possible implementation manner, the display module 44 may include: the target position selecting unit is used for determining at least one target position according to the position and the visual angle of a target object in the real radiotherapy scene; and the display unit is used for displaying the positioning result at the target position through display equipment.
In one possible implementation, the target object data includes: basic information of a target object, CT image data, plan information, structure set information and dosage information; the target object-related three-dimensional model comprises: a target region three-dimensional model, an ROI three-dimensional model, a dose distribution three-dimensional model and an accelerator beam three-dimensional model.
It should be noted that, although the above embodiments are described as examples of a positioning result visualization device based on a virtual intelligent medical platform, those skilled in the art can understand that the disclosure should not be limited thereto. In fact, the user can flexibly set each implementation mode according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Therefore, by combining the mixed reality technology, information such as tumors and rays is visualized, realizing three-dimensional holographic display of medical images, three-dimensional display of radiotherapy plans and visual display of positioning results. The patient can observe the positioning result more intuitively and efficiently, clearly understand the positioning condition, make subjective judgments, participate in confirming the degree of completion of the positioning, and reduce positioning errors. Meanwhile, the method can be used to assist doctor-patient communication and improve positioning efficiency, relieving the psychological pressure of patients, easing their fear, keeping them in a healthy psychological state with good immune function, encouraging active cooperation with treatment, reducing treatment errors, and positively influencing the treatment of tumor radiotherapy patients. In addition, the doctor can correct the positioning result through the three-dimensional image, improving positioning accuracy.
Fig. 5 shows a block diagram of an apparatus 1900 for positioning result visualization based on a virtual intelligent medical platform, according to an embodiment of the present disclosure. For example, the apparatus 1900 may be provided as a server. Referring to FIG. 5, the device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by the processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute the instructions to perform the above-described method.
The apparatus 1900 may also include a power component 1926 configured to perform power management of the apparatus 1900, a wired or wireless network interface 1950 configured to connect the apparatus 1900 to a network, and an input/output (I/O) interface 1958. The apparatus 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the apparatus 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.