CN112580015A - Processing system including trust anchor computing instrument and corresponding method - Google Patents

Processing system including trust anchor computing instrument and corresponding method

Info

Publication number
CN112580015A
CN112580015A (application number CN202011057520.8A)
Authority
CN
China
Prior art keywords
checking
trust anchor
validity
ssb
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011057520.8A
Other languages
Chinese (zh)
Inventor
A. Tarulli
A. L. Cicchetti
C. Rosadini
W. Nesci
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Marelli Europe
Original Assignee
Marelli Europe
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Marelli Europe
Publication of CN112580015A publication Critical patent/CN112580015A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F 21/52 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity; Preventing unwanted data erasure; Buffer overflow
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/44 Program or device authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/64 Protecting data integrity, e.g. using checksums, certificates or signatures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • Health & Medical Sciences (AREA)
  • Debugging And Monitoring (AREA)
  • Multi Processors (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
  • Hardware Redundancy (AREA)
  • Storage Device Security (AREA)

Abstract

The invention relates to a processing system configured to perform trusted operations, the processing system comprising at least one host computing module and a hardware trust anchor, the host computing module comprising a host processing unit and a host memory device, the hardware trust anchor comprising a corresponding secure processing unit, a hardware processing module dedicated to cryptographic operations and a secure storage device, the hardware trust anchor being configured to store and run a real-time secure operating system, the secure operating system being configured to perform validity checks on software used in the processing system.

Description

Processing system including trust anchor computing instrument and corresponding method
Technical Field
The present description relates to a processing system configured to perform trusted operations, comprising at least one host computing module and a hardware trust anchor, the host computing module comprising a host processing unit and a host memory device. Preferably, the processing system with the at least one host module and the hardware trust anchor is part of a system on chip, in particular the at least one host module is an ECU operating in a vehicle.
Background
A Hardware Trust Anchor (HTA) is a local, isolated computing device that can start a chain of trust and perform system-critical security functions, including active monitoring of the boot process and shutting the process down upon detection of tampering.
In a typical automotive scenario, a processing system such as an Electronic Control Unit (ECU) is based on one or more SoCs (systems on chip), consisting of an application processing unit (i.e. an application core) for vehicle functions and an HTA for security support. Hardware trust anchor (HTA) technology is used to harden the ECU against external attacks (unauthorized activation of vehicle features, reading of sensitive key material, manipulation of software, and others).
The HTA may include a processing unit running a real-time operating system (RTOS) that embeds a set of mechanisms for hardening basic security on the HTA side; such a system is therefore referred to as a security-enhanced real-time operating system. In computing, hardening is generally the process of protecting a system by reducing its attack surface.
These protection techniques are thus configured to detect and react to attacks more effectively and to limit an attacker's ability to exploit the system.
The HTA is employed for secure operations involving confidential and sensitive information, such as:
key management;
certificate management;
cryptographic operations;
software protection (secure boot and secure update);
data protection (secure storage and secure communication), using access mechanisms and encryption techniques for reading and writing sensitive data in order to reduce the visibility of the data.
thus, a security-enhanced real-time operating system should be able to:
performing multiple security operations simultaneously;
switching between various entities (processes or tasks) according to some predefined logical and scheduling policies;
assurance and maintenance of confidentiality (information isolation)
Ensuring that generic functions are executed and that execution time is known
A security-enhanced real-time operating system for an HTA may be a static, non-modifiable operating environment. This means that the objects, tasks, events, and resources of an application cannot be created or deleted during execution of the application. The correctness (integrity) of data and other memory portions should be a basic requirement for the security system.
Today, most automotive ECUs support software download, so that application software or data can be updated at any time. Depending on OEM requirements, some verification may be performed during the software download to ensure that the new software is valid, some verification may be performed during ECU power-up to ensure that the software is still valid, or a combination of both.
There are various methods for processing validity information.
For example, an integrity check, usually required for safety reasons, can be provided based on a checksum/CRC (cyclic redundancy check) calculated over the non-volatile software data and compared with a reference value. If the checksum is correct, the integrity of the complete software block is assumed. Incidentally, checksum algorithms like CRC do not provide authenticity checks, since no secret parameter is involved in the checksum calculation.
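Such a checksum-based integrity check can be sketched as follows (a minimal illustration assuming a CRC-32; the actual CRC variant and reference storage of a given ECU are not specified here):

```python
import zlib

def integrity_check(software_block: bytes, reference_crc: int) -> bool:
    """Compute a CRC-32 over the non-volatile software data and compare
    it with the stored reference value. Note that this detects corruption
    only; it provides no authenticity, since no secret parameter enters
    the computation."""
    return zlib.crc32(software_block) == reference_crc

block = b"example ECU application image"
reference = zlib.crc32(block)          # reference value stored at download time

assert integrity_check(block, reference)                 # intact block passes
assert not integrity_check(block + b"\x00", reference)   # corrupted block fails
```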
Instead, some ECUs require an authenticity check, which ensures that only software or data from legitimate sources is used on the ECU (e.g., in a safety-related ECU or for regulatory protection). Authentication is typically performed by computing a cryptographic signature over the non-volatile software data. The signature may be provided by the OEM or by the ECU vendor.
The signature computation algorithm combines a hash calculation with a hardware-based cryptographic operation (like those performed in modules 122a-122e) or a software-based one to ensure the integrity and authenticity of the downloaded software. If the Hardware Trust Anchor (HTA) does not include modules 122a-122e, or includes only a subset of them, the computation is performed in software and no special hardware is required for the cryptographic operations.
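The hash-then-authenticate structure can be illustrated as below. The actual scheme (an asymmetric signature) is not reproduced here; this sketch substitutes an HMAC-SHA256 for the keyed step purely to show the two-stage structure, and the key name is hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"oem-provisioning-key"   # hypothetical; a real scheme uses asymmetric keys

def sign_software(data: bytes) -> bytes:
    """Hash the software data, then apply a keyed cryptographic step
    (here an HMAC standing in for a digital signature)."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).digest()

def verify_software(data: bytes, signature: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_software(data), signature)

image = b"downloaded software block"
sig = sign_software(image)
assert verify_software(image, sig)            # authentic block accepted
assert not verify_software(image + b"!", sig) # modified block rejected
```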
The above method may also be used to check validity information at run-time.
Currently, only integrity checks are used at runtime, and only to meet safety requirements. In this way, however, it is still not possible to detect manipulation of the original ECU software during normal operation of the ECU itself.
Disclosure of Invention
It is an object of one or more embodiments to overcome limitations inherent in solutions available in the prior art.
According to one or more embodiments, this object is achieved thanks to a system having the characteristics specified in claim 1. One or more embodiments may be directed to a corresponding method.
The claims form an integral part of the technical teaching provided herein in relation to the various embodiments.
According to the solution described herein, the solution relates to a processing system configured to perform trusted operations, the processing system comprising at least one host computing module and a hardware trust anchor, the host computing module comprising a host processing unit and a host memory device, the hardware trust anchor comprising a corresponding secure processing unit, a hardware processing module dedicated to cryptographic operations and a secure storage device, the hardware trust anchor being configured to store and run a real-time secure operating system, the secure operating system being configured to perform a validity check on software used in the processing system.
Wherein the secure operating system is configured to perform a runtime authenticity check to control the integrity of the software code at runtime, the runtime authenticity check comprising identifying the signed software blocks, with their corresponding headers and data blocks, present at least in the program memory of the hardware trust anchor for execution and possibly also in the program memory of the host; and, for each signed software block:
a first step of checking the validity of a certificate associated with said signed software block,
a second step of checking the validity of the header of the signed software block,
a third step of checking the validity of the hash of the data of said signed software block, and
if one of the checking steps of the validity check detects an anomaly, writing information about the detected anomaly in a security log,
the secure operating system being configured to run the runtime authenticity check in a task having the highest priority relative to other host or hardware trust anchor services.
The solution described herein is also directed to a corresponding method of performing trusted operations in the above-mentioned processing system, comprising storing and running a real-time secure operating system that performs validity checks on software used in the processing system, the method comprising
Performing a runtime authenticity check to control the integrity of the software code at runtime, the runtime authenticity check comprising identifying the signed software blocks and the corresponding headers and data blocks present in the program memory of the hardware trust anchor for execution and possibly also in the program memory of the host; and performing, for each signed software block:
a first step of checking the validity of a certificate associated with said signed software block,
a second step of checking the validity of the header of the signed software block,
a third step of checking the validity of the hash of the data of said signed software block, and
If one of the checking steps of the validity check detects an anomaly, information about the detected anomaly is written in the security log,
the secure operating system is configured to run the runtime authenticity check in a task having a highest priority relative to other host modules or hardware trust anchor services.
Drawings
Embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
FIG. 1 schematically represents an embodiment of the processing system disclosed herein;
FIG. 2 schematically illustrates the address space of a memory used by the processing system disclosed herein;
FIG. 3 schematically illustrates a memory area of a memory used by the processing system disclosed herein;
FIG. 4 shows a flow chart of an authenticity verification process implemented by the processing system disclosed herein;
FIG. 5 represents a time diagram representing the operating phases of the processing system disclosed herein;
FIG. 6 represents a block diagram of a stack of memories of the processing system disclosed herein;
FIG. 7 represents a flow chart of a stack overflow control procedure implemented by the processing system disclosed herein;
FIG. 8 represents a flow chart of a detection process implemented by the processing system disclosed herein.
Detailed Description
The following description illustrates various specific details that are intended for a thorough understanding of the embodiments. Embodiments may be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail so that aspects of the embodiments are not obscured.
Reference to "an embodiment" or "one embodiment" within the framework of this specification is intended to indicate that a particular configuration, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Also, phrases such as "in an embodiment" or "in one embodiment" that may be present at various points of the present description do not necessarily refer to the same embodiment. Furthermore, the particular configurations, structures, or characteristics may be combined as suitable in one or more embodiments.
The references used herein are for convenience only and thus do not define the scope of protection or the scope of the embodiments.
In fig. 1, a processing system 10 is shown, which processing system 10 may for example correspond to an ECU on a SoC for automotive applications (e.g. in a control system of an engine or of other functions of a vehicle), comprising an application processing unit (i.e. an application core 11) and a Hardware Trust Anchor (HTA) 12. The application core 11 includes a non-volatile memory 111, a RAM memory 112, a bus interface 114 (e.g., communicating with a vehicle bus), and an application CPU 113. The HTA 12 includes a secure memory 121, which comprises a secure non-volatile memory 121b and a secure RAM memory 121a. The HTA 12 also includes a secure processing unit 123, an HTA interface module 124, and a cryptographic hardware acceleration module 122. The cryptographic hardware acceleration module 122 may contain a hash engine 122a for computing hash values of the software data, a symmetric encryption engine 122b for performing symmetric encryption algorithms, an asymmetric encryption engine 122c for performing asymmetric encryption algorithms, a random generator module 122d, which may include a TRNG and/or a PRNG, and also a counter module 122e.
As shown in fig. 2, which schematically represents the address space of the memory 121 of the HTA 12 in which the operating system, indicated as HOS, resides, the security-enhanced real-time operating system, being itself software, is typically split in memory (i.e., in the secure storage 121) into the following sections:
text or source code TXS: a memory area storing executable code of a program. The memory block is typically read-only.
Data or data flash DT: a memory area that stores static/global variables initialized by a programmer. This is even a memory portion in which the OS-Applications may own private data.
BSS (block beginning with symbol) memory BS: a memory area storing uninitialized static/global variables. The segment will be filled with zeros by the operating system and therefore all uninitialized variables will be initialized with zeros.
Heap (heap) memory HP: the heap is used to provide space for dynamic memory allocation.
And stack SS: this is part of the volatile memory (RAM)121a into which local variables, registers, and data related to function calls (e.g., returns) defined within the function are pushed during task execution. Each task may have its own stack area.
The text storage area TXS and the data storage area DT reside in the non-volatile memory 121b, whereas the BSS storage area BS, the heap storage area HP, and the stack SS reside in the volatile memory 121a.
The security-enhanced real-time operating system for the HTA described herein is configured to verify the non-volatile and volatile memory by different specific procedures, including:
a runtime authenticity check process 500 for the non-volatile memory, wherein the authenticity attribute is checked cryptographically;
a stack overflow detection process 600 for the stack SS (volatile memory 121a), wherein the amount of stack SS used is checked against static boundaries.
The runtime authenticity verification process 500 is a hardening technique based on an authenticity approach that keeps the integrity of the code under control at runtime. This is done in parallel with normal system operation, including boot, where authenticity is verified starting from a root of trust; at other times it may be done with a frequency tied to external events (e.g. the update of part of the code), with a predefined periodic mechanism, or during inactive moments of the HTA.
In this way, the hardened real-time operating system prevents a compromised system from spoofing using its own memory contents.
The advantage of this hardening technique is that, as shown in fig. 1, the HTA is obtained as another core with full access to the flash memory of the ECU. As a result, the runtime authenticity verification process 500, originally developed to verify only the hardware trust anchor non-volatile memory, may be configured to verify the code integrity of the entire flash memory without any impact or delay on safety-critical applications executing on the host core.
For this reason, and for simplicity of description, it is assumed herein that the runtime authenticity verification process 500 is configured to verify the entire flash memory.
The runtime authenticity verification process 500 uses an authentication mechanism based on signed SW blocks and certificates.
As shown in fig. 3, which schematically illustrates the non-volatile memory 111 of the host 11 and the non-volatile memory 121b of the HTA 12, the signed SW blocks SSB (i.e. the software blocks to which a digital signature is applied to provide data authenticity and integrity) are stored in program memory (in particular in flash memory), in the hardware trust anchor 12 and in the host 11 respectively, while the certificates C and possibly the security log SL (which provides information about detected anomalies, as better explained below) are stored in the non-volatile memory 121b of the secure memory 121. The non-volatile memory 121b comprises, for example, a program flash memory and a data flash memory and is also used to store all sensitive data (e.g. keys).
The signed SW blocks SSB correspond to binary data programmed into the flash memory of the hardware trust anchor 12 and possibly also of the host 11, in particular of the ECU.
There are several signed SW blocks SSB in the ECU. Both the host and the HTA have respective non-volatile memories, including program flash and data flash, so the SSBs can be distributed between the host and the HTA. They are executed at different times and have different purposes. For example, in the host's flash memory:
the boot loader BL: a piece of code that runs before any application code;
the application APP: a piece of code that runs after the boot loader;
the calibration CLB: a piece of code containing data necessary for the application to function properly.
In the flash memory of the hardware trust anchor:
the boot: a piece of code that runs before any operating system runs;
the application APP: a piece of code that runs after the operating system is running. This piece of code may include the real-time operating system.
Each signed SW block SSB is associated with some information (called metadata) having two parts, namely a header SSBH and data SSBD:
the header SSBH field contains information about the block content and all cryptographic hashes and signatures;
the data SSBD field is the data block that has been determined to require authenticity and integrity certification.
A certificate (or digital certificate) is a package for a public key, used for communication purposes and to prove ownership and validity. It is signed by a certificate authority to convey trust in the contents of the certificate.
The certificates, by way of non-limiting example, conform to X.509 v3 and are encoded in ASN.1 DER format. For example, there may be a root certificate in the HTA, which represents the highest certificate from the ECU's perspective and cannot be validated against another certificate. There are certificates for the application software running in the host and for the host software itself (e.g., the boot loader), which represent intermediate certificates from the ECU's perspective and are validated against the root certificate.
Typically, the runtime authenticity check process 500 would be a low-priority task (i.e., a background task) implemented in the HTA 12, so as not to interfere with other runtime services required by the host; in particular, the allowable latency of their execution should not be exceeded.
However, it must also be avoided that an overly demanding host 11, or an attacker requesting a large number of operations from the HTA 12, prevents the runtime authenticity check 500 from running and detecting software tampering. Likewise, the time needed to verify the entire flash memory must not exceed a predefined value.
To reconcile these opposing requirements, the runtime authenticity check runs in a periodic task with the highest priority relative to other host/HTA services, and the check is divided into n steps of configurable duration.
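Dividing a long verification into bounded steps can be sketched as an incremental hash that advances one chunk per task activation (the step size standing in for the configurable duration; this is an illustrative sketch, not the patented implementation):

```python
import hashlib

def stepwise_digest(flash_image: bytes, n_steps: int):
    """Generator that advances a SHA-256 computation one bounded chunk
    per invocation, so that each periodic task activation performs only
    a slice of the full flash verification."""
    h = hashlib.sha256()
    step = max(1, len(flash_image) // n_steps)
    for offset in range(0, len(flash_image), step):
        h.update(flash_image[offset:offset + step])
        yield offset                 # the task yields here until its next period
    yield h.hexdigest()              # final result after n steps

image = bytes(range(256)) * 64       # stand-in for a flash image
steps = list(stepwise_digest(image, n_steps=8))
assert steps[-1] == hashlib.sha256(image).hexdigest()  # stepped == one-shot digest
```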
In fig. 4, a flow diagram representing a runtime authenticity check process 500 is shown.
The runtime authenticity check 500 comprises identifying, in step 505, the signed software blocks SSB with the corresponding headers SSBH and data blocks SSBD present in the program memory of the hardware trust anchor 12 for execution and possibly also in the program memory 111 of the host 11. Then, for each signed software block SSBj, j being the index identifying the signed block (from zero to the number N of identified signed software blocks SSB), the validity of the certificate C associated with the signed SW block is checked in step 510. The verification of the validity of each certificate requires that any new certificate to be stored in the HTA 12 be verified against another certificate already stored in the HTA 12, unless it is a self-signed certificate.
The header signature field (HeaderSignature) of the header SSBH of the current signed SW block SSBj is then verified in step 520; this guarantees the authenticity and integrity of the contents of the header of this block. The field contains the digital signature computed over the header.
Then, in step 530, the hash of the data of the current signed SW block SSBj is verified:
the File Digest field in the header of the current signed SW block defines the SHA-256 digest computed over the data block;
the check passes only if the File Digest field is equal to the SHA-256 digest calculated on the data block by the runtime authenticity check (in particular in the cryptographic hardware acceleration module 122, with the hash engine 122a computing the hash value).
If an anomaly is detected in verification step 510, 520 or 530, it is verified in step 550 whether a security log SL has already been created for the SSB under verification. If not, in step 560 a security log SL is created in the data flash (non-volatile memory 121b) of the HTA 12 and information about the detected anomaly is written into it; otherwise the already created security log SL is updated. In general, if an anomaly is detected by one of the verification steps 510, 520, 530, information about the detected anomaly is written into the security log SL.
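The three checking steps and the logging behaviour can be sketched as follows (the block fields, the flat certificate check, and the log structure are hypothetical placeholders for the real cryptographic verifications):

```python
import hashlib

security_log = []   # stands in for the security log SL kept in data flash

def check_block(ssb: dict, trusted_certs: set) -> bool:
    """Run the three steps (510/520/530) on one signed SW block; on any
    anomaly, append an entry to the security log and stop."""
    if ssb["certificate"] not in trusted_certs:                    # step 510: certificate validity
        security_log.append(("cert_invalid", ssb["name"]))
        return False
    if ssb["header"]["signature"] != ssb["header"]["expected"]:    # step 520: header validity
        security_log.append(("header_invalid", ssb["name"]))
        return False
    if hashlib.sha256(ssb["data"]).hexdigest() != ssb["header"]["file_digest"]:  # step 530
        security_log.append(("digest_mismatch", ssb["name"]))
        return False
    return True

ssb = {"name": "APP", "certificate": "root", "data": b"code",
       "header": {"signature": "s", "expected": "s",
                  "file_digest": hashlib.sha256(b"code").hexdigest()}}
assert check_block(ssb, {"root"})          # valid block passes, nothing logged
ssb["data"] = b"tampered"
assert not check_block(ssb, {"root"})      # tampered data is caught in step 530
assert security_log[-1][0] == "digest_mismatch"
```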
In fig. 5, a time diagram is shown, which illustrates that, given a duration DH for performing the services required of the HTA 12 by the host 11, a portion of the authenticity check process 500 of duration DR is executed within each period T, the period having a length such that the service durations DH of the different host services are accommodated.
Thus, as described above, in an embodiment, a processing system 10 configured to perform trusted operations comprises at least one host computing module 11 and a hardware trust anchor 12, the host computing module 11 comprising a host processing unit 113 and host storage 111, 112 (e.g. volatile and non-volatile), the hardware trust anchor 12 comprising a respective secure processing unit 123, a hardware processing module 122 dedicated to cryptographic operations and a secure storage 121, wherein the hardware trust anchor 12 is configured to store and run a real-time secure operating system HOS configured to perform validity checks on software used in the processing system 10, and wherein the secure operating system is configured in particular to perform a runtime authenticity check 500 to control the integrity of the software code at runtime, the runtime authenticity check 500 comprising
identifying (505) the signed software blocks SSB, with the corresponding headers SSBH and data blocks SSBD, present at least in the program memory of said hardware trust anchor 12 for execution (i.e. the non-volatile memory 121b) and possibly also in the program memory of the host (i.e. the non-volatile memory 111), and, for each signed software block SSB:
a first step 510, of checking the validity of the certificate C associated with said signed software block SSB,
a second step 520, checking the validity of the header SSBH of the signed software block SSB,
a third step 530 of checking the validity of the hash of the data SSBD of said signed software block SSB, and
if one of such checking steps 510, 520, 530 detects an anomaly, writing information about the detected anomaly into the security log SL, i.e. creating a security log SL comprising information about the detected anomaly, or writing the anomaly into such a security log SL; the secure operating system HOS is configured to run said runtime authenticity check 500 in a task with the highest priority PL with respect to other host 11 or hardware trust anchor 12 services.
As mentioned, the processing system 10 also implements the stack overflow detection process 600.
In order to handle multiple schedulable entities that need to run simultaneously, the operating system implements multiprocessing. A process (or task, in a security-enhanced real-time operating system) is an independent thread of execution comprising a series of independently schedulable pieces of code. The entities making up a security-enhanced real-time operating system can compete independently for CPU execution time, at the cost of some performance (time and memory overhead). Each task has its own dedicated area of volatile memory in which local variables, registers, and data related to function calls defined within a function are stored.
When the scheduler switches from running one task to another, it must save the context of the old task and load the context of the new task. The context of a task is the set of registers (program counter, stack pointer, other working registers) used by the task. In particular, one of the registers, the stack pointer, points to the currently used stack area. Ensuring that there is no interference between different stack regions is essential for the proper operation of the system. This aspect is important from a security point of view, since stack interference is notoriously a common attack tool for malicious users.
The integrity checks applied to code and data portions in non-volatile memory are not always applicable to volatile memory. The data contained in these portions of memory are continually modified during the execution of a process; therefore, rather than verifying the integrity of the data, it is preferable to verify that the memory area used by the process does not exceed a predetermined boundary.
In fig. 6, a register bank R of the secure processing unit 123 and the RAM memory 121a are shown. A stack SS, comprising a process stack PSS and a main stack MSS, is shown in the RAM memory 121a. As is well known, a process stack or task stack is typically a pre-reserved region of system memory for return addresses, process parameters, temporarily saved registers, and locally allocated variables. The processing unit typically includes a CPU register R which points to the top of the stack via a stack pointer STP.
As shown, within the process stack PSS there are a first stack SSA of a first task TA and a second stack SSB of a second task TB, having respective stack sizes TSS defining respective stack ranges within the process stack PSS. As shown, a stack pattern SP is present in the last part of the stack range (e.g. corresponding to the last 8/16 bytes) of both the first stack SSA of the first task TA and the second stack SSB of the second task TB, as will be explained with reference to the flow chart of FIG. 7.
During a task change operation, the operating system needs a way to keep track of the ongoing tasks, using a task or scheduler table. Three routines are then required:
performing the context switch shown in fig. 6: saving the registers of the outgoing task present in the CPU (i.e. the registers of the first task TA) into the stack area SSA of the first task TA, and then reloading the registers of the incoming task (i.e. of the second task TB) from the stack area SSB of the second task TB into the CPU registers;
initializing the system, updating the state machine and the internal structures constituting the secure real-time operating system;
jumping to the new task, i.e. the second task TB.
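The context switch routine above can be sketched in simplified form (register names and the dictionary-based structures are illustrative, not those of any specific CPU):

```python
def context_switch(cpu_registers: dict, stacks: dict, old_task: str, new_task: str):
    """Save the outgoing task's registers onto its own stack area,
    then reload the incoming task's registers from its stack area."""
    stacks[old_task].append(dict(cpu_registers))   # push outgoing context (TA)
    cpu_registers.clear()
    cpu_registers.update(stacks[new_task].pop())   # pop incoming context (TB)

regs = {"pc": 0x100, "sp": 0x2000}                           # TA currently running
stacks = {"TA": [], "TB": [{"pc": 0x300, "sp": 0x3000}]}     # TB's saved context
context_switch(regs, stacks, "TA", "TB")
assert regs == {"pc": 0x300, "sp": 0x3000}                   # CPU now holds TB's context
assert stacks["TA"] == [{"pc": 0x100, "sp": 0x2000}]         # TA's context preserved
```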
In order to ensure the integrity of process and system execution, a stack overflow control mechanism must be implemented, namely the stack overflow detection process 600, which verifies that the limits of the valid stack area are not overwritten, and which is illustrated in the flow chart of FIG. 7.
In step 610, the stack (e.g., SSA, SSB) allocated to the task (e.g., TA, TB) is populated with a known pattern, namely stack pattern SP, when the system and task are initialized.
Then, during each content switch, the operating system checks 620 the last portion, e.g., the last 8/16 bytes, within the valid stack range to ensure that the pattern remains unmodified (uncovered).
If test step 620 verifies that any of these last bytes of the stack has changed from its original value, a stack overflow hook function is called in step 630. This stack overflow hook function intercepts the exception and makes it visible to the operating system, in order to raise an error code 640 and record the exception event, i.e. exception 650, similarly to step 550, preferably in the same log SL.
Process 600 captures (i.e., raises an error and logs an exception) most stack overflow occurrences, although it is conceivable that some may be missed, for example where a stack overflow occurs without the last bytes being written.
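Steps 610-630 of process 600 can be sketched as follows. The pattern value `0xA5` and the 16-byte guard size are assumptions chosen for illustration; the description only specifies a known pattern SP and "the last 8/16 bytes" of the valid stack range.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define STACK_PATTERN 0xA5u  /* assumed value for the known pattern SP */
#define GUARD_BYTES   16u    /* last portion checked (e.g. 8/16 bytes) */

/* Step 610: at system/task initialisation, fill the stack area
 * allocated to the task with the known pattern SP.              */
static void stack_fill_pattern(uint8_t *stack, size_t size)
{
    for (size_t i = 0; i < size; i++)
        stack[i] = STACK_PATTERN;
}

/* Step 620: at each context switch, verify that the last portion of
 * the valid stack range still holds the pattern. A false result is
 * the condition under which step 630 calls the overflow hook.      */
static bool stack_guard_intact(const uint8_t *guard)
{
    for (size_t i = 0; i < GUARD_BYTES; i++)
        if (guard[i] != STACK_PATTERN)
            return false;
    return true;
}
```

As noted above, the check is heuristic: an overflow that skips over the guard bytes without writing them would not be detected.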
The system 10 is also configured to perform a real-time detection process 700 to ensure timely execution of the HTA 12 functions: this process 700 monitors task execution and reacts if the execution time exceeds the allocated time budget (worst-case execution time).
This mechanism complements the run-time authenticity check: the latter verifies integrity on a larger time scale, while this mechanism ensures that all security functions mapped onto HOS tasks are executed in time on a smaller time scale.
To guarantee real-time execution of the code, during analysis and design of the system a maximum duration (time budget) has been allocated to each task, so that the system can schedule the tasks without violating hard real-time requirements. The verification of these execution times is performed by the system in real time during task execution; possible violations are detected and may trigger corrective and logging operations.
A periodic task TK is defined by a triplet of parameters C, D, T, where C represents the worst-case execution time (WCET) or resource time budget, i.e. the computation time, D is the relative deadline, i.e. the time between the planned activation and the required completion/response, and T is the period (time period).
Considering a sporadic task system, in which at least a period T elapses between two consecutive job instances of the same task, a job arriving at a certain time must receive up to C time units of execution within the time interval preceding the corresponding deadline D.
A special case of a sporadic task is the periodic task, for which the period is the exact time separation between the arrivals of two consecutive jobs generated by the task.
A distinction is made between:
implicit deadline systems, in which for each instance the deadline D corresponds to the period T;
constrained deadline systems, in which for each instance the deadline D is less than the period T;
arbitrary deadline systems, in which there is no restriction between the deadline D and the period T.
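The (C, D, T) triplet and the three deadline classes above can be expressed compactly; the type and field names below are illustrative assumptions, not identifiers from the patent.

```c
#include <stdint.h>

/* Task parameters (C, D, T); names are illustrative. */
typedef struct {
    uint32_t wcet;      /* C: worst-case execution time (time budget) */
    uint32_t deadline;  /* D: relative deadline                       */
    uint32_t period;    /* T: period / minimum inter-arrival time     */
} task_params_t;

typedef enum {
    IMPLICIT_DEADLINE,    /* D == T */
    CONSTRAINED_DEADLINE, /* D <  T */
    ARBITRARY_DEADLINE    /* no restriction between D and T */
} deadline_class_t;

static deadline_class_t classify_deadline(const task_params_t *tk)
{
    if (tk->deadline == tk->period)
        return IMPLICIT_DEADLINE;
    if (tk->deadline < tk->period)
        return CONSTRAINED_DEADLINE;
    return ARBITRARY_DEADLINE;
}
```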
The processing system 10 is configured to perform the time detection process 700, described here with reference to the flow chart of fig. 8, in which there is no restriction between the deadline D and the period, but a step 710 is performed of checking for a violation of the worst-case execution time C of the generic task TK, where for a given instance the value of the worst-case execution time C is given by an offline timing analysis. In step 720, the exception is recorded, preferably in the same security log SL.
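Steps 710-720 can be sketched as follows, assuming a tick-based time source and a stub `printf` line standing in for the security log SL; the type and field names are illustrative, not from the patent.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Per-task budget; the WCET value C comes from an offline timing
 * analysis, as in step 710. Names are illustrative.              */
typedef struct {
    const char *name;
    uint32_t    wcet_ticks;  /* C, in timer ticks */
} task_budget_t;

/* Step 710: compare a job's measured execution time against C.
 * Step 720: on violation, record the exception (stub log line
 * standing in for the security log SL).                        */
static bool check_wcet(const task_budget_t *tk,
                       uint32_t start_tick, uint32_t end_tick)
{
    uint32_t elapsed = end_tick - start_tick;  /* unsigned subtraction
                                                  handles tick wrap    */
    if (elapsed > tk->wcet_ticks) {
        printf("SL: WCET violation in %s (%u > %u ticks)\n",
               tk->name, (unsigned)elapsed, (unsigned)tk->wcet_ticks);
        return false;
    }
    return true;
}
```

On a violation, a real system would trigger the corrective and logging operations mentioned above rather than merely returning a flag.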
Thus, from the above description, the advantages of the described solution are apparent.
The described system advantageously comprises a security-hardened real-time operating system, wherein the security hardening is obtained by authenticity checking of the operating system software or application software at run time, performed periodically with high priority, and by using dedicated hardware for encryption and certificate validation.
In particular, by running the authentication process in a periodic task with the highest priority relative to the other host/HTA services, the system disclosed herein also prevents an attacker, or an overly demanding host (one requiring a large number of operations from the HTA), from blocking the run-time authenticity verification and the detection of software tampering, while also preventing the time needed to verify the entire flash memory from exceeding a predefined value.
Of course, without prejudice to the principle of the embodiment, the details of construction and the embodiment may vary widely with respect to what is described and illustrated herein, purely by way of example, without thereby departing from the scope of the present embodiment, as defined in the annexed claims.

Claims (13)

1. A processing system configured to perform trusted operations, the processing system comprising at least one host computing module (11) and a hardware trust anchor (12), the host computing module (11) comprising a host processing unit (113) and a host memory device (111,112), the hardware trust anchor (12) comprising a respective secure processing unit (123), a hardware processing module (122) dedicated to cryptographic operations and a secure storage device (121), the hardware trust anchor being configured to store and run a real-time secure operating system (HOS), the secure operating system (HOS) being configured to perform a validity check on software used in the processing system (10),
wherein the secure operating system (HOS) is configured to perform a runtime authenticity check (500) to control the integrity of software code at runtime, the runtime authenticity check (500) comprising identifying (505) at least a Signed Software Block (SSB) and corresponding header (SSBH) and data block (SSDB) present in a program memory (121b) in the hardware trust anchor for execution, and to:
a first step (510) of checking the validity of a certificate (C) associated with said Signed Software Block (SSB),
a second step (520) of checking the validity of the header (SSBH) of the Signed Software Block (SSB),
a third step (530) of checking the validity of the hash of the data (SSBD) of said Signed Software Block (SSB), and
if one of said checking steps (510, 520, 530) of validity checking detects an anomaly, writing information about the detected anomaly into a Security Log (SL),
said secure operating system (HOS) is configured to run said runtime authenticity check (500) in a task with highest Priority (PL) with respect to other host (11) or hardware trust anchor (12) services.
2. The system according to claim 1, characterized in that the run-time authenticity check (500) is also applied to Signed Software Blocks (SSB) present in the non-volatile memory (111) of the host (11).
3. A system according to claim 1 or 2, characterized in that the secure operating system (HOS) is configured to run the runtime authenticity check (500) in a periodic task with highest Priority (PL) with respect to other hosts (11) or hardware trust anchor (12) services for a given period (T), and that the checking step (510, 520, 530) is divided into a plurality of sub-steps of configurable Duration (DH) performed at each period (T).
4. System according to claim 1, characterized in that said first step (510) of checking the validity of the certificate (C) associated with said Signed Software Block (SSB) comprises: any new certificate to be stored in the hardware trust anchor (12) is verified against another certificate stored in the hardware trust anchor (12), unless it is a self-signed certificate.
5. The system according to claim 1, characterized in that said second step (520) of checking the validity of the header (SSBH) of said Signed Software Block (SSB) comprises verifying the header signature field of the current Signed Software Block (SSB).
6. The system according to claim 1, characterized in that said third step (530) of checking the validity of the hash of the data (SSBD) of said Signed Software Block (SSB) comprises: calculating a hash value on said data block (SSBD) in said cryptographic hardware acceleration module (122), in particular by means of a hash engine (122a), and comparing said hash value with the content of a field in the header (SSBH) storing the hash value defining the qualified digest calculated on the data block.
7. The system according to claim 1, characterized in that the operating system is configured to perform a stack overflow detection procedure (600) on a stack (SS) of the volatile memory (121a) of the hardware trust anchor (12) for task execution, which comprises checking the amount of stack (SS) used against static boundaries.
8. The system of claim 7, wherein the operating system (HOS) of the hardware trust anchor (12) is configured to:
fill (610) the stack area allocated to a task with a known pattern (SP),
check (620), during each context switch, the last portion within the valid stack range for modification,
invoke (630) a stack overflow hook function if a modification of the filled pattern is detected.
9. A system according to claim 1, characterized in that the operating system is configured to perform a real-time detection procedure (700) comprising checking (710) for a violation of a task's worst-case execution time (C), the value of the worst-case execution time (C) being given by an offline time analysis for a given instance of a given task, and, if an anomaly is detected by the checking step (710), writing information about the detected anomaly into a Security Log (SL).
10. The system according to claim 1, characterized in that the at least one host unit (11) and the hardware trust anchor (12) are part of a system on chip, in particular the at least one host (11) is an ECU operating in a vehicle.
11. A system according to claim 1, characterized in that said writing of information about the detected anomaly into a Security Log (SL) comprises creating a Security Log (SL) comprising information about the detected anomaly, or writing the anomaly into the Security Log (SL).
12. A method of performing trusted operations in a processing system as claimed in any one of claims 1 to 10, said method comprising: storing and running a real-time secure operating system (HOS) that performs a validation check on software used in a processing system, the method comprising
Performing a runtime authenticity check (500) to control the integrity of the software code at runtime, the runtime authenticity check (500) comprising identifying (505) Signed Software Blocks (SSB) and corresponding header (SSBH) and data blocks (SSDB) present at least in a program memory (121b) of a hardware trust anchor (12) for execution, and for each Signed Software Block (SSB),
a first step (510) of checking the validity of a certificate (C) associated with said Signed Software Block (SSB),
a second step (520) of checking the validity of the header (SSBH) of the Signed Software Block (SSB),
a third step (530) of checking the validity of the hash of the data (SSBD) of said Signed Software Block (SSB), and
if one of said checking steps (510, 520, 530) of validity checking detects an anomaly, writing information about the detected anomaly into a Security Log (SL),
the secure operating system is configured to run the runtime authenticity check (500) in a task with highest Priority (PL) with respect to other host (11) or hardware trust anchor (12) services.
13. The method of claim 10, comprising operation of the system of any one of claims 2 to 11.
CN202011057520.8A 2019-09-30 2020-09-30 Processing system including trust anchor computing instrument and corresponding method Pending CN112580015A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102019000017534 2019-09-30
IT102019000017534A IT201900017534A1 (en) 2019-09-30 2019-09-30 "Processing system including" trust anchor "type calculation apparatus and corresponding procedure"

Publications (1)

Publication Number Publication Date
CN112580015A true CN112580015A (en) 2021-03-30

Family

ID=69191192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011057520.8A Pending CN112580015A (en) 2019-09-30 2020-09-30 Processing system including trust anchor computing instrument and corresponding method

Country Status (3)

Country Link
JP (1) JP2021057043A (en)
CN (1) CN112580015A (en)
IT (1) IT201900017534A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113961362B (en) * 2021-11-14 2024-01-16 苏州浪潮智能科技有限公司 Process identification method, system, storage medium and equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8832454B2 (en) * 2008-12-30 2014-09-09 Intel Corporation Apparatus and method for runtime integrity verification
US8839458B2 (en) * 2009-05-12 2014-09-16 Nokia Corporation Method, apparatus, and computer program for providing application security

Also Published As

Publication number Publication date
JP2021057043A (en) 2021-04-08
IT201900017534A1 (en) 2021-03-30

Similar Documents

Publication Publication Date Title
US10148442B2 (en) End-to-end security for hardware running verified software
US9762399B2 (en) System and method for validating program execution at run-time using control flow signatures
Eldefrawy et al. Smart: secure and minimal architecture for (establishing dynamic) root of trust.
Nunes et al. {APEX}: A verified architecture for proofs of execution on remote devices under full software compromise
JP4498735B2 (en) Secure machine platform that interfaces with operating system and customized control programs
US7644287B2 (en) Portion-level in-memory module authentication
US7739516B2 (en) Import address table verification
EP1612666A1 (en) System and method for protected operating systems boot using state validation
US20080005794A1 (en) Information Communication Device and Program Execution Environment Control Method
US20090222653A1 (en) Computer system comprising a secure boot mechanism
US20040230949A1 (en) Native language verification system and method
CN112511306A (en) Safe operation environment construction method based on mixed trust model
CN112287357B (en) Control flow verification method and system for embedded bare computer system
CN112580015A (en) Processing system including trust anchor computing instrument and corresponding method
EP1811460A1 (en) Secure software system and method for a printer
US9213864B2 (en) Data processing apparatus and validity verification method
Noorman Sancus: A low-cost security architecture for distributed IoT applications on a shared infrastructure
Nasser Securing safety critical automotive systems
Chaves et al. Reconfigurable cryptographic processor
CN118427891A (en) System security management method and device and electronic equipment
CN115982699A (en) Malicious attack defense method, device, equipment and medium based on secure memory
CN114201761A (en) Enhancing security of a metric agent in a trusted computing system
CN116776397A (en) Method for verifying data in a computing unit
Weiß System Architectures to Improve Trust, Integrity and Resilience of Embedded Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination