Trusted Computing Base
A Trusted Computing Base (TCB) comprises all parts of a computing system that provide a secure operating environment. The TCB extends beyond system software and can include the underlying hardware, firmware used for booting and other purposes, the operating system and device drivers, virtualization software (if any), middleware and application software.
The TCB extends beyond the computer system itself to include attributes of the system’s physical location, and the policies and security controls applied to the system.
The scope of the TCB can be complex, and can vary across systems, but its mission is straightforward: to establish and maintain the security of a computing system, principally by implementing and enforcing system-wide information security policies.
What does a TCB do?
The TCB achieves system security through a combination of mechanisms (a minimal access-control sketch follows this list):
- Implementing access control
- Performing user authentication
- Managing/granting privileges only to specific applications, processes or users
- Enforcing policies, especially through access control
- Defending against malware and other software threats and bad actors
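To make these mechanisms concrete, here is a minimal, hypothetical sketch in C of a reference-monitor-style access check. The policy table, identifiers and function names are illustrative assumptions, not ProvenCore's API; the point it demonstrates is that every request is mediated by a single check and denied unless the policy explicitly grants it.

```c
/* Minimal, hypothetical sketch of reference-monitor style access control;
 * not ProvenCore's API. Every request is checked against an explicit
 * policy, and anything not granted is denied (fail closed). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

enum perm { PERM_READ = 1, PERM_WRITE = 2 };

struct acl_entry {
    int      subject;   /* hypothetical user or process identifier   */
    int      object;    /* hypothetical resource identifier          */
    unsigned granted;   /* bitmask of permissions the policy grants  */
};

/* Toy policy table; a real TCB would protect this state from tampering. */
static const struct acl_entry policy[] = {
    { .subject = 1000, .object = 42, .granted = PERM_READ },
    { .subject = 1001, .object = 42, .granted = PERM_READ | PERM_WRITE },
};

/* Deny by default; allow only when the policy explicitly grants the request. */
static bool access_allowed(int subject, int object, unsigned requested)
{
    for (size_t i = 0; i < sizeof policy / sizeof policy[0]; i++) {
        if (policy[i].subject == subject && policy[i].object == object)
            return (policy[i].granted & requested) == requested;
    }
    return false;
}

int main(void)
{
    printf("write by 1000 on 42: %s\n",
           access_allowed(1000, 42, PERM_WRITE) ? "allowed" : "denied");
    printf("write by 1001 on 42: %s\n",
           access_allowed(1001, 42, PERM_WRITE) ? "allowed" : "denied");
    return 0;
}
```

Failing closed in this way, so that anything not explicitly granted is denied, is what allows the rest of the system to rely on the TCB's access decisions.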
What is Trust?
The Merriam-Webster dictionary defines trust as:
- assured reliance on the character, ability, strength, or truth of someone or something; one in which confidence is placed
- to rely on the truthfulness or accuracy of; to place confidence in: rely upon
These definitions hold for cybersecurity, even as applied to a Trusted Computing Base.
More precisely, in the context of cybersecurity, “trusted” means critical to security within the scope of the system. But is “Trusted” the same as “Secure”?
Trusted vs. Trustworthy
Computer system security is predicated upon trust in key software and hardware. It is axiomatic that the TCB is trusted, but that trust does not necessarily mean that the TCB is inherently trustworthy. For example, operating systems and other system software that comprise and/or participate in the TCB routinely suffer from security-critical bugs.
To be wholly trustworthy, a TCB must demonstrate that it is also secure by including only trustworthy components. Such components must have security attributes that are:
- Auditable/verifiable – can its integrity and secure configuration be checked at build time and at run time?
- Tamper-proof/tamper-evident – is the software resilient against malicious modification, and/or do methods exist to determine whether it has been modified by factors or actors external to the TCB? (A minimal sketch follows this list.)
- Bypass-proof – does the software architecture prevent bad actors and malware from taking paths around security measures?
- Simple – A simple code base is easier to verify and maintain than a complex one.
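As a hedged illustration of the tamper-evident and auditable attributes above, the sketch below measures a component image and compares its digest against a known-good value before the component is used. OpenSSL's one-shot SHA256() is used only to keep the example short; the expected digest, function name and surrounding logic are assumptions for illustration, and a real TCB would anchor the reference value in hardware and verify signatures rather than bare hashes.

```c
/* Minimal sketch of tamper-evidence: measure a component image and compare
 * the digest against a known-good value before handing control to it.
 * Hypothetical example, not ProvenCore code. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

/* Placeholder reference digest: in practice this would be provisioned
 * securely (e.g., fused or signed), never hard-coded in mutable storage. */
static const uint8_t expected_digest[SHA256_DIGEST_LENGTH] = { 0 };

bool component_is_untampered(const uint8_t *image, size_t image_len)
{
    uint8_t digest[SHA256_DIGEST_LENGTH];

    /* One-shot hash of the component image. */
    SHA256(image, image_len, digest);

    /* A constant-time comparison is preferable in production code. */
    return memcmp(digest, expected_digest, sizeof digest) == 0;
}
```

This measure-and-compare step is the heart of secure and measured boot chains, in which each stage verifies the next before transferring control to it.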
For trustworthiness, smaller and simpler are better. The larger and more complex a TCB, the larger an attack surface it presents. Size also correlates inversely with code quality – more code means more bugs, many of which will prove exploitable. Moreover, size and complexity make the TCB more difficult to verify and audit. Ideally, a TCB should be as small and straightforward as possible while still offering the requisite functionality and security guarantees.
Architecturally, the inclusion of any component in the TCB should be carefully considered and revisited or reviewed with each system update – each added component can become a point of failure or insecurity.
Secure By Design
It is a common refrain of analysts, CISOs and other security practitioners that software developers do not make the security of their code a priority design consideration. While the practice of DevSecOps has accomplished much to imbue the development life cycle with checks for and attestations of the security of software components, those build controls end up acting as Band-Aids over software that is intrinsically insecure.
Secure by Design is increasingly the preferred development approach for ensuring security of software systems. In Secure by Design, security is paramount in system architecture and integrated into every layer.
The approach extends to all aspects of creating trusted software and the security functions implemented by that software: authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, safety and non-repudiation.
At ProvenRun, we practice Secure by Design to create components and applications and to deliver engineering services that help our customers’ products be foundationally secure.
At the center of a Trusted Computing Base – the Secure OS
The most important component in a TCB is the trusted operating system. Other key aspects of the TCB depend upon the trusted OS – trusted applications, services and device interfaces in particular. But do trusted, secure OSes even exist? Or are they the unicorns of software engineering?
Pragmatically, there are two complementary assumptions regarding OSes among IT professionals, especially at device OEMs:
- OSes are black boxes whose security must be taken for granted, as added value is delivered in applications (system-level security is someone else’s problem)
- OSes are complex and, as a consequence, buggy and subject to vulnerabilities, but organizational expertise, resources and time are insufficient to build and maintain an operating system
These two assumptions have led to the development and deployment of vast fleets of insecure, and thereby untrustworthy, devices that plague the IoT ecosystem.
The central tenet of these assumptions, however, is not wrong – it is not the business of OEMs and other software developers to build and maintain their own OSes. For the last two decades, most system designers have looked to third parties (ISVs, OSVs and open source projects) to obtain the OS used in IoT/embedded systems and other dedicated computing designs. Unfortunately, the vast majority of those OSes were not and are not secure.
The secure operating system is not a unicorn. It is quite possible and practical to develop and deliver a verifiably secure and trustworthy OS. At ProvenRun, we have found that in addition to Secure by Design, the best way to establish rigorous trustworthiness lies in formal software verification. This process, built upon mathematical proofs, testifies to the absence of bugs and associated faults and vulnerabilities.
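As a small taste of what formal verification looks like in practice (a toy illustration only, not ProvenCore's code or methodology), the sketch below states a contract for a trivial C function in ACSL, the specification language used by tools such as Frama-C. The proof obligation generated from such a contract is discharged mathematically, so the stated property holds for every possible input rather than only the inputs covered by tests.

```c
/* Toy formal-verification illustration (hypothetical example, not ProvenCore
 * code): the ACSL contract states that clamp() never returns an out-of-range
 * index, and a prover such as Frama-C/WP can discharge this for all inputs. */

/*@ requires limit > 0;
    assigns  \nothing;
    ensures  \result < limit;
*/
unsigned int clamp(unsigned int n, unsigned int limit)
{
    /* If n is in range, return it; otherwise return the last valid index. */
    return (n < limit) ? n : limit - 1u;
}
```

Scaling this idea from a single function to an entire operating-system kernel is what makes a formally verified OS so demanding to build, and so valuable as the anchor of a TCB.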
Only two operating systems have ever been formally verified: seL4 and ProvenRun’s ProvenCore. And only ProvenCore has a professional organization standing behind it, offering support, maintenance and indemnification.
Trust But Verify
Trust is always conditional. System components that are labeled as “trusted” should be treated with skepticism until they demonstrate their security properties.
Don’t get stuck agonizing over the scope and content of your TCB, but don’t ignore the need to define it as a core part of your designs. ProvenCore can make your trusted computing choices simpler and more straightforward.