Version: 1.3

Introduction

Continuum is a framework for the secure serving of AI models. It has two main security goals: (1) Protect user data. (2) Protect the AI model. More precisely, Continuum protects user data and AI model weights against the infrastructure, the service provider, and others.

(Figure: Continuum sketch)

What does this mean? The "infrastructure" is the basic hardware and software stack that the given AI app runs on. This includes all hardware and software of the underlying cloud platform. In the case of ChatGPT, this would be Azure. The "service provider" is the entity that provides and controls the actual AI app. In the case of ChatGPT, this would be OpenAI. You can learn more about this in the security goals section.

Using Continuum

Edgeless Systems provides Continuum as a SaaS platform at ai.confidential.cloud.

The web service that's running there is a ChatGPT-style chat interface that protects your prompts end-to-end with confidential computing. You can learn more about how this service works in the Overview section.

If you are interested in using Continuum directly via an API, check the APIs section.
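As a rough illustration of what API access could look like, the sketch below builds a chat-style request body. The endpoint URL path, model name, and request format are assumptions for illustration only; the actual interface is defined in the APIs section.

```python
import json

# Hypothetical sketch: the endpoint path, model name, and JSON shape below
# are assumptions, not Continuum's documented API.
API_URL = "https://ai.confidential.cloud/v1/chat/completions"  # assumed path

def build_chat_request(prompt: str, model: str = "example-model") -> bytes:
    """Serialize a chat request body in a common JSON chat format."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body).encode("utf-8")

payload = build_chat_request("What is confidential computing?")
print(json.loads(payload)["messages"][0]["role"])
```

A client would POST such a payload to the service over TLS; see the APIs section for the authoritative request and response formats.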

How Continuum works

Continuum uses a novel combination of confidential computing and advanced sandboxing. Only this combination makes it possible to protect user data and AI models against both the infrastructure and the service provider.

Confidential computing is a hardware-based technology that keeps data encrypted even during processing. Further, confidential computing makes it possible to verify the integrity of workloads. Out of the box, without Continuum, confidential computing can at most protect data against the infrastructure.
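The integrity-verification idea can be illustrated with a toy measurement check: a workload is accepted only if its digest matches a known-good reference value. This is a conceptual sketch only; real confidential computing uses hardware-signed attestation reports, not a plain file hash, and the values below are made up.

```python
import hashlib

# Conceptual illustration of attestation-style integrity checking.
# Real confidential computing relies on hardware-signed reports; this
# toy version only compares a SHA-256 digest against a reference value.

def measure(workload: bytes) -> str:
    """Compute a measurement (digest) of a workload image."""
    return hashlib.sha256(workload).hexdigest()

def verify(workload: bytes, expected_measurement: str) -> bool:
    """Accept the workload only if its measurement matches the reference."""
    return measure(workload) == expected_measurement

image = b"example workload image"
reference = measure(image)                 # reference value published up front
assert verify(image, reference)            # unmodified workload passes
assert not verify(b"tampered image", reference)  # modified workload is rejected
```

The point of the sketch is the trust model: a verifier can detect any change to the workload before sending it data, which is what "verify the integrity of workloads" means in practice.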

You can learn about the inner workings of Continuum in the architecture section.