Security & Privacy

Your data. Your rules.

Most AI products say “we don’t train on your data.” That’s a policy. Primary is private by architecture — we built it so we can’t read your data, even if we wanted to.

Plain language

What we can and can’t see.

| Capability | Primary | Typical AI products |
| --- | --- | --- |
| Read your conversations | No | Yes (training data unless opt-out) |
| Read your email content | No | Often, when integrated |
| Train on your data | No | Default on Free/Plus tiers |
| Share with third parties | No | Variable, opaque |
| Sell your data | No | |
| Delete on request | Yes | Eventually |
| Reside on your hardware (Pro+) | Yes | |
How it works

The architecture, briefly.

Per-tenant encryption

Every customer's data is encrypted with keys we don't hold in plaintext. The orchestrator processes requests without ever materializing your raw content into log lines, dashboards, or training pipelines.
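
Per-tenant keys can be sketched as key derivation from a master secret that never lands in tenant storage. This is a minimal illustration, assuming an HKDF-style scheme (RFC 5869); the names, salt, and master-secret handling here are hypothetical, not Primary's actual implementation:

```python
import hashlib
import hmac

def hkdf_sha256(master_key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Derive a per-tenant data key from a master secret (RFC 5869 HKDF)."""
    prk = hmac.new(salt, master_key, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                   # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Each tenant gets a distinct key; compromising one tenant's key
# reveals nothing about another's.
key_a = hkdf_sha256(b"master-secret", salt=b"kms-salt", info=b"tenant:alice")
key_b = hkdf_sha256(b"master-secret", salt=b"kms-salt", info=b"tenant:bob")
assert key_a != key_b
```

In a real deployment the master secret would live in an HSM or KMS, so the derived keys are never held in plaintext alongside the data they protect.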

Dedicated GPU compute

Models run on GPUs we provision and operate. Your prompts never transit OpenAI, Anthropic, or any third party. No external API key, no shared inference pool.

Local-first desktop (Pro+)

On Pro and Ultimate, the desktop app runs the agent on your machine. Cloud sync is opt-in per category — your hardware is the trust boundary, not ours.
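
Opt-in-per-category sync can be expressed as a policy where the default for every category is "stay local." A hypothetical sketch (category names and the `SyncPolicy` shape are illustrative, not Primary's API):

```python
from dataclasses import dataclass, field

@dataclass
class SyncPolicy:
    """Cloud sync is off unless a category is explicitly enabled."""
    enabled: dict = field(default_factory=dict)  # category -> bool

    def opt_in(self, category: str) -> None:
        self.enabled[category] = True

    def should_sync(self, category: str) -> bool:
        return self.enabled.get(category, False)  # default: local only

policy = SyncPolicy()
policy.opt_in("calendar")
assert policy.should_sync("calendar")
assert not policy.should_sync("messages")  # never leaves the machine unless opted in
```

The design choice is the default: absence of a setting means no sync, so a new category added in an update cannot silently start uploading.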

The claim

“We can’t read your data.”

Every AI product claims privacy. Most of them mean: we have a policy that says we won’t look. Primary’s claim is structural — if we wanted to read your data tomorrow, we’d have to redesign the system to do it.

Concretely: your messages, files, and memory live in encrypted storage that the orchestrator decrypts only inside the inference GPU process, transiently, for the duration of one request. No log lines, no dashboards, no analytics pipelines see plaintext. Our customer-support tooling exposes account metadata (email, plan, billing status) — never the contents of your interactions.
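
The "transient, for the duration of one request" property can be sketched as a scope that materializes plaintext only inside a context and scrubs the buffer on exit. This is an illustrative pattern, assuming a toy stand-in for decryption (the XOR "cipher" below is for demonstration only, not real cryptography):

```python
from contextlib import contextmanager

@contextmanager
def transient_plaintext(decrypt, ciphertext: bytes):
    """Plaintext exists only inside the `with` block; the buffer is zeroed on exit."""
    buf = bytearray(decrypt(ciphertext))   # materialize for one request
    try:
        yield bytes(buf)
    finally:
        for i in range(len(buf)):          # best-effort zeroing on exit
            buf[i] = 0

# Toy "decryption" for illustration only.
toy_decrypt = lambda ct: bytes(b ^ 0x42 for b in ct)
ciphertext = bytes(b ^ 0x42 for b in b"hello")

with transient_plaintext(toy_decrypt, ciphertext) as pt:
    assert pt == b"hello"                  # usable only within this request scope
```

Logging and analytics code would only ever see the ciphertext handle, never the value yielded inside the block.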

We don’t train on your data. Not for our models, not for anyone else’s. There’s no opt-in, no opt-out — there’s no training pipeline that touches customer content at all. The models we ship are trained on public datasets and curated synthetic data; your usage doesn’t feed them.

When you delete an account, we honor it. Hard delete is final at 90 days post-cancellation (with a 30-day soft-delete window in case you change your mind). Encrypted backups are wiped on the same schedule.
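
The stated schedule works out as follows (the function name is illustrative; the 30- and 90-day figures come straight from the policy above):

```python
from datetime import date, timedelta

def deletion_dates(cancelled_on: date) -> tuple[date, date]:
    """30-day soft-delete window, hard delete (backups included) at 90 days."""
    soft_delete_until = cancelled_on + timedelta(days=30)  # recoverable until here
    hard_delete_on = cancelled_on + timedelta(days=90)     # final
    return soft_delete_until, hard_delete_on

soft, hard = deletion_dates(date(2025, 1, 1))
assert soft == date(2025, 1, 31)
assert hard == date(2025, 4, 1)
```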

Have a security question we should be answering here? Email security@primary.net.

Compliance & roadmap

Where we are. Where we’re going.

| Item | Status |
| --- | --- |
| GDPR-aligned data handling | shipped |
| Encrypted storage at rest (AES-256) | shipped |
| TLS 1.3 in transit, HSTS enforced | shipped |
| No third-party model providers | shipped |
| No training on customer data | shipped |
| Public uptime status page | in progress |
| SOC 2 Type 1 | in progress |
| SOC 2 Type 2 | planned |
| HIPAA BAA available | planned |
| Penetration test report (third party) | in progress |

For DPA, security questionnaires, or specific compliance asks, email security@primary.net.

Threat model

What we’re defending against.

Account takeover

bcrypt password hashing (work factor 12), TOTP 2FA available on every account, session tokens scoped to device with 30-day expiry. We watch for anomalous sign-ins.
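
The TOTP part of the controls above follows RFC 6238. A minimal standard-library sketch (not Primary's production code), verified against the RFC's published test vector:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890", T=59, 8 digits.
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

A server-side verifier would compare codes for the current step and its immediate neighbors to tolerate clock drift, using a constant-time comparison such as `hmac.compare_digest`.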

Insider access

Production access requires approval, MFA, and audited session recording. Customer content is never accessible from internal tools — only metadata.
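
Metadata-only tooling can be enforced structurally with an allow-list, so content fields never reach the serialized view at all. A hypothetical sketch (field names follow the metadata examples in this document; `support_view` is illustrative):

```python
# Only explicitly allow-listed metadata fields are visible to internal tools.
SUPPORT_VISIBLE = {"email", "plan", "billing_status"}

def support_view(account: dict) -> dict:
    """Project an account record down to its allow-listed metadata."""
    return {k: v for k, v in account.items() if k in SUPPORT_VISIBLE}

account = {
    "email": "user@example.com",
    "plan": "pro",
    "billing_status": "active",
    "messages": ["..."],          # content: structurally absent from the view
}
assert "messages" not in support_view(account)
```

An allow-list (rather than a deny-list) means a newly added content field is hidden by default instead of leaking until someone remembers to block it.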

Third-party model exposure

We don't use third-party model APIs for inference. Your prompts and outputs do not transit OpenAI, Anthropic, or any external provider.

Phishing and impersonation

Primary's outbound email is DKIM/SPF/DMARC signed. We never ask for your password by email. Your agent's phone number is fixed and verifiable.

Data exfiltration

Per-tenant encryption keys, no shared databases across customers, audit logs on every privileged operation.

Service compromise

The local-first desktop app (Pro+) lets you run sensitive tasks entirely on your machine. Cloud sync is opt-in per category.

Send it to Primary.

Your AI agent is one click away. Free for 7 days, no card required.