What If We Got Hacked? How We Protect Our Update Pipeline
In December 2020, SolarWinds -- a company trusted by 300,000 organizations to manage their IT infrastructure -- distributed a backdoor called SUNBURST to approximately 18,000 customers through a routine software update. The backdoor was signed with SolarWinds' legitimate code signing certificate. It passed every integrity check. Victims included the U.S. Treasury, the Department of Homeland Security, and Microsoft.
The attackers didn't need to break into 18,000 networks. They broke into one update server.
This wasn't an isolated failure. In 2021, Codecov's Bash Uploader was modified to exfiltrate CI/CD secrets from customer build pipelines -- undetected for two months. In 2023, 3CX distributed a trojanized desktop application to 600,000 customers. The pattern is always the same: compromise the vendor, compromise the customers.
As a security vendor that distributes software updates and detection signatures to enterprise environments, we have to ask ourselves the question that every vendor in this position should ask: if our update infrastructure were fully compromised, what could an attacker actually do with it?
We spent significant time designing an architecture where the answer is: nothing useful.
The Structural Problem with Traditional Update Systems
Most software update systems share a fundamental weakness. The server that distributes updates is also the system that signs them. Compromise the distribution server, get the signing keys. Get the signing keys, sign whatever you want. A backdoored update looks identical to a legitimate one.
This is precisely what happened at SolarWinds, Codecov, and 3CX. The attack chain is simple and repeatable:
- Compromise the distribution server
- Access the signing key
- Sign the malicious payload
- Distribute to customers
We designed our architecture to break this chain at every link.
Our Core Principle: Separate Signing from Distribution
The Surface Portal -- our distribution server -- never holds any signing keys. Not encrypted keys. Not key shares. Not temporary keys. Zero cryptographic key material.
The portal stores pre-signed artifacts and serves them to deployments that request them. It is a relay. If an attacker gains complete control of the portal -- root access, database dump, admin sessions, everything -- they get a file server. They cannot create new signed updates because there is nothing to sign with.
Hardware Tokens, Human Hands
Every platform update that could execute code on a customer's system -- backend images, browser extension packages, native agent binaries -- requires cryptographic signatures from two out of three designated team members. Each team member holds a YubiKey hardware security token.
The signing keys are generated directly on the YubiKey hardware. The private key never exists anywhere else -- not in memory, not on disk, not in transit. When a team member signs an update, the YubiKey performs the cryptographic operation internally and outputs only the signature. The key itself cannot be extracted, even by its owner.
The release process for a platform update:
- A developer builds the update artifact
- Two team members independently verify the artifact against the source commit and build logs
- Each team member signs the artifact's hash using their personal YubiKey
- The two signatures are bundled with the artifact and uploaded to the portal
- The portal verifies both signatures before accepting the upload
- Customer deployments verify the same two signatures before applying the update
An attacker who compromises our portal gets nothing useful for signing. An attacker who compromises one team member's account still can't sign an update -- they'd need a second team member's physical YubiKey and its PIN.
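The 2-of-3 threshold check at the heart of this process is simple to express. The sketch below is illustrative only: HMAC over demo secrets stands in for the YubiKeys' hardware-held signing keys (which never leave the token), and the member names are hypothetical.

```python
import hashlib
import hmac

# Demo stand-ins for the three keyholders. In the real system each private
# key lives only inside a YubiKey and deployments hold the public halves.
KEYHOLDER_SECRETS = {
    "alice": b"alice-demo-secret",
    "bob": b"bob-demo-secret",
    "carol": b"carol-demo-secret",
}
THRESHOLD = 2  # 2-of-3 signatures required

def sign(member: str, artifact_hash: bytes) -> bytes:
    """Stand-in for a hardware token signing the artifact's hash."""
    return hmac.new(KEYHOLDER_SECRETS[member], artifact_hash, hashlib.sha256).digest()

def verify_threshold(artifact: bytes, signatures: dict) -> bool:
    """Accept only if at least THRESHOLD distinct known keyholders signed."""
    digest = hashlib.sha256(artifact).digest()
    valid = {
        member
        for member, sig in signatures.items()
        if member in KEYHOLDER_SECRETS
        and hmac.compare_digest(sig, sign(member, digest))
    }
    return len(valid) >= THRESHOLD

artifact = b"platform-update-v2.4.1"
digest = hashlib.sha256(artifact).digest()
sigs = {"alice": sign("alice", digest), "bob": sign("bob", digest)}
assert verify_threshold(artifact, sigs)                          # 2-of-3: accepted
assert not verify_threshold(artifact, {"alice": sigs["alice"]})  # 1-of-3: rejected
```

The key property: a forged or missing signature can only ever reduce the count of valid signers, so no single compromised account or key can reach the threshold alone.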
Detection Signatures: A Different Risk Profile
Platform updates are infrequent -- a few times per month. Detection signatures for phishing, credential theft, and other threats may need to update daily. Requiring two team members to physically insert their YubiKeys for every signature push would be unsustainable.
This is where we made an important distinction. Platform updates can execute arbitrary code. Detection signatures cannot. Our signatures are structured data -- JSON patterns, regular expressions, URL rules. Even if an attacker pushed malicious signature definitions, they could not execute code on a customer's system. The realistic worst case is degraded detection: false positives or blind spots.
For detection signatures, we use a dedicated signing service on separate infrastructure from the portal. It holds its own hardware-backed key and validates that every payload matches the expected JSON schema before signing. It won't sign arbitrary data. Signing operations are rate-limited and every operation triggers a team notification.
If the portal is compromised, the attacker can't reach the signing service -- different infrastructure, different credentials. If the signing service itself is compromised, the attacker can only sign detection data (not executable code), and every operation is logged and immediately visible.
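The "won't sign arbitrary data" guard can be sketched as a pre-signing validation step. The field names and allowed keys below are hypothetical; the real service validates against a full JSON Schema, but the principle is the same: structured detection data passes, anything else is refused before it ever reaches the key.

```python
import json

# Hypothetical whitelist of fields a detection-signature payload may carry.
ALLOWED_KEYS = {"id", "pattern", "url_rule", "severity"}

def is_valid_signature_payload(raw: bytes) -> bool:
    """Refuse to sign anything that isn't well-formed detection data."""
    try:
        doc = json.loads(raw)
    except ValueError:
        return False
    if not isinstance(doc, dict):
        return False
    # Only known keys, only string values -- structured data, never code.
    return (set(doc) <= ALLOWED_KEYS
            and "id" in doc and "pattern" in doc
            and all(isinstance(v, str) for v in doc.values()))

assert is_valid_signature_payload(b'{"id": "sig-118", "pattern": "evil-login"}')
assert not is_valid_signature_payload(b'#!/bin/sh')                    # not JSON
assert not is_valid_signature_payload(b'{"id": "x", "exec": "cmd"}')   # unknown key
```

Combined with rate limits and per-operation notifications, this keeps even a fully compromised signing service confined to producing data, not code.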
Tamper-Evident Signing Log
Every signing operation -- by a team member's YubiKey or by the signing service -- is recorded in a tamper-evident transparency log. Each entry includes the cryptographic hash of the previous entry, forming a hash chain. If any entry is modified, deleted, or inserted after the fact, the chain breaks.
The log serves as a canary. Even if our signing infrastructure were compromised in a way we didn't anticipate, the log would show evidence of unauthorized activity. Automated monitoring verifies chain integrity continuously and alerts on anomalies.
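The hash-chain mechanism fits in a few lines. This is a simplified sketch with fixed timestamps for determinism; the production log adds signed checkpoints and external witnesses, but the tamper-evidence property shown here is the core of it.

```python
import hashlib
import json

def append_entry(log: list, operation: str) -> None:
    """Append a signing event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"op": operation, "ts": 1700000000 + len(log), "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify_chain(log: list) -> bool:
    """Recompute every link; any edit, deletion, or insertion breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "sign platform v2.4.1")
append_entry(log, "sign detection batch 118")
assert verify_chain(log)
log[0]["op"] = "sign platform v2.4.1-backdoored"  # rewrite history...
assert not verify_chain(log)                       # ...and the chain breaks
```

Because each entry commits to everything before it, an attacker who wants to hide one unauthorized signing operation would have to rewrite every subsequent entry -- which the continuous monitoring would catch.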
What Customer Deployments Verify
Surface Security deployments do not trust the portal's assertion that an update is legitimate. They verify it themselves.
Every deployment has a set of Ed25519 public keys embedded at build time -- baked into the binary itself. When a platform update arrives, the deployment checks:
- Signature threshold. Does this update carry at least 2 valid signatures from the 3 known keyholders?
- Version ordering. Is the version number higher than the currently installed version? This prevents replay attacks where an attacker re-serves an older, vulnerable release.
- Timestamp freshness. Is the signed timestamp reasonably recent? This prevents freeze attacks where an attacker withholds newer updates to keep a deployment on an older version.
If any check fails, the update is rejected. Verification is entirely local -- no callback to our infrastructure required. This is critical for air-gapped deployments with no external connectivity.
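A deployment-side gate combining all three checks might look like the sketch below. The field names, tuple-based versioning, and the 30-day freshness window are assumptions for illustration, and the cryptographic signature verification itself (Ed25519 against the embedded public keys) is elided -- this shows only the policy layer.

```python
TRUSTED_SIGNERS = {"key-a", "key-b", "key-c"}  # embedded at build time
THRESHOLD = 2
MAX_AGE_SECONDS = 30 * 24 * 3600  # hypothetical freshness window

def should_apply(update: dict, installed_version: tuple, now: float) -> bool:
    """All three checks must pass; verification is entirely local."""
    # 1. Signature threshold: at least 2 of the 3 known keyholders.
    #    (Each signature is assumed already cryptographically verified.)
    if len(set(update["valid_signers"]) & TRUSTED_SIGNERS) < THRESHOLD:
        return False
    # 2. Version ordering: reject replays of older or current releases.
    if update["version"] <= installed_version:
        return False
    # 3. Timestamp freshness: reject stale metadata (freeze attacks).
    if now - update["signed_at"] > MAX_AGE_SECONDS:
        return False
    return True

now = 1_700_000_000.0
update = {"version": (2, 4, 1), "signed_at": now - 3600,
          "valid_signers": ["key-a", "key-c"]}
assert should_apply(update, installed_version=(2, 4, 0), now=now)
assert not should_apply(update, installed_version=(2, 4, 1), now=now)  # replay
assert not should_apply({**update, "signed_at": now - 90 * 24 * 3600},
                        installed_version=(2, 4, 0), now=now)          # frozen
```

Note that every input to this decision -- the trusted keys, the installed version, the clock -- lives on the deployment, which is what makes air-gapped verification possible.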
The Compromise Matrix
Every system can be attacked. The question is what an attacker can do after compromising each component.
| Compromised component | What they CAN do | What they CANNOT do |
|---|---|---|
| Portal server | View deployment metadata. Serve existing signed artifacts. | Sign new updates. Push malicious code. Access signing keys. |
| Portal database | Read stored data (public keys only). | Extract private key material. Forge signatures. |
| One team member | Produce one signature (insufficient -- 2-of-3 required). | Release any update unilaterally. |
| Signing service | Sign malicious detection patterns (data only, no code execution). | Sign platform updates. Execute code on customer systems. |
| One team member + the portal | Distribute existing signed bundles. | Create new signed platform updates. |
The most probable attack scenario -- an external attacker compromising our internet-facing portal -- has the least impact on customer security. That is by design.
This architecture means there is no single point of compromise that enables an attacker to push malicious code to customer environments. Even combining multiple compromised components does not bridge the gap.
What We're Still Improving
We believe in being transparent about what we've solved and what we're still working on.
- Build system hardening. Our signing architecture protects the distribution layer. We are continuing to harden the build layer through CI isolation, artifact provenance tracking, and SLSA compliance.
- Formal verification. We are evaluating formal verification of our threshold signing implementation to mathematically prove its correctness.
Why This Matters
Every security vendor distributes software to customer environments. That distribution channel is a position of trust. SolarWinds, Codecov, and 3CX demonstrated what happens when that trust is backed only by operational practices rather than cryptographic guarantees.
We cannot promise we will never be breached -- no vendor can honestly make that claim. What we can promise is that a breach of our update infrastructure cannot become a breach of your network. The signing keys are not there to steal. The threshold requirement cannot be bypassed remotely. The verification happens on your side, not ours.
That is the standard we hold ourselves to, and the standard we believe every security vendor should meet.
If you have questions about our supply chain security architecture or want to discuss how this applies to your environment, get in touch.