Security

This page documents how I've built and secured the infrastructure behind Needinput, what protections are in place for your data, and where the honest limitations are. I believe you deserve to understand exactly what you're trusting when you use this service — including the parts that aren't perfect yet.

Philosophy

My security model is built around two principles: defense in depth and informed transparency. Defense in depth means no single layer of protection is treated as sufficient — physical, network, OS, application, and cryptographic controls each reinforce the others. Informed transparency means I tell you what I can and cannot protect, so you can make decisions accordingly rather than operating on assumptions.

I also operate under a clear division of responsibility: I secure the infrastructure, and you secure your VM. I provide a hardened starting point and the tools for strong privacy — the degree to which you use them is up to you.

Physical & Facility Security

I own all compute, storage, and network hardware outright. The only things I lease are rack space, power, bandwidth, and DDoS protection at the provider's edge. This means no shared equipment and no hardware commingled with other tenants.

Hardware is located in a private, locked cabinet within a professionally managed colocation facility. The data hall requires biometric access. The cabinet itself requires a physical key. The facility operates 24/7 with continuous surveillance and documented access logging.

Data Center Personnel Access

I want to be honest about something that applies to every colocation customer, regardless of provider: data center technicians have physical access to the facility, including my cabinet, for emergency and remote-hands operations. This includes responding to hardware failures, power events, and facility emergencies. In most cases this access is unescorted — that's standard practice across the industry and necessary for the facility to function.

Dedicated, escorted access and private caged space are available at colocation facilities, but at costs typically reserved for large enterprise customers. I'm a small operation and I accept this tradeoff transparently rather than pretending it doesn't exist. This is one of the reasons I've invested so heavily in encryption: physical access to hardware should yield nothing useful to an unauthorized party.

Network Architecture & Edge Security

The network is segmented into isolated layers with strict controls between them:

  • Customer VMs operate on a dedicated public network segment, isolated from infrastructure services
  • Infrastructure services (billing portal, provisioning, internal services) operate on a private network segment accessible only via VPN
  • Management interfaces (hypervisor nodes, storage, switches, iDRAC/BMC) are on a fully isolated out-of-band network with no path to the public internet

At the storage layer, Ceph pools are access-controlled via CephX authentication — customer and infrastructure storage pools use separate credentials, meaning neither can be accessed through the cluster without pool-specific authorization.
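As a sketch of that separation, CephX capabilities can be scoped per pool, so each credential can touch only its own pool. The pool and client names below are illustrative placeholders, not the production ones:

```shell
# Create a credential that can only use RBD images in the customer pool
ceph auth get-or-create client.vmstore \
    mon 'profile rbd' \
    osd 'profile rbd pool=customer-vms'

# A separate credential scoped to the infrastructure pool
ceph auth get-or-create client.infra \
    mon 'profile rbd' \
    osd 'profile rbd pool=infra-services'
```

A client holding only the client.vmstore key is refused by the OSDs for any object in infra-services, and vice versa.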

At the network edge, anti-spoofing ACLs and BGP prefix filtering prevent traffic manipulation and unauthorized route injection. The internet uplink is dual-homed with automatic failover, providing both redundancy and consistent availability.

Edge Load Balancer (HAProxy)

All public HTTPS traffic enters through a redundant pair of HAProxy load balancers. This is an architectural privacy decision: HAProxy does not terminate TLS sessions — your encrypted connection is passed through to the application layer without decryption at the edge, which makes the load balancer structurally incapable of inspecting the content of your HTTPS connections. TLS is terminated only at the application layer, inside the private infrastructure network.
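In HAProxy terms, TLS passthrough means running the HTTPS frontend in TCP mode, so the proxy relays raw bytes and never holds a certificate key. A minimal illustrative fragment, with placeholder names and addresses:

```haproxy
# In TCP mode HAProxy forwards encrypted bytes untouched; there is no
# "bind ... ssl crt" line, so no private key exists at the edge.
frontend https_in
    mode tcp
    bind :443
    default_backend app_tls

backend app_tls
    mode tcp
    # TLS is terminated by the application servers, not here
    server app1 10.0.10.11:443 check
    server app2 10.0.10.12:443 check
```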

Intrusion Detection & Rate Limiting

fail2ban is deployed at both the DMZ edge and application level, monitoring for authentication failures and connection floods. Rate limiting is applied at the HAProxy layer for HTTPS and mail protocols. CrowdSec is in active development and will add collaborative threat intelligence on top of fail2ban post-launch.
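For reference, a fail2ban jail of the kind described here looks like the following. The thresholds shown are illustrative defaults, not the production values:

```ini
# jail.local fragment (illustrative)
[sshd]
enabled  = true
port     = 2222
maxretry = 3
findtime = 10m
bantime  = 1h
```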

Administrative Access & Credentials

All administrative interfaces — including hypervisor management, storage, switches, hardware out-of-band (iDRAC/BMC), and application admin panels — are accessible exclusively through a physically segmented out-of-band management network. Physical management interfaces are isolated to a dedicated management switch stack with no path to the production or customer network. The management network itself sits behind a dedicated VPN gateway on its own separate uplink with its own address space. That address space is null-routed at the provider edge when the VPN is not in use, making the gateway unreachable from the public internet by default.

VPN access requires both PKI certificate authentication and a pre-shared key — there are no passwords involved. Where applicable, administrative application interfaces also require TOTP multi-factor authentication as an additional layer.

Where passwords are unavoidable (legacy interfaces, application accounts), they are entropy-generated and never reused. All administrative credentials are stored in a zero-knowledge encrypted credential manager operating in a jurisdiction with some of the strongest data privacy laws in the world. The manager account itself is protected by FIDO2 hardware key authentication, with the recovery passphrase wrapped in a GPG-encrypted archive, stored in a gocryptfs container, and synced to a cloud provider operating in the same jurisdiction with its own encryption layers.

I've deliberately chosen providers and jurisdictions that create meaningful legal friction against compelled disclosure.

Docker Swarm Autolock

Application secrets (database keys, API credentials, encryption keys) are managed through Docker Swarm Secrets with Autolock enabled. Autolock means the Swarm encryption key is never stored on disk — if a Swarm manager reboots, that manager remains locked and cannot rejoin the Swarm until I manually authenticate via VPN and provide the unlock key from my credential store. The remaining managers continue operating normally, maintaining quorum. This is a deliberate tradeoff: it introduces an operational step on reboot, but ensures secrets cannot be extracted from a stolen or compromised manager disk.
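Operationally, autolock is a single Swarm setting plus a manual step after any manager reboot. A sketch of the flow:

```shell
# Enable autolock; Docker prints the unlock key exactly once, and it is
# kept only in the credential manager, never on a node's disk.
docker swarm update --autolock=true

# Rotate the unlock key if it is ever suspected of exposure:
docker swarm unlock-key --rotate

# After a manager reboots, its encrypted Raft state stays sealed until
# the key is entered interactively over the management VPN:
docker swarm unlock
```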

Customer VM Encryption

Every customer VM uses LUKS2 full-disk encryption with AES-256-XTS. What this means in practice depends on the unlock method you choose at first login:

Option 1: Zero-Knowledge (ZK) Unlock

You configure your own unique passphrase. A pre-boot SSH environment (dracut-sshd) is installed, allowing you to unlock the disk remotely at each boot by SSHing into the initramfs and entering your passphrase directly — your credentials travel encrypted end-to-end from your terminal to the VM, bypassing the hypervisor keyboard buffer entirely.

Under ZK unlock, the encryption key is derived from your passphrase through a memory-hard key derivation function (Argon2id, the LUKS2 default) — the same class of algorithm used in secure password hashing. There is no stored key that could be extracted to unlock your volume. Without knowledge of your passphrase, I have no ability to decrypt your data under any circumstances, including legal compulsion. Your VM cannot boot without your intervention.
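To make "derived from your passphrase" concrete, here is a minimal Python sketch using scrypt from the standard library as a stand-in for this class of memory-hard KDF. LUKS2 itself uses Argon2id with per-volume parameters stored in the header, and the names and parameters below are illustrative only:

```python
import hashlib

def derive_volume_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 256-bit key from a passphrase with a memory-hard KDF.

    Illustrative stand-in: LUKS2 uses Argon2id; scrypt is the same
    class of construction and ships in Python's standard library.
    """
    return hashlib.scrypt(
        passphrase.encode(), salt=salt,
        n=2**14, r=8, p=1,   # CPU/memory cost parameters
        dklen=32,            # 256 bits of key material
    )

salt = b"per-volume-random-salt"   # analogous to salt in the LUKS header
key_a = derive_volume_key("correct horse battery staple", salt)
key_b = derive_volume_key("wrong guess", salt)

assert key_a != key_b   # a different passphrase yields a different key
assert len(key_a) == 32
```

The point of the sketch: the key exists only as the output of the function applied to the passphrase, so there is nothing on disk to seize.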

Option 2: Managed (Auto-Boot) Unlock

A unique encryption key is generated for your VM and embedded in the initramfs, allowing the system to boot without your intervention. This is convenient for workloads that need to survive reboots automatically, but it comes with a meaningful security distinction worth understanding.

Under managed unlock, a randomly generated binary key file is embedded in the VM's initramfs — an unencrypted archive stored in the boot partition. The key isn't human-readable, but it's not highly protected either. An attacker who could fully reconstruct your VM's virtual disk from storage could extract that key and use it to decrypt your data. The practical protection comes from the architectural barriers described below — not from the key itself being encrypted. You are trusting those barriers and me as the operator.
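One common way to wire up this style of auto-boot unlock, shown purely as an illustration (device paths and the key filename here are placeholders):

```shell
# Generate a random binary keyfile and enroll it as a LUKS keyslot
dd if=/dev/urandom of=/boot/boot.key bs=64 count=1
cryptsetup luksAddKey /dev/vda3 /boot/boot.key

# /etc/crypttab then points the initramfs at the keyfile, so the root
# volume unlocks at boot with no passphrase prompt:
#   root  /dev/vda3  /boot/boot.key  luks
```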

Even under managed unlock, your data benefits from significant practical protection due to the underlying storage architecture:

  • Your VM's virtual disk is stored as LUKS-encrypted blocks in a distributed Ceph storage cluster. What Ceph receives, stores, and replicates across nodes is already-encrypted ciphertext — the encryption happens inside the guest kernel before data reaches the virtual disk device.
  • VM data is striped across multiple physical nodes in non-contiguous blocks. An attacker attempting offline decryption would need to: (A) obtain physical access to multiple storage nodes, (B) determine which physical disks contain relevant blocks, (C) reconstruct the distributed object map across nodes, (D) reassemble the virtual disk, and only then (E) extract the boot key from the VM's initramfs.

The practical conclusion: data breaches via disk theft, recycled hardware, or partial storage compromise are impractical to the point of infeasibility, regardless of unlock method, given the combination of physical security, encryption layers, and distributed storage architecture.

What the Encryption Does NOT Protect: The Hypervisor Boundary

Honest disclosure: Once your VM is running and the disk is unlocked, encryption keys reside in your VM's allocated memory. An administrator with root-level access to the physical host could technically perform a memory dump of a running VM and recover keys or plaintext data from RAM.

Current hardware (Intel Xeon Scalable — Cascade Lake generation) does not support hardware-level memory encryption (such as Intel TDX). This means there is no technical guarantee against a compromised or malicious host administrator accessing in-memory data.

I mitigate this through strict internal access controls, a hardened hypervisor configuration, and a commitment not to inspect VM memory. If you require protection against a potentially compromised provider — which is a reasonable thing to want — you should implement application-level encryption for your most sensitive data, in addition to the disk-level LUKS protection I provide.

Hardware-level memory encryption is on my roadmap as hardware is upgraded.

VM Access Controls

In addition to LUKS encryption, I've taken steps to limit what I can do inside your VM without your knowledge:

  • QEMU Guest Agent (QGA) disabled: The guest agent has been fully removed from all VM templates. QGA provides a channel for the hypervisor to inject commands into a running VM — disabling it closes a vector for credential changes or code injection without customer awareness.
  • Cloud-init limitation (transparent): Cloud-init is currently used for initial VM configuration. I can technically reinstall a cloud-init drive and reset the root password via cloud-init, which would take effect on reboot. Any such action would require rebooting your VM — an event you should notice and could monitor for — but the capability exists nonetheless. I'm actively working to eliminate this capability as my provisioning tooling matures, but I don't have a firm timeline yet.
  • Default firewall baseline: Customer VMs ship with a default firewall configuration that drops all inbound connections except SSH on port 2222. All outbound and established inbound connections are allowed. This gives you a secure starting point from which to build your own configuration — I'm not leaving your VM open and expecting you to figure it out.
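As an illustration, the default baseline described above maps onto an nftables ruleset along these lines (the deployed ruleset may differ in detail):

```shell
nft -f - <<'EOF'
table inet baseline {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept   # replies to outbound traffic
        iif "lo" accept                       # loopback
        tcp dport 2222 accept                 # SSH on the non-standard port
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}
EOF
```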

SSH Hardening

Customer VMs are configured with a hardened SSH server by default:

  • Non-standard port (2222) to reduce automated scanning noise
  • Modern cipher suites only: ChaCha20-Poly1305, AES-256-GCM, AES-256-CTR
  • Strong key exchange algorithms: Curve25519, ECDH with SHA-256
  • Compression disabled (prevents CRIME-class attacks)
  • X11 forwarding disabled
  • Verbose authentication logging
  • Maximum 3 authentication attempts per connection
  • Password authentication disabled automatically after SSH key installation (ZK unlock path)

Proxmox node SSH is similarly hardened: Ed25519 keys only, password authentication disabled, listening on the management network exclusively.
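The hardening list above corresponds to an sshd_config along these lines. This is an illustrative fragment; the deployed file may differ in ordering and extras:

```
Port 2222
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
KexAlgorithms curve25519-sha256,ecdh-sha2-nistp256
Compression no
X11Forwarding no
LogLevel VERBOSE
MaxAuthTries 3
PasswordAuthentication no   # set after SSH key installation (ZK path)
```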

Data Protection Summary

Data | At Rest | In Transit | Admin Readable?
VM data (ZK unlock) | LUKS AES-256-XTS (customer key) | Encrypted blocks over Ceph | No — key unknown to provider
VM data (managed unlock) | LUKS AES-256-XTS (provider key) | Encrypted blocks over Ceph | Yes — significant practical barriers apply
Portal passwords | bcrypt/argon2 hash | TLS only, never plaintext | No — not reversible
Billing PII (name, email) | InnoDB AES-CTR encryption | Docker overlay AES-GCM | Yes — required for invoicing
Payment tokens | InnoDB AES-CTR encryption | Docker overlay AES-GCM | Tokenized — card data held by processor

Honest caveat on billing PII: Your name, email address, and payment tokens must be accessible to the application in plaintext to generate invoices and manage orders. This data is encrypted at rest and in transit at the infrastructure layer, but is necessarily accessible to me as the service operator at the application layer. I don't sell it, share it, or use it for anything beyond operating your account.

Redundancy

All production hardware is fully redundant:

  • Compute: Four independent nodes — the cluster tolerates loss of any single node without service interruption
  • Storage: All pools replicated or erasure-coded across nodes — no single disk or node failure causes data loss
  • Network: Dual switch stack in vPC configuration, dual upstream BGP paths with automatic failover, dual 25GbE per node bonded via LACP
  • Power: All nodes connected to dispersed A/B power feeds

The only single point of failure in the current rack is the chassis housing the compute nodes. Chassis of this class are engineered to be highly resilient, but I'm planning to address this with a spare chassis as the next hardware purchase. Power and network infrastructure are already provisioned to support it.

Software & Patching

All software applications running this service are open-source and subject to public audit — in most cases, widely audited by the broader community. I maintain a full list of software in use and will provide it to any member upon request.

Hardware firmware is kept current via vendor maintenance release channels. I've chosen hardware vendors specifically because they make security-relevant firmware updates accessible without enterprise licensing requirements. CVE tracking and firmware updates are an ongoing operational responsibility, not an afterthought.

Security Roadmap

These are known gaps I'm actively working to close, in rough priority order:

  1. CrowdSec deployment — Collaborative threat intelligence layer on top of existing fail2ban, providing adaptive protection against distributed attacks
  2. Docker Swarm manager root encryption + vTPM — Encrypting Swarm manager root filesystems and binding them to virtual TPM state
  3. vTPM on customer VMs — Replacing the boot.key-in-initramfs approach for managed unlock with vTPM-backed key storage, improving the security of the auto-boot model
  4. Hardware memory encryption — Future hardware upgrades will prioritize support for hardware-level memory encryption, closing the hypervisor RAM boundary disclosure above

Reporting a Security Issue

If you discover a security vulnerability or have a concern about the infrastructure, please reach out. I take these reports seriously and will respond promptly. Security improvements suggested by members have a direct path to implementation.
support@needinput.host