How flexFS Works

flexFS separates metadata from data. Clients resolve names and permissions through a fast, lightweight metadata server, then read and write blocks directly to object storage — no expensive central file servers in the path.

For latency-sensitive or hybrid-cloud workloads, Enterprise proxy groups can act as regional caches between clients and object storage — reducing round-trip times and enabling on-prem writeback.

Performance That Scales

flexFS is designed for throughput-intensive workloads where every second of compute time costs money.

Linear Read Scaling

Aggregate read throughput scales linearly as mount clients are added. Each client reads directly from object storage, so adding compute capacity automatically adds I/O capacity.

Benchmarked Against the Field

Benchmarked head-to-head against AWS EFS and Amazon FSx for Lustre, flexFS delivers comparable or superior throughput at a fraction of the cost by leveraging metered object storage instead of provisioned infrastructure.

Real-World Validation

In a PLINK genomic analysis of UK Biobank data, flexFS dramatically reduced wall-clock time and total cost of ownership compared with traditional cloud filesystem approaches.

The Cost Insight

Optimizing for storage cost alone can dramatically increase computing costs. Slow storage means idle CPUs burning money. flexFS optimizes both — cheaper storage and faster compute.
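The tradeoff can be made concrete with a toy calculation (all prices and durations below are invented for illustration, not flexFS benchmarks):

```python
# Hypothetical cost comparison: cheap-but-slow storage can cost more
# overall once idle compute time is counted. All numbers are made up.
def total_cost(storage_cost_hr: float, compute_cost_hr: float,
               job_hours: float) -> float:
    """Total spend for one job: storage plus compute, both billed hourly."""
    return (storage_cost_hr + compute_cost_hr) * job_hours

# Same job, two storage tiers: slow storage stretches a 4-hour job to 10.
slow = total_cost(storage_cost_hr=1.0, compute_cost_hr=20.0, job_hours=10)
fast = total_cost(storage_cost_hr=4.0, compute_cost_hr=20.0, job_hours=4)
print(f"slow storage: ${slow:.0f}, fast storage: ${fast:.0f}")
# (1 + 20) * 10 = 210 vs (4 + 20) * 4 = 96: the pricier storage wins overall
```

The storage line item quadrupled, yet total spend fell by more than half — which is the point of optimizing both sides at once.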

Split-Path Architecture

Every flexFS operation follows one of two paths. Understanding this split is key to understanding why flexFS scales.

flexFS architecture: mount clients communicate metadata via RPC/TLS to the metadata server, and read/write data blocks via HTTPS directly to object storage or through proxy groups

Metadata Path

Binary RPC over TLS

  • File names, directory structure, and hierarchy
  • Permissions, ownership, and ACLs
  • File locks (POSIX and BSD)
  • Block-to-object mapping resolution

Data Path

HTTPS REST to Object Storage

  • Block reads directly from S3, GCS, Azure, or OCI
  • Block writes directly to object storage
  • Optional proxy servers for caching and writeback
  • Each client adds throughput capacity

"The metadata server is never a throughput bottleneck for large file I/O."

Metadata operations are small and fast. Data flows directly between clients and object storage.
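The split can be sketched as a read flow. Everything here — class names, the `resolve` call, the key layout — is an assumption made for illustration, not the real flexFS client API; only the shape (small metadata RPC first, bulk block fetches second) comes from the description above.

```python
# Toy model of flexFS's split-path read: the metadata server resolves the
# block-to-object mapping, then bulk bytes flow straight from object storage.
from dataclasses import dataclass

@dataclass
class BlockRef:
    object_key: str   # object-storage key holding this block
    offset: int       # byte offset of the block within the file

class MetadataClient:
    """Stands in for the binary-RPC-over-TLS metadata path."""
    def resolve(self, path: str) -> list[BlockRef]:
        # Small, fast lookup: names, permissions, block-to-object mapping.
        return [BlockRef(f"vol1/{path}/blk-{i}", i * 1_048_576) for i in range(3)]

class ObjectStore:
    """Stands in for the HTTPS data path to S3/GCS/Azure/OCI."""
    def get(self, key: str) -> bytes:
        return f"<data for {key}>".encode()

def read_file(path: str, md: MetadataClient, store: ObjectStore) -> bytes:
    refs = md.resolve(path)                           # metadata path: tiny RPC
    blocks = [store.get(r.object_key) for r in refs]  # data path: bulk bytes
    return b"".join(blocks)            # the metadata server never sees data

data = read_file("genomes/chr1.bed", MetadataClient(), ObjectStore())
```

Because `read_file` only sends one small RPC per open, the metadata server's load grows with file *count*, not file *size* — which is why it never bottlenecks large I/O.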

Core Capabilities

Every feature is designed around a single goal: make object storage behave like a local filesystem without compromising on performance or correctness.

Time-Travel Mounting

Mount any volume at any point in time as a read-only snapshot. Block retention policies enable auditing, compliance, and disaster recovery without maintaining separate backup infrastructure.

Zero-Downtime Updates

Seamless session handoff during client updates. The new process takes over the mount point without unmounting. No data loss, no interrupted reads or writes, completely transparent to applications.

Three-Tier Caching

L1 in-memory LRU for hot data. L2 on-disk persistent cache survives restarts. L3 enterprise proxy groups provide shared, regional caching across mount clients. Each tier reduces round-trips to object storage.
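The lookup order can be sketched with a toy cascade (a model of the tiering described above, not flexFS internals — the L2 and L3 tiers are plain dicts here for brevity):

```python
# Three-tier lookup: L1 in-memory LRU, then L2 on-disk, then L3 proxy
# group, and only on a full miss a round-trip to object storage.
from collections import OrderedDict

class LRU:
    def __init__(self, capacity: int):
        self.capacity, self.d = capacity, OrderedDict()
    def get(self, key):
        if key not in self.d:
            return None
        self.d.move_to_end(key)          # mark as most recently used
        return self.d[key]
    def put(self, key, value):
        self.d[key] = value
        self.d.move_to_end(key)
        if len(self.d) > self.capacity:
            self.d.popitem(last=False)   # evict least recently used

def read_block(key, l1: LRU, l2: dict, l3: dict, object_store: dict):
    for lookup in (l1.get, l2.get, l3.get):
        value = lookup(key)
        if value is not None:
            l1.put(key, value)           # promote hits into the hot tier
            return value
    value = object_store[key]            # miss everywhere: one round-trip
    l1.put(key, value)
    return value

l1, l2, l3 = LRU(capacity=2), {}, {"blk-7": b"warm"}
store = {"blk-9": b"cold"}
assert read_block("blk-7", l1, l2, l3, store) == b"warm"  # L3 hit, promoted
assert l1.get("blk-7") == b"warm"                         # now served from L1
```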

Block-Based I/O

Files are split into configurable blocks (256 KiB to 8 MiB). Each block is individually compressed and encrypted, then stored as a separate object. Enables parallel I/O and granular caching.
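The object-count arithmetic follows directly from the block-size range above — here for a 1 GiB file at the two extremes of an assumed configuration:

```python
# Block-based layout: a file becomes ceil(size / block_size) separate
# objects, each independently compressed, encrypted, and cacheable.
import math

def block_count(file_size: int, block_size: int) -> int:
    return math.ceil(file_size / block_size)

GiB, MiB = 1 << 30, 1 << 20
print(block_count(1 * GiB, 1 * MiB))   # 1024 objects at a 1 MiB block size
print(block_count(1 * GiB, 8 * MiB))   # 128 objects: fewer, larger requests
```

Smaller blocks mean more parallelism and finer-grained caching; larger blocks mean fewer object-storage requests per file — the configurable size lets a volume pick its point on that curve.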

Full POSIX Compliance

All FUSE operations implemented. File locking via POSIX fcntl and BSD flock. Hard links, symbolic links, extended attributes, and ACLs. Your existing tools work unchanged.
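Because the locks are the standard kernel interfaces, ordinary code exercises them unchanged. The sketch below demonstrates both lock families with Python's stdlib bindings on a temp file (this exercises the OS locking API itself, not flexFS):

```python
# BSD flock and POSIX byte-range locks via the standard fcntl interfaces —
# the same calls applications would make against a flexFS mount.
import fcntl
import tempfile

with tempfile.NamedTemporaryFile() as f:
    fcntl.flock(f, fcntl.LOCK_EX)        # BSD-style exclusive whole-file lock
    f.write(b"guarded write")
    f.flush()
    fcntl.flock(f, fcntl.LOCK_UN)        # release

    fcntl.lockf(f, fcntl.LOCK_EX, 13)    # POSIX byte-range lock, bytes 0..12
    fcntl.lockf(f, fcntl.LOCK_UN, 13)    # release the same range
    status = "locks acquired and released"

print(status)
```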

Compression

Choose LZ4 for speed, Snappy for balance, or zstd for maximum compression. Applied per-block before encryption. Reduces storage costs and network bandwidth without application changes.
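The per-block, compress-before-encrypt ordering can be sketched as follows. zlib stands in for LZ4/Snappy/zstd (those need third-party bindings); the point is the ordering and granularity, not the codec, and the 256 KiB block size is just one value from the range above:

```python
# Each block is compressed independently, so any block can later be
# decompressed (and decrypted) on its own without touching neighbors.
import zlib

BLOCK = 256 * 1024  # one of the configurable block sizes

def compress_blocks(data: bytes) -> list[bytes]:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [zlib.compress(b) for b in blocks]   # per-block, pre-encryption

payload = b"ACGT" * (512 * 1024)                # 2 MiB of compressible data
compressed = compress_blocks(payload)
ratio = sum(map(len, compressed)) / len(payload)
print(f"{len(compressed)} blocks, compressed to {ratio:.1%} of original size")
```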

Enterprise Features

Security, scale, and operational controls for production deployments.

Proxy Groups

CDN-like regional caching infrastructure. Clients select the lowest-latency proxy via RTT-based probing. Rendezvous hashing ensures consistent block placement across proxies. Supports writeback mode for write-heavy workloads.
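Rendezvous (highest-random-weight) hashing is the named placement scheme, and a toy version shows why it gives consistency: every client independently computes the same owner for a block, and removing a proxy only remaps the blocks that proxy owned. (This is a generic sketch of the technique, not flexFS's implementation.)

```python
# Rendezvous hashing: each (proxy, block) pair gets a pseudorandom weight;
# the proxy with the highest weight owns the block.
import hashlib

def owner(block_id: str, proxies: list[str]) -> str:
    def weight(proxy: str) -> int:
        h = hashlib.sha256(f"{proxy}:{block_id}".encode()).digest()
        return int.from_bytes(h[:8], "big")
    return max(proxies, key=weight)

proxies = ["proxy-a", "proxy-b", "proxy-c"]
placement = {blk: owner(blk, proxies) for blk in ("blk-1", "blk-2", "blk-3")}

# Removing proxy-b only remaps blocks that proxy-b owned; every other
# block keeps its owner, so most of the cache stays warm.
survivors = [p for p in proxies if p != "proxy-b"]
for blk, old in placement.items():
    assert old == "proxy-b" or owner(blk, survivors) == old
```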

Multi-Volume Management

Create unlimited volumes with independent configurations. Set per-volume quotas, issue volume-scoped access tokens, and restrict access to specific mount-path subtrees for fine-grained security boundaries.

End-to-End Encryption

AES-256-GCM encryption with Argon2id key derivation. Encryption and decryption happen entirely on the client. Keys never leave your infrastructure and are never transmitted to the metadata server or object storage.
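The client-side flow can be sketched with stdlib primitives. scrypt stands in here for Argon2id (which requires a third-party binding); both are memory-hard KDFs, and the shape is the same — derive the 256-bit key locally, so it never leaves the machine:

```python
# Client-side key derivation: passphrase + stored salt -> 32-byte key
# suitable for AES-256-GCM. Only ciphertext ever reaches object storage.
import hashlib
import os

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Memory-hard KDF (scrypt as an Argon2id stand-in), 32-byte output
    return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)     # random per volume; stored, but not secret
key = derive_key(b"correct horse battery staple", salt)
assert len(key) == 32     # exactly an AES-256 key's worth of bytes
```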

Kubernetes Dynamic Provisioning

CSI driver with full StorageClass support. Dynamically provision flexFS volumes for pods. Integrates with standard Kubernetes storage workflows, persistent volume claims, and access modes.

Community vs Enterprise

Start free with the Community Edition. Upgrade when you need enterprise-grade security and scale.

Feature                              Community (Free)    Enterprise (Contact Sales)
Volumes                              1                   Unlimited
Storage                              5 TiB               Unlimited
Files                                5M                  Unlimited
Cloud backends                       ✓                   ✓
Full POSIX compliance                ✓                   ✓
On-disk caching                      ✓                   ✓
Kubernetes (static provisioning)     ✓                   ✓
End-to-end encryption                ✗                   ✓
Proxy groups                         ✗                   ✓
Kubernetes (dynamic provisioning)    ✗                   ✓
Usage metering                       ✗                   ✓
Fine-grained access control          ✗                   ✓

See flexFS In Action

Deploy the Community Edition in minutes. Full POSIX compliance, all cloud backends, up to 5 TiB — free forever.