A mini-PC. No GPU. No surprise.
Full Fathom AI runs on a single x86_64 mini-PC. Most vessels already have suitable hardware onboard; where they don't, we provision. Below is what the runtime needs and what we'll size for your fleet during a pilot.
What the onboard runtime needs.
Recommended configurations, a tested hardware list, and measured performance numbers (cold start, warm start, p50/p95 latency, memory footprint) are published per release against a dated reference rig. We share the current set under pilot rather than as headline numbers that would drift between releases.
One binary. Four ways to run it.
The onboard runtime ships as a statically linked Linux binary — one file, approximately 27 MB, no runtime dependencies. How you host that binary is up to you.
Drop the binary and bundle onto a Linux mini-PC. Start it with systemd. Quickest install; requires a one-time visit to the vessel.
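As a sketch of the bare-metal path: a minimal systemd unit might look like the one below. The install path, unit name, and `--bundle` flag are illustrative assumptions, not the shipped defaults.

```
# /etc/systemd/system/fullfathom.service — illustrative sketch, not the shipped unit
[Unit]
Description=Full Fathom onboard runtime
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
# Path and flag are placeholders; the actual invocation is documented per release
ExecStart=/opt/fullfathom/fullfathom --bundle /opt/fullfathom/bundle
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Once the file is in place, `systemctl enable --now fullfathom` starts it and keeps it across reboots.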
Run the binary inside a Linux VM on an existing onboard hypervisor (VMware, Hyper-V, KVM). No additional licensing — the guest is just a Linux process host.
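On a KVM host, provisioning that guest is a one-liner; a hedged sketch with `virt-install`, where the image path, sizing, and OS variant are placeholders to be set during scoping:

```
# Illustrative only — sizing and image path are placeholders, not recommendations
virt-install \
  --name fullfathom \
  --memory 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/fullfathom.qcow2 \
  --import \
  --os-variant generic \
  --network default \
  --noautoconsole
```

Inside the guest, the install is identical to the bare-metal path: copy the binary, start it with systemd.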
Run the binary alongside other onboard services on a shared Linux host. The resource carve-out is reviewed as part of the pilot so nothing starves.
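On a shared host, that carve-out can be expressed with standard systemd resource controls; a sketch of a drop-in, where every limit is a placeholder to be agreed during the pilot rather than our recommendation:

```
# /etc/systemd/system/fullfathom.service.d/limits.conf — placeholder values
[Service]
CPUQuota=200%
MemoryHigh=3G
MemoryMax=4G
IOWeight=100
TasksMax=512
```

`systemctl daemon-reload` followed by a restart applies the caps without touching the main unit file.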
We can produce an OCI-compatible container image for fleets with an onboard container host or a management plane. Not part of the default install; raise it during scoping.
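For fleets that do take the container route, the image stays trivial because the binary is static; a minimal Containerfile sketch (file names, the `scratch` base, and the `--bundle` flag are illustrative assumptions, not the image we ship):

```
# Containerfile — illustrative; a statically linked binary needs no base layers
FROM scratch
COPY fullfathom /fullfathom
COPY bundle /bundle
ENTRYPOINT ["/fullfathom", "--bundle", "/bundle"]
```

Built with `podman build -t fullfathom:local .` (or the docker equivalent), it runs under any OCI-compatible host.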
— We size for your fleet —