Why More of Our Clients Are Moving Their On-Prem Servers Into Our Private Cloud

Team Neuron

You know the server. It's the one humming in the back office or wedged into the wiring closet between the breaker panel and a stack of empty toner boxes. Maybe it's a Dell PowerEdge from 2016 or 2017, the one we spec'd back when your shop was still running Sage 100 on a single VM and you had half the headcount you do now.

It still works. That's the problem.

We've had this conversation with a lot of our manufacturing clients over the past year, so we figured it was worth writing down. This isn't a pitch. It's the honest version of what we tell people when they ask whether they should keep that server for another cycle or move things into the colo.

Why This Conversation Keeps Coming Up

A few things have shifted in the last eighteen months, and they're all pushing in the same direction.

Windows Server 2012 R2 hit end of support back in 2023, and 2016 is now in extended support with a hard wall in early 2027. If you've got a domain controller or file server on either of those, you're either paying for Extended Security Updates (ESUs) or running unpatched. Neither is a great place to be.

Hardware warranties are the other piece. Most of the servers we put in for clients between 2017 and 2019 are now past their original ProSupport coverage. We can extend it, and we do, but it gets expensive fast and there are parts that just aren't stocked anymore. When a PSU fails on a 2017 R740, we can usually source one in a day. When the backplane goes, that turns into a multi-day outage and a real conversation about whether to keep nursing the box along.

And then there's the "it's been fine" problem. Most of the on-prem servers we manage have been running fine for years. The trap is that "fine" tends to last right up until it doesn't, and the failures are rarely convenient. We've seen RAID controllers die on a Friday afternoon and take a finance team's entire month-end with them. We've seen server room AC units fail over a long weekend in August and cook two drives at once. The backup is supposed to save you in those moments. The question is always how recent the backup actually is and how long restoring it actually takes.

What Moving to a Private Cloud Actually Looks Like

Our private cloud is a five-node Proxmox cluster in a Burbank colocation facility. The hardware sits behind a UPS and generator, the building has multiple ISPs, and the cluster is built so any one node can fail without your VMs going down. We back everything up to a Proxmox Backup Server (PBS) on-site and replicate that out to a second site in Dallas every night.
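
If you're curious what "built so any one node can fail" looks like in practice, Proxmox exposes node health over its REST API, and monitoring scripts can watch it continuously. Here's a minimal sketch of that kind of read-only check; the hostname and API token are placeholders, not our actual environment.

```python
# A read-only health check against the Proxmox VE REST API.
# Assumes a trusted TLS certificate on the cluster; hostname and
# token below are hypothetical, for illustration only.
import requests

PVE_HOST = "pve.example.internal"        # hypothetical cluster address
API_TOKEN = "monitor@pam!readonly=xxxx"  # hypothetical read-only API token

def cluster_node_status():
    """Print each node in the cluster and whether it is online."""
    resp = requests.get(
        f"https://{PVE_HOST}:8006/api2/json/nodes",
        headers={"Authorization": f"PVEAPIToken={API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    for node in resp.json()["data"]:
        print(f"{node['node']}: {node['status']}")

if __name__ == "__main__":
    cluster_node_status()
```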

When we migrate a client workload in, here's roughly what happens.

We start with an assessment of what's actually running on your server. Domain controller, file shares, ERP database, RDS hosts, license servers, whatever's in the mix. Some workloads come over cleanly. A few raise questions we need to answer first, and we'll flag those before we touch anything.

Then we seed the data. For a file server with three terabytes of engineering drawings, we don't move that over the WAN cold. We image the VM, ship the seed to the colo, and stage it there. From your end, the disruption is usually a scheduled cutover window after hours where DNS and routing get pointed at the new VM. People come in Monday morning, log in the same way they did Friday, and their drive mappings work. Most users never notice anything changed.
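
Before we flip anything, we verify the staged VM is actually reachable where DNS is about to point. A rough sketch of that kind of pre-cutover check, using only the Python standard library; the hostname and address below are hypothetical stand-ins for the real file server name and the colo VM.

```python
# Pre-cutover sanity checks, standard library only. Hostname and IP
# are hypothetical; swap in the real file server name and the
# address of the staged VM in the colo.
import socket

FILESERVER_NAME = "files.corp.example"  # hypothetical internal DNS name
EXPECTED_IP = "10.50.0.25"              # hypothetical address of the colo VM

def dns_points_at_new_vm() -> bool:
    """Has the internal DNS record been repointed at the colo VM?"""
    return socket.gethostbyname(FILESERVER_NAME) == EXPECTED_IP

def smb_is_answering(ip: str, port: int = 445, timeout: float = 3.0) -> bool:
    """Is SMB accepting connections at the new address?"""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("DNS repointed:", dns_points_at_new_vm())
    print("SMB reachable:", smb_is_answering(EXPECTED_IP))
```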

The VM itself is the same Windows Server you had on-prem, with the same shares, the same permissions, the same applications. We're not rebuilding from scratch. We're moving the running server into better hardware in a better building with better backups behind it.

The Real Benefits (And Where They Come From)

The benefits people actually care about are downstream of two things: better hardware with redundancy, and a better building to put it in. The rest follows from there.

You stop buying servers. The five-to-seven-year hardware refresh becomes our problem, not yours. There's no capital expense to plan around, no procurement cycle, no afternoon spent racking and cabling. You pay a predictable monthly cost that covers the compute, the storage, the backup, and the management.

Backups stop being something you hope will work. Our backup setup is built into the platform. PBS takes incremental snapshots throughout the day, retention is configurable, and the offsite replication to Dallas runs without anyone needing to remember to rotate a tape or swap a drive. When we test restores, we test them on actual VMs, not on the assumption that the backup file is good.
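
Snapshot recency is one of the things you can check programmatically rather than take on faith. Here's a sketch against the PBS API that flags any guest whose newest snapshot is more than a day old; the host, token, and datastore name are placeholders, not our actual setup.

```python
# Flag any guest whose newest PBS snapshot is older than a day.
# Host, token, and datastore are placeholders; assumes a trusted
# TLS certificate on the backup server.
import time
import requests

PBS_HOST = "pbs.example.internal"        # hypothetical PBS address
API_TOKEN = "monitor@pbs!readonly:xxxx"  # hypothetical API token
DATASTORE = "main"                       # hypothetical datastore name
MAX_AGE_SECONDS = 24 * 3600

def stale_backups():
    """Return (backup-type, backup-id) pairs with no snapshot in 24 hours."""
    resp = requests.get(
        f"https://{PBS_HOST}:8007/api2/json/admin/datastore/{DATASTORE}/snapshots",
        headers={"Authorization": f"PBSAPIToken={API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    newest = {}  # (type, id) -> epoch time of most recent snapshot
    for snap in resp.json()["data"]:
        key = (snap["backup-type"], snap["backup-id"])
        newest[key] = max(newest.get(key, 0), snap["backup-time"])
    cutoff = time.time() - MAX_AGE_SECONDS
    return [key for key, ts in newest.items() if ts < cutoff]

if __name__ == "__main__":
    for backup_type, backup_id in stale_backups():
        print(f"stale: {backup_type}/{backup_id}")
```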

Uptime gets better because the building is better and the cluster is redundant. A failed disk on your on-prem server is a service call. A failed disk in our cluster is a swap that happens without your VM noticing. If an entire node dies, the workload restarts on another node within a few minutes. That's not 100% uptime (nothing is), but it's a different category of resilience than a single 1U box sitting on a shelf in a back room.
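
That restart-on-another-node behavior isn't magic, either. In Proxmox it comes from registering the VM as an HA resource with the cluster, which can also be done through the API. A minimal sketch, with a hypothetical host, token, and VM ID:

```python
# Register a VM as an HA resource so the cluster restarts it on a
# surviving node if its host fails. Host, token, and VM ID are
# placeholders; assumes a trusted TLS certificate on the cluster.
import requests

PVE_HOST = "pve.example.internal"       # hypothetical cluster address
API_TOKEN = "admin@pam!provision=xxxx"  # hypothetical API token

def enable_ha(vmid: int) -> None:
    """Add vm:<vmid> to the cluster's HA resources in the 'started' state."""
    resp = requests.post(
        f"https://{PVE_HOST}:8006/api2/json/cluster/ha/resources",
        headers={"Authorization": f"PVEAPIToken={API_TOKEN}"},
        data={"sid": f"vm:{vmid}", "state": "started"},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    enable_ha(100)  # hypothetical VM ID
```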

Security patching gets centralized. We manage the underlying hypervisor and the OS patching cycle as part of the service, so the Windows updates that have been deferred on your domain controller because nobody wants to reboot it actually get applied on a schedule.

And you get your server room back. For shops where the "server room" is a closet with a window AC unit and a UPS that's been beeping for six months, that's not nothing. The power bill goes down a little. The ambient temperature on the shop floor goes down a little. The thing in the corner you've been worried about for two years stops being a thing in the corner you're worried about.

When It Doesn't Make Sense

This is the part we want to be upfront about, because the answer isn't "move everything to the colo." Some workloads should stay where they are.

Anything physically tied to hardware on your floor stays on your floor. CNC controllers talking to machines over serial or proprietary cables, label printers with USB connections to specific workstations, badge readers and access control panels wired into the building, anything with a hardware dongle plugged into a specific physical port. Those don't virtualize cleanly and they shouldn't try to.

Latency-sensitive line equipment is the other category. If you've got a SCADA system or an MES running tight polling loops against PLCs on the shop floor, that needs to live on the LAN with the equipment it's talking to. The round trip from your facility to Burbank and back isn't huge, but it's not zero, and for a handful of workloads that matters.
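
If you want to put numbers on that, a few lines of Python will measure the round trip to anything you can open a TCP connection to. Both addresses below are made up; swap in your own PLC gateway and a reachable colo host.

```python
# Time TCP connections to a device on the local LAN versus a host in
# the colo. Both addresses are hypothetical; use your own PLC gateway
# and a reachable colo VM.
import socket
import time

def connect_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Median TCP connect time in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

if __name__ == "__main__":
    print(f"LAN PLC gateway: {connect_rtt_ms('192.168.1.50', 502):.2f} ms")  # Modbus TCP
    print(f"Colo VM:         {connect_rtt_ms('10.50.0.25', 445):.2f} ms")    # SMB
```

On a healthy LAN that first number is typically well under a millisecond; across a metro link to a colo, it's usually a few milliseconds. That gap is invisible to a file share and unacceptable to a tight control loop, which is the whole argument in two lines of output.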

There's also the question of internet dependency. If your shop loses its connection, your colo workloads are unreachable. We mitigate that with secondary circuits and SD-WAN where it makes sense, but if your operation has to run heads-down for hours without internet, some local services need to stay local.

For most of our clients, the right answer ends up being a hybrid. The domain controller, file server, ERP database, RDS hosts, and the bulk of the back-office workloads move into the colo. A small on-prem server or two stays on site for the things that need to be there, and we keep those running with the same monitoring and backup discipline as everything else.

What Happens Next

If you've been looking at your server and wondering whether this is the year, we're happy to do an honest assessment of what you're running and what would and wouldn't make sense to move. No quotes, no proposal, no follow-up sequence. Just a conversation about your specific setup and where the actual risk and cost are sitting.

The easiest path is to reply to your account manager or send a note to the usual support address and ask for a private cloud assessment. We'll walk through your environment, look at what's on the floor, and give you a straight read on whether it's worth doing now, worth doing later, or fine to leave alone for another cycle.