I started in IT in the mid-'90s. Spent 26 years working with a lot of the same loyal clients as a network engineer and consultant, designing and supporting infrastructure for businesses that needed things to work. Not "work most of the time." Work. Reliably. Every day.
When I moved into AI, I expected everything to be different. New technology, new paradigms, new rules. And some of it is. But the things that actually matter? The principles that determine whether a project succeeds or fails? Those haven't changed at all.
Here are the lessons that transferred directly from building networks to building AI systems.
Lesson 1: Redundancy Is Not Optional
In networking, we learned this the hard way. A single point of failure will eventually fail. That's not pessimism. That's physics. So we built redundancy into everything: dual ISP connections, failover switches, backup power, replicated data.
AI has the same requirement, but most implementations ignore it. They depend on a single model provider. A single API endpoint. A single data pipeline. When OpenAI has an outage (and they do), your entire AI-powered workflow stops.
The transfer: Build AI systems that can fail over. Use multiple model providers. Design graceful degradation. If your AI can't reach Claude, can it fall back to GPT? If neither is available, does the process stop entirely, or does it route to a human? This is the same thinking we applied to network design for decades. It works just as well for AI.
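That failover chain can be sketched in a few lines. This is a minimal illustration, not any vendor's SDK: the provider functions here are stand-in stubs, and in a real system each would wrap an actual API client behind the same signature.

```python
def call_with_failover(prompt, providers, fallback):
    """Try each (name, provider) pair in order; if every provider
    fails, route to a fallback such as a human review queue."""
    for name, provider in providers:
        try:
            return name, provider(prompt)
        except Exception:
            continue  # provider outage or error: try the next one
    return "fallback", fallback(prompt)

# Stub providers to illustrate the behavior (hypothetical names).
def claude_stub(prompt):
    raise ConnectionError("simulated outage")

def gpt_stub(prompt):
    return f"answer to: {prompt}"

def human_queue(prompt):
    return "queued for human review"

source, answer = call_with_failover(
    "summarize this ticket",
    providers=[("claude", claude_stub), ("gpt", gpt_stub)],
    fallback=human_queue,
)
# The first provider fails, so the call lands on the second.
```

The design choice mirrors dual-ISP routing: the caller never knows or cares which path served the request, only that one did.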
Lesson 2: If You Can't Monitor It, You Can't Manage It
Every network engineer knows this: you don't wait for users to tell you something is broken. You monitor. Uptime, latency, packet loss, bandwidth utilization. You set thresholds and get alerts before problems become outages.
Most AI deployments have zero monitoring. The model is deployed, and everyone assumes it's working until someone notices a bad output. By then, it might have been producing bad outputs for weeks.
The transfer: Monitor your AI outputs the same way you'd monitor network traffic. Track accuracy rates. Log edge cases. Set up alerts for unusual patterns (sudden changes in output length, confidence scores dropping, response times increasing). AI systems drift over time. Monitoring catches the drift before it becomes a problem.
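One concrete way to catch drift is a rolling window over a cheap output metric, with an alert band, exactly like a bandwidth threshold on a switch port. A minimal sketch, using output length as the metric; the window size and band here are illustrative, not recommendations.

```python
from collections import deque

class OutputMonitor:
    """Rolling monitor for AI outputs: tracks response length and
    flags drift when the recent average leaves an expected band."""

    def __init__(self, window=100, low=50, high=2000):
        self.samples = deque(maxlen=window)  # keeps only recent outputs
        self.low, self.high = low, high

    def record(self, output: str) -> bool:
        """Record one output; return True if the rolling average
        length has drifted outside the band (i.e., raise an alert)."""
        self.samples.append(len(output))
        avg = sum(self.samples) / len(self.samples)
        return not (self.low <= avg <= self.high)

# Normal replies pass; a run of empty outputs drags the average
# below the band and starts raising alerts.
monitor = OutputMonitor(window=5, low=10, high=100)
alerts = [monitor.record(text) for text in
          ["a fine answer", "another normal reply", "ok", "", ""]]
```

In production you would track several metrics at once (accuracy samples, latency, confidence), but the pattern is the same: record, aggregate, compare to a threshold, alert.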
Lesson 3: Documentation Saves Lives (and Projects)
In network engineering, documentation isn't optional. When it's 2 AM and a core switch is down, you need to know what's connected to what, what the configuration is, and what changed recently. Without documentation, you're troubleshooting blind.
AI projects are notoriously under-documented. Nobody records why specific prompts were written a certain way. Nobody tracks which training data was used. Nobody documents the edge cases that were discovered and how they were handled. Six months later, when something breaks, nobody remembers how it was built.
The transfer: Document your AI systems like critical infrastructure. What data was used? Why were these prompts structured this way? What edge cases were found? What decisions were made and why? Future you (or the person who inherits the system) will be grateful.
Lesson 4: Security Is Not a Feature. It's a Foundation.
Twenty-six years of working with SonicWall firewalls, Meraki security appliances, Active Directory, and enterprise security policies taught me one thing: security is not something you add later. It's something you build from the start. Every shortcut comes back as a vulnerability.
AI makes this even more critical because the data flowing through AI systems is often the most sensitive data in the organization. Customer communications. Financial records. HR documents. Strategic plans. If your AI system isn't secured properly, you're essentially creating a new attack surface with access to your most valuable information.
The transfer: Apply zero-trust principles to AI. Least-privilege access. Encrypted data in transit and at rest. Audit logging. Regular security reviews. And for the love of everything, understand where your data goes when a third-party AI service processes it. The same security discipline that protects your network should protect your AI pipeline.
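Knowing where your data goes starts with controlling what leaves your boundary. A minimal sketch of redacting obvious sensitive tokens before a prompt is sent to a third-party service; the two patterns are illustrative only, and a real deployment needs proper DLP tooling, not a pair of regexes.

```python
import re

# Illustrative patterns: emails and US SSNs. Real pipelines need a
# much broader catalog and a reviewed data-handling policy.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches with labeled placeholders before the text
    crosses into a third-party AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

safe = redact("Contact jane.doe@example.com, SSN 123-45-6789.")
```

The same least-privilege logic applies on the way back in: log what was sent, what was redacted, and which service processed it, so the audit trail exists before you need it.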
Lesson 5: Change Management Isn't Bureaucracy. It's Survival.
In enterprise networking, you don't just push a firmware update on a Friday afternoon and hope for the best. You test in a staging environment. You schedule a maintenance window. You have a rollback plan. You communicate with affected teams.
AI needs the same discipline. When you update a prompt, retrain a model, or change a data pipeline, you need to test the impact before it goes to production. One word change in a system prompt can completely alter an AI's behavior. I've seen it happen.
The transfer: Treat AI configuration changes like network changes. Test before deploying. Have a rollback plan. Monitor closely after changes go live. Keep a changelog. The discipline that prevents network outages is the same discipline that prevents AI disasters.
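The changelog-plus-rollback discipline can be made concrete with a versioned prompt store. This is a minimal in-memory sketch I'm using for illustration; a production version would persist to version control or a database and gate deploys behind staging tests.

```python
class PromptRegistry:
    """Versioned prompt store with a changelog and one-step rollback,
    mirroring network change management: every deploy is recorded,
    and reverting a bad change is a single operation."""

    def __init__(self):
        self.versions = []  # list of (prompt_text, changelog_note)

    def deploy(self, prompt: str, note: str):
        self.versions.append((prompt, note))

    def current(self) -> str:
        return self.versions[-1][0]

    def rollback(self) -> str:
        """Revert to the previous version after a bad change."""
        if len(self.versions) > 1:
            self.versions.pop()
        return self.current()

    def changelog(self):
        return [note for _, note in self.versions]

registry = PromptRegistry()
registry.deploy("You are a helpful support agent.", "initial prompt")
registry.deploy("You are a terse support agent.", "tone experiment")
registry.rollback()  # the one-word experiment altered behavior; revert
```

The point is not the data structure; it's that rollback exists before the change goes live, the same way a maintenance window assumes a tested backout plan.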
Lesson 6: Vendor Lock-In Is the Oldest Trick in the Book
Every network vendor wants to be your only network vendor. Cisco wants you all-Cisco. Meraki wants you all-Meraki. They make it easy to buy in and hard to get out. I've spent decades helping clients maintain vendor independence because the moment you're locked in, you lose negotiating power and flexibility.
AI vendor lock-in is even more aggressive. Proprietary model formats. Proprietary data structures. Proprietary APIs that don't translate to other platforms. If your entire operation depends on one AI vendor and they raise prices 300% (it happens), what are your options?
The transfer: Build model-agnostic. Use abstraction layers. Keep your data in formats you control. Make sure you can switch providers without rebuilding everything. This isn't paranoia. It's the same vendor management principle that's saved my clients money for three decades.
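An abstraction layer here just means your application code depends on an interface you own, with one thin adapter per vendor. A minimal sketch using a structural interface; the adapter below is a stand-in stub, and a real one would wrap a vendor SDK behind the same signature.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Provider-neutral interface: application code depends on this,
    never on a vendor SDK directly."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in adapter. A real adapter (AnthropicModel, OpenAIModel,
    a local model) implements the same one-method interface."""
    def complete(self, prompt: str) -> str:
        return prompt.upper()

def summarize(model: ChatModel, text: str) -> str:
    # Application logic only sees the abstraction, so switching
    # providers means swapping the adapter, not rewriting this code.
    return model.complete(f"Summarize: {text}")

result = summarize(EchoModel(), "quarterly report")
```

When the 300% price increase arrives, the migration cost is one adapter class instead of every call site in the codebase.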
The Bigger Point
AI is new. The hype is real. The capabilities are genuinely impressive. But underneath all of that, it's still technology being deployed in business environments by organizations that need it to work reliably.
The principles that make technology reliable haven't changed in 30 years: plan before building, monitor what you deploy, document what you build, secure everything, manage change carefully, and never let a vendor own you.
The organizations that will succeed with AI aren't necessarily the ones with the most advanced models or the biggest budgets. They're the ones that bring engineering discipline to the table. The same discipline that built reliable networks, reliable servers, and reliable infrastructure.
New technology. Same principles. That's what 30 years teaches you.
"Technology changes every five years. Engineering discipline is forever. The organizations that understand this build things that last."
- Daryl Lantz, MindXpansion