How AWS Killed the Data Center
By Samuel Chesterman, Global CIO, IPG MB
Almost 20 years ago, I was building token ring networks, rack mounting IBM OS/2 “Warp 4” servers, and ripping out “old” IBM 3270 terminals. It was an “out with the old and in with the new” exercise. Mainframes were on their way out and companies were replacing them at an alarming, yet exciting rate!
Client server technology was the way to go
At that point in time, datacenters had been around for 30+ years, but 95 percent of them were still designed to house gigantic mainframe systems. Typically, the room was designed specifically for the handful of systems that were in it.
To outsiders, they looked like hermetically sealed glass chambers, with an unusually complex mechanism at the door designed to grant access only to certain individuals in lab coats. Inside, they were filled with an ugly mix of oversized beige refrigerators, each with your grandpa’s reel-to-reel recorder embedded on top. There were usually one or two “high tech” desks with keyboards and green- or beige-screen terminals, and a handful of dot matrix printers that were often larger than the desks they sat next to.
Once client server technology took over, those datacenters changed from looking like the offspring of a laboratory and the Starship Enterprise to rows and rows of four-post racks filled with various devices and systems. This brought about SO many challenges. Aside from rack layout, the first challenge to tackle was power. At 110V and 3 to 5 amps PER MACHINE, with most machines carrying two power supplies and some devices running on 208V power, designing the power distribution footprint required a specialised electrician.
That specialised electrician was also required to design and build a redundant power solution, which involved a room full of what looked like thousands of car batteries, plus a generator, or many generators. In some instances you also needed contracts with fuel providers to guarantee fuel delivery if you found yourself running on generator power “when the big one hit”, or when the area suffered any form of natural disaster.
Once the racks were mounted and populated with systems, the next challenge was managing a rat’s nest of power cabling. And that’s not to mention the data cabling needed to move the bits from your systems back to your network core.
There were lots of wires involved, and to this day, a well-done cabling job in a datacenter still makes me smile. It’s incredibly hard to maintain modern cable density and still look like a showcase to the outside observer.
These datacenters produced, and still produce, insane amounts of heat. So air conditioning had to be designed around the rack layout, providing intake and exhaust rows within the room itself. As with power, there had to be redundancy here too if uptime was in any way critical to your business.
So, Why The Trip Into Yesteryear, Mr. CIO?
Well, it’s simple. For the first 16 years of my career, I froze my butt off in those very datacenters. I was freezing while building datacenters here in the US, across Europe, and in Asia. No matter the continent, the same or at least similar challenges arose.
I’ve put my time in with electricians and data providers: running cables; rack mounting servers, switches, routers, and firewalls; dropping gear on my feet; planning rack elevations; getting paged in the middle of the night and driving in because we had switched to generator power (sometimes erroneously). I’ve probably cut my hands building datacenters more than the average construction professional, and I’ve burnt the candle at both ends to ensure these things were built on schedule, always trying to accommodate someone else’s deadline.
AWS Has Changed All That
Today, I don’t worry about cabling or getting quotes for hardware from multiple vendors. I don’t worry about whether the equipment will ship and arrive on time, or about local VAT and customs in foreign countries. I don’t worry about system uptime, inbound data circuits, or server bandwidth issues.
And that hasn’t even scratched the surface of the efficiencies AWS brings to the table for my team and our business. Sure, a huge part of AWS is about rapidly provisioning servers with a handful of clicks, free of every challenge above, and on your own timeframe.
The application-level efficiencies allow one person to do what previously took a whole team. Elastic application environments like Elastic Beanstalk and EMR would have been huge undertakings in the past. Again, using the cloud, a few individuals can do what once took an army.
Data warehouse technologies like Redshift would have required extremely expensive appliances or applications in the past. The storage that S3 provides would have required physically adding hard disks. Today, all of this happens for IPG Mediabrands in a matter of clicks.
From a compliance and disaster recovery perspective, we can mirror in-scope application servers across Availability Zones. Cutting over in the event of a real disaster just involves running a script that alters DNS zone records to point towards the failover environment.
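A cutover script along those lines can be sketched roughly as below. This is a minimal illustration only, assuming the zone lives in Route 53; the record name, failover target, and hosted zone ID are hypothetical placeholders, not details from our environment.

```python
# Sketch of a DNS-based disaster recovery cutover, assuming Route 53.
# Record name, target, and zone ID below are hypothetical placeholders.
import json


def build_failover_change(record_name, failover_target, ttl=60):
    """Build a Route 53 change batch that repoints a CNAME at the DR site."""
    return {
        "Comment": "Disaster recovery cutover",
        "Changes": [{
            "Action": "UPSERT",  # create the record, or update it if present
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "CNAME",
                "TTL": ttl,  # keep the TTL low so the cutover propagates fast
                "ResourceRecords": [{"Value": failover_target}],
            },
        }],
    }


if __name__ == "__main__":
    batch = build_failover_change("app.example.com.",
                                  "app-dr.us-west-2.example.com.")
    print(json.dumps(batch, indent=2))
    # Applying it for real requires credentials and boto3, e.g.:
    # import boto3
    # boto3.client("route53").change_resource_record_sets(
    #     HostedZoneId="Z_PLACEHOLDER", ChangeBatch=batch)
```

Keeping the TTL short in normal operation is what makes this kind of cutover fast: resolvers discard the cached answer within a minute or so and pick up the failover target.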
Outside the sheer nerd-dom, it also saves the business time and money. We’re not depreciating CapEx like we used to, because we’re not buying the hardware outright. We pay one bill, and it covers 80 percent of our infrastructure costs. Support is one phone call and doesn’t usually need a conference between three vendors. Our operational costs are down. Our CFO is happy. Life is good.
I titled this “How AWS Killed the Data Center”. The reality is that there will always be a need for datacenters for various security and compliance reasons. However, if those constraints do not apply to you, not only would I encourage you to use a cloud provider, I’d go so far as to say you’d be silly not to!