Tips for a pain-free journey to software-defined infrastructure

By some estimates, 70% of the servers in enterprise data centers are now virtualized, which means most enterprises are already enjoying the flexibility, high utilization rates and automation that virtualization provides.

If you’re one of them, you might be tempted to move your network, storage and desktops to software-defined infrastructure (SDI) as quickly as possible. That’s a great long-term strategy. In fact, Gartner predicts that programmatic infrastructure will be a necessity for most enterprises by 2020. But you should move at your own pace and for the right reasons. Don’t rush the journey, and be aware of these common pitfalls.

Have a strategy and a plan. Think through what you want to virtualize and why you want to do it. Common reasons include improving the efficiency of equipment you already have, improving application performance or building the foundation for hybrid cloud. Knowing your objectives will give you, and your technology partner, a better fix on what to migrate and when.

Be aware that many areas of SDI are still in early-stage development and standards are incomplete or nonexistent. This makes mission-critical applications poor candidates for early migration. Start with low-risk applications and implement in phases, being aware that a full migration may take years and that some legacy assets may not be worth virtualizing at all. If you’re new to SDI, consider virtualizing a small part of your infrastructure, such as firewalls or a handful of desktops, to become familiar with the process.

For all the flexibility SDI provides, it also introduces complexity. You’ll now have a virtual layer to monitor in addition to your existing physical layers. That’s not a reason to stay put, but be aware that management and troubleshooting tasks may become a bit more complex.

Map dependencies. In a perfect world, all interfaces between software and hardware would be defined logically, but we know this isn’t a perfect world. In the rush to launch or repair an application, developers may create shortcuts by specifying physical dependencies between, say, a database and storage device. These connections may fail if storage is virtualized. Understand where any such dependencies may exist and fix them before introducing a software-defined layer.

SDI requires a new approach to systems management as well. Since new devices can be introduced to the network with little or no manual intervention, it can be difficult to forecast their performance impact in advance. Be sure to factor analytics and performance management metrics into your planning so that you have a way of modeling the impact of changes before making them.

Use standards. Many SDI standards are still a work-in-progress. While most vendors do a good job of adhering to a base set of standards, they may also include proprietary extensions that could affect compatibility with third-party products. To ensure you have the greatest degree of flexibility, look for solutions that conform to standards like the Open Networking Foundation’s OpenFlow and OpenSDS for storage.

SDI relies heavily on application programming interfaces (APIs) for communication. Since there are no universal standards for infrastructure APIs, they are a potential source of lock-in if your SDI solution requires APIs specific to a particular vendor. Look for solutions that adhere to APIs defined by industry standards instead.

Double down on security. Virtual connections create certain security vulnerabilities that don’t exist in a world where everything is physically attached. For example, the heart of a software-defined network is an SDN controller, which manages all communications between applications and network devices. If the controller is breached, the entire network is at risk, so it’s essential to choose a trusted platform with the ability to validate any new applications or components. Make sure the platforms that manage your virtual processes are locked down tight.

Don’t forget the human factor. One of the great benefits of SDI is that it enables many once-manual processes to be automated. This will impact the skill sets you need in your data center. Deep hardware knowledge will become less important than the ability to manage applications and infrastructure at a high level. Prepare your staff for this shift and be ready to retrain the people you believe can make the transition.

These relatively modest pitfalls shouldn’t stop you from getting your organization ready to take advantage of the many benefits of SDI. Working with an experienced partner is the best way to ensure a smooth and successful journey.

Agile IT: a better way of doing business

One of the most powerful new ideas to emerge from the cloud computing revolution is IT agility. Agile IT organizations are able to easily adapt to changing business needs by delivering applications and infrastructure quickly to those who need it. Does your organization have what it takes to be truly agile?

There are many components of agile IT infrastructure, but three that we think are particularly important are containers, microservices and automation. These form the foundation of the new breed of cloud-native applications, and they can be used by any organization to revolutionize the speed and agility of application delivery to support the business.

Containers: Fast and Flexible

Containers are a sort of lightweight virtual machine, but they differ from VMs in fundamental ways. Containers run as groups of namespaced processes within a shared operating system, with each container given an isolated view of resources such as processor and memory along with the supporting elements its application needs. Container images are typically stored in libraries (registries) for reuse, and containers can be spun up and shut down in seconds. They’re also portable, meaning that an application running in a container can be moved to any other environment that supports that type of container.
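To make that concrete, here is a minimal sketch of starting a resource-capped container programmatically. It assumes Docker is installed locally and the Docker SDK for Python (the docker package) is available; the image name and memory limit are illustrative choices, not recommendations.

```python
# Minimal illustration: run a short-lived, resource-capped container.
# Assumes a local Docker daemon and "pip install docker".
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Each container gets its own isolated view of CPU, memory and filesystem
# while sharing the host kernel, so it starts in seconds.
output = client.containers.run(
    image="alpine:3.19",                              # assumed lightweight base image
    command="echo 'hello from an isolated process'",
    mem_limit="128m",                                 # cap this container's memory
    remove=True,                                      # delete the container when it exits
)
print(output.decode().strip())
```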

Containers have only been on the IT radar screen for about three years, but they are being adopted with astonishing speed. One recent study found that 40% of organizations are already using containers in production and just 13% have no plans to adopt them during the coming year. Containers are especially popular with developers because coders can configure and launch their own workspaces without incurring the delay and overhead of involving the IT organization.

Microservices: a Better Approach to Applications

Use of containers frequently goes hand-in-hand with the adoption of microservices architectures. Applications built from microservices are composed of independently deployable, modular services that communicate through a lightweight mechanism such as a messaging protocol. Think of such an application as an object assembled from Lego blocks: individual blocks aren’t very useful by themselves, but when combined, they can create elaborate structures.

Service-oriented architecture is nothing new, but the technology has finally matured to the point that it’s practical to rethink applications in that form. The microservices approach is more flexible and efficient than the vertically integrated applications that have dominated IT for decades. By assembling applications from libraries of services, duplication is minimized and software can move into production much more quickly. There’s less testing overhead and more efficient execution, since developers can focus on improving existing microservices rather than reinventing the wheel with each project.

Containers are an ideal platform for microservices. They can be launched quickly and custom-configured to use only the resources they need. A single microservice may be used in many ways by many different applications. Orchestration software such as Kubernetes keeps things running smoothly, handles exceptions and constantly balances resources across a cluster.
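As a rough illustration of how small an individual service can be, here is a toy microservice built with nothing but the Python standard library. The endpoint, port and payload are made-up examples rather than part of any real application.

```python
# A toy microservice: one small, independently deployable HTTP service
# exposing a single endpoint. Endpoint, port and payload are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/price":
            body = json.dumps({"sku": "demo-123", "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Other services (or an API gateway) would call this endpoint over HTTP,
    # the lightweight communications mechanism described above.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```

In practice, dozens of services like this one would each be packaged into their own container and then versioned, scaled and restarted independently by the orchestrator.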

Automation: Departure from Routine

Automation is essential to keeping this complex environment running smoothly. Popular open-source tools such as Puppet and Ansible make it possible for many tasks that were once performed by systems administrators – such as defining security policies, managing certificates, balancing processing loads and assigning network addresses – to be automated via scripts. Automation tools were developed by cloud-native companies to make it possible for them to run large-scale IT operations without legions of administrators, but the tools are useful in any context.
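Puppet and Ansible use their own declarative formats, so as a stand-in, the following Python sketch shows the flavour of a routine task such tools take off administrators’ plates: checking TLS certificate expiry across a hypothetical list of hosts.

```python
# Illustrative automation sketch (not Puppet or Ansible code): report how many
# days remain before each host's TLS certificate expires.
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["example.com", "example.org"]  # hypothetical inventory

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect to the host, read its certificate and return days to expiry."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

for host in HOSTS:
    print(f"{host}: certificate expires in {days_until_expiry(host)} days")
```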

Automation not only saves money but improves job satisfaction. Manual, routine tasks can be assigned to scripts so that administrators can tend to more important and challenging work. And in a time of severe IT labor shortages, who doesn’t want happier employees?

Agile IT makes organizations nimbler, more responsive and faster moving. When planned and executed with the help of an experienced integration partner, it saves money as well.

 

Is your network ready for digital transformation?

If your company has more than one location, you know the complexity that’s involved in maintaining the network. You probably have several connected devices in each branch office, along with firewalls, Wi-Fi routers and perhaps VoIP equipment. Each patch, firmware update or new malware signature needs to be installed manually, necessitating a service call. The more locations you have, the bigger the cost and the greater the delay.

This is the state of technology at most distributed organizations these days, but it won’t scale well for the future. Some 50 billion new connected smart devices are expected to come online over the next three years, according to Cisco. This so-called “Internet of things” (IoT) revolution will demand a complete rethinking of network infrastructure.

Networks of the future must flexibly provision and manage bandwidth to accommodate a wide variety of usage scenarios. They must also be manageable from a central point. Functionality that’s currently locked up in hardware devices must move into software. Security will become part of the network fabric, rather than distributed to edge devices. Software updates will be automatic.

Cisco calls this vision “Digital Network Architecture” (DNA). It’s a software-driven approach enabled by intelligent networks, automation and smart devices. By virtualizing many functions that are now provided by physical hardware, your IT organization can gain unparalleled visibility and control over every part of its network.

For example, you can replace hardware firewalls with a single socket connection. Your network administrators can get a complete view of every edge device, and your security operations staff can use analytics to identify and isolate anomalies. New phones, computers or other devices can be discovered automatically and appropriate permissions and policies enforced centrally. Wi-Fi networks, which are one of the most common entry points for cyber attackers, can be secured and monitored as a unit.

One of the most critical advantages of DNA is flexible bandwidth allocation. Many organizations today provision bandwidth on a worst-case-scenario basis, resulting in excess network capacity that sits idle much of the time. In a fully software-defined scenario, bandwidth is allocated only as needed, so a branch office that’s experiencing a lull doesn’t steal resources from a busy one. Virtualized server resources can be allocated in the same way, improving utilization and reducing waste.

IoT will demand unprecedented levels of network flexibility. Some edge devices – such as point-of-sale terminals – will require high-speed connections that carry quick bursts of information for tasks such as credit card validation. Others, like security cameras, need to transmit much larger files but have greater tolerance for delay. Using a policy-based DNA approach, priorities can be set to ensure that each device gets the resources it needs.
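The snippet below is a purely illustrative sketch of that policy idea, not Cisco’s DNA API: it maps assumed device classes to assumed priority and bandwidth values so that each class receives an explicit, centrally defined treatment.

```python
# Illustrative only: map device classes to network policies the way a
# policy-based controller might. Class names and values are assumptions.
from dataclasses import dataclass

@dataclass
class NetworkPolicy:
    priority: int            # higher values are serviced first
    max_bandwidth_mbps: int  # ceiling for this class of device
    latency_sensitive: bool  # whether delay must be minimized

POLICIES = {
    "pos-terminal":    NetworkPolicy(priority=7, max_bandwidth_mbps=5,  latency_sensitive=True),
    "security-camera": NetworkPolicy(priority=3, max_bandwidth_mbps=50, latency_sensitive=False),
    "guest-wifi":      NetworkPolicy(priority=1, max_bandwidth_mbps=20, latency_sensitive=False),
}

def policy_for(device_class: str) -> NetworkPolicy:
    # Unknown or newly discovered devices fall back to the most restrictive policy.
    return POLICIES.get(
        device_class,
        NetworkPolicy(priority=0, max_bandwidth_mbps=1, latency_sensitive=False),
    )

print(policy_for("pos-terminal"))
```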

Getting to DNA isn’t an overnight process. Nearly every new product Cisco is bringing to the market is DNA-enabled. As you retire older equipment, you can move to a fully virtualized, software-defined environment in stages. In some cases, you may find that the soft costs of managing a large distributed network – such as travel, staff time and lost productivity – already justify a switch. Whatever the case, ESI has the advisory and implementation expertise to help you make the best decision.

The IT Catch-22

OK, so everyone’s talking about it. Our industry is undergoing major changes. It’s out there. It started with a first reference architecture built on mainframes and minicomputers, designed to serve thousands of applications used by millions of users worldwide. With the advent of the Internet, it evolved into the “client-server” architecture, designed to run hundreds of thousands of applications used by hundreds of millions of users. And where are we now? It appears we are witnessing the birth of a third generation of architecture, one that IDC describes as “the next generation compute platform that is accessed from mobile devices, utilizes Big Data, and is cloud based.” It is referred to as “the third platform,” and it is destined to deliver millions of applications to billions of users.

Virtualization seems to have been the spark that ignited this revolution. The underlying logic of this major shift is that virtualization abstracts away the hardware, pooling performance and assets so they can be shared by different applications for different uses, according to the needs of different business units within an organization. The promise is that companies can do more with less. Therefore, IT budgets can be reduced!

These changes are huge. In this third platform, IT is built, run, consumed and, finally, governed differently. Everything changes from the ground up. It would seem obvious that one would need to invest in careful planning of the transition from the second platform to the third. What pace can we go at? What can be moved out into public clouds? What investments are required in our own infrastructure? How will it impact our IT staff? What training and knowledge will they require? What about security and risks?

The catch is the following: the third platform allows IT to do much more with less. Accordingly, IT budgets are reduced or, at best, flattened. Yet moving to the third platform requires investment. Get it? Every week we help CIOs and IT managers raise this paradox within their organizations so they can obtain the investments they need to move to the third platform and reap its benefits.

SDN: The Mystery Uncovered

As I continue to attend conferences and sessions with many of our core partners, I continue on my quest for data centre innovation. Most recently I travelled to the sunny Bay Area to visit Brocade Communications, Hitachi Data Systems and VMware, specifically the NSX division.

In my role within the Office of the CTO, I am always exploring new trends and innovations in designs and solutions for our clients, in particular how Software-Defined Everything becomes part of our clients’ data centre evolution. For many years we have been speaking about the cloud and its adoption in mainstream IT. New technologies appear, and some existing ones simply take on a new face. Today, I would like to explore the concept of the Software-Defined Data Centre (SDDC), or in this case specifically Software-Defined Networking (SDN), with an overview of some of the most interesting solutions on the market.

Like many of you, I have watched virtualization of the compute platform become more and more common. It seems like only yesterday that my manager at the time asked me to assist with SAN connectivity for version 1 of Microsoft’s virtual machine management product! Today we are experiencing the continued evolution of virtualization. Server and storage virtualization are commonplace within the data center, and we are seeing Canadian companies that are 100% virtualized within the compute space. These same companies are looking for the next step in consolidation, agility and cost containment. That next step is network virtualization. But what is SDN? Software-defined networking (SDN) is a model for network control, based on the idea that network traffic flows can be made programmable at scale, thus enabling new dynamic models for traffic management.
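To illustrate what “programmable traffic flow” means, here is a toy match/action flow table in Python. It mimics the OpenFlow-style idea of a controller installing prioritized rules that switches then apply; it is not a real controller API, and every field name and value in it is an assumption for illustration.

```python
# Toy sketch of the match/action model behind OpenFlow-style SDN.
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: dict    # header fields that must match, e.g. {"dst_port": 443}
    action: str    # what the switch should do with matching packets
    priority: int  # higher-priority rules are evaluated first

FLOW_TABLE = [
    FlowRule(match={"dst_port": 443}, action="forward:uplink1", priority=200),
    FlowRule(match={"dst_port": 22},  action="drop",            priority=150),
    FlowRule(match={},                action="forward:core",    priority=0),
]

def handle(packet: dict) -> str:
    """Return the action of the highest-priority rule the packet matches."""
    for rule in sorted(FLOW_TABLE, key=lambda r: r.priority, reverse=True):
        if all(packet.get(k) == v for k, v in rule.match.items()):
            return rule.action
    return "drop"

print(handle({"dst_port": 443, "src_ip": "10.1.2.3"}))  # -> forward:uplink1
print(handle({"dst_port": 22,  "src_ip": "10.1.2.3"}))  # -> drop
```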

VMware NSX – a product VMware acquired to add to its virtual networking strategy. The product is sound and provides close coupling with VMware environments, handling the networking and security of east/west traffic between VMs. The NSX data and management planes provide an excellent framework that allows the hypervisor to lock down VM traffic, along with virtual constructs such as a vRouter, vVPN and vLoad Balancer, all of which work within the VM construct.

Brocade Vyatta – a technology acquired by Brocade two years ago. Today we see the Vyatta vRouter and the Vyatta OpenDaylight controller lead the pack. Brocade offers v5400 and v5600 editions of the Vyatta vRouter. The Vyatta implementation provides a vRouter, vFirewall and vVPN, and Brocade has also developed a vADX load balancer.

Cisco ACI or Nexus 9000 – Cisco announced in 2014 the spin-in of the Insieme product to provide an ACI (Application Centric Infrastructure) platform. The first release was a 40 Gb Ethernet switch with no real ACI functionality. Today the product offers an enhanced port/policy control strategy, using the policy-based engines from the Cloupia spin-in technology (UCS Director) to control the various functions within an ACI architecture.

The real mystery of software-defined networking starts with a basic understanding of the business need for a “programmable network” based on x86 architecture within the virtualization layer.

Nicholas Laine, Director Solutions Architect – Office of the CTO