Tips for a pain-free journey to software-defined infrastructure

By some estimates, 70% of the servers in enterprise data centers are now virtualized, which means nearly every company is enjoying the flexibility, high utilization rates and automation that virtualization provides.

If you’re one of them, you might be tempted to move your network, storage and desktops to software-defined infrastructure (SDI) as quickly as possible. That’s a great long-term strategy. In fact, Gartner predicts that programmatic infrastructure will be a necessity for most enterprises by 2020. But you should move at your own pace and for the right reasons. Don’t rush the journey, and be aware of these common pitfalls.

Have a strategy and a plan. Think through what you want to virtualize and why you want to do it. Common reasons include improving the efficiency of equipment you already have, improving application performance or building the foundation for hybrid cloud. Knowing your objectives will give you, and your technology partner, a better fix on what to migrate and when.

Be aware that many areas of SDI are still in early-stage development and standards are incomplete or nonexistent. This makes mission-critical applications poor candidates for early migration. Start with low-risk applications and implement in phases, knowing that a full migration may take years and that some legacy assets may not be worth virtualizing at all. If you’re new to SDI, consider virtualizing a small part of your infrastructure, such as firewalls or a handful of desktops, to become familiar with the process.

For all the flexibility SDI provides, it also introduces complexity. You’ll now have a virtual layer to monitor in addition to your existing physical layers. That’s not a reason to stay put, but be aware that management and troubleshooting tasks may become a bit more complex.

Map dependencies. In a perfect world, all interfaces between software and hardware would be defined logically, but we know this isn’t a perfect world. In the rush to launch or repair an application, developers may create shortcuts by specifying physical dependencies between, say, a database and a specific storage device. These connections may fail if storage is virtualized. Understand where any such dependencies may exist and fix them before introducing a software-defined layer.
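
As a rough illustration, the short Python sketch below scans application configuration files for hard-coded block devices, IP addresses and file-server paths, the kinds of physical references that tend to break once storage or networking is virtualized. The directory, file extensions and patterns are illustrative assumptions to adapt to your own environment, not an exhaustive checklist.

```python
# Sketch: flag hard-coded physical references in config files before virtualizing.
# The scanned directory, file extensions and patterns are illustrative only.
import re
from pathlib import Path

SUSPECT_PATTERNS = [
    r"/dev/sd[a-z][0-9]*",            # raw block devices
    r"\b\d{1,3}(?:\.\d{1,3}){3}\b",   # hard-coded IP addresses
    r"\\\\[\w.-]+\\[\w$.-]+",         # UNC paths to specific file servers
]
CONFIG_SUFFIXES = {".conf", ".ini", ".yaml", ".yml", ".xml", ".properties"}

def find_physical_dependencies(config_dir: str):
    """Yield (file, line number, match) for every suspicious reference found."""
    for path in Path(config_dir).rglob("*"):
        if not path.is_file() or path.suffix not in CONFIG_SUFFIXES:
            continue
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern in SUSPECT_PATTERNS:
                for match in re.finditer(pattern, line):
                    yield path, line_no, match.group(0)

if __name__ == "__main__":
    for file, line_no, found in find_physical_dependencies("/etc/myapp"):  # hypothetical path
        print(f"{file}:{line_no}: possible physical dependency -> {found}")
```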

SDI requires a new approach to systems management as well. Since new devices can be introduced to the network with little or no manual intervention, it can be difficult to forecast their performance impact in advance. Be sure to factor analytics and performance management metrics into your planning so that you have a way of modeling the impact of changes before making them.

Use standards. Many SDI standards are still a work in progress. While most vendors do a good job of adhering to a base set of standards, they may also include proprietary extensions that could affect compatibility with third-party products. To ensure you have the greatest degree of flexibility, look for solutions that conform to standards like the Open Networking Foundation’s OpenFlow and OpenSDS for storage.

SDI relies heavily on application programming interfaces (APIs) for communication. Since there are no universal standards for infrastructure APIs, they are a potential source of lock-in if your SDI solution requires APIs specific to a particular vendor. Look for solutions that adhere to APIs defined by industry standards instead.
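
One practical way to limit that exposure is to keep vendor-specific calls behind a thin, vendor-neutral layer of your own. The Python sketch below shows the idea; NetworkFabric, VendorAFabric and the method names are hypothetical examples, not any vendor’s actual SDK.

```python
# Sketch: isolate vendor-specific API calls behind a neutral interface so the
# rest of your automation never depends on one vendor's SDK or endpoints.
# All class and method names here are hypothetical.
from abc import ABC, abstractmethod

class NetworkFabric(ABC):
    """The vendor-neutral operations your tooling actually relies on."""

    @abstractmethod
    def create_segment(self, name: str, vlan_id: int) -> str: ...

    @abstractmethod
    def delete_segment(self, segment_id: str) -> None: ...

class VendorAFabric(NetworkFabric):
    """Adapter for one vendor's proprietary REST API (illustrative only)."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url
        self.token = token

    def create_segment(self, name: str, vlan_id: int) -> str:
        # Translate the neutral call into the vendor's own API payload here.
        raise NotImplementedError("wire up the vendor's SDK or REST client")

    def delete_segment(self, segment_id: str) -> None:
        raise NotImplementedError
```

Swapping vendors later then means writing one new adapter rather than rewriting every script that touches the network.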

Double down on security. Virtual connections create certain security vulnerabilities that don’t exist in a world where everything is physically attached. For example, the heart of a software-defined network is an SDN controller, which manages all communications between applications and network devices. If the controller is breached, the entire network is at risk, so it’s essential to choose a trusted platform with the ability to validate any new applications or components. Make sure the platforms that manage your virtual processes are locked down tight.

Don’t forget the human factor. One of the great benefits of SDI is that it enables many once-manual processes to be automated. This will impact the skill sets you need in your data center. Deep hardware knowledge will become less important than the ability to manage applications and infrastructure at a high level. Prepare your staff for this shift and be ready to retrain the people who you believe can make the transition.

These relatively modest pitfalls shouldn’t stop you from getting your organization ready to take advantage of the many benefits of SDI. Working with an experienced partner is the best way to ensure a smooth and successful journey.

Agile IT: a better way of doing business

One of the most powerful new ideas to emerge from the cloud computing revolution is IT agility. Agile IT organizations are able to adapt easily to changing business needs by delivering applications and infrastructure quickly to those who need them. Does your organization have what it takes to be truly agile?

There are many components of agile IT infrastructure, but three that we think are particularly important are containers, microservices and automation. These form the foundation of the new breed of cloud-native applications, and they can be used by any organization to revolutionize the speed and agility of application delivery to support the business.

Containers: Fast and Flexible

Containers are a sort of lightweight virtual machine, but they differ from VMs in fundamental ways. Containers run as a group of namespaced processes within an operating system, each with its own isolated view of resources such as processor, memory and the supporting elements needed for an application. They are typically stored in libraries for reuse and can be spun up and shut down in seconds. They’re also portable, meaning that an application running in a container can be moved to any other environment that supports that type of container.
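
To get a feel for that speed, the sketch below starts and stops a container from Python by shelling out to the Docker CLI. It assumes Docker is installed and an nginx image can be pulled; any container runtime and image would illustrate the same point.

```python
# Sketch: start and stop a container to see how fast the lifecycle is.
# Assumes the Docker CLI is available and an "nginx" image can be pulled.
import subprocess
import time

def run(cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

start = time.time()
container_id = run(["docker", "run", "-d", "--rm", "nginx"])  # launch detached
print(f"started {container_id[:12]} in {time.time() - start:.1f}s")

run(["docker", "stop", container_id])                         # tear down again
print(f"stopped after {time.time() - start:.1f}s total")
```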

Containers have only been on the IT radar screen for about three years, but they are being adopted with astonishing speed. One recent study found that 40% of organizations are already using containers in production and just 13% have no plans to adopt them during the coming year. Containers are especially popular with developers because coders can configure and launch their own workspaces without incurring the delay and overhead of involving the IT organization.

Microservices: A Better Approach to Applications

Use of containers frequently goes hand-in-hand with the adoption of microservices architectures. Applications built from microservices are based upon a network of independently deployable, modular services that use a lightweight communications mechanism such as a messaging protocol. Think of it as an object assembled from Lego blocks. Individual blocks aren’t very useful by themselves, but when combined, they can create elaborate structures.
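
To make the idea concrete, the sketch below is one such “block”: a single, small service that does exactly one thing and exposes it over HTTP. It uses only the Python standard library, and the endpoint and stubbed data are illustrative.

```python
# Sketch of a single microservice: one small, independently deployable process
# exposing one narrow capability over HTTP. Endpoint and data are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/price/"):
            sku = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"sku": sku, "price": 9.99}).encode()  # stub data
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Other services (ordering, inventory, ...) run as separate processes and
    # call this one over the network rather than linking to it directly.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()
```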

Service-oriented architecture is nothing new, but the technology has finally matured to the point that it’s practical to rethink applications in that form. The microservices approach is more flexible and efficient than the vertically integrated applications that have dominated IT for decades. By assembling applications from libraries of services, duplication is minimized and software can move into production much more quickly. There’s less testing overhead and more efficient execution, since developers can focus on improving existing microservices rather than reinventing the wheel with each project.

Containers are an ideal platform for microservices. They can be launched quickly and custom-configured to use only the resources they need. A single microservice may be used in many ways by many different applications. Orchestration software such as Kubernetes keeps things running smoothly, handles exceptions and constantly balances resources across a cluster.
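
As a simple illustration of what orchestration takes off your plate, the sketch below scales a service up and down imperatively with kubectl. It assumes kubectl is configured for your cluster and that a deployment named price-service exists (a hypothetical name); in practice a Kubernetes HorizontalPodAutoscaler can make this decision automatically.

```python
# Sketch: scale a microservice deployment up and down with kubectl.
# Assumes kubectl is configured and a "price-service" deployment exists.
import subprocess

def scale(deployment: str, replicas: int) -> None:
    subprocess.run(
        ["kubectl", "scale", f"deployment/{deployment}", f"--replicas={replicas}"],
        check=True,
    )

scale("price-service", 5)  # absorb a traffic spike
scale("price-service", 2)  # drop back down when it passes
```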

Automation: A Departure from Routine

Automation is essential to keeping this complex environment running smoothly. Popular open-source tools such as Puppet and Ansible make it possible for many tasks that were once performed by systems administrators – such as defining security policies, managing certificates, balancing processing loads and assigning network addresses – to be automated via scripts. Automation tools were developed by cloud-native companies to make it possible for them to run large-scale IT operations without legions of administrators, but the tools are useful in any context.
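
Certificate management is a good example of a task worth scripting. The sketch below checks how many days remain on the TLS certificates of a list of hosts using only the Python standard library; the host names and the 30-day threshold are placeholders.

```python
# Sketch: report TLS certificates that are close to expiry.
# Host list and warning threshold are placeholders.
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["intranet.example.com", "mail.example.com"]  # hypothetical hosts
WARN_DAYS = 30

def days_until_expiry(host: str, port: int = 443) -> int:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                     tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

for host in HOSTS:
    remaining = days_until_expiry(host)
    status = "RENEW SOON" if remaining < WARN_DAYS else "ok"
    print(f"{host}: {remaining} days remaining [{status}]")
```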

Automation not only saves money but improves job satisfaction. Manual, routine tasks can be assigned to scripts so that administrators can tend to more important and challenging work. And in a time of severe IT labor shortages, who doesn’t want happier employees?

Agile IT makes organizations nimbler, more responsive and faster moving. When planned and executed with the help of an experienced integration partner, it saves money as well.

 

Is your network ready for digital transformation?

If your company has more than one location, you know the complexity that’s involved in maintaining the network. You probably have several connected devices in each branch office, along with firewalls, Wi-Fi routers and perhaps VoIP equipment. Each patch, firmware update or new malware signature needs to be installed manually, necessitating a service call. The more locations you have, the bigger the cost and the greater the delay.

This is the state of technology at most distributed organizations these days, but it won’t scale well for the future. Some 50 billion new connected smart devices are expected to come online over the next three years, according to Cisco. This so-called “Internet of Things” (IoT) revolution will demand a complete rethinking of network infrastructure.

Networks of the future must flexibly provision and manage bandwidth to accommodate a wide variety of usage scenarios. They must also be manageable from a central point. Functionality that’s currently locked up in hardware devices must move into software. Security will become part of the network fabric, rather than distributed to edge devices. Software updates will be automatic.

Cisco calls this vision “Digital Network Architecture” (DNA). It’s a software-driven approach enabled by intelligent networks, automation and smart devices. By virtualizing many functions that are now provided by physical hardware, your IT organization can gain unparalleled visibility and control over every part of the network.

For example, you can replace hardware firewalls with a single socket connection. Your network administrators can get a complete view of every edge device, and your security operations staff can use analytics to identify and isolate anomalies. New phones, computers or other devices can be discovered automatically and appropriate permissions and policies enforced centrally. Wi-Fi networks, which are one of the most common entry points for cyber attackers, can be secured and monitored as a unit.

One of the most critical advantages of DNA is flexible bandwidth allocation. Many organizations today provision bandwidth on a worst-case basis, resulting in excess network capacity that sits idle much of the time. In a fully software-defined scenario, bandwidth is allocated only as needed, so a branch office that’s experiencing a lull doesn’t steal resources from a busy one. Virtualized server resources can be allocated in the same way, improving utilization and reducing waste.

IoT will demand unprecedented levels of network flexibility. Some edge devices – such as point-of-sale terminals – will require high-speed connections that carry quick bursts of information for tasks such as credit card validation. Others, like security cameras, need to transmit much larger files but have greater tolerance for delay. Using a policy-based DNA approach, priorities can be set to ensure that each device gets the resources it needs.
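
One way to picture a policy-based approach is as a lookup from device class to the treatment its traffic should receive. The Python sketch below shows that shape; the device classes, DSCP markings and bandwidth floors are illustrative values to be replaced by your own QoS design.

```python
# Sketch: map device classes to traffic policies and look them up as devices
# appear on the network. All values are illustrative, not recommendations.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficPolicy:
    dscp: int               # DiffServ marking applied to the device's traffic
    min_kbps: int           # bandwidth floor reserved for the device
    latency_sensitive: bool

POLICIES = {
    "pos-terminal":    TrafficPolicy(dscp=46, min_kbps=256, latency_sensitive=True),    # short, urgent bursts
    "security-camera": TrafficPolicy(dscp=18, min_kbps=4000, latency_sensitive=False),  # large, delay-tolerant transfers
    "default":         TrafficPolicy(dscp=0, min_kbps=64, latency_sensitive=False),
}

def policy_for(device_class: str) -> TrafficPolicy:
    return POLICIES.get(device_class, POLICIES["default"])

print(policy_for("pos-terminal"))
print(policy_for("thermostat"))  # unknown devices fall back to the default policy
```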

Getting to DNA isn’t an overnight process. Nearly every new product Cisco is bringing to the market is DNA-enabled. As you retire older equipment, you can move to a fully virtualized, software-defined environment in stages. In some cases, you may find that the soft costs of managing a large distributed network – such as travel, staff time and lost productivity – already justify a switch. Whatever the case, ESI has the advisory and implementation expertise to help you make the best decision.