Why Agile infrastructure and DevOps go hand-in-hand

Did you know that some software development operations deploy code up to 200 times more frequently than others? Or that they can make code changes in less than an hour, recover from downtime in minutes and experience a 60-times lower code failure rate than their peers?

The difference is DevOps, a new approach to agile development that emphasizes modularity, frequent releases and a constant feedback cycle. DevOps stands in sharp contrast to the conventional waterfall approach to software development. Traditionally, software development has involved extensive upfront design and specification, a process that could take weeks. Developers then went away – often for months – and built a prototype application. After a lengthy review process, changes were specified and developers disappeared again to implement them. As a result, it wasn’t unusual for enterprise applications to take a year or more to deliver, during which time user needs often changed, requiring still more development.

No one has the luxury of that kind of time anymore. Today’s software development timeframe is defined by the apps on your smartphone. DevOps is a way to make enterprise software development work at internet speed.

DevOps replaces the monolithic approach to building software with components that can be quickly coded and prototyped. Users are closely involved with the development process at every step. Code reviews may happen as frequently as once a day, with the focus being on steady, incremental progress of each component rather than delivery of a complete application.

A significant way DevOps departs from traditional development is that coders have control not only over the code, but also over the operating environment. Agile infrastructure makes this possible. Technologies like containers and microservices enable developers to provision their own workspaces – including variables like memory, storage, operating system and even co-resident software – to emulate the production environment as closely as possible. This reduces the risk of surprises when migrating from test to production, while also enhancing developer control and accountability.

With the advent of containers, IT organizations can now create libraries of lightweight virtual machines that are customized to the needs of individual developers. Containers can be quickly spun up and shut down without the need for IT administrative overhead, a process that can save weeks over the course of a project. Pre-configured containers can also be shared across development teams. Instead of waiting days for a virtual machine to be provisioned, developers can be up and running in minutes, which is one of the reasons DevOps enables such dramatic speed improvements.

Many organizations that adopt DevOps have seen exceptional results. High-performing DevOps organizations spend 50% less time on unplanned work and rework, which means they spend that much more time on new features, according to Puppet Labs’ 2016 State of DevOps report. Code failure rates are less than 15%, compared to up to 45% for middle-of-the-road performers. Lead time for changes is under an hour, compared to weeks for traditional waterfall development.

As enticing as those benefits sound, the journey to DevOps isn’t a simple one. Significant culture change is involved. One major change is that developers assume much greater accountability for the results of their work. This quickly helps organizations identify their best performers – and their weakest ones.

But accountability also gives top performers the recognition they deserve for outstanding work. This is one reason why Puppet Labs found that employees in high-performing DevOps organizations are 2.2 times more likely to recommend their company as a great place to work. Excellence breeds excellence, and top performers like working with others who meet their standards.

Users must shift their perspective as well. DevOps requires much more frequent engagement with development teams. However, high-performing organizations actually spend less time overall in meetings and code reviews because problems are identified earlier in the process and corrected before code dependencies layer on additional complexity. Meetings are more frequent, but shorter and more focused.

If you’re adopting agile infrastructure, DevOps is the next logical step to becoming a fully agile IT organization.

How Big Data changes the rules in banking

Online banking has been a win-win proposition for banks and their customers. Customers get the speed and convenience of self-service and banks enjoy big savings on transaction costs. But for many consumer banks in particular, going online has also meant losing the critical customer engagement that branches have long provided. When one online banking service looks pretty much like every other, banks need to find new ways to set themselves apart. By leveraging cloud and big data analytics, banks can re-engage in new and innovative ways via web and mobile platforms.

In a recent report outlining a new digital vision for consumer banks, Accenture offers examples of what some banks are already doing to enhance the online experience and strengthen bonds with their customers.

  • Banco Bilbao Vizcaya Argentaria of Spain captures more than 80 transaction characteristics every time customers use their debit card and uses the information to help consumers manage and forecast day-to-day spending.
  • BNP Paribas Fortis of Belgium partnered with the country’s largest telecommunications provider to create a mobile e-commerce platform that enables consumers to shop and pay for products from their smartphones. The service makes it easier for consumers to find merchants and helps local businesses get paid more quickly, which is good for the bank’s commercial business.
  • Commonwealth Bank of Australia has a mobile app that enables customers to use augmented reality to get detailed information about homes they might want to buy by simply pointing their smartphone camera at the property. The app also tells users exactly how much they will pay for a mortgage from the bank.
  • Five of Canada’s largest banks have partnered with Sensibill to integrate automated receipt management functionality into their digital banking apps. Customers can use the service to organize receipts and get reports that help them with budgeting.

These efforts are successful because the banks see themselves as more than just money managers. They’ve broadened their perspective to become allies to their customers in helping them become more efficient and achieve their dreams.

The cloud offers unprecedented capabilities for banks to integrate other services into their core applications through APIs. For example, many financial services companies now offer credit reporting as a free feature. Credit agencies are eager to promote their brands through this kind of integration, and they make it easy for banks to work with them.

When cloud is combined with big data, banks can put their existing knowledge of their customers to work in new ways. For example, they can segment customers by spending and saving behavior and offer services tuned to different budgets. They can target services to distinct customer segments based on geography or age group by overlaying demographics on customer data. They can even listen in on social media conversations to pinpoint opportunities to offer, for example, car loans to fans of specific vehicle makes and models.
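
A simple sketch of the first idea – segmenting customers by spending and saving behavior – might look like the following Python. The thresholds, field names and suggested offers are purely illustrative assumptions, not any bank’s actual model.

```python
# Illustrative customer records; fields and values are invented for the sketch.
customers = [
    {"id": 1, "monthly_spend": 4200, "monthly_save": 100},
    {"id": 2, "monthly_spend": 900,  "monthly_save": 650},
    {"id": 3, "monthly_spend": 2100, "monthly_save": 300},
]

def segment(c):
    # Thresholds below are assumptions chosen for the example only.
    if c["monthly_save"] >= c["monthly_spend"] * 0.5:
        return "saver"        # e.g. offer investment products
    if c["monthly_spend"] >= 3000:
        return "big spender"  # e.g. offer cash-back cards
    return "balanced"         # e.g. offer budgeting tools

segments = {c["id"]: segment(c) for c in customers}
print(segments)  # {1: 'big spender', 2: 'saver', 3: 'balanced'}
```

In practice the same segmentation logic would run over transaction histories in a big data platform, with demographics overlaid as the article describes.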

The biggest impediments to this kind of transformation aren’t technical but rather cultural. If banks see themselves as simply stewards of money, they limit their potential to break out of historical niches. But when they see themselves as allies in their customers’ financial success, they can use the cloud and big data to expand and enrich relationships. The mobile app is the new branch, and a good service provider can help your financial institution realize its transformative potential.

Agile IT: a better way of doing business

One of the most powerful new ideas to emerge from the cloud computing revolution is IT agility. Agile IT organizations are able to easily adapt to changing business needs by delivering applications and infrastructure quickly to those who need it. Does your organization have what it takes to be truly agile?

There are many components of agile IT infrastructure, but three that we think are particularly important are containers, microservices and automation. These form the foundation of the new breed of cloud-native applications, and they can be used by any organization to revolutionize the speed and agility of application delivery to support the business.

Containers: Fast and Flexible

Containers are a sort of lightweight virtual machine, but they differ from VMs in fundamental ways. Containers run as groups of namespaced processes within a shared operating system, each with an isolated view of resources such as processor, memory and the supporting elements an application needs. They are typically stored in libraries for reuse and can be spun up and shut down in seconds. They’re also portable, meaning that an application running in a container can be moved to any other environment that supports that type of container.

Containers have only been on the IT radar screen for about three years, but they are being adopted with astonishing speed. One recent study found that 40% of organizations are already using containers in production and just 13% have no plans to adopt them during the coming year. Containers are especially popular with developers because coders can configure and launch their own workspaces without incurring the delay and overhead of involving the IT organization.

Microservices: a Better Approach to Applications

Use of containers frequently goes hand-in-hand with the adoption of microservices architectures. Applications built from microservices are based upon a network of independently deployable, modular services that use a lightweight communications mechanism such as a messaging protocol. Think of it as an object assembled from Lego blocks. Individual blocks aren’t very useful by themselves, but when combined, they can create elaborate structures.
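
The Lego-block analogy can be sketched in a few lines of Python. This is a minimal, in-process sketch only: the `MessageBus` class and the two services (pricing and invoicing) are hypothetical stand-ins for independently deployable services connected by a real lightweight communications mechanism such as a message broker.

```python
# A toy in-process message bus standing in for a real messaging protocol.
class MessageBus:
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic, handler):
        self.topics.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.topics.get(topic, []):
            handler(message)

# Two modular "services", each owning a single capability.
def pricing_service(bus):
    def handle(order):
        order["total"] = order["quantity"] * order["unit_price"]
        bus.publish("order.priced", order)
    bus.subscribe("order.placed", handle)

def invoice_service(bus, invoices):
    def handle(order):
        invoices.append(f"Invoice: {order['quantity']} x {order['unit_price']} = {order['total']}")
    bus.subscribe("order.priced", handle)

bus = MessageBus()
invoices = []
pricing_service(bus)
invoice_service(bus, invoices)
bus.publish("order.placed", {"quantity": 3, "unit_price": 10.0})
print(invoices[0])  # Invoice: 3 x 10.0 = 30.0
```

Neither service knows about the other; each reacts only to messages on the bus, which is what makes the blocks independently replaceable.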

Service-oriented architecture is nothing new, but the technology has finally matured to the point that it’s practical to rethink applications in that form. The microservices approach is more flexible and efficient than the vertically integrated applications that have dominated IT for decades. By assembling applications from libraries of services, teams minimize duplication and move software into production much more quickly. There’s less testing overhead and more efficient execution, since developers can focus on improving existing microservices rather than reinventing the wheel with each project.

Containers are an ideal platform for microservices. They can be launched quickly and custom-configured to use only the resources they need. A single microservice may be used in many ways by many different applications. Orchestration software such as Kubernetes keeps things running smoothly, handles exceptions and constantly balances resources across a cluster.

Automation: Departure from Routine

Automation is essential to keeping this complex environment running smoothly. Popular open-source tools such as Puppet and Ansible make it possible for many tasks that were once performed by systems administrators – such as defining security policies, managing certificates, balancing processing loads and assigning network addresses – to be automated via scripts. Automation tools were developed by cloud-native companies to make it possible for them to run large-scale IT operations without legions of administrators, but the tools are useful in any context.
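
As an illustration of the kind of task such tools script away, here is a hedged Python sketch that assigns network addresses to hosts from a pool – one of the chores listed above. The subnet and host names are illustrative assumptions, and this is plain Python rather than Puppet or Ansible syntax.

```python
import ipaddress

# Assign each host the next free address from a subnet pool.
# Subnet and host names below are invented for the example.
def assign_addresses(hosts, subnet="10.0.0.0/28"):
    pool = ipaddress.ip_network(subnet).hosts()  # usable host addresses
    return {host: str(next(pool)) for host in hosts}

plan = assign_addresses(["web-01", "web-02", "db-01"])
print(plan["web-01"])  # 10.0.0.1
```

The point is the pattern, not the ten lines: once a routine task is expressed as code, it can be run repeatedly, reviewed and version-controlled instead of performed by hand.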

Automation not only saves money but improves job satisfaction. Manual, routine tasks can be assigned to scripts so that administrators can tend to more important and challenging work. And in a time of severe IT labor shortages, who doesn’t want happier employees?

Agile IT makes organizations nimbler, more responsive and faster moving. When planned and executed with the help of an experienced integration partner, it saves money as well.


Cloud Strategy: data collection

Here is part 6 of our series covering the key issues to consider before adopting cloud technologies. This month, we discuss how to build your strategy and data points that must be considered.

When considering and building a cloud strategy, organisations need to define the business objectives and outcomes desired, set quantifiable and time-bound goals, and identify the specific initiatives the enterprise can and should undertake to execute the strategy and achieve those goals. As surveys on the subject by Gartner in 2013 and 2014 showed, process and culture are likely to be the biggest hurdles in any move to cloud. Involving all aspects of the business and gathering the right information can therefore help build the right strategy and identify potential problems ahead of time.

The first concrete step in building this strategy is to gather the data points needed to identify and define those objectives, goals and initiatives for the enterprise in the near and mid terms. Once the data is collected, you can review and analyze it, identify the business outcomes desired, set the (quantifiable) goals and define the specific initiatives you want to put in place to achieve them. This should not be a strict price or technology evaluation.

Data Collection
The data points needed will have to come from various parts of the organisation (business units, finance, HR and IT). Some of the information required may take the form of files, but much of it will reside with your staff directly, so interviews should be part of the data collection process. These interviews should take up to a few hours each and focus on the interviewees’ functions, the processes they use and the business outcomes they require or desire, to provide insight into the actual impacts on the business before creating your cloud strategy.

With this data, you will be in a position to account for all aspects touching cloud computing, to see what it will affect and how, to evaluate its effect on the balance sheet (positive or negative) and decide on your strategy moving forward.

Benoit Quintin, Director Cloud Services – ESI Technologies

Cloud Strategy – human impacts across organization

Here is part five of our series covering the key issues to consider before adopting cloud technologies. This month, we discuss the impact on human resources.

Resources in your organisation, both on the IT side and on the business side, will be impacted by this change. While helping companies move to cloud, we have had to assist with adapting IT job descriptions, processes and roles within the organisation.

As the IT organisation moves into a P&L role, its success becomes tied to stakeholders’ adoption of the services offered. To achieve this, IT needs to get closer to the business units, understand their requirements and deliver access to resources on demand. None of this can happen unless things change within the IT group.

As companies automate their practice and create self-service portals to provision resources, some job descriptions need to evolve. A strong, clear communication plan with set milestones helps employees understand the changes coming to the organisation, and involving them in the decision process will go a long way toward easing the transition. We have seen that IT organisations with a clear communication plan from the onset, one that involved their employees in the process, had a much easier transition and a faster adoption rate than those that did not.

Our experience helping customers with cloud computing shows that cloud significantly alters IT’s role and relationship with the business, and that employees’ roles need to evolve. Training, staff engagement in the transition and constant communication will go a long way toward helping your organisation move to this new paradigm.

Benoit Quintin, Director Cloud Services – ESI Technologies

Cloud Strategy: technological impacts

Here is part four of our series covering the key issues to consider before adopting cloud technologies. This article focuses specifically on technological impacts to consider.

Not all software technology is created equal, and not every application will migrate gracefully to the cloud: some will never tolerate the latency, while others were never designed to have multiple smaller elements working together rather than a few big servers. This means your business applications will need to be evaluated for cloud readiness. This is possibly the largest technological hurdle but, as with all technology, it may prove easier to solve than some of the organisational issues.

One should look at the application’s architecture (n-tiered or monolithic), its tolerance to faults (e.g. latency, network errors, services down, servers down) and how users consume the application (always from a PC in the office, or fully decentralized with offline and mobile access) to evaluate options for migrating it to the cloud. An organisation’s current growth rate and state are often mirrored in its IT consumption rate and requirements. An organisation experiencing high growth, or launching a project whose growth is not easily predictable, can benefit significantly from a scalable, elastic cloud model, whereas an organisation with slower growth, familiar projects and predictable IT requirements will not likely assess the value of cloud computing the same way; for the latter, accountability of resources and traceability of all assets in use may be the bigger concern.
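
The evaluation described above can be imagined as a simple scoring checklist. The following Python sketch is illustrative only; the criteria, field names and weights are assumptions for the example, not a formal cloud-readiness methodology.

```python
# A toy cloud-readiness score built from the three criteria the article
# names: architecture, fault/latency tolerance and access pattern.
def cloud_readiness(app):
    score = 0
    if app["architecture"] == "n-tier":
        score += 2           # modular apps tend to migrate more gracefully
    if app["latency_tolerant"]:
        score += 2           # latency-sensitive apps may never tolerate cloud
    if app["access"] == "decentralized":
        score += 1           # mobile/offline users already fit the model
    return score

legacy = {"architecture": "monolithic", "latency_tolerant": False, "access": "office"}
modern = {"architecture": "n-tier", "latency_tolerant": True, "access": "decentralized"}
print(cloud_readiness(legacy), cloud_readiness(modern))  # 0 5
```

A real assessment would weigh many more factors, but even a crude score like this helps rank an application portfolio and decide where to start.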

Architecture, applications and legacy environments are all technological considerations that should be factored into any cloud computing viability and readiness assessment, but they should probably not be the main driver of your cloud strategy.

Benoit Quintin, Director Cloud Services – ESI Technologies

Cloud Strategy: legal impacts across the organization

Here is part three of our series covering the key issues to consider before adopting cloud technologies. This article focuses specifically on legal impacts on your organization.

“Location, location, location”. We’re more accustomed to hearing this in the context of the housing market. However, where your company’s headquarters reside, where your company does business and where its subsidiaries are located directly impact how you need to manage sensitive information, such as strategic projects, HR/personnel information, etc.; essentially, IT needs to account for data sovereignty laws and regulations.

Various countries have already passed, or are moving towards passing, more restrictive data sovereignty legislation controlling the transit of information across borders. For example, the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA) already governs how organisations can collect, use and disclose personal information in the course of commercial business, and contains various provisions to facilitate the use of electronic documents. Essentially, all personally identifiable information must stay in the country, at rest and in transit, meaning that placing such data with a cloud provider in the US or any other country could expose the company – and you – to a lawsuit, unless the provider can guarantee the data never leaves the country at any time, including for redundancy/DR purposes.

While PIPEDA covers what must be protected, American law (the USA Freedom Act and its previous incarnation, the Patriot Act) enables the US government to access any data residing on its soil without the owner’s authorization, without a warrant, and without even notifying the owner before or after the fact. The few data privacy provisions in the bill apply to American citizens and entities only. This means all data housed in the US is at risk, especially if it is owned by an organisation headquartered out of country.

In Europe, although laws vary from country to country, regulations on data protection are becoming more stringent, requiring the establishment of procedures and controls to protect personal data and the explicit authorization of individuals to collect and use their information. All this imposes guidelines on the use of cloud within each country and outside its borders.

Data sovereignty should typically be a concern for most organisations looking at cloud and, as the current trend is for countries to pass ever more stringent laws, any cloud strategy should account for local, national and international regulations.

Benoit Quintin – Director Cloud Services – ESI Technologies

Cloud Strategy: business impacts across the organization

Here is the second part of our series covering the key issues to consider before adopting cloud technologies. This article focuses specifically on business impacts on your organization.

Most markets are evolving faster than ever before, and the trend seems to be accelerating, so organisations globally need to adapt and change the way they go to market. From a business standpoint, the flexibility and speed with which new solutions can be delivered via cloud help the business units react faster and better. So much so that, where IT organisations have not automated aspects of provisioning to provide more flexibility and faster access to resources, business units have started going outside of IT, to some of the public cloud offerings, for resources.

Planning for cloud should consider people and processes, as both will likely be directly impacted. Processes created and used before the advent of cloud in your organisation – from the requisition of resources all the way to charging back business units for resources consumed, managed independently from project budgets – should be adapted, if not discarded and rebuilt from scratch. IT will need to change and evolve as it becomes an internal service provider (in many instances, a P&L entity) and resource broker for the business units.

Considering the large capital investments IT has typically received as budget to ‘keep the lights on’ – a budget that, until recently, had been growing at a double-digit rate since the early days of the mainframe – the switch from a capital investment model to an operational model can significantly change the way IT does business. Indeed, we have seen the shift force IT to focus on what it does best, review its relationships with vendors and ultimately free up valuable investment resources. In many organisations, this has also enabled net new projects to come to life, in and out of IT.

Once this transformation is underway, you should start seeing some of the benefits other organisations have been enjoying, starting with faster speed to market on new offerings. Indeed, in this age of mobile everything, customers expect access to everything all the time, and your competition is likely launching new offerings every day. A move towards cloud enables projects to move forward at an accelerated pace, letting you go to market with updated offerings much faster.

Benoit Quintin, Director Cloud Services, ESI Technologies

Cloud computing: strategy and IT readiness – Transformation in IT

Here is the first of a series of articles that provide both business and IT executives insights into the key issues that they should consider when evaluating cloud services, paying particular attention to business and legal ramifications of moving to the cloud environment, whether it is private, hybrid or public.

For the last few decades, IT organisations have been the only option for provisioning IT resources for projects. Indeed, all new projects would involve IT, and the IT team was responsible for acquiring, architecting and delivering the solution that would sustain the application/project during its lifecycle, planning for upgrades along the way.
This led to silo-based infrastructures – and teams – often designed for peak demand, with no possibility of efficiency gains between projects. The introduction of compute virtualization, first for test/dev and then for production, showed that other options were possible and that, by aggregating requirements across projects, IT could achieve significant efficiencies of scale and cost while gaining flexibility and speed to market, as provisioning a virtual server suddenly became a matter of days rather than weeks or months.
Over time, IT started applying these same methods to storage and network and these showed similar flexibility, scalability and efficiency improvements. These gains, together with automation capabilities and self-service portals, were combined over time to become what we know as ‘cloud offerings’.
In parallel, IT in some organisations has become structured, organized, usually siloed and, unfortunately, somewhat slow to respond to business needs. This has led to a slow erosion of IT’s power and influence over the acquisition, delivery and management of IT resources. Coupled with the commercial/public cloud options available these days, capital is rapidly leaving the organisation for third-party public cloud vendors – a phenomenon known as shadow IT. This raises concerns, not the least of which is that funds are sent outside the organisation to address tactical issues, typically without regard to legal implications, data security or cost efficiency. These issues highlight the need for IT to react faster, become more customer driven, deliver more value and provide its stakeholders with flexibility matching that of public cloud. Essentially, IT needs to evolve into a business partner, with cloud computing providing the tools by which IT offers the flexibility, scalability and speed to market that business units are looking for in today’s market.

Benoit Quintin, Director Cloud Services, ESI Technologies

Where’s the promised agility?

The world of technology solutions integrators has changed dramatically in the last 10 years. Customers are more educated than ever before through access to a world of information available on the Internet. It is estimated that 80% of customer decision-making happens online before they even reach out to us. This is not just true of our industry. The Internet is now woven into the fabric of society, and clients now go to the veterinary clinic believing they have already identified their pet’s disease, since “the Internet” provided them with a diagnosis!

What about the promises of industry giants? Simplified IT, reduced OPEX, increased budgets for projects instead of maintenance, etc.?

How can we explain that we don’t witness this in our conversations with customers? How is it that clients who have embraced those technologies still admit they now face greater complexity than before? Perhaps the flaw lies precisely in the fact that 80% of decisions are based on well-designed and well-executed web marketing strategies…

Regardless of technological evolution, the key, it seems, is still architecture design conceived with a business purpose, and an IT integration strategy tailored to your specific needs with the help of professionals. Just as a veterinarian is certainly a better source of information than the Internet for looking after your pet…

For over 20 years, ESI has designed solutions that are agile, scalable and customized to the specific needs of organisations. ESI works closely with customers to bridge the gap between business needs and technology, maximizing ROI and providing objective professional advice.