Why Agile infrastructure and DevOps go hand-in-hand

Did you know that some software development operations deploy code up to 200 times more frequently than others? Or that they can make code changes in less than an hour, recover from downtime in minutes and experience a 60-times lower code failure rate than their peers?

The difference is DevOps, a new approach to agile development that emphasizes modularity, frequent releases and a constant feedback cycle. DevOps stands in sharp contrast to the conventional waterfall approach to software development. Traditionally, software development has involved extensive upfront design and specification, a process that could take weeks. Developers then went away – often for months – and built a prototype application. After a lengthy review process, changes were specified and developers disappeared again to implement them. As a result, it wasn’t unusual for enterprise applications to take a year or more to deliver, during which time user needs often changed, requiring still more development.

No one has the luxury of that kind of time anymore. Today’s software development timeframe is defined by the apps on your smartphone. DevOps is a way to make enterprise software development work at internet speed.

DevOps replaces the monolithic approach to building software with components that can be quickly coded and prototyped. Users are closely involved with the development process at every step. Code reviews may happen as frequently as once a day, with the focus being on steady, incremental progress of each component rather than delivery of a complete application.

One significant way DevOps departs from traditional development is that coders have control not only over the code, but also over the operating environment. Agile infrastructure makes this possible. Technologies like containers and microservices enable developers to provision their own workspaces, including variables like memory, storage, operating system and even co-resident software, to emulate the production environment as closely as possible. This reduces the risk of surprises when migrating from test to production, while also enhancing developer control and accountability.

With the advent of containers, IT organizations can now create libraries of lightweight container images customized to the needs of individual developers. Containers can be quickly spun up and shut down without IT administrative overhead, a practice that can save weeks over the course of a project. Pre-configured containers can also be shared across development teams. Instead of waiting days for a virtual machine to be provisioned, developers can be up and running in minutes, which is one of the reasons DevOps enables such dramatic speed improvements.
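
As a rough sketch of what this looks like in practice, the snippet below uses the Docker SDK for Python to spin up and tear down a disposable workspace. The image name, memory limit and environment variable are illustrative placeholders, not a prescribed setup:

    # Minimal sketch: provisioning a throwaway development container with the
    # Docker SDK for Python. The image name and resource settings below are
    # hypothetical examples of a pre-built, team-shared environment.
    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    # Start a workspace that mirrors production constraints: memory budget,
    # environment variables and co-resident software baked into the image.
    container = client.containers.run(
        "registry.example.com/dev/webapp-env:latest",  # hypothetical team image
        detach=True,
        mem_limit="512m",                    # emulate the production memory budget
        environment={"APP_ENV": "staging"},
        name="dev-workspace-alice",
    )

    print(container.status)  # 'created' or 'running'

    # Tear the workspace down when the task is done; no IT ticket required.
    container.stop()
    container.remove()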

Many organizations that adopt DevOps have seen exceptional results. High-performing DevOps organizations spend 50% less time on unplanned work and rework, which means they spend that much more time on new features, according to Puppet Labs’ 2016 State of DevOps report. Code failure rates are less than 15%, compared to up to 45% for middle-of-the-road performers. Lead time for changes is under an hour, compared to weeks for traditional waterfall development.

As enticing as those benefits sound, the journey to DevOps isn’t a simple one. Significant culture change is involved. One major change is that developers assume much greater accountability for the results of their work. This quickly helps organizations identify their best performers – and their weakest ones.

But accountability also gives top performers the recognition they deserve for outstanding work. This is one reason why Puppet Labs found that employees in high-performing DevOps organizations are 2.2 times more likely to recommend their company as a great place to work. Excellence breeds excellence, and top performers like working with others who meet their standards.

Users must shift their perspective as well. DevOps requires much more frequent engagement with development teams. However, high-performing organizations actually spend less time overall in meetings and code reviews because problems are identified earlier in the process and corrected before code dependencies layer on additional complexity. Meetings are more frequent, but shorter and more focused.

If you’re adopting agile infrastructure, DevOps is the next logical step to becoming a fully agile IT organization.

When choosing a cloud provider, it pays to think small!

When you buy wine, do you go to the big discount store or the local specialty retailer? Chances are you do both, depending on the situation. The big-box store has selection and low prices, but the people who run the wine store on the corner can delight you with recommendations you couldn’t find anywhere else.

The same dynamics apply to choosing a cloud service provider. When you think of cloud vendors, there are probably four or five company names that immediately come to mind. But if you Google rankings of cloud vendors according to customer satisfaction or relevance to small businesses, you’ll find quite a different list. There are hundreds of small, regional and specialty infrastructure-as-a-service providers out there. In many cases, they offer value that the giants can’t match. Here are five reasons to consider them.

Customer service – This is probably the number one reason to go with a smaller hosting provider. If you have a problem, you can usually get a person on the phone. Over time, the service provider gets to know you and can offer advice or exclusive discounts. The giants just can’t match this personalized service.

Specialty knowledge – You can find apps for just about anything in the marketplace sections of the big cloud companies, but after that you’re pretty much on your own. If struggling with configuration files and troubleshooting Apache error messages isn’t your cup of tea, look for a service provider that specializes in the task you’re trying to accomplish. Not only do you usually get personal service, but the people are experts in the solutions they support. They’ll get you answers fast.

A smile and a handshake – There are several good reasons to choose a vendor in your geographic area. For one thing, government-mandated data protection laws may require it. Local providers also offer a personal touch that call centers can’t match. You can visit their facilities, meet with them to plan for your service needs and get recommendations for local developers or contractors you might need. Many small vendors also offer colocation options and on-site backup and disaster recovery. In a technology world where everything seems to have gone virtual, it’s nice to put a face to a name.

Low cost – This sounds counterintuitive, but the reality is that many specialty providers are cheaper than the cloud giants. That’s particularly true if they specialize in an application like WordPress or Drupal, or in a service like backup. These companies can leverage economies of scale to offer competitive prices, and you still get all the other benefits of their specialized knowledge. Shop around; you might be surprised.

Performance – If the primary users of the cloud service are people in your company and/or in your geographic region, you will probably see better performance with a local vendor. That’s simply physics: the farther a signal has to travel, the longer it takes to reach its destination. This is particularly important if you plan to use services like cloud storage or if you need to transfer large files, an error-prone process that only gets worse with distance.
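
As a rough illustration of how distance alone translates into delay, here is a back-of-the-envelope sketch in Python. It assumes light travels through optical fibre at roughly 200,000 km/s (about two-thirds of its speed in a vacuum) and ignores routing, congestion and processing overhead, so real-world latency will only be higher:

    # Back-of-the-envelope propagation delay: distance only, no routing or
    # processing overhead. 200,000 km/s is the usual approximation for the
    # speed of light in optical fibre.
    FIBRE_SPEED_KM_PER_S = 200_000

    def round_trip_ms(distance_km: float) -> float:
        """Minimum round-trip time in milliseconds for a given one-way distance."""
        return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

    for km in (50, 500, 4000):  # local vs. regional vs. cross-continent provider
        print(f"{km:>5} km  ->  {round_trip_ms(km):.1f} ms minimum round trip")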

Network challenges? Optimize your environment!

Business networks are often like children: they grow unnoticed, sometimes in a disorganized and often unexpected way. A company can quickly end up with a sprawl of unoptimized equipment to manage.

But it keeps on growing: management wants to install a videoconferencing system, back up a subsidiary’s data and keep the copies at the head office…

Can your network support these new features? The answer is probably not.

From there, problems multiply. Over time, users experience slowdowns, phone calls become choppy, and intermittent outages may even occur. How do you solve these problems? Where do you start looking?

With a multitude of disparate equipment, and often without a centralized logging system, it is difficult to investigate and pinpoint the source of a problem.
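
Centralizing logs is often the first practical step toward faster troubleshooting. The sketch below shows the idea from the application side using Python’s standard library; the collector address 192.0.2.10 and UDP port 514 are placeholders, and network equipment would normally be pointed at the same collector through its own syslog settings:

    # Minimal sketch: forwarding application events to a central syslog collector
    # so that servers and network gear all log to one searchable place.
    # The collector address and port below are illustrative placeholders.
    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("branch-office-app")
    logger.setLevel(logging.INFO)

    handler = SysLogHandler(address=("192.0.2.10", 514))  # hypothetical collector
    handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.warning("Intermittent packet loss detected on uplink to head office")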

Network analysis: why and how

For ESI, each client is different. The most important part of our work is, first of all, to understand the client’s situation and what led them to need a network analysis. A new feature to add? Intermittent outages? A desire to plan future investments in the network?

Once this objective is established, we review the most recent network diagrams, if any exist. We examine the equipment, the configurations, the redundancy, the segmentation… We evaluate all of this to assess the overall health of the network.

We can thus identify:

  • End-of-life equipment
  • Equipment close to failure
  • Configuration problems and optimization opportunities
  • Network bottlenecks

But most importantly, depending on your needs, we help you identify short-, medium- and long-term priorities for network investment. At the end of the analysis, our clients obtain:

  • An accurate view of their network
  • An action plan on existing equipment
  • An investment plan

Why ESI?

ESI Technologies has been helping companies plan and modify their infrastructure for more than 22 years!
Contact us now to find out more about what ESI can do for you!

Account of the NetApp Insight 2016 Conference

The 2016 Edition of NetApp Insight took place in Las Vegas from September 26 to 29.
Again this year, NetApp presented the ‘Data Fabric’ vision it unveiled two years ago. According to NetApp, the growth in the capacity, velocity and variety of data can no longer be handled by the usual tools. As stated by NetApp’s CEO George Kurian, “data is the currency of the digital economy”, and NetApp wants to be seen as a bank of sorts, helping organizations manage, move and grow their data globally. The central challenge of the digital economy is thus data management, and NetApp clearly intends to be a leader in this field. This vision becomes more concrete every year across the products and platforms added to the portfolio.

New hardware platforms

NetApp took advantage of the conference to officially introduce its new hardware platforms, which integrate 32Gb FC SAN ports, 40GbE network ports, an NVMe SSD embedded read cache and 12Gb SAS-3 ports for back-end storage. Additionally, the FAS9000 and AFF A700 use a new, fully modular chassis (including the controller module) to facilitate future hardware upgrades.

Note that the SolidFire platforms drew attention from both NetApp and the audience: NetApp was keen to explain their place in the portfolio, while attendees wanted to learn more about this extremely agile and innovative technology. https://www.youtube.com/watch?v=jiL30L5h2ik

New software solutions

  • SnapMirror for AltaVault, available soon through the SnapCenter platform (which replaces SnapDrive/SnapManager): this solution allows backup of NetApp volume data (including application databases) directly to the cloud (AWS, Azure & StorageGrid) https://www.youtube.com/watch?v=Ga8cxErnjhs
  • SnapMirror for SolidFire is currently under development. No further details were provided.

The features presented reinforce the objective of offering a unified data management layer through the NetApp portfolio.

The last two solutions presented are more surprising, since they do not require any NetApp equipment; they are available on the AWS application store (SaaS).

In conclusion, we feel that NetApp is taking steps to be a major player in the “software defined” field, while upgrading its hardware platforms to get ready to meet the current challenges of the storage industry.

Olivier Navatte, Senior Consultant – Storage Architecture

Review of NetApp Insight 2015

The 2015 Edition of NetApp Insight was held in Las Vegas from October 12 to 15. The event comprises general sessions, more than 400 breakout sessions, the Insight Central zone with partner booths, hands-on labs and a “meet the engineer” area, and offers the opportunity to take certification exams onsite.
The general sessions were presented by various NetApp figures (the CEO, CIO, technical directors, engineers and NetApp co-founder Dave Hitz), as well as partners and guests (including Cisco, Fujitsu, VMware and 3D Robotics).
Last year, the term “Data Fabric” was unveiled to describe NetApp’s vision of cloud computing. This year, most of the presentations were intended to make that vision more concrete through examples, demonstrations and real-world context.
For NetApp, Data Fabric is synonymous with data mobility, wherever the data resides, whether in traditional datacentres or in the cloud. The key to this mobility lies in SnapMirror, which should soon be supported across various NetApp platforms (FAS, Cloud ONTAP, NetApp Private Storage, AltaVault, etc.) and orchestrated by global tools such as OnCommand Cloud Manager and adaptations of existing tools.
Still on the topic of cloud, a Cisco speaker presented current issues and future trends: with the exponential growth in devices (tablets, smartphones and connected devices) and the increasingly frequent move of data (and even compute) to the edge, accessibility, availability, security and data mobility become increasingly important issues. In short, the cloud trend belongs to the past; we now must talk about the edge!
NetApp also put forward its All-Flash FAS enterprise solutions which, thanks to new optimizations, can now seriously compete in high-performance, very-low-latency environments.
The number of breakout sessions was impressive and in four days, one can only expect to attend about 20 of the 400 sessions available.
Insight has been open to clients since last year, but some sessions remain reserved for NetApp partners and employees. Some information is confidential, but without giving details and without being exhaustive, we can mention that a new generation of controllers and disk shelves is expected soon, that SnapCenter will eventually replace SnapManager (in cDOT only) and that new, much more direct transition options from 7-Mode to cDOT will become available.
Other sessions helped us deepen our knowledge or discover some very interesting tools and features.
In conclusion, NetApp Insight is a must, both to immerse yourself in the NetApp line of solutions and to learn about NetApp’s vision and future direction.

Olivier Navatte, ESI Storage Specialist

IBM Flash Storage – it’s all about applications

Your backup, storage and archiving processes must provide availability and quick, easy, real-time data access, while evolving with constant technological change. As a specialized storage solution integrator, ESI offers storage audit assessments that analyze a company’s digital information lifecycle, validate its integrity and examine how the storage solutions interact with its other systems.

As storage experts, we are often asked to recommend products that help our clients meet their performance targets, consolidate application hardware and licensing, and improve the response times of critical applications.

IBM’s FlashSystem is a solution tailored to the needs of enterprises facing performance, productivity and reliability issues.

IBM FlashSystem flash storage increases user and data center productivity and company revenues, while reducing software licence fees (lower TCO), electricity consumption, rack space and downtime.

FlashSystem accelerates applications and eliminates single points of failure with two-dimensional (2D) Flash RAID.

IBM’s FlashSystem is easy to install, easy to manage and easy to service!

ESI helps you optimize your IBM FlashSystem and takes your organization to its highest level of performance!

Contact us to plan a demo of IBM FlashSystem features!

If technology is not an issue, what is the value of an integrator?

It was during an exploratory meeting with a new customer, discussing his issues, that something most of us in the business already know was confirmed: bad technology is rare. By “bad technology”, he meant technology that simply doesn’t work and doesn’t do what it is supposed to do. In his experience, 95% of technology issues are actually configuration problems. He did cite one real case of “bad” technology he had dealt with in his environment, related to hardware or firmware: a firmware component that would fail after upgrades and updates, forcing him to roll back to a prior firmware version and thus return to the very problem the update was supposed to fix in the first place. That’s bad technology!

I have come to agree with his view. Some technologies are more robust, some more performant, and some offer unique features that are desirable for your IT environment, at many different price points and levels of performance, reliability and functionality. Overall, though, you usually get the expected value for the purchase price, and rarely a technology that’s truly “bad”. Those simply don’t last very long in the marketplace.

So if 95% of technology issues come down to misconfiguration or sub-optimal configuration, where does that leave us? This is the role of integrators. A typical network administrator goes through a major tech refresh once every three to five years, whereas for network integrators this is their bread and butter, a reality they live day in and day out, all year long. It’s hard enough to keep up with so many manufacturers selling so many features and advantages at different price points, most of it good, honest technology, never mind making sure it’s well deployed and perfectly tuned to your environment, especially if you only go through this process once every three to five years with new technology. Optimized configuration work also becomes more complex as you integrate new technologies from new manufacturers alongside technology from vendors you already have in place. Don’t get me wrong: choosing the right technology for your environment is important, but even more important is finding people with strong deployment experience who understand your business objectives. With 95% of issues being related to configuration, this is where integrators such as ESI Technologies bring value to the table.

Charles Tremblay, Account manager, ESI