Understanding and adopting Splunk

Splunk has been trending in the industry for quite some time, but what do we really know about how it is used and the market it targets?

Splunk comes from the word “spelunking”, which refers to the activity of exploring caves: locating, exploring, studying and mapping them. The tool applies the same approach to machine data, in three steps:

  1. Data indexing: Splunk collects data from different locations, combines it and stores it in a centralized index.
  2. Index-based searching: using indexes gives Splunk a high degree of speed when searching for the source of a problem.
  3. Result filtering: Splunk provides users with several tools for filtering results, for faster detection of problems.
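For readers who like to see what this looks like in practice, here is a minimal sketch using Splunk’s Python SDK (splunklib). The host, credentials, index, sourcetype and field names are assumptions for illustration only; the point is simply that a search runs against the centralized index and the results are filtered down to a short list.

```python
# A minimal sketch using Splunk's Python SDK (splunklib).
# Host, credentials, index, sourcetype and field names below are assumptions.
import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (8089 by default).
service = client.connect(
    host="localhost", port=8089,
    username="admin", password="changeme")

# Run a blocking "oneshot" search against the index, then filter and
# summarize the results in the same query.
stream = service.jobs.oneshot(
    "search index=main sourcetype=access_combined status>=500 "
    "| stats count by uri_path | sort -count | head 10")

# Stream the filtered results back.
for row in results.ResultsReader(stream):
    print(row)
```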

For more than a year I have been experimenting with Splunk in several areas: security, storage, infrastructure, telecom and more. We at ESI have a very complete laboratory, which has allowed me to push my experiments further.

Beyond all of this internal data, I used open data to experiment with Splunk’s ability to interpret it.

I tested the open data from the “montreal.bixi.com” site; it is raw data formatted as follows:

Start date – Start station number – Start station – End date – End station number – End station – Account type – Total duration (ms)

With this data, we can find the most common routes, estimate the average duration of a trip, and identify the anchor points most in demand for picking up or returning bicycles.
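As a quick illustration of what those analyses look like outside of Splunk, here is a short pandas sketch based on the format above; the file name and the exact column labels are assumptions, since the published CSV headers may differ.

```python
# A quick sketch of the same analysis with pandas; the file name and
# column labels are assumptions based on the format described above.
import pandas as pd

trips = pd.read_csv("bixi_trips.csv")

# Most common routes: count trips per (start, end) station pair.
routes = (trips.groupby(["Start station", "End station"])
               .size()
               .sort_values(ascending=False)
               .head(10))

# Average trip duration, converted from milliseconds to minutes.
avg_minutes = trips["Total duration (ms)"].mean() / 60000

# Busiest anchor points for departures and arrivals.
top_departures = trips["Start station"].value_counts().head(10)
top_arrivals = trips["End station"].value_counts().head(10)

print(routes, avg_minutes, top_departures, top_arrivals, sep="\n\n")
```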

For the service’s operations team, this shows, in real time or as a forecast for the next day, which anchor points should be stocked with more bicycles, and above all where those bicycles will go. They could predict shortages or surpluses of bikes at the anchor points. If data were collected in real time, alerts could be issued to flag a potential shortage or surplus at a given anchor point. The system thus facilitates planning and lets the team be proactive in meeting demand rather than reactive. We could even detect an undelivered bicycle: a bike that has not been anchored for more than 24 hours could trigger an alert so that the operations team can attempt to trace it.
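That 24-hour check is simple to express. Here is a hedged sketch of the idea in Python; it assumes real-time events that include a bike identifier, which the published open data does not actually provide.

```python
# A sketch of the 24-hour "undelivered bicycle" check described above.
# It assumes a feed of anchoring events keyed by a bike identifier.
from datetime import datetime, timedelta

def overdue_bikes(last_anchored, now=None, threshold=timedelta(hours=24)):
    """Return bike IDs whose last anchoring event is older than the threshold."""
    now = now or datetime.utcnow()
    return [bike_id for bike_id, anchored_at in last_anchored.items()
            if now - anchored_at > threshold]

# Example: bike "B-1042" was last anchored 30 hours ago, so it gets flagged.
state = {"B-1042": datetime.utcnow() - timedelta(hours=30),
         "B-2201": datetime.utcnow() - timedelta(hours=2)}
print(overdue_bikes(state))  # ['B-1042']
```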

For marketers, one might think this data is useless, but the opposite is true: the same data can be used to design offers that attract customers, since we know the departure and arrival times, the duration of each trip, and the most used routes. One can thus identify the busiest time slots and run promotions or adjust rates according to traffic or customer-loyalty objectives.

For management, the open data unfortunately does not give the price of trips according to user status (member or non-member), but the beauty of Splunk is that the collected data can be enriched with data from a third-party system, a database or simply manually collected information. Management could then obtain reports and dashboards based on various factors, such as user status, travel time, days of the week, and much more. We could even make comparisons with previous months or with the same month of the previous year. The applications are virtually limitless with data that resides in Splunk: the only limit is our imagination!
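To make the enrichment idea concrete, here is a small sketch that joins the trip data with a hypothetical rate table maintained elsewhere; the account types and prices are invented for illustration and are not Bixi’s actual pricing.

```python
# A small sketch of the enrichment idea: join the open trip data with a
# hypothetical rate table. Account types and prices are assumed values.
import math
import pandas as pd

trips = pd.read_csv("bixi_trips.csv")  # same assumed file as above

rates = pd.DataFrame({
    "Account type": ["Member", "Casual"],
    "Rate per 30 min": [0.00, 3.50],
})

enriched = trips.merge(rates, on="Account type", how="left")

# Rough cost estimate: number of started 30-minute blocks times the rate.
blocks = (enriched["Total duration (ms)"] / 1_800_000).apply(math.ceil)
enriched["Estimated cost"] = blocks * enriched["Rate per 30 min"]
```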

These are of course fictitious examples made with available open data, but they could become real with your own systems and data.

Collecting information from a website, for instance, can provide visibility for every group in a company: operations receive system-overload alerts, marketers learn where connections originate so they can target their campaigns, and management gets a view of the user experience as well as performance metrics that confirm SLAs.

Whether your needs are in security, operations, marketing, analytics or anything else, Splunk can address them. In addition to the 1,200 applications available in its portal, you can create your own tables, reports and alerts, and you can use its Pivot tool to let people easily explore the data and build their own dashboards.

The platform is easy to use and does not require special expertise: you only need to get your data in.

Do not hesitate to contact ESI for a presentation or a demo; it will be my pleasure to show you how to “Splunk”.

Guillaume Paré
Senior Consultant, Architecture & Technologies – ESI Technologies

Account of the NetApp Insight 2016 Conference

The 2016 Edition of NetApp Insight took place in Las Vegas from September 26 to 29.
Again this year, NetApp presented its “Data Fabric” vision, unveiled two years ago. According to NetApp, the growth in the capacity, velocity and variety of data can no longer be handled by the usual tools. As stated by NetApp CEO George Kurian, “data is the currency of the digital economy”, and NetApp wants to be seen as a bank helping organizations manage, move and grow their data globally. The current challenge of the digital economy is thus data management, and NetApp clearly intends to be a leader in this field. This vision becomes more concrete every year across the products and platforms added to the portfolio.

New hardware platforms

NetApp took advantage of the conference to officially introduce its new hardware platforms, which integrate 32Gb FC SAN ports, 40GbE network ports, an embedded NVMe SSD read cache and 12Gb SAS-3 ports for back-end storage. Additionally, the FAS9000 and AFF A700 use a new, fully modular chassis (including the controller module) to facilitate future hardware upgrades.

Note that the SolidFire platforms received attention from both NetApp and the public: the former to explain their position in the portfolio, the latter to find out more about this extremely agile and innovative technology. https://www.youtube.com/watch?v=jiL30L5h2ik

New software solutions

  • SnapMirror for AltaVault, available soon through the SnapCenter platform (which replaces SnapDrive/SnapManager): this solution allows backup of NetApp volume data (including application databases) directly to the cloud (AWS, Azure & StorageGrid). https://www.youtube.com/watch?v=Ga8cxErnjhs
  • SnapMirror for SolidFire is currently under development. No further details were provided.

The features presented reinforce the objective of offering a unified data management layer through the NetApp portfolio.

The last two solutions are more surprising since they do not require any NetApp equipment to be used. These are available on the AWS application store (SaaS).

In conclusion, we feel that NetApp is taking steps to become a major player in the “software-defined” field, while upgrading its hardware platforms to meet the current challenges of the storage industry.

Olivier Navatte, Senior Consultant – Storage Architecture

The IT Catch-22

OK, so everyone’s talking about it. Our industry is undergoing major changes; it’s out there. It started with a first reference architecture built on mainframes and minicomputers, designed to serve thousands of applications used by millions of users worldwide. It then evolved, with the advent of the Internet, into the “client-server” architecture, designed to run hundreds of thousands of applications used by hundreds of millions of users. And where are we now? It appears we are witnessing the birth of a third generation of architecture, described by IDC as “the next generation compute platform that is accessed from mobile devices, utilizes Big Data, and is cloud based”. It is referred to as “the third platform”, and it is destined to deliver millions of applications to billions of users.

Virtualization seems to have been the spark that ignited this revolution. The underlying logic of this major shift is that virtualization abstracts the hardware away, pooling performance and assets so they can be shared by different applications for different uses, according to the needs of different business units within an organization. The promise is that companies can do more with less. Therefore, IT budgets can be reduced!
These changes are huge. In this third platform, IT is built, run, consumed and, finally, governed differently. Everything changes from the ground up. It would seem obvious that one would need to invest in carefully planning the transition from the second to the third platform. What pace can we go at? What can be moved out to public clouds? What investments are required in our own infrastructure? How will it impact our IT staff? What training and knowledge will they require? What about security and risks?
The catch is the following: the third platform allows IT to do much more with less, so IT budgets are reduced or, at best, flattened. Yet moving to the third platform requires investments. Get it? Every week we help CIOs and IT managers raise this issue within their organization, so that they can obtain the investments they need to move to the third platform and reap its benefits.

Review of NetApp Insight 2015

The 2015 edition of NetApp Insight was held in Las Vegas from October 12 to 15. The event comprises general sessions, more than 400 breakout sessions, the Insight Central zone with partner booths, hands-on labs and a “meet the engineer” section, and it offers the possibility of completing certification exams on site.
The general sessions were presented by various NetApp personalities (the CEO, the CIO, technical directors, engineers and cofounder Dave Hitz), as well as partners and guests (including Cisco, Fujitsu, VMware and 3D Robotics).
Last year, the term “Data Fabric” was unveiled to describe NetApp’s vision of cloud computing. This year, most of the presentations were intended to make that vision more concrete through examples, demonstrations and real-world context.
For NetApp, Data Fabric is synonymous with data mobility, wherever the data resides, whether in traditional datacentres or in the cloud. The key to this mobility lies in SnapMirror, which should soon be supported across the various NetApp platforms (FAS, Cloud ONTAP, NetApp Private Storage, AltaVault, etc.) and orchestrated by global tools such as OnCommand Cloud Manager and adaptations of existing tools.
Still on the topic of the cloud, a Cisco speaker presented current issues and future trends: with the exponential growth of devices (tablets, smartphones and connected objects) and the increasingly frequent move of data (and even of compute) to the edge, accessibility, availability, security and data mobility become ever more important issues. In short, the cloud trend already belongs to the past; we now must talk about the edge!
NetApp also put forward its All-Flash FAS enterprise solutions which, thanks to new optimizations, can now seriously compete in high-performance, very-low-latency environments.
The number of breakout sessions was impressive: in four days, one can only expect to attend about 20 of the 400 sessions available.
Insight has been open to clients since last year, but some sessions remain reserved for NetApp partners and employees. Some information is confidential, but without giving details and without being exhaustive, we can mention that a new generation of controllers and disk shelves is expected soon, that SnapCenter will eventually replace SnapManager (in cDOT only) and that new, much more direct transition options from 7-Mode to cDOT will be made available.
Other sessions also helped to deepen knowledge or to discover some very interesting tools and features.
In conclusion, NetApp Insight is a must, as much to soak up the NetApp line of solutions as to find out NetApp’s vision and future direction.

Olivier Navatte, ESI Storage Specialist

What about Big Data & Analytics?

After the “cloud” hype comes the “big data & analytics” hype, and it’s not just hype. Big data & analytics enables companies to make better business decisions faster than ever before. It helps identify opportunities for new products and services and bring innovative solutions to market faster. It assists IT and the helpdesk in reducing mean time to repair and troubleshoot, while providing reliable metrics for better IT spending planning. It guides companies in improving their security posture by giving more visibility on the corporate network and identifying suspicious activities that go undetected by traditional signature-based technologies. It serves to meet compliance requirements. In short, it makes companies more competitive! One simply has to go on YouTube to see the amazing things companies are doing with Splunk, for example.

I remember when I started working in IT sales in the mid-90s: a “fast” home Internet connection was 56k and the Internet was rapidly gaining in popularity. A small company owner called me and asked, “What are the competitive advantages of having a website?”, to which I replied, “It’s no longer a competitive advantage, it’s a competitive necessity.” To prove my point, I asked him to search for his competitors on the Internet: he saw that all of his competitors had websites!
The same can now be said of big data & analytics. With all the benefits it brings, it is becoming a business necessity. But before you rush into big data & analytics, know the following important facts:

  1. According to Gartner, 69% of corporate data has no business value whatsoever
  2. Still according to Gartner, only 1.5% of corporate data is high-value data

This means that you will have to sort through a whole lot of data to find the valuable information you need to grow your business, reduce costs, outpace the competition, find new revenue sources, and so on. It is estimated that every dollar invested in a big data & analytics solution brings four to six dollars in infrastructure investments (new storage to hold all that priceless data, CPU to analyze it, security to protect it, etc.). So before you plan a $50,000 investment in a big data & analytics solution and find out it comes with a $200,000 to $300,000 investment in infrastructure, talk to subject matter experts. They can help design strategies to home in on the 1.5% of high-value data, reducing the required investment while maximizing the results.

Charles Tremblay, ESI Account Manager

The greatest IT confusion ever?

Does it even beat Y2K? It’s been a year now since I rejoined the IT integration industry. When I left it in 2003 to focus on PKI technologies, those were still the good old days of client-server IT infrastructure, right after Y2K and the dot-com bubble burst. For a year now I have been trying to understand clients’ challenges to see how I can help. For a year now I have observed my clients trying to understand the mutations that appear to be changing the IT industry and how they affect them, not only on a business level but on professional AND personal levels as well. I find them fearful and closed. Witnessing this, I told a colleague of mine, “It seems our clients are capable of telling us what they don’t want, but rarely have a clear vision of what they’re aiming for”!
Big data, the Internet of Things, stuff called cloud, anything anywhere anytime on any device, the software-defined company, and so on: all these new terms are being bombarded at our clients and are supposed to showcase the many new trends in the industry. I recently attended a seminar where the audience was divided into three categories: traditional IT folks, who resist these changes and new trends because they reshape traditional IT infrastructure and may even jeopardize their job definition or security; line-of-business managers, who embrace change and are shopping for apps that get the job done; and senior management, who talk the language of numbers (growth percentage, market share and other measurable KPIs) and with whom you need to be able to prove ROI (not TCO, which is the IT folks’ concern).
And there we have it: widespread confusion and fear. Y2K all over again? People forget: BI has been around for a while, and so have the Internet, thin-client environments, databases and the rest. It’s just happening on a different scale, and the challenge remains to bridge the gap between the corporate and business objectives defined by senior management, the right tools and processes chosen by line-of-business owners to get the job done, and IT, which still has an important role in solution selection, integration and support, be it on site or off site.
My challenge over the last year has been to overcome those fears, so that my clients can have open discussions about their business objectives, avoid the buzzwords and refocus on “Where do you want to be in three to five years as a company, what IT tools will be required to help you get there, and which ones can I help you with?”

Charles Tremblay, ESI account manager

SDN – The mystery uncovered – part 1

As I continue to attend conferences and sessions with many of our core partners, I continue my quest for data centre innovation. Most recently I visited the sunny Bay Area to meet with Brocade Communications, Hitachi Data Systems and VMware, specifically the NSX division. This is part one of a three-part overview of the technology offerings.

In my role within the “Office of the CTO”, I am always exploring new trends and innovations in designs and solutions for our clients, in particular how “software defined everything” becomes part of our clients’ data centre evolution. For many years we have been speaking about the cloud and its adoption in mainstream IT. New technologies appear, and some simply take on a new face. Today, I would like to explore the concept of the Software Defined Data Centre (SDDC), or in this case specifically Software Defined Networks (SDN), with an overview of some of the most interesting solutions on the market.

Like many of you, I have watched virtualization of the compute platform become more and more common. It seems like only yesterday that my manager at the time asked me to assist with SAN connectivity for version 1 of Microsoft’s virtual machine management! Today we are experiencing the continued evolution of virtualization. Server and storage virtualization are commonplace within the data centre, and we are seeing Canadian companies 100% virtualized in the compute space. These same companies are looking for the next step in consolidation, agility and cost containment. That next step is network virtualization. But what is SDN? Software defined networking (SDN) is a model for network control based on the idea that network traffic flow can be made programmable at scale, thus enabling new dynamic models for traffic management.

(SDN definition and diagram source: https://www.opennetworking.org/sdn-resources/sdn-definition)
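Before looking at specific vendors, that core idea deserves a tiny illustration. The following is a deliberately simplified, purely conceptual Python sketch (it uses no real controller or switch API): forwarding behaviour becomes data that a central piece of software can program across many virtual switches at once.

```python
# Conceptual sketch only, not a real SDN API: the point is that forwarding
# behaviour is expressed as data that a central "controller" can program.
from dataclasses import dataclass

@dataclass
class FlowRule:
    match_dst_subnet: str   # e.g. "10.1.2.0/24"
    action: str             # e.g. "forward:port2" or "drop"
    priority: int = 100

class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # data-plane state, programmed from outside

    def install(self, rule: FlowRule):
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r.priority)

# "Controller" logic: decide the rules once, then program every switch.
def isolate_tenant(switches, tenant_subnet):
    for sw in switches:
        sw.install(FlowRule(match_dst_subnet=tenant_subnet,
                            action="drop", priority=500))

edge = [VirtualSwitch("vswitch-a"), VirtualSwitch("vswitch-b")]
isolate_tenant(edge, "10.9.0.0/16")   # one call reshapes traffic everywhere
```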

VMware NSX – a product acquired by VMware to round out its virtual network strategy. The product is sound and tightly couples VMware with the networking and security of East/West traffic between VMs. The NSX data and management planes provide an excellent framework that allows the hypervisor to lock down VM traffic, and virtual properties such as a vRouter, vVPN and vLoad Balancer all work within the VM construct.
Brocade Vyatta – a technology acquired by Brocade two years ago. Today the vRouter and the Vyatta OpenDaylight controller lead the pack, and Brocade offers v5400 and v5600 editions of the Vyatta vRouter. The Vyatta implementation provides a vRouter, vFirewall and vVPN, and a vADX load balancer has been developed as well.
Cisco ACI or Nexus 9000 – Cisco announced in 2014 the spin-in of the Insieme product to provide an ACI (Application Centric Infrastructure) platform. The first release was a 40Gb Ethernet switch with no real ACI functionality. Today the product offers an enhanced port/policy control strategy, using the policy-based engines of the Cloupia spin-in technology (UCS Director) to control the various functions within an ACI architecture.

The real mystery of software defined networking starts with a basic understanding of the business need for a “programmable network” based on x86 architecture within the virtualization layer. In the next installment I will break down VMware NSX and what ESI is exploring with this leading-edge SDN contributor.

Nicholas Laine, Director Solutions Architect – Office of the CTO

Don’t fall for marketing blurb

While watching a pickup truck commercial on TV recently, I couldn’t help but ask myself, “How can all pickups have the best fuel efficiency in their category?” Funnily enough, I hear the same thing in our industry with “the most IOPS or terabytes per dollar”. It seems everyone’s the best at it. In one case, a client got the most IOPS per dollar he could, and he ended up having to change his whole data centre infrastructure because the IOPS he got were not of the right type!

In the storage industry, IOPS seems to be the equivalent buzzword to horsepower (HP) in the automotive industry. So you’re going to try to get the most horsepower per dollar when you purchase a vehicle. You can get 350HP out of a small sports car or a pickup truck; just don’t try to race with the pickup truck or tow something with the sports car! There’s a reason why you won’t see a Ferrari with a hitch: though they both have 350HP, one has torque and the other doesn’t. One is built for performance and speed, the other for heavy workloads. The same goes for data centres. Manufacturers will give you the IOPS you asked for, and they can usually prove it! But do you know what type of IOPS you’re looking for (sequential, random, read or write)? Why do you require those IOPS: performance or heavy workloads? If you’re not sure, it is an integrator’s core business and value to help you make sense of all the marketing blurb thrown at you, to help you choose wisely and protect your investment.
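If you want to see for yourself that “IOPS” alone is ambiguous, here is a rough Python sketch (Linux/macOS) that times 4 KiB sequential reads against 4 KiB random reads on the same file. The file name is an assumption, the file should be larger than about 40 MB, and page caching will skew the numbers, so treat it as a demonstration rather than a benchmark.

```python
# Rough illustration: the same "IOPS" metric differs by access pattern.
# Assumes an existing file "testfile.bin" of at least ~40 MB (Linux/macOS).
import os
import random
import time

PATH, BLOCK, COUNT = "testfile.bin", 4096, 10_000
size = os.path.getsize(PATH)

def measure(offsets):
    fd = os.open(PATH, os.O_RDONLY)
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)          # one 4 KiB read per offset
    os.close(fd)
    return len(offsets) / (time.perf_counter() - start)   # reads per second

sequential = [i * BLOCK for i in range(COUNT)]
random_io = [random.randrange(size // BLOCK - 1) * BLOCK for _ in range(COUNT)]

print(f"sequential: {measure(sequential):,.0f} IOPS")
print(f"random:     {measure(random_io):,.0f} IOPS")
```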

Charles Tremblay, ESI Account Manager

Citrix Summit 2015 – news to come

Back from the 2015 edition of Citrix Summit in Las Vegas, where the latest innovations and a lot of NDA content were presented to participants. Citrix remains a dominant player in the world of application and desktop virtualization, while adding many new features in the various areas surrounding the practice.

From innovation to a better experience: Citrix did not officially announce its new Receiver X1 (X1 for “Experience First”), but mentioned it often during the different sessions. X1 made quite an impression when it was presented. Built on HTML5, the X1 client is centrally managed and really easy to update, and its web version enables customization of the user experience through the browser. The experience will be the same regardless of the platform.

Citrix announced a broad selection of tech previews and updates for this year, among which HDX Framehawk, the Optimization Pack for Lync, DesktopPlayer for Windows and a XenDesktop Virtual Desktop Agent for Linux.

The XenMobile 10 release embodies the “Experience First” concept, as demonstrated in the following blog: http://blogs.citrix.com/2015/01/13/xenmobile-10-what-experience-means-for-it/

Users now have an average of 2.9 devices each, which explains the importance of keeping them under control. For more information on the latest XenMobile version: http://www.citrix.com/news/announcements/jan-2015/citrix-delivers-superior-user-experience-and-security-in-new-xen.html

Citrix gave each Summit participant a prototype X1 mouse; it lets Receiver X1 users work with their iPhone or iPad in their Citrix XenApp or XenDesktop environment.

Alongside the user experience, Citrix offers increased flexibility and security with its innovative products.

To conclude, the conference gave participants a technological preview of the products and innovations proposed by Citrix and its partners. The proliferation of mobile devices is becoming a driver of change that organizations must address, and Citrix has found an approach tailored to this new reality.

Mobility transforms enterprises and people.

Guillaume Paré, Senior Consultant –  architecture & technologies