Agile IT: a better way of doing business

One of the most powerful new ideas to emerge from the cloud computing revolution is IT agility. Agile IT organizations are able to easily adapt to changing business needs by delivering applications and infrastructure quickly to those who need them. Does your organization have what it takes to be truly agile?

There are many components of agile IT infrastructure, but three that we think are particularly important are containers, microservices and automation. These form the foundation of the new breed of cloud-native applications, and they can be used by any organization to revolutionize the speed and agility of application delivery to support the business.

Containers: Fast and Flexible

Containers are a sort of lightweight virtual machine, but they differ from VMs in fundamental ways. Containers run as groups of namespaced processes within a shared operating system kernel, with each container getting its own isolated view of resources such as processor, memory and the other supporting elements an application needs. Container images are typically stored in registries for reuse, and containers can be spun up and shut down in seconds. They’re also portable, meaning that an application running in a container can be moved to any other environment that supports that type of container.
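
To make that concrete, here’s a minimal sketch of starting and stopping a container programmatically, assuming Docker is installed locally along with its Python SDK; the image name and port mapping are arbitrary examples rather than recommendations.

    # A minimal sketch, assuming Docker is installed locally along with its Python
    # SDK (pip install docker). The image and port mapping are arbitrary examples.
    import docker

    client = docker.from_env()                     # talk to the local Docker daemon

    # Pull a reusable image from a registry and start it as an isolated set of
    # namespaced processes, mapping container port 80 to host port 8080.
    container = client.containers.run("nginx:alpine", detach=True,
                                      ports={"80/tcp": 8080})
    print(container.short_id, container.status)

    # Containers shut down as quickly as they start.
    container.stop()
    container.remove()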

Containers have only been on the IT radar screen for about three years, but they are being adopted with astonishing speed. One recent study found that 40% of organizations are already using containers in production and just 13% have no plans to adopt them during the coming year. Containers are especially popular with developers because coders can configure and launch their own workspaces without incurring the delay and overhead of involving the IT organization.

Microservices: A Better Approach to Applications

Use of containers frequently goes hand-in-hand with the adoption of microservices architectures. Applications built from microservices are based upon a network of independently deployable, modular services that use a lightweight communications mechanism such as a messaging protocol. Think of it as an object assembled from Lego blocks. Individual blocks aren’t very useful by themselves, but when combined, they can create elaborate structures.
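
As a rough illustration of one such block, here’s a minimal, self-contained service built with nothing but the Python standard library. The service, port and data are hypothetical; the point is simply that each block does one narrow job and communicates over a lightweight protocol such as HTTP/JSON.

    # A minimal sketch of a single "Lego block": one small, independently deployable
    # service that exposes one function over lightweight HTTP/JSON. The service,
    # port and data are hypothetical.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PriceService(BaseHTTPRequestHandler):
        def do_GET(self):
            # One narrow responsibility: quote a price. Catalog, orders and shipping
            # would each be separate services, deployed and scaled independently.
            body = json.dumps({"sku": "demo-123", "price": 19.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), PriceService).serve_forever()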

Service-oriented architecture is nothing new, but the technology has finally matured to the point that it’s practical to rethink applications in that form. The microservices approach is more flexible and efficient than the vertically integrated applications that have dominated IT for decades. By assembling applications from libraries of services, duplication is minimized and software can move into production much more quickly. There’s less testing overhead and more efficient execution, since developers can focus on improving existing microservices rather than reinventing the wheel with each project.

Containers are an ideal platform for microservices. They can be launched quickly and custom-configured to use only the resources they need. A single microservice may be used in many ways by many different applications. Orchestration software such as Kubernetes keeps things running smoothly, handles exceptions and constantly balances resources across a cluster.
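
To give a flavour of what orchestration looks like in practice, the hedged sketch below asks a Kubernetes cluster for three replicas of a deployment through the official Python client; it assumes a reachable cluster and local credentials, and the deployment name is hypothetical.

    # A hedged sketch, assuming a reachable cluster, a local kubeconfig and the
    # official "kubernetes" Python client. The deployment name "web" is hypothetical.
    from kubernetes import client, config

    config.load_kube_config()                      # use local kubeconfig credentials
    apps = client.AppsV1Api()

    # Declare the desired state: three replicas. The orchestrator starts or stops
    # containers and rebalances them across the cluster to match it.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 3}},
    )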

Automation: Departure from Routine

Automation is essential to keeping this complex environment running smoothly. Popular open-source tools such as Puppet and Ansible make it possible for many tasks that were once performed by systems administrators – such as defining security policies, managing certificates, balancing processing loads and assigning network addresses – to be automated via scripts. Automation tools were developed by cloud-native companies to make it possible for them to run large-scale IT operations without legions of administrators, but the tools are useful in any context.
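
As a simple stand-in for the kind of routine chore these tools script away, here’s a short Python check of certificate expiry dates, one of the tasks listed above. The hostnames are placeholders; in a real environment the equivalent logic would typically live in an Ansible playbook or a scheduled job.

    # A small Python stand-in for a routine administrative task: checking when each
    # server's TLS certificate expires. Hostnames are placeholders.
    import socket
    import ssl
    import time

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires - time.time()) // 86400)

    for host in ("example.com", "example.org"):    # placeholder inventory
        print(host, days_until_expiry(host), "days until certificate expiry")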

Automation not only saves money but improves job satisfaction. Manual, routine tasks can be assigned to scripts so that administrators can tend to more important and challenging work. And in a time of severe IT labor shortages, who doesn’t want happier employees?

Agile IT makes organizations nimbler, more responsive and faster moving. When planned and executed with the help of an experienced integration partner, it saves money as well.

 

When choosing a cloud provider, it pays to think small!

When you buy wine, do you go to the big discount store or the local specialty retailer? Chances are you do both, depending on the situation. The big-box store has selection and low prices, but the people who run the wine store on the corner can delight you with recommendations you couldn’t find anywhere else.

The same dynamics apply to choosing a cloud service provider. When you think of cloud vendors, there are probably four or five company names that immediately come to mind. But if you Google rankings of cloud vendors according to customer satisfaction or relevance to small businesses, you’ll find quite a different list. There are hundreds of small, regional and specialty infrastructure-as-a-service providers out there. In many cases, they offer value that the giants can’t match. Here are five reasons to consider them.

Customer service – This is probably the number one reason to go with a smaller hosting provider. If you have a problem, you can usually get a person on the phone. Over time, the service provider gets to know you and can offer advice or exclusive discounts. The big cloud companies just can’t match this personalized service.

Specialty knowledge – You can find apps for just about anything in the marketplace sections of the big cloud companies, but after that you’re pretty much on your own. If struggling with configuration files and troubleshooting Apache error messages isn’t your cup of tea, then look for a service provider that specializes in the task you’re trying to accomplish. Not only do you usually get personal service, but the people are experts in the solutions they support. They’ll get you answers fast.

A smile and a handshake – There are several good reasons to choose a vendor in your geographic area. For one thing, government-mandated data protection laws may require it. Local providers also offer a personal touch that call centers can’t match. You can visit their facilities, meet with them to plan for your service needs and get recommendations for local developers or contractors you might need. Many small vendors also offer colocation options and on-site backup and disaster recovery. In a technology world where everything sometimes seems to have gone virtual, it’s nice to put a face to a name.

Low cost – This sounds counterintuitive, but the reality is that many specialty providers are cheaper than the cloud giants. That’s particularly true if they specialize in an application like WordPress or Drupal, or in a service like backup. These companies can leverage economies of scale to offer competitive prices, and you get all the other benefits of their specialized knowledge on top. Shop around; you might be surprised.

Performance – If the primary users of the cloud service are people in your company and/or in your geographic region, you will probably realize better performance with a local vendor. That’s simply the laws of physics. The farther electrons have to travel, the longer it takes them to reach their destination. This is particularly important if you plan to use services like cloud storage or if you need to transfer large files, an error-prone process that only gets worse with distance.

Public, private or hybrid cloud? Make the smart choice!

You know you want to move to the cloud, but you don’t know which of the three major options – public, private and hybrid – is right for you. We’re here to help with this quick overview of the options, as well as the pros and cons of each.

Public Cloud

Think of this as a server in the sky. Public cloud, also known as infrastructure-as-a-service, provides the equivalent of a data center in a highly scalable, virtualized environment accessed over the internet. Customers can provision virtual servers – called “instances” – and pay only for the capacity they use. Many public cloud features are automated for self-service. Users can deploy their own servers when they wish and without IT’s involvement. Accounting and chargeback are automated. In fact, organizations often find the public cloud delivers the most significant savings not in equipment costs, but in administrative overhead.
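
To show how little ceremony self-service provisioning involves, here’s a hedged sketch using AWS and the boto3 library; other public clouds expose equivalent APIs, and the image ID and instance type below are placeholders.

    # Illustrative sketch of self-service provisioning, assuming an AWS account and
    # the boto3 library; other public clouds expose equivalent APIs.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Provision a virtual server ("instance") on demand; you pay only while it runs.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",           # placeholder machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("launched", instances[0].id)

    # Terminate it when it is no longer needed so the meter stops.
    instances[0].terminate()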

The best applications to deploy in a public cloud are those that are already virtualized or that run on unmodified Linux or Windows operating systems. Commercial, off-the-shelf applications are a good example. Public cloud is also a good platform to use in evaluating and testing new applications, since many public cloud providers offer a wide variety of applications on a pay-as-you-go basis. Public cloud is also well suited to developing so-called “cloud native” applications, such as mobile apps.

Public cloud isn’t ideal for every use. Homegrown applications on legacy platforms, or those with significant interdependencies, may not migrate smoothly. Organizations that aren’t careful to manage instances can end up paying for unused capacity. There are also hidden costs to be aware of, such as surcharges for data uploads and downloads or upcharges for guaranteed levels of service. Regulatory issues may also rule out public cloud for some applications entirely.

Private Cloud

This is essentially a public cloud for use only by a single customer. Private clouds may be constructed on premises using virtualization and automation software, or licensed from service providers who deliver cloud services either from their own data centers or even on the customer’s own premises.

Private cloud is popular with companies that need tight control over data, whether for security, privacy or regulatory purposes. In regulated industries that specify how customer data must be stored and managed, it is sometimes the only cloud option. It’s also attractive for companies that need guaranteed service levels without the unpredictability of the public internet. Finally, private cloud provides the highest level of control for organizations that want deep visibility into who is using resources and how.

Private cloud is typically more expensive than public cloud because service providers must allocate capacity exclusively to the dedicated environment. However, that isn’t always the case. For companies with large capital investments in existing infrastructure, an on-premises private cloud is a good way to add flexibility, automation and self-provisioning while preserving the value of their existing equipment. For predictable workloads, it can be the cheapest of the three models.

Hybrid Cloud

This is the most popular option for large corporations, and is expected to dominate the cloud landscape for the foreseeable future. Hybrid cloud combines elements of both public and private cloud in a way that enables organizations to shift workloads flexibly while keeping tight control over their most important assets. Companies typically move functions that the public cloud handles more efficiently, while keeping others in-house. The public cloud may act as an extension of an on-premises data center or be dedicated to specific uses, such as application development. For example, a mobile app developed in the public cloud may draw data from data stores in a private cloud.

Many of the benefits of hybrid cloud are the same as those of private cloud: control, security, privacy and guaranteed service levels. Organizations can keep their most sensitive data on premises while shifting less sensitive data to the public cloud at lower cost. They can also reduce costs by using public cloud to handle occasional spikes in activity that overtax their own infrastructure, a tactic known as “cloud bursting.” Hybrid cloud is also a transition stage that companies use as they move from on-premises to public cloud infrastructure.

There are many more dimensions to the public/private/hybrid cloud decision. A good managed service provider can help you understand the options and estimate the benefits and trade-offs.

It’s time to rethink cybersecurity.

For many years, organizations have focused their security efforts on endpoint protection. Firewalls, antivirus software, intrusion detection and anti-spyware tools are all effective to a point, but they are failing to stop the vast majority of threats.

A recent ServiceNow survey of 300 chief information security officers found that 81% are highly concerned that breaches are going unaddressed and 78% are worried about their ability to detect breaches in the first place. IBM’s 2017 X-Force Threat Intelligence Index reported a 566% increase in the number of compromised records in 2016 compared to the previous year. FireEye reported that the average time it takes an organization to detect an intrusion is over 200 days.

Endpoint security measures will only become less effective as the number of endpoints proliferates. Smart phones introduced a whole new class of threats, and the internet of things (IoT) will add billions of endpoint devices to networks over the next few years, many of which have weak or no security.

That’s why cybersecurity, in the words of Cisco CEO Chuck Robbins, “needs to start in the network.” The approach that Cisco is championing recognizes the reality that breaches today are inevitable but that they needn’t be debilitating. The increasing popularity of security operations centers shows that IT organizations are shifting their attention to creating an integrated view of all the activity on their networks – including applications, databases, servers and endpoints – and adopting tools that can identify patterns that indicate a breach. For example, multiple access attempts from a certain IP address or large outbound file transfers may indicate an intrusion, and that activity can be stopped before much damage is done.
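
A deliberately simplified sketch of that pattern-matching idea: count failed login attempts per source address in an authentication log and flag the noisy ones. The log path, format and threshold are assumptions for illustration, not any vendor’s actual detection logic.

    # Count failed login attempts per source IP in an auth log and flag noisy
    # addresses. Log path, format and threshold are illustrative assumptions.
    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
    THRESHOLD = 20

    attempts = Counter()
    with open("/var/log/auth.log") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                attempts[match.group(1)] += 1

    for ip, count in attempts.most_common():
        if count >= THRESHOLD:
            print(f"possible brute-force source: {ip} ({count} failures)")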

Fortunately, technology is evolving to support the network-centric approach. Big data platforms like Hadoop have made it practical and affordable for organizations to store large amounts of data for analysis. Streaming platforms like Apache Spark and Kafka can capture and analyze data in near real-time. Machine learning programs, when applied to large data stores like Hadoop, can continuously sort through network and server logs to find anomalies, becoming “smarter” as they go.
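
As a toy example of the machine-learning side, the sketch below trains scikit-learn’s IsolationForest on a handful of made-up per-host traffic features and flags an outlier; real deployments would derive far richer features from the pipelines just described.

    # A toy sketch of the machine-learning idea using scikit-learn's IsolationForest
    # on made-up per-host features (requests per hour, megabytes uploaded).
    from sklearn.ensemble import IsolationForest

    normal_traffic = [[120, 5], [95, 4], [130, 6], [110, 5], [105, 4], [125, 7]]
    new_traffic = [[115, 5], [3000, 800]]          # the second host looks suspicious

    model = IsolationForest(contamination=0.1, random_state=42).fit(normal_traffic)
    print(model.predict(new_traffic))              # 1 = looks normal, -1 = anomaly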

And the cloud presents new deployment options. That’s why security is rapidly migrating from dedicated hardware to cloud-based solutions using a software-as-a-service model. Grand View Research estimates that the managed security services market was worth more than $17.5 billion in 2015 and will grow to more than $40 billion by 2021. As organizations increasingly virtualize their networks, these services will become integrated into basic network services. That means no more firmware upgrades, no more site visits to fix balky firewalls and no more anti-malware signature updates.

It’s too early to say that the tide has turned favorably in the fight with cyber-criminals, but the signs are at least promising. It’s heartening to see Cisco making security such an important centerpiece of its strategy. Two recent acquisitions – Jasper and Lancope – give the company a prominent presence in cloud-based IoT security and deep learning capabilities for network and threat analysis. The company has said that security will be integrated into every new product it produces going forward. Perhaps that’s why Robbins has called his company “the only $2 billion security business that is growing at double digits.”

Security solutions are not enough to fight ransomware. Make sure you have a good recovery strategy.

If the notion of ransomware was unknown to you until now, the WannaCryptor attack of May 12th, which had global repercussions in all spheres of activity, has certainly made you aware of the consequences of such attacks, which know no borders.

Ransomware attacks cost businesses millions of dollars a year and are becoming increasingly sophisticated and difficult to avoid. What sets this type of attack apart is how quickly it spreads through shared files, sometimes in a matter of hours, as the May 12th attack demonstrated. Ransomware generally infiltrates through the weakest point in the network, typically a user’s email account or social networking sites.

The ransomware locks the computer or encrypts the files, demanding payment of a “ransom” to restore users’ access to their data. But paying the ransom does not guarantee recovery of the data*, not to mention that organizations that give in to the hackers’ blackmail become targets of choice for the next attack…

If you are lucky, your business was not targeted by the virus and you feel relieved to have been spared. In this case, remember the lesson: you were lucky this time, but rest assured that this type of attack will happen again, and that your organization may well be the victim next time.

Forward-thinking organizations have invested large sums to secure their IT environments and the data that passes through them, data that is often critical and whose destruction can jeopardize business continuity. Security solutions are part of the equation when it comes to protecting your assets, but they are only one part of the strategy to counter these threats.

A complete defence against such attacks must include a recovery plan, with accessible and complete backup copies, so that you can restore your environment to the state it was in before the attack.

Implementing a recovery plan gives you the assurance that you can recover quickly and minimize downtime, which is often the weakest link in managing cyber attacks. The faster you get back up to speed, the less your customers and suppliers will have to turn to alternatives that could ultimately be very costly to your business and reputation, even putting them at risk.

Companies that have industry-specific compliance standards are generally more aware and better equipped to quickly restore their infrastructure in the event of an attack. To find out if your company has an adequate recovery strategy, ask yourself the following questions:

  • Is your backup off site (i.e. away from your primary site)?
  • Can you verify that the backups are happening?
  • How quickly can you restore data that’s taken hostage?
  • Is your original data backed up in an unalterable (immutable) form, ensuring a complete recovery of your data in the event of a ransomware attack?

By answering these questions, you will take the first step to address the gaps in your recovery strategy in the event of a computer attack. Be prepared to face upcoming threats to protect your assets!
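
As one small illustration of the second question above, here’s a minimal sketch that checks whether each backup folder contains a file newer than 24 hours; the paths and age limit are placeholders for your own policy.

    # A minimal sketch: does each backup folder contain a file newer than 24 hours?
    # Paths and the age limit are placeholders for your own backup policy.
    import os
    import time

    BACKUP_DIRS = ["/mnt/offsite/backups/db", "/mnt/offsite/backups/files"]
    MAX_AGE_HOURS = 24

    now = time.time()
    for directory in BACKUP_DIRS:
        newest = max((os.path.getmtime(os.path.join(directory, name))
                      for name in os.listdir(directory)), default=0)
        age_hours = (now - newest) / 3600
        status = "OK" if age_hours <= MAX_AGE_HOURS else "ALERT: stale or missing backup"
        print(f"{directory}: last backup {age_hours:.1f} h ago -> {status}")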

* A recent survey found that of those victims of ransomware who paid the ransom, only 71% had their files restored.

 

Network challenges? Optimize your environment!

Business networks are often like children: they grow unnoticed, sometimes in a disorganized and often unexpected way. A company can quickly end up with a sprawl of unoptimized equipment to manage.

And it keeps on growing: management wants to install a videoconferencing system, back up a subsidiary’s data and keep the copies at head office…

Can your network support these new features? The answer is probably not.

From there, problems multiply. Over time, users experience slowdowns, phone calls become choppy, and intermittent outages may even occur. How do you solve these problems? Where do you start looking?

With a multitude of disparate equipment, and often no centralized logging system, it is difficult to investigate and pinpoint the source of a problem.

Network analysis: why and how

For ESI, each client is different. The most important part of our work is, first of all, to determine the client’s situation and what led them to need a network analysis. A new feature to add? Intermittent outages? A desire to plan future investments in the network?

Once this objective is established, we review the most recent network diagrams, if any exist. We examine the equipment, the configurations, the redundancy, the segmentation… We evaluate all of this to assess the overall health of the network.

We can thus identify:

  • End-of-life equipment
  • Equipment close to failure
  • Configuration problems and optimization opportunities
  • Network bottlenecks

But most importantly, depending on your needs, we help you identify priorities for investment in the network in the short, medium and long term. At the end of the analysis, our clients obtain:

  • An accurate view of their network
  • An action plan on existing equipment
  • An investment plan.

Why ESI?

ESI Technologies has been helping companies plan and evolve their infrastructure for more than 22 years!
Contact us now to find out more about what ESI can do for you!

Take a unified approach to Wi-Fi security!

For many organizations, Wi-Fi access is no longer a luxury. Employees need flexible access as they roam about the office, and customers and partners expect to connect whenever they are on site. But providing unsecured access opens a host of potential security problems if access points aren’t rigorously monitored, patched and maintained. As the number of access points grows, it’s easy to let this important maintenance task slip.

Security teams are so busy fighting fires that preventive maintenance is often overlooked. Kaspersky Lab recently analyzed data from nearly 32 million Wi-Fi hotspots around the world and reported that nearly 25% had no encryption at all. That means passwords and personal data passing through those devices can easily be intercepted by anyone connected to the network.

Virtual private networks (VPNs) are one way to keep things secure, but 82% of mobile users told IDG they don’t always bother to use them. The profusion of software-as-a-service (SaaS) options encourages this. Gartner has estimated that by 2018, 25% of corporate data will bypass perimeter security and flow directly to the cloud.

The Wi-Fi landscape is changing, thanks to mobile devices, cloud services and the growing threat of cyber attacks. This means that Wi-Fi security must be handled holistically, with a centralized approach to management and an architecture that integrates both endpoint protection and network traffic analysis. Cisco has spent more than $1 billion on security acquisitions since 2015, and it has put in place the necessary pieces to provide this integration.

Cisco Umbrella, which the company announced last month, is a new approach to securing the business perimeter that takes into account the changing ways people access the internet. Umbrella gives network and security managers a complete picture of all the devices on the network and what they are doing. For example, by combining Umbrella with Cisco Cloudlock Cloud Access Security Broker technology, organizations can enforce policies customized to individual SaaS applications and even block inappropriate services entirely. They can also block connections to known malicious destinations at the DNS and IP layers, which cuts down on the threat of malware. Umbrella can also discover and control sensitive data in SaaS applications, even when users are off the network.
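
To make the DNS-layer idea concrete, here’s a toy sketch (not Umbrella’s implementation) that refuses a connection when a hostname, or the address it resolves to, appears on a blocklist; the blocklist entries are invented.

    # A toy illustration of DNS- and IP-layer blocking; the blocklist is invented.
    import socket

    BLOCKED_HOSTS = {"malware.example.net"}
    BLOCKED_IPS = {"203.0.113.66"}

    def allowed(hostname):
        if hostname in BLOCKED_HOSTS:
            return False
        try:
            address = socket.gethostbyname(hostname)
        except socket.gaierror:
            return False                           # unresolvable: treat as blocked
        return address not in BLOCKED_IPS

    print(allowed("malware.example.net"))          # False: stopped before connecting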

Cisco’s modernized approach to security also uses the power of the cloud for administration and analysis. Cisco Defense Orchestrator resolves over 100 billion Internet requests each day. Its machine learning technology compares this traffic against a database of more than 11 billion historical events to look for patterns that identify known malicious behavior. Defense Orchestrator can thus spot breaches quickly so they can be blocked or isolated before they do any damage. Thanks to the cloud, anonymized data from around the Internet can be combined with deep learning to continually improve these detection capabilities. Predictive analytical models enable Cisco to identify where current and future attacks are staged. In other words, Cisco’s security cloud gets smarter every day.

Umbrella can integrate with existing systems, including appliances, feeds and in-house tools, so your investments are protected. It’s built upon OpenDNS, a platform that has been cloud-native since its inception more than a decade ago. It’s the basis for Cisco’s security roadmap going forward.

A great way to get started with Cisco Umbrella is by revisiting protection on your Wi-Fi access points. We know Cisco networks inside and out, so let us put you on the on-ramp to the future of network security.

Is your network ready for digital transformation?

If your company has more than one location, you know the complexity that’s involved in maintaining the network. You probably have several connected devices in each branch office, along with firewalls, Wi-Fi routers and perhaps VoIP equipment. Each patch, firmware update or new malware signature needs to be installed manually, necessitating a service call. The more locations you have, the bigger the cost and the greater the delay.

This is the state of technology at most distributed organizations these days, but it won’t scale well for the future. Some 50 billion new connected smart devices are expected to come online over the next three years, according to Cisco. This so-called “Internet of things” (IoT) revolution will demand a complete rethinking of network infrastructure.

Networks of the future must flexibly provision and manage bandwidth to accommodate a wide variety of usage scenarios. They must also be manageable from a central point. Functionality that’s currently locked up in hardware devices must move into software. Security will become part of the network fabric, rather than distributed to edge devices. Software updates will be automatic.

Cisco calls this vision “Digital Network Architecture” (DNA). It’s a software-driven approach enabled by intelligent networks, automation and smart devices. By virtualizing many functions that are now provided by physical hardware, your IT organization can gain unparalleled visibility and control over every part of your network.

For example, you can replace hardware firewalls with a single socket connection. Your network administrators can get a complete view of every edge device, and your security operations staff can use analytics to identify and isolate anomalies. New phones, computers or other devices can be discovered automatically and appropriate permissions and policies enforced centrally. Wi-Fi networks, which are one of the most common entry points for cyber attackers, can be secured and monitored as a unit.

One of the most critical advantages of DNA is flexible bandwidth allocation. Many organizations today provision bandwidth on a worst-case scenario basis, resulting in excess network capacity that sits idle much of the time. In a fully software-defined scenario, bandwidth is allocated only as needed, so a branch office that’s experiencing a lull doesn’t steal resources from a busy one. Virtualized server resources can also be allocated in the same way, improving utilization and reducing waste.

IoT will demand unprecedented levels of network flexibility. Some edge devices – such as point-of-sale terminals – will require high-speed connections that carry quick bursts of information for tasks such as credit card validation. Others, like security cameras, need to transmit much larger files but have greater tolerance for delay. Using a policy-based DNA approach, priorities can be set to ensure that each device gets the resources it needs.

Getting to DNA isn’t an overnight process. Nearly every new product Cisco is bringing to the market is DNA-enabled. As you retire older equipment, you can move to a fully virtualized, software-defined environment in stages. In some cases, you may find that the soft costs of managing a large distributed network – such as travel, staff time and lost productivity – already justify a switch. Whatever the case, ESI has the advisory and implementation expertise to help you make the best decision.

Understanding and adopting Splunk

Splunk has been a trend in the industry for quite some time, but what do we know about its use and the market Splunk is targeting?

Splunk takes its name from the word “spelunking”, the exploration of caves: locating, exploring, studying and mapping them. The product does much the same thing with machine data, in three steps (a short search example follows the list):

  1. Data indexing: Splunk collects data from different locations, combines them and stores them in a centralized index.
  2. Using indexes for searches: the use of indexes gives Splunk a high degree of speed when searching for problem sources.
  3. Filtering results: Splunk provides users with several tools for filtering results, for faster detection of problems.
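
As promised above, here’s a hedged illustration of steps 2 and 3 using the Splunk SDK for Python: it runs a one-shot search against an index and prints the matching events. The connection details and the search string are placeholders, not working credentials.

    # A hedged illustration using the Splunk SDK for Python (pip install splunk-sdk).
    # Connection details and the search string are placeholders.
    import splunklib.client as client
    import splunklib.results as results

    service = client.connect(host="localhost", port=8089,
                             username="admin", password="changeme")

    # One-shot search: query the central index (step 2) and filter it (step 3).
    stream = service.jobs.oneshot('search index=main error | head 5')
    for event in results.ResultsReader(stream):
        if isinstance(event, dict):                # skip diagnostic messages
            print(event.get("_time"), event.get("_raw"))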

For more than a year, I have been experimenting with Splunk across several areas: security, storage, infrastructure, telecom and more. We at ESI have a very complete laboratory, which has allowed me to push my experiments further.

In addition to all this internal data, I used open data to experiment with Splunk’s ability to interpret it.

I tested the open data of the site “montreal.bixi.com”; this is raw data formatted as follows:

Start date –  Start station number –  Start station –  End date –  End station number –  End station –  Account type – Total duration (ms)

With this data, we can find the most common routes, estimate the average duration of a trip, and identify the docking stations most in demand for bike pickups and returns.
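
For comparison, here’s how the same questions could be answered outside Splunk using pandas on a BIXI CSV export; within Splunk you would express this in its search language instead. The file name is a placeholder and the column names follow the format shown above.

    # The same analysis sketched with pandas; the file name is a placeholder and the
    # column names follow the BIXI format described above.
    import pandas as pd

    trips = pd.read_csv("bixi_trips.csv")

    # Most common routes: count trips per (start station, end station) pair.
    routes = (trips.groupby(["Start station", "End station"])
                   .size().sort_values(ascending=False))
    print(routes.head(10))

    # Average trip duration, converted from milliseconds to minutes.
    print(trips["Total duration (ms)"].mean() / 60000)

    # Busiest docking stations for departures.
    print(trips["Start station"].value_counts().head(10))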

For the service’s operations team, this shows, in real time or as a forecast for the next day, which stations should have more bicycles and, above all, where those bicycles will go. They could predict shortages or surpluses of bikes at the docking stations. If data is collected in real time, alerts could be issued to flag a potential shortage or surplus at a station. The system thus facilitates planning and lets the team be proactive in meeting demand rather than reactive. We could even detect a bike that was never returned: for instance, a bike that has not been docked for more than 24 hours could trigger an alert so the operations team can attempt to trace it.

Marketers might think this data is useless, but the opposite is true: the same data can be used to craft offers that attract customers, since it gives departure and arrival times, trip durations and the most-used routes. One can thus identify the busiest time slots and run promotions or adjust rates according to traffic or customer-loyalty objectives.

For management, the open data unfortunately does not give the price of trips according to user status (member or non-member), but the beauty of Splunk is that the collected data can be enriched with data from a third-party system, a database or even manually gathered data. Management could then obtain reports and dashboards based on various factors, such as user status, trip duration, day of the week and much more. We could even make comparisons with previous months or the same month of the previous year. The applications for data residing in Splunk are virtually limitless: the only limit is our imagination!

These are of course fictitious examples made with available open data, but which could be real with your own systems and data.

Collecting information from a website can provide visibility for everyone in a company: operations receives system-overload alerts, marketers learn where connections originate so they can target their campaigns, and management gets a view of the user experience, as well as performance metrics that confirm SLAs.

Whether it is security, operations, marketing, analytics or something else, Splunk can address your needs. In addition to the 1,200 applications available in its portal, you can create your own tables, reports and alerts. You can use its Pivot tool to let people explore the data easily and build their own dashboards.

The platform is easy to use and does not require special expertise: you only need to get your data into it.

Do not hesitate to contact ESI for a presentation or a demo; it will be my pleasure to show you how to “Splunk”.

Guillaume Paré
Senior Consultant, Architecture & Technologies – ESI Technologies

Are you ready to face any unexpected interruption?

Many small and medium-sized enterprises have gaps in their technological infrastructure that prevent them from protecting themselves against the unexpected events that cause interruption to their activities.

One company had its offices robbed: servers, computers, client files and even backup copies disappeared. How does it recover from this situation quickly and minimize the consequences? Without a recovery solution, the company’s activities are seriously compromised…

Natural or industrial disasters, theft, power outages, telecommunications breakdowns, hacking, terrorism: even a short-term interruption of operations can jeopardize your market share, cost you important customers and threaten the survival of your company. It is essential for any organization, whatever its size, to be prepared to face any eventuality by protecting its information assets.

A Disaster Recovery as a Service (DRaaS) solution allows you to secure your assets and mitigate the unfortunate consequences of an interruption to your activities. ESI offers you protection of your environment without the burden of paying for and managing your own recovery site.

Our DRaaS gives you access to our Tier III certified datacentre, built on best-of-breed, fully redundant infrastructure, with elastic scaling and flexible subscription terms.

Cloud solutions tailored to your needs, affordable, and offered by a company with more than 20 years of data management experience that understands the importance of protecting and safeguarding your assets… Don’t wait for an emergency to take advantage of them!

Alex Delisle, Vice-President Business Development, Cloud Solutions – ESI Technologies