Network challenges? Optimize your environment!

Business networks are often like children: they grow unnoticed, sometimes in a disorganized and often unexpected way. A company can quickly end up managing a sprawl of unoptimized equipment.

But it keeps on growing: management wants to install a videoconferencing system, or back up a subsidiary’s data and keep the copies at the head office…

Can your network support these new features? The answer is probably not.

From there, problems multiply. Over time, users experience slowdowns, phone calls are sometimes choppy, and intermittent outages may even occur. How do you solve these problems? Where do you look?

With a multitude of disparate equipment, and often no centralized logging system, it is difficult to investigate and pinpoint a problem.

Network analysis: why and how

For ESI, each client is different. The most important part of our work is, first of all, to understand the client’s situation and what led them to need a network analysis. A new capability to add? Intermittent outages? A desire to plan future investments in the network?

Once this objective is established, we analyze the most recent network diagrams, if any. We examine the equipment, the configurations, the redundancy, the segmentation… all to assess the overall health of the network.

We can thus identify:

  • End-of-life equipment
  • Equipment close to failure
  • Configuration problems and optimization opportunities
  • Network bottlenecks

But most importantly, depending on your needs, we help you identify priorities for investment in the network in the short, medium and long term. At the end of the analysis, our clients obtain:

  • An accurate view of their network
  • An action plan on existing equipment
  • An investment plan

Why ESI?

ESI Technologies has been helping companies plan and modify their infrastructure for more than 22 years now!
Contact us now to find out more about what ESI can do for you!

Take a unified approach to Wi-Fi security!

For many organizations, Wi-Fi access is no longer a luxury. Employees need flexible access as they roam about the office, and customers and partners expect to connect whenever they are on site. But providing access opens up a host of potential security problems if access points aren’t rigorously monitored, patched and maintained. As the number of access points grows, it’s easy to let this important maintenance task slip.

Security teams are so busy fighting fires that preventive maintenance is often overlooked. Kaspersky Lab recently analyzed data from nearly 32 million Wi-Fi hotspots around the world and reported that nearly 25% had no encryption at all. That means passwords and personal data passing through those devices can easily be intercepted by anyone connected to the network.

Virtual private networks (VPNs) are one way to keep things secure, but 82% of mobile users told IDG they don’t always bother to use them. The profusion of software-as-a-service (SaaS) options encourages this. Gartner has estimated that by 2018, 25% of corporate data will bypass perimeter security and flow directly to the cloud.

The Wi-Fi landscape is changing, thanks to mobile devices, cloud services and the growing threat of cyber attacks. This means that Wi-Fi security must be handled holistically, with a centralized approach to management and an architecture that integrates both endpoint protection and network traffic analysis. Cisco has spent more than $1 billion on security acquisitions since 2015, and it has put in place the necessary pieces to provide this integration.

Cisco Umbrella, which the company announced last month, is a new approach to securing the business perimeter that takes into account the changing ways people access the internet. Umbrella gives network and security managers a complete picture of all the devices on the network and what they are doing. For example, by combining Umbrella with Cisco Cloudlock Cloud Access Security Broker technology, organizations can enforce policies customized to individual SaaS applications and even block inappropriate services entirely. They can also block connections to known malicious destinations at the DNS and IP layers, which cuts down on the threat of malware. Umbrella can also discover and control sensitive data in SaaS applications, even when they are off the network.
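To make the DNS-layer idea concrete, here is a minimal Python sketch. It is a conceptual toy, not Umbrella’s actual mechanism, and the blocklist entries are invented:

```python
import socket

# Invented blocklist of known-bad domains; real services such as Umbrella
# use constantly updated threat intelligence, not a static set.
MALICIOUS_DOMAINS = {"malware-c2.example", "phishing-site.example"}

def resolve_if_safe(hostname):
    """Refuse to resolve blocklisted names; otherwise return an IP address."""
    if hostname in MALICIOUS_DOMAINS:
        print("Blocked at the DNS layer:", hostname)
        return None
    return socket.gethostbyname(hostname)

print(resolve_if_safe("www.cisco.com"))       # resolves normally
print(resolve_if_safe("malware-c2.example"))  # refused before any connection
```

Refusing the name lookup stops the connection before any traffic flows, which is what makes enforcement at the DNS layer so inexpensive.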

Cisco’s modernized approach to security also uses the power of the cloud for administration and analysis. The Umbrella cloud resolves over 100 billion Internet requests each day. Its machine learning technology compares this traffic against a database of more than 11 billion historical events to look for patterns that identify known malicious behavior. It can thus spot breaches quickly so they can be blocked or isolated before they do any damage. Thanks to the cloud, anonymized data from around the Internet can be combined with deep learning to continually improve these detection capabilities. Predictive analytical models enable Cisco to identify where current and future attacks are staged. In other words, Cisco’s security cloud gets smarter every day.

Umbrella can integrate with existing systems, including appliances, feeds and in-house tools, so your investments are protected. It’s built upon OpenDNS, a platform that has been cloud-native since its inception more than a decade ago, and it’s the basis for Cisco’s security roadmap going forward.

A great way to get started with Cisco Umbrella is by revisiting protection on your Wi-Fi access points. We know Cisco networks inside and out, so let us put you on the on-ramp to the future of network security.

Is your network ready for digital transformation?

If your company has more than one location, you know the complexity that’s involved in maintaining the network. You probably have several connected devices in each branch office, along with firewalls, Wi-Fi routers and perhaps VoIP equipment. Each patch, firmware update or new malware signature needs to be installed manually, necessitating a service call. The more locations you have, the bigger the cost and the greater the delay.

This is the state of technology at most distributed organizations these days, but it won’t scale well for the future. Some 50 billion new connected smart devices are expected to come online over the next three years, according to Cisco. This so-called “Internet of things” (IoT) revolution will demand a complete rethinking of network infrastructure.

Networks of the future must flexibly provision and manage bandwidth to accommodate a wide variety of usage scenarios. They must also be manageable from a central point. Functionality that’s currently locked up in hardware devices must move into software. Security will become part of the network fabric, rather than distributed to edge devices. Software updates will be automatic.

Cisco calls this vision “Digital Network Architecture” (DNA). It’s a software-driven approach enabled by intelligent networks, automation and smart devices. By virtualizing many functions that are now provided by physical hardware, your IT organization can gain unparalleled visibility and control over every part of its network.

For example, you can replace hardware firewalls with a single socket connection. Your network administrators can get a complete view of every edge device, and your security operations staff can use analytics to identify and isolate anomalies. New phones, computers or other devices can be discovered automatically and appropriate permissions and policies enforced centrally. Wi-Fi networks, which are one of the most common entry points for cyber attackers, can be secured and monitored as a unit.

One of the most critical advantages of DNA is flexible bandwidth allocation. Many organizations today provision bandwidth for the worst-case scenario, resulting in excess network capacity that sits idle much of the time. In a fully software-defined scenario, bandwidth is allocated only as needed, so a branch office that’s experiencing a lull doesn’t steal resources from a busy one. Virtualized server resources can be allocated in the same way, improving utilization and reducing waste.

IoT will demand unprecedented levels of network flexibility. Some edge devices – such as point-of-sale terminals – will require high-speed connections that carry quick bursts of information for tasks such as credit card validation. Others, like security cameras, need to transmit much larger files but have greater tolerance for delay. Using a policy-based DNA approach, priorities can be set to ensure that each device gets the resources it needs.
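As a thought experiment, that policy idea can be reduced to a few lines of Python. This is not Cisco DNA’s API; the device classes and numbers are invented purely to illustrate policy-based prioritization:

```python
# Thought experiment only: device classes and numbers are invented.
POLICIES = {
    "pos_terminal":    {"priority": 1, "burst_mbps": 50},  # quick bursts, low latency
    "security_camera": {"priority": 2, "burst_mbps": 25},  # large files, delay-tolerant
    "guest_wifi":      {"priority": 3, "burst_mbps": 10},
}

def allocate(device_class, requested_mbps):
    """Grant bandwidth up to the class's ceiling (unknown classes get 1 Mbps)."""
    policy = POLICIES.get(device_class, {"burst_mbps": 1})
    return min(requested_mbps, policy["burst_mbps"])

print(allocate("pos_terminal", 40))      # 40: burst allowed for card validation
print(allocate("security_camera", 100))  # 25: capped, the camera can wait
```

The point is only that priorities live in data that a controller can apply network-wide, rather than in per-device configuration.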

Getting to DNA isn’t an overnight process. Nearly every new product Cisco is bringing to the market is DNA-enabled. As you retire older equipment, you can move to a fully virtualized, software-defined environment in stages. In some cases, you may find that the soft costs of managing a large distributed network – such as travel, staff time and lost productivity – already justify a switch. Whatever the case, ESI has the advisory and implementation expertise to help you make the best decision.

Understanding and adopting Splunk

Splunk has been trending in the industry for quite some time, but what do we know about its use and the market it targets?

Splunk takes its name from the word “spelunking”, which refers to the activities of locating, exploring, studying and mapping caves. Its operation can be summed up in three steps (a toy sketch follows the list):

  1. Data indexing: Splunk collects data from different locations, combines it and stores it in a centralized index.
  2. Using indexes for searches: the use of indexes makes Splunk very fast at tracking down the sources of problems.
  3. Filtering results: Splunk provides users with several tools for filtering results, for faster detection of problems.
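To make these three steps concrete, here is a toy Python sketch of the index-then-search principle; Splunk itself is of course far more sophisticated:

```python
from collections import defaultdict

# Toy illustration of index-then-search: build an inverted index over events,
# then use it to find and filter matching events quickly.
events = [
    "2017-01-12 ERROR disk full on srv01",
    "2017-01-12 INFO backup completed on srv02",
    "2017-01-13 ERROR timeout on srv01",
]

index = defaultdict(set)                 # 1. indexing: term -> event ids
for i, event in enumerate(events):
    for term in event.lower().split():
        index[term].add(i)

hits = index["error"] & index["srv01"]   # 2. searching via the index
for i in sorted(hits):                   # 3. filtering/inspecting the results
    print(events[i])
```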

For more than a year, I have been experimenting with Splunk on several fronts: security, storage, infrastructure, telecom and more. We at ESI have a very complete laboratory, which allowed me to push my experiments further.

In addition to all this internal data, I used open data to test Splunk’s ability to interpret it.

I tested the open data from the “montreal.bixi.com” site; it is raw data formatted as follows:

Start date – Start station number – Start station – End date – End station number – End station – Account type – Total duration (ms)

With this data, we can find the most common routes, estimate the average duration of a trip, and identify the docking stations most in demand for bike pickups and returns.
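As an illustration, here is a short Python sketch of such an analysis. It assumes the open data has been exported to a CSV file; the file name “bixi_trips.csv” and the exact header spellings are assumptions:

```python
import csv
from collections import Counter

# Count routes and average trip duration from an assumed CSV export whose
# columns match the field list above.
routes = Counter()
total_ms = trips = 0

with open("bixi_trips.csv", newline="") as f:
    for row in csv.DictReader(f):
        routes[(row["Start station"], row["End station"])] += 1
        total_ms += int(row["Total duration (ms)"])
        trips += 1

print("Most common routes:", routes.most_common(3))
print("Average trip: %.1f minutes" % (total_ms / trips / 60000))
```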

For the service’s operations team, this kind of analysis shows, in real time or as a next-day forecast, which docking stations should be stocked with more bicycles and, above all, where those bicycles will end up. They could predict shortages or surpluses of bikes at the docking stations. If data were collected in real time, alerts could be issued to flag potential shortages or surpluses, as sketched below. The system would thus facilitate planning and allow the team to be proactive in meeting demand, rather than reactive. We could even detect an unreturned bicycle: a bike that has not been docked for more than 24 hours could trigger an alert so the operations team can attempt to trace it.
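Continuing with the same assumed CSV, a few lines suffice to sketch the shortage alert: stations where departures far exceed arrivals are candidates for restocking (the threshold is illustrative):

```python
import csv
from collections import Counter

# Net outflow per station: positive means more departures than arrivals.
net_outflow = Counter()
with open("bixi_trips.csv", newline="") as f:
    for row in csv.DictReader(f):
        net_outflow[row["Start station"]] += 1
        net_outflow[row["End station"]] -= 1

for station, deficit in net_outflow.most_common():
    if deficit > 50:  # illustrative threshold
        print("ALERT: %s may run short (%d more departures than arrivals)"
              % (station, deficit))
```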

For marketers, one might think this data is useless, but the opposite is true: the same data can be used to design offers to attract customers, since it includes departure and arrival times, trip durations and the most used routes. One can thus identify the busiest time slots and run promotions or adjust rates according to traffic or customer-loyalty objectives.

For management, the open data unfortunately does not give the price of trips according to user status (member or non-member), but the beauty of Splunk is that the collected data can be enriched with data from a third-party system, a database, or simply manually collected data. Management could then obtain reports and dashboards based on various factors, such as user status, travel time, day of the week, and much more. We could even make comparisons with previous months or with the same month of the previous year. The applications of data residing in Splunk are virtually limitless: the only limit is our imagination!
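Here is the enrichment idea illustrated in plain Python; in Splunk itself this would typically be done with a lookup table, and the member data below is invented:

```python
# Enrich a trip record with member status from an external source
# (e.g. a third-party system or database); all values here are invented.
member_status = {"A123": "member", "B456": "non-member"}

trip = {"account": "A123", "duration_min": 12}
trip["status"] = member_status.get(trip["account"], "unknown")
print(trip)  # {'account': 'A123', 'duration_min': 12, 'status': 'member'}
```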

These are of course fictitious examples made with available open data, but they could become real with your own systems and data.

Collecting information from a website can provide visibility to everyone in a company: the operations team receives system overload alerts, marketers get information about the origin of connections to better target their campaigns, and management gets a view of the user experience, along with performance metrics that confirm SLAs.

Whether your needs are in security, operations, marketing, analytics or elsewhere, Splunk can address them. In addition to the 1,200 applications available on its portal, you can create your own tables, reports and alerts. You can use its Pivot interface to let people easily explore the data and build their own dashboards.

The platform is easy to use and does not require special expertise: you just need to get your data in.

Do not hesitate to contact ESI for a presentation or a demo; it will be my pleasure to show you how to “Splunk”.

Guillaume Paré
Senior Consultant, Architecture & Technologies – ESI Technologies

Are you ready to face any unexpected interruption?

Many small and medium-sized enterprises have gaps in their technological infrastructure that prevent them from protecting themselves against the unexpected events that cause interruption to their activities.

One company had its offices robbed: servers, computers, client files and even backup copies disappeared. How do you recover from this situation quickly and minimize the consequences? Without a recovery solution, the company’s activities are seriously compromised…

Natural or industrial disasters, theft, power outages, telecommunications failures, hacking, terrorism, etc. Even a short interruption of operations can jeopardize your market share, cost you several important customers and threaten the survival of your company. It is essential for any organisation, whatever its size, to be prepared to face any eventuality by protecting its information assets.

A Disaster Recovery as a Service (DRaaS) solution allows you to secure your assets and mitigate the unfortunate consequences of an interruption to your activities. ESI offers you protection of your environment without the burden of paying for and managing a recovery site.

Our DRaaS gives you access to our Tier III certified datacentre, equipped with best-of-breed, fully redundant equipment, and offers elastic scaling and flexible subscription terms.

Cloud solutions tailored to your needs, affordable and offered by a company with more than 20 years of data management experience that understands the importance of protecting and safeguarding your assets… Don’t wait for an emergency to take advantage of them!

Alex Delisle, Vice-President Business Development, Cloud Solutions – ESI Technologies

Denial of service attacks – understanding and avoiding them

In October, a cyber attack on the DNS provider Dyn made many web services and sites inaccessible, including those of several broadcasters (Fox News, HBO, CNN, the Weather Channel, etc.) and world-class sites such as Netflix, PayPal, Yelp and Starbucks, to name a few.

This attack is considered the largest denial of service attack ever recorded. To better understand what happened, we will first recall some basic notions of Internet communications, then talk about botnets and their evolution, before looking at the specifics of this recent attack. Finally, we will see how to guard against such attacks.

Internet Communication Basics

Most Internet communications are of the client-server type. The web browser often acts as the “client”, sending requests to a server, for example asking it to display a YouTube video.

Each server has its own IP address. When browsing Google, for instance, the server that responds to our request may differ depending on our geographical location. This is made possible by the Domain Name System (DNS).

DNS servers translate a human-readable name such as “www.google.com” into an IP address. This notion is important for understanding the attack that targeted Dyn.
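You can watch this translation happen with a couple of lines of Python; the addresses returned will vary with your location and over time, exactly as described above:

```python
import socket

# Ask DNS to translate a name into IP addresses.
name, aliases, addresses = socket.gethostbyname_ex("www.google.com")
print(addresses)  # e.g. ['142.250.80.100'] -- your result will differ
```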

History of botnets

A “botnet” (a combination of “robot” and “network”) is a network of computers infected by a virus that turns them into passive agents listening for further instructions. The person controlling the botnet can then send commands to this army of infected computers, for example instructing them to send spam or to launch distributed denial of service (DDoS) attacks. The distributed nature of this architecture makes detection of DDoS attacks difficult.

With the miniaturization and ever-decreasing cost of computing devices, more and more objects are becoming “connected”. This creates an ever-growing network of printers, IP cameras and all kinds of objects connected to the web. All these devices are ultimately small computers, and like all computers, they are vulnerable to attacks.

Moreover, since few people take the time to configure these connected objects, most are left with default passwords, making it even simpler for an attacker to compromise and infect them with viruses.

We find ourselves in a situation where many objects connected to the Internet are infected by a virus. And these devices, like IP cameras, are always on, unlike our computers. During the Dyn attack, this botnet managed to generate up to 1.2 terabits of data per second! That is roughly 150 gigabytes per second, the equivalent of about 30 full DVDs every second!

Why did this attack hurt so badly?

Denial of service attacks have traditionally targeted servers or websites of companies that are chosen either for activism (or hacktivism) reasons, or for the purpose of extorting money.

The reasons for this attack are not yet known, but what differs from previous ones is the target. For the first time, it was not site servers that were targeted, but the DNS servers of the Dyn company.

The servers of Twitter, PayPal and Netflix, for example, were fully functional. But by preventing clients from learning the addresses of the servers to connect to, the attack made all these sites inaccessible.

How to defend against these attacks?

DDoS attacks often follow a well-established pattern, so a first way to protect yourself is to use systems that detect the signatures of these attacks.
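As a toy illustration of this kind of detection, the sketch below flags sources that exceed an invented request-rate threshold; real anti-DDoS systems rely on far richer signatures:

```python
import time
from collections import defaultdict, deque

# Toy rate-based detector: flag a source sending too many requests per second.
# The window and threshold are invented for illustration.
WINDOW_S, THRESHOLD = 1.0, 100
recent = defaultdict(deque)  # source IP -> timestamps of its recent requests

def on_request(src_ip):
    """Record a request and return True if the source looks like a flood."""
    now = time.monotonic()
    timestamps = recent[src_ip]
    timestamps.append(now)
    while timestamps and now - timestamps[0] > WINDOW_S:
        timestamps.popleft()
    return len(timestamps) > THRESHOLD

if on_request("203.0.113.7"):
    print("possible flood from 203.0.113.7")
```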

Another preventive measure is to implement redundancy across servers. By using load balancers, you can intelligently route traffic to multiple servers, improving the system’s resilience to high traffic volumes.
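Here is a minimal sketch of the round-robin idea with invented backend addresses; production load balancers also health-check and weight their backends:

```python
from itertools import cycle

# Minimal round-robin dispatch across redundant servers (invented addresses).
backends = cycle(["10.0.0.11", "10.0.0.12", "10.0.0.13"])

for _ in range(4):
    print("route request to", next(backends))  # .11, .12, .13, then .11 again
```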

But that’s not all! We also need to guard against infection, to prevent our own systems from becoming botnet members. To do this, you must first protect computers with antivirus software.

However, many connected devices are too limited to run antivirus software. It is therefore essential to analyze the traffic entering your corporate network, both to detect known threats and to spot attacks exploiting zero-day vulnerabilities.

You can further minimize the risk of infection by correlating and monitoring event logs through continuous network and systems monitoring, which is part of the services offered by ESI Technologies.

Finally, remember to keep systems up to date, to mitigate the risk of known vulnerabilities being exploited, and use unique, complex passwords. Password managers exist to make your life easier.

A specialized information security firm such as ESI Technologies will be able to assist you in analyzing your needs and selecting the most effective and efficient solutions to mitigate the risks of botnet attacks on your systems.

Tommy Koorevaar, Security Advisor – ESI Technologies

Cloud Strategy: data collection

Here is part 6 of our series covering the key issues to consider before adopting cloud technologies. This month, we discuss how to build your strategy and the data points that must be considered.

When considering & building a cloud strategy, organisations need to consider business objectives/outcomes desired, quantifiable and time-bound goals as well as identify specific initiatives that the enterprise can and should undertake in order to execute the strategy and achieve the goals set. As shown by surveys on the subject by Gartner in 2013 and 2014, process and culture are likely to be big hurdles in any move to cloud. Therefore, involving all aspects of the business and gathering the right information can assist in building the right strategy and identify potential problems ahead of time.

The first concrete step in building this strategy is to gather the data points needed to identify and define those objectives, goals and initiatives for the enterprise in the near and mid term. Once the data is collected, you can review and analyze it, identify the desired business outcomes, set the (quantifiable) goals and define the specific initiatives you want to put in place to achieve them. This should not be a strict price or technology evaluation.

Data Collection
The data points needed will have to come from various parts of the organisation (business units, finance, HR and IT). Some of the required information may take the form of files, but much of it resides directly with your staff, so interviews should be part of the data collection process. These interviews should take up to a few hours each and focus on the interviewees’ functions, the processes they use and the business outcomes they require or desire, to provide insight into the actual impacts on the business before creating your cloud strategy.

With this data, you will be in a position to account for every aspect that cloud computing touches, to see what it will affect and how, to evaluate its effect on the balance sheet (positive or negative) and to decide on your strategy moving forward.

Benoit Quintin, Director Cloud Services – ESI Technologies

Account of the NetApp Insight 2016 Conference

The 2016 Edition of NetApp Insight took place in Las Vegas from September 26 to 29.
Again this year, NetApp presented the ‘Data Fabric’ vision it unveiled two years ago. According to NetApp, the growth in the capacity, velocity and variety of data can no longer be handled by the usual tools. As stated by NetApp’s CEO George Kurian, “data is the currency of the digital economy”, and NetApp wants to be seen as a bank helping organizations manage, move and grow their data globally. The central challenge of the digital economy is thus data management, and NetApp clearly intends to be a leader in this field. This vision becomes more concrete every year across the products and platforms added to the portfolio.

New hardware platforms

NetApp took advantage of the conference to officially introduce its new hardware platforms, which integrate 32Gb FC SAN ports, 40GbE network ports, NVMe SSD embedded read cache and 12Gb SAS-3 ports for back-end storage. Additionally, the FAS9000 and AFF A700 use a new, fully modular chassis (including the controller module) to facilitate future hardware upgrades.

Note that the SolidFire platforms drew attention from both NetApp and the public: the former keen to explain their position in the portfolio, the latter to learn more about this extremely agile and innovative technology. https://www.youtube.com/watch?v=jiL30L5h2ik

New software solutions

  • SnapMirror for AltaVault, available soon through the SnapCenter platform (replacing SnapDrive/SnapManager): this solution allows backup of NetApp volume data (including application databases) directly in the cloud (AWS, Azure & StorageGrid) https://www.youtube.com/watch?v=Ga8cxErnjhs
  • SnapMirror for SolidFire is currently under development. No further details were provided.

The features presented reinforce the objective of offering a unified data management layer across the NetApp portfolio.

The last two solutions are more surprising since they do not require any NetApp equipment to be used. These are available on the AWS application store (SaaS).

In conclusion, we feel that NetApp is taking steps to be a major player in the “software defined” field, while upgrading its hardware platforms to get ready to meet the current challenges of the storage industry.

Olivier Navatte, Senior Consultant – Storage Architecture

Cryptolocker virus: how to clear the infection

Cryptolocker is a now well-known type of virus that can be particularly harmful to data stored on computers. The virus carries code that encrypts files, making them inaccessible to users, and demands a ransom (in bitcoin, for example) to decrypt them, hence the name “ransomware”.
Cryptolocker-type viruses infiltrate through different vectors (emails, file sharing websites, downloads, etc.) and are becoming more resistant to antivirus solutions and firewalls; it is safe to say that these viruses will continue to evolve and get increasingly good at circumventing corporate security measures. Cryptolocker is already in its 6th or 7th variant!

Is there an insurance policy?

All experts agree that a solid backup plan is always the best prescription for dealing with this type of virus. But what does a good backup plan imply, and what does a well-executed plan look like?
The backup plan must be tested regularly and should preferably include an offsite backup copy. Using the ESI cloud backup service is an easy solution to implement.
The automated copy acts as an insurance policy in case of intrusion. Regular backups provide a secondary offsite datastore and act as a fallback mechanism in case of malicious attack.

What to do in case of infection?

From the moment your systems are infected with a Cryptolocker, you are already dealing with several encrypted files. If you have no mechanism in place to detect or monitor file changes (e.g. a rate of 100 file changes per minute), the damage can be very extensive. A minimal sketch of such monitoring follows.
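As a rough illustration, the Python sketch below counts files modified during the last minute under a watched directory and alerts past a threshold. The path and threshold are invented, and a production tool would use filesystem event hooks rather than polling:

```python
import os
import time

# Invented path and threshold; adjust to your environment.
WATCHED_DIR, THRESHOLD = "/data/shares", 100

def recently_modified(root, window_s=60.0):
    """Count files under `root` modified within the last `window_s` seconds."""
    cutoff = time.time() - window_s
    count = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    count += 1
            except OSError:
                pass  # file may have been deleted mid-scan
    return count

if recently_modified(WATCHED_DIR) > THRESHOLD:
    print("ALERT: abnormal file modification rate -- possible ransomware")
```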

  1. Notify the Security Officer of your IT department.
  2. Above all, do not pay the ransom, because you might well be targeted again.
  3. You will have no choice but to restore your files from a backup copy. This copy becomes invaluable in your recovery efforts, as it provides you with a complete record of your data.

After treatment, are you still vulnerable?

Despite good backup practices, you still remain at risk after restoring your data.
An assessment of your security policies and your backup plan by professionals such as ESI Technologies will provide recommendations to mitigate such risks in the future. Some security mechanisms exist to protect you from viruses that are still unknown to detection systems. Contact your ESI representative to discuss it!

Roger Courchesne – Director, Security and Internetworking Practice – ESI Technologies

Cloud Strategy: human impacts across the organization

Here is part five of our series covering the key issues to consider before adopting cloud technologies. This month, we discuss the impact on human resources.

Resources in your organisation, both on the IT side and on the business side, will be impacted by this change. While helping companies move to cloud, we have had to assist with adapting IT job descriptions, processes and roles within the organisation.

As the IT organisation moves into a P&L role, its success starts to be tied to stakeholders’ adoption of the services offered. To do this, IT needs to get closer to the business units, understand their requirements and deliver access to resources on demand. None of this can happen unless things change within the IT group.

As companies automate their practice and create self-service portals to provision resources, some job descriptions need to evolve. A strong and clear communication plan with set milestones helps employees understand the changes coming to the organisation, and involving them in the decision process will go a long way toward smoothing the transition. We have seen that IT organisations that had a clear communication plan from the onset and involved their employees in the process had a much easier transition and a faster adoption rate than those that did not.

Our experience helping customers with cloud computing shows that cloud significantly alters IT’s role and relationship with the business, and that employees’ roles need to evolve. Training, staff engagement in the transition and constant communication will help your organisation move to this new paradigm.

Benoit Quintin, Director Cloud Services – ESI Technologies