Why Agile infrastructure and DevOps go hand-in-hand

Did you know that some software development operations deploy code up to 200 times more frequently than others? Or that they can make code changes in less than an hour, recover from downtime in minutes and experience a 60-times lower code failure rate than their peers?

The difference is DevOps, a new approach to agile development that emphasizes modularity, frequent releases and a constant feedback cycle. DevOps is a sharp contrast with the conventional waterfall approach to software development. Traditionally, software development has involved extensive upfront design and specification, a process that could take weeks. Developers then went away – often for months – and built a prototype application. After a lengthy review process, changes were specified and developers disappeared again to implement them. As a result, it wasn’t unusual for enterprise applications to take a year or more to deliver, during which time user needs often changed, requiring still more development.

No one has the luxury of that kind of time anymore. Today’s software development timeframe is defined by the apps on your smartphone. DevOps is a way to make enterprise software development work at internet speed.

DevOps replaces the monolithic approach to building software with components that can be quickly coded and prototyped. Users are closely involved with the development process at every step. Code reviews may happen as frequently as once a day, with the focus being on steady, incremental progress of each component rather than delivery of a complete application.

One significant departure DevOps makes from traditional development is that coders have control not only over the code, but also over the operating environment. Agile infrastructure makes this possible. Technologies like containers and microservices enable developers to provision their own workspaces, including variables like memory, storage, operating system and even co-resident software, to emulate the production environment as closely as possible. This reduces the risk of surprise when migrating from test to production, while also enhancing developer control and accountability.

With the advent of containers, IT organizations can now create libraries of lightweight, pre-built container images customized to the needs of individual developers. Containers can be quickly spun up and shut down without the need for IT administrative overhead, a process that can save weeks over the course of a project. Pre-configured containers can also be shared across development teams. Instead of waiting days for a virtual machine to be provisioned, developers can be up and running in minutes, which is one of the reasons DevOps enables such dramatic speed improvements.
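
To make this concrete, here is a minimal sketch of that kind of self-service provisioning, using the Docker SDK for Python; the image name, memory limit and environment variables are illustrative assumptions rather than a prescribed setup.

```python
# Minimal sketch of self-service container provisioning with the Docker SDK
# for Python ("docker" package). Image, limits and names are illustrative.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Spin up a pre-configured workspace from a shared image library in seconds.
workspace = client.containers.run(
    "python:3.11-slim",            # assumed base image from the team's library
    command="sleep infinity",      # keep the container alive for interactive use
    name="dev-workspace-alice",    # hypothetical developer workspace name
    mem_limit="512m",              # cap memory to emulate the production host
    environment={"APP_ENV": "test"},
    detach=True,
)
print(f"workspace {workspace.short_id} created")

# ...develop and test...

# Shut the workspace down just as quickly, with no IT ticket required.
workspace.stop()
workspace.remove()
```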

Many organizations that adopt DevOps have seen exceptional results. High-performing DevOps organizations spend 50% less time on unplanned work and rework, which means they spend that much more time on new features, according to Puppet Labs’ 2016 State of DevOps report. Code failure rates are less than 15%, compared to up to 45% for middle-of-the-road performers. Lead time for changes is under an hour, compared to weeks for traditional waterfall development.

As enticing as those benefits sound, the journey to DevOps isn’t a simple one. Significant culture change is involved. One major change is that developers assume much greater accountability for the results of their work. This quickly helps organizations identify their best performers – and their weakest ones.

But accountability also gives top performers the recognition they deserve for outstanding work. This is one reason why Puppet Labs found that employees in high-performing DevOps organizations are 2.2 times more likely to recommend their company as a great place to work. Excellence breeds excellence, and top performers like working with others who meet their standards.

Users must shift their perspective as well. DevOps requires much more frequent engagement with development teams. However, high-performing organizations actually spend less time overall in meetings and code reviews because problems are identified earlier in the process and corrected before code dependencies layer on additional complexity. Meetings are more frequent, but shorter and more focused.

If you’re adopting agile infrastructure, DevOps is the next logical step to becoming a fully agile IT organization.

How Big Data changes the rules in banking

Online banking has been a win-win proposition for banks and their customers. Customers get the speed and convenience of self-service and banks enjoy big savings on transaction costs. But for many consumer banks in particular, going online has also meant losing the critical customer engagement that branches have long provided. When one online banking service looks pretty much like every other, banks need to find new ways to set themselves apart. By leveraging cloud and big data analytics, banks can re-engage in new and innovative ways via web and mobile platforms.

In a recent report outlining a new digital vision for consumer banks, Accenture offers examples of what some banks are already doing to enhance the online experience and strengthen bonds with their customers.

  • Banco Bilbao Vizcaya Argentaria of Spain captures more than 80 transaction characteristics every time customers use their debit card and uses the information to help consumers manage and forecast day-to-day spending.
  • BNP Paribas Fortis of Belgium partnered with the country’s largest telecommunications provider to create a mobile e-commerce platform that enables consumers to shop and pay for products from their smartphones. The service makes it easier for consumers to find merchants and helps local businesses get paid more quickly, which is good for the bank’s commercial business.
  • Commonwealth Bank of Australia has a mobile app that enables customers to use augmented reality to get detailed information about homes they might want to buy by simply pointing their smartphone camera at the property. The app also tells users exactly how much they will pay for a mortgage from the bank.
  • Five of Canada’s largest banks have partnered with Sensibill to integrate automated receipt management functionality into their digital banking apps. Customers can use the service to organize receipts and get reports that help them with budgeting.

These efforts are successful because the banks see themselves as more than just money managers. They’ve broadened their perspective to become allies who help their customers become more efficient and achieve their dreams.

The cloud offers unprecedented capabilities for banks to integrate other services into their core applications through APIs. For example, many financial services companies now offer credit reporting as a free feature. Credit agencies are eager to promote their brands through this kind of integration, and they make it easy for banks to work with them.

When cloud is combined with big data, banks can put their existing knowledge of their customers to work in new ways. For example, they can segment customers by spending and saving behavior and offer services tuned to different budgets. They can target services to distinct customer segments based on geography or age group by overlaying demographics on customer data. They can even listen in on social media conversations to pinpoint opportunities to offer, for example, car loans to fans of specific vehicle makes and models.
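
As a rough illustration of the segmentation idea, the following pandas sketch buckets customers by total spending; the column names and thresholds are invented for the example and are not a real banking schema.

```python
# Illustrative behavioral segmentation with pandas. The transaction data,
# column names and spending thresholds are assumptions for the example.
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3],
    "amount":      [120.0, 80.0, 15.0, 22.0, 640.0, 410.0],
    "category":    ["groceries", "dining", "dining", "transit", "travel", "travel"],
})

# Aggregate spending per customer, then bucket customers into segments.
per_customer = transactions.groupby("customer_id")["amount"].agg(["sum", "mean"])
per_customer["segment"] = pd.cut(
    per_customer["sum"],
    bins=[0, 100, 500, float("inf")],
    labels=["budget", "mainstream", "premium"],
)
print(per_customer)
```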

The biggest impediments to this kind of transformation aren’t technical but rather cultural. If banks see themselves as simply stewards of money, they limit their potential to break out of historical niches. But when they see themselves as allies in their customers’ financial success, they can use the cloud and big data to expand and enrich relationships. The mobile app is the new branch, and a good service provider can help your financial institution realize its transformative potential.

Tips for a pain-free journey to software-defined infrastructure

By some estimates, 70% of the servers in enterprise data centers are now virtualized, meaning that nearly every company is enjoying the benefits of flexibility, high utilization rates and automation that virtualization provides.

If you’re one of them, you might be tempted to move your network, storage and desktops to software-defined infrastructure (SDI) as quickly as possible. That’s a great long-term strategy. In fact, Gartner predicts that programmatic infrastructure will be a necessity for most enterprises by 2020. But you should move at your own pace and for the right reasons. Don’t rush the journey, and be aware of these common pitfalls.

Have a strategy and a plan. Think through what you want to virtualize and why you want to do it. Common reasons include improving the efficiency of equipment you already have, improving application performance or building the foundation for hybrid cloud. Knowing your objectives will give you, and your technology partner, a better fix on what to migrate and when.

Be aware that many areas of SDI are still in early-stage development and standards are incomplete or nonexistent. This makes mission-critical applications poor candidates for early migration. Start with low-risk applications and implement in phases, recognizing that a full migration may take years and that some legacy assets may not be worth virtualizing at all. If you’re new to SDI, consider virtualizing a small part of your infrastructure, such as firewalls or a handful of desktops, to become familiar with the process.

For all the flexibility SDI provides, it also introduces complexity. You’ll now have a virtual layer to monitor in addition to your existing physical layers. That’s not a reason to stay put, but be aware that management and troubleshooting tasks may become a bit more complex.

Map dependencies. In a perfect world, all interfaces between software and hardware would be defined logically, but we know this isn’t a perfect world. In the rush to launch or repair an application, developers may create shortcuts by specifying physical dependencies between, say, a database and storage device. These connections may fail if storage is virtualized. Understand where any such dependencies may exist and fix them before introducing a software-defined layer.
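
As a small, hypothetical sketch of what fixing such a dependency can look like in code, the snippet below replaces a hard-coded physical storage location with one resolved from configuration; the device path and environment variable name are placeholders.

```python
# Sketch of replacing a hard-coded physical dependency with a logical one
# resolved from configuration. Paths and variable names are hypothetical.
import os
from pathlib import Path

# Brittle: assumes a specific physical storage device that may disappear
# once storage is virtualized.
# DB_STORAGE = Path("/dev/disk/by-id/scsi-3600508b1001c7a2a")

# Better: resolve the storage location from configuration so the
# software-defined layer can remap it without code changes.
DB_STORAGE = Path(os.environ.get("DB_STORAGE_PATH", "/var/lib/appdb"))

def open_datafile(name: str):
    """Open a data file relative to the logically defined storage root."""
    return (DB_STORAGE / name).open("rb")
```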

SDI requires a new approach to systems management as well. Since new devices can be introduced to the network with little or no manual intervention, it can be difficult to forecast their performance impact in advance. Be sure to factor analytics and performance management metrics into your planning so that you have a way of modeling the impact of changes before making them.

Use standards. Many SDI standards are still a work-in-progress. While most vendors do a good job of adhering to a base set of standards, they may also include proprietary extensions that could affect compatibility with third-party products. To ensure you have the greatest degree of flexibility, look for solutions that conform to standards like the Open Networking Foundation’s OpenFlow and OpenSDS for storage.

SDI relies heavily on application programming interfaces (APIs) for communication. Since there are no universal standards for infrastructure APIs, they are a potential source of lock-in if your SDI solution requires APIs specific to a particular vendor. Look for solutions that adhere to APIs defined by industry standards instead.
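
One common way to limit that exposure is to wrap vendor-specific calls behind a thin, vendor-neutral interface so that switching suppliers touches one adapter rather than every caller. The sketch below illustrates the pattern; the interface and the vendor SDK it hides are hypothetical.

```python
# Sketch of isolating vendor-specific infrastructure APIs behind a thin
# abstraction. The interface and the vendor SDK calls are hypothetical.
from abc import ABC, abstractmethod

class NetworkProvisioner(ABC):
    """Vendor-neutral interface the rest of the codebase depends on."""

    @abstractmethod
    def create_segment(self, name: str, vlan_id: int) -> str:
        ...

class VendorAProvisioner(NetworkProvisioner):
    """Adapter around one vendor's (hypothetical) proprietary SDK."""

    def create_segment(self, name: str, vlan_id: int) -> str:
        # vendor_a_sdk.segments.create(...) would go here
        return f"vendorA-segment-{vlan_id}"

def build_app_network(provisioner: NetworkProvisioner) -> str:
    # Application code stays vendor-agnostic.
    return provisioner.create_segment("app-tier", vlan_id=120)

print(build_app_network(VendorAProvisioner()))
```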

Double down on security. Virtual connections create certain security vulnerabilities that don’t exist in a world where everything is physically attached. For example, the heart of a software-defined network is an SDN controller, which manages all communications between applications and network devices. If the controller is breached, the entire network is at risk, so it’s essential to choose a trusted platform with the ability to validate any new applications or components. Make sure the platforms that manage your virtual processes are locked down tight.

Don’t forget the human factor. One of the great benefits of SDI is that it enables many once-manual processes to be automated. This will impact the skill sets you need in your data center. Deep hardware knowledge will become less important than the ability to manage applications and infrastructure at a high level. Prepare your staff for this shift and be ready to retrain the people who you believe can make the transition.

These relatively modest pitfalls shouldn’t stop you from getting your organization ready to take advantage of the many benefits of SDI. Working with an experienced partner is the best way to ensure a smooth and successful journey.

Agile IT: a better way of doing business

One of the most powerful new ideas to emerge from the cloud computing revolution is IT agility. Agile IT organizations are able to adapt easily to changing business needs by delivering applications and infrastructure quickly to those who need them. Does your organization have what it takes to be truly agile?

There are many components of agile IT infrastructure, but three that we think are particularly important are containers, microservices and automation. These form the foundation of the new breed of cloud-native applications, and they can be used by any organization to revolutionize the speed and agility of application delivery to support the business.

Containers: Fast and Flexible

Containers are a sort of lightweight virtual machine, but they differ from VMs in fundamental ways. Containers run as a group of namespaced processes within a shared operating system, with each container given its own isolated view of resources such as processor and memory, along with all of the supporting elements needed for an application. They are typically stored in libraries for reuse and can be spun up and shut down in seconds. They’re also portable, meaning that an application running in a container can be moved to any other environment that supports that type of container.
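
The short sketch below, again using the Docker SDK for Python, shows those properties in miniature: the container reports its own hostname (its namespaced view of the system) while running under explicit CPU and memory caps. The image and limit values are illustrative.

```python
# Minimal sketch of namespacing and resource limits with the Docker SDK for
# Python. The image and the CPU/memory caps are illustrative choices.
import docker

client = docker.from_env()

output = client.containers.run(
    "alpine:3.19",                 # assumed image; portable across hosts
    command="hostname",            # prints the container's own (namespaced) hostname
    mem_limit="256m",              # memory ceiling for this container
    nano_cpus=500_000_000,         # 0.5 CPU, expressed in units of 1e-9 CPUs
    remove=True,                   # clean up the container when it exits
)
print(output.decode())
```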

Containers have only been on the IT radar screen for about three years, but they are being adopted with astonishing speed. One recent study found that 40% of organizations are already using containers in production and just 13% have no plans to adopt them during the coming year. Containers are especially popular with developers because coders can configure and launch their own workspaces without incurring the delay and overhead of involving the IT organization.

Microservices: A Better Approach to Applications

Use of containers frequently goes hand-in-hand with the adoption of microservices architectures. Applications built from microservices are based upon a network of independently deployable, modular services that use a lightweight communications mechanism such as a messaging protocol. Think of it as an object assembled from Lego blocks. Individual blocks aren’t very useful by themselves, but when combined, they can create elaborate structures.

Service-oriented architecture is nothing new, but the technology has finally matured to the point that it’s practical to rethink applications in that form. The microservices approach is more flexible and efficient than the vertically integrated applications that have dominated IT for decades. By assembling applications from libraries of services, duplication is minimized and software can move into production much more quickly. There’s less testing overhead and more efficient execution, since developers can focus on improving existing microservices rather than reinventing the wheel with each project.

Containers are an ideal platform for microservices. They can be launched quickly and custom-configured to use only the resources they need. A single microservice may be used in many ways by many different applications. Orchestration software such as Kubernetes keeps things running smoothly, handles exceptions and constantly balances resources across a cluster.
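
As a minimal illustration, the official Kubernetes Python client can inspect and scale the workloads the orchestrator is managing. The sketch assumes a working kubeconfig on the local machine, and the namespace and deployment name are hypothetical.

```python
# Sketch of querying and scaling a cluster with the Kubernetes Python client.
# Assumes ~/.kube/config is configured; names below are hypothetical.
from kubernetes import client, config

config.load_kube_config()          # reads the local kubeconfig
core = client.CoreV1Api()

# List the pods the orchestrator is currently balancing across the cluster.
for pod in core.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)

# Scale a (hypothetical) deployment; Kubernetes schedules the new replicas.
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="orders-service",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```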

Automation: A Departure from Routine

Automation is essential to keeping this complex environment running smoothly. Popular open-source tools such as Puppet and Ansible make it possible for many tasks that were once performed by systems administrators – such as defining security policies, managing certificates, balancing processing loads and assigning network addresses – to be automated via scripts. Automation tools were developed by cloud-native companies to make it possible for them to run large-scale IT operations without legions of administrators, but the tools are useful in any context.
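
Here is a minimal sketch of that kind of scripted automation, driving an existing Ansible playbook from Python via the ansible-playbook CLI; the inventory path, playbook name and extra variable are assumptions for the example.

```python
# Minimal sketch of running an Ansible playbook from Python. The inventory,
# playbook and variable shown are hypothetical examples.
import subprocess

result = subprocess.run(
    [
        "ansible-playbook",
        "-i", "inventory/hosts.ini",        # hypothetical inventory file
        "playbooks/harden-webservers.yml",  # hypothetical playbook
        "--extra-vars", "tls_min_version=1.2",
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    raise RuntimeError(f"playbook failed:\n{result.stderr}")
```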

Automation not only saves money but improves job satisfaction. Manual, routine tasks can be assigned to scripts so that administrators can tend to more important and challenging work. And in a time of severe IT labor shortages, who doesn’t want happier employees?

Agile IT makes organizations nimbler, more responsive and faster moving. When planned and executed with the help of an experienced integration partner, it saves money as well.

 

When choosing a cloud provider, it pays to think small!

When you buy wine, do you go to the big discount store or the local specialty retailer? Chances are you do both, depending on the situation. The big-box store has selection and low prices, but the people who run the wine store on the corner can delight you with recommendations you couldn’t find anywhere else.

The same dynamics apply to choosing a cloud service provider. When you think of cloud vendors, there are probably four or five company names that immediately come to mind. But if you Google rankings of cloud vendors according to customer satisfaction or relevance to small businesses, you’ll find quite a different list. There are hundreds of small, regional and specialty infrastructure-as-a-service providers out there. In many cases, they offer value that the giants can’t match. Here are five reasons to consider them.

Customer service – this is probably the number one reason to go with a smaller hosting provider. If you have a problem, you can usually get a person on the phone. Over time, the service provider gets to know you and can offer advice or exclusive discounts. The big cloud companies just can’t match this personalized service.

Specialty knowledge – You can find apps for just about anything in the marketplace sections of the big cloud companies, but after that you’re pretty much on your own. If struggling with configuration files and troubleshooting Apache error messages isn’t your cup of tea, then look for a service provider that specializes in the task you’re trying to accomplish. Not only do you usually get personal service, but the people are experts in the solutions they support. They’ll get you answers fast.

A smile and a handshake – There are several good reasons to choose a vendor in your geographic area. For one thing, government-mandated data protection laws may require it. Local providers also offer a personal touch that call centers can’t match. You can visit their facilities, meet with them to plan for your service needs and get recommendations for local developers or contractors you might need. Many small vendors also offer colocation options and on-site backup and disaster recovery. In a technology world where everything seems to have gone virtual, it’s nice to put a name with a face.

Low cost – This sounds counterintuitive, but the reality is that many specialty providers are cheaper than the cloud giants. That’s particularly true if they specialize in an application like WordPress or Drupal, or in a service like backup. These companies can leverage economies of scale to offer competitive prices, and you get all the other benefits of their specialized knowledge on top. Shop around; you might be surprised.

Performance – If the primary users of the cloud service are people in your company and/or in your geographic region, you will probably realize better performance with a local vendor. That’s simply the laws of physics at work: the farther data has to travel, the longer it takes to reach its destination. This is particularly important if you plan to use services like cloud storage or if you need to transfer large files, an error-prone process that only gets worse with distance.

Public, private or hybrid cloud? Make the smart choice!

You know you want to move to the cloud, but you don’t know which of the three major options – public, private and hybrid – is right for you. We’re here to help with this quick overview of the options, as well as the pros and cons of each.

Public Cloud

Think of this as a server in the sky. Public cloud, also known as infrastructure-as-a-service, provides the equivalent of a data center in a highly scalable, virtualized environment accessed over the internet. Customers can provision virtual servers – called “instances” – and pay only for the capacity they use. Many public cloud features are automated for self-service. Users can deploy their own servers when they wish and without IT’s involvement. Accounting and chargeback are automated. In fact, organizations often find the public cloud delivers the most significant savings not in equipment costs, but in administrative overhead.
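
As one concrete illustration of that self-service, pay-for-what-you-use model, here is a hedged sketch using AWS’s boto3 SDK; the AMI ID, instance type and region are placeholders rather than recommendations, and other public cloud providers expose equivalent APIs.

```python
# Sketch of self-service provisioning on a public cloud using boto3.
# The AMI ID, instance type, region and tag are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "owner", "Value": "dev-team"}],  # supports chargeback
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"launched {instance_id}; billing stops when it is terminated")

# Later, stop paying for the capacity by terminating the instance.
ec2.terminate_instances(InstanceIds=[instance_id])
```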

The best applications to deploy in a public cloud are those that are already virtualized or that run on unmodified Linux or Windows operating systems. Commercial, off-the-shelf applications are a good example. Public cloud is also a good platform to use in evaluating and testing new applications, since many public cloud providers offer a wide variety of applications on a pay-as-you-go basis. Public cloud is also well suited to developing so-called “cloud native” applications, such as mobile apps.

Public cloud isn’t ideal for every use. Homegrown applications on legacy platforms or those with significant interdependencies may not migrate smoothly. Organizations that aren’t careful to manage instances can end up paying for unused capacity. There are also hidden costs to be aware of, such as surcharges for data uploads and downloads or upcharges for guaranteed levels of service. Regulatory issues may also rule out the use of public cloud for some applications entirely.

Private Cloud

This is essentially a public cloud for use only by a single customer. Private clouds may be constructed on premises using virtualization and automation software, or licensed from service providers who deliver cloud services either from their own data centers or even on the customer’s own premises.

Private cloud is popular with companies that need tight control over data, whether for security, privacy or regulatory purposes. In regulated industries that specify how customer data must be stored and managed, it is sometimes the only cloud option. It’s also attractive for companies that need guaranteed service levels without the unpredictability of the public internet. Finally, private cloud provides the highest level of control for organizations that want deep visibility into who is using resources and how.

Private cloud is typically more expensive than public cloud because service providers must allocate capacity exclusively to the dedicated environment. However, that isn’t always the case. For companies with large capital investments in existing infrastructure, an on-premises private cloud is a good way to add flexibility, automation and self-provisioning while preserving the value of their existing equipment. For predictable workloads, it can be the cheapest of the three models.

Hybrid Cloud

This is the most popular option for large corporations, and is expected to dominate the cloud landscape for the foreseeable future. Hybrid cloud combines elements of both public and private cloud in a way that enables organizations to shift workloads flexibly while keeping tight control over their most important assets. Companies typically move the functions that the public cloud handles more efficiently, but keep others in-house. The public cloud may act as an extension of an on-premises data center or be dedicated to specific uses, such as application development. For example, a mobile app developed in the public cloud may draw data from data stores in a private cloud.

Many of the benefits of hybrid cloud are the same as those of private cloud: control, security, privacy and guaranteed service levels. Organizations can keep their most sensitive data on premises but shift some of it to the public cloud at lower costs. They can also reduce costs by using public cloud to handle occasional spikes in activity that overtax their own infrastructure, a tactic known as “cloud bursting.” Hybrid cloud is also a transition stage that companies use as they move from on-premises to public cloud infrastructure.
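
A toy sketch of the cloud-bursting idea follows: steady-state work stays on the private cloud, and jobs overflow to public capacity only when utilization crosses a threshold. The threshold and the two submit functions are hypothetical placeholders, not a real scheduler.

```python
# Toy illustration of a cloud-bursting decision. Threshold and the two
# submit functions are hypothetical placeholders for real schedulers.
PRIVATE_CAPACITY_THRESHOLD = 0.80  # assumed utilization ceiling

def submit_to_private(job):   # placeholder for the on-premises scheduler
    print(f"private cloud <- {job}")

def submit_to_public(job):    # placeholder for the public-cloud scheduler
    print(f"public cloud  <- {job}")

def dispatch(job, private_utilization: float) -> None:
    """Route a job based on current private-cloud utilization."""
    if private_utilization < PRIVATE_CAPACITY_THRESHOLD:
        submit_to_private(job)
    else:
        submit_to_public(job)   # burst the spike to rented capacity

dispatch("nightly-report", private_utilization=0.65)
dispatch("quarter-end-batch", private_utilization=0.93)
```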

There are many more dimensions to the public/private/hybrid cloud decision. A good managed service provider can help you understand the options and estimate the benefits and trade-offs.

It’s time to rethink cybersecurity.

For many years, organizations have focused their security efforts on endpoint protection. Firewalls, antivirus software, intrusion detection and anti-spyware tools are all effective to a point, but they are failing to stop the vast majority of threats.

A recent ServiceNow survey of 300 chief information security officers found that 81% are highly concerned that breaches are going unaddressed and 78% are worried about their ability to detect breaches in the first place. IBM’s 2017 X-Force Threat Intelligence Index reported a 566% increase in the number of compromised records in 2016 compared to the previous year. FireEye reported that the average time it takes an organization to detect an intrusion is over 200 days.

Endpoint security measures will only become less effective as the number of endpoints proliferates. Smartphones introduced a whole new class of threats, and the internet of things (IoT) will add billions of endpoint devices to networks over the next few years, many of which have weak or no security.

That’s why cybersecurity, in the words of Cisco CEO Chuck Robbins, “needs to start in the network.” The approach that Cisco is championing recognizes the reality that breaches today are inevitable but that they needn’t be debilitating. The increasing popularity of security operations centers shows that IT organizations are shifting their attention to creating an integrated view of all the activity on their networks – including applications, databases, servers and endpoints – and adopting tools that can identify patterns that indicate a breach. For example, multiple access attempts from a certain IP address or large outbound file transfers may indicate an intrusion, and that activity can be stopped before much damage is done.
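
A deliberately simple sketch of that kind of pattern matching over access logs follows; the log records and alert thresholds are made up for illustration.

```python
# Illustrative detection of the patterns described above: repeated failed
# logins from one IP and unusually large outbound transfers. The records
# and thresholds are invented for the example.
from collections import Counter

access_log = [
    {"src_ip": "203.0.113.7", "event": "login_failed", "bytes_out": 0},
    {"src_ip": "203.0.113.7", "event": "login_failed", "bytes_out": 0},
    {"src_ip": "203.0.113.7", "event": "login_failed", "bytes_out": 0},
    {"src_ip": "198.51.100.4", "event": "file_transfer", "bytes_out": 9_400_000_000},
]

failed = Counter(r["src_ip"] for r in access_log if r["event"] == "login_failed")
for ip, count in failed.items():
    if count >= 3:                         # assumed alert threshold
        print(f"ALERT: {count} failed logins from {ip}")

for r in access_log:
    if r["bytes_out"] > 5_000_000_000:     # assumed exfiltration threshold (5 GB)
        print(f"ALERT: large outbound transfer from {r['src_ip']}")
```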

Fortunately, technology is evolving to support the network-centric approach. Big data platforms like Hadoop have made it practical and affordable for organizations to store large amounts of data for analysis. Streaming platforms like Apache Spark and Kafka can capture and analyze data in near real-time. Machine learning programs, when applied to large data stores like Hadoop, can continuously sort through network and server logs to find anomalies, becoming “smarter” as they go.
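
For example, an off-the-shelf anomaly detector such as scikit-learn’s IsolationForest can flag unusual rows in log-derived features. The feature matrix below is synthetic; in practice the rows would be built from the network and server logs described above.

```python
# Sketch of anomaly detection over log-derived features with scikit-learn.
# The feature matrix is synthetic and the features are assumed examples.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Features per host-hour: [requests, failed_logins, bytes_out_gb]
normal = rng.normal(loc=[200, 1, 0.5], scale=[30, 1, 0.2], size=(500, 3))
suspicious = np.array([[190, 40, 0.4], [220, 2, 25.0]])  # injected anomalies
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)          # -1 marks an anomaly, 1 marks normal
print("anomalous rows:", np.where(labels == -1)[0])
```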

And the cloud presents new deployment options. That’s why security is rapidly migrating from dedicated hardware to cloud-based solutions using a software-as-a-service model. Grand View Research estimates that the managed security services market was worth more than $17.5 billion in 2015, and that it will grow to more than $40 billion by 2021. As organizations increasingly virtualize their networks, these services will become integrated into basic network services. That means no more firmware upgrades, no more site visits to fix balky firewalls and no more anti-malware signature updates.

It’s too early to say that the tide has turned favorably in the fight with cyber-criminals, but the signs are at least promising. It’s heartening to see Cisco making security such an important centerpiece of its strategy. Two recent acquisitions – Jasper and Lancope – give the company a prominent presence in cloud-based IoT security and deep learning capabilities for network and threat analysis. The company has said that security will be integrated into every new product it produces going forward. Perhaps that’s why Robbins has called his company, “the only $2 billion security business that is growing at double digits.”

Security solutions are not enough to fight ransomware. Make sure you have a good recovery strategy.

If the notion of ransomware was unknown to you until now, the WannaCryptor (WannaCry) attack of May 12, which had repercussions around the globe and across every sphere of activity, has certainly made you aware of the consequences of such attacks, which know no borders.

Computer attacks by ransomware cost businesses millions of dollars a year and are becoming increasingly sophisticated and difficult to avoid. The peculiarity of this type of attack is that it spreads quickly through shared files, sometimes in a matter of hours, as the attack of May 12 demonstrated. Ransomware generally infiltrates through the weakest point in the network, typically the user’s email account or social networking sites.

The ransomware locks the computer or encrypts the files, demanding payment of a “ransom” before users regain access to their data. But paying the ransom does not guarantee recovery of the data*, not to mention that organizations that give in to the hackers’ blackmail become targets of choice for the next attack…

If you are lucky, your business was not targeted by the virus and you feel relieved to have been spared. In this case, remember the lesson: you were lucky this time, but rest assured that this type of attack will happen again, and that your organization may well be the victim next time.

Forward-thinking organizations have invested large sums of money to secure their IT environments and the data that passes through them, data which is often critical and whose destruction can jeopardize business continuity. Although security solutions are part of the equation when it comes to protecting your assets, they are only one part of the strategy to counter these threats.

A complete solution to protect you from viral attacks must include a recovery plan with accessible and full backup copies in order to restore your environment as it was before the attack.

Implementing a recovery plan gives you assurance that you can recover quickly and minimize downtime, which is often the weakest link in the handling of a computer attack. The faster you get back up to speed, the less your customers and suppliers will have to turn to alternatives that could ultimately prove very costly to your business and reputation, even putting it at risk.

Companies that have industry-specific compliance standards are generally more aware and better equipped to quickly restore their infrastructure in the event of an attack. To find out whether your company has an adequate recovery strategy, ask yourself the following questions (a minimal verification sketch follows the list):

  • Is your backup off site (i.e. away from your primary site)?
  • Can you verify that the backups are happening?
  • How quickly can you restore data that’s taken hostage?
  • Is your original data backed up in an unalterable way, ensuring a complete and integral recovery of your data in the event of a ransomware attack?
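
As a minimal illustration of the second question, the sketch below checks that the most recent backup is fresh and matches the checksum recorded when it was written; the backup location, age limit and checksum-file convention are assumptions.

```python
# Minimal backup freshness and integrity check. The directory, age limit
# and ".sha256" sidecar convention are assumptions for the example.
import hashlib
import time
from pathlib import Path

BACKUP_DIR = Path("/mnt/offsite-backups")   # hypothetical off-site mount
MAX_AGE_HOURS = 26                          # assumed daily backup plus margin

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_latest_backup() -> None:
    backups = sorted(BACKUP_DIR.glob("*.tar.gz"), key=lambda p: p.stat().st_mtime)
    if not backups:
        raise RuntimeError("no backups found at all")
    latest = backups[-1]
    age_hours = (time.time() - latest.stat().st_mtime) / 3600
    if age_hours > MAX_AGE_HOURS:
        raise RuntimeError(f"latest backup {latest.name} is {age_hours:.0f}h old")
    # Compare against the checksum recorded when the backup was written
    # (assumed to live alongside the archive in a .sha256 file).
    expected = Path(str(latest) + ".sha256").read_text().split()[0]
    if sha256(latest) != expected:
        raise RuntimeError(f"checksum mismatch for {latest.name}")
    print(f"OK: {latest.name}, {age_hours:.1f}h old, checksum verified")

check_latest_backup()
```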

By answering these questions, you will take the first step to address the gaps in your recovery strategy in the event of a computer attack. Be prepared to face upcoming threats to protect your assets!

* A recent survey found that of those victims of ransomware who paid the ransom, only 71% had their files restored.

 

Network challenges? Optimize your environment!

Business networks are often like children: they grow unnoticed, sometimes in a disorganized and often unexpected way. The company can quickly end up with a sprawl of unoptimized equipment to manage.

But the network keeps on growing: management wants to install a videoconferencing system, or to back up a subsidiary’s data and keep the copies at the head office…

Can your network support these new features? The answer is probably not.

From there, problems multiply. Over time, users experience slowdowns, phone calls become choppy and intermittent outages may even occur. How do you solve these problems? Where do you look?

With a multitude of disparate equipment, and often without a centralized logging system, it is difficult to investigate and find a problem.

Network analysis: why and how

For ESI, each client is different. The most important part of our work is, first of all, to understand the client’s situation and what led them to need a network analysis. A new feature to add? Intermittent outages? A desire to plan future investments in the network?

Once this objective is established, we analyze the most recent network diagrams, if any exist. We examine the equipment, the configurations, the redundancy, the segmentation… all in order to assess the overall health of the environment.

We can thus identify:

  • End-of-life equipment
  • Equipment close to failure
  • Configuration problems / optimizations
  • Network bottlenecks

But most importantly, depending on your needs, we help you identify priorities for investment in the network in the short, medium and long term. At the end of the analysis, our clients obtain:

  • An accurate view of their network
  • An action plan on existing equipment
  • An investment plan.

Why ESI?

ESI Technologies has been assisting companies to plan and modify their infrastructure for more than 22 years now!
Contact us now to find out more about what ESI can do for you!

Take a unified approach to Wi-Fi security!

For many organizations, Wi-Fi access is no longer a luxury. Employees need flexible access as they roam about the office, and customers and partners expect to connect whenever they are on site. But providing that access opens up a host of potential security problems if access points aren’t rigorously monitored, patched and maintained. As the number of access points grows, it’s easy to let this important maintenance task slip.

Security teams are so busy fighting fires that preventive maintenance is often overlooked. Kaspersky Labs recently analyzed data from nearly 32 million Wi-Fi hotspots around the world and reported that nearly 25% had no encryption at all. That means passwords and personal data passing through those devices can be easily intercepted by anyone connected to the network.

Virtual private networks (VPNs) are one way to keep things secure, but 82% of mobile users told IDG they don’t always bother to use them. The profusion of software-as-a-service (SaaS) options encourages this. Gartner has estimated that by 2018, 25% of corporate data will bypass perimeter security and flow directly to the cloud.

The Wi-Fi landscape is changing, thanks to mobile devices, cloud services and the growing threat of cyber attacks. This means that Wi-Fi security must be handled holistically, with a centralized approach to management and an architecture that integrates both endpoint protection and network traffic analysis. Cisco has spent more than $1 billion on security acquisitions since 2015, and it has put in place the necessary pieces to provide this integration.

Cisco Umbrella, which the company announced last month, is a new approach to securing the business perimeter that takes into account the changing ways people access the internet. Umbrella gives network and security managers a complete picture of all the devices on the network and what they are doing. For example, by combining Umbrella with Cisco Cloudlock Cloud Access Security Broker technology, organizations can enforce policies customized to individual SaaS applications and even block inappropriate services entirely. They can also block connections to known malicious destinations at the DNS and IP layers, which cuts down on the threat of malware. Umbrella can also discover and control sensitive data in SaaS applications, even when those applications are off the network.
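
To illustrate the DNS-layer idea in miniature (this is not Umbrella itself, just the general technique), the sketch below refuses to resolve destinations that appear on a local blocklist before any connection is attempted; the blocklist entries are invented.

```python
# Illustrative DNS-layer blocking: refuse to resolve destinations on a
# local blocklist. The blocklist entries are hypothetical examples.
import socket

BLOCKLIST = {"malicious.example", "phishing.example"}   # hypothetical feed

def resolve_if_allowed(hostname: str) -> str:
    """Resolve a hostname only if it is not on the blocklist."""
    if hostname.lower().rstrip(".") in BLOCKLIST:
        raise PermissionError(f"blocked at DNS layer: {hostname}")
    return socket.gethostbyname(hostname)

print(resolve_if_allowed("example.com"))        # resolves normally
# resolve_if_allowed("malicious.example")       # would raise PermissionError
```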

Cisco’s modernized approach to security also uses the power of the cloud for administration and analysis. Cisco Defense Orchestrator resolves over 100 billion Internet requests each day. Its machine learning technology compares this traffic against a database of more than 11 billion historical events to look for patterns that identify known malicious behavior. Defense Orchestrator can thus spot breaches quickly so they can be blocked or isolated before they do any damage. Thanks to the cloud, anonymized data from around the Internet can be combined with deep learning to continually improve these detection capabilities. Predictive analytical models enable Cisco to identify where current and future attacks are staged. In other words, Cisco’s security cloud gets smarter every day.

Umbrella can integrate with existing systems, including appliances, feeds and in-house tools, so your investments are protected. It’s built upon OpenDNS, a platform that has been cloud-native since its inception more than a decade ago. It’s the basis for Cisco’s security roadmap going forward.

A great way to get started with Cisco Umbrella is by revisiting protection on your Wi-Fi access points. We know Cisco networks inside and out, so let us put you on the on-ramp to the future of network security.