How Big Data changes the rules in banking

Online banking has been a win-win proposition for banks and their customers. Customers get the speed and convenience of self-service and banks enjoy big savings on transaction costs. But for many consumer banks in particular, going online has also meant losing the critical customer engagement that branches have long provided. When one online banking service looks pretty much like every other, banks need to find new ways to set themselves apart. By leveraging cloud and big data analytics, banks can re-engage in new and innovative ways via web and mobile platforms.

In a recent report outlining a new digital vision for consumer banks, Accenture offers examples of what some banks are already doing to enhance the online experience and strengthen bonds with their customers.

  • Banco Bilbao Vizcaya Argentaria of Spain captures more than 80 transaction characteristics every time customers use their debit card and uses the information to help consumers manage and forecast day-to-day spending.
  • BNP Paribas Fortis of Belgium partnered with the country’s largest telecommunications provider to create a mobile e-commerce platform that enables consumers to shop and pay for products from their smart phones. The service makes it easier for consumers to find merchants and helps local businesses get paid more quickly, which is good for the bank’s commercial business.
  • Commonwealth Bank of Australia has a mobile app that enables customers to use augmented reality to get detailed information about homes they might want to buy by simply pointing their smartphone camera at the property. The app also tells users exactly how much they will pay for a mortgage from the bank.
  • Five of Canada’s largest banks have partnered with Sensibill to integrate automated receipt management functionality into their digital banking apps. Customers can use the service to organize receipts and get reports that help them with budgeting.

These efforts are successful because the banks see themselves as more than just money managers. They’ve broadened their perspective to become allies to their customers in helping them become more efficient and achieve their dreams.

The cloud offers unprecedented capabilities for banks to integrate other services into their core applications through APIs. For example, many financial services companies now offer credit reporting as a free feature. Credit agencies are eager to promote their brands through this kind of integration, and they make it easy for banks to work with them.
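As a rough illustration of this kind of API integration, here is a minimal Python sketch of a bank-side wrapper around a third-party credit-score service. The endpoint, field names and `CreditScoreClient` class are hypothetical, not any real bureau's API; the fetch function is injectable so the integration can be exercised offline.

```python
# Hypothetical sketch: embedding a third-party credit-score feature behind
# the bank's own API layer. Endpoint and response fields are illustrative.

import json
from urllib.request import urlopen  # used only against a real endpoint


class CreditScoreClient:
    def __init__(self, fetch=None):
        # `fetch` is injectable so the integration can be tested offline.
        self._fetch = fetch or self._http_fetch

    def _http_fetch(self, customer_id: str) -> str:
        # Placeholder URL for an imagined credit bureau API.
        url = f"https://api.example-bureau.com/v1/scores/{customer_id}"
        with urlopen(url) as response:
            return response.read().decode()

    def score(self, customer_id: str) -> int:
        """Return the customer's credit score from the bureau's JSON payload."""
        return json.loads(self._fetch(customer_id))["score"]


# Offline usage with a stubbed bureau response:
client = CreditScoreClient(fetch=lambda cid: json.dumps({"score": 712}))
result = client.score("cust-42")
```

Injecting the transport this way keeps the bureau dependency swappable, which matters when several agencies compete to provide the same feature.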

When cloud is combined with big data, banks can put their existing knowledge of their customers to work in new ways. For example, they can segment customers by spending and saving behavior and offer services tuned to different budgets. They can target services to distinct customer segments based on geography or age group by overlaying demographics on customer data. They can even listen in on social media conversations to pinpoint opportunities to offer, for example, car loans to fans of specific vehicle makes and models.
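The behavioural segmentation described above can be sketched in a few lines. This is an illustrative example only: the field names, thresholds and segment labels are assumptions, not taken from any real banking system.

```python
# Hypothetical sketch: segmenting customers by spending and saving behaviour
# so that services can be tuned to different budgets. All thresholds are
# illustrative assumptions.

from dataclasses import dataclass


@dataclass
class CustomerProfile:
    customer_id: str
    avg_monthly_spend: float    # average debit-card spend per month
    avg_monthly_savings: float  # average amount moved to savings per month


def segment(profile: CustomerProfile) -> str:
    """Assign a coarse behavioural segment used to tune service offers."""
    total = profile.avg_monthly_spend + profile.avg_monthly_savings
    savings_rate = profile.avg_monthly_savings / max(total, 1.0)
    if savings_rate >= 0.30:
        return "saver"           # candidate for investment products
    if profile.avg_monthly_spend >= 3000:
        return "high-spender"    # candidate for premium cards and rewards
    return "budget-conscious"    # candidate for budgeting tools


customers = [
    CustomerProfile("c1", 1200.0, 800.0),
    CustomerProfile("c2", 4500.0, 200.0),
    CustomerProfile("c3", 900.0, 50.0),
]
segments = {c.customer_id: segment(c) for c in customers}
```

In practice the rules would be learned from transaction history rather than hand-set, but the principle is the same: existing customer data drives which offer each segment sees.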

The biggest impediments to this kind of transformation aren’t technical but rather cultural. If banks see themselves as simply stewards of money, they limit their potential to break out of historical niches. But when they see themselves as allies in their customers’ financial success, they can use the cloud and big data to expand and enrich relationships. The mobile app is the new branch, and a good service provider can help your financial institution realize its transformative potential.

Tips for a pain-free journey to software-defined infrastructure

By some estimates, 70% of the servers in enterprise data centers are now virtualized, meaning that nearly every company is enjoying the benefits of flexibility, high utilization rates and automation that virtualization provides.

If you’re one of them, you might be tempted to move your network, storage and desktops to software-defined infrastructure (SDI) as quickly as possible. That’s a great long-term strategy. In fact, Gartner predicts that programmatic infrastructure will be a necessity for most enterprises by 2020. But you should move at your own pace and for the right reasons. Don’t rush the journey, and be aware of these common pitfalls.

Have a strategy and a plan. Think through what you want to virtualize and why you want to do it. Common reasons include improving the efficiency of equipment you already have, improving application performance or building the foundation for hybrid cloud. Knowing your objectives will give you, and your technology partner, a better fix on what to migrate and when.

Be aware that many areas of SDI are still in early-stage development and standards are incomplete or nonexistent. This makes mission-critical applications poor candidates for early migration. Start with low-risk applications and implement in phases, being aware that a full migration may take years and that some legacy assets may not be worth virtualizing at all. If you’re new to SDI, consider virtualizing a small part of your infrastructure, such as firewalls or a handful of desktops, to become familiar with the process.

For all the flexibility SDI provides, it also introduces complexity. You’ll now have a virtual layer to monitor in addition to your existing physical layers. That’s not a reason to stay put, but be aware that management and troubleshooting tasks may become a bit more complex.

Map dependencies. In a perfect world, all interfaces between software and hardware would be defined logically, but we know this isn’t a perfect world. In the rush to launch or repair an application, developers may create shortcuts by specifying physical dependencies between, say, a database and storage device. These connections may fail if storage is virtualized. Understand where any such dependencies may exist and fix them before introducing a software-defined layer.
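A simple way to start mapping such dependencies is to scan application configuration for hard-coded physical references. The sketch below is illustrative, assuming text-based configs; the two patterns shown (raw device paths and fixed IPv4 addresses) are examples, not an exhaustive audit.

```python
# Illustrative sketch: flag hard-coded physical dependencies (raw device
# paths, fixed IP addresses) in config text, since these may break once
# storage or network is virtualized. Patterns are examples, not exhaustive.

import re

PHYSICAL_PATTERNS = {
    "raw device path": re.compile(r"/dev/sd[a-z]\d*"),
    "hard-coded IPv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}


def find_physical_dependencies(config_text: str):
    """Return (label, match) pairs for suspicious physical references."""
    hits = []
    for label, pattern in PHYSICAL_PATTERNS.items():
        for match in pattern.findall(config_text):
            hits.append((label, match))
    return hits


sample_config = """
db.storage = /dev/sdb1
db.host = 10.0.4.17
cache.host = cache.internal
"""
hits = find_physical_dependencies(sample_config)
```

Note that the logical hostname `cache.internal` passes cleanly: that is exactly the kind of indirection that survives virtualization, whereas the device path and fixed IP are candidates for cleanup before a software-defined layer goes in.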

SDI requires a new approach to systems management as well. Since new devices can be introduced to the network with little or no manual intervention, it can be difficult to forecast their performance impact in advance. Be sure to factor analytics and performance management metrics into your planning so that you have a way of modeling the impact of changes before making them.

Use standards. Many SDI standards are still a work-in-progress. While most vendors do a good job of adhering to a base set of standards, they may also include proprietary extensions that could affect compatibility with third-party products. To ensure you have the greatest degree of flexibility, look for solutions that conform to standards like the Open Networking Foundation’s OpenFlow and OpenSDS for storage.

SDI relies heavily on application programming interfaces (APIs) for communication. Since there are no universal standards for infrastructure APIs, they are a potential source of lock-in if your SDI solution requires APIs specific to a particular vendor. Look for solutions that adhere to APIs defined by industry standards instead.

Double down on security. Virtual connections create certain security vulnerabilities that don’t exist in a world where everything is physically attached. For example, the heart of a software-defined network is an SDN controller, which manages all communications between applications and network devices. If the controller is breached, the entire network is at risk, so it’s essential to choose a trusted platform with the ability to validate any new applications or components. Make sure the platforms that manage your virtual processes are locked down tight.

Don’t forget the human factor. One of the great benefits of SDI is that it enables many once-manual processes to be automated. This will impact the skill sets you need in your data center. Deep hardware knowledge will become less important than the ability to manage applications and infrastructure at a high level. Prepare your staff for this shift and be ready to retrain the people who you believe can make the transition.

These relatively modest pitfalls shouldn’t stop you from getting your organization ready to take advantage of the many benefits of SDI. Working with an experienced partner is the best way to ensure a smooth and successful journey.

Is your network ready for digital transformation?

If your company has more than one location, you know the complexity that’s involved in maintaining the network. You probably have several connected devices in each branch office, along with firewalls, Wi-Fi routers and perhaps VoIP equipment. Each patch, firmware update or new malware signature needs to be installed manually, necessitating a service call. The more locations you have, the bigger the cost and the greater the delay.

This is the state of technology at most distributed organizations these days, but it won’t scale well for the future. Some 50 billion new connected smart devices are expected to come online over the next three years, according to Cisco. This so-called “Internet of things” (IoT) revolution will demand a complete rethinking of network infrastructure.

Networks of the future must flexibly provision and manage bandwidth to accommodate a wide variety of usage scenarios. They must also be manageable from a central point. Functionality that’s currently locked up in hardware devices must move into software. Security will become part of the network fabric, rather than distributed to edge devices. Software updates will be automatic.

Cisco calls this vision “Digital Network Architecture” (DNA). It’s a software-driven approach enabled by intelligent networks, automation and smart devices. By virtualizing many functions that are now provided by physical hardware, your IT organization can gain unparalleled visibility and control over every part of its network.

For example, you can replace hardware firewalls with a single socket connection. Your network administrators can get a complete view of every edge device, and your security operations staff can use analytics to identify and isolate anomalies. New phones, computers or other devices can be discovered automatically and appropriate permissions and policies enforced centrally. Wi-Fi networks, which are one of the most common entry points for cyber attackers, can be secured and monitored as a unit.

One of the most critical advantages of DNA is flexible bandwidth allocation. Many organizations today provision bandwidth on a worst-case scenario basis, resulting in excess network capacity that sits idle much of the time. In a fully software-defined scenario, bandwidth is allocated only as needed, so a branch office that’s experiencing a lull doesn’t steal resources from a busy one. Virtualized server resources can also be allocated in the same way, improving utilization and reducing waste.

IoT will demand unprecedented levels of network flexibility. Some edge devices – such as point-of-sale terminals – will require high-speed connections that carry quick bursts of information for tasks such as credit card validation. Others, like security cameras, need to transmit much larger files but have greater tolerance for delay. Using a policy-based DNA approach, priorities can be set to ensure that each device gets the resources it needs.
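The priority scheme described above can be sketched as a simple policy table plus an allocator. This is a conceptual illustration, not how a Cisco DNA controller is actually configured: the device classes, priorities and `allocate` function are all assumptions, and a real SDN controller would enforce such policies at the network layer.

```python
# Hypothetical sketch of policy-based bandwidth priorities for IoT device
# classes. Classes, priorities and numbers are illustrative assumptions.

POLICIES = {
    # class: lower number = higher priority
    "pos-terminal":    {"priority": 1},  # short, latency-sensitive bursts
    "office-client":   {"priority": 2},  # mixed interactive traffic
    "security-camera": {"priority": 3},  # large, delay-tolerant streams
}


def allocate(total_mbps: float, demands: dict) -> dict:
    """Grant bandwidth in priority order; leftover flows to lower priorities."""
    remaining = total_mbps
    grants = {}
    # Devices are named "<class>#<n>"; sort by their class priority.
    for device, requested in sorted(
            demands.items(),
            key=lambda item: POLICIES[item[0].split("#")[0]]["priority"]):
        granted = min(requested, remaining)
        grants[device] = granted
        remaining -= granted
    return grants


demands = {"pos-terminal#1": 10, "office-client#1": 40, "security-camera#1": 80}
grants = allocate(100, demands)
```

With 100 Mbps available, the point-of-sale terminal and office client are fully served first, and the delay-tolerant camera absorbs whatever remains, which mirrors the policy intent: each device class gets the resources its traffic profile actually needs.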

Getting to DNA isn’t an overnight process. Nearly every new product Cisco is bringing to the market is DNA-enabled. As you retire older equipment, you can move to a fully virtualized, software-defined environment in stages. In some cases, you may find that the soft costs of managing a large distributed network – such as travel, staff time and lost productivity – already justify a switch. Whatever the case, ESI has the advisory and implementation expertise to help you make the best decision.

Cloud Strategy: data collection

Here is part 6 of our series covering the key issues to consider before adopting cloud technologies. This month, we discuss how to build your strategy and the data points that must be considered.

When building a cloud strategy, organisations need to consider the desired business objectives and outcomes, set quantifiable and time-bound goals, and identify the specific initiatives the enterprise can and should undertake to execute the strategy and achieve those goals. As Gartner surveys on the subject from 2013 and 2014 show, process and culture are likely to be the biggest hurdles in any move to cloud. Involving all aspects of the business and gathering the right information can therefore help build the right strategy and identify potential problems ahead of time.

The first concrete step in building this strategy is to gather the data points needed to identify and define those objectives, goals and initiatives for the enterprise in the near and mid term. Once the data is collected, you can review and analyze it to identify the desired business outcomes, set the (quantifiable) goals and define the specific initiatives you want to put in place to achieve them. This should not be a strict price or technology evaluation.

Data Collection
The data points needed will have to come from various parts of the organisation (business units, finance, HR and IT). Some of the information required may take the form of files, but much of it will reside with your staff directly, so interviews should be part of the data collection process. These interviews should take up to a few hours each and focus on the interviewee’s functions, the processes used and the required or desired business outcomes, providing insight into the actual impacts on the business before you create your cloud strategy.

With this data, you will be in a position to account for all aspects cloud computing touches, to see what it will affect and how, to evaluate its effect on the balance sheet (positive or negative) and to decide on your strategy moving forward.

Benoit Quintin, Director Cloud Services – ESI Technologies

Cloud Strategy – human impacts across organization

Here is part five of our series covering the key issues to consider before adopting cloud technologies. This month, we discuss the impact on human resources.

Resources in your organisation, on both the IT side and the business side, will be impacted by this change. While helping companies move to cloud, we have had to assist with adapting IT job descriptions, processes and roles within the organisation.

As the IT organisation moves into a P&L role, its success starts to be tied to the adoption by the stakeholders of the services offered. To do this, IT needs to get closer to the business units, understand their requirements and deliver access to resources on-demand. All this cannot happen unless things change within the IT group.

As companies automate their practice, and create a self-service portal to provision resources, some job descriptions need to evolve. A strong and clear communication plan with set milestones helps employees understand the changes coming to the organisation, and involving them in the decision process will go a long way to assist in the transition. We have seen that IT organisations with a clear communication plan at the onset that involved their employees in the process had a much easier transition, and faster adoption rate than those who did not.

Our experience helping customers with cloud computing shows that cloud significantly alters IT’s role and relationship with the business, and that employees’ roles need to evolve. Training, staff engagement in the transition and constant communication will significantly help your organisation move to this new paradigm.

Benoit Quintin, Director Cloud Services – ESI Technologies

Cloud Strategy: technological impacts

Here is part four of our series covering the key issues to consider before adopting cloud technologies. This article focuses specifically on technological impacts to consider.

Not all software technology is created equal. Not every application will migrate gracefully to the cloud: some will never tolerate the latency, while others were never designed to have multiple smaller elements working together rather than a few big servers. This means your business applications will need to be evaluated for cloud readiness. This is possibly the largest technological hurdle but, as with all technology, it may prove easier to solve than some of the other organisational issues.

One should look at the application’s architecture (n-tiered or monolithic), its tolerance to faults (e.g. latency, network errors, services down, servers down) and how users consume the application (always from a PC at the office, or fully decentralized with offline and mobile access) to evaluate options for migrating it to the cloud. An organisation’s current growth rate and state are often mirrored in its IT consumption rate and requirements. An organisation experiencing high growth, or launching a project whose growth is not easily predictable, can benefit significantly from a scalable, elastic cloud model, whereas an organisation with slower growth, familiar projects and predictable IT requirements will not likely assess the value of cloud computing the same way; accountability for resources and traceability of all assets in use may be of bigger concern.

Architecture, applications and legacy environments are all technological considerations that should be factored into any cloud computing viability and readiness assessment, but they should probably not be the main driver for your cloud strategy.

Benoit Quintin, Director Cloud Services – ESI Technologies

Cloud Strategy: legal impacts across the organization

Here is part three of our series covering the key issues to consider before adopting cloud technologies. This article focuses specifically on legal impacts on your organization.

“Location, location, location”. We’re more accustomed to hearing this in the context of the housing market. However, where your company’s headquarters reside, where your company does business and where its subsidiaries are located directly impact how you need to manage sensitive information, such as strategic projects, HR/personnel information, etc.; essentially, IT needs to account for data sovereignty laws and regulations.

Various countries have already passed, or are moving towards passing, more restrictive data sovereignty legislation that controls the transit of information across borders. For example, the Canadian Personal Information Protection and Electronic Documents Act (PIPEDA) already governs how organisations can collect, use and disclose personal information in the course of commercial business. In addition, the Act contains various provisions to facilitate the use of electronic documents. Essentially, all personally identifiable information must stay in country, at rest and in transit, meaning that using a cloud provider in the US or any other country for such data could expose the company, and you, to a lawsuit, unless the cloud provider can guarantee the data never leaves the country at any time, including for redundancy and disaster recovery purposes.

While the previous Act covers what must be protected, American law (the USA Freedom Act and its previous incarnation, the Patriot Act) enables the US government to access any data residing on its soil without the owner’s authorization, without a warrant and without any need to notify the owner before or after the fact. The few data privacy provisions in the bill apply to American citizens and entities only. This means all data housed in the US is at risk, especially if it is owned by an organisation headquartered out of country.

In Europe, while laws vary from country to country, regulations on data protection are becoming more stringent, requiring the establishment of procedures and controls to protect personal data and the explicit authorization of individuals to collect and use their information. All of this imposes guidelines on the use of cloud within each country and beyond its borders.

Typically, data sovereignty should be a concern for most organisations looking at cloud and, as the current trend is for countries to pass more stringent laws, any cloud strategy should account for local, national and international regulations.

Benoit Quintin – Director Cloud Services – ESI Technologies

Cloud Strategy: business impacts across the organization

Here is the second part of our series covering the key issues to consider before adopting cloud technologies. This article focuses specifically on business impacts on your organization.

Most markets are evolving faster than ever before, and the trend seems to be accelerating, so organisations globally need to adapt and change the way they go to market. From a business standpoint, the flexibility and speed with which new solutions can be delivered via cloud help business units react faster and better. So much so that, where IT organisations have not considered automating aspects of provisioning to provide more flexibility and faster access to resources, business units have started going outside IT, to some of the public cloud offerings, for resources.

Planning for cloud should consider people and processes, as both will likely be directly impacted. From the requisition of resources all the way to charging back the different business units for resources consumed (managed independently from project budgets), processes created before the advent of cloud in your organisation should be adapted, if not discarded and rebuilt from scratch. IT will need to change and evolve as it becomes an internal service provider (in many instances, a P&L entity) and resource broker for the business units.

IT has typically received large capital investments as a budget to ‘keep the lights on’, and until recently that budget had been growing at double-digit rates since the early days of the mainframe. The switch from a capital investment model to an operational model can therefore significantly change the way IT does business. We have seen the shift force IT to focus on what it does best and to review its relationships with vendors, ultimately freeing up valuable investment resources. In many organisations, this has also enabled net new projects to come to life, in and out of IT.

Once this transformation is underway, you should start seeing some of the benefits other organisations have been enjoying, starting with faster speed to market on new offerings. Indeed, in this age of mobile everything, customers expect access to everything all the time, and your competition is likely launching new offerings every day. A move towards cloud enables projects to move forward at an accelerated pace, letting you go to market with updated offerings much faster.

Benoit Quintin, Director Cloud Services, ESI Technologies

Cloud computing: strategy and IT readiness – Transformation in IT

Here is the first of a series of articles that provide both business and IT executives insights into the key issues that they should consider when evaluating cloud services, paying particular attention to business and legal ramifications of moving to the cloud environment, whether it is private, hybrid or public.

For the last few decades, IT organisations have been the only option for provisioning IT resources for projects. Indeed, all new projects would involve IT, and the IT team was responsible for acquiring, architecting and delivering the solution that would sustain the application/project during its lifecycle, planning for upgrades along the way.
This led to silo-based infrastructures and teams, often designed for peak demand, with no possibility of efficiency gains between projects. The introduction of compute virtualization, first for test/dev and then for production, showed that other options were possible: by aggregating requirements across projects, IT could achieve significant efficiencies of scale and cost while gaining flexibility and speed to market, as provisioning a virtual server suddenly became a matter of days rather than weeks or months.
Over time, IT started applying these same methods to storage and network and these showed similar flexibility, scalability and efficiency improvements. These gains, together with automation capabilities and self-service portals, were combined over time to become what we know as ‘cloud offerings’.
In parallel, IT in some organisations has become structured, organized, usually siloed and, unfortunately, somewhat slow to respond to business needs. This has led to a slow erosion of IT’s power and influence over the acquisition, delivery and management of IT resources. Coupled with today’s commercial public cloud options, capital is rapidly leaving the organisation for third-party public cloud vendors, a phenomenon also known as shadow IT. This raises concerns, not least that funds are sent outside the organisation to address tactical issues, typically without regard to legal implications, data security or cost efficiency. These issues highlight IT’s need to react faster, become more customer driven, deliver more value and provide its stakeholders with flexibility matching that of public cloud. Essentially, IT needs to evolve into a business partner, with cloud computing providing the tools by which IT offers the flexibility, scalability and speed to market that business units are looking for in today’s market.

Benoit Quintin, Director Cloud Services, ESI Technologies

The IT Catch-22

OK, so everyone’s talking about it. Our industry is undergoing major changes. It’s out there. It started with a first reference architecture of mainframes and minicomputers designed to serve thousands of applications used by millions of users worldwide. With the advent of the Internet, it evolved into the “client-server” architecture, designed to run hundreds of thousands of applications used by hundreds of millions of users. And where are we now? We appear to be witnessing the birth of a third generation of architecture, which IDC describes as “the next generation compute platform that is accessed from mobile devices, utilizes Big Data, and is cloud based”. Referred to as “the third platform”, it is destined to deliver millions of applications to billions of users.

Virtualization seems to have been the spark that ignited this revolution. The underlying logic of this major shift is that virtualization abstracts the hardware, pooling performance and assets so they can be shared by different applications for different uses, according to the needs of different business units within an organization. The promise is that companies can do more with less. Therefore, IT budgets can be reduced!

These changes are huge. In this third platform, IT is built, run, consumed and governed differently. Everything changes from the ground up. It would seem obvious that one would need to invest in careful planning of the transition from the second to the third platform. What pace can we go at? What can be moved out into public clouds? What investments are required in our own infrastructure? How will it impact our IT staff? What training and knowledge will they require? What about security and risks?

The catch is the following: the third platform allows IT to do much more with less. Accordingly, IT budgets are reduced or, at best, flattened. Yet moving to the third platform requires investment. Get it? Every week we help CIOs and IT managers raise this issue within their organization so that they can obtain the investments they need to move to the third platform and reap its benefits.