The subject of “Maximising Success when Migrating Big Data Workloads to the Cloud” was discussed in a recent A-Team webinar. Participants included Ian Lester, Senior Principal Developer, AI Labs at Nomura; Peter Williams, Head of Partner Technology, Global Financial Services at AWS; and Dan Seal, Chief Product Officer at KX. Topics covered ranged from approaches to migrating big data and analytics to the cloud, and the challenges involved, to developing a microservices architecture and adopting Continuous Integration/Continuous Deployment (CI/CD) techniques.
Participants agreed that the motivation for cloud migration was two-fold: cost and agility. The cloud’s unrivalled scalability and cost-effective storage options, augmented by its wide range of standards and services, enable organisations to do things that were not previously possible (or at least not affordable) in terms of capacity and compute cycles. That, in turn, spawns innovation possibilities that enable them to better serve their customers by delivering swift, opportunistic solutions in timescales never previously envisioned. Peter Williams cited an example where a client wanted to examine the correlation between physical location and buying patterns by combining cellular and transaction data. The exploratory, short-term, but data-intensive nature of the analysis was something that the cloud was easily able to accommodate.
There are challenges, of course. Many on-prem solutions are old and complex and are deeply ingrained in the organisational fabric. For that reason, a graduated approach is often advised. A “lift and shift” exercise, for example, can deliver initial benefits in terms of storage costs and will help in familiarising development and operational staff with the new technologies. But the greater value comes from the application refactoring that enables better use of cloud capabilities (auto-scaling and fault tolerance, for example) and services. This applies not only in the use of cloud-based services (for system monitoring and AI processing, for example) but also in decomposing existing functionality into independent components, i.e. microservices, that promote reuse and, in eliminating duplication, dramatically reduce maintenance overhead.
Much of that benefit is achieved by the adoption of new tooling and a CI/CD approach in which the development lifecycle is accelerated by technologies like Jenkins for testing, Docker for encapsulation and Kubernetes for orchestration. In combination, they help ensure that new functionality is well defined in terms of test coverage, interfaces and dependencies, which streamlines rollout and greatly reduces deployment risk.
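As a sketch of how two of these pieces fit together, the fragments below show a minimal container image definition and a matching Kubernetes Deployment for a hypothetical analytics microservice. All names (the service, image registry, port and health endpoint) are illustrative assumptions, not details from the webinar.

```dockerfile
# Encapsulate the service and its dependencies in a container image
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "service.py"]
```

```yaml
# Kubernetes Deployment: multiple replicas provide fault tolerance,
# and the liveness probe lets the orchestrator restart unhealthy
# instances automatically
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-service            # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: analytics-service
  template:
    metadata:
      labels:
        app: analytics-service
    spec:
      containers:
        - name: analytics-service
          image: registry.example.com/analytics-service:1.0  # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz         # hypothetical health endpoint
              port: 8080
```

A CI server such as Jenkins would typically run the test suite, build and push the image, and apply the Deployment manifest, so that each change travels through the same validated path to production.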
This has a particular advantage in enabling partial decomposition of functionality in complex environments. Dan Seal commented that it could facilitate, for example, streaming data and analytics being performed on-prem with historical data being hosted in the cloud. It can also lead to a new operational approach where system support is less dependent on domain-specific knowledge of the underlying applications and more on generic cloud maintenance.
In closing comments on guidance on migrating data and analytics to the cloud, the panel concurred that it was important to begin not just with any quick win, but with one that evidences tangible long-term value. Achieving high performance by simply throwing extra hardware at the problem, rather than considering efficiency and longer-term TCO, would be a mistake, for example. They counselled instead beginning with an important area where success was critical, so that it would both foster engagement and enable teams to see the longer-term benefits, one of which, as mentioned by Ian Lester, is the opportunity to completely rethink application architecture and reimagine the processing opportunities the cloud can offer.
To listen to the webinar in full, please CLICK HERE.