Re-engineering Traditional Analytics for Cloud Scalability and Growth (Adventures with Hazelcast)

Speaker: Steve Worthington

A traditional “scale-up” Call Center Analytics application re-engineered to use an In-Memory Data Grid to “scale out” and grow in the cloud.

Problem: How to scale out a traditional analytics application, re-engineering and re-architecting it for massive growth. We need to take a traditional three-tier application, with a SQL database and Lucene indexing, and scale it to be multi-tenant, processing hundreds of thousands of files per hour: upload 300,000 data files per hour through an HTTPS web interface, record the metadata for each file in a SQL database, link files from different sources (clients), and perform complex analytical processing, updating both the SQL database and the Lucene indexes with the results.

Solution: Add a redundant, sharded In-Memory Data Grid with multiple queues and data maps to coordinate processing, SQL database queries and updates, and Lucene indexing, along with automated cloud bursting and “dumb” (stateless) processing modules.

We will discuss the architecture of this project at a high level, and how a Hazelcast IMDG was used to build a prototype on AWS that scaled to process 300,000 calls per hour.
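The queue-and-map pattern at the heart of this design can be sketched in a few lines. The example below is a local, single-JVM analogue using JDK collections, not the actual project code: in the real architecture, Hazelcast's distributed `IQueue` and `IMap` would replace `LinkedBlockingQueue` and `ConcurrentHashMap`, making the same pattern redundant and sharded across nodes. The class and field names are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the ingest pipeline: an ingest queue feeds stateless ("dumb")
// workers, which record each file's metadata in a shared map. In the talk's
// architecture these structures are Hazelcast distributed objects, so any
// number of cloud-burst worker nodes can drain the same queue.
public class IngestPipeline {
    static final BlockingQueue<String> ingestQueue = new LinkedBlockingQueue<>();
    static final Map<String, String> metadataMap = new ConcurrentHashMap<>();

    // Worker: drain the queue, "process" each file, record its metadata.
    // A real worker would also update the SQL database and Lucene indexes.
    static void worker() {
        String file;
        while ((file = ingestQueue.poll()) != null) {
            metadataMap.put(file, "processed");
        }
    }

    public static void main(String[] args) {
        // Simulate a batch of uploaded call recordings arriving over HTTPS.
        for (int i = 0; i < 5; i++) {
            ingestQueue.offer("call-" + i + ".wav");
        }
        worker();
        System.out.println("processed " + metadataMap.size() + " files");
    }
}
```

Because the workers hold no state of their own, scaling out is just starting more of them against the same distributed queue, which is what makes automated cloud bursting practical.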

FAR Con – A Financial And Retail Conference on Analytics

MinneAnalytics

Wednesday, August 12, 2015 from 8:00 AM to 5:00 PM (CDT)