Big Data Can Cost Big Money – 2 Key Tips to Avoid Overspending and Get Faster Results

April 21, 2014

Hoover Dam created Lake Mead 80 years ago, capable of storing over 8 trillion gallons of water. The 5 years it took to build the dam was a safe bet: water hasn't changed terribly much over the eons, and reservoirs have been around for over 5,000 years. The term 'Big Data', per a NY Times Bits column, dates only to the late 1990s, and the underlying Hadoop framework was invented in 2005. It's a far safer bet to invest heavily in keeping water in a central place than it is to build your own Lake Mead of Data.

Still, the insurance industry seems to be going the Lake Mead route. All too often, a Big Data strategy is a multi-year push to shove every piece of data a company can get into an uber data warehouse, expecting some Big Data Analytics tool will come along and reveal previously unknown relationships. Will this mass of data take on a purpose of its own, requiring constant realignment to your business goals, i.e., become too inwardly focused? Will someone tell you in a year, "You never asked for it, so we don't have it, and quite frankly, we can't even store it in our database"? Can you have too much data and not enough insights? Does the past axiomatically predict the future, as the predictive analytics vendors claim? Ironically, Lake Mead's water level is falling due to unforeseen consumption and climate changes. Pouring tons of concrete does not guarantee continuing viability.

The NY Times ran an Op-Ed on 4/7 from two NYU professors, Gary Marcus and Ernest Davis, highlighting the hazards of relying too heavily on insights from number crunching. Not the least of these, and the most relevant to the 'Big Data needs the Big Warehouse' approach, is

‘If you look 100 times for correlations between 2 variables, you risk finding, purely by chance, about 5 bogus correlations that appear statistically significant – even though there is no actual meaningful connection between the variables’

My two favorite examples in the piece: the extremely strong correlation, from 1998 to 2007, between increased autism diagnoses and increased sales of organic food; and the sharp parallel declines, from 2006 to 2011, in both the murder rate and Internet Explorer's market share.
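The professors' arithmetic is easy to check for yourself. Here is a minimal sketch in Python (the sample size, seed, and number of tests are my illustrative assumptions, not from the article) that correlates 100 pairs of completely unrelated random series and counts how many clear the usual p < 0.05 significance bar:

```python
# A minimal simulation of the Op-Ed's point: test 100 pairs of
# completely unrelated random series and count how many look
# "statistically significant" at the usual p < 0.05 threshold.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=42)
n_tests, n_points = 100, 30
false_positives = 0

for _ in range(n_tests):
    x = rng.normal(size=n_points)  # pure noise
    y = rng.normal(size=n_points)  # unrelated pure noise
    r, p = pearsonr(x, y)
    if p < 0.05:
        false_positives += 1

# Expect roughly 5 "significant" correlations out of 100,
# exactly the bogus relationships the professors warn about.
print(f"{false_positives} of {n_tests} spurious correlations passed p < 0.05")
```

Run it a few times with different seeds and the count hovers around five: bogus "discoveries" produced by nothing but chance and the sheer volume of testing.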

Why are you being pushed into the biggest possible Big Data implementation? Probably because, as the bank robber Willie Sutton supposedly said, "That's where the money is." It's a combination of IT responding to Board pressure to show business benefits that justify budgets, and vendors in a feeding frenzy before this, too, becomes yesterday's hype. Tech industry reports show BI revenues growing to over $50B by 2017; who wouldn't like a piece of that? Consulting companies will tell you it's hard and takes a year, if not years. And if you implement Big Data the usual way, it is hard: there aren't enough Data Scientists to make sense of all the information in the universe, tools with sex appeal but without insurance content appear every day via email announcements, and budgets are exceeded with little to show for it.

Most of today's Big Data oriented Data Warehouses, and especially the underlying infrastructures, aren't going to handle the Internet of Everything exceptionally well, or at all. That will become apparent when telematics-driven usage-based pricing becomes standard in just a few years, up from today's 2% market share. Most companies are just starting to think through the Big Data implications of an Internet of Everything based insurance industry, where Google says its autonomous vehicle generates about 1 GB of data for every second of driving time, and many newer cars generate approximately 100 MB of data per driven second. Strip out irrelevant elements such as tire pressure, RPM, etc., and even if you cut it by 85%, the volume, multiplied across just the 250M cars currently registered in the US, is staggering.
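A quick back-of-envelope calculation in Python makes "staggering" concrete. The per-second rate, the 85% cut, and the 250M-car fleet come from the paragraph above; the one hour of driving per car per day is my own assumption:

```python
# Back-of-envelope check on the telematics data volumes above.
MB_PER_SECOND = 100               # newer cars, per the paragraph above
RETAINED = 0.15                   # keep 15% after discarding tire pressure, RPM, etc.
SECONDS_PER_DAY = 3600            # assumption: ~1 hour of driving per car per day
US_REGISTERED_CARS = 250_000_000  # per the paragraph above

gb_per_car_per_day = MB_PER_SECOND * RETAINED * SECONDS_PER_DAY / 1024
pb_per_fleet_per_day = gb_per_car_per_day * US_REGISTERED_CARS / 1024**2

print(f"~{gb_per_car_per_day:.0f} GB per car per day")       # ~53 GB
print(f"~{pb_per_fleet_per_day:,.0f} PB per day, US fleet")  # ~12,600 PB, about 12 exabytes
```

Even after discarding 85% of the raw feed and assuming only an hour of daily driving, the US fleet alone produces on the order of a dozen exabytes every day. No warehouse-everything strategy priced today survives that arithmetic.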

Before you build your own Lake Mead of Data, short-term, widely deployed, business-function-specific BI solutions may be more useful right now, at least until the collective technology, automotive, wireless data and insurance industries think through the implementation and operational realities. Here is an analogy: I live near a congested and dangerous state highway, its concrete poured in the 1930s, designed and built without extrapolating to today's volumes. With development on both sides of the highway, it cannot be adapted to current, let alone projected, traffic. We have learned to drive sluggishly and to hold our breath whenever we approach an entrance.

Here are 2 tips based on our experiences:

1 – Be audacious, think of Big Data as part of a Product Roadmap – start with today and think in stages.

Blow right by "enhancements" or "incremental" improvements. As Ray Kurzweil said, "take 30 linear steps and you end up 30 paces away, but if you think exponentially, you wind up a billion steps away." Think the uncomfortable:

“If I gave a really smart 20 year old $10K, how would they affect my customer acquisition and retention process? What benefit justifies my Big Data spend if this college sophomore can disrupt me after dinner?”

Many health insurers, for example, are in the early stages of revolutionizing their business through deeply integrated social apps, tying wearables to doctors to hospitals to patients to pricing.

Big Data will change insurance products from static entities into dynamic ones, as increased data and analytical capabilities shorten product lifecycles to a year or less. Just as tech vendors plan their offerings over time via a phased Product Roadmap, insurers need to do likewise, with Big Data as simply an ingredient, one which will itself change over time.

2 – Don’t serve up comfort food.

That 20-year-old isn't thinking "today," let alone "yesterday"; they're too busy creating your demise. Big Data can be mental comfort food if not managed properly: it's always reassuring to revisit the past. Again, think the uncomfortable by shifting from a product focus to ethnography:

How will customers use my product in their daily lives? How will new data sources and types define these new products? In 10 years or sooner, will we still be an insurance company with a digital presence, or will we evolve into a tech-focused company, one of whose main revenue sources is insurance? Will pouring all this Big Data concrete today contribute to, or impede, future agility?

Big Data does not axiomatically require Big Money upfront – it needs Big Innovative Thought. "Talk is cheap, show me the code," said Linus Torvalds, the creator of Linux. "Data is everywhere, show me the future" is what we should be demanding.


Richard Eichen is the Founder and Managing Principal of Return on Efficiency, LLC, http://www.growroe.com, focusing on companies, initiatives and products where technology is the primary means of delivery and revenue. He is one of their senior Turnaround, Transformation, Program Rescue and Process Rescue leaders. As a Change Agent, Trusted Advisor, Program Leader and Interim Executive, Rich has over 25 years of hands-on experience reshaping companies, Operations, IT/Systems Integration and strategic initiatives. He can be reached at richard.eichen@growroe.com and followed on Twitter, @RDEgrowroe.
