Machine Learning & Big Data Blog

Who Cares How Big the Data Is? It Doesn’t Really Matter

3 minute read
Alon Lebenthal

Only a couple of years ago, when everyone was talking about big data, the impression was that very few people actually knew what it meant. Different people had different interpretations of the term. Yes, almost everyone in the industry could recite the three Vs of Volume, Variety, and Velocity, but the market was still immature and the majority of big data projects were in their infancy.

Only a few years have passed, and while big data is now considered mainstream and the big data market is flourishing, we also hear more and more that “big data is dead.” A bit confusing, I admit.

Let’s try to clear up this confusion by looking at what has changed in the big data market and at the key trends driving those changes, including cloud, Machine Learning, and Artificial Intelligence (AI).

The cloud. Who doesn’t have something “moving to the cloud”? With that in mind, many new big data applications are being developed to run in the cloud. Given the clear advantages the cloud offers for managing and “crunching” large amounts of data, it has become the platform of choice for many big data projects. The major cloud vendors (AWS, Azure, and Google Cloud) offer a comprehensive, rich set of big data services; they’ll move your data, store your data, process your data and, of course, analyze your data.

Machine Learning and AI. Expecting systems to self-improve means they must learn from experience, and that requires large amounts of data. Machine Learning algorithms rely on data – not only on its quantity but, just as much, on its quality.

Nobody questions that there is an enormous amount of data available, with more being collected every day, hour, minute, even microsecond. And there are plenty of reports estimating how much data is produced today and how much will be produced a few years down the road. How big is big, then? Does it really matter?

Think about it this way: organizations are sitting on goldmines of data, and they care about one thing – how to make the most of that data and deliver insights to the business, not only to remain competitive and increase profits, but to thrive, not just survive, in the market.

Organizations implementing big data need to adopt new technologies and platforms such as Hadoop, Spark, Mesos and, of course, the many cloud-vendor solutions. They will be ingesting high volumes of data from multiple sources and processing that data before making it available for analytics.
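To make that ingest-and-process step a little more concrete, here is a minimal sketch using Spark’s Python API (PySpark). The paths and column names are invented for illustration; a real pipeline would of course look different:

```python
# Minimal illustration of an ingest -> process -> publish step with PySpark.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingest").getOrCreate()

# Ingest raw events from one of several possible sources (here, JSON files).
raw = spark.read.json("s3a://example-bucket/raw/events/")

# Basic processing: drop incomplete records and aggregate per customer.
daily_totals = (
    raw.dropna(subset=["customer_id", "amount"])
       .groupBy("customer_id")
       .agg(F.sum("amount").alias("total_amount"))
)

# Publish the result in a columnar format for downstream analytics.
daily_totals.write.mode("overwrite").parquet("s3a://example-bucket/curated/daily_totals/")

spark.stop()
```

Even a simple step like this typically sits between upstream file transfers and downstream analytics jobs, which is exactly where orchestration becomes important.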

And finally, organizations use many, many tools, a lot of time, and much of their valuable talent to develop scripts that integrate it all. Yet integrations are not easy, and manual scripting rarely delivers scalable results. This is where many organizations struggle:

  • How do I successfully orchestrate my big data workflows?
  • How do I ensure my SLAs are met?
  • How do I keep my data engineers focused on actionable data rather than spending precious time on operational plumbing?

Here are a few tips for orchestrating your big data workflows:

  • Start early – Orchestration of the data pipelines is critical to the success of a project, and delaying it until the application is ready to move to production may result in unnecessary errors and delivery delays.
  • Think big – You will be using multiple technologies for the various steps of your big data implementation, and those technologies will change often. Consider an application workflow orchestration solution that can cope with this diversity.
  • Avoid automation silos – You want end-to-end visibility of the data pipeline across disparate sources of data.
  • Developers are expensive – Get them to focus on the data itself rather than building scripts for operational plumbing (a brief sketch of what a pipeline defined as code can look like follows this list).
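To illustrate the difference between ad-hoc scripts and an orchestrated workflow, here is a minimal, hypothetical sketch of a pipeline defined as code. It uses Apache Airflow purely as an example (Control-M and other orchestration tools express the same ideas in their own ways), and the task names, commands, schedule, and SLA are all invented for illustration:

```python
# Illustrative only: a three-step ingest -> process -> publish pipeline
# expressed as an Apache Airflow DAG. The point is that dependencies,
# scheduling, retries, and SLAs live in one definition instead of being
# scattered across shell scripts and cron entries.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_sales_pipeline",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="python ingest.py --source s3://example-bucket/raw",  # placeholder script
    )
    transform = BashOperator(
        task_id="transform_with_spark",
        bash_command="spark-submit transform.py",   # placeholder Spark job
        sla=timedelta(hours=2),                     # flag the task if it runs past its SLA
    )
    publish = BashOperator(
        task_id="publish_to_warehouse",
        bash_command="python load_warehouse.py",    # placeholder load step
    )

    ingest >> transform >> publish                  # explicit, visible dependencies
```

With dependencies, schedules, and SLAs declared in one place, it becomes much easier to get end-to-end visibility and to answer the “are my SLAs being met?” question above.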

Want to learn more? Check out Control-M for Big Data

Also, you can meet our experts at the AI & Big Data Expo in Amsterdam (booth #514). Robby Dick, Control-M expert, will share how one customer automated their big data project; the outcomes are definitely noteworthy and worth adopting. Session: “How a Leading Vehicle Manufacturer cut Fleet Downtime with Automation, Big Data and IoT” on Wednesday, June 19, 12:40PM.

4 easy steps to integrate Snowflake and Control-M

A faster data pipeline can accelerate the value of your big data strategy. This white paper presents a four-step approach to connecting Snowflake with application workflow orchestration and automating key activities.
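For a flavor of what one such automated activity might look like (this is not taken from the white paper; it is a generic sketch using Snowflake’s publicly documented Python connector, with the account details, credentials, and SQL invented for illustration):

```python
# Hypothetical example of one automatable step: loading staged files into
# a Snowflake table as part of a scheduled workflow. In practice, credentials
# would come from a secrets manager rather than being hard-coded.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder account identifier
    user="pipeline_user",        # placeholder service user
    password="********",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Load files from an external stage into a target table (illustrative SQL).
    cur.execute("COPY INTO daily_totals FROM @raw_stage/daily_totals/ FILE_FORMAT = (TYPE = CSV)")
    print(cur.fetchall())        # COPY INTO returns per-file load results
finally:
    conn.close()
```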
Get the e-book now ›

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.

See an error or have a suggestion? Please let us know by emailing blogs@bmc.com.


About the author

Alon Lebenthal

Alon Lebenthal is a Senior Manager in Digital Business Automation Solutions Marketing at BMC Software. Alon has over 25 years of experience in the IT industry, joining BMC Software in 1999 with the acquisition of New Dimension Software.

Over the past 20 years, Alon has served in various leadership positions across Control-M brand management, channels, and solutions marketing.

Alon is a regular speaker at big data conferences and BMC events around the world.