From hand-coded integration to Hadoop-based tooling, data integration has evolved at a rapid pace.
Hadoop, a software framework for storing and processing large data sets, provides a foundation for developing big data applications. According to Information Builders, Big Data Integrator simplifies Hadoop, providing a graphical interface for Hadoop and Apache Spark data ingestion and transformation that reduces the need for coding. Apache Spark is a data processing engine that supports in-memory computing and can run on Hadoop.
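To give a sense of what tools like Big Data Integrator abstract away, here is a toy, single-machine sketch of the map/reduce word-count pattern that Hadoop distributes across a cluster. This is plain Python for illustration only; real Hadoop jobs are written against the MapReduce or Spark APIs, and the function names here are illustrative, not part of any Hadoop library.

```python
from collections import defaultdict

def map_phase(lines):
    """Map step: emit (word, 1) pairs, as a Hadoop mapper would."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce step: sum counts per key, as a Hadoop reducer would."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data needs big tools", "hadoop stores big data"]
result = reduce_phase(map_phase(lines))
print(result["big"])  # 3
```

In a real cluster, the map and reduce phases run in parallel across many nodes, with intermediate results written to disk; Spark speeds this up by keeping intermediate data in memory where possible.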
Big Data Integrator, however, targets more mainstream developers who understand data and business requirements, but may not be experts in Hadoop or related technologies, such as Sqoop and Flume.
Eventually, big data will be assimilated and we’ll just think of it as a special category of enterprise data. Likewise, Hadoop will become a common data persistence platform alongside other forms of open source software and the many brands of relational databases available from vendors of enterprise software.
The top two barriers to working with Hadoop and big data, according to surveys in both reports, are:
Inadequate staffing or skills for big data analytics — 62 percent of survey respondents in 2013; 42 percent in 2015.
Lack of a compelling business case — 40 percent of survey respondents in 2013; 31 percent in 2015.
Big data integration tool addresses skills shortage
Menninger said Big Data Integrator can improve the efficiency of consulting firms focusing on Hadoop and big data. Freivald, meanwhile, said the data integration platform can also help channel partners move a big data application from one environment to another. For example, a VAR that built an application that uses Cloudera’s Hadoop distribution may want to replicate that app for a customer that uses the MapR Technologies Inc. Hadoop distribution. “The Big Data Integrator-based processes would remain essentially the same,” Freivald said.
In addition, Big Data Integrator can potentially lower the Hadoop skill barrier that prevents consulting firms and other channel companies from entering the big data sector, according to Menninger.
“This is a way for them to get involved in the market and participate,” he said.
A valuable but safe starting point
Getting started is a barrier. One way to cope with this barrier is to identify a preexisting program or solution and integrate big data and/or Hadoop into that program. That way you have a manageable, incremental update that can add more value to an already valuable solution. As a bonus, this approach defines a business case that will help you get approval and support for your new work with big data and Hadoop.
Once you have skills and experience with big data and Hadoop, you can move on to creating new solutions. For example, many organizations want to expand their advanced analytics programs to leverage new big data and to run the business on analytics insight. For these programs, big data is important source material (along with traditional enterprise data), and Hadoop can be both a storage strategy and an analytics processing engine.
Potential channel impact
A stumbling block for organizations — including channel players such as VARs — seeking to create big data applications via Hadoop has been the need to acquire specialized programming talent, Freivald noted. A VAR selling big data software, for example, might need to hire people to customize the application for individual customers.
“But the people they hire are very technical and don’t necessarily understand the business requirements for customization,” Freivald said. Such companies find themselves “spending more on highly paid developers who are relatively narrow in what they can do.”