Building an Agile Data Lake on Cloudera HDFS/Hive using Talend & Spark
In this hands-on session we will open the doors to Talend Studio and walk you through building an Agile Data Lake on the Talend Big Data Platform with Cloudera Hadoop. You will create and execute a job that generates, extracts, and pushes data to HDFS. From there, we will walk through the jobs that load the data into a Raw Data Vault and then de-normalize it into a Business Vault.
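To make the Raw Data Vault → Business Vault step concrete, here is a minimal, framework-agnostic sketch of what "de-normalizing" Data Vault tables means: joining a Hub (business keys) to the latest rows of its Satellite (descriptive attributes) to produce one flat business row per key. All table names, fields, and sample values below are hypothetical illustrations, not Talend- or Cloudera-specific structures; in the actual session this logic would be expressed as Talend/Spark jobs.

```python
# Hub: one row per unique business key, identified by a surrogate key.
# (Hypothetical sample data for illustration only.)
hub_customer = [
    {"hub_key": "h1", "customer_id": "C001"},
    {"hub_key": "h2", "customer_id": "C002"},
]

# Satellite: descriptive attributes over time, keyed to the hub.
sat_customer = [
    {"hub_key": "h1", "name": "Acme Corp",        "load_date": "2018-01-05"},
    {"hub_key": "h1", "name": "Acme Corporation", "load_date": "2018-03-10"},
    {"hub_key": "h2", "name": "Globex",           "load_date": "2018-02-01"},
]

def denormalize(hub, sat):
    """Join each hub record to its most recent satellite record,
    producing one flattened business row per business key."""
    latest = {}
    for row in sat:
        key = row["hub_key"]
        # Keep only the newest satellite row per hub key.
        if key not in latest or row["load_date"] > latest[key]["load_date"]:
            latest[key] = row
    return [
        {
            "customer_id": h["customer_id"],
            **{k: v for k, v in latest[h["hub_key"]].items() if k != "hub_key"},
        }
        for h in hub
        if h["hub_key"] in latest
    ]

business_vault = denormalize(hub_customer, sat_customer)
for row in business_vault:
    print(row)
```

The same pattern scales to Hive/Spark SQL as a join between the hub and a windowed "latest row" selection over the satellite; the flattened result is what downstream business users query.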
We will also take a brief look at some important new features being developed at Talend: Data Streams, Data Catalog, and the new Component SDK.
Resources to View Before Attending
Please take the time to register for and view the recent Expert Session: An Introduction to the Agile Data Lake.