Hortonworks is sponsoring a quick, hands-on introduction to key Apache projects. Come and listen to a short technical introduction and then get hands-on with your personal machine, ask questions, and leave with a working environment to continue your journey.

Apache NiFi Crash Course

Introduction: This workshop provides a hands-on introduction to simple event data processing and data flow processing using a Sandbox running on students' personal machines.

Format: A short introductory lecture on Apache NiFi and the computing environment used in the lab, followed by a demo and a Q&A session. The remainder of the session is lab time to work through the exercises and ask questions.

Objective: To provide a quick hands-on introduction to Apache NiFi. In the lab, you will install Apache NiFi and use it to collect, conduct, and curate data-in-motion and data-at-rest. You will learn how to connect to and consume streaming sensor data, filter and transform the data, and persist it to multiple data sources.
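Once the Sandbox's NiFi instance is running, you can verify it from the command line through NiFi's REST API. A minimal sketch, assuming an unsecured NiFi listening on port 9090 (the port is an assumption; adjust host and port for your environment):

    # Report overall controller/flow status (active threads, queued flow files).
    curl http://localhost:9090/nifi-api/flow/status

    # Report JVM and repository usage for the NiFi instance.
    curl http://localhost:9090/nifi-api/system-diagnostics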

Pre-requisites: Registrants must bring a laptop with the latest VirtualBox installed; an image for the Hortonworks DataFlow (HDF) Sandbox will be provided.
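If you prefer the command line to VirtualBox's File > Import Appliance dialog, the provided image can be imported with VBoxManage. A minimal sketch with placeholder file and VM names (use the names of the image handed out at the session):

    # Import the appliance and allocate 8 GB of RAM to the VM
    # (file and VM names are placeholders).
    VBoxManage import HDF_Sandbox.ova --vsys 0 --memory 8192

    # Boot the imported VM without opening a GUI window.
    VBoxManage startvm "HDF Sandbox" --type headless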


Speakers: Andy LoPresto

Location: Room 109

Data in the Cloud Crash Course

This workshop is a hands-on session on quickly deploying Hadoop and Streaming clusters on AWS, Azure, or Google Cloud.

Cloudbreak simplifies the deployment of Hadoop in cloud environments. It enables enterprises to quickly run big data workloads in the cloud while optimizing the use of cloud resources.

Format

A short introductory lecture about Cloudbreak, followed by a walkthrough and lab on running Hadoop and Streaming in the cloud with Cloudbreak.

Objective

To provide a quick hands-on introduction to Hadoop in the cloud and a review of the key benefits of automated cluster deployment.

This lab uses Cloudbreak to quickly stand up Hadoop and Streaming clusters in a cloud provider of your choice. It demonstrates Ambari blueprints, the declarative definitions of your Hadoop or Streaming clusters, and walks through changing those blueprints dynamically to use external databases and external authentication sources, in essence providing shared authentication, authorization, and audit across ephemeral and long-running clusters. Beyond custom blueprints, the lab also shows how Cloudbreak supports easy-to-use custom scripts, called recipes, that can run before or after Ambari starts or after cluster installation.
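For orientation, an Ambari blueprint is a JSON document that names a stack and lays out the cluster's host groups and their components. A minimal sketch (the blueprint name, stack version, and host-group layout are illustrative, not the lab's actual blueprint):

    {
      "Blueprints": {
        "blueprint_name": "hdp-minimal",
        "stack_name": "HDP",
        "stack_version": "2.6"
      },
      "host_groups": [
        {
          "name": "master",
          "cardinality": "1",
          "components": [
            { "name": "NAMENODE" },
            { "name": "RESOURCEMANAGER" }
          ]
        },
        {
          "name": "worker",
          "cardinality": "3",
          "components": [
            { "name": "DATANODE" },
            { "name": "NODEMANAGER" }
          ]
        }
      ]
    }

A recipe, by contrast, is simply a shell script that Cloudbreak runs at the chosen hook point on the cluster's nodes. A hypothetical post-cluster-install recipe:

    #!/bin/bash
    # Hypothetical recipe: install an extra OS package on every node
    # after cluster installation completes.
    yum install -y jq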

Pre-requisites

Registrants must bring a laptop for the lab. The labs will be done in the cloud. Please follow the steps below to set up an AWS or Azure account before the session starts.


Speakers: Santosh Gowda

Location: Room 109