TensorFlow is all kinds of fancy, from helping startups raise their Series A in Silicon Valley to detecting if something is a cat. However, when things start to get “real,” you may find yourself no longer dealing with mnist.csv and instead needing to do large-scale data prep as well as training.
This talk will explore how TensorFlow can be used in conjunction with Apache Beam, Flink, and Spark to create a full machine learning pipeline, including those annoying “feature engineering” and “data prep” components that we like to pretend don’t exist. We’ll also talk about how these feature prep stages need to be integrated into the serving layer. In addition to Apache Beam, this talk examines changing industry trends, like Apache Arrow, and how they impact cross-language development for things like deep learning. Even if you’re not trying to raise a round of funding in Silicon Valley, this talk will give you tools to tackle interesting machine learning problems at scale.