As Chief Technology Officer (CTO), Scott is responsible for the overall technical vision of Hortonworks and oversees the company's engineering, product management, and support organizations. Scott has spent his entire career in the data industry. Most recently, as President of Teradata Labs, he provided visionary direction for research, development, and sales support activities related to Teradata's integrated data warehouses, big data analytics, and associated solutions. He also executed Teradata's technology investments and acquisitions related to Teradata Labs' solutions. Scott holds a bachelor's degree in electrical engineering from Drexel University.
Dr. Kerem Tomak brings more than 15 years of experience as a marketing scientist and executive. He comes from Sears Holdings, where as Chief Marketing and Analytics Officer he was responsible for the retail company's digital marketing. He studied mathematics, economics, and information systems in Turkey and the USA. Dr. Tomak embarked on his professional career as an assistant professor at the University of Texas at Austin.
He has expertise in the areas of omnichannel and cross-device attribution, price and revenue optimization, assessing promotion effectiveness, yield optimization in digital marketing and real-time analytics. He has managed mid and large-size analytics and digital marketing teams in Fortune 500 companies and delivered large-scale analytics solutions for marketing and merchandising units. His out-of-the-box thinking and problem-solving skills led to 4 patent awards and numerous academic publications. He is also a sought-after speaker in Big Data and BI Platforms for Analytics.
Since March 2017, Peter Weckesser has served as Airbus Defence and Space's Digital Transformation Officer. He joined Airbus from Siemens, where he was COO of Product Lifecycle Management, leading the Siemens IoT and Digital Enterprise business and activities. Prior to this, he held various executive-level positions at Siemens, serving as CEO of Industry Services and CEO of the Value Service Business Unit, as well as Vice President of “Human Machine Interface”.
Peter Weckesser holds a degree in Physics and a PhD in Computer Science, both from the University of Karlsruhe (Germany). He also spent a year studying at Armstrong State University in the USA as part of his education in Informatics.
Szilard studied Physics in the 90s and obtained a PhD by using statistical methods to analyze the risk of financial portfolios. He worked in finance, then more than a decade ago became the Chief Scientist of a tech company in Santa Monica, California, doing everything data (analysis, modeling, data visualization, machine learning, data infrastructure, etc.). He is the founder/organizer of several meetups in the Los Angeles area (R, data science, etc.) and of the data science community website datascience.la. He is the author of a well-known machine learning benchmark on GitHub (1000+ stars), a frequent speaker at conferences (keynote/invited at KDD, R/Finance, Crunch, and eRum; contributed at useR!, PAW, EARL, etc.), and he has developed and taught graduate data science and machine learning courses as a visiting professor at two universities (UCLA in California and CEU in Europe).
Product Management lead at Uber with a focus on Data Platforms and Infra. I manage Uber's Storage, Analytics, BI, and Machine Learning product lines.
After finishing his computer science studies with a focus on information systems in 2011, Volker started working on backend systems at InnoGames in Hamburg, Germany, where he has worked with data-intensive, scalable applications throughout his career. Today he works as a Senior Developer Analytics, managing the company's data infrastructure. With around 200 million registered players, InnoGames is one of the world's leading developers and publishers of online games. Currently, more than 400 people from 30 nations work at the Hamburg-based headquarters. Together with his data engineering team, he built a data pipeline and platform based on technologies such as Hadoop, Flink, Kafka, and Spark. It processes more than a billion gameplay events per day in order to generate a better gaming experience.
I am currently a Big Data Delivery Lead at Optum (UnitedHealth Group), based in Dublin, Ireland. My teams and I deal with projects in the PI (fraud, waste and abuse, claims processing) and healthcare space. I previously worked at IBM Ireland, where I switched my career path from test automation to analytics and machine learning.
I am passionate about coding, Big Data, AI/ML/DL, test automation, Open Source, DevOps and cooking (homemade pizza is my specialty).
I share my tech thoughts through my blog (http://googlielmo.blogspot.ie/) and DZone (https://dzone.com/users/2532948/virtualramblas.html) where I am a Golden Member.
During 2018 I presented at several international conferences, including DataWorks Summit Berlin, Google I/O Extended, Predictive Analytics World for Industry 4.0, and many others.
My first book "Hands-on Deep Learning with Apache Spark" (https://tinyurl.com/y7d98s64) is going to be released in December 2018.
Rachit Arora is a Senior Developer at IBM India Software Labs. He is a key designer of IBM's cloud offerings for the Hadoop ecosystem. He has extensive experience in architecture, design, and agile development. Rachit is an expert in application development for cloud architectures and in development using Hadoop and its ecosystem. He has been an active speaker on Big Data technologies at various conferences, such as the Information Management Technical Conference 2015, ContainerCon NA 2016, and Container Camp Sydney 2017.
Elliot is a principal engineer at Hotels.com in London where he designs tooling and platforms in the big data space. Prior to this Elliot worked in Last.fm’s data team, developing services for managing large volumes of music metadata.
Jay is a final year student at King’s College London studying Computer Science. She joined Hotels.com in the Big Data Platform team for her industrial placement year where she spent time working with Apache Hive, modularization techniques for SQL, and mutation testing tools.
Andy LoPresto is a Senior Member of Technical Staff at Hortonworks, working on the Hortonworks DataFlow team. He serves as a committer and PMC member for Apache NiFi, an open source, robust, and secure data routing and delivery system. Andy focuses on NiFi security, including identity management, TLS negotiation, data protection, access control, encryption, and hashing. He also works on the subproject Apache MiNiFi, which drives edge data collection, including secure command and control, and immediate data provenance and governance. He has given talks on NiFi in Singapore, Tokyo, Melbourne, Berlin, Sydney, San Jose, and Brussels, including at FOSDEM '17 and the OpenIoT Summit 2017.
I currently work on innovation and architecture for corporate and investment banking at Banco Santander. I have been with the Santander Group for 13 years. Previously, I worked for five years as IT Manager at IDG Communications Spain, where my main role was developing and improving the group's web strategy. I have also contributed more than 20 articles to publications such as PC World, Computerworld, and Macworld.
I hold a degree in Computer Science from the Universidad Pontificia de Salamanca.
Tim Spann was a Senior Solutions Architect at AirisData, working with Apache Spark and machine learning. Previously he was a Senior Software Engineer at SecurityScorecard (http://securityscorecard.com/), helping to build a reactive platform for monitoring real-time third-party vendor security risk in Java and Scala. Before that he was a Senior Field Engineer for Pivotal, focusing on Cloud Foundry, HAWQ, and Big Data. He is an avid blogger and the Big Data Zone Leader for DZone (https://dzone.com/users/297029/bunkertor.html).
He runs the very successful Future of Data Princeton meetup, with over 830 members, at http://www.meetup.com/futureofdata-princeton/.
He is currently a Solutions Engineer at Hortonworks in the Princeton New Jersey area.
You can find all the source and material behind his talks at his Github and Community blog:
Shauna Davis has worked in the IT industry for over 14 years and has held various positions across application and big data development. Most recently, she has been leading the Big Data Development Team at Trac Intermodal as Technical Manager of Big Data Architecture.
A Big Data Tech Lead at the Nielsen Marketing Cloud. I have been dealing with Big Data challenges for the past 6 years, using tools like Spark, Druid, Kafka, and others.
I'm keen on sharing my knowledge and have presented my real-life experience in various forums in the past (e.g. meetups, conferences).
Yakir Buskilla is a Director of Big Data at the Nielsen Marketing Cloud. His fields of interest are Big Data solutions and large scale machine learning.
Don Bosco Durai (Bosco) is a thought leader in enterprise security and a committer on open source projects such as Apache Ranger, Apache Ambari, and Apache HAWQ. He has also contributed to the security of most of the Hadoop components. Bosco was the co-founder of XA Secure, which is the genesis of Apache Ranger. He is currently the co-founder of Privacera, where he is tackling the data security challenges of modern data architectures, such as Big Data and Cloud, in which large data sets constantly move between different environments, which can result in major security breaches or compliance violations if not managed properly. Privacera automates the discovery of sensitive data, performs transparent encryption/anonymization, manages access policies, and monitors access.
Madhan Neethiraj is an Apache committer and a PMC member of the Apache Atlas and Apache Ranger projects. He works at Hortonworks as Sr. Director of Engineering in the Enterprise Security Team. His contributions include Apache Ranger features such as the audit framework, stack model, tag-based policies, and masking and row-filter policies, as well as Apache Atlas features such as the V2 APIs and search enhancements. Prior to Hortonworks, Madhan was at Oracle, working on the development of a security access management suite, governance, and real-time fraud detection/prevention products. Before Oracle, he was with Bharosa Inc., responsible for the development of a real-time fraud detection solution for financial institutions, healthcare, and e-commerce.
- Committer of Apache Impala (May, 2018~)
- Senior software engineer at SK Telecom (Mar, 2017~)
Lead scrum of cloud platform development using Kubernetes, Docker, Apache Druid and Apache Hadoop.
Designed and implemented Dockerized DevOps framework.
- Senior software engineer at SAP Labs (Apr, 2014 ~ Feb, 2017)
Development of SAP HANA in-memory engine
- Software engineer at SAP Labs (Jan, 2008 ~ Mar, 2014)
Development of SAP HANA in-memory engine
- Internship at Samsung Electronics (Mar, 2003 ~ Dec, 2005)
With more than fifteen years of experience in Java technologies, Monica is currently responsible for defining Big Data architectures for Engineering's Big Data and Analytics Center of Excellence. Her work mainly involves leading the Group in the development of projects and POCs, offering consulting services to clients, and assisting in the pre-sales phase by contributing expertise on Big Data technologies. She runs courses both internally and for clients, and also works to disseminate Engineering's expertise in this area, both nationally and internationally.
Christopher Crosbie has over fifteen years of experience developing and deploying data technology in enterprise environments. He is currently on the Cloud Partner Engineering team at Google, where he serves as a trusted advisor to software vendors that build Data, Analytics, and ML solutions on the Google Cloud platform.
Previous to joining Google, Chris was a development manager at Amazon and before that he headed up the data science team at Memorial Sloan Kettering Cancer Center where he implemented the enterprise Hortonworks architecture and strategy. Chris started his career as a biostatistics application engineer at the NSABP, a not-for-profit clinical trials cooperative group supported by the National Cancer Institute. He holds an MPH in Biostatistics and an MS in Information Science.
Has more than 15 years of Java experience; during these years he has worked with almost all forms of Java solutions, from low-latency multithreaded applications to highly distributed enterprise applications, as a developer, architect, and trainer. Currently working with the Apache big data projects, he has created various types of containerized solutions for components of the Hadoop ecosystem.
Founder of the first Hungarian Java User group and regular speaker at meetup events and conferences.
Committer on the Apache Hadoop and Apache Ratis projects, working on Apache Hadoop Ozone and the dockerization of Apache Hadoop.
Magnus Runesson is a Senior Data Engineer at Svenska Spel, responsible for architecting, developing, and operating their Hadoop environment. He has a Master of Science in Engineering from Linköping University, Sweden. Magnus has long experience developing and operating distributed systems with high requirements on availability, performance, and integrity, at organizations such as Spotify and the Swedish weather service. Magnus is the lead developer of the open source tool cobra-policytool and was the driving force behind open-sourcing it.
Dor has over a decade of experience developing big data products for security industries, financial markets and banking industries. His research on metric learning and cost-sensitive learning has earned him publications in NIPS, AISTATS and a monetary prize in Cha-Learn competitions. As a senior data scientist at ING Bank, he is involved with multiple projects modelling consumer and market behavior, optimizing business and IT processes and contributing to the data science way-of-working, rapid exploration and continuous delivery processes.
Jose Luis has been working with data since the very beginning of his career. For more than 9 years, JL has been dealing with small and big data in quite different industries such as banking, utilities, airlines, and software. He has worked across the full data engineering stack, from pure development roles to operations. He is currently a Platform Manager at the ServiZurich Technology Delivery Center in Zurich, managing a Big Data Platform that enables Big Data processing and advanced analytics across the organization. He teaches in the BI & Big Data Master at MBIT School, spoke at Codemotion 2016, is an expert in distributed systems and data transformation, a cloud believer, and the happy father of 2 kids.
Trevor Grant is PMC Member of the Apache Mahout and Apache Streams projects. He is a tinker extraordinaire and does a poor job of documenting his projects on www.rawkintrevo.org. He has an M.S. of Applied Math, a dog, a cat, an M.B.A., and a home in Chicago. He speaks a fair amount at locations internationally, and in general his talks are usually pretty fun.
Holden is a transgender Canadian open source developer advocate @ Google with a focus on Apache Spark, BEAM, and related "big data" tools. She is the co-author of Learning Spark, High Performance Spark, and another Spark book that's a bit more out of date. She is a committer and PMC member on Apache Spark, and a committer on the SystemML and Mahout projects. She was tricked into the world of big data while trying to improve search and recommendation systems and has long since forgotten her original goal.
Data Processing Ninja with over 10 years of experience in the software engineering industry. PhD in distributed databases, working at allegro.pl, a petabyte-scale e-commerce platform.
Uwe Weber has been working in IT for almost 20 years and became a Big Data Engineer at Telefónica in 2014. He initially set up Telefónica's Hadoop environment and infrastructure, and supports business departments in utilizing the “new world”.
Oscar Martinez Rubi is an expert on Big Data, Business Intelligence and Data Management solutions. He currently leads the Advanced Business Analytics department at ClearPeaks where, together with his team, he works in multiple Big Data, Cloud and Advanced Analytics projects throughout various industries. Before joining ClearPeaks, Oscar was an engineer in the Netherlands eScience Center, a center specialized in Big Data for scientific projects in the Netherlands. Before that, he was the Data Manager in a scientific project leveraging the LOFAR radio-telescope. He also worked in the implementation of several data processing systems for an ESA space mission.
Carsten works as a Big Data Architect at Audi Business Innovation GmbH. Audi Business Innovation GmbH, a subsidiary of Audi, is a small company focused on developing new mobility services as well as innovative IT solutions for Audi. Carsten has more than 10 years of experience delivering Data Warehouse and BI solutions to his customers. He started working with Hadoop in 2013 and has since focused on both big data infrastructure and solutions. Currently Carsten is helping Audi extend their Big Data platform, based on Hadoop and Kafka, to the cloud. Further, as a solution architect he is responsible for developing and running analytical applications on that platform.
Nicolas is a researcher overseeing the performance and scalability of new Spark releases at Databricks, where he, along with the Amsterdam SQL performance team, is implementing the new benchmarking and monitoring infrastructure for the Databricks cloud platform. Previously, he led a project on upcoming architectures for Big Data processing at the Barcelona Supercomputing Center (BSC) - Microsoft Research joint center. Nicolas received his Ph.D. in Distributed Systems and Computer Architecture at UPC/BarcelonaTech, where he still contributes as part of the HPC and Data-Centric Computing research groups.
Bogdan Ghit is a computer scientist and software engineer at Databricks, where he works on optimizing the SQL performance of Apache Spark. Prior to joining Databricks, Bogdan pursued his PhD at Delft University of Technology where he worked broadly on datacenter scheduling with a focus on data analytics frameworks such as Hadoop and Spark. His thesis has led to a large number of publications in top conferences such as ACM Sigmetrics and ACM HPDC.
An InfoSec generalist. CISSP. My more than a decade of work experience revolves around all aspects of security, mainly Secure SDLC, source code analysis, vulnerability assessment, penetration testing for web applications, architecture review, incident response, ISMS compliance, and performing and facilitating third-party audits. Managed multiple federal data center operations, OS and application hardening, and Linux system administration. Solution deployment and integration for federal and various state governments. Contributor to Apache Knox, Apache Zeppelin and Apache Spark.
I also have years of experience leading and managing a team monitoring, securing, and ensuring round-the-clock availability for national critical infrastructure; solving brain-teasing, needle-in-a-haystack production issues (architecture, application, system, and network) and incorporating new requirements; and conducting vulnerability analysis and analyzing VA reports to suggest corrective and preventive actions (hotfixes, CVEs, design changes, hardening, patching, upgrades) to engineering and operations teams. Panelist for the Big Data Security Work Group.
Designing Solution Architecture and Capacity Planning for highly-available applications on Cloud/Data Centre environment.
Larry is a Senior Development Manager and Architect on the Hortonworks security team. He is also a committer and PMC member for the Apache Knox and Apache Ranger projects, committer for Apache Hadoop and contributor to security aspects of multiple Hadoop related projects. He is a veteran in the enterprise middleware space with a specialization in platform management and security. Larry has extensive experience in the Java EE application server technologies and has served on various expert groups for JSRs within the JCP for Java EE security. He has worked on various webservices technologies and stacks including SOAP and REST with a focus on security.
Adam Hudson is a software engineer with research and development experience in many diverse industries, including social media, video gaming, finance and online health. He was awarded a PhD from the University of Sydney in 2008 for his research into mobile networking applications. Originally from Sydney, Australia, he recently moved to the San Francisco Bay Area to join Uber on their exciting journey to change the world.
Atul Gupte is a Product Manager on the Product Platform team at Uber. He holds a BS in Computer Science from the University of Illinois at Urbana-Champaign. At Uber, he helps drive product decisions to ensure Uber’s data science teams are able to achieve their full potential, by providing access to foundational infrastructure, stable compute resources & advanced tooling to power Uber’s global ambitions. Previously, at Zynga, he spent time building some of the world’s leading social games and also helped build out the company’s mobile advertising platform.
I am part of the Hive engineering team at Hortonworks, and I primarily work on the Hive compiler.
Suneel is a Member of the Apache Software Foundation and a Committer and PMC member on Apache Mahout, Apache OpenNLP, and Apache Streams. He has presented in the past at Flink Forward, Hadoop Summit, Berlin Buzzwords, the Machine Learning Conference, Big Data Tech Warsaw, and Apache Big Data.
Vladimir Kroz is an architect in the Search group at WalmartLabs, where he is building the next generation of e-commerce search for walmart.com. Vladimir works on large-scale low-latency search, big data, and machine learning systems, and has a keen passion for large-scale computing and AI. Prior to Walmart, he led engineering teams at a number of Fortune 500 international companies in the e-commerce and telecom fields. He also co-founded the real-time data integration company Wisdomforce. Vladimir holds a Master's degree in Computer Information Systems and Electrical Engineering.
I've been in the software business for 25 years and have done a lot of different things, like C/C++, Java, JS, and .NET coding, consulting on software architectures, and even serving as a SCRUM master and programming for Arduino. Alongside my work, I have always done coaching and held talks at DATEV.
My current passion is keeping our ETL platform up to date and showing product owners and management how we can get business value out of our data.
John Mertic is the Director of Program Management for The Linux Foundation. Under his leadership, he has helped ODPi, R Consortium, and Open Mainframe Project accelerate open source innovation and transform industries. John has an open source career spanning two decades, both as a contributor to projects such as SugarCRM and PHP, and in open source leadership roles at SugarCRM, OW2, and OpenSocial. With an extensive open source background, he is a regular speaker at various Linux Foundation and other industry trade shows each year. John is also an avid writer and has authored two books “The Definitive Guide to SugarCRM: Better Business Applications” and “Building on SugarCRM” as well as published articles on IBM Developerworks, Apple Developer Connection, and PHP Architect.
I am a data custodian and data engineer at the Port of Rotterdam. It is my job to protect data in our data lake and to make sure that, when users are allowed to see it, it can easily be found. We use Ranger for security and Atlas to store metadata. I've familiarized myself quite a bit with these two products, not only through the UI, but also very much through the REST API of both.
It hasn't always been like this. In 2016 I decided to leave my 20 years of Oracle database experience behind to learn Big Data and become a data engineer. While learning, I started making YouTube videos on things I learned along the way. I experimented with open source products like Hadoop, MongoDB, and Elasticsearch, and showed my work on my YouTube channel (https://www.youtube.com/channel/UCVt-roCRXgsNtIb0WjfTa4g).
Ruslan is a Scala and Spark enthusiast with a degree in High Performance Computing. He lives in Prague, Czech Republic. Until 2016 he worked on seismic wave simulation software for the oil and gas industry in Kiev, Ukraine. He also taught parallel programming at a university there for some time. Now he works for ABSA, a multinational African bank, as a big data engineer on the Big Data R&D team. His interests include distributed systems and concurrent and parallel programming.
Senior Big Data Engineer with experience in Information Retrieval and Machine Learning.
Michael Ger has 25 years of experience in industry and information technology strategy roles. He has deep cross-industry knowledge of business processes related to product development, manufacturing, supply chain, and customer experience. As General Manager of Manufacturing and Automotive at Hortonworks, Mike helps drive the solution vision and go-to-market strategies for each industry, partnering with industry leaders to drive next-generation business insights through big data analytics. Prior to joining Hortonworks, Mike spent over 20 years as a leader in Oracle's automotive industry group, served as an automotive management consultant at A.T. Kearney, and worked as a manufacturing engineer at General Motors (Saturn division).
Sanjay is a telecom industry veteran with extensive experience in the strategy and execution of next generation data-centric industry solutions for enhancing customer experience, optimizing network operations and increasing revenue generation through digital transformation.
Sanjay currently leads the global communications & media business at Hortonworks, helping communication service providers leverage Hadoop and NiFi to transform their data into a force of business growth and competitive differentiation, and to drive data-centric solutions for the connected world and for Industrial IoT. Previously, he held executive roles leading the global telecom industry business, solutions, and strategy at VMware, Pivotal, Progress Software, Savvion, and TMNG, and has helped drive business transformation, end-to-end architecture, and new business initiatives at Bell Canada, Level3, AT&T Canada, Iowa Telecom, ETB, ATT/Ameritech, Wingcast, and other global service providers.
With more than 20 years in the IT industry, Olaf has gained experience as an architect, developer, administrator, trainer, and project manager in many different areas. Storing and processing huge amounts of data has always been a focal point of his work. At ORDIX AG, he is responsible for Big Data and Data Warehouse technologies and solutions. He has built up a powerful team of Big Data consultants, created several training courses, speaks at conferences, and regularly publishes technical articles.
Past talks:
Cloudera Sessions, Munich 2017: Fast analytics on fast data - Kudu as a storage layer for banking applications
DOAG, Nuremberg 2017: Big Data - Quickstart with Hadoop and the Oracle Big Data Platform
Big Data Summit, Hanau 2018: Fast analytics on fast data - Kudu as a storage layer for banking applications
Strata Data Conference, London 2018: Fast analytics on fast data - Kudu as storage layer for banking applications
DOAG Big Data Days, Dresden 2018: Fast analytics on fast data - Digitalization of credit processes with Kudu
IT Tage, Frankfurt 2018: Fast analytics on fast data - Digitalization of credit processes with Kudu
Publications:
Big Data - Information relived (Part VII): Apache Kudu; ORDIX news 2/2017
Information relived (Part II): Apache Cassandra; ORDIX news 2/2015
Information relived (Part I): How big is Big Data?; ORDIX news 1/2015
New features in Oracle Database 12c (Part V): Enhancements in the DWH environment; ORDIX news 3/2014
Document shredder: Decomposing and reassembling XML documents with the DB2 XML Extender; XML Magazin issue 1.2004
Wangda Tan is a Project Management Committee (PMC) member of Apache Hadoop and an engineering manager on the YARN team at Hortonworks. His main areas of work are Hadoop YARN GPU isolation and the resource scheduler, and he has participated in features such as node labeling, resource preemption, and container resizing. Before joining Hortonworks, he worked at Pivotal on integrating OpenMPI/GraphLab with Hadoop YARN. Before that, he worked at Alibaba cloud computing, where he participated in creating a large-scale machine learning, matrix, and statistics computation platform using Map-Reduce and MPI.
Sunil Govindan has been contributing to the Apache Hadoop project since 2013 in various roles: as a Hadoop contributor, a Hadoop committer, and a member of the Project Management Committee (PMC). He works as a Staff Software Engineer at Hortonworks on the YARN team. His major contributions are YARN scheduling improvements such as intra-queue resource preemption, support for multiple resource types in YARN with resource profiles, and absolute resource configuration support in queues. He also drove efforts with the community to improve the YARN UI for a better user experience. Before Hortonworks, he worked at Juniper on a custom resource scheduler. Prior to that, he was with Huawei, working on platform and middleware distributed systems, including the Hadoop platform. He loves reading books, is an ardent music lover, and is passionate about go-green efforts.
As a former retail and consumer goods executive and more recently as a business strategy consultant and solution provider, Brent has extensive experience working with a variety of retail and consumer goods companies to provide thought leadership and help them to align strategic business objectives with technology and analytic solutions to create a differentiated competitive advantage in the marketplace.
He has an extensive track record of imagining, designing and executing high impact business solutions, driving innovation and transformation for retail and consumer goods organizations. Brent is passionate about analytics, emerging technologies, consumer behavior, collaborative supply chains and retail transformation.
As General Manager of Retail and Consumer Goods Solutions at Hortonworks, Brent is responsible for driving the solution vision and go-to-market strategies for each segment. As industry leaders increasingly invest in Big Data Analytics to help drive transformation within their organizations, Brent engages globally to share and discuss ideas, deliver keynote talks, and facilitate workshops to help define and create solutions that drive next-generation insights and positive business outcomes across the value chain.
Billie Rinaldi is a Principal Software Engineer I at Hortonworks, currently prototyping new features related to long-running services and containers in Apache Hadoop YARN. Prior to August 2012, Billie engaged in big data science and research at the National Security Agency, where she provided early leadership for Apache Accumulo. Billie is a member of the Apache Software Foundation and a committer for Apache Hadoop and a number of other Apache projects in the Hadoop ecosystem. She holds a Ph.D. in applied mathematics from Rensselaer Polytechnic Institute.
Gour is a Principal Engineer on the Apache Hadoop/YARN team.
Yanbo is a staff software engineer at Hortonworks. He works at the intersection of systems and algorithms for machine learning and deep learning. He is an Apache Spark PMC member and contributes to several open source projects such as TensorFlow, Keras, and XGBoost. He delivered the implementation of several major Spark MLlib algorithms. Prior to Hortonworks, he was a software engineer at Yahoo! and France Telecom, working on machine learning and distributed systems.
Cindy Maike is General Manager of Insurance at Hortonworks, responsible for strategy and customer engagement for the global insurance industry. Cindy partners with customers and partners to leverage analytics for modern business growth, exploring new uses of data to drive innovation in the evolving insurance industry. With 25 years of experience in consulting and advisory services in the financial services and insurance industries, she has partnered with clients around the world to drive business results through analytics- and technology-enabled business strategies.
Cindy has deep industry knowledge in both insurance claims and underwriting, and focuses on using analytics and data to improve business outcomes. She has held roles at the IBM Watson Solutions Group, Carrier Insurance, and as Head of Strategy at ACORD, and is a co-founder of Strategy Meets Action Research and Advisory Services. She is also a Certified Public Accountant.
Owen O'Malley is a co-founder and technical fellow at Hortonworks, a rapidly growing company (25 to 1,000 employees in 5 years), which develops the completely open source Hortonworks Data Platform (HDP). HDP includes Hadoop and the large ecosystem of big data tools that enterprises need for their data analytics. Owen has been working on Hadoop since the beginning of 2006 at Yahoo, was the first committer added to the project, and used Hadoop to set the Gray sort benchmark in 2008 and 2009. In the last 8 years, he has been the architect of MapReduce, Security, and now Hive. Recently he has been driving the development of the ORC file format and adding ACID transactions to Hive. Before working on Hadoop, he worked on Yahoo Search's WebMap project, which was the original motivation for Yahoo to work on Hadoop. Prior to Yahoo, he wandered between testing (UCI), static analysis (Reasoning), configuration management (Sun), and software model checking (NASA). He received his PhD in Software Engineering from University of California, Irvine.
Srikanth Venkat currently manages the security and governance portfolio of products at Hortonworks, including Apache Knox, Apache Ranger, Apache Atlas, platform-wide security, and the Hortonworks DataPlane Service. Prior to joining Hortonworks, he held various roles in areas such as cloud services, marketplaces, security, and business applications. Srikanth has leadership experience across a range of areas, from product management to strategy and operations to technical architecture, spanning startups to global enterprises including Telefonica, Salesforce, Cisco-Webex, Proofpoint, Dataguise, Trilogy Software, and Hewlett-Packard. Srikanth holds a PhD in engineering with a focus on artificial intelligence from the University of Pittsburgh, an MBA in General Management from Indiana University, and a Master's in Global Management from the Thunderbird School of Global Management. As a hobby, he enjoys data science and machine learning and tinkering with big data technologies.
Solution Architect with more than 15 years of experience in DWH and BI, and in recent years also in Big Data environments.
Designed many data warehouses, including a Customer Intelligence System, a Marketing Data Warehouse, an Enterprise Data Warehouse, and a Basel II data warehouse. Currently responsible for the architecture of a Data Lake, Data Factory, and Data Lab based on Cloudera and Hortonworks technology.
Patrick de Vries is an OSS manager (Demand) and IT architect with more than 10 years of experience in mobile networks. He has a passion for data management and data warehousing. In this time he has successfully led many IT architecture, design, and implementation activities for operations readiness, assurance, service quality, and business continuity projects. Currently, Patrick works at KPN in the Netherlands on further improving both customer experience and operational excellence, particularly within the ever-growing digital/online environment.
I am an employee of T-Mobile Austria (TMA), working as a data scientist on both commercial and network-related use cases. I have profound experience in designing and implementing both analytical and machine learning algorithms in the Apache Hadoop ecosystem.
My interest in data modeling started six years ago when I got the chance to work on experimental data during the pursuit of my PhD degree. Unlike structural modeling, where the true nature of the data generating process can be modeled in closed form, the majority of processes in the real world are too complex to be understood in their entirety. Consequently, I gained expertise in several discriminative models, such as dynamic stochastic models, time-series analysis, and state-space modeling.
From the beginning of my career as a data scientist at TMA, I have made extensive use of machine learning and applied research in market science and mobile networks, which has led to several data science projects delivering high business value. Since industry demands end-to-end working solutions rather than just prototype models, I have mastered several programming languages and have also served as the data engineer for most of my use cases.
A brief list of my skills, along with my projects and publications, can be found on my LinkedIn profile: https://www.linkedin.com/in/wasifmasood/