As Chief Technology Officer (CTO), Scott is responsible for Hortonworks' overall technical vision and oversees the company's engineering, product management and support organizations. Scott has spent his entire career in the data industry. Most recently, as President of Teradata Labs, he provided visionary direction for the research, development and sales support activities related to Teradata's integrated data warehouse, big data analytics and associated solutions. He also executed Teradata's technology investments and acquisitions related to Teradata Labs solutions. Scott holds a bachelor's degree in electrical engineering from Drexel University.
Bernard Marr is an internationally best-selling business author, keynote speaker and strategic advisor to companies and governments. He is one of the world's most highly respected experts when it comes to business performance, digital transformation and the intelligent use of data in business.
Bernard is a regular contributor to the World Economic Forum and writes weekly columns for Forbes, Huffington Post and LinkedIn where his articles are read by millions. His expert comments also regularly feature on TV and radio (e.g. BBC News, Sky News and BBC World) as well as in high-profile publications such as The Times, The Guardian, The Financial Times, the CFO Magazine and the Wall Street Journal.
Bernard is a major social media influencer with 1.2 million followers on LinkedIn, the world’s leading professional network, as well as a strong presence on Twitter, Facebook and SlideShare. In fact, Bernard is recognised by LinkedIn as one of the top 5 business influencers in the world.
He has written 15 books and hundreds of high-profile reports and articles, including the international best-sellers ‘Data Strategy’, ‘Big Data in Practice’, ‘Big Data’, ‘Key Business Analytics’, ‘Key Performance Indicators’, ‘The Intelligent Company’, ‘Managing and Delivering Performance’ and ‘Strategic Performance’.
Enza is an analyst on the Security and Risk team and a Certified Information Privacy Professional (CIPP/E). Enza helps organizations worldwide design and execute strategies that leverage privacy to drive differentiation in the marketplace. Her research focuses on the impact of internet regulations and data privacy issues on digital business models as well as the technologies that underpin them. Her research coverage includes data protection, privacy in the context of cloud computing, analytics, and the internet of things. Enza speaks regularly at national and international executive conferences and her research is often quoted in the media, including The Wall Street Journal and Forbes.
Andreas Kohlmaier is the Head of Data Engineering for Munich Re, one of the world's leading reinsurers. He joined the company in 2008 as an IT architect. He currently heads the Data Engineering team in Munich, which is setting up the group-wide data lake and supporting the transformation of Munich Re into a data-driven organization. He holds a Master's in Computer Science and has more than 15 years of experience in IT and data projects. His main areas of expertise are microservices, data management, IT architecture and agile project management.
Mandy Chessell CBE FREng CEng FBCS is an IBM Distinguished Engineer, Master Inventor and Fellow of the Royal Academy of Engineering. Mandy is a trusted advisor to executives from large organizations, working with them to develop their strategy and architecture relating to the governance, integration and management of information. She is also driving IBM's strategic move to open metadata and governance through the Apache Atlas open source project. More information about Mandy’s work and publications can be found on LinkedIn: http://www.linkedin.com/pub/mandy-chessell/22/897/a49 and her blog at https://poimnotes.blog/
Dr. Frank Saeuberlich is the Director of Advanced Analytics and Data Innovation at Teradata Germany and a member of the Teradata Germany leadership team. Previously, as Director of Data Science International, he led Teradata's international Data Science team in a role combining demand generation across EMEA and APJ with analytical innovation. He joined Teradata in July 2012. Before joining Teradata, Frank worked at Urban Science International, where he was responsible for the Urban Science Customer Solutions practice in Europe. He holds a doctoral degree in economics and a Master's degree in economic mathematics from the University of Karlsruhe.
Jamie Engesser is the Senior Vice President of Product Management at Hortonworks. With more than twenty years of professional experience in the software industry, Jamie most recently had global responsibility for the Hortonworks Solutions Engineering organization, which is focused on guiding organizations through identifying their Hadoop opportunity, from business case to proof of concept to successful project delivery. Prior to Hortonworks, Jamie led global solutions engineering teams at SpringSource and VMware. Jamie has extensive experience spanning open source, Java, Platform as a Service (PaaS), application infrastructure and big data. He holds a Bachelor of Science in Industrial Engineering from Montana State University.
Srikanth Venkat currently works on the security and governance portfolio of products at Hortonworks, including Apache Knox, Apache Ranger, Apache Atlas, platform-wide security and the Hortonworks DataPlane Service. Before joining Hortonworks, he held a variety of roles in areas such as cloud services, marketplaces, security and business applications. Srikanth has leadership experience across product management, strategy and operations, and technical architecture, with roles at companies ranging from startups to global enterprises, including Telefonica, Salesforce, Cisco-Webex, Proofpoint, Dataguise, Trilogy Software and Hewlett-Packard. Srikanth holds a PhD in engineering with a focus on artificial intelligence from the University of Pittsburgh, an MBA in General Management from Indiana University, and a Master's in Global Management from the Thunderbird School of Global Management. His hobbies are data science and machine learning, and he enjoys tinkering with big data technologies.
Kamélia completed her engineering education at Centrale Paris, where she earned her Master's degree in open computer systems. After a first professional experience at Bouygues Telecom as an IT Architect, Kamélia joined the Renault Group in 2015. She first held the position of Big Data Architect in the "Innovation and Architecture" IT department. In just a few months, Kamélia established herself as one of the big data specialists at Renault.
Currently, she holds the position of Data Lake Squad Lead, responsible for designing, building and operating the Renault Datalake platform. Her mission is to achieve operational excellence for the Data Lake by optimizing performance in a continuous improvement cycle.
Artem Ervits is a Solutions Engineer at Hortonworks, a leading big data software company based in Santa Clara, California. The company develops and supports Apache Hadoop for the distributed processing of large data sets across computer clusters. Artem is an organizer of the NYC Future of Data Meetup and a contributor to Apache Oozie. He works with the Workflow Manager and Oozie product management and engineering teams to shape the future direction of Workflow Manager and Oozie. You may reach him with questions on Oozie, HBase, Phoenix, Pig and Hive.
Clay Baenziger is an architect on the Hadoop Infrastructure Team at Bloomberg. Clay comes from a diverse background in systems infrastructure and analytics, ranging from operating systems engineering to financial portfolio analytics. He has been involved in the Hadoop ecosystem for eight years and gives numerous talks each year on Bloomberg's community contributions.
Andy LoPresto is a Senior Member of Technical Staff at Hortonworks, working on the Hortonworks DataFlow team. He serves as a committer and Project Management Committee member for Apache NiFi, an open source, robust, secure data routing and delivery system. Andy focuses on NiFi security, including identity management, TLS negotiation, data protection, access control, encryption and hashing. He also works on Apache MiNiFi, a subproject that advances edge data collection, including secure command and control and immediate data provenance and governance. He has spoken on NiFi in Singapore, Tokyo, Melbourne, Berlin, Sydney and San Jose, as well as in Brussels at FOSDEM '17 and at the OpenIoT Summit 2017.
With 20 years of experience in cyber security, Andre Fucs de Miranda has spent a good part of his life securing systems around the globe.
Andre is an Apache NiFi PMC member and an accidental big data user.
Nicolette is the Head of Data Engineering at Santander UK Technology. She is a technical manager with 18 years' experience in the IT services industry, and has previously led large-scale, multi-location change projects comprising data provision, managed MI and data warehouses, ETL, system integration and IT alignment.
I have a great passion for technology. I am always keen to learn new skills and love how fast things are changing in this industry. I am a technical person and also like to connect with other passionate people in the technology sector. Knowledge sharing is very important to me, and I love mentoring colleagues. My latest challenge is to learn everything about Docker.
I am really a hands-on person and love to solve difficult and challenging problems. There is no greater joy than getting things done and having a good working system. I am always ready to go the extra mile to deliver a finished and properly implemented project.
Carsten works as a Big Data Architect at Audi Business Innovation GmbH. Audi Business Innovation GmbH, a subsidiary of Audi, is a small company focused on developing new mobility services as well as innovative IT solutions for Audi. Carsten has more than 10 years' experience in delivering data warehouse and BI solutions to his customers. He started working with Hadoop in 2013 and has since focused on both big data infrastructure and solutions. Currently Carsten is helping Audi extend their big data platform, based on Hadoop and Kafka, to the cloud. Further, as a solution architect he is responsible for developing and running analytical applications on that platform.
Matthias Graunitz is a big data architect at Audi, where he works at the company's Competence Center for Big Data and Business Intelligence. He is responsible for the architectural framework of the Hadoop ecosystem and a separate Kafka cluster, as well as for the data science toolkits provided by the Competence Center to all business departments at Audi. Matthias has more than 10 years' experience in the field of business intelligence and big data.
Cindy Maike is the General Manager of Insurance at Hortonworks, responsible for strategy and customer engagement for the global insurance industry. Cindy partners with customers and partners on analytics for modern business growth and explores new uses of data to drive innovation in the evolving insurance industry. With 25 years of consulting and advisory experience in the financial services and insurance industries, she has partnered with clients around the world to drive business performance through business strategies powered by analytics and technology.
Cindy has deep industry knowledge in both insurance claims and underwriting, and focuses on using analytics and data to improve business outcomes. She has held roles with the IBM Watson Solutions group, Carrier Insurance and ACORD (as Director of Strategy), and co-founded Strategy Meets Action Research and Advisory Services. She is also a certified public accountant.
Rachit Arora is a Senior Developer at IBM India Software Labs. He is a key designer of IBM's cloud offerings for the Hadoop ecosystem. He has extensive experience in architecture, design and agile development. Rachit is an expert in application development for cloud architectures and in development using Hadoop and its ecosystem. He has been an active speaker on big data technologies at various conferences, such as the Information Management Technical Conference 2015, ContainerCon NA 2016 and Container Camp Sydney 2017.
I am Amer Issa, a Platform and Security Architect at Hortonworks based in Canada. I specialize in Hadoop platform engineering with an emphasis on DevOps and security. I started my career as a Systems Engineer and transitioned into cloud and big data. I have spent the majority of my career in highly governed and secured environments, mostly financial and health related. Currently I act as an SME for security implementations and the integration of Hadoop with organizations' existing frameworks. I also help transform organizations toward a mentality of automation and infrastructure as code.
George Vetticaden is Vice President of Product Management within Emerging Products at Hortonworks. In this role, he is responsible for the strategic vision and concerted delivery across all the products within Emerging Products, including Hortonworks DataFlow (HDF), which comprises NiFi, Storm, Kafka, Streaming Analytics Manager and Schema Registry, as well as solutions built on top of the platform, including cybersecurity (Metron).
Over the last 5 years at Hortonworks, George has spent a lot of time in the field with enterprise customers, helping them build big data solutions on top of Hadoop. In his previous role at Hortonworks, George was the Director of Solutions Engineering, where he led a team of 15 senior Big Data Solution Architects helping large enterprise customers take use cases from inception, design and architecture through to implementation, monetizing data with Hadoop. In addition, he is a committer on the Apache Metron project. George graduated from Trinity University with a BA in Computer Science.
(LinkedIn Profile: https://www.linkedin.com/in/georgevetticaden)
Beniamino Del Pizzo is a big data engineer working on data ingest and focusing on Apache Kafka and Spark applications at Data Reply IT, a leading Italian consulting company in the big data industry.
Beniamino has a master’s degree in computer engineering, with a thesis on “an evolutionary approach on Apache Spark to learn TSK-fuzzy systems for big data”. He is passionate about big data, streaming applications, distributed computation and data analysis, and he gave a talk at the Strata Data Conference in New York in September 2017.
Marco is a Big Data Engineer in Milan and currently works as consultant at Data Reply IT where he develops effective big data solutions based on open-source technologies. He holds a master’s degree in Computer Science from the Università degli Studi di Milano-Bicocca. He is passionate about big data, streaming applications and data analysis.
Tim Spann was a Senior Solutions Architect at AirisData, working with Apache Spark and machine learning. Previously he was a Senior Software Engineer at SecurityScorecard (http://securityscorecard.com/), helping to build a reactive platform for monitoring real-time third-party vendor security risk in Java and Scala. Before that he was a Senior Field Engineer for Pivotal, focusing on Cloud Foundry, HAWQ and big data. He is an avid blogger and the Big Data Zone Leader for DZone (https://dzone.com/users/297029/bunkertor.html).
He runs the very successful Future of Data Princeton meetup, with over 830 members, at http://www.meetup.com/futureofdata-princeton/.
He is currently a Solutions Engineer at Hortonworks in the Princeton New Jersey area.
You can find all the source and material behind his talks at his Github and Community blog:
Stephan Ewen is a PMC member and one of the original creators of Apache Flink, and co-founder and CTO of data Artisans (data-artisans.com). He holds a Ph.D. from the Berlin University of Technology.
Flavio Junqueira leads the Pravega team at DellEMC. He holds a PhD in computer science from the University of California, San Diego and is interested in various aspects of distributed systems, including distributed algorithms, concurrency, and scalability. Previously, Flavio held a software engineer position with Confluent and research positions with Yahoo! Research and Microsoft Research. Flavio has contributed to a few important open-source projects. Most of his current contributions are to the Pravega open-source project, and previously he contributed and started Apache projects such as Apache ZooKeeper and Apache BookKeeper. Flavio co-authored the O’Reilly "ZooKeeper: Distributed process coordination" book.
Jose Luis is Group Chief Architect at Orwell Group, producers of ipagoo, a new FinTech challenger offering a multi-entity/country/currency Banking Platform as a Service and Compliance as a Service.
Jose Luis Caldeira was the Lead Architect for Big Data, Information Architecture & Financial Crime, holding technology leadership for 6 years over information management, financial crime systems and regulatory reporting, as well as introducing new architecture paradigms and technology to the Group. He was the information architecture lead for the PSD2 program.
Prior to joining Lloyds Bank, Jose Luis worked in the Santander Group as Lead UK Architect for the migration of the Cahoot bank and the integration of the Alliance & Leicester Bank web presence into the Santander systems. He was responsible for the group strategy for customer communications and messaging systems in the UK.
Gian Marco became Orwell's Head of Engineering in 2017, managing strategic partnerships in the technology space and the engineering experience across the whole group, with developer centres in London, Madrid and Poland. He joined Orwell in 2014 as Head of Mobile Payments Architecture. Prior to Orwell, he designed mobile solutions and digital signage interactive applications for famous worldwide firms and their stores.
He has more than a decade of experience working as a freelance IT consultant in the start-up scene, specialising in finance, cryptocurrencies and software and hardware bio-metrics applied to retail and customer engagement.
During 2013 he focused on designing and managing the development of a payment platform that integrated the financial, mobile and online worlds which later became the point of contact with Orwell.
He led the development of several software architectures interacting with both the banking world and the crypto-currencies universe.
Michael Ger has 25 years of experience as an industry and information technology strategy executive. He has deep cross-industry knowledge of business processes related to product development, manufacturing, supply chain and customer experience. As General Manager of Manufacturing and Automotive at Hortonworks, Mike helps drive solution vision and go-to-market strategy for these industries, partnering with industry leaders to drive next-generation business insights through big data analytics. Prior to joining Hortonworks, Mike spent more than 20 years as a leader in Oracle's automotive industry business, served as an automotive management consultant at A.T. Kearney, and worked as a manufacturing engineer at General Motors (Saturn division).
Wade Salazar serves Hortonworks as a Solutions Engineer in Houston, TX. Educated as an electrical engineer and fluent in many programming languages, he worked in the control systems trade for over a decade before joining Hortonworks. Wade's operational technology background adds depth to Hortonworks' solutions engineering team and benefits customers in the energy and manufacturing sectors, where the most valuable data sources are often control systems. Outside of work Wade is passionate about technology, the outdoors, cooking, dogs, horses and Texas lore.
Gerhard Messelink is Solutions Engineer for the Data Automation team at KPN.
Gerhard has a background in infrastructure automation, web/API development and data flow automation.
His main goal is to promote CI/CD and DevOps and to enable team autonomy with (open source) tools and technologies.
He has a passion for big data, streaming applications, data analysis, and the cloud and container solutions that are rapidly changing the world of data warehousing.
John Mertic is the Director of Program Management for The Linux Foundation. Under his leadership, he has helped ODPi, R Consortium, and Open Mainframe Project accelerate open source innovation and transform industries. John has an open source career spanning two decades, both as a contributor to projects such as SugarCRM and PHP, and in open source leadership roles at SugarCRM, OW2, and OpenSocial. With an extensive open source background, he is a regular speaker at various Linux Foundation and other industry trade shows each year. John is also an avid writer and has authored two books “The Definitive Guide to SugarCRM: Better Business Applications” and “Building on SugarCRM” as well as published articles on IBM Developerworks, Apple Developer Connection, and PHP Architect.
Maryna Strelchuk is an Information Architect and Application Developer at ING. She has a background in software development and artificial intelligence. Currently, she is involved in the Open Metadata initiative, including Apache Atlas.
I am currently a Big Data Delivery Lead at Optum (UnitedHealth Group), based in Dublin, Ireland. My teams and I deal with projects in the PI space (fraud, waste and abuse, claims processing) and the wider healthcare space. I previously worked at IBM Ireland, where I switched my career path from test automation to analytics and machine learning.
I am passionate about coding, Big Data, AI/ML/DL, test automation, Open Source, DevOps and cooking (home made pizza is my speciality).
I share my tech thoughts through my blog (http://googlielmo.blogspot.ie/) and DZone (https://dzone.com/users/2532948/virtualramblas.html) where I am a Golden Member.
During 2018 I presented at several international conferences, such as DataWorks Summit Berlin, Google I/O Extended, Predictive Analytics World for Industry 4.0 and many others.
My first book "Hands-on Deep Learning with Apache Spark" (https://tinyurl.com/y7d98s64) is going to be released in December 2018.
Raphael Radowitz is a Developer at SAP Labs Korea with comprehensive, detailed and up-to-date experience working with global teams on creating and implementing databases, project management, generating reports, Spark and Hadoop. He recently submitted his Master's thesis, "Evaluation of TPC-H on Spark and SparkSQL in ALOJA", and obtained his Master's degree in Management Information Systems at the Johann Wolfgang Goethe University in Frankfurt, Germany. His core fields of expertise include:
• Databases, Hadoop Ecosystem, Apache Spark and Business Intelligence
• Reporting Technologies
• Project Management and Microsoft Project 2007 to 2013
2017 Master of Science: Management Information Systems at Johann Wolfgang Goethe University Frankfurt am Main, Germany
2013 Semester abroad, Sungkyunkwan University (성균관대학교) Seoul, South Korea
2012 Student of the Master Degree programme Management Information Systems (1 semester) at Technische Hochschule Mittelhessen (THM), University of Applied Sciences, Germany
2012 Bachelor of Science: Management Information Systems at Fachhochschule Frankfurt am Main, University of Applied Sciences Frankfurt, Germany
2009 Semester abroad, Konkuk University, (건국대학교) Seoul, South Korea
Adrian is a principal engineer at Hotels.com in London where he works with teams focusing on the services powering their big data processing systems. Prior to this Adrian led the big data team at Last.fm and has been using Hadoop and various other parts of the big data ecosystem since 2007. He has previously spoken at Strata and co-wrote a chapter in the early editions of the seminal “Hadoop: The Definitive Guide”.
Elliot is a principal engineer at Hotels.com in London where he designs tooling and platforms in the big data space. Prior to this Elliot worked in Last.fm’s data team, developing services for managing large volumes of music metadata.
Ohad Shacham is a Senior Research Scientist at Yahoo Research. He works on scalable big data and search platforms. Most recently, he has focused on extending the Omid transaction processing system with high availability and on integrating Omid with Apache Phoenix. Ohad received his PhD in concurrent software verification from Tel Aviv University CS in 2012. Prior to Yahoo, Ohad led the SAT-based formal verification activities at IBM Research and worked on automatic software vectorization at Intel.
Bas van de Lustgraaf has been working since 2015 as a Big Data Engineer in the R&D department of QSight IT and its legal predecessors. His interest in data was first sparked during his Master's in Information Systems Development. This resulted in a job as Business Information Developer at Onsight Solutions, where he developed the company's first data warehouse. After two years he decided to jump into the world of big data. Since then he has been involved in installing and maintaining the company's big data cluster, developing big data applications, and playing a big part in designing and building the new security platform based on Apache Metron.
I have been working in the software industry since 1997. I started out as an SAP functional consultant, later working with the Business Intelligence solutions from SAP as a consultant at several large international companies, handling huge amounts of data and, later, big data. I focus on the user perspective, finding meaningful information in the massive amount of data to give end users at all levels insight into their processes. I design dashboards and reports that provide KPIs to support the PDCA cycle.
I work with several tools, either commercial or open source.
Thomas Phelan is cofounder and chief architect of BlueData. Previously, Tom was an early employee at VMware; as senior staff engineer, he was a key member of the ESX storage architecture team. During his 10-year stint at VMware, he designed and developed the ESX storage I/O load-balancing subsystem and modular “pluggable storage architecture.” He went on to lead teams working on many key storage initiatives, such as the cloud storage gateway and vFlash. Earlier, Tom was a member of the original team at Silicon Graphics that designed and implemented XFS, the first commercially available 64-bit file system.
Dongjoon Hyun is an Apache REEF PMC member and committer. Currently, he works for Hortonworks and is focusing on Apache Spark and Apache ORC.
Gustavo Arocena is a Big Data Architect at the IBM Toronto Lab, with over 15 years of experience in database technology. Recently, Gustavo led the design and implementation of several components of the Big SQL engine, including the Hive-compatible IO layer, the INSERT statement, the integration with Apache Spark and the high-performance ORC ingestion layer.
Gustavo has several publications and has presented at multiple conferences. He holds a Master's degree in Computer Science from the University of Toronto in the area of database language processing.
Chris Nauroth is an Apache Software Foundation member who has worked as a committer and PMC member for Hadoop, ZooKeeper and Yetus. He is a software architect at Disney working on shared services for large-scale consumer messaging and content management. His responsibilities include evaluation and technology selection for big data solutions.
Sandeep Chandra is the Director of the SDSC Health CI Division and the Executive Director of Sherlock Cloud. Since joining SDSC in 2003, he has worked on different aspects of infrastructure deployment for scientific data management, as a principal investigator and in other leadership roles across a wide range of cross-disciplinary NSF, DOE, NIH and foundation initiatives. As Director of the Health Cyberinfrastructure division, Sandeep provides strategic vision, direction, management and implementation of concepts and methodologies for building Sherlock's technology platforms, including cloud computing and big data solutions. Sandeep brings strong knowledge of the healthcare ecosystem with a deep focus on compliance, including NIST, FISMA and HIPAA requirements. He led the deployment of Sherlock's compliant services in AWS, making it the first compliant hybrid cloud platform in academia. Sandeep holds an MS in Computer Science from North Carolina State University and has over 15 years of experience providing direct policy, business, operations and technology advice to leadership at federal, state and academic institutions.
Boaz is a big data expert with a passion for interactive business intelligence performance. Boaz is the brain behind Jethro's breakthrough architecture and leads the team that develops it. He has 25 years of experience in developing search engine, indexing and database technologies. Before founding Jethro, Boaz led the development of many large-scale information retrieval and big data projects.
Thomas architects the S3 layer and the Hadoop integration of Western Digital's object storage system 'ActiveScale'. Together with the team, he has contributed multiple improvements to the Apache Hadoop s3a connector and has co-architected the HDFS Provided Storage feature.
He joined WD through the Amplidata acquisition. Previously, he obtained a Computer Science PhD in queueing theory at Ghent University, Belgium.
Ewan is a Software Architect at Western Digital where he works on tiered storage between HDFS and S3. Previously, he's worked on embedded systems, held various roles in the finance industry - mostly managing market data, and worked as a systems administrator for Ghent University's HPC group.
Nishant is a Druid PMC member and Software Engineer at Hortonworks. He is part of the Business Intelligence team at Hortonworks. Prior to that, he was part of the Metamarkets backend team, responsible for analytics infrastructure, including real-time analytics in Druid. He holds a B.Tech in Computer Science from the National Institute of Technology, Kurukshetra, India.
Jason Plurad is a software developer in IBM Digital Business Group. He develops open source software and builds open communities in the big data and analytics space. His focus has been on graph databases and graph analytics. He is a Technical Steering Committee member and committer on JanusGraph (scalable graph database) and Apache TinkerPop (graph computing platform). He has spoken previously at Apache: Big Data, DataWorks Summit, Graph Day, HBaseCon, Open Camps, Scylla Summit, IBM Insight, IBM InterConnect, IBM World of Watson, and local meetups.
Chin Huang is a software engineer in IBM Open Technologies and Performance. He has worked on various enterprise and open source projects. His current focus is JanusGraph and Node.js development and performance characterization.
Senior Director, Product Development
Oracle USA Inc
Prabhu Thukkaram heads engineering for the Oracle Stream Analytics cloud service and manages software development for the product in the U.S., Hungary, India, and China. As product owner and engineering lead, he provides technical evangelism and design guidance for the Oracle Stream Analytics product. In addition to managing a large technical team across diverse geographies, Prabhu is an expert in distributed systems, service-oriented architectures, data integration, big data analytics, and business intelligence software. Prabhu has more than 24 years of experience in building enterprise software products, beginning his career as a Software Engineer at a startup in India, where he and his team built a platform to emulate expensive IBM mainframe environments on low-cost Unix systems. Prabhu subsequently joined Oracle in 1993, contributing to many of Oracle's analytics products. He has a Master's in Computer Science from the University of Colorado and a Master's in Business Administration from Santa Clara University. Prabhu also holds several patents in the areas of stream processing and distributed systems and speaks every year at Oracle OpenWorld.
Hoyong is an Architect at Oracle, where he leads development of the Stream Analytics Platform, focusing on architecture and design for the next-generation self-service Stream Analytics cloud service. He has over 25 years of software engineering experience and has been working on real-time streaming and business intelligence products at Oracle for the last 13 years. He has wide experience in developing highly distributed systems and scalable technologies. Before entering the industry, he spent three years in a PhD program at the Korea Advanced Institute of Science and Technology, primarily researching scientific and high-performance computing. He is an expert in geospatial technologies and has presented at several symposiums. Hoyong also holds 17+ patents and has been a primary inventor in the areas of stream processing, distributed computing, and geospatial processing.
Solution Architect with more than 15 years' experience in DWH and BI, and in recent years also in Big Data environments.
Designed many data warehouses, including a Customer Intelligence System, a Marketing Data Warehouse, an Enterprise Data Warehouse and a Basel II data warehouse. Currently responsible for the architecture of the Data Lake, Data Factory and Data Lab based on Cloudera and Hortonworks technology.
Martijn Groen has 22 years of experience in the Financial Services sector and about 6 years working within the (Big) Data domain. He is well known as someone who gets things moving and takes on challenging projects. He is PMP (Project Management Professional) certified and also has many years of experience as a business analyst in the Financial Services sector. Currently he is working as Delivery Manager at Rabobank, responsible for the development and maintenance of data environments within the Client Data (Distribution) domain. The Rabobank DevOps teams of Distribution are working on a Hadoop data lake, based on Cloudera and HDF (Hortonworks), Oracle data warehouses and MS SQL Server environments.
Dave Russell is a Principal Solutions Engineer at Hortonworks and spends his time guiding a variety of different organisations through the complexities of their big data journeys. He has most recently focused on cybersecurity and on applying real-time big data solutions to that increasingly hot area. He has spent over 20 years in open source, much of that in scale-out areas such as cloud and big data. Dave is also a founding co-host of the Roaring Elephant Podcast about Apache Hadoop, advanced analytics and the surrounding ecosystem.
Nick Pentreath is a principal engineer at IBM's Center for Open-Source Data & AI Technologies (CODAIT), where he works on open-source machine learning projects. Previously, he cofounded Graphflow, a machine learning startup focused on recommendations. He has also worked at Goldman Sachs, Cognitive Match, and Mxit. He is a committer and PMC member of the Apache Spark project and author of Machine Learning with Spark. Nick is passionate about combining commercial focus with machine learning and cutting-edge technology to build intelligent systems that learn from data to add business value.
Ian Pillay (25yrs) is a Hadoop Administrator at Standard Bank in South Africa. He is an enthusiastic Computer Science graduate with a keen interest in all things technology, whether it be hardware, software, or the security surrounding them.
He is passionate about new technology and the open source landscape, and loves to learn new things. He has set up Hadoop clusters using Ambari as well as Apache's own flavour, and has implemented Kerberos and AD/LDAP integration from the OS up, including SSL and SSSD-like solutions. He also has intermediate-level experience in MySQL cluster administration, and naturally a decent level of Linux (SuSE, CentOS, Ubuntu) administration. He does not know everything, but what he does know he will share, and what he does not, he will learn from others.
Not all of his experiences have been without issues. He is fairly experienced when it comes to problem solving and solution tackling, and while he might not program for a ‘living’, he does get his fix of it as explained below. (Languages include: Java, C#, JavaScript, HTML5, CSS, SQL; learning Python)
-- Non-Work Related --
In his spare time, he immerses himself in game development (including programming, 3D modeling, graphic design, story production, marketing and a host of other game development necessities) from start to finish using open source technology (Blender3D, Unity). He has published games to Google’s Play Store and, in the near future, the Steam store. He is also an ex-professional gamer, having represented his country internationally in e-sports at the Interactive E-Sports Federation (IeSF) World Cup in South Korea.
Bradley Smith is an information scientist specializing in big data. After graduating from university with Honours in 2015, he joined Standard Bank South Africa as a distributed technologies administrator and has been integral to the development of multiple key initiatives. His accomplishments include the design and deployment of HDP as a "data lake", the introduction of automation frameworks and the development of countless other pilot projects from "A" (Ansible) to "Z" (Zabbix). Bradley attained the title of HDP Certified Administrator from Hortonworks in 2016.
For the past year, Bradley has focused on automation to enable security, governance and multi-tenancy. The prime directive has been to provide and support a distributed compute platform focused on flexibility, allowing data scientists to execute effectively and efficiently - without introducing risk. Going forward, Bradley is exploring opportunities in enterprise architecture and cloud computing.
Erik Zeitler holds a PhD from Uppsala University, where he performed research on highly scalable data stream management systems. This work is published at premier conferences such as EDBT and VLDB.
Erik has been architecting Klarna's data infrastructure for analytics and real-time services. He regularly speaks at universities, meetups and trade shows.
Erik regularly serves as an expert advisor for research and development at the European Commission.
Per Ullberg is a dedicated lead Java developer with a strong interest in quality-enabling processes and tools. Per works at Klarna Bank in the teams that implement and enable the lambda architecture as a long-term solution for analytics at Klarna. Per previously worked at companies such as NasdaqOMX, Unibet and Bwin, with a strong track record of enabling quality by introducing testability to domains where it was previously lacking. Per strongly believes that developers are the biggest influencers of product quality.
Evangelos Motesnitsalis is a Big Data Engineer in the IT Department of CERN. He supports the scientific communities at CERN in their quest to perform big data analytics over physics and accelerator data. He has led the development of the Hadoop-XRootD Connector library, a project that provides direct access to data in XRootD-based storage systems from Hadoop and Spark. He is a former Escalation Engineer and Big Data DevOps Support Engineer at Amazon Web Services in Dublin, Ireland. He obtained his MSc in Distributed Systems from Imperial College London in 2015 and has also studied at King's College London and the Aristotle University of Thessaloniki.
As the General Manager, Energy, Kenneth is responsible for establishing and leading the execution of a winning go-to-market strategy for Hortonworks in the energy industry. Duties include development of successful thought leadership, and proactively working with Hortonworks sales & product management, prospects, customers, and partners to identify new solutions and industry specific requirements for the energy market. He also ensures appropriate energy industry expertise is provided and leveraged during the sales and solution implementation cycle, and helps develop successful strategies to drive adoption of Hortonworks products and services in the segment.
Kenneth has over 15 years of proven success as a sales executive across multiple industries, the last seven in the energy sector, delivering high-value products and solutions, with a track record of developing reciprocal, rewarding relationships with his customers, partners, and colleagues.
Chris has over 10 years of experience in building and supporting Oil and Gas data solutions. His experience spans data management, global deployments, enterprise architecture and big data.
Dr. Dhabaleswar K. (DK) Panda is a Professor and University Distinguished Scholar of Computer Science at the Ohio State University. He obtained his Ph.D. in computer engineering from the University of Southern California. His research interests include parallel computer architecture, high-performance computing, communication protocols, big data, deep learning, file systems, network-based computing, and Quality of Service. He has published over 450 papers in major journals and international conferences related to these research areas. Dr. Panda and his research group members have been doing extensive research on modern networking technologies including InfiniBand, Omni-Path, High-Speed Ethernet and RDMA over Converged Enhanced Ethernet (RoCE). His research group is currently collaborating with National Laboratories and leading InfiniBand and Ethernet/iWARP companies on designing various subsystems of next-generation high-end systems. The MVAPICH2 (High-Performance MPI over InfiniBand, iWARP, and RoCE) open-source software package, developed by his research group, is currently being used by more than 2,925 organizations worldwide (in 86 countries). This software has enabled several InfiniBand clusters (including the 1st one) to get into the latest TOP500 ranking. These software packages are also available with the OpenFabrics stack for network vendors (InfiniBand and iWARP), server vendors and Linux distributors. The new RDMA-enabled Apache Hadoop and Memcached packages, consisting of acceleration for HDFS, MapReduce, RPC and Memcached, are publicly available from http://hibd.cse.ohio-state.edu. Dr. Panda's research is supported by funding from the US National Science Foundation, the US Department of Energy, and several industry partners, including Intel, Cisco, SUN, Mellanox, QLogic, NVIDIA and NetApp. He is an IEEE Fellow and a member of ACM. More details about Dr. Panda, including a comprehensive CV and publications, are available at http://web.cse.ohio-state.edu/~panda.2/.
Dr. Xiaoyi Lu is a Research Assistant Professor in the Department of Computer Science and Engineering at the Ohio State University, USA. His current research interests include high-performance interconnects and protocols, Big Data, the Hadoop/Spark/Memcached ecosystem, parallel computing models (MPI/PGAS), virtualization, cloud computing, and deep learning. He has published over 100 papers in international journals and conferences related to these research areas. He has been actively involved in various professional activities (PC Co-Chair, PC Member, and Reviewer) for academic journals and conferences. Dr. Lu is currently leading the research and development of RDMA-based accelerations for Apache Hadoop, Spark, HBase, and Memcached, and the OSU HiBD micro-benchmarks, which are publicly available from http://hibd.cse.ohio-state.edu. These libraries are currently being used by more than 290 organizations from 34 countries, and more than 27,700 downloads have taken place from the project site. He is a core member of the MVAPICH2 (High-Performance MPI over InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE) project, where he leads the research and development of MVAPICH2-Virt (high-performance and scalable MPI for hypervisor- and container-based HPC clouds). He is a member of IEEE and ACM. More details about Dr. Lu are available at http://web.cse.ohio-state.edu/~lu.932/.
Jon Ratcliff is currently a Managing Enterprise Architect within O2. His key responsibilities are across the Digital domain where he manages a team of Enterprise Architects as well as leading the overall architecture strategy for Business Intelligence and Data.
Jon joined O2 in 2015 after working at EE and Orange, where he held several roles in the Business Intelligence area. Jon is both a Chartered Engineer and a Chartered Manager and holds an MSc in Advanced Digital Systems and a BEng(Hons) in Electrical and Electronic Engineering.
Kieran Miller delivered the Financial Data Hub with O2, acting as the Accenture Programme Manager from concept through to Run & Operate. Kieran has 10 years of analytics experience within Accenture, working across multiple Comms and Media clients. His background is in architecture and design, before moving into complex E2E delivery.
Abdelkrim is a Solution Engineer at Hortonworks with 10 years of experience in several distributed systems (Big Data, IoT, Peer-to-Peer and Cloud). Before joining Hortonworks, he held several positions including Big Data lead, CTO and Software Engineer at several companies. He has spoken at various international conferences and published several scientific papers in well-known IEEE and ACM journals. Abdelkrim holds PhD, MSc and MSe degrees in Computer Science.
Richard Hogg is responsible for the cross-IBM GDPR capabilities, services and solutions. He has 15+ years of global experience across ECM and #infogov, and spent the last 3 years working with heavily-regulated clients worldwide on their GDPR journeys. He is a workstream leader of IBM’s internal GDPR Readiness Program and a frequent speaker on GDPR and InfoGov at AIIM, ARMA, MER, LegalTech, Insight, World of Watson, InfoGovCon and IPBA.
I have been working in IT environments for 15 years. After several years in IT engineering, I started working with Big Data systems 4 years ago. I spent 2 years helping SNCF build its Big Data platform. I joined EDF a couple of months ago and am currently the Big Data Tech Lead for the IT Production department of EDF.
David has over 20 years’ technical leadership expertise and has led the development and management of complex BI solutions, supporting technical architectures for a wide range of organisations spanning SME start-ups to large enterprise.
In his role at Worldpay, David specialises in developing and delivering the Enterprise Data Platform, a multi-tenant highly secure Hadoop platform for decision engines, analytics and reporting using his experience and knowledge in technical architecture, data modelling, ETL design, data quality, and metadata management.
A key aspect of David’s role also involves acting as the lynchpin between Worldpay’s commercial and technical business leaders by regularly engaging at the executive level. David also manages cross-cultural teams in the analysis of technical infrastructures and the delivery of innovative and successful change programmes.
Diego Baez is General Manager of Financial Services for Hortonworks, with a focus on Big Data and machine learning applied to the financial industry. He has over 20 years of experience in technology, initially at IBM Speech Recognition Labs and AT&T Research Unix Systems Labs, and then with a focus on financial technology and algorithmic trading. During his 18 years in the Financial Services industry, Mr. Baez held positions at Citigroup, JPMorgan, Lehman Brothers, Knight Securities and Goldman Sachs.
Holden is a transgender Canadian open source developer advocate @ Google with a focus on Apache Spark, BEAM, and related "big data" tools. She is the co-author of Learning Spark, High Performance Spark, and another Spark book that's a bit more out of date. She is a committer and PMC member on Apache Spark, and a committer on the SystemML and Mahout projects. She was tricked into the world of big data while trying to improve search and recommendation systems and has long since forgotten her original goal.
Jesús Camacho Rodríguez is a Member of Technical Staff at Hortonworks, and a PMC member of Apache Hive and Apache Calcite. His current work focuses on extending and improving query processing and optimization, ensuring that the increasingly complex workloads supported by Hive are executed quickly and efficiently. Prior to that, Jesús obtained his PhD in Computer Science from Paris-Sud University and Inria, working on large-scale Web data management. Jesús received his Computer Science and Engineering degree from University of Almería, Spain.
Billie Rinaldi is a Principal Software Engineer I at Hortonworks, currently prototyping new features related to long-running services and containers in Apache Hadoop YARN. Prior to August 2012, Billie engaged in big data science and research at the National Security Agency, where she provided early leadership for Apache Accumulo. Billie is a member of the Apache Software Foundation and a committer for Apache Hadoop and a number of other Apache projects in the Hadoop ecosystem. She holds a Ph.D. in applied mathematics from Rensselaer Polytechnic Institute.
Alan Gates is a co-founder of Hortonworks and an original member of the engineering team that took Pig from a Yahoo! Labs research project to a successful Apache open source project. Alan is a PMC member of Apache Hive, Pig, and many other Apache projects. As part of the Apache Incubator PMC, he has mentored many Apache communities. Alan holds a BS in Mathematics from Oregon State University and an MA in Theology from Fuller Theological Seminary. He is also the author of Programming Pig (O'Reilly Press). Follow Alan on Twitter: @alanfgates
Ali Bajwa is Principal Partner Solutions Engineer at Hortonworks, where he helps partners learn about and integrate with open source Big Data technologies. He has developed Ambari plugins for NiFi and Zeppelin and training materials related to security/governance. Prior to joining Hortonworks, he worked as a Principal Member of Technical Staff at Oracle.
Ashish Thapliyal is a Principal Program Manager in Azure HDInsight, where he focuses on building and delivering open source Big Data technologies such as Hadoop, Hive, LLAP and HBase as managed PaaS services to customers.
Wangda Tan is a Project Management Committee (PMC) member of Apache Hadoop and engineering manager of the YARN team at Hortonworks. His main focus areas are Hadoop YARN GPU isolation and the resource scheduler, and he has participated in features such as node labeling, resource preemption and container resizing. Before joining Hortonworks, he worked at Pivotal on integrating OpenMPI/GraphLab with Hadoop YARN. Before that, he worked at Alibaba Cloud Computing, where he participated in creating a large-scale machine learning, matrix and statistics computation platform using MapReduce and MPI.
Sunil Govindan has been contributing to the Apache Hadoop project since 2013 in various roles as contributor, committer and Project Management Committee (PMC) member. He works as a Staff Software Engineer at Hortonworks on the YARN team. He mainly contributes to YARN scheduling improvements such as intra-queue resource preemption, support for multiple resource types with resource profiles, and absolute resource configuration in queues. He also drove community efforts to improve the YARN UI for a better user experience. Before Hortonworks, he worked at Juniper on a custom resource scheduler. Prior to that, he was with Huawei, working on platform and middleware distributed systems, including the Hadoop platform. He loves reading books, is an ardent music lover and is passionate about go-green efforts.
Yanbo is a staff software engineer at Hortonworks. He works at the intersection of systems and algorithms for machine learning and deep learning. He is an Apache Spark PMC member and contributes to several open source projects such as TensorFlow, Keras and XGBoost. He delivered the implementation of some major Spark MLlib algorithms. Prior to Hortonworks, he was a software engineer at Yahoo! and France Telecom, working on machine learning and distributed systems.
Piotr began his career at IBM in 2009 at IBM’s Centre for Advanced Studies as a Prototype Developer, concentrating on graph theory and linked data applications. After two years in research, Piotr joined the DB2 SQL compiler development team to lead improvements in automation and problem detection in the DB2 engine.
In 2014 Piotr transitioned to manage the DB2 Build team, where his team was able to redesign and innovate, drastically improving DB2 organization operations. In 2016 Piotr moved to lead the IBM Watson Data Platform Security, Integration and DevOps teams.
Most recently Piotr was promoted to Program Director of IBM Data Science Experience, a new data science and machine learning platform for private clouds.
Mingjie Tang is an engineer at Hortonworks. He works on Spark SQL, Spark MLlib and Spark Streaming. He has broad research interests in database management systems, similarity query processing, data indexing, big data computation, data mining and machine learning. Mingjie completed his PhD in Computer Science at Purdue University.
Dr. Alex Xiaoyang Yang is the CTO and Chief Architect of IBM China Development Laboratory.
He has extensive experience with big data analytics in FSS, Transportation, and Telecom.
A seasoned software engineer and Apache Member, Kellen has spent two years working on large-scale machine translation systems. He has recently focused on optimizing deep learning and machine translation models for use in environments ranging from large cloud-based services to IoT devices at the edge. He is an active developer contributing to the Apache MXNet project, and has previously contributed to the Apache Joshua (incubating) project.
Suneel is a member of the Apache Software Foundation and a committer and PMC member on Apache Mahout, Apache OpenNLP and Apache Streams. He has presented in the past at Flink Forward, Hadoop Summit, Berlin Buzzwords, the Machine Learning Conference, Big Data Tech Warsaw and Apache Big Data.
Davor serves as chair of the Apache Beam Project Management Committee and is the CEO of Operiant, a company he founded that helps users get Big Data to production. He was previously a software engineer at Google, where he worked on Google Cloud Dataflow, the predecessor to Apache Beam, from its beginnings.
Apache Hadoop committer and Apache Hadoop PMC member.
He has also held senior engineer positions at Sun Microsystems and INRIA, where he developed software for distributed systems and grid/utility computing infrastructures.
Sanjay holds a PhD in Computer Science from the University of Waterloo, Canada.
Isabel Drost-Fromm is Open Source Strategist at Europace AG Germany. She's a member of the Apache Software Foundation, co-founder of Apache Mahout and mentored several incubating projects. Isabel is interested in all things FOSS, search and text mining with a decent machine learning background. True to the nature of people living in Berlin she loves having friends fly in for a brief visit - as a result she co-founded and is still one of the creative heads behind Berlin Buzzwords, a tech conference on all things search, scale and storage.
An electrical engineer by training, Abhas Ricky is a veteran strategy consultant and passionate entrepreneur. A keen innovator, he has unique experience incubating digital startups and growing them into lean businesses generating more than $10 million in revenue.
He was selected as a Global Shaper by the World Economic Forum, named alongside Nobel laureate Malala Yousafzai in Real Leaders Magazine's "100 Visionaries under 30", and chosen as one of Founders Forum's "Founders of the Future under 35".
Paul Codding is a Product Management Director focusing on Apache Hadoop operations at Hortonworks. Paul joined Hortonworks six years ago as a Solutions Engineer and was responsible for helping customers successfully deploy Apache Hadoop at scale. With those lessons learned, he currently leads the Hortonworks SmartSense and Apache Ambari projects.
I'm a senior software engineer at Hortonworks. Currently I'm working on the Apache Ambari project, where I mainly focus on Ambari Log Search and Solr.
Nick has worked for Teradata as a field data scientist and in product marketing for advanced analytics. Nick earned his PhD in business and his MS in statistics and machine learning from Stanford University. Prior to joining Teradata, he was a professor at the Kellogg School of Management at Northwestern University, and a data scientist at McKinsey & Co.
Santiago Cabrera-Naranjo is Consulting Director at Teradata Think Big Analytics. He is responsible for advising enterprises on their big data and advanced analytics strategy, leading them to innovative best-practice implementations with the goal of speeding up time-to-market regardless of their technology stack and initial organization. Furthermore, Santiago was responsible for launching and growing Teradata’s hub in Berlin and is active as a keynote speaker.
His computer science studies helped him gather experience from different points of view across several sectors, from building up data infrastructures for Rocket Internet ventures to founding stampfy himself after winning the German StartupBus in 2011 at the age of 24. Before joining Teradata, Santiago was responsible for building up the analytical landscape for BILD and for the adoption of new cutting-edge technologies at Axel Springer SE.
Steve Loughran works on Hadoop at Hortonworks, currently on cloud storage integration, including improving integration with Amazon's S3 in Hadoop, Hive and Spark.
He's the author of Ant in Action, a member of the Apache Software Foundation, and a committer on the Hadoop core since 2009. Prior to joining Hortonworks in 2012, he was a Research Scientist at HP Laboratories.
He lives and works in Bristol, England.
Liam began his career as a software development engineer with Digital, Japan, in 1984 and worked with IDA/Forbairt/Enterprise Ireland in Japan from 1986 to 1996. From 1996 to 2000 he founded and ran Biasia Ltd., a successful technology trading company based in Tokyo, before founding Bluemetrix in 2001 to enter the emerging Web Analytics market.
He holds a 1st Class Honours degree in Computer Science from UCC, Cork and an Honours MBA in International Business from Jochi University, Tokyo.
He writes about Hadoop automation and security at https://www.bluemetrix.com/blog
Balaji Ganesan is a co-founder and CEO at Privacera, a leading data security startup and partner for Hortonworks. Balaji leads all functions in Privacera, steering the company in its vision to help enterprises manage the security and compliance risks that come with data.
Balaji previously led security and governance work at Hortonworks, where he led the effort to make Hadoop enterprise-ready. Balaji came into Hortonworks through the acquisition of his previous startup, XA Secure. The XA Secure product was eventually open sourced and became Apache Ranger.
Since the beginning of 2017, Alberto has been the Big Data Architect at G-Research, a leading FinTech company that uses scientific techniques, big data and world-class technology to predict future movements in financial markets. Prior to G-Research, Alberto joined Hortonworks as a Professional Services Engineer in 2014 and helped kick-start the Big Data lakes at HSBC and Lloyds Banking Group in the UK.
Arindam is a Principal Group Program Manager in the Microsoft Big Data group where he leads the Azure HDInsight team. He comes to big data after extensive experience in Transaction Processing, Authentication & Authorization technologies in Windows Core OS, and designing the Microsoft Dynamics security model. He is currently focused on making it easy for developers to build analytics applications using tools of their choice on cloud platforms that offer best of breed security, compliance, efficiency and cost-control.
Felix is a software engineer and a data architect at BMW, where he works on the autonomous driving project. He graduated from the University of Applied Sciences in Augsburg. While studying computer science, he worked as an intern for several automotive companies. After graduating from university, he worked at system integration and consulting companies focusing on IT projects for automotive companies. During these years Felix developed deep expertise in automotive IT systems, especially in the area of data management and data mining. He joined BMW in 2015 in the central IT department, building Big Data architectures based on Hadoop frameworks. He is now part of the Big Data team that supports the development of autonomous driving, where he designs and implements Hadoop platform-based solutions to ingest and process massive amounts of vehicle data and provide the data to the autonomous driving development team.
Dogukan is a software engineer with many years of experience in building scalable solutions for various domains like big data, machine learning and cloud computing. He studied computer science and engineering at Marmara University in Istanbul. Currently he works on the autonomous driving project at BMW and prior to BMW he worked at SAP, Siemens and Sony for different research and development projects.
Software consultant, Big data systems, Java performance
Magnus Runesson is a Senior Data Engineer at Svenska Spel, responsible for architecting, developing and operating their Hadoop environment. He has a Master of Science in Engineering from Linköping University, Sweden. Magnus has long experience of developing and operating distributed systems with high requirements on availability, performance and integrity, from organizations such as Spotify and the Swedish weather service. Magnus is the lead developer of the open source tool cobra-policytool and was the driving force behind open sourcing it.
Cloudbreak Partner Engineer at Hortonworks June 2016 - Present
Technology Consultant at SAS Institute June 2011 - June 2016
Software Developer at Alerant Zrt. August 2009 - June 2011
Senior Member of the Technical Staff at Hortonworks
April 2015 - Present
Senior Software Engineer at SequenceIQ
February 2014 - April 2015 (1 year 3 months)
Software Developer at EPAM Systems
June 2012 - February 2014 (1 year 9 months)
Developing web applications using the following technology stack: Spring (-JPA, -MVC, -Integration),
Hibernate, JSP, JSF, jQuery
Software Developer at NSN - Nokia Solutions and Networks
January 2011 - June 2012 (1 year 6 months)
Development of the analysis system for network elements (MSS, MGW).
-Sr. Director, Product Management, Hadoop Core, Data Science and Data Management, Hortonworks, 2016-Present
-CEO/Co-Founder, Stealth Mode Startup, 2015-2016
-Head of Product Management & Technical Marketing, Skyera (acquired by Western Digital), 2013-2015
-Director, Product Management, EMC, 2011-2013
-Staff Engineer->Sr. Product Manager, Brocade, 2001-2011