Sophie is a futurist whose research entails meeting weirdos and troublemakers in off-the-beaten-track labs, makerspaces, and garages around the globe: Shenzhen, Seoul, Detroit, Mumbai. As part of her research, she consults for the exec teams and boards of large companies on understanding the explosive new technologies defining the new economy. Sophie is also CEO of a data and AI company, 1715 Labs, which she is currently spinning out of the Astrophysics department at Oxford University with her academic co-founder. This follows a career building businesses for WIRED magazine, for Singularity University at the NASA Research Park in Silicon Valley, and, prior to California, for the interdisciplinary Oxford Martin School at Oxford University, where Sophie raised more than $120m of research investment.
Since March 2017, Peter Weckesser has served as Airbus Defence and Space's Digital Transformation Officer. He joined Airbus from Siemens, where he was COO of Product Lifecycle Management, leading the Siemens IoT and Digital Enterprise business and activities. Prior to this, he held various executive-level positions at Siemens, including CEO of Industry Services, CEO of the Value Service Business Unit, and Vice-President of "Human Machine Interface".
Peter Weckesser holds a degree in Physics and a PhD degree in Computer Science, both from the University of Karlsruhe (Germany), although he also spent a year studying at the Armstrong State University in the USA as part of his education in Informatics.
Hilary is general manager, machine learning, at Cloudera. She was the founder and CEO of Fast Forward Labs, an applied machine learning research company that Cloudera acquired in 2017. She also serves as data scientist in residence at Accel Partners, a leading global venture capital firm. Previously, Hilary was chief scientist at bitly. She co-hosts DataGotham, a conference for New York's home-grown data community, and co-founded HackNY, a non-profit that helps engineering students find opportunities in New York's creative technical economy. She is on the board of the Anita Borg Institute and an advisor to several companies, including Sparkfun Electronics and Wonder. Hilary served on Mayor Bloomberg’s Technology Advisory Board and is a member of the Brooklyn hacker collective NYC Resistor.
Dr. Kerem Tomak brings more than 15 years of experience as a marketing scientist and executive. He joins from Sears Holdings, where as Chief Marketing and Analytics Officer he was responsible for the retail company's digital marketing. He studied mathematics, economics and information systems in Turkey and the USA. Dr. Tomak embarked on his professional career as an assistant professor at the University of Texas at Austin.
He has expertise in the areas of omnichannel and cross-device attribution, price and revenue optimization, assessing promotion effectiveness, yield optimization in digital marketing and real-time analytics. He has managed mid and large-size analytics and digital marketing teams in Fortune 500 companies and delivered large-scale analytics solutions for marketing and merchandising units. His out-of-the-box thinking and problem-solving skills led to 4 patent awards and numerous academic publications. He is also a sought-after speaker in Big Data and BI Platforms for Analytics.
As chief marketing officer, Mick leads Cloudera’s worldwide marketing efforts, including advertising, brand, communications, demand, partner, solutions, and web. Mick has had a successful 25-year career in enterprise and cloud software. Prior to joining Cloudera in 2016, he served as CMO of sales acceleration and machine learning company InsideSales.com. Under Mick’s leadership, InsideSales pioneered a shift to data-driven marketing and sales that has served as a model for organizations around the globe. Before InsideSales, Mick served as global vice president of marketing and strategy at Citrix, where he led the company’s push into the high-growth desktop virtualization market. Before Citrix, Mick managed executive marketing at Microsoft and held numerous leadership positions at IBM Software. Mick is an advisory board member for InsideSales and a contributing author on Inc.com. He is also an accomplished public speaker who has shared his insightful messages about the business impact of technology with audiences around the world. Mick graduated from the Georgia Institute of Technology with a bachelor of science degree in management.
In this role, he has European responsibility for the sales, strategy, management and delivery of IBM’s Cloud Software, including Hybrid Data Management, Unified Governance & Integration, Data Science & Business Analytics, Management & Platform, Integration & Development, and Digital Business Automation solutions. These are used by over 35,000 clients throughout Europe to run their enterprises efficiently and securely and to deliver engaging experiences for customers and employees, across private, public, dedicated and local clouds as well as on-premise deployments. Mr. Brown also oversees all of the Software Client Leaders and Dealmakers for Europe.
Mr. Brown's previous senior leadership roles in Software include VP WW Hybrid Cloud Software, delivering global sales, strategy and management of IBM’s Hybrid Cloud Software, including Pure Application, ITSM, WAS, DevOps & Testing, Messaging, Integration, Mobile API Economy, Process Transformation and Video Services software used by over 50,000 clients. Before that he was VP WW Sales Systems Middleware and Director of Worldwide Sales, Cloud & Smarter Infrastructure, delivering PaaS and on-premise software for major customer engagements worldwide. Earlier, as Business Unit Executive, Cloud & Smarter Infrastructure Software Canada, he was responsible for driving the sales, management and strategic direction of IBM's Tivoli middleware technologies. In the mid-2000s he was the Software Business Unit Executive for IBM's Financial Markets Software business in the UK, based out of London, covering a number of financial clients and delivering IBM software solutions across all middleware brands.
Mr. Brown also has extensive experience in Services, having worked as the Strategy leader for Global Technology Services in Canada as part of that business's board of executives. Additionally, he served as Business Unit Executive, Global Technology Services Business Continuity & Resiliency Services, IBM UK, through to 2010. In this role, he led IBM’s Integrated Technology Services and managed the company's resiliency and delivery efforts. Mr. Brown joined IBM in 1994 in the UK as a systems programmer trainee in the Warwick Laboratory.
Mr. Brown holds a BSC in Computer Science from Edinburgh Napier University and an MBA from Henley Management College.
Vaughn is a VP of Technology Alliance Partners at Pure Storage. He helps organizations capitalize on what’s possible when pairing memory-based storage technologies with traditional and next-generation applications. Prior to Pure he spent 13 years in various leadership roles at NetApp and has been awarded a U.S. patent.
Vaughn strives to simplify the technically complex and advocates thinking outside the box. You can find his perspective online at vaughnstewart.com and in print; he’s coauthored multiple books including “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”.
Szilard studied Physics in the 90s and obtained a PhD by using statistical methods to analyze the risk of financial portfolios. He worked in finance, then more than a decade ago moved to become the Chief Scientist of a tech company in Santa Monica, California doing everything data (analysis, modeling, data visualization, machine learning, data infrastructure etc). He is the founder/organizer of several meetups in the Los Angeles area (R, data science etc) and the data science community website datascience.la. He is the author of a well-known machine learning benchmark on github (1000+ stars), a frequent speaker at conferences (keynote/invited at KDD, R-finance, Crunch, eRum and contributed at useR!, PAW, EARL etc.), and he has developed and taught graduate data science and machine learning courses as a visiting professor at two universities (UCLA in California and CEU in Europe).
After finishing his computer science studies with a focus on information systems in 2011, Volker started working on backend systems at InnoGames, based in Hamburg, Germany, where he has worked with data-intensive and scalable applications throughout his corporate career. He now works as a Senior Developer Analytics, managing the data infrastructure of the company. With around 200 million registered players, InnoGames is one of the world's leading developers and publishers of online games. Currently, more than 400 people from 30 nations work in the Hamburg-based headquarters. Together with his data engineering team he built up a data pipeline and platform based on technologies such as Hadoop, Flink, Kafka and Spark. It processes more than a billion gameplay events per day in order to generate a better gaming experience.
Tim Spann was a Senior Solutions Architect at AirisData working with Apache Spark and Machine Learning. Previously he was a Senior Software Engineer at SecurityScorecard (http://securityscorecard.com/), helping to build a reactive platform for monitoring real-time third-party vendor security risk in Java and Scala. Before that he was a Senior Field Engineer for Pivotal focusing on Cloud Foundry, HAWQ and Big Data. He is an avid blogger and the Big Data Zone Leader for DZone (https://dzone.com/users/297029/bunkertor.html).
He runs the very successful Future of Data Princeton meetup with over 1192 members at http://www.meetup.com/futureofdata-princeton/.
He is currently a Senior Solutions Engineer at Cloudera in the Princeton New Jersey area.
You can find all the source and material behind his talks at his GitHub and Community blog.
I am currently Regional (EMEA) Associate Director at MSD Biotech and was previously at Optum (UnitedHealth Group) and am based in Dublin, Ireland. My teams and I deal with projects in the PI (fraud, waste and abuse, claims processing) and the healthcare space. I worked previously at IBM Ireland, where I switched my career path from Test Automation to Analytics and Machine Learning.
I am passionate about coding, Big Data, AI/ML/DL, test automation, Open Source, DevOps and cooking (homemade pizza is my speciality!). I share my tech thoughts via my blog (http://googlielmo.blogspot.ie/) and DZone (https://dzone.com/users/2532948/virtualramblas.html) where I am a Golden Member.
During 2018 I have presented at several international conferences such as DataWorks Summit Berlin, Google I/O Extended, Predictive Analytics World for Industry 4.0 and many others. My first book "Hands-on Deep Learning with Apache Spark" (https://tinyurl.com/y7d98s64) is going to be released in December 2018.
Rachit Arora is a Senior Developer at IBM India Software Labs. He is a key designer of IBM's cloud offerings for the Hadoop ecosystem. He has extensive experience in architecture, design and agile development. Rachit is an expert in application development on cloud architectures and in development using Hadoop and its ecosystem. He has been an active speaker on big data technologies at various conferences, including the Information Management Technical Conference 2015, ContainerCon NA 2016 and Container Camp Sydney 2017.
Elliot is a principal engineer at Hotels.com in London where he designs tooling and platforms in the big data space. Prior to this Elliot worked in Last.fm’s data team, developing services for managing large volumes of music metadata.
Jay is a final year student at King’s College London studying Computer Science. She joined Hotels.com in the Big Data Platform team for her industrial placement year where she spent time working with Apache Hive, modularization techniques for SQL, and mutation testing tools.
Andy LoPresto is a Senior Member of Technical Staff at Hortonworks, working on the Hortonworks DataFlow team. He serves as a committer and PMC member for Apache NiFi, an open source, robust, secure data routing and delivery system. Andy focuses on NiFi security, including identity management, TLS negotiation, data protection, access control, encryption and hashing. He also works on Apache MiNiFi, a subproject that enables edge data collection with secure command and control and immediate data provenance and governance. He has given talks on NiFi in Singapore, Tokyo, Melbourne, Berlin, Sydney, San Jose and Brussels, including at FOSDEM '17 and OpenIoT Summit 2017.
A Big Data Tech Lead at the Nielsen Marketing Cloud. I have been dealing with Big Data challenges for the past 6 years, using tools like Spark, Druid, Kafka, and others.
I'm keen on sharing my knowledge and have presented my real-life experience in various forums in the past (e.g. meetups, conferences, etc.).
Yakir Buskilla is a Director of Big Data at the Nielsen Marketing Cloud. His fields of interest are Big Data solutions and large scale machine learning.
Don Bosco Durai (Bosco) is a thought leader in enterprise security and a committer on open source projects such as Apache Ranger, Apache Ambari, and Apache HAWQ. He has also contributed to the security of most of the Hadoop components. Bosco was the co-founder of XA Secure, which is the genesis of Apache Ranger. Bosco is currently the co-founder of Privacera, where he is tackling the data security challenges of modern data architectures, such as big data and cloud, in which large datasets constantly move between different environments, which can result in major security breaches or compliance violations if not managed properly. Privacera automates the discovery of sensitive data, performs transparent encryption/anonymization, manages access policies and monitors access.
Madhan Neethiraj is an Apache committer and PMC member for the Apache Atlas and Apache Ranger projects. He works at Hortonworks as Sr. Director of Engineering on the Enterprise Security Team. His contributions include Apache Ranger features such as the audit framework, stack model, tag-based policies, and masking and row-filter policies, and Apache Atlas features such as the V2 APIs and search enhancements. Prior to Hortonworks, Madhan was at Oracle developing the security access management suite, governance and real-time fraud detection/prevention products. Prior to Oracle, he was with Bharosa Inc., responsible for the development of a real-time fraud detection solution for financial institutions, healthcare and ecommerce.
- Committer of Apache Impala (May, 2018~)
- Senior software engineer at SK Telecom (Mar, 2017~)
Leads the scrum for cloud platform development using Kubernetes, Docker, Apache Druid and Apache Hadoop.
Designed and implemented Dockerized DevOps framework.
- Senior software engineer at SAP Labs (Apr, 2014 ~ Feb, 2017)
Development of SAP HANA in-memory engine
- Software engineer at SAP Labs (Jan, 2008 ~ Mar, 2014)
Development of SAP HANA in-memory engine
- Internship at Samsung Electronics (Mar, 2003 ~ Dec, 2005)
With more than fifteen years of experience in Java technologies, Monica is currently responsible for defining Big Data architectures for Engineering’s Data and Analytics Center of Excellence. She mainly leads the Group in the development of projects and POCs, offers consulting services to clients and assists in the pre-sales phase by contributing expertise on Big Data technologies. She runs courses both internally and for clients, and also works to disseminate Engineering’s expertise in this context, both nationally and internationally.
Christopher Crosbie has over fifteen years of experience developing and deploying data technology in enterprise environments. He is currently on the Cloud Partner Engineering team at Google, where he serves as a trusted advisor to software vendors that build Data, Analytics and ML solutions on the Google Cloud platform.
Previous to joining Google, Chris was a development manager at Amazon and before that he headed up the data science team at Memorial Sloan Kettering Cancer Center where he implemented the enterprise Hortonworks architecture and strategy. Chris started his career as a biostatistics application engineer at the NSABP, a not-for-profit clinical trials cooperative group supported by the National Cancer Institute. He holds an MPH in Biostatistics and an MS in Information Science.
Nishant is a Druid PMC member and Software Engineer at Hortonworks, where he is part of the Business Intelligence team. Prior to that he was part of the Metamarkets backend team, responsible for analytics infrastructure, including real-time analytics in Druid. He holds a B.Tech in Computer Science from the National Institute of Technology, Kurukshetra, India.
Flavio Junqueira is a senior director of software engineering at Dell EMC, where he leads the Pravega team. He is interested in various aspects of distributed systems, including distributed algorithms, concurrency, and scalability. Previously, Flavio held an engineering position with Confluent and research positions with Yahoo Research and Microsoft Research. He contributes to Apache projects, including Apache ZooKeeper (as PMC and committer), Apache BookKeeper (as PMC and committer), and Apache Kafka. Flavio coauthored the O’Reilly ZooKeeper book. He holds a PhD in computer science from the University of California, San Diego.
He has more than 15 years of Java experience and over those years has worked with almost every form of Java solution, from low-latency multithreaded applications to highly distributed enterprise applications, as a developer, architect and trainer. He currently works on Apache big data projects and has created various containerized solutions for components of the Hadoop ecosystem.
Founder of the first Hungarian Java User Group and a regular speaker at meetup events and conferences.
He is a committer on the Apache Hadoop and Apache Ratis projects, working on the Apache Hadoop Ozone project and the dockerization of Apache Hadoop.
Magnus Runesson is a Senior Data Engineer at Tink, responsible for architecting, developing and operating Tink's big data environment. He has a Master of Science in Engineering from Linköping University, Sweden. Magnus has long experience developing and operating distributed systems with high requirements on availability, performance, and integrity, from organizations such as Spotify and the Swedish weather service. Magnus is the lead developer of the open source tool cobra-policytool and was the driving force behind open-sourcing it.
Dor has over a decade of experience developing big data products for the security, financial markets and banking industries. His research on metric learning and cost-sensitive learning has earned him publications at NIPS and AISTATS and a monetary prize in ChaLearn competitions. As a senior data scientist at ING Bank, he is involved in multiple projects modelling consumer and market behavior, optimizing business and IT processes, and contributing to the data science way-of-working, rapid exploration and continuous delivery processes.
Jose Luis has been working with data since the very beginning of his career. For more than 9 years JL has been dealing with small and big data in quite different industries such as banking, utilities, airlines and software. He has worked across the full data engineering stack, from pure development roles to operations. He is currently a Platform Manager at the ServiZurich Technology Delivery Center in Zurich, managing a Big Data Platform that enables big data processing and advanced analytics across the organization. He teaches in the BI & Big Data Master at MBIT School, spoke at Codemotion 2016, is an expert in distributed systems and data transformation, a cloud believer and a happy father of 2 kids.
Abhishek Sakhuja is a solution-oriented Big Data and Cloud Architect with 7.4 years of rich experience in framework architecture design, R&D, data modelling, development, administration and data science.
An IT professional with extensive knowledge of cross-functional IT project management and the techniques required for consulting on, designing and managing projects.
A certified cloud professional with a deep understanding of the technologies needed to design, develop and implement technical infrastructure that fosters organizational technology adoption.
Trevor Grant is PMC Member of the Apache Mahout and Apache Streams projects. He is a tinker extraordinaire and does a poor job of documenting his projects on www.rawkintrevo.org. He has an M.S. of Applied Math, a dog, a cat, an M.B.A., and a home in Chicago. He speaks a fair amount at locations internationally, and in general his talks are usually pretty fun.
Holden is a transgender Canadian open source developer advocate @ Google with a focus on Apache Spark, BEAM, and related "big data" tools. She is the co-author of Learning Spark, High Performance Spark, and another Spark book that's a bit more out of date. She is a committer and PMC member on Apache Spark and a committer on the SystemML and Mahout projects. She was tricked into the world of big data while trying to improve search and recommendation systems and has long since forgotten her original goal.
Data Processing Ninja with over 10 years of experience in the software engineering industry. PhD in distributed databases, working at allegro.pl, a petabyte-scale ecommerce platform.
Gustav is a developer focusing on big data and data science. Currently he is working in a research project at Scania setting standards for how to work with fleet telematics data. Gustav has over 20 years of experience in creating IT solutions, always in a role that has included hands-on programming.
Sara has 10+ years of experience in analytics within the manufacturing industry and has thereby acquired knowledge of the processes and methods you must master before you can do big data analytics at a traditional company. Sara has a PhD in theoretical physics and has presented her academic work at several big international conferences.
Uwe Weber has been working in IT for almost 20 years and became a Big Data Engineer at Telefónica in 2014. He initially set up Telefónica’s Hadoop environment and infrastructure and supports business departments in utilizing the “new world”.
Oscar Martinez Rubi is an expert on Big Data, Business Intelligence and Data Management solutions. He currently leads the Advanced Business Analytics department at ClearPeaks where, together with his team, he works on multiple Big Data, Cloud and Advanced Analytics projects across various industries. Before joining ClearPeaks, Oscar was an engineer at the Netherlands eScience Center, a center specialized in Big Data for scientific projects in the Netherlands. Before that, he was the Data Manager in a scientific project leveraging the LOFAR radio telescope. He also worked on the implementation of several data processing systems for an ESA space mission.
Carsten works as a Big Data Architect at Audi Business Innovation GmbH, a subsidiary of Audi and a small company focused on developing new mobility services as well as innovative IT solutions for Audi. Carsten has more than 10 years' experience delivering Data Warehouse and BI solutions to his customers. He started working with Hadoop in 2013 and has since focused on both big data infrastructure and solutions. Currently Carsten is helping Audi extend their Big Data platform, based on Hadoop and Kafka, to the cloud. Further, as a solution architect he is responsible for developing and running analytical applications on that platform.
Nicolas is a researcher overseeing the performance and scalability of new Spark releases at Databricks, where he and the Amsterdam SQL performance team are implementing the new benchmarking and monitoring infrastructure for the Databricks cloud platform. Previously, he led a project on upcoming architectures for Big Data processing at the Barcelona Supercomputing Center (BSC) - Microsoft Research joint center. Nicolas received his Ph.D. in Distributed Systems and Computer Architecture at UPC/BarcelonaTech, where he still contributes to the HPC and Data Centric Computing research groups.
Bogdan Ghit is a computer scientist and software engineer at Databricks, where he works on optimizing the SQL performance of Apache Spark. Prior to joining Databricks, Bogdan pursued his PhD at Delft University of Technology where he worked broadly on datacenter scheduling with a focus on data analytics frameworks such as Hadoop and Spark. His thesis has led to a large number of publications in top conferences such as ACM Sigmetrics and ACM HPDC.
An InfoSec Generalist. CISSP. My more than a decade of work experience revolves around all aspects of security, mainly Secure-SDLC, Source Code Analysis, Vulnerability Assessment, Penetration Testing for Web Applications, Architecture Review, Incident Response, ISMS Compliance, and performing and facilitating 3rd Party Audits. I have managed multiple Federal Data Center Operations, O/S and Application Hardening, Linux System Administration, and Solution Deployment and Integration for Federal and various State Governments. Contributor to Apache Knox, Apache Zeppelin and Apache Spark.
I also have years of experience in leading and managing a team for monitoring, securing and ensuring "Availability Round-the-Clock" for National Critical Infrastructure; solving brain-teasing needle-in-a-haystack production issues (Architecture, Application, System & Network) and incorporating new requirements; and conducting Vulnerability Analysis and analyzing VA reports to suggest corrective and preventive actions (Hotfixes/CVEs/Design Changes/Hardening/Patching/Upgrades) to the Engineering and Operations teams. Panelist for the Big Data Security Work Group.
Designing Solution Architecture and Capacity Planning for highly-available applications on Cloud/Data Centre environment.
Larry is a Senior Development Manager and Architect on the Hortonworks security team. He is also a committer and PMC member for the Apache Knox and Apache Ranger projects, committer for Apache Hadoop and contributor to security aspects of multiple Hadoop related projects. He is a veteran in the enterprise middleware space with a specialization in platform management and security. Larry has extensive experience in the Java EE application server technologies and has served on various expert groups for JSRs within the JCP for Java EE security. He has worked on various webservices technologies and stacks including SOAP and REST with a focus on security.
Suneel is a Member of Apache Software Foundation and is a Committer and PMC on Apache Mahout, Apache OpenNLP, Apache Streams. He's presented in the past at Flink Forward, Hadoop Summit, Berlin Buzzwords, Machine Learning Conference, Big Data Tech Warsaw and Apache Big Data.
Matthias has been in the software business for 25 years and has used many different technologies, such as C/C++, Java, JS, and .NET, while coding and consulting on software architectures, and was one of the first SCRUM masters at DATEV. He holds a degree as Graduate Computer Scientist from Nuremberg Tech. Alongside his work, he has always done coaching and given talks at DATEV.
His current passion is keeping the ETL platform up to date and showing product owners and management how they can get business value out of big data analysis.
John Mertic is the Director of Program Management for The Linux Foundation. Under his leadership, he has helped ODPi, R Consortium, and Open Mainframe Project accelerate open source innovation and transform industries. John has an open source career spanning two decades, both as a contributor to projects such as SugarCRM and PHP, and in open source leadership roles at SugarCRM, OW2, and OpenSocial. With an extensive open source background, he is a regular speaker at various Linux Foundation and other industry trade shows each year. John is also an avid writer and has authored two books “The Definitive Guide to SugarCRM: Better Business Applications” and “Building on SugarCRM” as well as published articles on IBM Developerworks, Apple Developer Connection, and PHP Architect.
Ruslan is a Scala and Spark enthusiast with a degree in High Performance Computing. He lives in Prague, Czech Republic. Until 2016 he worked on seismic wave simulation software for the oil and gas industry in Kiev, Ukraine. He also taught parallel programming at a university there for some time. Now he works for ABSA, a multinational African bank, as a big data engineer in the Big Data R&D team. His interests include distributed systems and concurrent and parallel programming.
Senior Big Data Engineer with experience in Information Retrieval and Machine Learning.
Gábor Hermann is a data engineer at bol.com, working on recommendations and previously on measuring user activity. Before that, he worked at the Hungarian Academy of Sciences as a researcher. His main interests are scalable machine learning and real-time data processing. He has been working with distributed stream processing and recommendation systems, and he used to contribute to the Apache Flink project.
Michael Ger has 25 years of experience in industry and information technology strategy roles. He has deep cross-industry knowledge of business processes related to product development, manufacturing, supply chain and customer experience. As General Manager for Manufacturing and Automotive at Hortonworks, Mike helps drive solution vision and go-to-market strategy for these industries, partnering with industry leaders to drive next-generation business insights through big data analytics. Prior to joining Hortonworks, Mike spent more than 20 years as a leader in Oracle's automotive industry division, served as an automotive management consultant at A.T. Kearney, and worked as a manufacturing engineer at General Motors (Saturn division).
Sanjay is a telecom industry veteran with extensive experience in the strategy and execution of next generation data-centric industry solutions for enhancing customer experience, optimizing network operations and increasing revenue generation through digital transformation.
Sanjay currently leads the global communications & media business at Hortonworks, helping communication service providers leverage Hadoop and NiFi to transform their data into a force for business growth and competitive differentiation, and to drive data-centric solutions for the connected world and for Industrial IoT. Previously, he held executive roles leading the global telecom industry business, solutions, and strategy at VMware, Pivotal, Progress Software, Savvion, and TMNG, and has helped drive business transformation, end-to-end architecture and new business initiatives at Bell Canada, Level3, AT&T Canada, Iowa Telecom, ETB, ATT/Ameritech, Wingcast, and other global service providers.
With more than 20 years in the IT industry, Olaf has gained experience as an architect, developer, administrator, trainer and project manager in many different areas. Storing and processing huge amounts of data has always been a focal point of his work. At ORDIX AG, he is responsible for Big Data and Data Warehouse technologies and solutions. He has built up a powerful team of Big Data consultants, created several training courses, speaks at conferences and regularly publishes technical articles.
Talks in the past:
Cloudera Sessions, München 2017: Fast analytics on fast data - Kudu als Storage Layer für Banking Applikationen
DOAG, Nürnberg 2017: Big Data - Quickstart mit Hadoop und der Oracle Big Data Platform
Big Data Summit, Hanau 2018: Fast analytics on fast data - Kudu als Storage Layer für Banking Applikationen
Strata Data Conference, London 2018: Fast analytics on fast data - Kudu as storage layer for banking applications
DOAG Big Data Days, Dresden 2018: Fast analytics on fast data - Digitalisierung von Kreditprozessen mit Kudu
IT Tage, Frankfurt 2018: Fast analytics on fast data - Digitalisierung von Kreditprozessen mit Kudu
Big Data - Informationen neu gelebt (Teil VII): Apache Kudu; ORDIX news 2/2017
Informationen neu gelebt (Teil II): Apache Cassandra; ORDIX news 2/2015
Informationen neu gelebt (Teil I): Wie big ist Big Data?; ORDIX news 1/2015
Neuerungen in der Oracle Database 12c (Teil V): Erweiterungen im DWH-Umfeld; ORDIX news 3/2014
Dokumentenschredder: Zerlegen und Zusammensetzen von XML-Dokumenten mit dem DB2 XML Extender; XML Magazin Ausgabe 1.2004
I am currently the Domain SME for the Data Analytics & Modelling Domain. My role requires me to architect Hadoop-based solutions for the analysis of large volumes of finance data and of outputs from complex statistical models in R/Python. I work closely with the Domain’s developers and, more broadly, the Bank’s Data Scientists, Analysts and Econometricians, as well as technology staff from Infrastructure, Security, GDPR & Storage.
Adrian Waddy works in the Data Analytics and Modelling domain within Technology at the Bank of England. He is the current Technical Lead working on the implementation of the next Data Platform, designed to provide a further step change in the Bank's capabilities to manage and analyse "Big Data". Previously he worked on a variety of Data Warehousing and Data Mart projects, mainly delivering functionality to the Prudential Regulation Authority and Financial Stability. Prior to joining the Bank in 2013, Adrian worked at a small company providing platforms to help car manufacturers manage and market their used car stocks. Different widgets, same analytical and reporting problems! Adrian is not fond of the Tube and dislikes unreliable trains, so he cycles 20 miles at each end of the day to avoid them.
Sunil Govindan has been contributing to the Apache Hadoop project since 2013 in various roles: Hadoop contributor, Hadoop committer and member of the Project Management Committee (PMC). He works as a Staff Software Engineer on the YARN team at Hortonworks. His main contributions are YARN scheduling improvements such as intra-queue resource preemption, support for multiple resource types via resource profiles, and absolute resource configuration in queues. He also drove community efforts to improve the YARN UI for a better user experience. Before Hortonworks, he worked at Juniper on a custom resource scheduler. Prior to that, he was with Huawei, working on platform and middleware distributed systems, including the Hadoop platform. He loves reading books, is an ardent music lover, and is passionate about go-green efforts.
Zhankun Tang is a self-described code monkey interested in big data, cloud and operating systems. He is currently working on the custom resource plugin and GPU topology support. Prior to Hortonworks, he worked at Intel for seven years after earning his master's degree. In recent years there, he led a small team focused on enabling Intel's cutting-edge technologies in Hadoop and on performance optimization in Apache Spark. He also did customer engagement and path-finding work in the open source community, and participates in the Apache Mesos, Kubernetes and TensorFlow communities.
As a former retail and consumer goods executive, and more recently as a business strategy consultant and solution provider, Brent has extensive experience working with a variety of retail and consumer goods companies, providing thought leadership and helping them align strategic business objectives with technology and analytics solutions to create a differentiated competitive advantage in the marketplace.
He has an extensive track record of imagining, designing and executing high impact business solutions, driving innovation and transformation for retail and consumer goods organizations. Brent is passionate about analytics, emerging technologies, consumer behavior, collaborative supply chains and retail transformation.
As General Manager of Retail and Consumer Goods Solutions at Hortonworks, Brent is responsible for driving the solution vision and go-to-market strategies for each segment. As industry leaders increasingly invest in Big Data analytics to drive transformation within their organizations, Brent engages globally, giving keynote talks and facilitating workshops to help define and create solutions that drive next-generation insights and positive business outcomes across the value chain.
I am a data scientist with Miner & Kasch, a data science consulting firm. I specialize in developing automated solutions for our clients using machine learning, specifically in the domains of computer vision and natural language processing. Additionally, I lead the deep learning training sessions that Miner & Kasch holds.
Across a variety of domains, I have successfully applied deep learning to computer vision problems involving image classification, object detection and segmentation. For natural language processing tasks, I have created neural information retrieval systems, semantic similarity search engines, and question answering systems. My favorite machine learning techniques are representation learning methods that produce surprising and useful latent variables that facilitate higher-level tasks.
Billie Rinaldi is a Principal Software Engineer I at Hortonworks, currently prototyping new features related to long-running services and containers in Apache Hadoop YARN. Prior to August 2012, Billie engaged in big data science and research at the National Security Agency, where she provided early leadership for Apache Accumulo. Billie is a member of the Apache Software Foundation and a committer for Apache Hadoop and a number of other Apache projects in the Hadoop ecosystem. She holds a Ph.D. in applied mathematics from Rensselaer Polytechnic Institute.
Robert is an AI evangelist at Cloudera and has over 12 years of experience working on various projects related to Artificial Intelligence, Robotics, IoT, Enterprise & Embedded Software. His primary focus at Cloudera is building communities around IoT, Big Data and Data Science, and enabling Enterprises to accelerate adoption of cutting edge open-source technologies (from Edge to AI).
As VP of Industry Solutions, Cindy Maike is responsible for global industry solutions and customer engagement for Cloudera. She works with customers and partners leveraging analytics for current-day business growth and exploring the use of new data sources to drive innovation in the evolving world of insurance. She has over 25 years of finance, consulting and advisory services experience in the insurance industry, working with clients globally on their business strategy and leveraging analytics and technology to further drive business results.
Cindy has deep industry knowledge in both insurance claims and underwriting, and focuses on using analytics and data to improve business outcomes. She has held roles with the IBM Watson Solutions group, Carrier Insurance, and as Strategy Director at ACORD, and she co-founded Strategy Meets Action Research and Advisory Services. She is also a Certified Public Accountant.
Owen O'Malley is a co-founder and technical fellow at Hortonworks, a rapidly growing company (25 to 1,000 employees in 5 years), which develops the completely open source Hortonworks Data Platform (HDP). HDP includes Hadoop and the large ecosystem of big data tools that enterprises need for their data analytics. Owen has been working on Hadoop since the beginning of 2006 at Yahoo, was the first committer added to the project, and used Hadoop to set the Gray sort benchmark in 2008 and 2009. In the last 8 years, he has been the architect of MapReduce, Security, and now Hive. Recently he has been driving the development of the ORC file format and adding ACID transactions to Hive. Before working on Hadoop, he worked on Yahoo Search's WebMap project, which was the original motivation for Yahoo to work on Hadoop. Prior to Yahoo, he wandered between testing (UCI), static analysis (Reasoning), configuration management (Sun), and software model checking (NASA). He received his PhD in Software Engineering from University of California, Irvine.
Srikanth Venkat currently manages the security and governance product portfolio at Hortonworks, covering Apache Knox, Apache Ranger, Apache Atlas, platform-wide security, and the Hortonworks DataPlane Service. Before joining Hortonworks, he held a variety of roles in cloud services, marketplaces, security, and business applications. Srikanth's leadership experience spans product management, strategy and operations, and technical architecture, across companies ranging from startups to global enterprises, including Telefonica, Salesforce, Cisco-Webex, Proofpoint, Dataguise, Trilogy Software, and Hewlett-Packard. He holds a PhD in engineering with a focus on artificial intelligence from the University of Pittsburgh, an MBA in General Management from Indiana University, and a master's degree in Global Management from the Thunderbird School of Global Management. In his spare time, he enjoys data science, machine learning, and tinkering with big data technologies.
Edwin Scheepstra has 15+ years of experience in different roles in the data domain. He has developed and designed multiple data warehouses in finance, telco and retail. For 10 years now, he has been working at Rabobank as lead business analyst, responsible for the data design and functional requirements of several data warehouses. For the last two years, Edwin has fulfilled that role at the Rabobank data lake, a Hadoop environment serving diverse business domains within Rabobank.
Solution Architect with more than 15 years of experience in DWH and BI, and in recent years also in Big Data environments.
Designed many data warehouses, including a Customer Intelligence System, a Marketing Data Warehouse, an Enterprise Data Warehouse and a Basel II data warehouse. Currently responsible for the architecture of the Data Lake, Data Factory and Data Lab based on Cloudera and Hortonworks technology.
Patrick de Vries is an OSS (Demand) manager and IT architect with more than 10 years of experience in mobile networks. He has a passion for data management and data warehousing. In this time he has successfully led many IT architecture, design and implementation activities for operations readiness, assurance, service quality and business continuity projects. Currently, Patrick works at KPN in the Netherlands on further improving both customer experience and operational excellence, particularly within the ever-growing digital/online environment.
I am an employee of Deutsche Telekom AG, working as a data scientist on both commercial and network-related use cases. I have profound experience in designing and implementing analytical and machine learning algorithms in the Apache Hadoop ecosystem.
My interest in data modeling started six years ago, when I got the chance to work with experimental data while pursuing my PhD. Unlike structural modeling, where the true nature of the data-generating process can be modeled in closed form, the majority of real-world processes are too complex to be understood in their entirety. Consequently, I gained expertise in several parametric models, such as dynamic stochastic models, time-series analysis and state-space modeling.
From the beginning of my career as a data scientist at T-Mobile Austria, I have made extensive use of machine learning and applied research in market science and mobile networks, which led to several data science projects of high business value. Since industry demands end-to-end working solutions rather than just prototypes, I have mastered several programming languages and have served as the data engineer for most of my use cases as well.
A brief list of my skills, projects and publications can be found on my LinkedIn profile: https://www.linkedin.com/in/wasifmasood/
Zekeriya Besiroglu has more than 18 years of progressive experience in IT. Zekeriya is one of the few people in the EMEA region recognized as an expert in RAC, Exadata, Exalogic, Big Data, Cloud and Engineered Systems. He is an Oracle ACE.
7+ years of experience in deploying and managing multi-node development, testing and production Hadoop clusters with different Hadoop components (Hive, Pig, Sqoop, Oozie, Flume, HCatalog, HBase, Couchbase, ZooKeeper, NiFi) using Cloudera Manager and Ambari.
Provides architectural and technical guidance to help customers understand the cloud and make the best use of the Amazon Web Services (AWS) cloud computing platform to build scalable, robust, and secure applications.
• Strong knowledge of configuring NameNode High Availability and NameNode Federation.
• Familiar with importing and exporting data using Sqoop from RDBMSs (MySQL, Oracle, Teradata), as well as with fast loaders and connectors.
• Experience in using Flume to stream data into HDFS.
• Programming in Python, Matlab, SQL, R, C#, Mathematica
• Tools: Pandas, NumPy, SciPy, scikit-learn, matplotlib, ggplot, Highcharts, Tableau, LaTeX
• Classification and clustering, predictive modeling, regression, dimensionality reduction (PCA, SVD), ensemble methods (boosting, bagging), anomaly detection
• Deep experience with AI/DL/ML platforms; working knowledge of scikit-learn, TensorFlow, CNTK and Caffe2 with ML/DL/AI Independent Software Vendors
• AWS Big Data services: EMR, Redshift, DynamoDB, RDS, Kinesis, Data Pipeline
Hortonworks Certified Instructor
Hortonworks Certified Hadoop Administrator
Certified Spark Developer
Certified R Programmer
Oracle Certified RAC Expert & Exadata
Certified BEA WebLogic, WebSphere and SOA System Admin
Emre Tokel - Big Data Team Leader
Central Bank of the Republic of Turkey
Emre has 15+ years of experience in software development. He has taken on roles as a developer and project manager in various projects. For two years now, he has been involved in big data and data intelligence work within the Bank. Emre has been leading the big data team since last year and is responsible for the architecture of the Big Data Platform, which is based on Hortonworks technologies. He has an MBA and is pursuing his Ph.D. in finance. Besides IT, he is a divemaster and teaches SCUBA.
Kerem Basol - Big Data Engineer
Central Bank of the Republic of Turkey
Kerem has 10+ years of experience in software development, including mobile, back-end and front-end. For the past two years he has focused on big data technologies, and he is currently working as a big data engineer. Kerem is responsible for data ingestion and for building custom solution stacks for business needs using the Big Data Platform, which is based on Hortonworks technologies. He holds an MS degree in CIS from UPenn.
M. Yağmur Sahin - Big Data Engineer
Central Bank of the Republic of Turkey
Yağmur has been developing software for 10 years. He completed his master's degree in 2016 on distributed stream processing, where he was first introduced to big data technologies. For the last two years, he has been designing and implementing big data solutions for the Bank using the Hortonworks Data Platform. Yağmur is also pursuing his Ph.D. at the Medical Informatics department of METU. He loves running and hopes to complete a marathon in the coming years.
Mohamed Mehdi BEN AISSA - Big Data Technical Architect & Infrastructure Technical Owner at Credit Agricole Group Infrastructure Platform (CA-GIP, CIB branch).
Speaker at the Paris Apache Kafka (Confluent) Meetup, the Future of Data (Hortonworks) Meetup, and Big Data Paris 2018 and 2019.
Abdelkrim is a Solution Engineer at Cloudera with 10 years of experience in several distributed systems (Big Data, IoT, peer-to-peer and cloud). Before joining Cloudera, he held several positions, including Big Data lead, CTO and software engineer, at several companies. He has spoken at various international conferences and published several scientific papers in well-known IEEE and ACM journals. Abdelkrim holds PhD, MSc and MSe degrees in Computer Science.
Paul serves CIOs, focusing on the industrial internet of things (IoT) and cloud computing. His research explores how new technologies challenge established organizations and their business models. Cloud computing may not always save its adopters money, but cloud-based approaches consistently provide the flexibility and agility required to win, serve, and retain business in the age of the customer. Paul also looks at the way IoT is creating opportunities in sectors from healthcare and utilities to manufacturing and retail. At the simplest level, connected devices are used to track the use of expensive physical assets. But the opportunities are far larger than that, altering the ways in which machines are built, sold, used, and maintained, and transforming the relationship between makers, operators, and customers in ways that we are only just beginning to understand.
Previous Work Experience
Paul joined Forrester in 2015, having spent the previous six years as an independent analyst and founder of The Cloud of Data. His areas of focus included cloud computing and big data, with a particular emphasis on exploring the implications of integrating new approaches into existing business models and workflows. Paul worked with a variety of vendors, customer organizations, and public bodies. He was also a long-serving and active contributor to Gigaom Research's Analyst Network. Prior to that, Paul was a member of the senior management team at a UK software company. He has also filled roles related to shaping public policy around the use of information technology, particularly in the culture and education sectors, culminating in a period as the first director of the UK's Common Information Environment. This initiative was supported by organizations including the National Health Service, Jisc, the British Library, BBC, and others.
Paul earned a Ph.D in archaeology from the University of York, in the UK.
Dinesh Chandrasekhar is a technology evangelist, a thought leader and a seasoned product marketer with over 24 years of industry experience. He has an impressive track record of taking new integration/mobile/IoT/Big Data products to market with a clear GTM strategy of pre- and post-launch activities. He has extensive experience working on enterprise software as well as SaaS products, delivering sophisticated solutions for customers with complex architectures. His areas of expertise include IoT, application/data integration, BPM, analytics, B2B, API management, microservices and mobility. He can articulate detailed use cases across multiple industry verticals, such as retail, manufacturing, utilities and healthcare. He is a prolific speaker, blogger and weekend coder. He currently works at Cloudera, managing their Data-in-Motion product line. He is fascinated by new technology trends, including blockchain and deep learning.
Tristan Zajonc is CTO for Machine Learning at Cloudera. Tristan previously led engineering for Cloudera Data Science Workbench and was the cofounder and CEO of Sense, an enterprise data science platform that was acquired by Cloudera in 2016. He has over 15 years experience in applied data science, machine learning, and machine learning systems development across academia and industry and holds a PhD from Harvard University.
Vidya leads product management for Machine Learning at Cloudera. Prior to Cloudera, she has helped build highly successful software portfolios in several industry verticals ranging from Telecom, Healthcare, Energy and IoT. Her experience spans early-stage startups, pre-IPO companies to big enterprises. Vidya has a Masters in Business Administration from Duke University.
Alice Albrecht leads strategic engagements and advising at Cloudera Fast Forward Labs. She is passionate about helping organizations see a return on their investment in data and helping them build the future. Previously she was a research engineer at Cloudera Fast Forward Labs, where she spent her days researching the latest and greatest in machine learning and artificial intelligence, turning that knowledge into working prototypes, and delivering concrete advice to clients. Prior to joining Cloudera, Alice worked in both finance and technology companies as a practicing data scientist, data science leader, and data product manager. In addition to helping organizations harness the power of machine learning, Alice is passionate about mentoring and helping others grow in their careers. Alice holds a PhD in cognitive neuroscience from Yale, where she studied how humans summarize sensory information from the world around them.
Rafael Arana (@RafaArana) is a veteran architect with over 19 years of software industry experience. He has an impressive track record of taking data products and technical solutions to market using Machine Learning, Big Data, integration/middleware and cloud technologies at some of the biggest European enterprises.
Rafael has extensive experience working on enterprise software as well as in the open source community, delivering all kinds of use cases across multiple industry verticals, such as banking, telco, retail and utilities.
He currently works at Cloudera as a Senior Architect. He is passionate about innovation and continuous learning, and fascinated by Deep Learning.
Rafael holds a bachelor's degree in Theoretical Physics from the Autonomous University of Madrid and a bachelor's degree in Chemical Physics from the Complutense University of Madrid.
Zuling Kang is a Senior Solutions Architect at Cloudera, Inc., and holds a Ph.D. in Computer Science. Before joining Cloudera, he worked as an architect of big data systems at China Mobile Zhejiang Co., Ltd. Currently, he has published nine academic/technical papers, of which seven are indexed by the Science Citation Index (SCI)/Ei Compendex (formerly the Engineering Index). One of these papers, "Performance-Aware Cloud Resource Allocation via Fitness-Enabled Auction," is published in the "IEEE Transactions on Parallel and Distributed Systems." Zuling's current research and engineering interests include architectures for big data platforms, big data processing technologies, and machine learning.
Chris Wallace is a data scientist at Cloudera Fast Forward Labs. He works on making breakthroughs in machine intelligence accessible and applicable in the "real world". He has previous experience doing data science in organisations both large (the UK NHS) and small (first employee at a tech startup). Chris likes building data products and cares deeply about making technology work for people, not vice versa. He holds a PhD in particle physics from the University of Durham.
Sagar Kewalramani is a Strategic Solution Architect & Data Scientist at Cloudera, where he helps customers install, build, secure, optimize and tune their Hadoop clusters. He also helps new customers transition to the Hadoop platform and implement their initial use cases. Sagar has worked with customers from all verticals, including banking, manufacturing, healthcare and retail. He has wide experience in building business use cases; high-volume, real-time data ingestion, transformation and movement; and data lineage and discovery. He has led the discovery and development of big data and machine learning applications to accelerate digital business and simplify data management and analytics. He has spoken at multiple Hadoop and Big Data conferences, including O'Reilly Strata. Previously, he was a Data Architect at Meijer Inc., where he was primarily focused on architecture design and administration roles for ETL tools and databases, including Teradata.
Justin leads Cloudera's Fast Forward Labs team. Justin is a career data professional and Data Science leader with experience in multiple industries and companies. Previously, Justin was the head of Applied Machine Learning at Fitbit, the head of Cisco’s Enterprise Data Science Office and a Big Data Systems Engineer with Booz Allen Hamilton after serving as a Marine Corps Officer, with a focus in Systems Analytics and Device Intelligence. Justin is a graduate of the US Naval Academy with a degree in Computer Science and the University of Southern California with a Master’s Degree in Business Administration and Business Analytics.
Nathan began his career at the Australian Department of Defence, working in big data and using NiFi. In 2018, Nathan moved to the US to begin a new role as a security engineer on the NiFi Hortonworks DataFlow team at Hortonworks.
Clay Baenziger is an architect on the Hadoop Infrastructure Team at Bloomberg. Clay comes from a diverse background in systems infrastructure and analytics, ranging from operating systems engineering to financial portfolio analytics. He has been involved in the Hadoop ecosystem for nine years and gives numerous talks each year on Bloomberg's community contributions.
Trained as an electrical engineer, Abhas Ricky is a veteran strategy consultant and passionate entrepreneur. A keen innovator, he has the unique experience of incubating digital startups and growing them into lean challengers generating more than $10 million in revenue.
He was selected as a Global Shaper by the World Economic Forum, named to Real Leaders Magazine's "100 Visionaries under 30" alongside Nobel laureate Malala Yousafzai, and chosen as one of Founders Forum's "Founders of the Future under 35".
Ali Bajwa is Principal Partner Solutions Engineer at Hortonworks, where he helps partners learn about and integrate with open source Big Data technologies. He has developed Ambari plugins for NiFi and Zeppelin and training materials related to security/governance. Prior to joining Hortonworks, he worked as a Principal Member of Technical Staff at Oracle.
Jason Dere has been a Software Engineer at Cloudera/Hortonworks since 2013, working on Apache Hive.
Purnima is a Big Data evangelist with 15 years of experience in the industry. Purnima comes to Hortonworks after working with IBM and ADP. She works with customers on their Cloud and Big Data strategies.
Through a decade of virtualisation and the launch of two startups, Dan has now been nerdy on three continents and in every line of business, from UK bulge-bracket banking to Australian desert public services.
Joining Hortonworks as a Solutions Engineer in 2016, he swiftly automated a sales manager using Apache NiFi and now drives the international practice for enterprise adoption and automation of the HDF product line. He also maintains a public project for Apache NiFi Python automation (NiPyAPI) on GitHub.
Dan is based in London with his family and pet Samoyed; he can most recently be found building an open source baby monitor out of Raspberry Pis while mining cryptocurrency in his shed.
Ankur currently drives the data strategy and roadmap for the Data & Analytics area within Networks at O2. Prior to this role, Ankur led BI and Big Data design at O2, where he owned the data architecture, design and validation of data deliveries.
Before joining O2, Ankur led the presales and solutions function for the Data and Analytics practice at TCS, where he helped a number of clients in the UK and EU with their data initiatives.
Ajay Kaushik is a Platform Design Lead in Telefonica’s Big Data Analytics team with a diverse background in systems engineering and platform design. He has wide ranging experience in the digital and network domain evangelising DevOps.
Gary Tomchuk is a WW Sales and Business Development Executive on the Software Defined Infrastructure (SDI) team. He is currently working with the Hortonworks teams on their partnership with IBM Storage. Gary has been with IBM for 34 years and has held many roles across hardware and software solution brands. He has been working with HPC and Big Data scale-out software-defined solutions for the past 5 years, and currently leads the sales and business development effort around IBM Spectrum Storage for analytics and AI.
Data Solutions Manager with more than 10 years of experience in Data Management, Business Intelligence and Big Data solutions.
Arda is currently responsible for the management and leadership of the IT Data Solutions team including Data Solution Architects, Big Data Engineers, DevOps Engineers and Delivery Managers. He is working with the team to explore innovative ways to deliver data driven applications.
Part of the Devoteam NL consultancy team, currently in the Lead Data & Analytics Architect role at Liberty Global, the world's largest international TV and broadband company. Designed a hybrid cloud and on-premise data-intensive platform. Collaborating closely with the customer, he shaped the design and implemented the platform to enable agile, rapid onboarding and implementation with DataOps-enabled, exploration-driven, customer-friendly application design. He has a track record of successful data-intensive solutions in Data Warehouse, Big Data, Business Intelligence and Analytics for banking, telecommunications, insurance and public administration.
Victoria Gómez is the Big Data Sales Leader for Europe at IBM, with broad experience in the IT and business consulting industry and main expertise in sales, the software business, strategy and consulting. Victoria holds a BSc in Industrial Engineering and an MSc in Robotics & Electronics from the Escuela Técnica Superior de Ingenieros Industriales de Madrid, an Executive MBA from Instituto de Empresa, and completed the Business & Industry Insight executive program at London Business School. She holds a patent for a 3D computer vision system for drones.
He graduated in Physics and Astrophysics (University of Florence) and studied Mathematical Finance (UPMC, Paris).
Before joining Experian, he worked at Banca Monte dei Paschi di Siena on the Credit Risk Trading Desk, and for more than 9 years at the Florence Municipality.
He has worked for two years at Experian as a Senior Data Scientist and is involved in Advanced Analytics solutions for Credit and Fraud Portfolio Management, PSD2, Web Data and Innovation.
Joshua Robinson is a Founding Engineer on the FlashBlade team at Pure Storage and is currently lead architect for AI and modern analytics solutions. Previously, he was a data scientist on the search infrastructure team at Google and has a PhD in Electrical Engineering from Rice University.
Niels Basjes (1971) has been working for bol.com since May 2008. Before that, he worked as a web analytics architect for Moniforce and as an IT architect/researcher at the National Aerospace Laboratory in Amsterdam.
Since the second half of the 1990s he has been working on processing problems that require scalability. He has applied these concepts in the past 20 years in aircraft/runway planning, IT operations and in the field of web analytics to build reports for some of the biggest websites in the Netherlands.
At bol.com, too, Niels's primary focus is scalability problems, and he is responsible for a shift in thinking about data and the business value it contains. Niels designed and implemented many of the personalization algorithms that are in production today at bol.com.
Niels studied Computer Science at TU Delft and holds a Business Administration degree from Nyenrode University.
Niels is an active open source developer, one of the Apache Avro PMC members, and has authored (https://github.com/nielsbasjes/) and contributed various improvements and bugfixes to projects such as Hadoop, HBase, Pig and Flink.