First
- I speak only for myself as an individual, not representing any current or previous employer, the ASF, etc.
- I'm not going anywhere near business aspects; out of scope.
- I am only looking at Apache Hadoop, which is of course the foundation of Amazon EMR. Which also means: if someone says "yes but projects X, Y & Z, ..." my response is "that's nice, but coming back to the project under discussion, ..."
- Lots of people I know & respect work for Amazon. I am very much looking at the actions of the company, not the individuals.
- And I'm not making any suggestions about what people should do, only arguing that the current stance is harmful to everyone.
- I saw last week that EMR now has a reimplementation of the S3A committers, without any credit whatsoever for something I consider profound. This means I'm probably a bit sensitive right now; I waited a couple of days before finishing this post.
As I type this, a nearby terminal window runs MiniMR jobs against a csv.gz file listing AWS Landsat photos, stored somewhere in a bucket.
The tests run on a MacBook, a distant descendant of BSD Unix, the Mach kernel and its incomplete open source sibling, Darwin. Much of the dev tooling I use is open source, downloaded via Homebrew. The network connection is via a router running DD-WRT.
That Landsat file, s3a://landsat-pds/scene_list.gz, is arguably the single main contribution from the AWS infrastructure to that Hadoop testing.
It's a few hundred MB of free-to-use data, so I've used it for IO seek/read performance tests, Spark dataframe queries, and now SQL statements issued directly to the storage infrastructure. Those tests are also where I get to explore the new features of the Java language and LambdaTestUtils, my lifting of what I consider to be the best bits of ScalaTest. Now I'm adding async IO operations to the Hadoop FileSystem/FileContext classes, and in the tests I'm learning about the Java 8+ CompletableFuture machinery and how to get it to run IOException-raising code (TL;DR: it hurts).
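To make "it hurts" a bit more concrete: a java.util.function.Supplier can't throw checked exceptions, so any IOException-raising operation has to be wrapped going into the future and unwrapped on the way back out. Here's a minimal sketch of that dance; the helper names are mine, purely for illustration, not anything from the Hadoop codebase:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class IOFutures {

  /** An operation which may raise an IOException. */
  @FunctionalInterface
  public interface IOCallable<T> {
    T call() throws IOException;
  }

  /** Run IOException-raising code inside a CompletableFuture. */
  public static <T> CompletableFuture<T> supplyIO(IOCallable<T> op) {
    return CompletableFuture.supplyAsync(() -> {
      try {
        return op.call();
      } catch (IOException e) {
        // Suppliers cannot throw checked exceptions: tunnel it out.
        throw new UncheckedIOException(e);
      }
    });
  }

  /** Await the future, unwrapping the tunnelled IOException. */
  public static <T> T await(CompletableFuture<T> future) throws IOException {
    try {
      return future.join();
    } catch (CompletionException e) {
      Throwable cause = e.getCause();
      if (cause instanceof UncheckedIOException) {
        throw ((UncheckedIOException) cause).getCause();
      }
      if (cause instanceof IOException) {
        throw (IOException) cause;
      }
      throw e;
    }
  }
}
```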
While I wait for my tests to finish, I see there's a lot of online discussion about cloud providers and open source projects, especially post AWS re:Invent (re:Package?), so I thought I'd join in.
Of all the bits of recent writing on the topic, one I really like is Roman's, which focuses a lot on community over code.
That is a key thing: open source development is a community. And members of that community can participate by
- writing code
- reviewing code
- writing docs
- reviewing docs
- testing releases, nightly builds
- helping other people with their problems
- helping other projects who are having problems with your project's code.
- helping other projects take ideas from your code and make use of it
- filing bug reports
- reviewing, commenting on, and maybe even fixing bug reports.
- turning up at conferences, talking about what you are doing, sharing.
- listening. Not just to people who pay you money, but people who want to use the stuff you've written.
- helping build that community by encouraging the participation of others, nurturing their contributions along, trying to get them to embrace your code and testing philosophy, etc.
A key, recurrent theme is community: you can contribute in many ways, but you do have to be proactive to build that community. And the best ASF projects are the ones with a broad set of contributors.
Take, for example, the grand Java 9, 10, 11 project: [HADOOP-11123, HADOOP-11423, HADOOP-15338]. That's an epic of suffering, primarily taken on by Akira Ajisaka and Takanobu Asanuma at Yahoo! Japan, and a few other brave people. This isn't some grand "shout about this at keynotes" kind of feature, but it's a critical contribution by people who rely on Hadoop, have a pressing need to run on Java 9+, and are prepared to put in the effort. I watch their JIRAs with awe.
That's a community effort, driven by users with needs.
Another interesting bit of work: multipart upload from HDFS to other block stores. I've been the specification police there; my reaction to "adding fundamental APIs without strict specification and tests" was predictable, so I don't know why they tried to get it past me. Who did that work? Ewan Higgs at Western Digital did a lot; they can see real benefit for their enterprise object store. Virajith Jalaparti at Microsoft too: people who want HDFS to be able to use their storage systems for the block store. And there's another side-effect: that multipart upload API essentially provides a standard API for multipart-upload based committers. For the S3A committers we added our own private API to the S3A filesystem, "WriteOperationHelper"; this will give it to every FS which supports it. And you can do a version of DistCp which writes blocks to the store in parallel from across the filesystem... a very high performance blockstore-to-block-or-object store copy mechanism.
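For illustration only, this is the rough shape of the kind of multipart-upload API I mean, not the actual interface under review in that work: the key property is that parts can be uploaded independently, even from different hosts, and nothing becomes visible until a final complete call, which is exactly the operation a job committer wants to own.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;

// Hypothetical interface, names invented for this sketch.
public interface BlockUploader {

  /** Start an upload to a destination path; returns an opaque upload handle. */
  String initiate(String destination) throws IOException;

  /**
   * Upload one part; may be invoked from any host which holds the handle.
   * Returns an opaque part handle to pass to complete().
   */
  byte[] putPart(String uploadId, int partNumber, InputStream data, long length)
      throws IOException;

  /** Commit: combine the uploaded parts and make the object visible. */
  void complete(String uploadId, String destination,
      Map<Integer, byte[]> partHandles) throws IOException;

  /** Abort: discard all uploaded parts without making anything visible. */
  void abort(String uploadId, String destination) throws IOException;
}
```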
This work is all being done by people who sell storage in one form or another, who see value in the feature, and who are putting in the effort to develop it in the open, encourage participation from others, and deliver something independent of the underlying storage.
This bit of work highlights something else: the Hadoop FS API is broadly used way beyond HDFS, and we need to evolve it, both to expose things HDFS offers but keeps hidden and to cope with object stores, whose characteristics include:
- Very expensive seek(), especially given ORC and Parquet know their read plan way beyond the next read. Fix: HADOOP-11867, Vectorized Read Operations; see the sketch after this list.
- Very expensive mimicking of hierarchical directory trees; treewalking is not ideal.
- Slow file open, as even the existence check can take hundreds of milliseconds.
- Interesting new failure modes: throttling/503 responses if you put too much load on a shard of the store, for example. Which can surface anywhere.
- Variable rename performance: O(data) for the mimicked S3 rename, O(files) for GCS, O(1) with some occasional pauses on Azure.
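To make the seek() point concrete, here's a sketch of the access pattern a columnar reader generates against the current API, with made-up ranges against that Landsat file purely to show the IO pattern. Each positioned read can turn into its own ranged GET against the store; a vectorized read API could take the whole plan up front, coalesce adjacent ranges and issue them in parallel.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RangeReads {
  public static void main(String[] args) throws IOException {
    Path path = new Path("s3a://landsat-pds/scene_list.gz");
    FileSystem fs = path.getFileSystem(new Configuration());
    // Ranges an ORC/Parquet reader might plan ahead of time: {offset, length}.
    // These offsets are invented for illustration.
    long[][] ranges = {{0, 1_024}, {1_000_000, 65_536}, {20_000_000, 65_536}};
    try (FSDataInputStream in = fs.open(path)) {
      for (long[] range : ranges) {
        byte[] buffer = new byte[(int) range[1]];
        // Each positioned read may become a separate ranged GET request.
        in.readFully(range[0], buffer, 0, buffer.length);
      }
    }
  }
}
```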
Which is why input from the people who spent time building object stores and hooking their applications up to them is so important. That includes:
- Code
- Bug reports
- Insight from their experiences
- Information about the problems they've had
- Problems their users are having
We end up duplicating effort, like S3Guard, which is the ASF equivalent of EMR's consistent view. The fact that EMR shipped with the feature long before us could be viewed as a benefit of having a proprietary S3 connector. But S3Guard, like the EMR team's work, models itself on Netflix's S3mper. That's code which one of the EMR team's largest customers wrote, code which Netflix had to retrofit onto the EMR closed-source filesystem using AspectJ.
And that time-to-market edge? Well, it's not so obvious any more:
- The EMR S3-optimised committer shipped in November 2018.
- Its open source predecessor, the S3A committers, shipped in Hadoop 3.1, March 2018. That's eight months ahead of the EMR implementation.
- And it shipped in HDP-3.0 in June 2018. That's 5 months ahead.
Much of that commit code was written by myself, but it depended utterly on some insights from Thomas Demoor of WDC. It also contains a large donation of code from Netflix: their S3 committer. They'd bypassed EMRFS completely and were using the S3A client directly; we took this, hooked it up to what we'd already started to write, incorporated their Mockito tests, and now their code is the specific committer I recommend. Because Netflix were using it and it worked.
A heavy user of AWS S3 wanting to fix their problem, sharing the code, having it pulled into the core code so that anyone using the OSS releases gets the benefit of their code *and their testing*.
We were able to pick that code up because Netflix had gone around EMRFS and were writing things themselves. That is: they had to bypass the EMR team. And once they'd done that, we could take it, integrate it and ship it eight months before the EMR team did. With proofs of correctness(-ish).
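For anyone who wants to try the result, this is roughly how that Netflix-derived committer gets switched on in a Hadoop 3.1+ job configuration; treat the property names as illustrative and check the S3A committer documentation for your exact version.

```java
import org.apache.hadoop.conf.Configuration;

public class CommitterSetup {
  public static Configuration withDirectoryCommitter(Configuration conf) {
    // Route s3a:// output through the S3A committer factory...
    conf.set("mapreduce.outputcommitter.factory.scheme.s3a",
        "org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory");
    // ...and pick the Netflix-derived "directory" staging committer.
    conf.set("fs.s3a.committer.name", "directory");
    return conf;
  }
}
```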
Which is why I don't believe that isolationism is good for whoever opts out of the community. Nor, by the evidence, is it good for their customers.
I don't even think it helps the EMR team's colleagues with their own support calls. Because really, if you aren't active in the project, those colleagues end up depending on the goodwill of others.
(photo: building in central Havana. There'll be a Starbucks there eventually)