2018-12-20

Isolation is not participation



First
  1. I speak only for myself as an individual, not representing any current or previous employer, the ASF, etc.
  2. I'm not going anywhere near business aspects; out of scope.
  3. I am only looking at Apache Hadoop, which is of course the foundation of Amazon EMR. Which also means: if someone says "yes but projects X, Y & Z, ..." my response is "that's nice, but coming back to the project under discussion, ..."
  4. lots of people I know & respect work for Amazon. I am very much looking at the actions of the company, not the individuals.
  5. And I'm not making any suggestions about what people should do, only arguing that the current stance is harmful to everyone.
  6. I saw last week that EMR now has a reimplementation of the S3A committers, without any credit whatsoever for something I consider profound. This means I'm probably a bit sensitive right now. I waited a couple of days before finishing this post.
With that out of the way:-

As I type this, a nearby terminal window runs MiniMR jobs against a csv.gz file listing AWS Landsat photos, stored somewhere in a bucket.

The tests run on a MacBook, a distant descendant of BSD Unix and the Mach kernel, with its incomplete open source sibling, Darwin. Much of the dev tooling I use is open source, downloaded via Homebrew. The network connection is via a router running DD-WRT.


That Landsat file, s3a://landsat-pds/scene_list.gz, is arguably the single biggest contribution from the AWS infrastructure to that Hadoop testing.

It's a few hundred MB of free-to-use data, so I've used it for IO seek/read performance tests, Spark dataframe queries, and now SQL statements issued direct to the storage infrastructure. Those tests are also where I get to explore new features of the Java language through LambdaTestUtils, my lifting of what I consider to be the best bits of ScalaTest. Now I'm adding async IO operations to the Hadoop FileSystem/FileContext classes, and in the tests I'm learning about the Java 8+ CompletableFuture machinery and how to get it to run IOException-raising code (TL;DR: it hurts).
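A minimal sketch of why it hurts, assuming nothing beyond the JDK: CompletableFuture.supplyAsync() takes a java.util.function.Supplier, which cannot throw checked exceptions, so IOException-raising code has to be tunnelled through an unchecked wrapper. IOFutures and IOSupplier are illustrative names of my own here, not Hadoop APIs.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.concurrent.CompletableFuture;

// Sketch: running IOException-raising code inside a CompletableFuture.
// Supplier.get() cannot throw checked exceptions, so the IOException
// must be wrapped on the way in and unwrapped on the way out.
public final class IOFutures {

  /** A supplier whose get() may raise an IOException. */
  @FunctionalInterface
  public interface IOSupplier<T> {
    T get() throws IOException;
  }

  /** Evaluate an IO operation asynchronously, tunnelling any IOException. */
  public static <T> CompletableFuture<T> eval(IOSupplier<T> operation) {
    return CompletableFuture.supplyAsync(() -> {
      try {
        return operation.get();
      } catch (IOException e) {
        // Tunnel the checked exception; callers must catch
        // UncheckedIOException and unwrap it on the other side.
        throw new UncheckedIOException(e);
      }
    });
  }
}
```

A caller then writes something like IOFutures.eval(() -> fs.open(path)).join(), and must remember to catch the CompletionException, find the UncheckedIOException inside it, and pull the original IOException back out. That unwrap dance is where the hurt lives.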

While I wait for my tests to finish, I see there's a lot of online discussion about cloud providers and open source projects, especially post AWS re:Invent (re:Package?), so I thought I'd join in.


Of all the bits of recent writing on the topic, one I really like is Roman's, which focuses a lot on community over code.

That is a key thing: open source development is a community. And members of that community can participate by
  1. writing code
  2. reviewing code
  3. writing docs
  4. reviewing docs
  5. testing releases, nightly builds
  6. helping other people with their problems
  7. helping other projects who are having problems with your project's code.
  8. helping other projects take ideas from your code and make use of it
  9. filing bug reports
  10. reviewing, commenting on, and maybe even fixing bug reports.
  11. turning up at conferences, talking about what you are doing, sharing
  12. listening. Not just to people who pay you money, but people who want to use the stuff you've written.
  13. helping build that community by encouraging the participation of others, nurturing their contributions along, trying to get them to embrace your code and testing philosophy, etc.
There are more, but those are some of the key ones.

A key, recurrent theme is community: you can contribute in many ways, but you do have to be proactive to build it. And the best ASF projects are the ones with a broad set of contributors.

Take, for example, the grand Java 9, 10, 11 project: [HADOOP-11123, HADOOP-11423, HADOOP-15338]. That's an epic of suffering, primarily taken on by Akira Ajisaka and Takanobu Asanuma at Yahoo! Japan, and a few other brave people. This isn't some grand "shout about this at keynotes" kind of feature, but it's a critical contribution by people who rely on Hadoop, have a pressing need ("run on Java 9+"), and are prepared to put in the effort. I watch their JIRAs with awe.

That's a community effort, driven by users with needs.

Another interesting bit of work: Multipart Upload from HDFS to other block stores. I've been the specification police there; my reaction to "adding fundamental APIs without strict specification and tests" was predictable, so I don't know why they tried to get it past me. Who did that work? Ewan Higgs at Western Digital did a lot; they can see real benefit for their enterprise object store. So did Virajith Jalaparti at Microsoft: people who want HDFS to be able to use their storage systems for the block store. And there's another side effect: that multipart upload API essentially provides a standard API for multipart-upload-based committers. For the S3A committers we added our own private API to the S3A filesystem, "WriteOperationHelper"; this will give the same capability to every filesystem which supports it. And you can do a version of DistCp which writes blocks to the store in parallel from across the filesystem: a very high performance block-store-to-object-store copy mechanism.
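As a sketch of why that API matters for committers, here's the shape of the protocol it standardises. The interface and all names below are invented for this post, not the actual Hadoop or S3 SDK signatures.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;

// Illustrative three-phase multipart-upload protocol; names are
// hypothetical, not the real Hadoop or S3 SDK API.
public interface BlockUploader {

  /** Phase 1: start an upload, returning an opaque handle identifying it. */
  String initiate(String destination) throws IOException;

  /**
   * Phase 2: upload one numbered part; parts can be written in parallel,
   * even from different hosts. Returns an opaque per-part handle.
   */
  String putPart(String uploadId, int partNumber, InputStream data, long length)
      throws IOException;

  /**
   * Phase 3: commit with the collected part handles. Only now does the
   * object become visible at the destination.
   */
  void complete(String uploadId, String destination,
      Map<Integer, String> partHandles) throws IOException;

  /** Abandon the upload, discarding all uploaded parts. */
  void abort(String uploadId, String destination) throws IOException;
}
```

The trick the committers exploit: nothing is visible until complete() runs, so tasks can upload their output as parts and leave that final call to the job committer, which either completes or aborts. Run putPart() from many hosts at once and you get the parallel DistCp copy described above.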

This work is all being done by people who sell storage in one form or another, who see value in the feature, and who are putting in the effort to develop it in the open, encourage participation from others, and deliver something independent of the underlying storage.

This bit of work highlights something else: the Hadoop FS API is broadly used way beyond HDFS, and we need to evolve it to deal with things HDFS offers but keeps hidden, and also for object stores, whose characteristics include:
  • the very expensive cost of seek(), especially given that ORC and Parquet know their read plan way beyond the next read. Fix: HADOOP-11867, Vectorized Read Operations.
  • the very expensive cost of mimicking hierarchical directory trees; treewalking is not ideal.
  • slow file opens, as even the existence check can take hundreds of milliseconds.
  • interesting new failure modes, such as throttling/503 responses if you put too much load on a shard of the store, which can surface anywhere (see the backoff sketch after this list).
  • variable rename performance: O(data) for the mimicked S3 rename, O(files) for GCS, O(1) with some occasional pauses on Azure.
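To make that throttling point concrete, here's a hedged sketch of one way a client can react: exponential backoff with jitter. ThrottledException and all the names here are hypothetical; real connectors map the SDK-specific 503/"slow down" responses into their own exception types and retry policies.

```java
import java.io.IOException;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch: retry a throttled store operation with
// exponential backoff plus jitter, the usual response to 503s.
public final class ThrottleRetry {

  /** An IO operation that may fail with an IOException. */
  @FunctionalInterface
  public interface IOCall<T> {
    T apply() throws IOException;
  }

  /** Stand-in for a 503/slow-down failure from the store. */
  public static class ThrottledException extends IOException {}

  public static <T> T withBackoff(IOCall<T> call, int maxAttempts)
      throws IOException, InterruptedException {
    long delayMs = 100;
    for (int attempt = 1; ; attempt++) {
      try {
        return call.apply();
      } catch (ThrottledException e) {
        if (attempt >= maxAttempts) {
          throw e;  // Give up: the shard stayed overloaded.
        }
        // Sleep for the base delay plus random jitter, so parallel
        // workers don't all hammer the same shard again in lockstep.
        Thread.sleep(delayMs + ThreadLocalRandom.current().nextLong(delayMs));
        delayMs *= 2;
      }
    }
  }
}
```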
There are big challenges here, and they go all the way through the system. There's no point adding a feature in a store if the APIs used to access it don't pick it up; there's no point changing an API if the applications people use don't adopt it.

Which is why input from the people who have spent time building object stores and hooking their applications up to them is so important. That includes:
  • Code
  • Bug reports
  • Insight from their experiences
  • Information about the problems they've had
  • Problems their users are having
And that's also why the decision of the EMR team to isolate themselves from the OSS development holds us back.

We end up duplicating effort, like S3Guard, which is the ASF equivalent of EMR's consistent view. The fact that EMR shipped with the feature long before us could be viewed as the benefit of having a proprietary S3 connector. But S3Guard, like the EMR feature, is modelled on Netflix's S3mper. That's code which one of the EMR team's largest customers wrote, code which Netflix had to retrofit onto the EMR closed-source filesystem using AspectJ.

And that time-to-market edge? Well, it's not so obvious any more:
  1. The EMR S3-optimised committer shipped in November 2018.
  2. Its open source predecessor, the S3A committers, shipped in Hadoop 3.1 in March 2018. That's eight months ahead of the EMR implementation.
  3. And it shipped in HDP-3.0 in June 2018. That's 5 months ahead.
I'm really happy with that committer: it's the first bit of CS-hard code I've done for a long time, it got me into the depths of how committers really work, and an unpublished paper, "A Zero-Rename Committer", came out of it. And, in writing the TLA+ spec of the object store which I used on the way to convincing myself things worked, I was corrected by Lamport himself.

Much of that commit code was written by myself, but it depended utterly on some insights from Thomas Demoor of WDC. It also contains a large donation of code from Netflix: their S3 committer. They'd bypassed EMRFS completely and were using the S3A client directly; we took this, hooked it up to what we'd already started to write, and incorporated their Mockito tests, and now their code is the specific committer I recommend. Because Netflix were using it and it worked.

A heavy user of AWS S3 wanting to fix their problem, sharing the code, having it pulled into the core code so that anyone using the OSS releases gets the benefit of their code *and their testing*.

We were able to pick that code up because Netflix had gone around EMRFS and were writing things themselves. That is: they had to bypass the EMR team. And once they'd done that, we could take it, integrate it and ship it eight months before the EMR team did. With proofs of correctness(-ish).

Which is why I don't believe that isolationism is good for whoever opts out of the community. Nor, by the evidence, is it good for their customers.

I don't even think it helps the EMR team's colleagues with their own support calls. Because really, if you aren't active in the project, those colleagues end up depending on the goodwill of others.


(photo: a building in central Havana. There'll be a Starbucks there eventually.)