<![CDATA[Microservices vs Distributed Objects]]> 2017-03-08T09:55:00-05:00 https://jeremywsherman.com/blog/2017/03/08/microservices-vs-distributed-objects Distributed objects died out eventually; you can’t really hide the network layer without changing your system design to match. Here’s a Cocoa take. And here’s a Martin Fowler take found via the article below, with a sidebar suggesting a remote façade (to provide a coarse API as a remote endpoint) and data transfer objects (to provide coarse data transfer, also as a way around slow remote communication times).

So, if DO sucks, why are microservices any different?

Enter Phil Calçado’s Microservices and the First Law of Distributed Objects:

Objects are not a good unit of distribution. They are too small and chatty to be good network citizens. Services, on the other hand, are meant to be more coarse-grained.

Group terms by affinity; grab out your connected components; now you have bounded contexts. Make those your services.
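
Here’s a minimal Swift sketch of that heuristic; the domain terms and edges are made up for illustration. Terms that show up together share an edge, and each connected component suggests a bounded context:

// Hypothetical affinity graph: terms that appear together in the domain language.
let affinity: [String: [String]] = [
    "order": ["invoice", "customer"],
    "invoice": ["order"],
    "customer": ["order"],
    "shipment": ["carrier"],
    "carrier": ["shipment"],
]

// Flood-fill the graph; each component is a candidate bounded context.
func boundedContexts(in graph: [String: [String]]) -> [Set<String>] {
    var unseen = Set(graph.keys)
    var contexts: [Set<String>] = []
    while let start = unseen.first {
        var component: Set<String> = []
        var frontier = [start]
        while let term = frontier.popLast() {
            guard unseen.remove(term) != nil else { continue }
            component.insert(term)
            frontier.append(contentsOf: graph[term] ?? [])
        }
        contexts.append(component)
    }
    return contexts
}

// boundedContexts(in: affinity) yields two components:
// {order, invoice, customer} and {shipment, carrier}.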

Early on, it’s not worth the trouble - monoliths make sense. At huge scale, performance considerations (read: not going down) dwarf maintenance concerns. In the middle, though, this rule of thumb ain’t bad.

Found via Devops Weekly.

]]>
<![CDATA[The Gist of Regex]]> 2017-01-14T14:52:00-05:00 https://jeremywsherman.com/blog/2017/01/14/the-gist-of-regex Regular expressions scare some people. They’re really quite warm and cuddly, or at least, conceptually very neat and tidy. If you don’t feel that way, this post is for you! Here’s how I think about regexen, in a nutshell.

I use this conception on a regular basis; when it comes to writing regex, I think about what I want to do in this model, then translate it into whatever regex notation the system I’m using at the time gives me. (I do the same thing with distributed version control and relational databases, but let’s stick to regexen for now.)

Regex Is Tiny Machines!

Regular expressions are a compact description of a symbol-matching machine. Like, “If you see an a, then maybe a b, and then one or more c, it’s a match!” for ab?c+.
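
For instance, here’s a quick sketch of that machine via Foundation’s NSRegularExpression, anchored so it must match the whole string:

import Foundation

// The machine: an a, then maybe a b, then one or more c.
let machine = try! NSRegularExpression(pattern: "^ab?c+$")

func isMatch(_ input: String) -> Bool {
    let wholeString = NSRange(input.startIndex..., in: input)
    return machine.firstMatch(in: input, range: wholeString) != nil
}

isMatch("ac")     // true: a, no b, one c
isMatch("abccc")  // true: a, b, three c's
isMatch("ab")     // false: the machine demands at least one c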

But the machines can nest, so you can instead say stuff like, “If you see one thing matched by this machine, then maybe one thing matched by that one, followed by one or more things matched by that other, it’s a match!” So the a, b, and c from the last bit could actually be bigger regular expressions themselves.

But you have no variables in regex! So, instead, you plop the whole machine descriptions in there in parentheses, like (…)(…)?(…)+. And repeat the description if you need the same machine twice.

Pitch in self-referentiality - “if you see exactly the same thing as you ended up matching back there” - by using backrefs to parenthesized machines, and you’re in our modern world of extended “regular” expressions. At that point, what we’re talking about is no longer actually expressions describing what’s technically known as a regular language, but they’re exceedingly useful extensions of the notation, so no-one cares. ;)
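
A quick sketch of a backref in that notation, again via NSRegularExpression - \1 means “exactly what the first parenthesized machine matched”:

// "(c+)a\1": some c's, an a, then *the same run* of c's again.
let echo = try! NSRegularExpression(pattern: "^(c+)a\\1$")
// Matches "ccacc" but not "ccac": the backref demands an exact repeat.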

Compact Notation, Effective Expressive Power

What makes regular expressions so useful is:

  • Reach: A lot of stuff we want to match against can actually be described by them, especially when you pitch in a lot of the extended power-ups
  • Compactness: They’re a marvelously compact notation for what would otherwise be a lot of very boring code! Instead of writing that code, we dash off a regex, and we leave the translation into code to the regular expression engine.

For the More Curious

If that’s whetted your appetite, Friedl’s Mastering Regular Expressions is excellent. And, as a bonus, you can probably just read the first few chapters and emerge enlightened. :)

P.S. You can also look at regular expressions as definitions of regular languages - as generators rather than consumers of text. Running them backwards like this can be a good way to think about whether a regex you’re writing captures exactly what you’re aiming at, or whether it might include a bit more than you intended!

P.P.S. And if you think about them in terms of machines, it’s really easy to start thinking about how to write fast regular expressions.

P.P.P.S. Hat-tip to @bazbt3 over at App.Net. What is dead can never die!

]]>
<![CDATA[Iterative Development]]> 2017-01-14T14:32:00-05:00 https://jeremywsherman.com/blog/2017/01/14/iterative-development “At last, my current practice of writing no automated tests has the blessing of science! See, TDD doesn’t do anything!” That’s how Fucci et al.’s 2016 conference paper An External Replication on the Effects of Test-driven Development Using a Multi-site Blind Analysis Approach was introduced to me.

And, indeed, it concludes like so:

Despite adopting such countermeasures, aimed at reducing researchers' bias [when replicating a prior, baseline study], we confirmed the baseline results: TDD does not affect testing effort, software external quality, and developers’ productivity.

Takeaways:

  • All coding is debugging
    • Work in small steps
    • Stay grounded in observed outputs
    • Keep good notes (tests or REPL session logs)
  • TDD won’t slow you down at steady state
    • Changing how you code to be more intentional and iterative might, to start.
    • What will definitely slow you down: Learning your tooling and the impact of that iterative approach on the code you produce (expose those probe points for external testing! add indicator LEDs via assertions!)

Now that we’ve got the conclusion out of the way, keep reading to see how I got there. :)

Martin: It Tells Us Nothing: Keep on keeping on with TDD!

In the time since I’ve had “I should really write up my response to this” on my to-do list (a few months…), Bob Martin wrote up his take: TDD Doesn’t Work.

His article concludes the study made a distinction without a difference and so naturally found no difference: basically, the folks involved were still practicing TDD, but where they actually wrote the tests – rather than conceived of and directed their coding efforts toward them – was altered only slightly.

Me: Iterativeness Is The Key!

But, check this:

Control treatment: the baseline experiment and its replication compared TDD to a really similar approach, labelled as TLD. Under this [sic] circumstances, we might be focusing on the incorrect part of development process (i.e., whether write tests first or not), and disregard the part of the process in which the a substantial effect might lie (i.e., the iterativeness of the process). Accordingly, the tasks used for both experiments were designed to fit the iterative nature of both treatments — i.e., isolate the process itself from the cognitive effort required to break down a complex problem into sub-problems. Pančur and Ciglarič [33] made a similar claim reporting the inconclusive results of a similar experiment. (bold emphasis added)

Fail Fast, Focus on Observed Outputs

The authors are on to something: I’ve seen people new to TDD stumble over learning to work in checkable baby steps. That iterative, fail-fast approach is where a lot of the time savings comes from; the other savings comes from being very intentional and focused about the concrete change you’re attempting to effect, or the specific knowledge you’re trying to elicit through experimentation. This same mindset also pays off in spades in debugging.

TDD or REPL, Just Use One!

We know you can learn this mindset via TDD, but a dev loop based around a REPL can work just as well. It ends up as TDD without the durable byproduct – once the session scrolls away, all those tests you wrote during bring-up are gone.

TDD Won’t Slow You Down

There’s another positive takeaway from the paper’s conclusion of no substantive difference:

  • TDD does not affect testing effort, software external quality, and developers’ productivity. As long as you’re working iteratively and actually writing tests, you’re going to write working software and be as productive as you can under the circumstances.

If you don’t practice TDD already, fear not: TDD is not going to slow you down.

…But Learning to Use Test Frameworks Might, To Start

Getting the hang of working iteratively, and actually writing tests yourself, on the other hand – those will take a bit of time. And then save you far more over time.

Missing

The Long View: Maintenance Burden

It would be interesting to see experiments comparing TDD or ITLD (iterative, test-last development) and development where one of those two constraints is relaxed:

  • Either drop the iterative bit, or
  • drop the test-writing bit.

This was a small dev task, so I bet you’d see productivity go up as quality goes down. Put another way, I bet small tasks naturally lead people to ditch both of the things that make up TDD/ITLD.

This short scale approach doesn’t assess two real-world challenges that we are concerned with as software maintainers:

  • Responding to changes over time.
  • Not breaking stuff on timescales longer than a single workday in codebases larger than what one person can turn out in a workday.

Style: Internal Code Quality

The study also did not assess internal quality (is it readable, navigable, maintainable code?) in any way. That’s out of scope for their purposes, but rather important for those of many professional developers, as the wide spread of PR-based code review processes and the flourishing of adjuvants like SwiftLint and Danger reflect.

Conclusion

Reread the intro. (Or, as the grimly satirical Scarfolk Council would say: “For more information, please reread.”) I’m saving you having to read the abstract and then page to the conclusion this way. Go forth, and do the same for others. ;)

]]>
<![CDATA[How to Work Around an Empty Zenfolio Zip File]]> 2016-11-28T13:02:00-05:00 https://jeremywsherman.com/blog/2016/11/28/how-to-work-around-an-empty-zenfolio-zip-file My family recently had some holiday photos taken. The photographer was using Zenfolio to host their photos. I loved the photos and wanted to archive the originals on my laptop (and NAS, and Amazon Photos, and Time Machine, and Carbon Copy Cloner clone, and…). But every time I tried to download an original – of one photo, of all the photos, makes no difference – the server always sent me an empty zipfile!

I emailed the photographer to let them know, but I wasn’t going to wait.

Rather than work around this manually by visiting each page and right-clicking to Save As each photo – and I’m not sure that would show me the full-size image, anyway! – I figured Zenfolio would have an API.

Sure enough, there’s a well-enough documented Zenfolio API. I was in business!

I was able to lash together some shell commands to grab my full photoset. To save you some fumbling, here’s how I did it.

Walkthrough

Grab the Photo Details for the Photoset

Get the photoset ID. You can grab this from the URL you’re using to view the photos on the photographer’s website. If you view your photos at http://www.example.com/p544941453, then your photoset ID is 544941453.

Fetch the list of photos in that photoset using curl and save the JSON response to disk for the next step:

curl -v \
    -H'Content-Type: application/json' \
    api.zenfolio.com/api/1.8/zfapi.asmx \
    -d '{
      "method": "LoadPhotoSetPhotos",
      "params": [544941453, 0, 100],
      "id": 1
    }' \
    > photoset.json

This grabs the photos in photoset 544941453 starting from index 0 and returns at most 100 photos. Tweak those values to match your photoset and number of photos.

Also, I’m using fish as my shell. You might need to tweak that command line to make your shell happy, especially with the multiline string literal.

See: LoadPhotoSetPhotos method documentation

Download Each OriginalUrl

Grab the OriginalUrl field from the photo objects in the photoset response using jq, the JSON multitool:

jq '.result[].OriginalUrl' photoset.json

Download each file at those URLs by feeding them to curl via xargs:

jq '.result[].OriginalUrl' photoset.json \
    | xargs -n 1 curl -O

(The -n 1 is there so that curl sees one -O for each file argument. Without it, xargs would run curl -O url1 url2 url3…. This causes curl to download only the first URL to a matching file on disk; the rest, it starts piping out to stdout. I couldn’t work out a good way to get xargs to repeat the -O per argument, so I just throttled it to calling curl -O justASingleURL repeatedly.)

Enjoy your photos!

Caveat: Assumes Public Photos

This walkthrough assumes no authentication is required to download your photos. I lucked out: All my photos had an AccessDescriptor.AccessType of Public.

If the originals are password-protected, you’ll find a walkthrough of the hoops to jump through in “Downloading Original Files”.

If things are more locked down, you might need to sort out the authentication flow before you can even grab the photoset details. I didn’t need to do any of that, so I can’t walk you through how. Sorry!

]]>
<![CDATA[A Practical Example of FlatMap]]> 2016-09-22T11:59:00-04:00 https://jeremywsherman.com/blog/2016/09/22/a-practical-example-of-flatmap The Swift standard library introduces some unfamiliar concepts if you’re coming from Obj-C and Cocoa. map is one thing, but for some, flatMap seems a bridge too far. It’s a question of taste, and of background, whether something comes across as a well-chosen, expressive phrase or just seems like status signaling, high-falutin' bullshit.

Well, I’m not going to sort that all out, but I did find myself rewriting an expression using a mix of if let/else into a flatMap chain recently, so I thought I’d share how I rewrote it and why.

If you’re mystified by Optional.flatMap, read on, and you should have a good feel for what that does in a couple minutes.

I’m not going to demystify everything: You still won’t know why it’s called flatMap.

But then, why do we use + for addition? And how do you implement it in terms of a fixed number of bits?

Just because you don’t know a symbol’s etymology or a function’s implementation, that doesn’t mean you can’t make it do useful work for you. If you treat flatMap as an operator written using Roman letters, you can get good value out of it!

Duck, Duck, Goose

Here’s what some deserialization code looked like to start:

init?(json: JsonApiObject) {
    guard let name = json.attributes["name"] as? String
        , let initials = json.attributes["initials"] as? String
        else { return nil }
    self.name = name
    self.initials = initials
    self.building = json.attributes["building"] as? String
    self.office = json.attributes["office"] as? String
    self.mailStop = json.attributes["mailStop"] as? String
    if let base64 = json.attributes["photoBase64"] as? String
    , let data = Data(base64Encoded: base64) {
        self.photo = UIImage(data: data)
    } else {
        self.photo = nil
    }
}

Notice how you’re trucking along reading, “OK, we set this field, set that field, set that other field, and WHAT THE HECK IS THAT.” The if let bit comes out of left field, breaks your ability to quickly skim the code, and takes some puzzling to sort out. It also leads to repeating the assignment in both branches.

Cleaning This Up

Extract Intention-Revealing Method

To start with, we can take the existing code as-is, yank it out into a helper method, and call that:

self.photo = image(fromBase64: json.attributes["photoBase64"] as? String)

This makes the call site in init? read fine, but we’ve just moved the ugly somewhere else.

Take Advantage of Guard

Shifting it into a method dedicated to returning an image does open up using guard let to make the unhappy path clear:

func image(fromBase64 string: String?) -> UIImage? {
    guard let base64 = string
    , let data = Data(base64Encoded: base64)
    , let photo = UIImage(data: data) else {
        return nil
    }
    return photo
}

Still Too Noisy!

But that’s no real improvement:

  • The return values just restate our return type. They’re noise.
  • The reader has to manually notice that we’re threading each let-bound name into the computation that’s supposed to produce the next one.
  • We’re forced to name totally uninteresting intermediate values just so we have a handle to them to feed into the next computation.

All told, that’s a lot of noise for something that’s conceptually simple and that should be eminently skimmable.

A Pipeline with Escape Hatch

The pipeline we have is:

  • feed in a string
  • transform it into data by decoding it as base64
  • transform that into an image by feeding it into UIImage
  • spit out the image

The trick is, if any of these steps fails – that is, if any step spits out a nil – we just want to bail out and send back a nil immediately. It’s like each step has an escape hatch that short circuits the rest of the pipeline.

Pipeline with Escape Hatch Is Just FlatMap

Well, that’s exactly the behavior that sequencing all these with Optional.flatMap would buy you! Have a look:

func image(fromBase64 string: String?) -> UIImage? {
    return string
           .flatMap { Data(base64Encoded: $0) }
           .flatMap { UIImage(data: $0) }
}

And if you inlined it, it’d still be eminently readable, because it puts the topic first (“hey, y'all, we’re going to set photo!”), which preserves the flow of the code and its skimmability, and you can quickly skim the pipeline to see how we get that value.

Conclusion

flatMap very clearly expresses a data transformation pipeline, without extraneous syntax or temporary variables.

We backed into using it in this example for reasons of readability, not for reasons of “I have a hammer! Everything is a nail!”

Sometimes, the new tool really is the right tool.

Appendix: Similar Rewrites

This “assign something depending on something/s else” situation happens a lot. And it can shake out a lot of different ways.

If the expression had been simpler, we could have rewritten it using ?: to eliminate the repeated assignment target. This often shows up with code like:

- if haveThing {
-     x = thing
- } else {
-     x = defaultThing
- }
+ x = haveThing ? thing! : defaultThing

Which, in that common “sub in a default” case, can be further simplified:

- x = haveThing ? thing! : defaultThing
+ x = thing ?? defaultThing

And if nil is an A-OK default, it becomes the wonderfully concise:

- let defaultThing = nil
- x = thing ?? defaultThing
+ x = thing

There’s a similar transform that eliminates guard let stacks by using optional-chaining, but that deserves a bit more of an example, I think.
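
To give just the flavor, here’s roughly the shape of that rewrite (a hypothetical view-hierarchy example, in the diff style above):

- guard let window = view.window else { return nil }
- return window.screen.bounds
+ return view.window?.screen.bounds

Each ? plays the same “bail out with nil on failure” role the guard did, just inline.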

]]>
<![CDATA[The Internet Speaks: Testing FP Code]]> 2016-09-20T17:37:00-04:00 https://jeremywsherman.com/blog/2016/09/20/the-internet-on-testing-fp-code One problem I have writing Swift is that I’m not really sure how to tackle testing FP-ish code using XCTest.

I did some quick Internet research. If you read it on the Internet, it must be true. This is a distillation of those great Internet truths.

The Context: Data Persistence

But first, some context. Why did I care about this?

I ran into this in the context of sorting out how to persist and restore some app data at specific “app lifecycle” hooks.

Specifically:

  • When the app backgrounds, start a background task, then serialize and write to disk, then end the task.
    • Inputs: data store, serialization strategy, where to write to
    • Outputs: updated file on disk (side effect)
  • When the app launches, block the main thread till we’ve loaded the data from disk and unpacked it. This should be fast enough. Anything else will lead to folks seeing a not-yet-ready UI.
    • Inputs: serialization strategy, where we wrote to
    • Outputs: We can see the restored DataStore (side effect)

This is very much “app lifecycle” stuff, so we want the App Delegate to do it.

What’s the cleanest code we could imagine?

bracket startBackgroundTask endBackgroundTask $
    dataStore |> serialize |> write location

deserialize(location)
|> fromJust seedDataStore
|> set dataStoreOwner .dataStore

I think my big ??? is that I don’t get how to test a functional pipeline. It seems not to have any of the seams you’d usually rely on.

Testing FP Code

Summarizing:

  • Separate out pure code from impure.
  • Use PBT (property-based testing) for the pure code.
  • Use typeclasses or protocols or similar dynamic binding methods to swizzle impure actions. (See the sketch after this list.)
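
Here’s a minimal Swift sketch of that last bullet, with made-up types: a protocol acts as the seam around the impure write, so a test can substitute a recording fake.

import Foundation

// Impure edge, hidden behind a protocol (the seam).
protocol DataWriter {
    func write(_ data: Data, to url: URL) throws
}

struct DiskWriter: DataWriter {
    func write(_ data: Data, to url: URL) throws {
        try data.write(to: url)  // the one impure call
    }
}

// Pure core: deterministic, so it's trivially testable - even property-testable.
func serialize(_ store: [String: String]) throws -> Data {
    return try JSONSerialization.data(withJSONObject: store)
}

// Wiring: the impure collaborator arrives as a parameter, so a test
// can pass a fake that just records what it was asked to write where.
func persist(_ store: [String: String], to url: URL, using writer: DataWriter) throws {
    try writer.write(serialize(store), to: url)
}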

I guess, use acceptance testing to check that you got the wiring to impure stuff correct? That issue seems mostly ignored in favor of the much happier “pure functions are easy to test” story.

In practice, I think I’m now foundering on the mess that is object-functional blending. You’d hope that the Scala folks might have something good to say on that, but that’ll have to be a later round of The Internet Speaks.

Static Methods Are Death to Testability

http://misko.hevery.com/2008/12/15/static-methods-are-death-to-testability/

Recapitulates the problem I identified:

Unit-testing needs seams, seams is where we prevent the execution of normal code path and is how we achieve isolation of the class under test. seams work through polymorphism, we override/implement class/interface and than wire the class under test differently in order to take control of the execution flow. With static methods there is nothing to override.

Recommends converting static methods to instance methods:

If your application has no global state than all of the input for your static method must come from its arguments. Chances are very good that you can move the method as an instance method to one of the method’s arguments. (As in method(a,b) becomes a.method(b).) Once you move it you realized that that is where the method should have been to begin with.

Says not to even consider leaf methods as OK as static, because they tend not to remain leaves for long.

Unit Testing and Programming Paradigms

http://www.giorgiosironi.com/2009/11/unit-testing-and-programming-paradigms.html

Identifies the same problem as you move away from leaf functions in the context of procedural programming:

The problem manifests when we want to do the equivalent of injecting stubs and mocks in higher-level functions: there are no seams where we can substitute collaborator functions with stubbed ones, useful for testing. If my function calls printf(), I cannot stub that out specifying a different implementation (unless maybe I recompile everytime and play a lot with the preprocessor).

Outlines, in theory, what they would do, but have not done, for FP code: Pass in functions to parameterize behavior:

So instead of injecting collaborators in the constructor we could provide them as arguments, earning the ability to pass in fake functions in tests. The upper layers can thus be insulated without problems (with this sort of dependency injection) and there are no side effects that we have to take care of in the tear down phase

Omits stack and logic paradigms. No surprise there.

Recoverability and Testing: OO vs FP

https://www.infoq.com/news/2008/03/revoerability-and-testing-oo-fp

Sums up a conversation that happens across several blogs. Weirdly omits any links to primary sources. Yuck.

OO is rife with seams that are easy to exploit, so Feathers likes it. Where you need a seam is a design issue:

Another blogger, Andrew, highlights that if “code isn’t factored into methods that align with the needs of your tests”, the implementation will need to be changed to accommodate the test. Hence, he argues as well that “thoughts about “seams” are really just getting at the underlying issue of design for testability”, i.e. the proper placement of seams.

But not all systems are always so designed (putting it nicely), so “recoverability” matters: being able to make something testable in spite of itself.

According to Feathers, even though there are alternative modules to link against in functional languages, “it’s clunky”, with exception of Haskel where “most of the code that you’d ever want to avoid in a test can be sequestered in a monad”

Then there’s an argument that pushing the impurity to the edges makes things testable. No-one addresses validating correct composition of verified components, though. :(

SO: Testing in Functional Programming

https://stackoverflow.com/questions/28594186/testing-in-functional-programming

Answers point out:

  • Function composition builds units, in that you can test them quickly.
  • QuickCheck/SmallCheck dodge the combinatorial explosion of codepaths that you get by composing functions.
  • Coding against a typeclass that you can swizzle out for a test one lets you stub out IO-like functions. (Or just manually pass in a dictionary type.)
]]>
<![CDATA[Why I'm Meh About JSON API]]> 2016-07-23T12:56:00-04:00 https://jeremywsherman.com/blog/2016/07/23/why-im-meh-about-json-api JSON API has been pretty successful at providing a framework for APIs that lets you focus on roughly the entity–relationship diagram of your data.

But I find it frustrating at some turns (too flexible!) and peculiar at others (why is it bound to just one content-type?).

My frustrations with JSON API are ultimately because it doesn’t solve the problems I have as an API consumer, and its aim of preserving flexibility means API consumers pay the price for it: they must deal with the foibles of a specific implementation and manually tune their API queries.

I find the approach taken by GraphQL more directly and usefully addresses my needs as a client developer while also necessarily, by design, minimizing requests made and data transmitted.

JSON API makes it possible to accomplish that, but it leaves the responsibility for doing so up to the client developer; GraphQL also makes it possible, and it takes the perftuning responsibility upon itself, which makes my life as a client dev easier.

Introduction

I spent the last couple months working on an Ember app. The backend was running the cerebris/jsonapi-resources flavor of JSON API implementation. The frontend was using Ember Data’s JSON API adapter.

It worked, but I also kept running across ugly data requests like:

this.store.findAll(
  'work-order',
  { include:
    [ 'location'
    , 'shipping-address'
    , 'credit-card'
    , 'user'
    , 'shipments.shipment-items.order-item.inventory-item.part'
    , 'order-items.inventory-item'
    , 'inventory-items.part.part-kind'
    ].join(',')
  });

When I see something like that, all I can think is, Why am I listing all this out for the computer? It should figure it out! Maybe in a year or so, Ember Data will indeed do that, but you need to do that sort of thing today, unless you want template rendering to lead to this conversation between HTMLBars and Ember Data: “render, oh crud we need some data, fetching… rerender, oh crud more‽ ok, fetching… rerender… what, more! fetching…”

But if you’re hitting the API by hand – be it manual XMLHttpRequest preparation or curl – that leads to a bear of a URL. And parsing out the data once it arrives is also not so fun. I hope you enjoy writing JOIN logic client-side!

And how do you even find out what you can toss in that include bit? I just popped over to the backend source and nosed around. That’s fine when you have access to the backend source code, but what if you don’t? What’s JSON API got to say to that?

Well. I’m not terribly happy with JSON API’s answers – and we’ll come to those in a bit – but let’s see if we can understand where JSON API is coming from: How did JSON API end up like this, and to what end?

JSON API: Bytecount Golfing with -ility Handicaps

JSON API’s primary purpose is to minimize request count and data transmitted. It attempts to balance this against concerns for readability, flexibility, and discoverability.

Readability: Not Too Shabby

JSON API is pretty readable. Hit a site using it (most anything Ember), and check out the API requests and responses in your browser debugging tools, and you can work out pretty quickly what’s going on.

The side-car style for included objects, where you have to bounce from an ID reference in the main response to a lookup table that got sent along with it, hurts a bit here for humans: you have to do manual joins client-side. But inlining them wouldn’t play nice with the “minimize transfer” focus, so it makes sense.
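
For instance, a response might look roughly like this (abridged, with made-up records) – the relationship carries only a type/ID pointer, and the actual record rides along in included:

{
  "data": {
    "type": "work-orders", "id": "1",
    "relationships": {
      "location": { "data": { "type": "locations", "id": "9" } }
    }
  },
  "included": [
    { "type": "locations", "id": "9", "attributes": { "name": "Midtown" } }
  ]
}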

The URLs asking for those included objects get pretty gnarly, though.

Discoverability: Meh

I’d say its discoverability is pretty darn poor; this is partly a result of its flexibility, but mainly a result of its not providing much in the way of standardized introspection facilities.

The most frustrating lack for me when I hear “backend is using JSON API” is not being able to hit the root of the API and crawl from there to work out the whole of the API and what it supports. This is one of the most important attributes for the usability of a RESTful API from where I stand, but JSON API drops the ball, or heck, doesn’t even pick it up in the first place: Hypertext through-and-through it simply ain’t.

Where this often comes to a head is with include; there’s no standard way to signal that this is supported by a backend. You can give it a go and see if it yells at you, though. And even if it does support include, it’s not clear what you are and aren’t allowed to include with something until you try.

And if the backend doesn’t support include, then it’s free to unilaterally include whatever alongside the data you asked for. If the backend API is sanely versioned – and JSON API does not specify how to manage that – you’re probably fine, but if it’s not, and your JSON API client library prefers to fail eagerly rather than being liberal in what it accepts, your backend can break your frontend pretty readily. Versioning aside, that’s more an implementation issue than a spec issue, though, so we can let that slide.

So we have our answer to the question from the intro: How do you even find out what you can toss in that include bit? You don’t, or you guess, or you look up the docs or source code for the backend, or you email support. Mmm, emailing support: Definitely something I like to include smack in the middle of my development cycle.

Flexibility: Hurts Discoverability and Limits Utility of Having a Spec

There are a lot of “servers may do this, or that, or maybe that…” bits in there too, which make finding out a server uses JSON API less of a “now I know everything about it” than it could be. (Search for “MAY” and “SHOULD” in the document.)

We saw this with include, but it also comes into play with requesting only certain bits of a record (sparse fieldsets), sorting, pagination, and filtering, the latter of which is specified in its entirety as: “The filter query parameter is reserved for filtering data. Servers and clients SHOULD use this key for filtering operations.” The limited specification of filtering and sparse fieldsets seems surprising in the face of a focus on reducing the amount of data transferred: This seems very much fair game for a spec with that aim in mind, but it handwaves and throws it in the flexibility bin, instead.
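
For example, the spec reserves these parameter names, but what any given backend does with a request like this – which fields it honors, what filter keys mean – is implementation-defined (made-up endpoint):

GET /work-orders?fields[work-orders]=status&filter[status]=open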

This really smarts for two reasons as a client dev:

  • There’s no standard way to communicate what implementation-defined choices a JSON API backend has made.
  • There’s no requirement to make those choices uniformly across all APIs.

This again means that learning an API is using the JSON API spec doesn’t buy you as much as it could; you still have to ask a lot of questions to sort out what that means in practice.

It also means that any client-side de/serializer for JSON API is limited in the support it can provide to you. The spec is very open to customization, which means that you will have to learn those customizations in force for your backend and teach your JSON API parser about them.

This reminds me a bit of how OAuth 2 moved from being a spec to a meta-“spec”, flexible to a fault, as described by the one-time lead author and editor of that spec:

One of the compromises was to rename it from a protocol to a framework, and another to add a disclaimer that warns that the specification is unlike to produce interoperable implementations. (“OAuth 2.0 and the Road to Hell”)

Peculiar: Why Only JSON?

JSON API seems weirdly bound to the content-type (it’s an API! in JSON!), which is kind of funny to me in light of the “A server MUST prepare responses, and a client MUST interpret responses, in accordance with HTTP semantics” language. This feels like following the letter rather than spirit of the law: There’s no notion of a resource that might go by various possible representations. Content transferred under JSON API’s auspices goes by a JSON-API–specific content-type.

JSON is not a terribly expressive data format; you’ve got the rudiments needed to cobble together more specific data types atop it. That also means there’s little reason you couldn’t translate the data in a JSON API response into another content type, be it BSON, XML, S-expressions, or something even more unique.

Maybe That’s Not JSON API’s Job?

Perhaps the JSON API homepage, rather than the spec, is more honest in its aims:

If you’ve ever argued with your team about the way your JSON responses should be formatted, JSON API can be your anti-bikeshedding tool.

By following shared conventions, you can increase productivity, take advantage of generalized tooling, and focus on what matters: your application.

Clients built around JSON API are able to take advantage of its features around efficiently caching responses, sometimes eliminating network requests entirely.

It takes for granted you’re building an API, and it’s only going to support JSON. Its pitch: Use JSON API so you don’t have to quibble about how you encode your data, and you get this already thought-through support for caching data and minimizing the requests needed for free!

Perhaps JSON API’s audience is specifically API producers, not consumers, and that’s why I don’t find it addressing my needs.

How Is GraphQL Better?

The more declarative approach of GraphQL (and, to a lesser degree, Falcor) fulfills the spec-stated goals better than JSON API does itself.

Heck, it also satisfies those of the homepage better, too!

GraphQL in a Nutshell

The rough idea is:

  • There is a typed spec for what data is available and how it’s related.
  • Components can request specific bits they need using a query language (see the sketch after this list). Queries can be typechecked.
  • A query builder can aggregate component requests into a more general request, coalesce them, and then hit the backend/respond from cache intelligently, without the components needing to worry about this.
  • The system then vends back to the components precisely the info they requested, no more, no less.
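
For a taste, a component needing just a work order’s location name and part names might ask something like this (hypothetical schema):

{
  workOrder(id: "1") {
    location { name }
    orderItems {
      inventoryItem {
        part { name }
      }
    }
  }
}

The response mirrors the query’s shape: exactly those fields, nothing more.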

You can see where that’d be handy in a React world of little components asking for this or that nugget of info, which was the background against which GraphQL arose.

No Intrinsic Content-Type

The actual wire protocol is kind of beside the point unless you’re the GraphQL engine implementor. Is it sent by JSON or BSON or MP3s modulated at 56kbps? Is it using HTTP over TCP? Audio blips over SCTP? Who cares! Here’s how the data is laid out when it reaches you, here’s the types of that data; ask for what you need.

No Manual Perftuning

The query optimization bit can get arbitrarily clever without impacting the components, which is excellent for future performance tuning: Upgrade your client-side GraphQL engine, and your app stands to suddenly get more performant, without any further work on your part.

And then I go to an Ember app using JSON API where they have these insane URLs where they’ve got &include=this,that,theotherthing,ohandnowthisthing and the diffs for that insanely long line are so fun to read, lemme tell you. (I prettied up the intro code snippet to use […].join() so it’d be readable at most widths without horizontal scrolling, but that was just one big ol' string in the source. Ayup.)

It’s silly that they have to worry about this very manual, rough, by-hand query optimization at all, when they’re not even concerned with query optimization; they just need some data. And the optimization is limited by the size of the records, as well.

Conclusion

JSON API is a rather limited spec that I find flexible to a fault as a client developer. It seems wide open to bikeshedding still on the API producer’s side as well, due to that flexibility, so I’m not sure how well it meets either its spec-stated or marketing-stated aims.

GraphQL offers a declarative approach to directly expressing the data available – which addresses my desire to be able to pull that information without a lot of digging as a client dev – and the data requested – which addresses my desire to be able to pull the data I need without worrying about the details of how it’s going to get to me.

But JSON API slots neatly into an existing niche – an API! in JSON! Hey, I think I’ve used a few of those! GraphQL, meanwhile, is a different bird entirely. Consequently, I’ve yet to use GraphQL, while I have ended up working against a JSON API already, and expect I’ll find myself doing so again in future as well.

JSON API is an incremental improvement, serviceable and certainly no worse than even more thoroughly ad hoc API creations, and so I expect it to spread widely: I expect to run into a lot of JSON API backends, and may never have the chance to consume a GraphQL backend. I’m glad for what sanity JSON API does bring to the wild west of APIs out there.


Thanks to Chris Krycho for feedback on a draft of this article. He tripped over all the awkward transitions so you don’t have to. ;)

]]>
<![CDATA[Father's Day: Happy Hurricane]]> 2016-06-17T22:11:00-04:00 https://jeremywsherman.com/blog/2016/06/17/fathers-day This Sunday marks my second Father’s Day as a father. If you’re not yourself a parent, that won’t mean much to you. It certainly didn’t to me. If you’re en route to fatherhood, read on to learn what “fatherhood” actually means.

My experience was that preparation for new parents focused heavily on the birth experience. What I knew of what would follow focused primarily on early childhood development and dangers with a side helping of lactation and baby-wearing. These are good things to know, but they don’t do jack for helping you cope with what having a newborn in your house means for you.

Birth as Loss

As a newborn, your kid is entirely dependent on its parents for everything. You will be eating, breathing, and sleeping baby. Your schedule is its schedule.

Eventually, stuff might get a bit saner. You’ll get nursing sorted out, you’ll find a sleeping arrangement that works for your family, you’ll find some ways to care for your kid.

But there’s an especially strong and demanding pairbond between mother and child, and you might very well feel left out, or more strongly, crowded out: You might feel like you’ve lost your wife to this child. And babies are not terribly relatable creatures, but they are very demanding, and they know no patience. It can feel like a raw deal.

I turned a corner once my son was able to laugh. I could do something, and he could respond to it, and I could relate to that. I think that’s when my baby went from “it” to “he” for me.

Say Goodbye to Life as You Knew It

When they’re a baby, their schedule is yours.

Turns out, that doesn’t really change as they age into toddlerhood.

You’ve Yielded Autonomy

You’ve lost a lot of autonomy by assuming stewardship of an amateur human. Sure, you can stay up late; but if your kid wakes up at 6 am, someone has to be up with them. That someone is likely you. So either you go to bed, or you spend a day tired and cranky, and no good to nobody. When you wake up is no longer your choice, and if you know what’s best for you, neither is when you go to sleep.

More than that: What you do during the day is restricted to what you can do while watching over your child. Maybe you’ll have a quiet kid who is fine sitting and playing with whatever for a while. This won’t be much of a burden to bear. Maybe you’ll have a very interactive and active child who very much wants to do something right now thank you very much and are you watching this because we’re going to do this together. You’ll find your options in that scenario are rather limited.

Young children don’t suffer fools, or delays, gladly. If doing something involves waiting around, especially quietly, you can probably cut it out. Like dinners out with a 45-minute wait spent standing around and ordering drinks at the bar and gabbing to pass the time? Yeah - that’s incompatible with wee ones. Movies out don’t really work either. Picking up donuts to go with a kid in tow can be touch and go if there’s a line. A lot of stuff you took for granted that you could do, you can’t, at least without a sonic tax, possibly with tears attached.

Everyday stuff you take for granted that needs to happen can also become a challenge. Traveling by car means a lot more prep work. Shopping requires half a mind on what exactly your kid is doing with that produce you thought you safely tucked in the cart. And is vacuuming worth a tussle over who gets to control the vacuum? How badly do you need something cleaned, and how clean is clean enough?

You’ve Also Lost Environmental Control

That’s a good segue from loss of personal autonomy into loss of control. As an adult, you have a lot of control over your environment. If you’ve got your own place, you can pretty much stick something wherever, and expect to find it still there. You can demolish a staircase and landing, cut a hole for a new door, and take your time fitting a new door and rebuilding the staircase. You know well enough not to try to leap a storey down onto unforgiving cement. Even your cat’s curiosity isn’t enough to overcome their caution in that case.

Your kid is another matter. Even if they knew well enough that falling down that far would be a bad idea, they’re just not terribly good at moving around and keeping track of the environment in their head. They can easily accidentally walk too far, or lose their balance near an edge, or forget to watch where they’re going because there’s a housefly or a patch of sunlight. So you’ll find yourself reshaping your entire environment to fit their needs and behaviors. And you’ll weigh convenience against how big of a mess they can make if you’re looking elsewhere for thirty seconds. (A salt cellar makes a great mini-sandbox in a pinch, don’t you know?)

It’s a big step from “master of my tiny pocket universe” to “adult graciously allowed to exist as my caregiver and diviner of my needs and desires”.

A Change in the Weather

I found this stifling and isolating. It’s a very peculiar experience to find you’ve more autonomy in your work life than in your home life.

But it has its upsides. And I’d do it again.

Recentering

All those losses are losses from a point of view where you’re at the center of your own universe.

In practice, they’re just side effects of shifting the center from yourself to your family. It’s no longer all about you. There’s not time for you to maintain that illusion any more. Welcome to adulthood.

Learning Patience & Humility

Children are elemental forces. You can’t reason with them for the first several years. You can empathize. You can distract with counter-proposals. But you can’t negotiate.

You mostly won’t get your way. You’ll find that it doesn’t even matter that you don’t get your way. You just wanted things to go your way because that’s what you were comfortable with.

You’re going to have to relax control and work with the situation as it presents itself. You’ll leave getting your way for when it matters – when there’s risk to health or safety, or there’s something important enough that it’s worth possibly distressing your kid, stressing everyone around you out, and maybe dealing with some shrieking and crying.

One concrete way this shows up is in learning to be patient. Yeah, I get it, you want to go right now. But your kid doesn’t. And you don’t really need to go right now. You just want to. Suck it up and wait a while. Set the expectation that you will be leaving soon. When the time comes, then you can leave.

Facing Humanity Head-On

Stuff will get broken. Things will go wrong. A lot of things will go wrong. Kids are clumsy, curious, and not bridled by concern for cleanliness, hygiene, or common sense. This is OK.

If you’re a perfectionist, you’re probably accustomed to everything going to plan, and ensuring that you have a plan and execute on it such that everything goes to plan. You won’t be able to exert that level of control, that clockwork execution, when it comes to large parts of your life any more.

You might have spent a couple decades driving out the human. It’s back now in spades, and you’ve no choice but to confront it head on.

This also teaches patience. It teaches you to expect people to stumble, to make mistakes, to err. You’ve probably kind of known that was the case in theory, but it wasn’t your experience before, and it was hard to cut someone a break because you’d worked out how to run things so you didn’t need anyone to cut you a break. Now it is your experience, and theory is practice, and boy, will you be getting a lot of practice. And you’ll probably be really glad when people cut you a break for acting in weird ways, running out of line to snatch up a kid about to get in trouble, walking all throughout the restaurant courtyard following an imp climbing up and down and around things and maybe bumping into people before bouncing off and away onto the next thing.

Having Fun

With kids, the lows can be low, but the highs can be so high. You have a license to be silly and a new set of eyes to experience the world through. You get to look at such commonplaces as trees, birds, and squirrels with fresh awareness and naked joy at their existence and activity. And if you’ve forgotten to play, you’ll learn that anew, too.

The Traumatic Hurricane of Fatherhood

So, Father’s Day. When you were born, you destroyed someone’s world and remade it around you. As a new father, you have to come to terms with the dramatic difference in responsibility, relationships, and rituals that come with this hurricane of a change. It’s sudden and total, but you can build a new and better life in its aftermath.

Here’s to hoping having a second kid is less hurricane and more tropical storm!

]]>
<![CDATA[Beyond Our Ken]]> 2016-05-05T23:20:00-04:00 https://jeremywsherman.com/blog/2016/05/05/beyond-our-ken The more I poke around, the more convinced I become that actually knowing what a piece of software is supposed to do is truly rather rare and generally beyond mortal ken. Making it do what you think it should do is nearly beyond our grasp.

If we’re honest with ourselves, we need every tool we can get just to wrangle software into behaving. That means types, that means tests, and that means, yes, even: proofs.

And that also means that proofs need tests, too.

What drove this home was reading a couple papers related to combining proving and testing.

Types and Tests

I’m on record for arguing in favor of using both types and tests to their utmost in both Types Complement Tests Complement Types and Beyond Type Wars.

It’ll come as no surprise what I’m going to recommend here: Use proofs and tests. And also types.

(It’s even less surprising if you’ve run across the Curry-Howard isomorphism, which relates logical proofs and propositions to exhibiting an instance of a type – Propositions as Types – or, more broadly, the notion of computational Trinitarianism. There are some deep connections here, and we should wring them for every last ounce of help they can give us in crafting correct and elegant software.)

Use Proofs AND Tests

This time, it’s not gonna be me saying it, though.

Really, you should use both tests and proofs, not just one or just the other:

This also reinforces the general idea that testing and proving are synergistic activities, and gives us hope that a virtuous cycle between testing and proving can be achieved in a theorem prover. (Zoe Paraskevopoulou et al., “Foundational Property-Based Testing”, 2015)

If you don’t, you’re going to screw up. In small ways often, basically just tripping over your feet, but sometimes in big ways, where no-one can see how to bail you out:

Second, tests complement proofs. We encountered five papers in which explicitly claimed theorems are false as stated. […] In every case, though, rudimentary testing discovered errors missed with pencil-and-paper proofs.

Indeed, we claim that tests complement even machine-checked proofs. As one example, two of the POPLmark solutions that contain proofs of type soundness use call-by-name beta in violation of the specification (Crary and Gacek, personal communication). We believe unit testing would quickly reveal this error.

Even better, one can sometimes test propositions that cannot be validated via proof. […] Testing also removes another obstacle to proof, the requirement that we first state the proposition of interest. Due to its exploratory nature, testing can inadvertently falsify unstated but desired propositions, e.g., that threads block without busy waiting (section 4.4). This is especially true for system-level and randomized testing. To some degree, the same is true of proving, but testing seems to be more effective at covering a broad space of system behaviors. (Casey Klein et al., “Run Your Research: On the Effectiveness of Lightweight Mechanization”, 2012)

Use ALL The Tools

We don’t have to choose just tests.

We don’t have to choose just types.

We don’t have to choose just proofs.

We have an abundance of tools waiting for us to take them up and apply them to our problems. It’s simple and reassuring to reject a whole class of them out of hand; if we pick just one, perhaps we can convince ourselves of our expertise. And you can indeed get quite far with just one. But if you can stomach your own ignorance, you might find you can get even farther by striving to master all these many disciplines.

That Said, Tests Are a Really Mature Technology

Automated testing stopped being rocket science at least a decade ago. If you do nothing else, at least write some automated tests.

Humans suck at repeating mechanical tasks, we’re bad at documenting them, bad at following them, we get bored really easily, and we’re really slow. Be virtuously lazy and sic a computer on your testing, for everyone’s sake.

If you’re going to pick just one of these, pick automated testing, and work it for all it’s worth. (Quite a lot, honestly!)

]]>
<![CDATA[Beyond Type Wars: Types Can Be Tests Too]]> 2016-05-05T23:13:00-04:00 https://jeremywsherman.com/blog/2016/05/05/beyond-type-wars Types and tests are not at war. Choose both.

In fact, if we tilt our heads a bit, types are just another flavor of test.

Don’t use just one flavor of testing; use all the tools you have at your disposal to make the best software you can.

Type Wars

Robert C. Martin believes code TDD’d into existence, and so having 100% test coverage by construction, nullifies the value of types:

My own prediction is that TDD is the deciding factor. You don’t need static type checking if you have 100% unit test coverage. And, as we have repeatedly seen, unit test coverage close to 100% can, and is, being achieved. What’s more, the benefits of that achievement are enormous.

Therefore, I predict, that as TDD becomes ever more accepted as a necessary professional discipline, dynamic languages will become the preferred languages. The Smalltalkers will, eventually, win. (Robert C. Martin, “Type Wars”, 2016)

The further your own development practice is from TDD, the more ludicrous this will seem to you.

If you ignore Martin’s emphasis on TDD, and focus instead on the “100% unit test coverage” bit, you’re likely to reject it out of hand: Coverage measures are a very fraught and limited measure. Even if you go “but that’s just line coverage!”, well, not even 100% branch coverage suffices to demonstrate freedom from fairly mechanical bugs, never mind more abstract errors in how you’ve implemented whatever half-imagined, unspecified system you’re aiming at.
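
A tiny illustration of that limit – a sketch where full branch coverage happily coexists with a crash:

func abs32(_ x: Int32) -> Int32 {
    return x < 0 ? -x : x
}

// abs32(-5) == 5 and abs32(5) == 5 exercise both branches: 100% coverage.
// And yet abs32(Int32.min) traps at runtime, because -Int32.min overflows Int32.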

The Compilation Test

I think he’s leaving a tool on the table, though. Not even just one tool: a robust bevy of tests. And a tool that slots neatly into test-driven development.

The more powerful your type system, the more oomph you can get out of simply, “Does it compile?”

Even with Java, though, you can get rather far:

Defining types is very much like writing tests—the compiler continuously checks the types for consistency while we loop back and fix errors. Step 0[, define all the types,] is exactly like normal TDD, except we are making formal statements about the system that the compiler maintains. Could step 0 take a long time? Sure. Maybe with a sufficiently-advanced type system we never even leave step 0. With Java I’m going to hit a wall pretty fast, but not before avoiding many of the worst problems with the Money design. (Ken Fox, “More Typing, Less Testing: TDD with Static Types, Part 2”, 2014)

As that demonstrates, you can usefully incorporate types into test-driven development with thoroughly salutary effects.
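
As a Swift-flavored sketch of that idea (a hypothetical Email example), phantom types let “does it compile?” stand in for a whole class of tests:

// States as types; the generic parameter is a compile-time-only tag.
enum Unvalidated {}
enum Validated {}

struct Email<State> {
    let address: String
}

func validate(_ email: Email<Unvalidated>) -> Email<Validated>? {
    guard email.address.contains("@") else { return nil }
    return Email<Validated>(address: email.address)
}

// Only a validated address will typecheck here.
func send(to email: Email<Validated>) { /* … */ }

// send(to: Email<Unvalidated>(address: "oops")) // compile error, caught on every build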

It Cramps My Style

It’s true that, once you’ve got a type system, you’re constrained to writing code that fits within its constraints. Often you can ram through something that doesn’t, but it’s uncomfortable and tends to come with at least some syntactic overhead that makes it not nice to do.

TDD puts you under similar constraints, though: In order to achieve test isolation, you have to structure your software differently. You’ve narrowed your collection of possible programs from all those that can be represented in your language to only those that can be test-driven into existence, and so to those that yield readily to automated testing.

Both typing and testing constrain what we can do with our code; we accept both limits because they free us to build with more confidence than we’d have without either.

Types AND Tests, Or Types ARE Tests

Whichever way you look at it, use ‘em both.

Your software will be better for it, and you’ll grow to be a better software engineer for the practice.

]]>
<![CDATA[Types Complement Tests Complement Types]]> 2016-05-05T22:55:00-04:00 https://jeremywsherman.com/blog/2016/05/05/types-complement-tests-complement-types Types and tests are complementary. They might even be synergistic: The two together can accomplish what neither can alone. They definitely are not rivalrous goods, and if you’re picking only one, you’re doing yourself a disservice.

If You Have To Pick One, Though

There’s a ceiling to how far we can get with types. Most languages developers work in have rather limited type systems. Most developers lack the skill, practice, and simple exposure to past examples to make use of more powerful type systems. That’s not a slight: Generating those examples today can be a good way to get yourself at least a Master’s, if not a PhD.

We can push automated testing really far regardless of type system. There’s an abundance of popular literature on the subject. If you want to get better, you don’t have to look far, and you can put what you learn to practice immediately.

If you had to pick between either building 100% TDD’d code in a unityped language or building code with no automated tests in a conventionally typed language, you’d be a fool not to pick the TDD’d codebase.

But You Don’t, So Use Both

You don’t have to choose one or the other. Reject the false dichotomy, chase off its acolytes on their hobby horses, and make the most of all the technologies available to you today to produce better software.

]]>
<![CDATA[Here's to iOS apps in F#]]> 2016-04-06T11:50:00-04:00 https://jeremywsherman.com/blog/2016/04/06/heres-to-ios-apps-in-f-sharp At Build 2016, Microsoft announced that Xamarin is free with all versions of Visual Studio, and the Xamarin SDK will be open-sourced.

My first thought was: iOS apps in F#? Lemme at it!

Why F#?

Swift is going through growing pains, and it’s still substantially a statement-oriented language. It’s supposed to be very comfortable if you come from a blocks-and-braces background, with seamless interop with C and Obj-C, and it’s executed on that wonderfully. If you were hoping for a more truly functional language, though, it’s kind of a downer; its gig at the bleeding edge seems more generic programming than functional programming (“C++ done right”).

F# benefits more directly from the long evolution of ML languages. It’s been public longer, and it’s got a good pedigree: Microsoft have done interesting things with all their languages over the past decade, and F# is no exception.

It might be “grass is greener”, but I’d like to take that for a spin and kick its tires, without having to up and move to a completely different target platform.

Interested?

F# for Fun and Profit is really great for learning about F# and why it’s good stuff. It’s organized less like a blog and more like a collection of series of instructional content.

Here’s their one-page summary of “why use F#”. Most of the bullets apply as well for Swift as for F#, but the core difference of expression-orientation rather than statement-orientation – not called out there – matters quite a bit in how easy it is to compose expressions and extract expressions as independent functions. (The workflow/computation expression sugar is quite nice, as well, and of course F# for Fun and Profit has a series teaching it in detail.)
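
To make the statement- vs expression-orientation difference concrete, here’s a small Swift sketch. The statement-oriented version has to declare a variable and assign it in each branch; the expression-oriented version treats the conditional itself as a value, which in Swift means reaching for the ternary, while in F# a plain if/then/else already is an expression:

// Statement-oriented: `if` yields no value, so declare first, assign in branches.
let count = 1
let label: String
if count == 1 {
    label = "one thing"
} else {
    label = "many things"
}

// Expression-oriented: the conditional is itself the value.
let label2 = count == 1 ? "one thing" : "many things"

Expression-orientation pays off when you want to extract that conditional into its own function: it’s already a single value-producing unit, so there’s nothing to untangle.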

If you prefer to listen rather than read, then check out:

Both of these also get into introducing F# in the workplace, if that’s something you’re motivated to tackle.

Why Xamarin?

Xamarin was $$$ before, but this drops the price of adoption for me (and anyone inheriting my codebase) significantly. Adoption costs matter!

As a bonus, if I can be like, “Hey, you’ll get an iOS app, and you’ll get a pile of platform-independent code you can point at Windows, Mac, or Android afterwards,” that seems like a win all around.

But really, I want a shot to use an expression-oriented language today as my main language.

Warning: Untested Speculation

I haven’t actually tried to do this yet. It might go down in flames in practice when I try to get everything lined up and working together; lots of things sound good in outline but fail in implementation.

If I throw an afternoon at it some time in the future, I’ll check back in with an experience report then.

]]>
<![CDATA[XCTestExpectation Gotchas]]> 2016-03-19T21:24:00-04:00 https://jeremywsherman.com/blog/2016/03/19/xctestexpectation-gotchas XCTestExpectation simplifies testing callback-style code, but some of its design choices make tests using it fragile unless they’re mitigated:

  • It explodes if everything works right but later than you expected.
  • It explodes if everything works right more than once.

This article presents two concrete mitigations:

  • Use weak references to ensure the expectation dies before it can cause you trouble.
  • Use a different promise API to do your waiting.

A Quick Review

XCTestExpectation is the tool Apple’s unit testing framework XCTest provides for coping with asynchronous APIs.

It’s a promise/future with one purpose: to answer the question, “did it get filled in time?”

To use it, you ask the test case to create one or more:

let promise = expectationWithDescription("it'll happen, trust me")

wait a configurable amount of time for every outstanding expectation to get filled:

waitForExpectationsWithTimeout(maxWaitSeconds, handler: nil)

and log a test failure if time runs out before that happens:

Asynchronous wait failed: Exceeded timeout of 1 seconds, with unfulfilled expectations: “it’ll happen, trust me”.

It would have succeeded if it had been filled in time:

promise?.fulfill()

Example: We’ll Call You

You can’t use the XCTest framework from a Playground (rdar://problem/17839045), so you’ll need to throw this in a full-blown project:

Get the code from GitHub
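
The listings below lean on two tiny helpers defined in that repo: after(seconds:call:) and, later, spin(forSeconds:). Going by the log output later in this post, they behave roughly like this sketch (Swift 2-era APIs; the repo’s actual implementations may differ):

import Foundation

// Runs `call` on the main queue after roughly `seconds`, with a little logging.
func after(seconds seconds: NSTimeInterval, call: () -> Void) {
    let delay = dispatch_time(DISPATCH_TIME_NOW, Int64(seconds * NSTimeInterval(NSEC_PER_SEC)))
    dispatch_after(delay, dispatch_get_main_queue()) {
        print("\(seconds): finished waiting")
        call()
        print("\(seconds): all done here")
    }
}

// Spins the run loop so any queued-up callbacks get a chance to fire.
func spin(forSeconds seconds: NSTimeInterval) {
    NSRunLoop.currentRunLoop().runUntilDate(NSDate(timeIntervalSinceNow: seconds))
}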

class LateCallback: XCTestCase {
    let callBackDelay: NSTimeInterval = 2


    func testNotWaitingLongEnough() {
        let promiseToCallBack = expectationWithDescription("calls back")
        after(seconds: callBackDelay) { () -> Void in
            print("I knew you'd call!")
            promiseToCallBack.fulfill()
        }

        waitForExpectationsWithTimeout(callBackDelay / 2) { error in
            print("Aww, we timed out: \(error)")
        }
    }
}

Go ahead and run this. Everything works fine – for now:

Test Suite 'All tests' started at 2016-03-19 21:56:49.223
Test Suite 'Tests.xctest' started at 2016-03-19 21:56:49.225
Test Suite 'LateCallback' started at 2016-03-19 21:56:49.225
Test Case '-[Tests.LateCallback testNotWaitingLongEnough]' started.
Aww, we timed out: Optional(Error Domain=com.apple.XCTestErrorDomain Code=0 "The operation couldn’t be completed. (com.apple.XCTestErrorDomain error 0.)")
/Users/jeremy/Github/XCTestExpectationGotchas/Tests/LateCallback.swift:26: error: -[Tests.LateCallback testNotWaitingLongEnough] : Asynchronous wait failed: Exceeded timeout of 1 seconds, with unfulfilled expectations: "calls back".
Test Case '-[Tests.LateCallback testNotWaitingLongEnough]' failed (2.247 seconds).
Test Suite 'LateCallback' failed at 2016-03-19 21:56:51.473.
   Executed 1 test, with 1 failure (0 unexpected) in 2.247 (2.248) seconds
Test Suite 'Tests.xctest' failed at 2016-03-19 21:56:51.474.
   Executed 1 test, with 1 failure (0 unexpected) in 2.247 (2.249) seconds
Test Suite 'All tests' failed at 2016-03-19 21:56:51.474.
   Executed 1 test, with 1 failure (0 unexpected) in 2.247 (2.251) seconds


Test session log:
  /var/folders/63/np5g0d5j54x1s0z12rf41wxm0000gp/T/com.apple.dt.XCTest-status/Session-2016-03-19_21:56:45-vfvzhb.log

Program ended with exit code: 1

Test suite kicks off, everything runs, the test fails due to a timeout while waiting for the expectation to be met, and the process exits. This is how XCTestExpectation is supposed to work.

Kaboom: Missing the Window

We only ran the one test, though. Let’s say you have more tests to run after this one.

We can fake this out by adding a second test method that runs the runloop for a bit before exiting, with a name that sorts alphabetically after our testNotWaitingLongEnough test.

Conveniently enough, XCTest happens to run tests in alphabetical order, so the test runner will run our first test, then run this second one, then exit.

Here’s our new test method:

func testZzz() {
    print("Let's just wait a while…")
    let tillAfterCallBack = callBackDelay
    spin(forSeconds: tillAfterCallBack)
    print("Yawn, that was boring.")
}

Let’s see what happens (or you can skip to the summary):

Test Suite 'All tests' started at 2016-03-19 22:19:31.796
Test Suite 'Tests.xctest' started at 2016-03-19 22:19:31.798
Test Suite 'LateCallback' started at 2016-03-19 22:19:31.798
Test Case '-[Tests.LateCallback testNotWaitingLongEnough]' started.
Aww, we timed out: Optional(Error Domain=com.apple.XCTestErrorDomain Code=0 "The operation couldn’t be completed. (com.apple.XCTestErrorDomain error 0.)")
/Users/jeremy/Github/XCTestExpectationGotchas/Tests/LateCallback.swift:16: error: -[Tests.LateCallback testNotWaitingLongEnough] : Asynchronous wait failed: Exceeded timeout of 1 seconds, with unfulfilled expectations: "calls back".
Test Case '-[Tests.LateCallback testNotWaitingLongEnough]' failed (2.202 seconds).
Test Case '-[Tests.LateCallback testZzz]' started.
Let's just wait a while…
2.0: finished waiting
I knew you'd call!
2016-03-19 22:19:34.001 xctest[92369:96447173] *** Assertion failure in -[XCTestExpectation fulfill], /Library/Caches/com.apple.xbs/Sources/XCTest/XCTest-9530/XCTestFramework/Classes/XCTestCase+AsynchronousTesting.m:451
2016-03-19 22:19:34.002 xctest[92369:96447173] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'API violation - called -[XCTestExpectation fulfill] after the wait context has ended for calls back.'
*** First throw call stack:
(
  0   CoreFoundation                      0x00007fff897ec03c __exceptionPreprocess + 172
  1   libobjc.A.dylib                     0x00007fff8674276e objc_exception_throw + 43
  2   CoreFoundation                      0x00007fff897ebe1a +[NSException raise:format:arguments:] + 106
  3   Foundation                          0x00007fff8b98b99b -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 195
  4   XCTest                              0x000000010006f149 -[XCTestExpectation fulfill] + 302
  5   Tests                               0x00000001006858ab _TFFC5Tests12LateCallback24testNotWaitingLongEnoughFS0_FT_T_U_FT_T_ + 203
  6   Tests                               0x0000000100685c4f _TFF5Tests5afterFT7secondsSd4callFT_T__T_U_FT_T_ + 367
  7   Tests                               0x0000000100685de7 _TTRXFo__dT__XFdCb__dT__ + 39
  8   libdispatch.dylib                   0x00007fff8301f700 _dispatch_call_block_and_release + 12
  9   libdispatch.dylib                   0x00007fff8301be73 _dispatch_client_callout + 8
  10  libdispatch.dylib                   0x00007fff8302d6a0 _dispatch_after_timer_callback + 77
  11  libdispatch.dylib                   0x00007fff8301be73 _dispatch_client_callout + 8
  12  libdispatch.dylib                   0x00007fff830284e6 _dispatch_source_latch_and_call + 721
  13  libdispatch.dylib                   0x00007fff8302093b _dispatch_source_invoke + 412
  14  libdispatch.dylib                   0x00007fff8302c5aa _dispatch_main_queue_callback_4CF + 416
  15  CoreFoundation                      0x00007fff8973f3f9 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 9
  16  CoreFoundation                      0x00007fff896fa68f __CFRunLoopRun + 2159
  17  CoreFoundation                      0x00007fff896f9bd8 CFRunLoopRunSpecific + 296
  18  Foundation                          0x00007fff8b953b29 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 278
  19  Foundation                          0x00007fff8b971d9e -[NSRunLoop(NSRunLoop) runUntilDate:] + 108
  20  Tests                               0x0000000100685262 _TF5Tests4spinFT10forSecondsSd_T_ + 162
  21  Tests                               0x000000010068510f _TFC5Tests12LateCallback7testZzzfS0_FT_T_ + 207
  22  Tests                               0x00000001006852a2 _TToFC5Tests12LateCallback7testZzzfS0_FT_T_ + 34
  23  CoreFoundation                      0x00007fff896c37bc __invoking___ + 140
  24  CoreFoundation                      0x00007fff896c3612 -[NSInvocation invoke] + 290
  25  XCTest                              0x0000000100022598 __24-[XCTestCase invokeTest]_block_invoke_2 + 159
  26  XCTest                              0x000000010005602e -[XCTestContext performInScope:] + 184
  27  XCTest                              0x00000001000224e8 -[XCTestCase invokeTest] + 169
  28  XCTest                              0x0000000100022983 -[XCTestCase performTest:] + 443
  29  XCTest                              0x0000000100020654 -[XCTestSuite performTest:] + 377
  30  XCTest                              0x0000000100020654 -[XCTestSuite performTest:] + 377
  31  XCTest                              0x0000000100020654 -[XCTestSuite performTest:] + 377
  32  XCTest                              0x000000010000e892 __25-[XCTestDriver _runSuite]_block_invoke + 51
  33  XCTest                              0x0000000100033a1b -[XCTestObservationCenter _observeTestExecutionForBlock:] + 611
  34  XCTest                              0x000000010000e7db -[XCTestDriver _runSuite] + 408
  35  XCTest                              0x000000010000f38a -[XCTestDriver _checkForTestManager] + 696
  36  XCTest                              0x000000010005729f _XCTestMain + 628
  37  xctest                              0x0000000100001dca xctest + 7626
  38  libdyld.dylib                       0x00007fff8b25f5c9 start + 1
)
libc++abi.dylib: terminating with uncaught exception of type NSException
(lldb)

And now we’re sitting at the debugger. Oof, that smarts.

Take a look at what’s going on in that backtrace:

  • Our Zzz test is hanging out running the runloop.
  • The after(seconds:call:) helper finishes waiting and calls its callback.
  • The callback fulfills an expectation belonging to the (already finished, already failed) first test.
  • This trips a “you’re holding it wrong” assertion in the test framework:

    Terminating app due to uncaught exception ‘NSInternalInconsistencyException’, reason: ‘API violation - called -[XCTestExpectation fulfill] after the wait context has ended for calls back.’

You might run up against this in practice when writing integration tests against a live, but not always quick to respond, backend service.

Kaboom: Calling Twice

That’s not the only way things can go wrong.

What happens if our callback has at-least-once rather than exactly-once behavior, and happens to call back twice?

class DoubleCallback: XCTestCase {
    func testDoubleTheFulfillment() {
        let promiseToCallBack = expectationWithDescription("calls back")
        let callBackDelay: NSTimeInterval = 1

        twiceAfter(seconds: callBackDelay) {
            print("i hear you calling me")
            promiseToCallBack.fulfill()
        }

        let afterCallBack = 2 * callBackDelay
        waitForExpectationsWithTimeout(afterCallBack, handler: nil)
    }
}
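
The twiceAfter(seconds:call:) helper also lives in the repo. Judging from the log output below, it amounts to calling back twice in immediate succession, roughly like this sketch:

// At-least-once delivery in miniature: call back twice in a row.
func twiceAfter(seconds seconds: NSTimeInterval, call: () -> Void) {
    after(seconds: seconds) {
        print("now once")
        call()
        print("now twice")
        call()
        print("wasn't that nice?")
    }
}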

This is what happens (or skip to the summary)

Test Suite 'Selected tests' started at 2016-03-19 22:38:09.451
Test Suite 'DoubleCallback' started at 2016-03-19 22:38:09.452
Test Case '-[Tests.DoubleCallback testDoubleTheFulfillment]' started.
1.0: finished waiting
now once
i hear you calling me
now twice
i hear you calling me
2016-03-19 22:38:10.567 xctest[93147:96490281] *** Assertion failure in -[XCTestExpectation fulfill], /Library/Caches/com.apple.xbs/Sources/XCTest/XCTest-9530/XCTestFramework/Classes/XCTestCase+AsynchronousTesting.m:450
2016-03-19 22:38:10.568 xctest[93147:96490281] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'API violation - multiple calls made to -[XCTestExpectation fulfill] for calls back.'
*** First throw call stack:
(
  0   CoreFoundation                      0x00007fff897ec03c __exceptionPreprocess + 172
  1   libobjc.A.dylib                     0x00007fff8674276e objc_exception_throw + 43
  2   CoreFoundation                      0x00007fff897ebe1a +[NSException raise:format:arguments:] + 106
  3   Foundation                          0x00007fff8b98b99b -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 195
  4   XCTest                              0x000000010006f0bb -[XCTestExpectation fulfill] + 160
  5   Tests                               0x0000000100795c6b _TFFC5Tests14DoubleCallback24testDoubleTheFulfillmentFS0_FT_T_U_FT_T_ + 203
  6   Tests                               0x0000000100795e05 _TFF5Tests10twiceAfterFT7secondsSd4callFT_T__T_U_FT_T_ + 389
  7   Tests                               0x0000000100794eff _TFF5Tests5afterFT7secondsSd4callFT_T__T_U_FT_T_ + 367
  8   Tests                               0x0000000100795097 _TTRXFo__dT__XFdCb__dT__ + 39
  9   libdispatch.dylib                   0x00007fff8301f700 _dispatch_call_block_and_release + 12
  10  libdispatch.dylib                   0x00007fff8301be73 _dispatch_client_callout + 8
  11  libdispatch.dylib                   0x00007fff8302d6a0 _dispatch_after_timer_callback + 77
  12  libdispatch.dylib                   0x00007fff8301be73 _dispatch_client_callout + 8
  13  libdispatch.dylib                   0x00007fff830284e6 _dispatch_source_latch_and_call + 721
  14  libdispatch.dylib                   0x00007fff8302093b _dispatch_source_invoke + 412
  15  libdispatch.dylib                   0x00007fff8302c5aa _dispatch_main_queue_callback_4CF + 416
  16  CoreFoundation                      0x00007fff8973f3f9 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 9
  17  CoreFoundation                      0x00007fff896fa68f __CFRunLoopRun + 2159
  18  CoreFoundation                      0x00007fff896f9bd8 CFRunLoopRunSpecific + 296
  19  Foundation                          0x00007fff8b953b29 -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 278
  20  XCTest                              0x000000010006e6e8 -[XCTestCase(AsynchronousTesting) waitForExpectationsWithTimeout:handler:] + 1083
  21  Tests                               0x00000001007954d6 _TFC5Tests14DoubleCallback24testDoubleTheFulfillmentfS0_FT_T_ + 614
  22  Tests                               0x0000000100795722 _TToFC5Tests14DoubleCallback24testDoubleTheFulfillmentfS0_FT_T_ + 34
  23  CoreFoundation                      0x00007fff896c37bc __invoking___ + 140
  24  CoreFoundation                      0x00007fff896c3612 -[NSInvocation invoke] + 290
  25  XCTest                              0x0000000100022598 __24-[XCTestCase invokeTest]_block_invoke_2 + 159
  26  XCTest                              0x000000010005602e -[XCTestContext performInScope:] + 184
  27  XCTest                              0x00000001000224e8 -[XCTestCase invokeTest] + 169
  28  XCTest                              0x0000000100022983 -[XCTestCase performTest:] + 443
  29  XCTest                              0x0000000100020654 -[XCTestSuite performTest:] + 377
  30  XCTest                              0x0000000100020654 -[XCTestSuite performTest:] + 377
  31  XCTest                              0x000000010000e892 __25-[XCTestDriver _runSuite]_block_invoke + 51
  32  XCTest                              0x0000000100033a1b -[XCTestObservationCenter _observeTestExecutionForBlock:] + 611
  33  XCTest                              0x000000010000e7db -[XCTestDriver _runSuite] + 408
  34  XCTest                              0x000000010000f38a -[XCTestDriver _checkForTestManager] + 696
  35  XCTest                              0x000000010005729f _XCTestMain + 628
  36  xctest                              0x0000000100001dca xctest + 7626
  37  libdyld.dylib                       0x00007fff8b25f5c9 start + 1
)
libc++abi.dylib: terminating with uncaught exception of type NSException
(lldb)

We trip yet another assertion in XCTest:

Terminating app due to uncaught exception ‘NSInternalInconsistencyException’, reason: ‘API violation - multiple calls made to -[XCTestExpectation fulfill] for calls back.’

Much of the time, this probably does indicate an actual error in the code calling the callback; but when it doesn’t, you’ll want to know about this assertion and be able to dodge it, too.

What’s Wrong?

This double-callback scenario calls back twice in succession. But if there were a delay between the first and second call back, and the test runner happened to exit during that delay, you’d get a successful test run rather than crashing every time.

With a delay between callbacks, you’d only trip the assertion when other tests kept the test runner process running long enough.

This situation parallels that of the too-late callback: no problems appear till something else runs out the clock.

This is tricky:

  • You won’t ever trip them while banging away at whatever the latest test you’re working on is, because a test runner running just that async test will exit as soon as the wait-timer runs out, before the too-late or second callback can occur.
  • You might not even trip them when you first run your whole test suite, because the offending test might be last in the run, or the tests that follow might not run long enough.

This is also obnoxious to run into: When an assertion trips, it bombs the entire test process. (Unwrapping an implicitly unwrapped optional to find a nil has the same effect.)

These assertions aren’t test failures that would allow testing to continue; instead, XCTest treats these as programmer errors:

  • Fulfilling a promise after its test has already finished
  • Filling an already-filled promise

To be fair, these cases are called out in the documentation for XCTestExpectation.fulfill():

Call -fulfill to mark an expectation as having been met. It’s an error to call -fulfill on an expectation that has already been fulfilled or when the test case that vended the expectation has already completed.

but the documentation isn’t explicit that “it’s an error” translates to “and it will bomb your whole test process”.

Avoiding These Assertions

In both cases, the problem is that we’re calling fulfill when we shouldn’t. So let’s not do that.

Let the Expectation Die With the Test

XCTest actually hangs on to the expectations it creates so it can collect them during the wait call.

Our test method doesn’t need yet another strong reference to the expectation. If we instead capture a weak reference in our callback closure, the expectation dies with our test rather than lingering for us to trip over after the test has completed, and the late callback becomes a no-op.

First, neuter the time-bombed testNotWaitingLongEnough by prefixing its name with an x so it won’t get picked up by the test runner any more:

 class LateCallback: XCTestCase {
     let callBackDelay: NSTimeInterval = 2


-    func testNotWaitingLongEnough() {
+    func xtestNotWaitingLongEnough() {
         let promiseToCallBack = expectationWithDescription("calls back")
         after(seconds: callBackDelay) { () -> Void in

Now clone it, but this time, use a weak reference to the expectation:

func testPreparedForNotWaitingLongEnough() {
    weak var promiseToCallBack = expectationWithDescription("calls back")
    after(seconds: callBackDelay) { () -> Void in
        guard let promise = promiseToCallBack else {
            print("too late, buckaroo")
            return
        }

        print("I knew you'd call!")
        promise.fulfill()
    }

    waitForExpectationsWithTimeout(callBackDelay / 2) { error in
        print("Aww, we timed out: \(error)")
    }
}

Run the LateCallback suite again, and the logs now look like (or skip to the summary):

Test Suite 'Selected tests' started at 2016-03-19 23:19:19.980
Test Suite 'LateCallback' started at 2016-03-19 23:19:19.981
Test Case '-[Tests.LateCallback testPreparedForNotWaitingLongEnough]' started.
Aww, we timed out: Optional(Error Domain=com.apple.XCTestErrorDomain Code=0 "The operation couldn’t be completed. (com.apple.XCTestErrorDomain error 0.)")
/Users/jeremy/Github/XCTestExpectationGotchas/Tests/LateCallback.swift:34: error: -[Tests.LateCallback testPreparedForNotWaitingLongEnough] : Asynchronous wait failed: Exceeded timeout of 1 seconds, with unfulfilled expectations: "calls back".
Test Case '-[Tests.LateCallback testPreparedForNotWaitingLongEnough]' failed (1.945 seconds).
Test Case '-[Tests.LateCallback testZzz]' started.
Let's just wait a while…
2.0: finished waiting
too late, buckaroo
2.0: all done here
Yawn, that was boring.
Test Case '-[Tests.LateCallback testZzz]' passed (2.004 seconds).
Test Suite 'LateCallback' failed at 2016-03-19 23:19:23.932.
   Executed 2 tests, with 1 failure (0 unexpected) in 3.950 (3.951) seconds


Test session log:
  /var/folders/63/np5g0d5j54x1s0z12rf41wxm0000gp/T/com.apple.dt.XCTest-status/Session-2016-03-19_23:19:16-QZf0lq.log

Test Suite 'Selected tests' failed at 2016-03-19 23:19:23.933.
   Executed 2 tests, with 1 failure (0 unexpected) in 3.950 (3.953) seconds
Program ended with exit code: 1

Our testZzz runs to completion and passes, and the test process exits on its own terms reporting the one failure.

The late callback still happened, but by that time, promiseToCallBack had been zeroed, so we never called fulfill().

Assertion: Dodged!

Kill the Expectation Proactively

What about the double-callback case? We can use the same trick, only this time, we’ll want to proactively annihilate the expectation:

func testSafelyDoubleTheFulfillment() {
    weak var promiseToCallBack = expectationWithDescription("calls back")
    let callBackDelay: NSTimeInterval = 1

    twiceAfter(seconds: callBackDelay) {
        guard let promise = promiseToCallBack else {
            print("once was enough, thanks!")
            return
        }

        promise.fulfill()
        promiseToCallBack = nil
    }

    let afterCallBack = 2 * callBackDelay
    waitForExpectationsWithTimeout(afterCallBack, handler: nil)
}

With the unsafe test neutered via the prefix-x trick, running the test class gives (or skip to the summary):

Test Suite 'Selected tests' started at 2016-03-19 23:22:56.356
Test Suite 'DoubleCallback' started at 2016-03-19 23:22:56.357
Test Case '-[Tests.DoubleCallback testSafelyDoubleTheFulfillment]' started.
1.0: finished waiting


Test session log:
  /var/folders/63/np5g0d5j54x1s0z12rf41wxm0000gp/T/com.apple.dt.XCTest-status/Session-2016-03-19_23:22:51-14ywpS.log

now once
i hear you calling me
now twice
once was enough, thanks!
wasn't that nice?
1.0: all done here
Test Case '-[Tests.DoubleCallback testSafelyDoubleTheFulfillment]' passed (1.099 seconds).
Test Suite 'DoubleCallback' passed at 2016-03-19 23:22:57.457.
   Executed 1 test, with 0 failures (0 unexpected) in 1.099 (1.100) seconds
Test Suite 'Selected tests' passed at 2016-03-19 23:22:57.458.
   Executed 1 test, with 0 failures (0 unexpected) in 1.099 (1.102) seconds
Program ended with exit code: 0

Since we explicitly set the promise to nil, we only end up fulfilling it once. No harm, no foul.

Use a Different Promise API

If you’ve got an API written in terms of a promise/future library already, such as Deferred, then there’s no need to use XCTest’s promises:

class BringYourOwnPromises: XCTestCase {
    let anyDelay: NSTimeInterval = 1


    func testGettingAPony() {
        let futurePony = giveMeAPony(after: anyDelay)

        let longEnough = anyDelay + 1
        guard let pony = futurePony.wait(.Interval(longEnough)) else {
            XCTFail("no pony ;_;")
            return
        }

        print("we got a pony! \(pony)")
    }


    func testWhenImpatientNoPonyForYou() {
        let futurePony = giveMeAPony(after: anyDelay)

        guard let pony = futurePony.wait(.Now) else {
            print("no patience, no pony")
            return
        }

        XCTFail("we got a pony???! \(pony)")
    }


    func testZzzDoesNotCrash() {
        spin(forSeconds: 2 * anyDelay)
    }
}
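
(giveMeAPony(after:) isn’t shown here; it’s defined in the repo. As a rough sketch, and treating the details of Deferred’s API as an assumption of this sketch, it could be as simple as:)

// Returns a future that gets filled with a pony after `delay` seconds.
func giveMeAPony(after delay: NSTimeInterval) -> Deferred<String> {
    let futurePony = Deferred<String>()
    after(seconds: delay) {
        futurePony.fill("a pony!")
    }
    return futurePony
}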

Summary

  • Always assign your expectations to a weak reference, and then bail in your callback if it’s nil.
  • In the rare case where you expect your callback to be triggered more than once, you can avoid fulfilling twice by annihilating your weak reference after fulfilling it once and ignoring the calls that follow.
    • More likely, you know how many times you should be called, and you’ll want to fulfill the promise only on the last call (sketched below). But the workaround is there if you need it.
  • If you’re already working with a promise-based API, you can skip XCTestExpectation and use whatever wait-and-see API is provided by that promise instead of XCTest’s own.
    • This has the added advantage of linearizing your test code by eliminating the need to handle the delivered value in the closure (or manually shuttle it out to assert against after the XCTest wait has finished).
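
Here’s what “fulfill only on the last call” might look like, counting down and keeping the weak-reference guard; thriceAfter is hypothetical, standing in for any callback source that fires a known number of times:

func testFulfillsOnlyOnLastCall() {
    weak var promiseToCallBack = expectationWithDescription("calls back thrice")
    var remainingCalls = 3

    // Hypothetical helper: calls back three times.
    thriceAfter(seconds: 1) {
        remainingCalls -= 1
        if remainingCalls == 0 {
            promiseToCallBack?.fulfill()
        }
    }

    waitForExpectationsWithTimeout(2, handler: nil)
}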
]]>
<![CDATA[Embedded Content Contains Swift]]> 2016-03-06T21:23:00-05:00 https://jeremywsherman.com/blog/2016/03/06/embedded-content-contains-swift If you’re developing a QuickLook plugin using Swift, make sure you flip on the EMBEDDED_CONTENT_CONTAINS_SWIFT build setting for the target, otherwise bundle loading will fail in a spectacularly unhelpful way.

Creating a Mixed-Language QuickLook Plugin

Recently I decided to add a QuickLook plugin to my ImageSlicer utility app.

The default QuickLook plugin template stamps out an entirely C plugin. Changing the thumbnail/preview template files to have a .m suffix puts you back in Obj-C land, but getting to Swift land takes a couple more steps.

Not to worry: Add a new Swift file to the target, and Xcode will offer to make bridging easy-peasy for you. Give it the go-ahead, and you should be good to go, right?

I add the main model and view classes from my app project to the QuickLook target, wire stuff up to load the document and render the view, and everything compiles and links all happy-like. Let’s test this thing!

Gatekeeper?

I fire up qlmanage, point it at my generator and a .slicedimage document, and I see That Error:

The bundle “QuickLookSlicedImage” couldn’t be loaded because it is damaged or missing necessary resources.

I’ve seen this error way too many times when I grab an older app bundle off the Internet. Every time before, “damaged or missing necessary resources” has been code for “no-one signed this app bundle”.

I’m asking the system to execute code, so, sure, that kind of makes sense?

I hare off looking at using spctl to whitelist my bundle, successfully whitelist it with spctl --add --label JWSDev path/to/QuickLookSlicedImage.qlgenerator, and spctl --assess is OK with it.

Let’s try again.

Not Gatekeeper

I see the same error. Hrm. What if it really is missing something? Now I want to see the smoking gun.

After sufficient rooting around, I eventually work through to where it loads the bundle, then the plugin, then finally to where the real business happens: dlopen.

After the call to dlopen, the CFBundle machinery checks for success with dlerror, and that gives me an actually informative error message (which I’ve abbreviated and hard-wrapped for readability):

(lldb) x/s $rax
0x100576819: "dlopen(LONG_PATH/QuickLookSlicedImage, 262):
Library not loaded: @rpath/libswiftAppKit.dylib\n
  Referenced from: LONG_PATH/QuickLookSlicedImage\n
  Reason: image not found"

Yup, missing Swift dylibs.

EMBEDDED_CONTENT_CONTAINS_SWIFT

The fix is to tell Xcode to copy all the Swift dylibs the built product needs into its bundle using the build setting EMBEDDED_CONTENT_CONTAINS_SWIFT=YES.
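
If you drive build settings through xcconfig files, the fix is a one-liner (the file name here is hypothetical; the xcconfig just needs to be assigned to the plugin target):

// QuickLookSlicedImage.xcconfig
EMBEDDED_CONTENT_CONTAINS_SWIFT = YES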

(The other fix is to ensure qlmanage is actually running the generator you’re building now, not the generator embedded in the copy of your app you built an hour or two ago that still has the missing-dylib issue. Oops.)

Take-Away

The take-away is this:

  • When Xcode offers to add a Swift–Obj-C bridging header for you,
  • Then that means the target was not previously configured for Swift,
  • And you should probably ensure that EMBEDDED_CONTENT_CONTAINS_SWIFT=YES gets set for the target.

The “probably” is there because, if you’re baking it into an app bundle that’s already embedding the Swift dylibs, you could probably mess with the rpath to get it to share those rather than having Yet Another Copy of the Swift support dylibs in your app bundle.

But that’ll be a pain, and disk space is cheap, so you’ll probably still want to just flip on EMBEDDED_CONTENT_CONTAINS_SWIFT=YES.

]]>
<![CDATA[Review: SE-0026: Abstract classes and methods]]> 2016-02-29T12:38:00-05:00 https://jeremywsherman.com/blog/2016/02/29/review-se-0026-abstract-classes-and-methods This is a review of SE-0026 “Abstract classes and methods”.

I am against the acceptance of this proposal:

  • It lacks a clear problem.
  • The leap from a nebulous problem to abstract classes as the solution is a non sequitur.
  • Its arguments are insufficient to justify the complication it would add to Swift, which is contrary to the simplification and clarification aims of the Swift community.

The contrast is sharpened by comparison to the Python Enhancement Proposal that accompanied the introduction of abstract base classes into Python. The present proposal fails to provide a correspondingly thoughtful rationale.

No Clear Problem

The proposal itself does little to define a practical problem, and less to explain how abstract classes solve this problem better than alternatives. It feels like a solution in want of a problem, which is the opposite of a considered addition to the language.

As best I can determine, the primary problem introduced is that of wanting to have abstract properties. The example given is better resolved by providing the url as a constructor argument, as noted by Stephen Celis. Further, the more direct solution would be uniform access as found in Self and Eiffel, not abstract base classes, which compound non-uniform access with a further serving of complexity.

Another problem mentioned is lack of easy delegation of implementation in the context of protocols; providing a simple way to proxy calls to another object would present a promising and useful avenue for resolving this problem that would also compose more generally with the rest of the language. NSProxy has always been somewhat awkward in this regard; perhaps we can do better in Swift?

No Clear Significance

Without a clear problem to address, it becomes difficult to evaluate the significance of the problem.

Ultimately, it’s unclear precisely what the problem under consideration is, unless the problem is stated simply as, “Swift doesn’t have abstract base classes.” If that is truly the problem to be addressed, then it seems especially insignificant; Swift also lacks good support for relational programming à la mini-kanren, but a difference does not a problem make.

If we focus on “no abstract classes” as the problem, then the problem appears insignificant: Smalltalk and Objective-C have both made do without formal support for abstract classes. Objective-C went so far as to remove subclassResponsibility from the common language vocabulary, which eliminated all inbuilt support for abstract classes. Never have I heard either a Smalltalker or an Obj-C hacker end up despondent and cursing over the lack of built-in abstract class support in these languages.
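
(Both communities lean on convention instead. In Swift terms, the usual stand-in for subclassResponsibility is a stub that traps; a sketch:)

class Renderer {
    // “Abstract” by convention: subclasses must override, or we trap at runtime.
    func render() -> String {
        fatalError("Subclass responsibility: override render()")
    }
}

class HTMLRenderer: Renderer {
    override func render() -> String {
        return "<p>hello</p>"
    }
}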

Compared to Python’s Rationale for Adding Abstract Classes

It is interesting to consider the motivation for adding abstract base class support to Python as explained in PEP 3119.

In Python’s case, the decision was motivated by the desire for a reliable means to test particularly for some shared quality of a group of objects - basically, a reliable respondsToSelector: or isKindOfClass: that allows detecting this quality without incidental risk of false positives or negatives (“Rationale”).

As a result, Python adopted abstract base classes as an alternative to interfaces (“ABCs vs. Interfaces”). But Swift already has interfaces in the form of protocols; this answers the need that motivated the addition of abstract base classes to Python.

Because we cannot borrow the rationale used for adding abstract base classes to Python, and the document before us spends its effort explaining abstract base classes rather than the problem they would solve, it remains for those arguing for the added formal complexity of abstract base classes to motivate their addition in the context of Swift. The current proposal is manifestly lacking in this regard.

Out of Alignment with Swift

Adding abstract class support to Swift seems unprincipled. I cannot see what problem would be solved, and Swift is working towards considered language growth, and even better, language contraction, at this point in time. Adding abstract base classes would feel like nodding to feature agglutination by cargo cult, not the careful evolution we aspire to.

Effort

I read the article and then looked at the arguments in favor of supporting abstract base classes in Python for comparison. I would love to see a rationale as tailored to Swift and to real problems as PEP 3119 was to Python and its programmers' problems! In Python’s case, “[m]uch of the thinking that went into the proposal [was] not about the specific mechanism of ABCs, as contrasted with Interfaces or Generic Functions (GFs), but about clarifying philosophical issues[…].” This sort of laborious semantic work is a necessary accompaniment to any significant proposed changes to an object system, and that thought is unfortunately not apparent in this proposal.

This article was originally posted to swift-evolution on 28 February 2016.

]]>
<![CDATA[Go Versions and the Open-Closed Principle]]> 2016-02-24T17:15:00-05:00 https://jeremywsherman.com/blog/2016/02/24/go-versions-and-the-open-closed-principle People aren’t happy about Go’s approach to managing software versions:

aren’t different API versions supposed to live at different import paths in Go? This works great if you have a proprietary codebase, are using a monorepo, and don’t support the sharing culture of open source. And, it doesn’t address the issue of minor or patch versions.

Hello, Open-Closed Principle

The funny thing is that Go’s official version management approach is effectively a strict reading of the open-closed principle as applied to libraries rather than classes.

The “fork it and rename it” approach was actually the way the principle was originally introduced for classes.

You want to change how a class works?

Fine, subclass it and make your changes.

Dependents can adopt MyVeryOwnFooV35 at their convenience, rather than you just stomping on the one and only MyVeryOwnFoo class in the project.

But That’s Crazy Talk!

Yeah, it didn’t much catch on in object-oriented programming, either, in spite of being enshrined in the SOLID acronym.

Apparently Gophers think it’s equally crazy for libraries (ibid):

Can you imagine that every time a library needs to increment a major version it needs to create a new repo on GitHub? Yeah, no one does that. The path for major API version is a Go thing. It’s not intuitive. Someone had to tell me. And, many Go developers just don’t do it. If they did there would be no reason for gopkg.in.

People Actually Do That

I can imagine it, and people actually do it. Check out the Creating Stable Releases section of the Collective Code Construction Contract. This is the social contract that governs development of ZeroMQ, amongst a few other projects.

Every time they want to make a stable version, they spin off a new repo for that version, with its own steward.

Thus, every time ZeroMQ needs to increment a major, or minor, or patch version, they need to fork a new repo. Mainline development continues on the main repo, and the stable release gets its own repo, its own maintenance patches, and its own name in the form of a repo URL.

Why Don’t We Do That in OOP?

I think we don’t do this in OOP precisely because we find ourselves in the monorepo scenario that let Google avoid introducing package management into Go. Most object-oriented projects live in one repo, so we can readily coordinate changes across the codebase: we don’t need to fork a new subclass, because we can just update all callers to play ball with the new version.

]]>
<![CDATA[Housekeeping]]> 2016-01-07T09:19:00-05:00 https://jeremywsherman.com/blog/2016/01/07/housekeeping This post automagically appeared on the site thanks to a post-receive hook. Every prior post was written, compiled, and rsynced from my laptop. No more!

Now: I can post from my phone using Working Copy.

Later: I’ll work out handling for microposts, so I can send those here and sync to ADN after.

Later still: Figuring out a good workflow for link blogging from my phone. For ADN, I’ve got a very slick workflow using @dasdom’s wonderful Jupp sharing extension, and I’ve seen how any hiccups in that workflow significantly reduce how much I share my reading with ADN.

]]>
<![CDATA[Do more of less]]> 2015-11-21T18:22:00-05:00 https://jeremywsherman.com/blog/2015/11/21/do-more-of-less The most valuable lesson of Kanban is to limit work in progress. At the personal level, this jibes with studies showing that humans suck at multitasking.

This is a hard lesson for me: My life is littered with the detritus of works begun, works planned, resources squirreled away against a future that rarely comes back to them.

A messy desk or hard-drive becomes an oppressive labyrinth: one sits down for a purpose, only to have all one’s energies dispersed for nothing down the forking hallways of might-have-beens.

Facing this honestly is terrifying: It means admitting one might never pursue that avenue, never chase that morning’s dream. It means confronting the brief spark that is human life; no, that’s not what frightens: rather, the dark that follows, as persistence of vision gives way to vanishing memory, and one’s name and deeds fade forgotten.

***

Please forgive my messy desk; the dark is waiting, and I would but close my eyes a while longer.

]]>
<![CDATA[Agile]]> 2015-11-13T12:47:00-05:00 https://jeremywsherman.com/blog/2015/11/13/agile I take agile as rejecting the notion that estimation has value. In the event you have a deadline, the best you can hope for is to deliver as much working software as you can before that deadline. Time spent dithering over what will fall on which side of the deadline is better spent delivering a feature and winnowing out the low-value crap that came along with the high-value bits of your original ideas, so you don’t waste time implementing the dross.

If I do need to estimate, I reject the silly notion of giving a ludicrously precise single value, and instead give a more honest pair of (estimate, complexity), where complexity is rated on a scale from “I’ve done this a hundred times” to “nobody in the world has ever done this” (http://lizkeogh.com/2013/07/21/estimating-complexity/). And potentially give a range instead of a single value, though some sort of highly-skewed normal distribution might be better.

I similarly reject the notion of “backlog” as wasting time counting your chickens before they’ve hatched (http://ronjeffries.com/articles/015-10/the-backlog/article.html). The only project artifact that matters is the running code you have at the end of a sprint. Everything else is BS; use whatever support tools you need, but don’t confuse your list of dreams with what you have in hand now. If you can’t ship what you have now? You’re probably setting yourself up for serious pain and suffering when the budget suddenly runs out, or your main dev gets moved to another project, or quits, or…

I joke that the product champion should keep a stack of might-wants in a Trello board with “a pony” at the bottom. No-one ever gets everything they want implemented in software; most of those inchoate wants are a mix of some good and valuable ideas and a bunch of lousy cruft that would be a waste of time to do anyway. (Also: diminishing returns.) We forget human finity at our, and our projects’, peril.

Many project management artifacts and behaviors seem smoke and mirrors rituals attempted in the vain hope of preventing the dread manifestation of Learning and consequent Change. Unfortunately, no matter how many ways we invent to scream, “COME NOT IN THAT FORM!” into the unknown, we remain saddled with imperfect knowledge, or alternatively, blessed with the joy of learning ever more and new things about our domain of interest. Dispensing with these distractors – from the primitive state of both today’s tools and the discipline of software development as a whole, and more generally from our own cloud of unknowing – is terrifying, but addressing reality head-on frees you to make the best use of the precious time and limited tools you have.

]]>
<![CDATA[Updating Plex on Synology NAS]]> 2015-11-08T12:02:00-05:00 https://jeremywsherman.com/blog/2015/11/08/updating-plex-on-synology-nas My family has been using ChromeCast to send YouTube videos to the TV. While flipping through the ChromeCast app on my phone, I noticed Plex integrates with ChromeCast. Funny enough, Synology also ships a Plex server package. How hard could this be?

ChromeCast: Easy Come, Easy Stow

I have a very curious toddler.

The ChromeCast is easy to hook up and break down as needed when you want to use it, and there’s not much to break. It was a simple and immediate solution to make the TV usable again without needing to run any cables or install a shelf outside toddler reach.

Before this, we spent several months with the TV completely unplugged after we dismantled the entertainment center and mounted the TV to the wall as part of making our living room child-resistant.

(Child-proof vs child-resistant is like waterproof/water-resistant: Nature finds a way, and all we can do is try to hold out – in this case, till an adult notices a curious and ill-omened silence.)

Plex Client Is Picky; Synology Plex Is Old

Setting up the app on my phone was fairly easy. I needed to create a new login, which, yawn, but 1Password is with me.

Installing the package on my NAS was also one-click.

Getting them talking to each other was a bloody mess. The Plex client is very aggressive about refusing to work with older versions of the Plex server, which means that, right now, it doesn’t work at all with the version packaged by Synology.

Installing a Manual Package

Luckily, Plex packages Plex Server for Synology (and several other flavors of NAS) themselves.

Their instructions only cover a small part of the install process, though. What papered over the gap for me is this article. (That article has pictures, unlike this one.)

Here are the steps I followed:

  • Check your processor type in the Synology Control Panel
  • Download the package for that processor type from Plex Downloads
  • Download the Plex package signing key linked from here
  • Verify the md5sum they give you, for what comfort that might give; there’s a one-liner for this after the list. (md5sum? Really?)
  • Open the Synology package center and hit the Settings button:
    • On the General pane, widen your trust from just Synology to Synology plus trusted publishers.
    • On the Certificate pane, upload the PlexSign.key you just grabbed.
      • Now Plex is a trusted publisher.
  • Exit the Settings modal and hit the Manual Install button.
    • Select the Plex package you downloaded.
    • Wait for it to upload, then OK the install.
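
On OS X, that checksum step is a Terminal one-liner (the package file name here is made up; use whatever you downloaded):

md5 ~/Downloads/PlexMediaServer-Synology.spk

Compare the output against the hash published alongside the download.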

Unfortunately, Plex doesn’t seem to publish a stream of updates, just individual packages, so when the client yells at you again about the server being too old, you get to repeat most of this dance.

Gotchas

Potential: Synology vs Plex package differences

I read some tales of issues with switching between the two packages. I know that installing the Synology version first and then the Plex version worked fine for me. Your mileage may vary. If it breaks, you might need to pop the hood and ssh in to see what’s gone wrong.

I didn’t encounter any issues myself, so I wouldn’t worry about this unless you run into it.

Derp: Manual packages must be uploaded from the client

If you’re like me, you might think like this:

  • I will need to install this package to the NAS.
  • The package file needs to end up on the NAS eventually.
  • Downloading the package file directly to the NAS using Download Station will save transfer time.

You’re right in theory, and wrong in practice, because the manual install flow only lets you select a local file to upload. Let me say this again: There is no way to point the manual install wizard at a package that’s already downloaded to the NAS. You have to upload it from your local machine directly to the manual package installer.

The fun end result of this is that, if you downloaded the file to the NAS to begin with, you now get to download the file from your NAS so you can upload it back for the manual package install flow.

The steps I listed above skip this time-wasting cleverness.

Conclusion

Plex would be a lot easier to use if they’d do a better job of preserving client–server compatibility across versions.

If you’ve been looking for an excuse to wander into manual Synology package installation, though, you’ve come to the right product.

]]>