The Internet Speaks: Testing FP Code
Categories: the-internet-speaks, testing, functional-programming

One problem I have writing Swift is that I’m not really sure how to tackle testing FP-ish code using XCTest.
I did some quick Internet research. If you read it on the Internet, it must be true. This is a distillation of those great Internet truths.
The Context: Data Persistence
But first, some context. Why did I care about this?
I ran into this in the context of sorting out how to persist and restore some app data at specific “app lifecycle” hooks.
Specifically:
- When the app backgrounds, start a background task, then serialize and write
  to disk, then end the task.
  - Inputs: data store, serialization strategy, where to write to
  - Outputs: updated file on disk (side effect)
- When the app launches, block the main thread till we’ve loaded the data from
  disk and unpacked it. This should be fast enough. Anything else will lead to
  folks seeing a not-yet-ready UI.
  - Inputs: serialization strategy, where we wrote to
  - Outputs: we can see the restored DataStore (side effect)
This is very much “app lifecycle” stuff, so we want the App Delegate to do it.
What’s the cleanest code we could imagine?
```
bracket startBackgroundTask endBackgroundTask $
  dataStore |> serialize |> write location

deserialize(location)
  |> fromJust seedDataStore
  |> set dataStoreOwner .dataStore
```
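For concreteness, here’s roughly how that ideal might land in Swift. This is a sketch, not the app’s real code: `DataStore`, `saveOnBackground`, and `restoreOnLaunch` are hypothetical stand-ins, and the `begin`/`end` closures stand in for UIApplication’s background-task calls.

```swift
import Foundation

// Hypothetical stand-in for the real app's data; only the shape matters here.
struct DataStore: Codable, Equatable {
    var items: [String] = []
}

// Pure: DataStore -> Data. Trivial to test in isolation.
func serialize(_ store: DataStore) throws -> Data {
    try JSONEncoder().encode(store)
}

// Pure: Data -> DataStore.
func deserialize(_ data: Data) throws -> DataStore {
    try JSONDecoder().decode(DataStore.self, from: data)
}

// Impure edge: the write, bracketed by begin/end background-task hooks
// (stand-ins for beginBackgroundTask/endBackgroundTask).
func saveOnBackground(store: DataStore, to url: URL,
                      begin: () -> Void, end: () -> Void) throws {
    begin()
    defer { end() }
    try serialize(store).write(to: url, options: .atomic)
}

// Impure edge: the read, falling back to a seed store ("fromJust seedDataStore").
func restoreOnLaunch(from url: URL, seed: DataStore) -> DataStore {
    guard let data = try? Data(contentsOf: url),
          let store = try? deserialize(data) else { return seed }
    return store
}
```

The pure middle (serialize/deserialize) round-trips under test with no setup; the impure edges shrink to a few lines you can cover with a temp file.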
I think my big ??? is that I don’t get how to test a functional pipeline. It seems not to have any of the seams you’d usually rely on.
Testing FP Code
Summarizing:
- Separate out pure code from impure.
- Use PBT for the pure code.
- Use typeclasses or protocols or similar dynamic binding methods to swizzle impure actions.
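On the PBT point: even without a QuickCheck port, you can hand-roll the idea in plain Swift. This sketch invents a toy `encode`/`decode` pair of my own and checks the round-trip property over a pile of random inputs:

```swift
import Foundation

// Toy pure pair whose round-trip property we want to check.
func encode(_ values: [Int]) -> String {
    values.map(String.init).joined(separator: ",")
}

func decode(_ s: String) -> [Int] {
    s.isEmpty ? [] : s.split(separator: ",").compactMap { Int($0) }
}

// Poor man's property-based test: generate many random inputs and
// assert the property holds for every one of them.
func checkRoundTripProperty(iterations: Int = 100) -> Bool {
    for _ in 0..<iterations {
        let count = Int.random(in: 0...20)
        let values = (0..<count).map { _ in Int.random(in: -1000...1000) }
        guard decode(encode(values)) == values else { return false }
    }
    return true
}
```

A real QuickCheck-alike adds shrinking and generator combinators, but the shape of the test is the same: a universally quantified property, not hand-picked examples.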
I guess you use acceptance testing to check that you got the wiring to the impure stuff correct? That issue seems mostly ignored in favor of the much happier “pure functions are easy to test” story.
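For the last summary point, swizzling impure actions behind a protocol, a Swift version might look like this. All the names here (`FileWriting`, `SpyFileWriter`, `persist`) are my own invention, not an established API:

```swift
import Foundation

// A protocol seam: production code talks to this, tests swizzle in a fake.
protocol FileWriting {
    func write(_ data: Data, to path: String) throws
}

// Production implementation: actually touches the disk.
struct RealFileWriter: FileWriting {
    func write(_ data: Data, to path: String) throws {
        try data.write(to: URL(fileURLWithPath: path), options: .atomic)
    }
}

// Test double: records writes instead of performing them.
final class SpyFileWriter: FileWriting {
    private(set) var writes: [(data: Data, path: String)] = []
    func write(_ data: Data, to path: String) throws {
        writes.append((data, path))
    }
}

// The impure action under test, parameterized over the seam.
func persist(_ payload: Data, at path: String, using writer: FileWriting) throws {
    try writer.write(payload, to: path)
}
```

In an XCTest, you’d hand `persist` a `SpyFileWriter` and assert on what it recorded; only an acceptance test need ever use `RealFileWriter`.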
In practice, I think I’m now foundering on the mess that is object-functional blending. You’d hope that the Scala folks might have something good to say about that, but that’ll have to be a later round of The Internet Speaks.
Static Methods Are Death to Testability
http://misko.hevery.com/2008/12/15/static-methods-are-death-to-testability/
Recapitulates the problem I identified:
Unit-testing needs seams, seams is where we prevent the execution of normal code path and is how we achieve isolation of the class under test. seams work through polymorphism, we override/implement class/interface and than wire the class under test differently in order to take control of the execution flow. With static methods there is nothing to override.
Recommends converting static methods to instance methods:
If your application has no global state than all of the input for your static method must come from its arguments. Chances are very good that you can move the method as an instance method to one of the method’s arguments. (As in method(a,b) becomes a.method(b).) Once you move it you realized that that is where the method should have been to begin with.
Says not to even consider leaf methods as OK as static, because they tend not to remain leaves for long.
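In Swift terms, Hevery’s `method(a,b)` becomes `a.method(b)` move might look like this. The `Money` example is my own toy, not from his article:

```swift
// Before: a static helper with no seam to override.
enum PriceMath {
    static func applyDiscount(_ amount: Double, _ rate: Double) -> Double {
        amount * (1 - rate)
    }
}

// After: the method moves onto one of its arguments, where (per Hevery)
// it probably belonged all along, and where a test can substitute a
// different conforming type if Money grows a protocol.
struct Money: Equatable {
    var amount: Double

    func applyingDiscount(_ rate: Double) -> Money {
        Money(amount: amount * (1 - rate))
    }
}
```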
Unit Testing and Programming Paradigms
http://www.giorgiosironi.com/2009/11/unit-testing-and-programming-paradigms.html
Identifies the same problem as you move away from leaf functions in the context of procedural programming:
The problem manifests when we want to do the equivalent of injecting stubs and mocks in higher-level functions: there are no seams where we can substitute collaborator functions with stubbed ones, useful for testing. If my function calls printf(), I cannot stub that out specifying a different implementation (unless maybe I recompile everytime and play a lot with the preprocessor).
Outlines what they would, in theory, do for FP code (but have not actually done): pass in functions to parameterize behavior:
So instead of injecting collaborators in the constructor we could provide them as arguments, earning the ability to pass in fake functions in tests. The upper layers can thus be insulated without problems (with this sort of dependency injection) and there are no side effects that we have to take care of in the tear down phase
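That suggestion maps directly onto Swift closures. A minimal sketch, with an invented `report` function standing in for "my function calls printf()":

```swift
// Instead of constructor-injecting a collaborator object, take the
// effectful action as a function argument, defaulting to the real one.
func report(_ message: String,
            printLine: (String) -> Void = { print($0) }) {
    printLine("LOG: \(message)")
}
```

Production callers just write `report("hello")`; a test passes a closure that appends to an array, and since nothing global changed there’s nothing to clean up in `tearDown`.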
Omits stack and logic paradigms. No surprise there.
Recoverability and Testing: OO vs FP
https://www.infoq.com/news/2008/03/revoerability-and-testing-oo-fp
Sums up a conversation that happened across several blogs. Weirdly omits any links to primary sources. Yuck.
OO is rife with seams that are easy to exploit, so Feathers likes it. Where you need a seam is a design issue:
Another blogger, Andrew, highlights that if “code isn’t factored into methods that align with the needs of your tests”, the implementation will need to be changed to accommodate the test. Hence, he argues as well that “thoughts about “seams” are really just getting at the underlying issue of design for testability”, i.e. the proper placement of seams.
But not all systems are always so designed (putting it nicely), so “recoverability” matters: being able to make something testable in spite of itself.
According to Feathers, even though there are alternative modules to link against in functional languages, “it’s clunky”, with the exception of Haskell, where “most of the code that you’d ever want to avoid in a test can be sequestered in a monad”.
Then there’s an argument that pushing the impurity to the edges makes things testable. No one addresses validating the correct composition of verified components, though. :(
SO: Testing in Functional Programming
https://stackoverflow.com/questions/28594186/testing-in-functional-programming
Answers point out:
- Function composition builds larger units out of small functions that are quick to test.
- QuickCheck/SmallCheck dodge the combinatorial explosion of codepaths that you get by composing functions.
- Coding against a typeclass that you can swizzle out for a test one lets you stub out IO-like functions. (Or just manually pass in a dictionary type.)
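The “manually pass in a dictionary type” option, a record of functions rather than a protocol witness, translates to a Swift struct of closures. A sketch with invented names (`IOOps`, `copyFile`):

```swift
import Foundation

// A "dictionary" of IO-like operations, passed around explicitly.
struct IOOps {
    var readFile: (String) -> Data?
    var writeFile: (String, Data) -> Bool
}

// Production dictionary, backed by the real file system.
let liveIO = IOOps(
    readFile: { path in try? Data(contentsOf: URL(fileURLWithPath: path)) },
    writeFile: { path, data in
        (try? data.write(to: URL(fileURLWithPath: path), options: .atomic)) != nil
    }
)

// Code under test sees only the dictionary, never the file system.
func copyFile(from: String, to: String, io: IOOps) -> Bool {
    guard let data = io.readFile(from) else { return false }
    return io.writeFile(to, data)
}
```

A test builds an `IOOps` over an in-memory `[String: Data]` and asserts on the dictionary afterward; swapping implementations is just passing a different value, no mocking framework required.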