How to Work Around an Empty Zenfolio Zip File

By: Jeremy W. Sherman. Published: . Categories: how-to.

My family recently had some holiday photos taken. The photographer was using Zenfolio to host their photos. I loved the photos and wanted to archive the originals on my laptop (and NAS, and Amazon Photos, and Time Machine, and Carbon Copy Cloner clone, and…). But every time I tried to download an original – of one photo, of all the photos, makes no difference – the server always sent me an empty zipfile!

I emailed the photographer to let them know, but I wasn’t going to wait.

Rather than work around this manually by visiting each page and right-clicking to Save As each photo – and I’m not sure that would show me the full-size image, anyway! – I figured Zenfolio would have an API.

Sure enough, there’s a well-enough documented Zenfolio API. I was in business!

I was able to lash together some shell commands to grab my full photoset. To save you some fumbling, here’s how I did it.


Grab the Photo Details for the Photoset

Get the photoset ID. You can grab this from the URL you’re using to view the photos on the photographer’s website. If you view your photos at, then your photoset ID is 544941453.
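If you want to script that step too, you can pull the trailing digits out of the gallery URL. A sketch, assuming the URL ends with the photoset ID (the example URL below is hypothetical; the exact URL shape varies by photographer site):

```shell
# Hypothetical gallery URL; the real one comes from your browser's address bar.
url="https://example.zenfolio.com/p544941453"

# Keep the last run of digits in the URL as the photoset ID.
photoset_id=$(printf '%s\n' "$url" | grep -Eo '[0-9]+' | tail -n 1)
echo "$photoset_id"
```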

Fetch the list of photos in that photoset using curl and save the JSON response to disk for the next step:

curl -v \
    -H 'Content-Type: application/json' \
    -d '{
      "method": "LoadPhotoSetPhotos",
      "params": [544941453, 0, 100],
      "id": 1
    }' \
    https://api.zenfolio.com/api/1.8/zfapi.asmx \
    > photoset.json

(That endpoint URL is the JSON-RPC endpoint given in the Zenfolio API documentation; double-check it against the current docs.)

This grabs the photos in photoset 544941453 starting from index 0 and returns at most 100 photos. Tweak those values to match your photoset and number of photos.

Also, I’m using fish as my shell. You might need to tweak that command line to make your shell happy, especially with the multiline string literal.

See: LoadPhotoSetPhotos method documentation

Download Each OriginalUrl

Grab the OriginalUrl field from the photo objects in the photoset response using jq, the JSON multitool:

jq '.result[].OriginalUrl' photoset.json
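Before downloading, it’s worth sanity-checking that the response actually holds the whole set. A quick check with jq, assuming the response shape above (photos in a top-level result array):

```shell
# Count the photos returned. If this equals the page size you requested
# (100 above), there may be more photos waiting at a higher starting index.
jq '.result | length' photoset.json
```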

Download each file at those URLs by feeding them to curl via xargs. Passing -r to jq emits the URLs as raw strings, without the surrounding JSON quotes:

jq -r '.result[].OriginalUrl' photoset.json \
    | xargs -n 1 curl -O

(The -n 1 is there so that curl sees one -O for each file argument. Without it, xargs would run curl -O url1 url2 url3…, which makes curl download only the first URL to a matching file on disk; the rest it pipes to stdout. I couldn’t work out a good way to get xargs to repeat the -O per argument, so I just throttled it to calling curl -O with a single URL at a time.)
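If your curl is 7.19.0 or newer, there’s a flag for exactly this: --remote-name-all applies -O to every URL on the command line, so xargs no longer needs the -n 1 throttle. A sketch (jq’s -r emits the URLs without JSON quotes):

```shell
# --remote-name-all saves each URL to a matching local file name,
# so curl can take the whole batch of URLs in one invocation.
jq -r '.result[].OriginalUrl' photoset.json \
    | xargs curl --remote-name-all
```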

Enjoy your photos!

Caveat: Assumes Public Photos

This walkthrough assumes no authentication is required to download your photos. I lucked out: All my photos had an AccessDescriptor.AccessType of Public.
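You can check this up front from the same response. A quick jq pass over the access descriptors (the field names are the ones I saw in my response; treat them as an assumption for your account):

```shell
# Tally the access levels across the photoset; anything besides "Public"
# probably means the download will need authentication.
jq -r '.result[].AccessDescriptor.AccessType' photoset.json | sort | uniq -c
```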

If the originals are password-protected, you’ll find a walkthrough of the hoops to jump through in “Downloading Original Files”.

If things are more locked down, you might need to sort out the authentication flow before you can even grab the photoset details. I didn’t need to do any of that, so I can’t walk you through how. Sorry!