Iacutone.rb

coding and things

GraphQL With Elm Part 2, Parsing Nested JSON


In this post we will discuss how to turn nested JSON into Elm data types. This post uses a thin Elm.Http wrapper, which lets us concentrate on constructing the necessary Decoders. We will also construct a more complex query in order to explain how to build complex data types. The following query returns JSON for the first two team members of FracturedAtlas.

query : String
query =
    """
    query {
      organization(login: "FracturedAtlas") {
        team(slug: "fa-dev") {
          members(first:2) {
            edges {
              node {
                id
                login
              }
            }
          }
        }
      }
    }
    """

Running this query returns the following JSON.

{"data":{"organization":{"team":{"members":{"edges":[{"node":{"id":"<user id>","login":"<user name>"}},{"node":{"id":"<another user id>","login":"<user name>"}}]}}}}}

The following part took me a long time to grok. The issues I had difficulty resolving were:

1. How do I grab the list of nodes?
2. How do I change my stringified JSON output into Elm data types?

The rest of this post tackles these two issues.

The requiredAt function from Elm Decode Pipeline solves the first problem.

decodeLogin =
    decode Users
        |> requiredAt ["data", "organization", "team", "members", "edges"] (Json.Decode.list decodeNode)
        -- 'dig' into the JSON and extract the node list

decodeNode =
    decode Node
        |> required "node" decodeUser

decodeUser =
    decode User
        |> required "id" string
        |> required "login" string

Let’s construct our Elm datatypes based on the JSON response.

type alias Model =
    { message : String
    , users : Maybe Users
    }

type alias Users =
    { edges : List Node }

type alias Node =
    { node : User
    }

type alias User =
    { id : String
    , login : String
    }

Now to handle the update function.

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GraphQLQuery (Ok res) ->
            case decodeString decodeLogin res of
                Ok users ->
                    ( { model | users = Just users }, Cmd.none )
                Err error ->
                    ( { model | message = error, users = Nothing }, Cmd.none )

        GraphQLQuery (Err res) ->
            ( { model | users = Nothing }, Cmd.none )

I spent numerous hours figuring out how to map the Elm data types to the decoders. In Part 3, we will learn how to display the User data type in the browser.

GraphQL With Elm Part 1, Communicating With GitHub


A co-worker and I are using GitHub’s GraphQL API for a side project. I am writing a series of posts, entitled ‘GraphQL and Elm’, about what we learned from creating the application. This post details how to communicate with the GitHub GraphQL API from Elm. All code related to these posts can be found here.

GitHub has extensive documentation on communicating with their GraphQL server. In this post we will write a simple query and display the JSON from GitHub. In the next post, we will use Elm Decoders to turn the JSON into Elm data.

The GitHub GraphQL Explorer is useful for crafting queries. Let’s write a query that fetches a user’s id:

{
  user(login:"iacutone") {
    id
  }
}

With this query in hand, let’s construct a request to GitHub in Elm!

query : String
query =
    """
    query {
      user(login:"iacutone") {
        id
      }
    }
    """

baseUrl : String
baseUrl =
    "https://api.github.com/graphql"

bearerToken : String
bearerToken =
    "Bearer <your GitHub token here>"

request : Http.Request String
request =
    Http.request
        { method = "POST"
        , headers = [ Http.header "Authorization" bearerToken ]
        , url = baseUrl
        , body = Http.jsonBody (Encode.object [ ( "query", Encode.string query ) ])
        , expect = Http.expectString
        , timeout = Nothing
        , withCredentials = False
        }

After posting the query to GitHub, your Elm update function will receive the following response:

{
  "data": {
    "user": {
      "id": "MDQ6VXNlcjE1NjMyMDE="
    }
  }
}

I find it helpful to store this response in the model as an Elm String and view it in the browser. If there is an error with the query, you will see it right on the page.

-- MODEL

type alias Model =
    { response : String
    }

-- UPDATE

type Msg
    = GraphQLQuery (Result Http.Error String)

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GraphQLQuery (Ok res) ->
            ( { model | response = res }, Cmd.none )

        GraphQLQuery (Err res) ->
            ( { model | response = toString res }, Cmd.none )

initialModel : Model
initialModel =
    { response = ""
    }

init : (Model, Cmd Msg)
init =
    ( initialModel, Http.send GraphQLQuery request )

main : Program Never Model Msg
main =
    Html.program
        { init = init
        , view = view
        , update = update
        , subscriptions = \_ -> Sub.none
        }

-- VIEW

view : Model -> Html Msg
view model =
    div [] [ text model.response ]

The code above displays a stringified version of our query response. In Part 2, we will transform this Elm String into something more useful with Elm decoders.

Understanding Union Types in Elm


Watching Making Impossible States Impossible and reading Higher Level Coding with Elm Union Types were enlightening for my Elm development. Now I look for places where I can refine my data model by using union types. Union types allow me to replace conditionals with pattern-matching case statements, which is much cleaner and easier to understand. The following is an example of how I used union types to refactor a hamburger menu.

Elm Model

-- BEFORE

type Msg
    = DisplayHamburgerItems

type alias Model =
    { hamburger_open : Bool
    }

-- AFTER

type Msg
    = DisplayHamburgerItems Hamburger

type Hamburger
    = Open
    | Closed


type alias Model =
    { hamburger : Hamburger
    }

Elm Update

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =

    -- BEFORE

    case msg of
        DisplayHamburgerItems ->
            if model.hamburger_open == True then
                let
                    items = List.append model.display_hamburger ["About", "Contact", "Menu"]
                in
                    ( { model | display_hamburger = items, hamburger_open = False }, Cmd.none )
            else
                ( { model | display_hamburger = [], hamburger_open = True }, Cmd.none )


    -- AFTER

        DisplayHamburgerItems hamburger ->
            case hamburger of
                Open ->
                    let
                        items = List.append model.display_hamburger ["About", "Contact", "Menu"]
                    in
                        ( { model | hamburger = Open, display_hamburger = items }, Cmd.none )
                Closed ->
                    ( { model | hamburger = Closed, display_hamburger = [] }, Cmd.none )

Elm View

view : Model -> Html Msg
view model =

    -- BEFORE

    i [ onClick (DisplayHamburgerItems) ] []

    -- AFTER

    i [ onClick (DisplayHamburgerItems (toggleHamburger model.hamburger)) ] []

viewHamburgerItems : Model -> Html Msg
viewHamburgerItems model =

    -- BEFORE

    if model.hamburger_open == False then
        div [] (List.map item model.display_hamburger)
    else
        div [] []

    -- AFTER

    case model.hamburger of
        Open ->
            div [] (List.map item model.display_hamburger)
        Closed ->
            div [] []
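
The view references two helpers that are not shown above, toggleHamburger and item. A minimal sketch of what they might look like:

toggleHamburger : Hamburger -> Hamburger
toggleHamburger hamburger =
    case hamburger of
        Open ->
            Closed

        Closed ->
            Open

item : String -> Html Msg
item name =
    div [] [ text name ]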

It is easier to reason about a Hamburger with Open and Closed states than to check against a Bool. Pattern matching on the union type is expressive and becomes even more helpful as union types grow in complexity.

Full code here.

PR Review Diagram


I thought I would share my diagram of how I think about PR review. It was well received by my team.

Understanding Test Doubles


I refactored credit card payments on an application that uses the ActiveMerchant gem. I was not confident in the tests, so I rewrote them to verify responses from Braintree. Once I verified my refactor did not break anything, I decided the best next step was to remove the interaction with Braintree and replace it with stubbed responses. I always forget when and how to use RSpec’s allow, spy and double methods, so this post is meant to reinforce my knowledge of them.
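
As a quick refresher before diving in, here is a minimal sketch of the difference (Gateway is a hypothetical class name):

# doubles: set up stubs and expectations *before* exercising the code
gateway = double('Gateway')
allow(gateway).to receive(:authorize).and_return(:approved)
gateway.authorize # => :approved

# spies: exercise the code first, assert on received messages *after*
gateway_spy = spy('Gateway')
gateway_spy.authorize
expect(gateway_spy).to have_received(:authorize)

With that in mind, here is the original test, which talks to Braintree directly: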

describe CreditCardPayment do
  let!(:payment) { build(:credit_card_payment) }

  describe '#authorize' do
    before { payment.authorize }

    it 'records the GatewayTransaction' do
      transaction = GatewayTransaction.last

      expect(transaction.success).to eq true
      expect(transaction.amount).to eq amount
      expect(transaction.message).to eq '1000 Approved'
      expect(transaction.response).to be_present
      expect(transaction.service_fee).to eq service_fee[:service_fee]
    end
  end
end

#authorize is a wrapper around the payment gateway’s #authorize method, which returns a response from Braintree. I should have confidence that when I send Braintree the arguments #authorize expects, the response will be successful. This interaction is what needs to be stubbed in order to avoid calling the Braintree API in the test environment. The benefit of this approach is that the test is faster and more resilient because there is no communication with an external service. The downside is that the real API can change out from under the stub.

Let’s update the above code block to stub out the interaction with Braintree.

describe CreditCardPayment do
  let(:gateway_double) { double('ActiveMerchant::Billing::BraintreeGateway') }
  let!(:payment) { build(:credit_card_payment, gateway: gateway_double) }

  before { payment.gateway = gateway_double }

  describe '#authorize' do
    it 'sends an #authorize request to Braintree' do
      expect(gateway_double).to receive(:authorize).exactly(1).times
        .with(amount, payment.credit_card, payment.credit_card_descriptor)
        .and_return(authorize_response_stub)

      payment.authorize
    end

    it 'creates a GatewayTransaction' do
      expect { payment.authorize }.to change(GatewayTransaction, :count).by(1)
    end
  end

  def authorize_response_stub
    params = { 'response_from_braintree' => 'yay' }

    ActiveMerchant::Billing::Response.new(true, '1000 Approved', params, { authorization: '3e4r5q' })
  end
end

Notice that in the it block the expectation comes before the call to payment.authorize. We could replace the double with a spy. The difference is that a spy asserts with the have_received matcher after the call. Whether to use a double or a spy is a matter of preference.

  let(:gateway_double) { spy('ActiveMerchant::Billing::BraintreeGateway') }
  let!(:payment) { build(:credit_card_payment, gateway: gateway_double) }

  before { payment.gateway = gateway_double }

  describe '#authorize' do
    it 'sends an #authorize request to Braintree' do
      allow(gateway_double).to receive(:authorize).and_return(authorize_response_stub)

      payment.authorize

      expect(gateway_double).to have_received(:authorize).exactly(1).times
        .with(amount, payment.credit_card, payment.credit_card_descriptor)
    end

    it 'creates a GatewayTransaction' do
      expect { payment.authorize }.to change(GatewayTransaction, :count).by(1)
    end
  end

Helpful Links

RSpec Mocks 3.6

Python, First Impressions


Debugging

  • Write the following code to use an interactive debugger in Python.
import pdb
pdb.set_trace()
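
For example, dropping a trace inside a function pauses execution right there (divide is a hypothetical function):

import pdb

def divide(a, b):
    pdb.set_trace()  # execution pauses here; inspect a and b interactively
    return a / b

divide(10, 2)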

Self

  • Passing a reference to self seems repetitive.
class Foo:
  def bar(self):
    print('Hello')
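
You write self in the definition but never pass it at the call site; Python supplies it automatically:

foo = Foo()
foo.bar()  # prints 'Hello'; foo is passed in as `self`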

Testing

  • You need to pass the -s flag to see print output when running tests; I expected print to log output regardless. For example, assuming pytest as the test runner:
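
# test_example.py -- run with: pytest -s test_example.py
def test_prints():
    print('only shown with the -s flag')
    assert True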

Modules

  • You can conditionally implement different behavior when importing modules in Python.
  • Taken from this StackOverflow example.
one.py
def func():
    print("func() in one.py")

print("top-level in one.py")

if __name__ == "__main__":
    print("one.py is being run directly")
else:
    print("one.py is being imported into another module")
two.py
import one

print("top-level in two.py")
one.func()

if __name__ == "__main__":
    print("two.py is being run directly")
else:
    print("two.py is being imported into another module")

Running python one.py returns:

top-level in one.py
one.py is being run directly

While running python two.py returns:

top-level in one.py
one.py is being imported into another module
top-level in two.py
func() in one.py
two.py is being run directly

Artificial Intelligence Nano Degree


I started the Udacity Artificial Intelligence Nano Degree last month.

Project 1, Sudoku Solver

The Sudoku Solver was a fun project. I enjoy learning by writing code to make tests pass, like Ruby Katas.

Project 2, Isolation Game

Creating an algorithm to play the game Isolation is much more complicated than the first project and was very frustrating to build. Unlike the first project, there is barely a test framework to follow: you push your methods to a remote Udacity server, which gives feedback about test failures. This development cycle is difficult to debug because you can neither use a debugger nor get print output. After the minimax and alphabeta functions pass the remote tests, things get interesting: you are asked to write custom heuristics to decide the best move for your player. I wish Udacity gave more hints about game heuristics.

Writing a synopsis of a research paper is also required for the project. I chose to write about AlphaGo. Due to my inexperience reading math proofs, it took several attempts to understand the paper.

Project 3, Implement a Planning Search

This project uses planning problems, defined in terms of states, actions, and goals, as input to planning algorithms. I enjoyed this unit of the course, and it was interesting to study the development of Graph Plan algorithms since the 1970s.

Project 4, Hidden Markov Models

The objective of this unit is to create an algorithm that can correctly guess sign language words. Hidden Markov models are interesting and remind me of deep learning algorithms; they both seem to solve a similar class of problems.

Overall Thoughts

I am not sure this course is worth the $800 price. The class was interesting, but very time consuming. The benefits of taking the Udacity course over self-learning on the internet are the Slack channel, the forum and the project feedback. I used the forum when I was stuck on a problem, but did not take advantage of the Slack channel. The project feedback was helpful. I will not be taking the next unit in the Artificial Intelligence Nano Degree. Instead, I am progressing through the Fast AI classes. I plan on writing a post after I finish the course.

My Photo Storage Solution


I have been searching for a photo storage solution. Mine involves only two devices, a Raspberry Pi and an external hard drive. This post explains how to set up your Pi to automatically transfer RAW and JPG photos from an SD card to the hard drive connected to the Pi. The scripts I used are available here.

I like this solution because the entire process is automatic: you are notified by email when the photo transfer has finished. Also, for redundancy, I created a cron job that syncs all of the photos on my hard drive to Amazon Drive, keeping a copy in the cloud.

Steps

Mount the external hard drive to a folder on your pi.
  • Run sudo fdisk -l and take note of the device name, something like /dev/sdb1
  • I like to mount the drive under the home dir: sudo mount /dev/sdb1 ~/external_hd/
  • Run lsblk -f to see where filesystems are mounted
Create a udev rule
  • udev rules live here: /etc/udev/rules.d
  • cd /etc/udev/rules.d
  • touch 50-sdcard.rules
  • chmod +x 50-sdcard.rules
  • Run sudo fdisk -l again to find the SD card device name, in this instance ‘sda1’
  • echo 'KERNEL=="sda1", ACTION=="add", RUN+="/home/pi/sdcard/sdcard_added.sh"' > 50-sdcard.rules
  • tail -f /var/log/syslog is extremely useful for debugging udev rules
Create an fstab entry
  • vi /etc/fstab
  • Find device uuid with blkid
  • Add UUID=<your-uuid> /home/pi/SD_CARD auto rw,users,noauto 0 0
  • Uncomment ‘user_allow_other’ in /etc/fuse.conf
rsync when an SD card is added
  • touch /home/pi/sdcard/sdcard_added.sh
  • chmod +x /home/pi/sdcard/sdcard_added.sh
  • create the script
#!/bin/bash

echo "`date +%Y-%m-%d:%H:%M:%S` SD card inserted" >> /home/pi/sdcard/output.log
mount /dev/sda1 /home/pi/SD_CARD
rsync -av /home/pi/SD_CARD/DCIM/100MSDCF/ /home/pi/external_hd/raw_photos
Sync photos to cloud provider
  • Set up cron job
  • sudo crontab -e
  • 00 4 * * * /home/pi/sd_card/rclone_cron.sh, this runs the cron job at 4 am, daily
  • create the bash script
#!/bin/sh
/usr/sbin/rclone sync /home/pi/external_hd/raw_photos/ amazon:raw_photos --config /home/pi/.rclone.conf -v
Send Email Notification
  • I used this post to easily add email notifications when the rsync process completes; a sketch of the idea follows
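
Assuming a working mail setup (as described in the linked post), the notification can be appended to the rsync script; the address is a placeholder:

#!/bin/bash

rsync -av /home/pi/SD_CARD/DCIM/100MSDCF/ /home/pi/external_hd/raw_photos \
  && echo "Photo transfer finished at `date`" | mail -s "Pi photo sync complete" you@example.com
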
Tips Since Creating My Photo Process
  • I could not get the above code to work with Debian due to permission issues
  • It is better to identify devices by UUID instead of KERNEL when setting up udev rules; run ls -l /dev/disk/by-uuid to discover device UUIDs
RClone
  • Amazon has blocked all third-party applications from pushing to Amazon Drive. For more information see this
  • I am going to implement Duplicati as a solution to sync my photos to Amazon Drive
  • Another potential solution is to continue to use rclone, but host the files on Google Drive

RSpec and Elastic Search


I had a difficult time setting up ElasticSearch with both RSpec (on CircleCI) and Heroku. The ElasticSearch test cluster was not working on the CircleCI Docker image. Fortunately, one can configure the Circle environment to start an ElasticSearch process. So, instead of using the test cluster, both my local testing environment and the Circle environment use a real ElasticSearch process.
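
On the CircleCI 1.0 platform this is a service declaration in circle.yml. A sketch, which also sets the CIRCLE_CI_ES_URL variable checked in the spec helper below:

machine:
  environment:
    CIRCLE_CI_ES_URL: http://localhost:9200
  services:
    - elasticsearch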

#
# ElasticSearch
#

config.before :all, elasticsearch: true do
  port = ENV['CIRCLE_CI_ES_URL'].present? ? 9200 : 9250
  Elasticsearch::Model.client = Elasticsearch::Client.new(port: port)
end

config.before :each, elasticsearch: true do
  Campaign.__elasticsearch__.create_index!(force: true)
end

config.after :each, elasticsearch: true do
  Campaign.__elasticsearch__.delete_index!
end

When a test needs to use ElasticSearch:

before { Campaign.__elasticsearch__.refresh_index! }

describe '#search', elasticsearch: true do
  it 'returns results' do
    expect(search.results).to be_present
  end
end

I ran into issues using ElasticSearch on Heroku when creating an index. Heroku review apps are configurable by defining an app.json. In the app.json file, Heroku can spin up an ElasticSearch process.

"scripts": {
  "postdeploy": "bundle exec rake db:schema:load db:seed"
},
"formation": {
  "web": {
    "size": "free",
    "quantity": 1
  },
  "elasticsearch": {
    "size": "free",
    "quantity": 1
  }
}

The ElasticSearch process is running before any Ruby code executes. The next step is to create the index during postdeploy: before creating ActiveRecord objects in the seeds.rb file, create an index with Model.__elasticsearch__.create_index!(force: true).
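
A sketch of what the top of seeds.rb might look like, reusing the Campaign model from the spec helper above (the seed data itself is a placeholder):

# db/seeds.rb
Campaign.__elasticsearch__.create_index!(force: true)

Campaign.create!(name: 'Example campaign')

# make the seeded records visible to search immediately
Campaign.__elasticsearch__.refresh_index!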

Vim, Thoughts After Two Weeks


I purposefully began using Vim during a non-strenuous work week, which helped manage frustration. I have been using Vim exclusively as my text editor for two weeks and have experienced improved efficiencies over my previous editor, Sublime.

Improvements

The ability to switch between files quickly using the Ctrl-P plugin is the biggest quality-of-life increase. Commands such as / and ? make finding word matches in a file easy. Using RSpec with Vim is also great; I can change my code and re-run a single test without switching screens (see the sketch below)! There are many small wins with Vim, and they add up.
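
A sketch of the vimrc mappings that make this possible, assuming the thoughtbot/vim-rspec plugin:

" run specs from inside Vim without switching screens
map <Leader>t :call RunCurrentSpecFile()<CR>
map <Leader>s :call RunNearestSpec()<CR>
map <Leader>l :call RunLastSpec()<CR>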

Pain Points

Saving new files does not work if the directory does not exist; the mkdir plugin solves this issue. Commenting out code is a pain without vim-commentary. Checking my vimrc file into version control has been helpful in debugging where breaking changes originate.

More Resources

Vim Awesome