Advent of Code 2024 with Elixir - Part 1: The Good-ish, the Ugly and the Bad

I tried 2024’s Advent of Code using the functional language Elixir. I didn’t get as far as I did in the previous two years. That was partly down to being busy with work and falling ill, but some of it came from being forced to express most of my solutions as a sequence of transformations over the input data. With Rust, iterators and closures were optional and I could always fall back on a more imperative approach; with Elixir there was no way out.

The Good-ish

My experience with Rust iterators and the docs for Elixir’s Enum module saw me off to a good start. The first few problems had one-dimensional input, which suited Elixir’s linked lists and map, filter and friends. Here’s my solution to day 2:

defmodule Reports do
  # A report is safe when its levels are strictly increasing or strictly
  # decreasing, with each step changing the level by at most 3.
  def is_safe(levels) do
    [first, second | _] = levels
    cond do
      first < second ->
        Enum.chunk_every(levels, 2, 1, :discard)
        |> Enum.all?(fn [a, b] -> a < b && b - a < 4 end)
      first > second ->
        Enum.chunk_every(levels, 2, 1, :discard)
        |> Enum.all?(fn [a, b] -> a > b && a - b < 4 end)
      first == second ->
        false
    end
  end

  # A report is safe when dampened if removing at most one level yields a safe
  # report; the index equal to length(levels) removes nothing, which covers the
  # already-safe case.
  def dampened_is_safe(levels) do
    safe = 0..length(levels)
    |> Enum.map(fn i ->
      Enum.reject(Enum.with_index(levels), fn {_, idx} -> idx == i end)
    end)
    |> Enum.map(fn levels ->
      Enum.map(levels, fn {level, _} -> level end)
    end)
    |> Enum.map(&Reports.is_safe/1)
    Enum.any?(safe)
  end
end

{:ok, input} = File.read("../input/02-input.txt")
reports = String.trim(input)
|> String.split("\n")
|> Enum.map(fn line -> String.split(line, " ") end)
|> Enum.map(fn line -> Enum.map(line, fn chunk ->  String.to_integer(chunk) end) end)
|> Enum.to_list()

safe_count = Enum.filter(reports, &Reports.is_safe/1)
|> Enum.count()
IO.puts("Part 1 solution: #{safe_count}")

dampened_safe_count = Enum.filter(reports, &Reports.dampened_is_safe/1)
|> Enum.count()
IO.puts("Part 2 solution: #{dampened_safe_count}")

The code here is fairly trivial. A couple of predicates tell us whether a report’s levels increase or decrease uniformly, by between 1 and 3 at each step.
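The sliding pairs come from Enum.chunk_every/4 with a step of 1, which is worth seeing in isolation (the list here is just an illustration, not puzzle input):

# Overlapping pairs of neighbouring levels; :discard drops any incomplete tail.
Enum.chunk_every([1, 2, 4, 5], 2, 1, :discard)
# => [[1, 2], [2, 4], [4, 5]]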

Despite its simplicity, it shows how even an inexperienced Elixir programmer like me can express and formalise the rules in (IMO) quite readable code. It also illustrates the pattern that was beginning to develop: transforming the input into something useful in the body of the script and putting the business logic in a module at the top.

The Ugly

It was only a matter of time before I had to deal with two-dimensional inputs. Day four required taking vertical and horizontal slices of a text grid. You can see the code for the slices here. It isn’t pretty, but I think the ugliest (and least efficient) part is the function I used to produce all overlapping square chunks of a given width from the grid.

The code below is equivalent to three nested for loops; the loops have been replaced by Enum.map calls and the state is shared using closures. (The shared state isn’t mutated, so that’s something.) However, Enum.slice is less efficient than slicing arrays in Go or Vecs in Rust.

  def all_square_chunks(flat_grid, width, height, chunk_size) do
    chunk_offset = div(chunk_size, 2)

    # For every cell far enough from the edges to be the centre of a chunk,
    # slice chunk_size rows of chunk_size cells out of the flat grid.
    chunk_offset..height-(chunk_offset+1)
    |> Enum.map(fn center_y ->
      chunk_offset..width-(chunk_offset+1)
      |> Enum.map(fn center_x ->
        0..chunk_size-1
        |> Enum.map(fn chunk_num ->
          start_idx = ((center_y + (chunk_num - chunk_offset)) * width) + center_x - chunk_offset
          end_idx = start_idx + (chunk_size - 1)
          Enum.slice(flat_grid, start_idx..end_idx)
        end)
      end)
    end)
    |> Enum.concat()
  end

I’ve used pattern matching to solve the actual problem, which, while somewhat repetitious, is a bit less ugly than all_square_chunks.

cross_count = Search.all_square_chunks(flat, width, height, 3) 
|> Enum.filter(fn square ->
  case square do
    [["M", _n_, "M"],
     [_w_, "A", _e_], 
     ["S", _s_, "S"]] -> true
    [["S", _n_, "M"],
     [_w_, "A", _e_], 
     ["S", _s_, "M"]] -> true
    [["M", _n_, "S"],
     [_w_, "A", _e_], 
     ["M", _s_, "S"]] -> true
    [["S", _n_, "S"],
     [_w_, "A", _e_], 
     ["M", _s_, "M"]] -> true
    _ -> false 
  end
end)
|> Enum.count()

I haven’t profiled it, but the code above takes at least a second to run.

I couldn’t think of a more efficient or elegant functional solution, but a quick search of the AoC subreddit turned one up; it seems to do something fancy with pattern matching on function arguments.
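In hindsight, one idea I could have tried is keeping the grid in a map keyed by {x, y} coordinates, so that cell lookups don’t have to walk a linked list. Here’s a rough, untested sketch of that layout (it isn’t the code I ran, and GridMap is a made-up module name):

defmodule GridMap do
  # Build a %{{x, y} => value} map from the flat list, assuming row-major order.
  def from_flat(flat_grid, width) do
    flat_grid
    |> Enum.with_index()
    |> Map.new(fn {val, idx} -> {{rem(idx, width), div(idx, width)}, val} end)
  end

  # Every chunk_size x chunk_size square whose top-left corner fits in the grid.
  def all_square_chunks(grid_map, width, height, chunk_size) do
    for top <- 0..(height - chunk_size), left <- 0..(width - chunk_size) do
      for y <- top..(top + chunk_size - 1) do
        for x <- left..(left + chunk_size - 1), do: Map.fetch!(grid_map, {x, y})
      end
    end
  end
end

Whether that would actually be faster I can’t say without measuring, but it at least avoids the repeated Enum.slice calls.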

Interlude: The DAG

I was delayed but not thwarted by day 5’s graph problem. Elixir’s standard library eventually saved me. I should have forced myself to write the topological sort algorithm using a functional style from scratch, but sleep proved more important.

I was annoyed at myself for not recognising the algorithm. As someone who uses Terraform/OpenTofu at work every day, you’d think I would have looked into how a dependency graph of cloud resources gets resolved into a sequence of calls to cloud providers’ APIs.
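For reference, here’s a minimal sketch of the kind of thing that saved me, assuming Erlang’s :digraph and :digraph_utils modules (callable directly from Elixir). The rules below are placeholders rather than real puzzle input, and this isn’t my actual day 5 code:

# Build a directed graph from before/after ordering rules and topologically sort it.
rules = [{:a, :b}, {:b, :c}, {:a, :c}]

graph = :digraph.new()

Enum.each(rules, fn {earlier, later} ->
  :digraph.add_vertex(graph, earlier)
  :digraph.add_vertex(graph, later)
  :digraph.add_edge(graph, earlier, later)
end)

case :digraph_utils.topsort(graph) do
  false -> IO.puts("the rules contain a cycle")
  order -> IO.inspect(order, label: "one valid ordering")
end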

The Bad

As brute-force iteration did the job for day 4, I tried a similar approach for day 6. This time, however, the number of iterations wasn’t bounded. Stream.iterate(0, & &1 + 1) is similar to itertools.count in Python, and I discard the value it produces. This may be considered bad practice in Elixir, but it was the only way forward that I could see.
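For anyone who hasn’t met it, Stream.iterate/2 builds a lazy, unbounded sequence; nothing is generated until something consumes it:

# A lazy, unbounded counter; only the five elements we ask for get produced.
Stream.iterate(0, & &1 + 1)
|> Enum.take(5)
# => [0, 1, 2, 3, 4]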

The high-level script below is straightforward at first glance, but it took over ten seconds to execute on a four-year-old gaming laptop.

# Keep moving the guard one step at a time until move_guard halts the reduction.
solved_flat_grid = Stream.iterate(0, & &1 + 1)
|> Enum.reduce_while(flat_grid, fn _i, grid_acc ->
  Gallivant.move_guard(grid_acc, width)
end)

# Every visited cell is marked "X"; the cell the guard ended on still holds its symbol.
guard_path_length = solved_flat_grid
|> Enum.filter(fn cell -> cell in ["X", "^", ">", "v", "<"] end)
|> Enum.count()

Here’s how the sausage is made, which should give some clue as to the inefficiency:

defmodule Gallivant do
  # Step offsets for each facing; y grows downwards in the row-major flat grid.
  @offsets %{"^" => {0, -1}, ">" => {1, 0}, "v" => {0, 1}, "<" => {-1, 0}}

  def move_guard(flat_grid, width) do
    # Scan the entire grid for the guard on every single step.
    guard_cell = flat_grid
    |> Enum.with_index()
    |> Enum.find(fn {val, _idx} -> val in Map.keys(@offsets) end)

    if guard_cell != nil do
      {guard, idx} = guard_cell
      new_idx = new_index(guard, idx, width)

      if new_idx >= 0 do
        guard_dest = flat_grid |> Enum.at(new_idx)
        cond do
          guard_dest in [".", "X"] ->
            # Step forward, leaving an "X" behind to mark the visited cell.
            grid = flat_grid
            |> List.replace_at(idx, "X")
            |> List.replace_at(new_idx, guard)
            {:cont, grid}
          guard_dest == "#" ->
            # Obstacle ahead: turn right on the spot.
            grid = flat_grid
            |> List.replace_at(idx, rotate_right(guard))
            {:cont, grid}
          true ->
            # Enum.at returned nil: the guard has walked off the bottom of the grid.
            {:halt, flat_grid}
        end
      else
        {:halt, flat_grid}
      end
    else
      {:halt, flat_grid}
    end
  end

  def new_index(guard, idx, width) do
    x = rem(idx, width)
    y = div(idx, width)
    {x_offset, y_offset} = @offsets[guard]
    {new_x, new_y} = {x + x_offset, y + y_offset}
    # Stepping off the left or right edge must not wrap onto a neighbouring row.
    if new_x < 0 or new_x >= width, do: -1, else: new_x + new_y * width
  end

  def rotate_right(guard) do
    case guard do
      "^" -> ">"
      ">" -> "v"
      "v" -> "<"
      "<" -> "^"
      _ -> :notdirection
    end
  end
end

There are a couple of explanations for this inefficiency.

  • Index lookups into a linked list, which List.replace_at and Enum.at both rely on, have O(n) time complexity (see the snippet below).

  • This is exacerbated by the fact that each call returns a new list rather than mutating the existing one, rebuilding everything up to the index it touches.
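To make the first point concrete, here’s a throwaway snippet (nothing to do with the solution itself) showing how much more expensive it is to index the far end of a long list than the front:

# :timer.tc/1 returns {microseconds, result}; the timings are indicative only.
big_list = Enum.to_list(1..1_000_000)

{fast, _} = :timer.tc(fn -> Enum.at(big_list, 0) end)
{slow, _} = :timer.tc(fn -> Enum.at(big_list, 999_999) end)

IO.puts("head lookup: #{fast}µs, tail lookup: #{slow}µs")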

Had I implemented this in Rust, I would have stored the 2D grid in a single Vec and mutated that Vec repeatedly; I’m almost certain it would have taken milliseconds. I could have done something similar in Python and gotten away with it, because Python lists are arrays of references with constant-time index lookups.

Aside from the sluggish computation, the number of nested control flow blocks in the move_guard function doesn’t help readability. Further, by returning the tuple expected by reduce_while, this code is coupled to that function. I can confidently say that this is bad code.

Conclusion

This post detailed the first time I’ve picked up a completely new language while solving Advent of Code problems. When I used Rust and Go in previous years, I already had a bit of experience with them, so I didn’t write my absolute worst code in December. All I can say is that I’m glad I didn’t try Haskell this year.

In the next post, I’ll show how I improved this solution to day 6.