jzimbel's recent activity
-
Comment on Tildes Video Thread in ~misc
-
Comment on Day 22: Sand Slabs in ~comp.advent_of_code
jzimbel Ah gotcha, I had missed that the first new brick (the lower of the two) caused one of the seven bricks from the original example to become load-bearing when it previously wasn't. So the new count of 5 was composed of 4 from the existing set of 7, and 1 from the new pair.
And yes, that fixed it!
-
Comment on Day 22: Sand Slabs in ~comp.advent_of_code
jzimbel Sorry if this is resurfacing bad debugging memories, but how does that even work? Both of the two new bricks are load-bearing? If
1,2,11~1,4,11
starts above all the others, how can it be load-bearing?
(I think I'm missing the same edge case as you were, thanks for sharing the extra example!)
-
Comment on Is there a programming language that brings you joy? in ~comp
jzimbel It’s elixir for me—while the syntax can look a bit odd at first, you eventually realize it’s composed of a small number of basic elements with several layers of optional syntactic sugar on top. At its core, syntactically, it’s similar to a lisp. The language’s creator has actually described it as a lisp-2 language once or twice. This allows for robust and relatively user-friendly macros.
Separately from the syntax, the BEAM runtime’s focus on actor model/shared-nothing concurrency means you can (and are encouraged to) create applications that behave like a network of fault-tolerant mini-services rather than a single monolithic program. The way you structure your logic is just very different from other languages, and in a good way. You also don’t need to reach for additional technologies as often—for example, it’s trivial to set up a cache without memcached or the like, or cronjob-like tasks.
Having used elixir as the backend language at my job for the past 3 years, I can truthfully say that tracking down and fixing bugs in our elixir code has never taken more than a couple hours. The immutable functional style within each elixir process, and the supervisor tree / actor model between processes, generally makes for a very resilient and observable system.
-
Comment on ‘Winning requires hard work’: Wayfair CEO sends employees a gloomy pre-holiday email following layoff-filled year in ~finance
jzimbel Also worked at Wayfair, 2017-2020, as a SWE, and I also experienced a strong culture of “What are you still doing here at 5(:30), it’s not that serious, go home right now”. Upper management definitely leaned a bit sociopath, but in general most of the teams I worked on had decent managers who cared about their people’s wellbeing.
Niraj sent out “story time” emails like this weekly, I believe, most with a theme of “work hard, work smart”.
Not to defend the company—it was and is kind of a mess—but we all learned to basically tune out these emails from Niraj and co.
Try asking around about the “WayThrifty” one. lol.
-
Comment on Day 14: Reflector Dish in ~comp.advent_of_code
jzimbel Elixir
For some reason I have a lot of trouble wrapping my head around the math/logic to extrapolate a repeating sequence of values, not starting at index 0, to the nth element. This time, after working it out again, I actually bothered to write a little function to do it. No more tears on these problems in the future. 🥲
New Sequence module
The function itself is short, but I wanted to make sure I'd have an easy time remembering how to use it when the next iteration of this "find the bazillionth iteration of some sequence" problem comes up again.
```elixir
defmodule AdventOfCode.Sequence do
  @moduledoc """
  Functions for infinitely repeating sequences.
  """

  @doc """
  Given an indexed list describing the repeating segment of an infinitely
  repeating sequence, returns the value at `at_index` of the sequence.

  Indices in the list do not need to start at 0, but they do need to be
  contiguous, and the list must be ordered by index, ascending.

  `at_index` can be any integer.
  """
  @spec at(nonempty_list({a, index :: integer}), integer) :: a when a: term
  def at(repeat, i) do
    [{_value, start_index} | _] = repeat
    {value, _index} = Enum.at(repeat, Integer.mod(i - start_index, length(repeat)))
    value
  end
end
```
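A quick sketch of how the lookup behaves, with the same logic inlined as an anonymous function (the values here are illustrative, not from a puzzle):

```elixir
# Same lookup as Sequence.at/2, inlined for a quick check.
at = fn repeat, i ->
  [{_value, start_index} | _] = repeat
  # Integer.mod/2 (unlike rem/2) always returns a non-negative result,
  # so indices before the segment's start also wrap around correctly.
  {value, _index} = Enum.at(repeat, Integer.mod(i - start_index, length(repeat)))
  value
end

# Repeating segment [:a, :b, :c] whose first element sits at index 3.
repeat = [{:a, 3}, {:b, 4}, {:c, 5}]

at.(repeat, 3)   # => :a
at.(repeat, 10)  # => :b  (10 - 3 = 7; 7 mod 3 = 1)
at.(repeat, -1)  # => :c  (mod handles negative indices)
```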
Part 1 and part 2
I find it very intuitive to model this kind of "cycle of repeated operations" problem as a stream pipeline.
```elixir
defmodule AdventOfCode.Solution.Year2023.Day14 do
  alias AdventOfCode.Grid, as: G
  alias AdventOfCode.Sequence

  def part1(input) do
    input
    |> G.from_input()
    |> tilt(:n)
    |> rock_load()
  end

  def part2(input) do
    [:n, :w, :s, :e]
    |> Stream.cycle()
    |> Stream.scan(G.from_input(input), &tilt(&2, &1))
    |> Stream.drop(3)
    |> Stream.take_every(4)
    |> Stream.with_index(1)
    |> Enum.reduce_while(%{}, fn {grid, i}, indices ->
      case Map.fetch(indices, grid) do
        {:ok, repeat_start} -> {:halt, build_repeat_segment(indices, repeat_start, i - 1)}
        :error -> {:cont, Map.put(indices, grid, i)}
      end
    end)
    |> Sequence.at(1_000_000_000)
  end

  defp build_repeat_segment(indices, start_index, end_index) do
    repeat_indices = start_index..end_index//1

    indices
    |> Stream.filter(fn {_g, j} -> j in repeat_indices end)
    |> Stream.map(fn {g, j} -> {rock_load(g), j} end)
    |> Enum.sort_by(&elem(&1, 1))
  end

  defp tilt(grid, :n), do: do_tilt(grid, &G.cols/1, ?O, &G.from_cols/1)
  defp tilt(grid, :s), do: do_tilt(grid, &G.cols/1, ?., &G.from_cols/1)
  defp tilt(grid, :w), do: do_tilt(grid, &G.rows/1, ?O, &G.from_rows/1)
  defp tilt(grid, :e), do: do_tilt(grid, &G.rows/1, ?., &G.from_rows/1)

  defp do_tilt(grid, to_lanes, char_to_send_forward, from_lanes) do
    grid
    |> to_lanes.()
    |> Enum.map(fn lane ->
      lane
      |> Stream.map(fn {_coords, char} -> char end)
      |> Stream.chunk_by(&(&1 == ?#))
      |> Enum.flat_map(fn
        [?# | _] = l -> l
        l -> char_to_front(l, char_to_send_forward)
      end)
    end)
    |> from_lanes.()
  end

  defp char_to_front(l, char, acc \\ [])
  defp char_to_front([], _char, acc), do: acc
  defp char_to_front([char | rest], char, acc), do: [char | char_to_front(rest, char, acc)]
  defp char_to_front([other | rest], char, acc), do: char_to_front(rest, char, [other | acc])

  defp rock_load(grid) do
    grid
    |> G.rows()
    |> Stream.with_index()
    |> Stream.map(fn {row, i} ->
      (grid.height - i) * Enum.count(row, &match?({_, ?O}, &1))
    end)
    |> Enum.sum()
  end
end
```
-
Comment on Day 15: Lens Library in ~comp.advent_of_code
jzimbel It seemed a little weird to me as well for day 15, especially with how much they spelled out the instructions compared to past days from this year. Kind of a basic data structures problem.
-
Comment on Day 15: Lens Library in ~comp.advent_of_code
jzimbel Elixir
Very easy! List.keystore/4 and List.keydelete/3 were perfect for the map-building operations.
Parts 1 and 2
Squeezed down to 24 mostly-illegible lines, because I felt like it
```elixir
defmodule AdventOfCode.Solution.Year2023.Day15 do
  def part1(input), do: input |> parse() |> Stream.map(&hash/1) |> Enum.sum()
  def part2(input), do: input |> parse() |> to_map() |> stream_focusing_powers() |> Enum.sum()

  defp to_map(ops) do
    ops
    |> Stream.map(&Regex.run(~r/([^=]+)(?:-|=(.+))/, &1, capture: :all_but_first))
    |> Enum.reduce(Map.new(0..255, &{&1, []}), fn
      [i, f], m -> Map.update!(m, hash(i), &List.keystore(&1, i, 0, {i, String.to_integer(f)}))
      [i], m -> Map.update!(m, hash(i), &List.keydelete(&1, i, 0))
    end)
  end

  defp stream_focusing_powers(map) do
    Stream.flat_map(map, fn {box, lenses} ->
      lenses
      |> Stream.with_index(1)
      |> Stream.map(fn {{_label, f}, slot} -> (1 + box) * slot * f end)
    end)
  end

  defp hash(str), do: for(<<char <- str>>, reduce: 0, do: (acc -> rem((acc + char) * 17, 256)))

  defp parse(input), do: input |> String.split(",", trim: true) |> Stream.map(&String.trim/1)
end
```
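For reference, the one-line hash/1 above is the puzzle's HASH algorithm: start at 0, and for each byte add it, multiply by 17, and take the result mod 256. The same fold, inlined as a standalone sketch:

```elixir
# The puzzle's HASH: start at 0; for each byte, add it, multiply by 17, mod 256.
hash = fn str ->
  for <<char <- str>>, reduce: 0 do
    acc -> rem((acc + char) * 17, 256)
  end
end

hash.("HASH")  # => 52, the worked example from the puzzle text
```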
-
Comment on Day 16: The Floor Will Be Lava in ~comp.advent_of_code
jzimbel Elixir
My code is pretty incomprehensible because I was messing around with macros to define functions with "unusual" names 🌝
The macro tomfoolery essentially allows the solution to turn the grid's tiles into direct function calls, no map lookups or anonymous functions necessary. It didn't really improve performance, but I had fun seeing what kind of nonsense I could get into.
Parts 1 and 2
```elixir
defmodule AdventOfCode.Solution.Year2023.Day16 do
  defmodule Silliness do
    defmacro __using__(_) do
      quote do
        import Kernel, only: :macros

        def unquote(:.)(h), do: [h]
        def unquote(:/)({x, y}), do: [{-y, -x}]
        def unquote(:"\\")({x, y}), do: [{y, x}]
        def unquote(:-)({0, _}), do: [{-1, 0}, {1, 0}]
        def unquote(:-)(h), do: [h]
        def unquote(:|)({_, 0}), do: [{0, -1}, {0, 1}]
        def unquote(:|)(h), do: [h]
      end
    end
  end

  defmodule TurnOps, do: use(Silliness)

  alias AdventOfCode.Grid, as: G

  defstruct [:beams, history: MapSet.new()]

  def part1(input) do
    input
    |> G.from_input(&String.to_existing_atom(<<&1>>))
    |> count_energized({{0, 0}, {1, 0}})
  end

  def part2(input) do
    grid = G.from_input(input, &String.to_existing_atom(<<&1>>))

    grid
    |> stream_init_beam_states()
    |> Task.async_stream(&count_energized(grid, &1), ordered: false)
    |> Stream.map(fn {:ok, count} -> count end)
    |> Enum.max()
  end

  defp count_energized(grid, init_beam_state) do
    %__MODULE__{beams: [init_beam_state]}
    |> Stream.iterate(&step_beams(&1, grid))
    |> Enum.find(&match?(%{beams: []}, &1))
    |> then(& &1.history)
    |> MapSet.new(fn {coords, _heading} -> coords end)
    |> MapSet.size()
  end

  defp step_beams(%{beams: beams, history: history}, grid) do
    %__MODULE__{
      beams: Enum.flat_map(beams, &if(&1 in history, do: [], else: step_beam(&1, grid))),
      history: MapSet.union(history, MapSet.new(beams))
    }
  end

  defp step_beam({coords, heading}, %{width: w, height: h} = grid) do
    apply(TurnOps, G.at(grid, coords), [heading])
    |> Stream.map(fn new_heading -> {sum_coords(coords, new_heading), new_heading} end)
    |> Enum.filter(fn {{x, y}, _} -> x in 0..(w - 1)//1 and y in 0..(h - 1)//1 end)
  end

  defp stream_init_beam_states(%{width: w, height: h}) do
    [
      Stream.flat_map(1..(h - 2)//1, fn y -> [{{0, y}, {1, 0}}, {{w - 1, y}, {-1, 0}}] end),
      Stream.flat_map(1..(w - 2)//1, fn x -> [{{x, 0}, {0, 1}}, {{x, h - 1}, {0, -1}}] end),
      Stream.flat_map([{0, 0}, {w - 1, 0}, {0, h - 1}, {w - 1, h - 1}], fn {x, y} ->
        x_headings = if x == 0, do: [0, 1], else: [0, -1]
        y_headings = if y == 0, do: [1, 0], else: [-1, 0]
        for heading <- Enum.zip(x_headings, y_headings), do: {{x, y}, heading}
      end)
    ]
    |> Stream.concat()
  end

  defp sum_coords({x1, y1}, {x2, y2}), do: {x1 + x2, y1 + y2}
end
```
-
Comment on Day 12: Hot Spring in ~comp.advent_of_code
jzimbel (edited)
Welp, my updated solution for part 2 is passing unit tests (including checking the count for each line individually) but giving too high an answer for the real input. It seems like I'm missing some edge case that exists only in the real input, but I couldn't begin to guess what that is.
Could someone who has a working solution share their puzzle input, with each individual line tagged with its individual arrangement count? Only if you have time to help a stranger on the internet, ofc. Maybe via pastebin / GH gist, considering how big the input is...
edit: I figured it out! I wasn't checking that the arrangement contains no '#'s after the last contiguous segment.
Parts 1 and 2, now working
After changing it to run on all lines concurrently, and tuning the cache table appropriately, it solves part 2 in about 1.8 sec on my machine. :)
```elixir
defmodule AdventOfCode.Solution.Year2023.Day12 do
  @table __MODULE__.MemoTable

  def part1(input) do
    input
    |> String.split("\n", trim: true)
    |> solve()
  end

  def part2(input) do
    input
    |> String.split("\n", trim: true)
    |> Stream.map(&quintuple/1)
    |> solve()
  end

  defp solve(lines) do
    :ets.new(@table, [
      :named_table,
      :public,
      read_concurrency: true,
      write_concurrency: true,
      decentralized_counters: true
    ])

    lines
    |> Stream.map(&parse_line/1)
    |> Task.async_stream(
      fn {pattern, contig_counts} -> memo_count_arrangements(pattern, contig_counts, 0) end,
      ordered: false
    )
    |> Stream.map(fn {:ok, count} -> count end)
    |> Enum.sum()
  after
    :ets.delete(@table)
  end

  defp quintuple(line) do
    [springs_str, sizes_str] = String.split(line)

    [{springs_str, "?"}, {sizes_str, ","}]
    |> Enum.map(fn {s, j} -> List.duplicate(s, 5) |> Enum.join(j) end)
    |> Enum.join(" ")
  end

  defp memo_count_arrangements(pattern, contig_counts, offset) do
    memo({pattern, contig_counts, offset}, fn ->
      count_arrangements(pattern, contig_counts, offset)
    end)
  end

  defp count_arrangements(pattern, [contig_count | _], offset)
       when byte_size(pattern) - offset < contig_count,
       do: 0

  defp count_arrangements(pattern, [], offset) do
    # If there are any '#'s in the remainder of the string, this arrangement is not valid.
    case :binary.match(pattern, "#", scope: {byte_size(pattern), offset - byte_size(pattern)}) do
      :nomatch -> 1
      _ -> 0
    end
  end

  defp count_arrangements(pattern, [contig_count | rest_contig], offset) do
    trailing_dot? = match?([_ | _], rest_contig)
    segment_size = contig_count + if(trailing_dot?, do: 1, else: 0)

    # The first '#' we find limits how far we can go.
    # The last possible fit for this contig_count starts at that '#'.
    stop_after =
      case :binary.match(pattern, "#", scope: {byte_size(pattern), offset - byte_size(pattern)}) do
        {pos, _len} -> pos
        :nomatch -> :infinity
      end

    dot = if trailing_dot?, do: "[?.]", else: ""

    ~r"""
    [?\#] # This is the character that we'll get the index of. The first character of this valid contiguous section.
    (?=   # Positive lookahead. Check that the correct characters follow the start of this contiguous section, without consuming them.
      [?\#]{#{contig_count - 1}} # Remaining characters of the contiguous section.
      #{dot} # Optional trailing '.', if this is not the final contiguous section.
    )
    """x
    |> Regex.scan(pattern, offset: offset, return: :index)
    |> Stream.flat_map(fn [{pos, _len}] -> if pos <= stop_after, do: [pos], else: [] end)
    |> Stream.map(&memo_count_arrangements(pattern, rest_contig, &1 + segment_size))
    |> Enum.sum()
  end

  defp memo(key, fun) do
    case :ets.match(@table, {key, :"$1"}) do
      [[value]] -> value
      [] -> tap(fun.(), &:ets.insert(@table, {key, &1}))
    end
  end

  defp parse_line(line) do
    [springs_str, sizes_str] = String.split(line)
    {springs_str, Enum.map(String.split(sizes_str, ","), &String.to_integer/1)}
  end
end
```
-
Comment on Day 11: Cosmic Expansion in ~comp.advent_of_code
jzimbel Elixir
This one was pretty straightforward. One of the more esoteric Enum functions, flat_map_reduce/3, came in handy here.
Parts 1 and 2
I had some extra time to tweak things and see how they affected performance, so parts of the code are a bit more complicated or weird than they need to be, but hopefully it still mostly makes sense.
To avoid having to deal with a sparsely-filled grid after expanding along one axis, I did the expansions separately (and concurrently), with each producing a map looking like original_galaxy_coordinates => new_x_or_y_coordinate. Then, I merged the two maps together to get the full new coordinates of each galaxy.

```elixir
defmodule AdventOfCode.Solution.Year2023.Day11 do
  alias AdventOfCode.CharGrid, as: G

  def part1(input), do: solve(input, 1)
  def part2(input), do: solve(input, 999_999)

  def solve(input, spacing) do
    input
    |> G.from_input()
    |> expanded_galaxy_coords(spacing)
    |> pairwise_distances_sum()
  end

  defp expanded_galaxy_coords(grid, spacing) do
    [new_xs, new_ys] =
      Task.await_many([
        Task.async(fn -> expanded_axis(grid, spacing, &G.cols/1, fn {x, _y} -> x end) end),
        Task.async(fn -> expanded_axis(grid, spacing, &G.rows/1, fn {_x, y} -> y end) end)
      ])

    Enum.map(new_xs, fn {coords, x} -> {x, new_ys[coords]} end)
  end

  defp expanded_axis(grid, spacing, lanes_fn, axis_fn) do
    grid
    |> lanes_fn.()
    |> Enum.flat_map_reduce(0, fn lane, expand_by ->
      if Enum.all?(lane, &match?({_coords, ?.}, &1)) do
        {[], expand_by + spacing}
      else
        {for({coords, ?#} <- lane, do: {coords, axis_fn.(coords) + expand_by}), expand_by}
      end
    end)
    |> then(fn {new_axis, _expand_by} -> Map.new(new_axis) end)
  end

  defp pairwise_distances_sum(galaxy_coords, sum \\ 0)
  defp pairwise_distances_sum([], sum), do: sum

  defp pairwise_distances_sum([{x1, y1} | rest], sum) do
    new_sums = for({x2, y2} <- rest, reduce: 0, do: (acc -> acc + abs(x2 - x1) + abs(y2 - y1)))
    pairwise_distances_sum(rest, sum + new_sums)
  end
end
```
-
Comment on Day 9: Mirage Maintenance in ~comp.advent_of_code
jzimbel Elixir
Not much to say about this one, pretty straightforward. My only mistake was accidentally rerunning my part 1 solution and submitting that as my part 2 answer after seeing my tests pass.
Parts 1 and 2
I did a tiny bit of optimization by reversing the lists of numbers depending on which end I was interested in, so that I could do hd(list) (constant time) rather than List.last(list) (linear time). Probably could have avoided a bunch of near-duplicate code, or the non-tail-call recursion for part 2, but meh, this works.

```elixir
defmodule AdventOfCode.Solution.Year2023.Day09 do
  def part1(input) do
    input
    |> parse_reverse()
    |> Enum.map(&extrapolate_forward/1)
    |> Enum.sum()
  end

  def part2(input) do
    input
    |> parse()
    |> Enum.map(&extrapolate_backward/1)
    |> Enum.sum()
  end

  defp extrapolate_forward(values, acc \\ 0)

  defp extrapolate_forward(values, acc) do
    if Enum.all?(values, &(&1 == 0)) do
      acc
    else
      values
      |> Enum.chunk_every(2, 1, :discard)
      # (List is reversed, so diff is left-right rather than right-left.)
      |> Enum.map(fn [b, a] -> b - a end)
      |> extrapolate_forward(acc + hd(values))
    end
  end

  defp extrapolate_backward(values) do
    if Enum.all?(values, &(&1 == 0)) do
      0
    else
      sub_result =
        values
        |> Enum.chunk_every(2, 1, :discard)
        |> Enum.map(fn [a, b] -> b - a end)
        |> extrapolate_backward()

      hd(values) - sub_result
    end
  end

  defp parse_reverse(input) do
    for line <- String.split(input, "\n", trim: true) do
      line
      |> String.split()
      |> Stream.map(&String.to_integer/1)
      |> Enum.reverse()
    end
  end

  defp parse(input) do
    for line <- String.split(input, "\n", trim: true) do
      for num <- String.split(line), do: String.to_integer(num)
    end
  end
end
```
-
Comment on Day 8: Haunted Wasteland in ~comp.advent_of_code
jzimbel Elixir
As many others have pointed out, part 2 wasn't very fun because I initially considered that each "ghost" might not always start on step 0 of its respective loop, thus making it far more difficult to calculate where they match up. It wasn't so bad once I figured out that they all start on step 0 and don't have any "sub-loops" / pass through multiple __Z labels, but those properties weren't made clear by the puzzle description.
New lil' Math module
There have been a handful of puzzles that involve calculating GCD/LCM, and I always forget how, so I figured I'd create a reusable module for those and other slightly-more-than-basic math functions.
```elixir
defmodule AdventOfCode.Math do
  @moduledoc """
  Common math functions/algorithms.
  """

  @doc """
  Greatest common divisor.
  """
  @spec gcd(Enumerable.t(pos_integer)) :: pos_integer
  def gcd(ns), do: Enum.reduce(ns, &gcd/2)

  defp gcd(d, 0), do: d
  defp gcd(a, b), do: gcd(b, Integer.mod(a, b))

  @doc """
  Least common multiple.
  """
  @spec lcm(Enumerable.t(pos_integer)) :: pos_integer
  def lcm(ns), do: Enum.reduce(ns, &lcm/2)

  defp lcm(a, b), do: div(a * b, gcd(a, b))
end
```
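A quick sanity check of the Euclidean-algorithm reduction used above, inlined into a throwaway module (MathCheck is a hypothetical name; the values are illustrative):

```elixir
defmodule MathCheck do
  # Euclid's algorithm, same shape as the private gcd/2 above.
  def gcd(d, 0), do: d
  def gcd(a, b), do: gcd(b, Integer.mod(a, b))

  # lcm(a, b) = a * b / gcd(a, b)
  def lcm(a, b), do: div(a * b, gcd(a, b))
end

# Folding pairwise across a list, as Math.gcd/1 and Math.lcm/1 do:
Enum.reduce([12, 18, 30], &MathCheck.gcd/2)  # => 6
Enum.reduce([4, 6, 10], &MathCheck.lcm/2)    # => 60
```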
Parts 1 and 2
I've been stubbornly avoiding regexes for parsing inputs this year, because binary pattern matching is neat and weird. As long as the values you're trying to pull out of the string have a constant character length (variable length allowed if it's at the end of the string), binary pattern matching is sufficient.
```elixir
defmodule AdventOfCode.Solution.Year2023.Day08 do
  alias AdventOfCode.Math

  @type t :: %__MODULE__{
          labels: %{(index :: non_neg_integer) => String.t()},
          step: non_neg_integer,
          zs: MapSet.t(step :: non_neg_integer)
        }

  @enforce_keys [:labels]
  defstruct @enforce_keys ++ [step: 0, zs: MapSet.new()]

  def new(labels) do
    labels
    |> Enum.with_index(fn l, i -> {i, l} end)
    |> Map.new()
    |> then(&%__MODULE__{labels: &1})
  end

  def part1(input) do
    input
    |> parse()
    |> steps_to_z(["AAA"], &(&1 == "ZZZ"))
  end

  def part2(input) do
    input
    |> parse()
    |> steps_to_z(&match?(<<_, _, ?Z>>, &1))
  end

  defp steps_to_z({dirs, map}, done?) do
    steps_to_z({dirs, map}, for(<<_, _, ?A>> = label <- Map.keys(map), do: label), done?)
  end

  defp steps_to_z({dirs, map}, starting_labels, done?) do
    dirs
    |> Stream.cycle()
    |> Enum.reduce_while(new(starting_labels), fn dir, acc ->
      if map_size(acc.labels) == 0 do
        {:halt, acc.zs}
      else
        {:cont, update_acc(acc, map, dir, done?)}
      end
    end)
    |> Math.lcm()
  end

  defp update_acc(acc, map, dir, done?) do
    found_z = Enum.flat_map(acc.labels, fn {i, l} -> if done?.(l), do: [i], else: [] end)

    labels =
      acc.labels
      |> Map.drop(found_z)
      |> Map.new(fn {i, l} -> {i, map[l][dir]} end)

    zs = if match?([_ | _], found_z), do: MapSet.put(acc.zs, acc.step), else: acc.zs

    %{acc | labels: labels, zs: zs, step: acc.step + 1}
  end

  defp parse(input) do
    [dirs, map] = String.split(input, "\n\n")

    dirs = for <<dir::1-bytes <- dirs>>, do: dir |> String.downcase() |> String.to_existing_atom()

    map =
      for <<label::3-bytes, _::4*8, left::3-bytes, _::2*8, right::3-bytes, _::bytes>> <-
            String.split(map, "\n", trim: true),
          into: %{},
          do: {label, %{l: left, r: right}}

    {dirs, map}
  end
end
```
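To illustrate the fixed-width binary matching used in parse/1 above: each node line in this puzzle has the shape "AAA = (BBB, CCC)" (example labels from the puzzle's sample input), so every field sits at a constant byte offset and can be destructured without a regex:

```elixir
line = "AAA = (BBB, CCC)"

# 3-byte label, 4 bytes of " = (", 3-byte left node, 2 bytes of ", ",
# 3-byte right node, and whatever trails (the closing ")").
<<label::3-bytes, _::4*8, left::3-bytes, _::2*8, right::3-bytes, _::bytes>> = line

{label, left, right}  # => {"AAA", "BBB", "CCC"}
```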
-
Comment on Day 6: Wait For It in ~comp.advent_of_code
jzimbel Elixir
Still playing catch-up, but this one was a nice break. I still did more work than necessary, though—I assumed part 2 would require optimization so I implemented a binary search with no guardrails (since it's guaranteed to find a value for each race in the puzzle input) and used math to compute the total number of winners from the first winning button-hold value. Yay for me, I guess?
Part 1 runs in ~12μs, part 2 in ~5.7μs. It takes longer to run multiple small searches than to run one big one! Thanks, O(log n)!
Parts 1 and 2
```elixir
defmodule AdventOfCode.Solution.Year2023.Day06 do
  import Integer, only: [is_even: 1]

  def part1(input) do
    input
    |> parse_p1()
    |> Enum.map(&count_wins/1)
    |> Enum.product()
  end

  def part2(input) do
    input
    |> parse_p2()
    |> count_wins()
  end

  defp parse_p1(input) do
    input
    |> String.split("\n", trim: true)
    |> Enum.map(fn line ->
      line |> String.split() |> Enum.drop(1) |> Enum.map(&String.to_integer/1)
    end)
    |> Enum.zip()
  end

  defp parse_p2(input) do
    input
    |> String.split("\n", trim: true)
    |> Enum.map(fn line ->
      line
      |> String.split(":")
      |> Enum.at(1)
      |> String.replace(" ", "")
      |> String.to_integer()
    end)
    |> List.to_tuple()
  end

  defp count_wins({t, d}) do
    half_t = div(t, 2)
    n = binary_search_first_win(1, half_t, t, d)

    if is_even(t), do: 2 * (half_t - n) + 1, else: 2 * (half_t - n + 1)
  end

  defp binary_search_first_win(n_min, n_max, t, d) do
    n = div(n_min + n_max, 2)

    with {:n_wins_race?, true} <- {:n_wins_race?, n * (t - n) > d},
         n_sub1 = n - 1,
         {:n_sub1_loses_race?, true} <- {:n_sub1_loses_race?, n_sub1 * (t - n_sub1) <= d} do
      n
    else
      {:n_wins_race?, false} -> binary_search_first_win(n + 1, n_max, t, d)
      {:n_sub1_loses_race?, false} -> binary_search_first_win(n_min, n - 1, t, d)
    end
  end
end
```
-
Comment on Day 7: Camel Cards in ~comp.advent_of_code
jzimbel Elixir
I split my part 1 and 2 solutions into separate modules, mostly just to namespace the functions so I didn't have to have a bunch of parse_p1, parse_p2, etc. names.
Common logic
The sort key, a tuple of the form {type_score :: integer, hand :: list(integer)}, takes advantage of elixir/erlang's default term ordering. Unlike a lot of C-family languages, comparison operators like <, ==, etc. do a deep comparison by default. Tuples are compared left to right, tie-breaker style. The second elements are checked only if the first elements are equal, and so on. The same goes for lists. So, using this value as a sort key lets me sort the list of plays exactly as the puzzle describes.

```elixir
defmodule AdventOfCode.Solution.Year2023.Day07 do
  def part1(input), do: solve(input, __MODULE__.Part1)
  def part2(input), do: solve(input, __MODULE__.Part2)

  def solve(input, mod) do
    input
    |> mod.parse()
    |> Enum.sort_by(fn {hand, _bid} -> {type_score(mod.card_groupings(hand)), hand} end)
    |> Enum.with_index(1)
    |> Enum.map(&winnings/1)
    |> Enum.sum()
  end

  defp type_score([5]), do: 6
  defp type_score([1, 4]), do: 5
  defp type_score([2, 3]), do: 4
  defp type_score([1, 1, 3]), do: 3
  defp type_score([1, 2, 2]), do: 2
  defp type_score([1, 1, 1, 2]), do: 1
  defp type_score([1, 1, 1, 1, 1]), do: 0

  defp winnings({{_hand, bid}, rank}), do: bid * rank
end
```
Logic specific to part 1
```elixir
defmodule AdventOfCode.Solution.Year2023.Day07.Part1 do
  def parse(input) do
    input
    |> String.split("\n", trim: true)
    |> Enum.map(&parse_play/1)
  end

  def card_groupings(hand) do
    hand
    |> Enum.frequencies()
    |> Map.values()
    |> Enum.sort()
  end

  defp parse_play(line) do
    [hand, bid] = String.split(line)
    hand = for <<char <- hand>>, do: parse_card_value(char)
    bid = String.to_integer(bid)
    {hand, bid}
  end

  for {char, value} <- Enum.zip(~c[TJQKA], 10..14) do
    defp parse_card_value(unquote(char)), do: unquote(value)
  end

  defp parse_card_value(digit_char), do: digit_char - ?0
end
```
Logic specific to part 2
```elixir
defmodule AdventOfCode.Solution.Year2023.Day07.Part2 do
  @j_value 1

  def parse(input) do
    input
    |> String.split("\n", trim: true)
    |> Enum.map(&parse_play/1)
  end

  def card_groupings(hand) do
    {jokers, hand} = Enum.split_with(hand, &(&1 == @j_value))
    j_count = length(jokers)

    hand
    |> AdventOfCode.Solution.Year2023.Day07.Part1.card_groupings()
    |> strengthen(j_count)
  end

  defp strengthen(groupings, 0), do: groupings
  defp strengthen([], 5), do: [5]

  defp strengthen(groupings, j_count) do
    {smaller, [biggest]} = Enum.split(groupings, -1)
    smaller ++ [biggest + j_count]
  end

  defp parse_play(line) do
    [hand, bid] = String.split(line)
    hand = for <<char <- hand>>, do: parse_card_value(char)
    bid = String.to_integer(bid)
    {hand, bid}
  end

  defp parse_card_value(?T), do: 10
  defp parse_card_value(?J), do: @j_value

  for {char, value} <- Enum.zip(~c[QKA], 12..14) do
    defp parse_card_value(unquote(char)), do: unquote(value)
  end

  defp parse_card_value(digit_char), do: digit_char - ?0
end
```
-
Comment on Day 5: If You Give A Seed A Fertilizer in ~comp.advent_of_code
jzimbel Elixir
I just got out of recursion hell implementing the optimized part 2 solution, and it's pretty wild to me that:
- the optimized approach solves part 2 in ~2.5ms when the naive one was still chugging along after the better part of a day.
- this is a day 5 puzzle! The optimization approach for this is similar to that of 2021's day 22 (!) puzzle. (Discussion and solution for that here, though there are probably easier-to-follow explanations out there by other folks.)
Quaking in my boots at what's yet to come if we keep going at this rate.
Messy part 1 and 2 solutions, featuring triple-nested Enum.reduce

```elixir
defmodule AdventOfCode.Solution.Year2023.Day05 do
  def part1(input), do: solve(input, &parse_seed_ranges_p1/1)
  def part2(input), do: solve(input, &parse_seed_ranges_p2/1)

  defp solve(input, seed_ranges_parser) do
    {seed_ranges, transforms} = parse(input, seed_ranges_parser)

    seed_ranges
    |> Stream.flat_map(&seed_range_to_location_ranges(&1, transforms))
    |> Stream.map(& &1.first)
    |> Enum.min()
  end

  defp parse(input, seed_ranges_parser) do
    [seeds_line | transform_lines] = String.split(input, "\n\n", trim: true)
    {seed_ranges_parser.(seeds_line), Enum.map(transform_lines, &parse_transform/1)}
  end

  defp parse_seed_ranges_p1("seeds: " <> seeds_str) do
    seeds_str
    |> String.split()
    |> Stream.map(&String.to_integer/1)
    |> Stream.map(&(&1..&1//1))
  end

  defp parse_seed_ranges_p2("seeds: " <> seeds_str) do
    seeds_str
    |> String.split()
    |> Stream.map(&String.to_integer/1)
    |> Stream.chunk_every(2)
    |> Stream.map(fn [start, len] -> start..(start + len - 1)//1 end)
  end

  defp parse_transform(transform_lines) do
    transform_lines
    |> String.split("\n", trim: true)
    |> Stream.drop(1)
    |> Stream.map(fn line -> line |> String.split() |> Enum.map(&String.to_integer/1) end)
    |> Enum.map(fn [dest_start, src_start, len] ->
      {src_start..(src_start + len - 1)//1, dest_start - src_start}
    end)
  end

  defp seed_range_to_location_ranges(seed_range, transforms) do
    Enum.reduce(transforms, [seed_range], &apply_transform/2)
  end

  defp apply_transform(transform_set, seed_ranges) do
    Enum.reduce_while(transform_set, %{unshifted: seed_ranges, shifted: []}, fn
      _, %{unshifted: []} = acc ->
        {:halt, acc}

      {window, shift_by}, acc ->
        updates =
          acc.unshifted
          |> Enum.reduce(%{unshifted: [], shifted: []}, fn seed_range, acc ->
            cond do
              Range.disjoint?(window, seed_range) ->
                Map.update!(acc, :unshifted, &[seed_range | &1])

              window == seed_range ->
                Map.update!(acc, :shifted, &[Range.shift(seed_range, shift_by) | &1])

              true ->
                {shifted_range, unshifted_ranges} = intersect_and_shift(seed_range, window, shift_by)

                acc
                |> Map.update!(:shifted, &[shifted_range | &1])
                |> Map.update!(:unshifted, &(unshifted_ranges ++ &1))
            end
          end)

        acc
        |> Map.put(:unshifted, updates.unshifted)
        |> Map.update!(:shifted, &(updates.shifted ++ &1))
        |> then(&{:cont, &1})
    end)
    |> then(&(&1.unshifted ++ &1.shifted))
  end

  defp intersect_and_shift(sl..sr//1, tl..tr//1, shift_by) do
    {
      Range.shift(max(sl, tl)..min(sr, tr)//1, shift_by),
      Enum.reject([sl..(tl - 1)//1, (tr + 1)..sr//1], &(Range.size(&1) == 0))
    }
  end
end
```
-
Comment on Day 4: Scratchcards in ~comp.advent_of_code
jzimbel Ah ok, that's more what I'd expect. I was about to be pretty shocked by how much optimization the erlang VM does.
-
Comment on Day 4: Scratchcards in ~comp.advent_of_code
jzimbel Out of curiosity, how are you timing your program? Is it 5ms to solve both parts? Asking because the benchmarks for my solution in elixir are consistently around 4.5 ms or less for each part—I really would not have expected it to run faster than a rust solution!
Specifically, I'm timing only the evaluation of each solution function, which receives the puzzle input already loaded from file, as a string argument. Running on a 2019 MBP.
```
$ mix do advent.solve -d4 -p1 -b + advent.solve -d4 -p2 -b

# (Benchmark for part 1 solution fn)
Benchmarking result ...

Name             ips        average  deviation         median         99th %
result        233.56        4.28 ms     ±6.42%        4.23 ms        5.42 ms

# (Benchmark for part 2 solution fn)
Benchmarking result ...

Name             ips        average  deviation         median         99th %
result        224.13        4.46 ms     ±9.05%        4.35 ms        6.04 ms
```
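(For anyone curious how timing like this works without the benchmarking task: a minimal sketch using only the standard library's `:timer.tc/1`. The `-b` flag above presumably wraps a proper benchmarking tool that reports ips/median stats; this stand-in just averages wall-clock runs, and `solve`/`TimingSketch` are made-up names, not part of my repo.)

```elixir
defmodule TimingSketch do
  # Time one call of `fun.(arg)` in milliseconds.
  # :timer.tc/1 returns {elapsed_microseconds, result}.
  def time_ms(fun, arg) do
    {micros, _result} = :timer.tc(fn -> fun.(arg) end)
    micros / 1000
  end

  # Average over several runs to smooth out noise, the way a benchmark tool would.
  def avg_ms(fun, arg, runs \\ 10) do
    Enum.sum(for _ <- 1..runs, do: time_ms(fun, arg)) / runs
  end
end
```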
-
Comment on Day 4: Scratchcards in ~comp.advent_of_code
jzimbel My partner and I bought and decorated a Christmas tree for the first time in our adult lives last night, so I was a little too tired to stay up past midnight and solve this one quickly. So many pine needles, everywhere!!
Elixir
Parts 1 and 2
I think the thing I had the most trouble with on this one was naming, honestly.
Part 2 seemed like it would be a challenge, but turned out to be quite straightforward once I came up with sensible—if a bit verbose—labels for the two variables that describe each card: `{self_copy_count, child_copy_count}`.

It was nice that you could get away with not parsing any numbers for this one. It works fine just by comparing the numeric strings. You don't even need to use the card IDs!

```elixir
defmodule AdventOfCode.Solution.Year2023.Day04 do
  def part1(input) do
    input
    |> String.split("\n", trim: true)
    |> Enum.map(&parse_card/1)
    |> Enum.map(&score_card/1)
    |> Enum.sum()
  end

  def part2(input) do
    input
    |> String.split("\n", trim: true)
    |> Enum.map(&parse_card/1)
    |> count_cards()
  end

  defp score_card({_, 0}), do: 0
  defp score_card({_, n}), do: Integer.pow(2, n - 1)

  defp count_cards(cards, acc \\ 0)

  defp count_cards([], acc), do: acc

  defp count_cards([{self_copy_count, 0} | children], acc) do
    count_cards(children, acc + self_copy_count)
  end

  defp count_cards([{self_copy_count, child_copy_count} | children], acc) do
    {children_to_copy, unchanged} = Enum.split(children, child_copy_count)

    new_children =
      for {sub_self_copy_count, sub_child_copy_count} <- children_to_copy do
        {sub_self_copy_count + self_copy_count, sub_child_copy_count}
      end

    count_cards(new_children ++ unchanged, acc + self_copy_count)
  end

  defp parse_card(line) do
    [_card_number, contents] = String.split(line, ": ")
    [winners_str, hand_str] = String.split(contents, " | ")

    winners = MapSet.new(String.split(winners_str))
    hand = MapSet.new(String.split(hand_str))

    {1, MapSet.size(MapSet.intersection(winners, hand))}
  end
end
```
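(To illustrate the string-comparison trick on a single card: a sketch using the first example line from the puzzle. The intersection is computed on the number *strings*, never on integers, and the only parsed number is the final score.)

```elixir
line = "Card 1: 41 48 83 86 17 | 83 86  6 31 17  9 48 53"

[_card, contents] = String.split(line, ": ")
[winners_str, hand_str] = String.split(contents, " | ")

# String.split/1 splits on runs of whitespace, so the double spaces are harmless.
matches =
  MapSet.intersection(
    MapSet.new(String.split(winners_str)),
    MapSet.new(String.split(hand_str))
  )
  |> MapSet.size()

# 4 matches ("48", "83", "86", "17"), so the card scores 2^(4 - 1) = 8
IO.inspect({matches, Integer.pow(2, matches - 1)})
```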
-
Comment on Day 3: Gear Ratios in ~comp.advent_of_code
jzimbel In case anyone else had the same bug in their code as I did, here's a hint.
If you swap the sample input given in the puzzle instructions from this:
Original sample input
```
467..114..
...*......
..35..633.
......#...
617*......
.....+.58.
..592.....
......755.
...$.*....
.664.598..
```
Tweaked sample input covering an extra edge case
```
467..114..
...*......
..35...633  # <- line changed
617....#..  # <- line changed
...*......  # <- line changed
.....+.58.
..592.....
......755.
...$.*....
.664.598..
```
Your code should produce the same result.
Elixir
I started on this when the puzzle released at midnight in my timezone, but ended up deciding to sleep on it because I wasn't sure how best to represent the schematic as a data structure.
I'm glad I took some time to think the structure through, because it made querying the data pretty straightforward for both parts.
The code to parse the schematic into my struct is one giant list comprehension, by far the most complicated one I've written in this language, so that was... fun.
Part 1 runs in 4.5-5ms, part 2 runs in 5-5.5ms, which I think is pretty good given everything that's going on here?
Parts 1 and 2
The `Schematic` module defines the data structure (the `defstruct` and the `@type` def), logic to parse it from the input string, and functions to query it for parts 1 and 2.

Since you can't implicitly "re-use" a compound value (like a list or a map) with a pointer/reference in elixir, I had to explicitly make a lookup table using the built-in `reference` type. Coordinates of individual digits, which form part of a whole number, are mapped to a common `reference`. Then, a second map maps the `reference` to the number formed by the group of digits.

This allows me to look up the number(s) adjacent to a symbol in logarithmic time, as well as uniquely identifying each number so I can avoid counting it more than once. The same number can also appear in multiple places in the schematic, so this helped me avoid bugs related to that as well.

```elixir
defmodule AdventOfCode.Solution.Year2023.Day03.Schematic do
  @type coords :: {non_neg_integer, non_neg_integer}
  @type t :: %__MODULE__{
          symbols: %{coords => char},
          digits: %{coords => reference},
          numbers: %{reference => non_neg_integer}
        }

  defstruct symbols: %{}, digits: %{}, numbers: %{}

  def from_input(input) do
    for {line, y} <- Enum.with_index(String.split(input, "\n", trim: true)),
        # Put an additional '.' at the end to make sure trailing digits get finalized as numbers
        line = line <> ".",
        {char, x} <- Enum.with_index(String.to_charlist(line)),
        reduce: %__MODULE__{} do
      # Non-symbol, completes a number
      {acc, start_coords, digits} when char == ?. ->
        finalize_number(acc, start_coords, digits)

      # Non-symbol
      acc when char == ?. ->
        acc

      # Digit, continues a number
      {acc, start_coords, digits} when char in ?0..?9 ->
        {acc, start_coords, [char - ?0 | digits]}

      # Digit, starts a new number
      acc when char in ?0..?9 ->
        {acc, {x, y}, [char - ?0]}

      # Symbol, completes a number
      {acc, start_coords, digits} ->
        acc
        |> finalize_number(start_coords, digits)
        |> then(&%{&1 | symbols: Map.put(&1.symbols, {x, y}, char)})

      # Symbol
      acc ->
        %{acc | symbols: Map.put(acc.symbols, {x, y}, char)}
    end

    # We know the acc will not end up in its {acc, start_coords, digits} alternate form
    # because the string will always end with '.', which will cause `finalize_number`
    # to be called even if the original input ends with a digit.
  end

  def symbol_adjacent_numbers(schematic) do
    # Get coords of all symbols
    schematic.symbols
    |> Map.keys()
    # Get all coords adjacent to those symbols
    |> Stream.flat_map(&adjacent_coords/1)
    # Get number refs for all those coords
    |> Stream.map(&schematic.digits[&1])
    # Remove duplicate refs
    |> Stream.uniq()
    # Remove nil
    |> Stream.reject(&is_nil/1)
    # Look up number for each ref
    |> Stream.map(&schematic.numbers[&1])
  end

  def gear_ratios(schematic) do
    # Get coords of all '*' symbols
    schematic.symbols
    |> Stream.flat_map(fn {coords, symbol} -> if symbol == ?*, do: [coords], else: [] end)
    # Get all coords adjacent to those symbols, keeping a separate list of adjacents per symbol
    |> Stream.map(&adjacent_coords/1)
    # Get number refs for each set of adjacent coords, filtering out duplicates & nil
    |> Stream.map(fn adjacents ->
      adjacents
      |> Stream.map(&schematic.digits[&1])
      |> Stream.uniq()
      |> Enum.reject(&is_nil/1)
    end)
    # Filter to those with exactly 2 adjacent number refs
    |> Stream.filter(&match?([_, _], &1))
    # Look up number for each ref and multiply each pair of numbers
    |> Stream.map(fn [ref1, ref2] -> schematic.numbers[ref1] * schematic.numbers[ref2] end)
  end

  # List of coords that produce the 8 coordinates surrounding a given coord when added to it
  @all_adjacent_deltas for x <- -1..1, y <- -1..1, not (x == 0 and y == 0), do: {x, y}

  defp adjacent_coords(coords) do
    Enum.map(@all_adjacent_deltas, &sum_coords(&1, coords))
  end

  defp sum_coords({x1, y1}, {x2, y2}), do: {x1 + x2, y1 + y2}

  defp finalize_number(acc, {x, y} = _start_coords, digits) do
    number = digits |> Enum.reverse() |> Integer.undigits()
    ref = make_ref()
    digit_positions = Map.new(0..(length(digits) - 1)//1, &{{x + &1, y}, ref})

    %{
      acc
      | digits: Map.merge(acc.digits, digit_positions),
        numbers: Map.put(acc.numbers, ref, number)
    }
  end
end

defmodule AdventOfCode.Solution.Year2023.Day03 do
  alias __MODULE__.Schematic

  def part1(input) do
    input
    |> Schematic.from_input()
    |> Schematic.symbol_adjacent_numbers()
    |> Enum.sum()
  end

  def part2(input) do
    input
    |> Schematic.from_input()
    |> Schematic.gear_ratios()
    |> Enum.sum()
  end
end
```
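(The ref-based interning scheme in isolation, with toy data: every digit cell of one number shares a single `make_ref/0` value in the `digits` map, and a second `numbers` map resolves that ref to the value. The coordinates and the number 467 here are made up for the sketch.)

```elixir
# "467" occupies row 0, columns 0..2; all three cells share one ref.
ref = make_ref()
digits = %{{0, 0} => ref, {1, 0} => ref, {2, 0} => ref}
numbers = %{ref => 467}

# Two different cells of the same number resolve to the same ref, so
# deduplicating refs (rather than values) avoids double-counting a number
# that happens to appear twice in the grid.
true = digits[{0, 0}] == digits[{2, 0}]
467 = numbers[digits[{1, 0}]]
```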