7 votes

Day 5: Cafeteria

Today's problem description: https://adventofcode.com/2025/day/5

Please post your solutions in your own top-level comment. Here's a template you can copy-paste into your comment to format it nicely, with the code collapsed by default inside an expandable section with syntax highlighting (you can replace `python` with any of the "short names" listed on this page of supported languages):

<details>
<summary>Part 1</summary>

```python
Your code here.
```

</details>

16 comments

  1. [4]
    jzimbel

    Elixir

    Both parts I figured it would help to merge overlapping ranges for part 1, and it turned out to be necessary for part 2.

    I went a little overboard trying to squeeze stuff onto single lines; my code was more legible earlier. Ah well.

    defmodule AdventOfCode.Solution.Year2025.Day05 do
      use AdventOfCode.Solution.SharedParse
    
      @impl true
      def parse(input) do
        [ranges, ids] = String.split(input, "\n\n")
        {parse_and_consolidate_ranges(ranges), parse_ids(ids)}
      end
    
      def part1({ranges, ids}), do: Enum.count(ids, fn id -> Enum.any?(ranges, &(id in &1)) end)
      def part2({ranges, _ids}), do: Enum.sum_by(ranges, &Range.size/1)
    
      defp consolidate(ranges) do
        [hd_range | ranges] = Enum.sort_by(ranges, & &1.first)
    
        Enum.chunk_while(
          ranges,
          hd_range,
          fn _..b2//1 = r2, a1..b1//1 = r1 ->
            if Range.disjoint?(r1, r2), do: {:cont, r1, r2}, else: {:cont, a1..max(b1, b2)//1}
          end,
          fn final_range -> {:cont, final_range, nil} end
        )
      end
    
      defp parse_and_consolidate_ranges(str) do
        str
        |> String.split()
        |> Enum.map(fn line ->
          [first, last] = String.split(line, "-")
          String.to_integer(first)..String.to_integer(last)//1
        end)
        |> consolidate()
      end
    
      defp parse_ids(str) do
        str
        |> String.split()
        |> Enum.map(&String.to_integer/1)
      end
    end
    
    Benchmarks
    Name             ips        average  deviation         median         99th %
    Part 2     1401.14 K        0.71 μs   ±888.65%        0.71 μs        0.83 μs
    Parse         8.53 K      117.22 μs     ±4.82%      118.33 μs      131.04 μs
    Part 1        1.05 K      956.85 μs     ±1.20%      956.63 μs      983.61 μs
    
    3 votes
    1. [3]
      hpr

      Very similar. Even though I banged my head against a silly error in the range consolidation, I'm a little proud I have you very slightly beat on this one!

      Code
        ### PARSING ###
      
        def parse, do: parse(AocElixir.read_input(5))
      
        def parse(input) do
          [fresh_lines, test_lines] =
            input
            |> String.split("\n\n")
            |> Enum.map(&String.split(&1, "\n"))
      
          fresh_ranges = fresh_lines |> parse_fresh() |> compact_ranges()
          test_ids = test_lines |> Enum.map(&String.to_integer/1)
      
          {fresh_ranges, test_ids}
        end
      
        defp contains?({from, to}, num), do: from <= num and num <= to
      
        defp map_ranges(range, :default), do: {[], range}
      
        defp map_ranges({new_from, new_to}, {current_from, current_to}) do
          if contains?({current_from, current_to}, new_from) do
            # don't add the element just yet, instead merge it into accumulator, we're gathering a bigger range
            {[], {current_from, max(new_to, current_to)}}
          else
            # stop gathering the current range and add it into the list, start gathering the new range as the accumulator to check merges for next elements
            {[{current_from, current_to}], {new_from, new_to}}
          end
        end
      
        defp merge_last_accumulator({results, last_acc}), do: results ++ [last_acc]
      
        def compact_ranges(ranges) do
          ranges
          |> Enum.group_by(fn {from, _to} -> from end, fn {_from, to} -> to end)
          |> Enum.map(fn {key, values} -> {key, Enum.max(values)} end)
          |> Enum.sort_by(fn {from, _to} -> from end)
          |> Enum.flat_map_reduce(:default, &map_ranges/2)
          |> merge_last_accumulator()
        end
      
        def parse_fresh(range_lines) do
          range_lines
          |> Enum.map(&String.split(&1, "-"))
          |> Enum.map(fn [from, to] -> {String.to_integer(from), String.to_integer(to)} end)
        end
      
        ### PART 1 ###
      
        def fresh?(fresh_ranges, num), do: Enum.any?(fresh_ranges, &contains?(&1, num))
      
        def part1({fresh_ranges, test_nums}), do: Enum.count(test_nums, &fresh?(fresh_ranges, &1))
      
        ### PART 2 ###
      
        defp num_ids({from, to}), do: to - from + 1
      
        def part2({fresh_ranges, _test_nums}), do: fresh_ranges |> Enum.sum_by(&num_ids(&1))
      
      Benchmarks
      Name               ips        average  deviation         median         99th %
      part two     1533.17 K        0.65 μs    ±70.29%        0.64 μs        0.76 μs
      parse           5.67 K      176.24 μs    ±11.47%      172.19 μs      236.19 μs
      part one        2.87 K      348.02 μs     ±3.81%      344.16 μs      386.45 μs
      

      Also, is it all just tuples, arrays, maps and such usually in Elixir?
      Coming from OOP, I feel a little... type-starved :D
      But it would've been a good idea to split out the Range-stuff as you did.

      2 votes
      1. [2]
        jzimbel

        Oh wow, I did not realize `in` (which is just a macro that compiles to an `Enum.member?/2` call, I believe) had that much overhead for ranges!

        If I change the anonymous fn passed to Enum.any?/2 in my part 1 solution to this:

        &(id >= &1.first and id <= &1.last)
        

        it has the exact same performance as yours.

        Seems like the generalized logic for ranges with steps other than 1 slows things down a lot. (There's also some overhead from dispatching via the Enumerable protocol.)

        I wonder why they don't just add a separate clause with the faster logic for the most commonly used step size of 1.

        Re: limited type options, OOP concept relations

        Yeah, the main composite types available in elixir are much simpler than what you can cook up in most OOP languages. There are also structs, but they're basically just maps with well-defined keys.

        The limited inventory of core types is partly due to how deeply pattern matching is baked into the language and even the BEAM VM. Simple types allow for powerful, fast pattern matching.

        Also related: a big part of Elixir and the larger OTP ecosystem, which you unfortunately won't get much practice with on AoC puzzles, is stateful message-passing processes like Agents and GenServers. These are conceptually analogous to objects: they bundle some well-defined data structure with functions that act on it and a public/private interface. The big difference is that they behave more like mini-servers internal to your application, running receive-act-respond loops, with strong concurrency guarantees.

        1 vote
        1. hpr

          I wonder why they don't just add a separate clause with the faster logic for the most commonly used step size of 1.

          Strange indeed, the more you know!

          Re: Re: ... types ...: Yeah I think for puzzles like this, that's actually pretty fine.
          Indeed, it's probably much more convenient than the usual OOP type-handling. I'm really enjoying multiple function definitions with pattern matching as a replacement for if-statements, tbh. (Although as a beginner I've already run into some weirdness around the restrictions on what can and cannot be used as a guard.)

          I'm wondering how this would pan out in large applications, where (in my experience) it's very handy to restrict what kind of operations you can do on certain data to prevent "illegal" state. In the same vein, I do wonder how I'd deal with the dynamic typing in that kind of environment once you tackle some larger refactoring. (You see, I've only professionally used static languages)

          When I became interested in a BEAM-language, I first looked at Gleam as a static language, but Elixir simply seemed more mature.

  2. [3]
    thecakeisalime

    I foolishly didn't look at the input until I'd written a solution to Part 1, then when I saw how large the numbers were, I figured "eh, I'll run it and see what happens". Apparently, using up all my memory and crashing my computer is what happens.

    And Part 2 had so many off-by-one errors. This is the first time I've had to submit an answer more than twice, and one of my submissions was only 16 away from the correct answer.

    Part 1&2 - Python
    def parse_input(lines):
      idx = lines.index('')
      fresh = lines[:idx]
      available = lines[idx+1:]
    
      return fresh, available
    
    def part_one(lines):
      fresh, available = parse_input(lines)
      count = 0
    
      for a in available:
        n = int(a)
        for f in fresh:
          values = f.split('-')
          if int(values[0]) <= n <= int(values[1]):
            count += 1
            break
      return count
      
    
    def part_two(lines):
      fresh = parse_input(lines)[0]
      pairs = [list(map(int, f.split('-'))) for f in fresh]
      pairs.sort()
      count = 0
    
      for p1, p2 in zip(pairs[:-1], pairs[1:]):
        if p1[1] >= p2[1]:
          p2[0] = -1
          p2[1] = p1[1]
        elif p1[1] >= p2[0]:
          p2[0] = p1[1]+1
        
        if (p1[0] != -1):
          count += p1[1] - p1[0] + 1
      
      if (-1 != pairs[-1][0] and pairs[-1][0] <= pairs[-1][1]):
        count += pairs[-1][1] - pairs[-1][0] + 1
    
      return count
    
    
    with open('05.in') as file:
      lines = [line.rstrip() for line in file]
    
    print('Part 1:', part_one(lines))
    print('Part 2:', part_two(lines))
    
    3 votes
    1. [2]
      mawelborn

      I had almost the exact same experience! My first approach was to create a set of all fresh inventory IDs, which promptly consumed all RAM and froze XFCE. I had to switch to a TTY and kill it in htop before it took out my swap too. Then I struggled with so many off-by-one errors that I wrote a custom Range type to make it easier to get right.

      2 votes
      1. hpr

        Another similar experience here.
        Also tried with a set, but I killed the process before anything bad could happen.

        Then I tried to be "clever" and thought it might somehow take less memory if I packed it all into bit-mask where 1 indicated presence and 0 indicated non-presence. Turns out, the Erlang VM does not like arbitrarily large bit-shifts.
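
        For what it's worth, Python's arbitrary-precision integers do tolerate shifts of that size, so the bitmask idea itself is workable there, memory aside. A sketch with made-up ranges (the real input makes this integer enormous, which is exactly the problem):

        ```python
        # Bitmask sketch: bit i is 1 iff ingredient ID i is fresh.
        # Illustrative inclusive ranges, including an overlap.
        ranges = [(3, 5), (10, 14), (4, 8)]

        mask = 0
        for lo, hi in ranges:
            # Set bits lo..hi; overlapping ranges simply OR together.
            mask |= ((1 << (hi - lo + 1)) - 1) << lo

        part2 = bin(mask).count("1")  # distinct fresh IDs -> 11 here
        ```

        With IDs anywhere near the size in the puzzle input, though, the mask needs one bit per possible ID, so the merged-ranges representation wins by an enormous margin.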

        ...and I also struggled with a very basic error because of a case that wasn't handled in the demonstration data, but was in my input.

        2 votes
  3. scarecrw

    Very much needed a good day after yesterday frustrated me so much. It just seemed like yesterday everything I tried with the language went wrong, and today it fell together nicely.

    Nothing creative on the algorithmic front; I suppose there may be some efficiency gains to be had on both parts by keeping ranges sorted, but the list was short enough it didn't matter.

    Prolog Solution
    :- ['src/util.pl'].
    
    parse_range(RangeString, [Start, End]):-
        split_string(RangeString, "-", "", [A, B]),
        number_string(Start, A),
        number_string(End, B).
    
    in_ranges(Ranges, ID):-
        member([Start, End], Ranges),
        between(Start, End, ID).
    
    overlap([S1, E1], [S2, E2]):-
        S1 =< E2,
        S2 =< E1.
    
    merge_range_rangelist(Range, RangeList, Result):-
        partition(overlap(Range), RangeList, OverlappingRanges, OtherRanges),
        flatten([Range|OverlappingRanges], EndPoints),
        min_list(EndPoints, Start),
        max_list(EndPoints, End),
        Result = [[Start, End]|OtherRanges].
    
    range_size([A, B], RangeSize):-
        RangeSize is B - A + 1.
    
    main:-
        Filename = 'inputs/day05.txt',
        read_file_to_strings(Filename, Lines),
        append(RangeStrings, [""|IDStrings], Lines),
        maplist(parse_range, RangeStrings, Ranges),
        maplist(string_number, IDStrings, IDs),
    
        include(in_ranges(Ranges), IDs, FreshIDs),
        length(FreshIDs, Result1),
        write("Part 1: "), write(Result1), nl,
    
        foldl(merge_range_rangelist, Ranges, [], MergedRanges),
        maplist(range_size, MergedRanges, RangeSizes),
        sumlist(RangeSizes, Result2),
        write("Part 2: "), write(Result2), nl,
        halt.
    
    :- initialization(main).
    

    I tried reading up on difference lists and DCGs this afternoon and did not get anywhere with it. I'll have to give it another go with a clearer head.

    2 votes
  4. lily

    I keep expecting the problems to suddenly get harder, but I guess we haven't reached that point yet. Surprisingly easy day, though part 2 was pretty interesting. I'm not sure what I'm doing here is the most efficient way to do it (nor am I entirely convinced it does the job in all cases, though it works fine for my input), but it was the first thing I could think of.

    (Updated to use Lily's List methods instead of manual loops. It's almost certainly no more efficient than what I had before, but I thought it might be a better showcase of some of Lily's features.)

    Solution (Lily)
    var line_groups = File
        .read_to_string("inputs/day_05.txt")
        .split("\n\n")
        .map(|string| string.split("\n"))
    
    var ranges = line_groups[0]
        .map(|line|
            var endpoints = line.split("-")
            <[endpoints[0].parse_i().unwrap(), endpoints[1].parse_i().unwrap()]>
        )
        .sort(|a, b| a[0] - b[0])
        .accumulate([].@(List[Tuple[Integer, Integer]]), (|merged_ranges, range|
            if merged_ranges: {
                var previous = merged_ranges[-1]
                if range[0] <= previous[1]: {
                    if range[1] > previous[1]: {
                        previous[1] = range[1]
                    }
                else:
                    merged_ranges.push(range)
                }
            else:
                merged_ranges.push(range)
            }
        ))
    
    print("Part 1: {}\nPart 2: {}".format(
        line_groups[1].map(|line| line.parse_i().unwrap()).count(|id|
            ranges.any(|range| id >= range[0] && id <= range[1])
        ),
        ranges.fold(0, (|sum, range| sum + range[1] - range[0] + 1))
    ))
    
    2 votes
  5. [2]
    Berdes

    For part 1, I directly implemented some logic to sort the ranges and merge the overlapping ones. The idea was to be able to binary search the ranges to see if an ID is part of one of the ranges. In the end, the binary search idea was barely faster than a linear scan using the sorted+merged ranges. Maybe an approach based on a binary tree would have been faster?

    This approach made part 2 extremely trivial since I already had a list of ranges without any overlap.
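
    In Python terms, the lookup against sorted, merged ranges can be sketched with the standard `bisect` module (the `(start, end)` tuple layout here is illustrative, not the representation the Rust code uses):

    ```python
    import bisect

    # Assumes `merged` is already sorted and non-overlapping, i.e. the
    # output of a sort-and-merge pass over the fresh ranges.
    merged = [(3, 9), (12, 14), (20, 25)]  # inclusive (start, end) pairs
    starts = [s for s, _ in merged]        # precomputed once for all queries

    def is_fresh(ingredient_id: int) -> bool:
        # Rightmost range whose start is <= the ID; the ID is fresh iff
        # it also sits at or below that range's end.
        i = bisect.bisect_right(starts, ingredient_id) - 1
        return i >= 0 and ingredient_id <= merged[i][1]
    ```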

    Part 1 and 2 (Rust)
    use std::fmt::Display;
    use std::io;
    use std::time::Instant;
    use std::vec::Vec;
    
    #[derive(Clone)]
    struct Range {
      start: u64,
      end: u64, // inclusive
    }
    
    impl Range {
      fn from_str(s: String) -> Self {
        let (str_start, str_end) = s.split_once('-').unwrap();
        Range {
          start: str_start.parse().expect(""),
          end: str_end.parse().expect(""),
        }
      }
    
      fn contains(&self, i: u64) -> bool {
        self.start <= i && i <= self.end
      }
    }
    
    struct Input {
      fresh_ranges: Vec<Range>,
      inventory: Vec<u64>,
    }
    
    impl Input {
      fn read_from_stdin() -> Input {
        let mut fresh_ranges = vec![];
        let mut inventory = vec![];
        let mut parse_ranges = true;
        for line in io::stdin().lines().map(|l| l.unwrap()) {
          if line == "" {
            parse_ranges = false;
            continue;
          }
          if parse_ranges {
            fresh_ranges.push(Range::from_str(line));
          } else {
            inventory.push(line.parse::<u64>().expect(""));
          }
        }
        Input {
          fresh_ranges,
          inventory,
        }
      }
    }
    
    fn merge_ranges(ranges: &Vec<Range>) -> Vec<Range> {
      let mut sorted = ranges.clone();
      sorted.sort_by_key(|r| r.start);
      let mut merged: Vec<Range> = vec![];
      for r in sorted {
        if let Some(last) = merged.last() {
          if last.end >= r.start {
            let end = &mut merged.last_mut().unwrap().end;
            *end = r.end.max(*end);
          } else {
            merged.push(r);
          }
        } else {
          merged.push(r);
        }
      }
      merged
    }
    
    fn part1_direct(input: &Input) -> u64 {
      let mut ret = 0;
      for &i in &input.inventory {
        for r in &input.fresh_ranges {
          if r.contains(i) {
            ret += 1;
            break;
          }
        }
      }
      ret
    }
    
    fn part1_merged(input: &Input) -> u64 {
      let merged_fresh = merge_ranges(&input.fresh_ranges);
      let mut ret = 0;
      for &i in &input.inventory {
        for r in &merged_fresh {
          if i <= r.end {
            if i >= r.start {
              ret += 1;
            }
            break;
          }
        }
      }
      ret
    }
    
    fn part1_bsearch(input: &Input) -> u64 {
      let merged_fresh = merge_ranges(&input.fresh_ranges);
      let mut ret = 0;
      for &i in &input.inventory {
        let mut s = &merged_fresh[..];
        while !s.is_empty() {
          let pivot_idx = s.len() / 2;
          let pivot = &s[pivot_idx];
          if i < pivot.start {
            s = &s[..pivot_idx];
          } else if i > pivot.end {
            s = &s[(pivot_idx + 1)..];
          } else {
            ret += 1;
            break;
          }
        }
      }
      ret
    }
    
    fn part2(input: &Input) -> u64 {
      let merged_fresh = merge_ranges(&input.fresh_ranges);
      let mut ret = 0;
      for r in merged_fresh {
        ret += r.end - r.start + 1;
      }
      ret
    }
    
    fn timed_run<T: Display, F: FnOnce() -> T>(name: &str, f: F) {
      let start = Instant::now();
      let result = f();
      let time = start.elapsed();
      println!("{}: {}\n  in {:?}", name, result, time);
    }
    
    fn main() {
      let input = Input::read_from_stdin();
      timed_run("Part 1 direct", || part1_direct(&input));
      timed_run("Part 1 merged", || part1_merged(&input));
      timed_run("Part 1 bsearch", || part1_bsearch(&input));
      timed_run("Part 2", || part2(&input));
    }
    
    1 vote
    1. Berdes

      So, I managed to implement two solutions using BTreeMap, which required using the nightly build to get access to the Cursor API.

      The first approach was to use the key to represent a bound and the value to represent whether it's a starting point or an ending point. When trying to find whether an ID is part of a range, I can query for the largest key that is smaller or equal to the ID. If it corresponds to a starting point, I know my ID is inside a range. One detail with this approach is that I needed to store the ending points with a value + 1 (i.e. the ending point is excluded from the range), otherwise ranges like [5, 5] would collide on the key 5. The side benefit of having excluded ending points is that ranges that are next to each other would still get merged: [3, 5] [6, 8] would become [3, 8].

      Using start and end points as keys
      #[derive(Debug)]
      enum Node {
        Start,
        End,
      }
      
      fn merge_ranges_btree_double(ranges: &Vec<Range>) -> BTreeMap<u64, Node> {
        let mut merged = BTreeMap::<u64, Node>::new();
        for r in ranges {
          let mut cursor = merged.upper_bound_mut(Bound::Excluded(&r.start));
          while cursor
            .peek_next()
            .map_or(false, |(nk, _)| nk <= &(r.end + 1))
          {
            cursor.remove_next();
          }
          match cursor.peek_prev() {
            Some((_, &mut Node::Start)) => {}
            _ => cursor.insert_before(r.start, Node::Start).expect(""),
          }
          match cursor.peek_next() {
            Some((_, &mut Node::End)) => {}
            _ => cursor.insert_before(r.end + 1, Node::End).expect(""),
          }
        }
        merged
      }
      
      fn part1_btree_double(input: &Input) -> u64 {
        let merged_fresh = merge_ranges_btree_double(&input.fresh_ranges);
        let mut ret = 0;
        for i in &input.inventory {
          match merged_fresh.upper_bound(Bound::Included(i)).peek_prev() {
            Some((_, Node::Start)) => ret += 1,
            _ => {}
          }
        }
        ret
      }
      
      fn part2_btree_double(input: &Input) -> u64 {
        let merged = merge_ranges_btree_double(&input.fresh_ranges);
        let mut ret = 0;
        let mut last_start = 0;
        for (&k, v) in merged.iter() {
          match v {
            Node::Start => last_start = k,
            Node::End => ret += k - last_start,
          }
        }
        ret
      }
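
      Once the ranges are merged, the start/end markers strictly alternate, so the same predecessor query can be phrased as a parity check over a flat sorted list. A Python sketch of the lookup side only, with illustrative numbers (incremental insertion is where the BTreeMap earns its keep):

      ```python
      import bisect

      # Alternating boundaries of the merged ranges [3, 5] and [9, 12],
      # with end bounds stored exclusive (value + 1), as described above.
      bounds = [3, 6, 9, 13]

      def is_fresh(ingredient_id: int) -> bool:
          # Largest bound <= the ID; an even index means it was a start
          # marker, so the ID falls inside a range.
          i = bisect.bisect_right(bounds, ingredient_id) - 1
          return i >= 0 and i % 2 == 0
      ```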
      

      The second approach (likely more sane) was to have the key as start point and the value as end point. It's basically the same as having a sorted vector of (start, end) points, except that it's not necessary to pre-sort the list and removing an element in the middle should be cheap (amortized O(1) I think?). Similarly to the previous approach, finding whether an ID is in one of the ranges requires querying for the largest key that is smaller or equal to the ID and checking whether the ID is less or equal to the end point.

      Start points as key, end points as values
      fn merge_ranges_btree(ranges: &Vec<Range>) -> BTreeMap<u64, u64> {
        let mut merged = BTreeMap::<u64, u64>::new();
        for r in ranges {
          let mut cursor = merged.upper_bound_mut(Bound::Included(&r.start));
          match cursor.peek_prev() {
            Some((_, pend)) => {
              if r.start <= *pend {
                *pend = r.end.max(*pend);
              } else {
                cursor.insert_before(r.start, r.end).expect("");
              }
            }
            None => cursor.insert_before(r.start, r.end).expect(""),
          }
          // Can't hold a mutable borrow of the previous item while still accessing cursor to peek at the
          // next.
          let mut pend = cursor.peek_prev().unwrap().1.clone();
          loop {
            if let Some((nstart, nend)) = cursor.peek_next() {
              if *nstart <= pend {
                pend = pend.max(*nend);
                cursor.remove_next();
              } else {
                break;
              }
            } else {
              break;
            }
          }
          *cursor.peek_prev().unwrap().1 = pend;
        }
        merged
      }
      
      fn part1_btree(input: &Input) -> u64 {
        let merged_fresh = merge_ranges_btree(&input.fresh_ranges);
        let mut ret = 0;
        for i in &input.inventory {
          match merged_fresh.upper_bound(Bound::Included(i)).peek_prev() {
            Some((_, end)) => {
              if i <= end {
                ret += 1
              }
            }
            _ => {}
          }
        }
        ret
      }
      
      fn part2_btree(input: &Input) -> u64 {
        let merged = merge_ranges_btree(&input.fresh_ranges);
        let mut ret = 0;
        for (&k, v) in merged.iter() {
          ret += v - k + 1;
        }
        ret
      }
      

      The second approach was (unsurprisingly) faster than the first one. Sadly, both were slower than the vector-based approaches I used previously. I don't know if this is just the vector based approaches being inherently faster because of the memory layout, amount of branching and data dependency between instructions, or if it's just the BTreeMap implementation not being super well optimized. The Cursor API is still in nightly, so I wouldn't be surprised if the implementation was a bit sub-optimal.

      Runtime comparison of various approaches

      Not really the best benchmark since I only run each thing once with no warmup. I ran the program a couple of times to get a sense of which runtime was normal and which one was just an outlier (I guess due to the computer doing other stuff in the background). The parsing of the input is excluded.

      Part 1 direct in 169.504µs
      Part 1 merged in 27.85µs
      Part 1 bsearch in 27.166µs
      Part 1 btree start+end in 49.206µs
      Part 1 btree in 37.387µs
      Part 2 in 5.801µs
      Part 2 btree start+end in 18.164µs
      Part 2 btree in 9.65µs
      
      2 votes
  6. [3]
    mawelborn

    Ooh first day where the simplest naive solution ran out of memory ^.^

    For Part 01, doing a linear search of the ranges worked without issue and is still very straightforward.

    Part 02 gave me a bear of a time with the range overlap checks and off-by-one errors. I eventually created a custom type for the ranges that factors out the calculations, making it easier to get right. Once I had that, sorting, merging, and summing the ranges was easy.

    I'm not super pleased with my range merging generator. It's clean enough, but I feel like there should be a more concise way to say "take each range and merge it with the previous if they overlap" that doesn't require nested conditionals. Let me know if you've got an itertools-based approach or something that you think is better!
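
    Not itertools exactly, but one fold-style phrasing for comparison — a sketch with `functools.reduce` (names are illustrative; it trades the generator for a reduce but keeps the same branch):

    ```python
    from functools import reduce

    def merge_step(acc: list, r: tuple) -> list:
        # Extend the last merged range if the next one overlaps it
        # (inclusive ends); otherwise start a new range.
        if acc and r[0] <= acc[-1][1]:
            acc[-1] = (acc[-1][0], max(acc[-1][1], r[1]))
        else:
            acc.append(r)
        return acc

    ranges = [(3, 5), (10, 14), (4, 9)]  # illustrative input
    merged = reduce(merge_step, sorted(ranges), [])  # -> [(3, 9), (10, 14)]
    ```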

    Part 1
    def solve(input: str) -> int:
        fresh_slices = list(fresh_ingredient_slices(input))
        available_ids = set(available_ingredient_ids(input))
        return len([
            available_id
            for available_id in available_ids
            if any(
                slice.start <= available_id <= slice.stop
                for slice in fresh_slices
            )
        ])
    
    
    def fresh_ingredient_slices(input: str) -> Iterator[slice]:
        id_ranges = input.split("\n\n")[0]
        for id_range in id_ranges.split():
            yield slice(*map(int, id_range.split("-")))
    
    
    def available_ingredient_ids(input: str) -> Iterator[int]:
        ids = input.split("\n\n")[1]
        yield from map(int, ids.split())
    
    Part 2
    @dataclass(frozen=True, order=True)
    class Range:
        start: int
        _stop: int
    
        @property
        def stop(self) -> int:
            return self._stop + 1
    
        @staticmethod
        def from_str(ids: str) -> "Range":
            return Range(*map(int, ids.split("-")))
    
        def __bool__(self) -> bool:
            return self != NULL_RANGE
    
        def __len__(self) -> int:
            return self.stop - self.start
    
        def __or__(self, other: "Range") -> "Range":
            if self.stop < other.start or other.stop < self.start:
                return NULL_RANGE
    
            return Range(
                min(self.start, other.start),
                max(self._stop, other._stop),
            )
    
    
    NULL_RANGE: Final = Range(-1, -1)
    
    
    def solve(input: str) -> int:
        return sum(map(len, fresh_ingredient_ranges(input)))
    
    
    def fresh_ingredient_ranges(input: str) -> Iterator[Range]:
        merged_range = NULL_RANGE
    
        for range in sorted(map(Range.from_str, input.split())):
            if merged_range | range:
                merged_range |= range
            else:
                if merged_range: yield merged_range
                merged_range = range
    
        yield merged_range
    
    1 vote
    1. [2]
      lily

      I was curious myself whether there was a way to do the merging without sorting the list of ranges first. Your logic looks pretty close to my own, though, so maybe that is the best way to do it...? Performance seems fine in any case, so it probably doesn't matter much anyway.

      2 votes
      1. mawelborn

        I think it probably is. Making the null range have a length of zero lets me eliminate one of the conditionals, since yielding it won't affect the sum. But I think sorting + overlap check is pretty much required.

        Definitely no performance issues though! Both parts run in ~0.2s total.

        2 votes
  7. DeaconBlue

    Running in 337.65µs on my machine, first sub-1ms entry so far this year!

    Order the ranges by their lower number. If a range's higher number is at least the next range's lower number, merge the two ranges.

    Part 2
    use std::{
        env,
        fs::File,
        io::{self, BufRead},
        path::Path,
        time::Instant,
    };
    
    fn main() {
        let now = Instant::now();
        let args: Vec<String> = env::args().collect();
    
        let mut ranges = vec![];
        if let Ok(lines) = read_lines(&args[1]) {
            for line in lines {
                let line = line.expect("failed to read line");
                if line.contains("-") {
                    let mut split = line.split("-");
                    ranges.push((
                        split.next().unwrap().parse::<i64>().unwrap(),
                        split.next().unwrap().parse::<i64>().unwrap(),
                    ));
                }
            }
        }
        ranges.sort();
    
        let mut i = 0;
        while i < ranges.len() - 1 {
            if ranges[i].1 >= ranges[i + 1].0 {
                ranges[i + 1] = (
                    (ranges[i].0).min(ranges[i + 1].0),
                    (ranges[i].1).max(ranges[i + 1].1),
                );
                ranges.remove(i);
                continue;
            }
            i += 1;
        }
    
        let sum: i64 = ranges.into_iter().map(|x| x.1 - x.0 + 1).sum();
        println!("{sum}");
    
        let elapsed = now.elapsed();
        println!("Elapsed: {:.2?}", elapsed);
    }
    fn read_lines<P>(filename: P) -> io::Result<io::Lines<io::BufReader<File>>>
    where
        P: AsRef<Path>,
    {
        let file = File::open(filename)?;
        Ok(io::BufReader::new(file).lines())
    }
    
    1 vote
  8. balooga
    Link

    TypeScript

    Seems like the range sort/merge approach is probably the only sane way to tackle this one. Once I cleared that mental block it was smooth sailing, and Part 2 ended up being faster and simpler than Part 1 (a welcome change)!

    I did have some fun modifying the array of ranges in-place with Array.prototype.splice() while looping through it, carefully managing the current index to recompare merged ranges after each mutation. Some judicious use of continue and break in the Part 1 nested loops helps avoid unnecessary iterations.

    Parts 1 and 2
    import { RunFunction } from '../../../types';
    
    type Ranges = [number, number][];
    interface InputData {
      freshRanges: Ranges;
      ids: number[];
    }
    
    function formatInput(input: string): InputData {
      const [freshRangesRaw, idsRaw] = input.trim().split('\n\n');
      return {
        freshRanges: freshRangesRaw
          .split('\n')
          .map(freshRangeRaw => freshRangeRaw.split('-').map(boundString => Number(boundString)) as [number, number]),
        ids: idsRaw.split('\n').map(idString => Number(idString)),
      };
    }
    
    export const run: RunFunction<InputData> = () => {
      const mergeRanges = (freshRanges: Ranges): Ranges => {
        const sortedRanges = freshRanges.sort(([a], [b]) => a - b);
        for (let i = 0; i < sortedRanges.length - 1; i++) {
          // no overlap between ranges, do nothing
          if (sortedRanges[i + 1][0] > sortedRanges[i][1]) {
            continue;
          }
          // partial overlap, merge ranges (keep start of A and end of B)
          if (sortedRanges[i + 1][1] > sortedRanges[i][1]) {
            sortedRanges.splice(i, 2, [sortedRanges[i][0], sortedRanges[i + 1][1]]);
            i--;
            continue;
          }
          // range A fully overlaps B, so just delete B
          sortedRanges.splice(i + 1, 1);
          i--;
        }
        return sortedRanges;
      };
    
      const countAvailableFreshIds = ({ freshRanges, ids }: InputData): number => {
        const mergedRanges = mergeRanges(freshRanges);
        let total = 0;
        for (const id of ids) {
          for (const [start, end] of mergedRanges) {
            if (id >= start) {
              if (id <= end) {
                total++;
                break;
              }
              continue;
            }
            break;
          }
        }
        return total;
      };
    
      const countAllFreshIds = ({ freshRanges }: InputData): number => {
        const mergedRanges = mergeRanges(freshRanges);
        let total = 0;
        for (const [start, end] of mergedRanges) {
          total += end - start + 1;
        }
        return total;
      };
    
      return [formatInput, countAvailableFreshIds, countAllFreshIds];
    };
    

    On my machine the median input formatting time is 115µs, Part 1 is 225µs, and Part 2 is 12µs. This is my first solution since improving my benchmark methodology earlier today. Eagle eyes may notice slightly different scaffolding syntax in today's code, to support that.

    It made me really happy to see <1ms times for all three parts. These accurate benchmarks make me feel more like a real contender after all, even as a JS goof trying to keep up with you hardcore Rust gurus and your ilk. Turns out, JS can be pretty quick too!

    1 vote