Day 11: Reactor

Today's problem description: https://adventofcode.com/2025/day/11

Please post your solutions in your own top-level comment. Here's a template you can copy-paste into your comment to format it nicely, with the code collapsed by default inside an expandable section with syntax highlighting (you can replace python with any of the "short names" listed on this page of supported languages):

<details>
<summary>Part 1</summary>

```python
Your code here.
```

</details>

3 comments

  1. lily

    Glad to have an easy day for a change! Just a simple recursive search with memoization (not needed for part 1, but it doesn't hurt). Part 2 is just the product of the solutions to three separate problems solved the same way as part 1. Would've finished this quicker if I hadn't wasted time writing a naive BFS solution that would take forever to finish executing, but oh well. I wonder what tomorrow will be like... this feels a bit like the calm before the storm.
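
    For anyone who doesn't read Lily, here's the same memoized recursive count sketched quickly in Python (graph here maps each device name to its list of outputs, parsed from the same "name: a b c" lines; not my actual solution, which is below):

    from functools import lru_cache

    def count_paths(graph: dict[str, list[str]], start: str, end: str) -> int:
        # Memoized recursive count of distinct start -> end paths in the DAG.
        @lru_cache(maxsize=None)
        def go(node: str) -> int:
            if node == end:
                return 1
            # Dead ends (e.g. "out") contribute no further paths.
            return sum(go(child) for child in graph.get(node, []))
        return go(start)

    # Part 1: count_paths(graph, "you", "out")
    # Part 2: product of the three segment counts, taking dac/fft in whichever
    # order they appear between "svr" and "out".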

    Solution (Lily)
    var outputs: Hash[String, List[String]] = []
    for line in File.read_to_string("inputs/day_11.txt").split("\n"): {
        var parts = line.split(": ")
        outputs[parts[0]] = parts[1].split(" ")
    }
    
    var cached_paths: Hash[String, Integer] = []
    define paths_between(start: String, end: String): Integer {
        if start == end: {
            return 1
        }
    
        if start == "out": {
            return 0
        }
    
        var key = start ++ end
    
        with cached_paths.get(key) as Some(paths): {
            return paths
        }
    
        var paths = 0
        for output in outputs[start]: {
            paths += paths_between(output, end)
        }
    
        cached_paths[key] = paths
        return paths
    }
    
    print("Part 1: {}".format(paths_between("you", "out")))
    
    var queue = ["svr"]
    var first_midpoint = ""
    
    do: {
        var device = queue.shift()
    
        if device == "dac": {
            first_midpoint = "dac"
            break
        }
    
        if device == "fft": {
            first_midpoint = "fft"
            break
        }
    
        for output in outputs[device]: {
            queue.push(output)
        }
    } while queue
    
    var second_midpoint = (first_midpoint == "dac" ? "fft" : "dac")
    
    print("Part 2: {}".format(
        paths_between("svr", first_midpoint)
        * paths_between(first_midpoint, second_midpoint)
        * paths_between(second_midpoint, "out")
    ))
    
    2 votes
  2. Berdes

    Computing the number of paths in a directed acyclic graph is a classical DP problem. I used the direct approach of visiting nodes in topological order (the code below calls it lexicographical order), each time updating the values of the node's children. The topological order ensures that when I visit a node, all of its parents have already been visited, so its value is not going to change.

    For part 1, the value of each node is just the sum of its parents' values, with the starting node initialized to 1.
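
    In code form, that rule is just this (a quick Python sketch, assuming topo_order already lists the reachable nodes with every parent before its children; my actual Rust is below):

    from collections import defaultdict

    def count_paths_from(graph: dict[str, list[str]], topo_order: list[str], start: str) -> dict[str, int]:
        counts: dict[str, int] = defaultdict(int)
        counts[start] = 1  # one (empty) path from the start to itself
        for node in topo_order:
            for child in graph.get(node, []):
                counts[child] += counts[node]  # every path to node extends to child
        return counts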

    For part 2, I used exactly the same approach, except that I keep 4 values for each node, each corresponding to one type of path: paths going through neither dac nor fft, paths going through dac but not fft, paths going through fft but not dac, and paths going through both. For most nodes, the counters are propagated directly to their children. For the dac node, its none counter is propagated to the child's dac counter and its fft counter to the child's both counter; the fft node is handled symmetrically.
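
    The per-edge update described above, as a small Python sketch (the four counters mirror the none/dac/fft/both fields in the Rust below; src holds the counters of the node being visited, dst those of one of its children):

    def propagate(node: str, src: dict[str, int], dst: dict[str, int]) -> None:
        if node == "dac":
            dst["dac"] += src["none"]   # these paths now go through dac
            dst["both"] += src["fft"]   # paths through fft now also go through dac
        elif node == "fft":
            dst["fft"] += src["none"]
            dst["both"] += src["dac"]
        else:
            for key in ("none", "dac", "fft", "both"):
                dst[key] += src[key]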

    Both parts are O(n) with n the number of nodes in the graph, so it's super quick to compute, even when using strings as keys/values for the graph (and doing a ton of copies): 60μs for part 1 and 420μs for part 2.

    Solution (Rust)
    use aoc_2025::timed_run;
    use std::collections::HashMap;
    use std::collections::HashSet;
    use std::io;
    use std::vec::Vec;
    
    struct Input {
      graph: HashMap<String, Vec<String>>,
    }
    
    impl Input {
      fn read_from_stdin() -> Input {
        Input {
          graph: io::stdin()
            .lines()
            .map(|l| {
              let tmp = l.unwrap();
              let (key, values) = tmp.split_once(": ").unwrap();
              (
                key.to_string(),
                values.split(' ').map(|s| s.to_string()).collect(),
              )
            })
            .chain([("out".to_string(), vec![])])
            .collect(),
        }
      }
    }
    
    fn lexicographical_order_from<T: Clone + Eq + std::hash::Hash, F: FnMut(&T)>(
      graph: &HashMap<T, Vec<T>>,
      start: T,
      mut f: F,
    ) {
      // First pass to compute the number of parents for each node reachable from the starting point.
      let mut num_parents = HashMap::<T, u64>::new();
      let mut visited = HashSet::<T>::new();
      let mut st = vec![start.clone()];
      while !st.is_empty() {
        let cur = st.pop().unwrap();
        if !visited.insert(cur.clone()) {
          continue;
        }
        for child in &graph[&cur] {
          *num_parents.entry(child.clone()).or_insert(0) += 1;
          st.push(child.clone());
        }
      }
      // Second pass for the actual lexicographical order. Only visiting a node once we visited all its
      // parents.
      st.push(start);
      while !st.is_empty() {
        let cur = st.pop().unwrap();
        f(&cur);
        for child in &graph[&cur] {
          let np = num_parents.get_mut(child).unwrap();
          *np -= 1;
          if *np == 0 {
            st.push(child.clone());
          }
        }
      }
    }
    
    fn part1(input: &Input) -> u64 {
      let mut num_paths_to = HashMap::<String, u64>::new();
      let start = "you".to_string();
      num_paths_to.insert(start.clone(), 1);
      lexicographical_order_from(&input.graph, start, |n| {
        let num_to_n = num_paths_to[n];
        for child in &input.graph[n] {
          *num_paths_to.entry(child.clone()).or_insert(0) += num_to_n;
        }
      });
      num_paths_to["out"]
    }
    
    #[derive(Clone)]
    struct PathData {
      none: u64,
      dac: u64,
      fft: u64,
      both: u64,
    }
    
    fn part2(input: &Input) -> u64 {
      let mut num_paths_to = HashMap::<String, PathData>::new();
      let start = "svr".to_string();
      num_paths_to.insert(
        start.clone(),
        PathData {
          none: 1,
          dac: 0,
          fft: 0,
          both: 0,
        },
      );
      lexicographical_order_from(&input.graph, start, |n| {
        let num_to_n = num_paths_to[n].clone();
        for child in &input.graph[n] {
          let cnum = num_paths_to.entry(child.clone()).or_insert(PathData {
            none: 0,
            dac: 0,
            fft: 0,
            both: 0,
          });
          match n.as_str() {
            "fft" => {
              cnum.fft += num_to_n.none;
              cnum.both += num_to_n.dac;
            }
            "dac" => {
              cnum.dac += num_to_n.none;
              cnum.both += num_to_n.fft;
            }
            _ => {
              cnum.none += num_to_n.none;
              cnum.dac += num_to_n.dac;
              cnum.fft += num_to_n.fft;
              cnum.both += num_to_n.both;
            }
          }
        }
      });
      num_paths_to["out"].both
    }
    
    fn main() {
      let input = Input::read_from_stdin();
      timed_run("Part 1", || part1(&input));
      timed_run("Part 2", || part2(&input));
    }
    
    2 votes
  3. thecakeisalime

    Much easier today. Of course, that didn't stop me from making the same mistakes as yesterday.

    Copied my BFS code (that was several orders of magnitude too inefficient for yesterday's problem) from yesterday to solve part 1. Why didn't I reuse the DFS (that also was too inefficient for yesterday's problem)? Because the BFS code was prettier.

    I got to Part 2 and realized I should have just started with DFS. Oh well. It was easy enough to rewrite it as a DFS. Luckily, DFS is actually the right tool for this job, and it worked as expected. My original plan was to use a "visited" check for visiting the fft and dac nodes, but that turned out to make caching much more complicated than I wanted.

    Part 1&2 [Python]
    def parse_input(lines: list[str]) -> dict[str, list[str]]:
      deviceIO = {}
      for line in lines:
        input, output = map(str.strip, line.split(':'))
        deviceIO[input] = output.split()
      return deviceIO
    
    def count_paths(deviceIO: dict[str, list[str]], node: str, end: str, memo: dict[str, int]) -> int:
      if node in memo:
        return memo[node]
      paths = 0
      for n in deviceIO[node]:
        if n == end:
          # Reached the target via this edge; count it and keep checking the other outputs.
          paths += 1
        else:
          paths += count_paths(deviceIO, n, end, memo)
      memo[node] = paths
      return paths
    
    def part_one(deviceIO: dict[str, list[str]]) -> int:
      return count_paths(deviceIO, 'you', 'out', {})
    
    def part_two(deviceIO: dict[str, list[str]]) -> int:
      deviceIO['out'] = []
      return (
              count_paths(deviceIO, 'svr', 'fft', {})
              * count_paths(deviceIO, 'fft', 'dac', {})
              * count_paths(deviceIO, 'dac', 'out', {})
      )
    
    def main() -> None:
      with open('11.in') as file:
        lines = [line.rstrip() for line in file]
    
      input = parse_input(lines)
      print('Part 1:', part_one(input))
      print('Part 2:', part_two(input))
    
    if __name__ == '__main__':
      main()
    
    1 vote