Berdes's recent activity
-
Comment on Day 11: Reactor in ~comp.advent_of_code
-
Comment on Day 10: Factory in ~comp.advent_of_code
Berdes
Thanks for pointing this out! I fixed the message to avoid confusing other people. Hopefully, I did a good enough job to also avoid having people later confused by your message :)
-
Comment on Day 10: Factory in ~comp.advent_of_code
Berdes
For your BFS, do you mean that each node in your graph represents a list of values corresponding to the number of times each button is pressed?
If that's the case, won't this mean you're going to explore at least all the nodes [x, y, z, ...] with each variable in the range [0, number of times it needs to be pressed in the optimal solution]? The number of such nodes is the product of the values x, y, z, ... in the optimal solution. Unless I have made a mistake somewhere, the worst machine in my input has a product of presses in the optimal solution of about 10 billion. And that's not even counting all the nodes that don't lead to an optimal solution. It's not yet obviously in the "longer than the age of the universe" range of runtime, but I wouldn't bet on it finishing in a reasonable amount of time either, unless visiting each node is really quick. Also, if the cache doesn't flush the nodes that will never be visited again (sum of [x, y, z, ...] less than what you're currently visiting), you're also likely to end up with memory problems, since storing those 10 billion nodes times something like 8 buttons means storing at least 80GB of data.
-
Comment on Day 10: Factory in ~comp.advent_of_code
Berdes (edited)
Wow, what an escalation in difficulty!
For part 1, it's clear that the order in which buttons are pressed doesn't matter and that it's useless to press a button more than once, since pressing twice is equivalent to not pressing and we want to minimize the number of presses. This problem can be represented very efficiently as a bitset and xor operations, where each bit corresponds to a light.
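That brute force over button subsets can be sketched like this (a made-up 3-button machine with hypothetical masks, not my actual input, and `min_presses` is just an illustrative helper, not code from my solution below):

```rust
// Each button is a bitmask of the lights it toggles. Pressing a set of
// buttons XORs their masks together, so we can try all 2^n subsets and keep
// the smallest one whose combined mask matches the target light pattern.
fn min_presses(buttons: &[u64], target: u64) -> Option<u32> {
    let mut best: Option<u32> = None;
    for subset in 0u64..(1 << buttons.len()) {
        let mut lights = 0;
        for (i, b) in buttons.iter().enumerate() {
            if subset & (1 << i) != 0 {
                lights ^= b;
            }
        }
        if lights == target {
            let presses = subset.count_ones();
            best = Some(best.map_or(presses, |m| m.min(presses)));
        }
    }
    best
}

fn main() {
    // Hypothetical buttons (0, 1, 3), (1, 4) and (2) encoded as bitmasks.
    let buttons = [0b01011, 0b10010, 0b00100];
    // Pressing all three gives 0b01011 ^ 0b10010 ^ 0b00100 = 0b11101.
    assert_eq!(min_presses(&buttons, 0b11101), Some(3));
    // No subset of buttons produces 0b00001.
    assert_eq!(min_presses(&buttons, 0b00001), None);
}
```

With at most 13 buttons that's 8192 subsets, each costing only a handful of bit operations.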
    button (0, 1, 3) corresponds to 0b1011
    button (1, 4) corresponds to 0b10010 (and not 0b10001, as pointed out by zestier)
    pressing both buttons turns on lights for 0b1011 ^ 0b10010 = 0b11001
My solution was to iterate over all the combinations of pressed and not pressed for each button and check whether the xor of all the pressed buttons matches the required light pattern. It might be more efficient to iterate over all the permutations of n buttons pressed, increasing n until we find a match. But with at most 13 buttons in my input, the worst case is 8192 patterns, which is pretty fast since each iteration is only some basic bit operations. Runtime for this was ~500μs.
For Part 2, I'm converting the problem into a system of equations, with one equation for each joltage. For the
nth joltage, I have an equation resembling sum b_k = joltage[n], where b_k represents the number of times the kth button is pressed. The sum only includes buttons that turn on the nth light. For example:
    button 0: (0, 1, 3)
    button 1: (1, 4)
    button 2: (2)
    joltages: {1, 2, 3, 5, 7}
gives the following equations:
    b_0 = 1
    b_0 + b_1 = 2
    b_2 = 3
    b_0 = 5
    b_1 = 7
Obviously, this example doesn't have a solution, but hopefully the idea is clear. Importantly, all the b_i are known to be non-negative integers.
I have two ways to know for sure that a variable is equal to a given value:
- If any equation is something = 0, then all the corresponding variables are 0.
- If any equation has a single variable, b_i = x, then I know the value of this variable.
For all the known values, I can substitute them in the existing equations.
Additionally, I have some logic to create more equations of the same form (i.e. a sum of variables without any multiplication) by identifying cases where one equation contains all the variables of another. For example:
    b_1 + b_3 + b_4 = 15
    b_3 + b_4 = 10
This implies that b_1 = 5, which feeds into the logic above where I know the value of a variable and can further simplify the system.
This is actually enough to solve a good 40% of the machines in my input, but not all of them. So once I have simplified a system as much as possible, I identify the equation with the smallest constant, pick a variable in that equation, and try solving the system assuming the picked variable is equal to each value from 0 to that constant. If I get no solution, the system cannot be solved; if I get multiple solutions, I keep the one leading to the smallest number of button presses.
Importantly, when doing all these operations on the system, I make sure to detect inconsistencies, like an equation with a negative constant, or multiple equations with the same variables but different constants. If this happens, I know the system has no solution.
In terms of implementation, the left side of all those equations can be efficiently represented as bitsets. For example, b_1 + b_3 + b_4 can be represented as 0b11010. Finding equations with a single variable corresponds to finding numbers that are powers of 2 (but I just use the count_ones() method). Finding equations containing all the variables of another can also be done with a few bit operations: a & b == a means b contains all the bits that a contains.
One optimization I didn't do is to use a bound on the number of button presses when solving the systems. Once I have a solution, if any partial solve would lead to more button presses than the known solution, there is no need to continue solving the system.
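Those bit tricks can be shown in isolation (the masks below encode the b_1 + b_3 + b_4 and b_3 + b_4 example; the helper names are illustrative):

```rust
// The left-hand side of an equation is a bitmask of its variables:
// b_1 + b_3 + b_4 -> 0b11010, b_3 + b_4 -> 0b11000.
fn is_single_variable(lhs: u32) -> bool {
    lhs.count_ones() == 1
}

// `a`'s variables are a subset of `b`'s iff every bit set in `a` is set in `b`.
fn is_subset(a: u32, b: u32) -> bool {
    a & b == a
}

fn main() {
    let small = 0b11000; // b_3 + b_4 = 10
    let big = 0b11010; // b_1 + b_3 + b_4 = 15
    assert!(is_subset(small, big));
    assert!(!is_subset(big, small));
    // Subtracting the contained equation leaves just b_1 (= 15 - 10 = 5):
    let diff = big & !small;
    assert_eq!(diff, 0b00010);
    assert!(is_single_variable(diff));
}
```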
Anyway, with all of this, part 2 runs in ~240ms.
Solution (Rust)
use aoc_2025::timed_run; use std::collections::HashMap; use std::fmt::Display; use std::io; use std::vec::Vec; struct Machine { lights: u64, buttons: Vec<u64>, joltages: Vec<u64>, } impl Machine { fn from_str(l: &str) -> Self { let (lights_str, remaining) = l.split_once(' ').unwrap(); let (buttons_str, joltages_str) = remaining.rsplit_once(' ').unwrap(); let lights = lights_str .strip_prefix('[') .unwrap() .strip_suffix(']') .unwrap() .chars() .enumerate() .fold(0, |lights, (i, c)| match c { '#' => lights | (1 << i), '.' => lights, _ => panic!(), }); let buttons = buttons_str .split(' ') .map(|b| { b.strip_prefix('(') .unwrap() .strip_suffix(')') .unwrap() .split(',') .map(|i| 1 << i.parse::<u64>().unwrap()) .reduce(|a, b| a | b) .unwrap() }) .collect(); let joltages = joltages_str .strip_prefix('{') .unwrap() .strip_suffix('}') .unwrap() .split(',') .map(|i| i.parse::<u64>().unwrap()) .collect(); Machine { lights, buttons, joltages, } } fn print<T: Display>(&self, label: T) { println!("Machine {label}:"); println!(" Lights: {:b}", self.lights); for (j, b) in self.buttons.iter().enumerate() { println!(" Button {j}: {b:b}"); } println!(" Joltages: {:?}", self.joltages); } } struct Input { machines: Vec<Machine>, } impl Input { fn read_from_stdin() -> Input { Input { machines: io::stdin() .lines() .map(|l| Machine::from_str(l.unwrap().as_str())) .collect(), } } } fn part1(input: &Input) -> u64 { let mut ret = 0; for m in &input.machines { let mut min_found = m.buttons.len() + 1; for i in 0..2_u64.pow(m.buttons.len() as u32) { let mut lights = 0; let mut tmp = i; while tmp != 0 { let pos = tmp.trailing_zeros(); lights ^= m.buttons[pos as usize]; tmp &= !(1 << pos); } if lights == m.lights { min_found = min_found.min(i.count_ones() as usize); } } if min_found > m.buttons.len() { m.print("failed machine"); panic!(); } assert!(min_found < m.buttons.len()); ret += min_found; } ret as u64 } struct SystemEq { equations: HashMap<u32, u64>, num_presses: u64, num_unsolved: 
u64, } #[derive(Debug)] struct SystemErr; impl SystemEq { #[allow(dead_code)] fn print(&self) { println!( " num_presses: {}, num_unsolved: {}", self.num_presses, self.num_unsolved ); let num_ones = self .equations .iter() .fold(0, |a, (&k, _)| a | k) .count_ones() as u64; if num_ones != self.num_unsolved { println!( "!!!!Incorrect num_unsolved: cached {}, actual {}", self.num_unsolved, num_ones ); } for (k, v) in &self.equations { println!(" {k:013b} = {v}"); } } fn clone(&self) -> SystemEq { SystemEq { equations: self.equations.clone(), num_presses: self.num_presses, num_unsolved: self.num_unsolved, } } fn checked_insert(&mut self, k: u32, v: u64) -> Result<(), SystemErr> { if let Some(old_val) = self.equations.insert(k, v) { if old_val != v { return Err(SystemErr); } } Ok(()) } // Eliminate the given variable using the given value. fn eliminate_single(&mut self, single: u32, val: u64) -> Result<(), SystemErr> { self.num_presses += val; self.num_unsolved = self.num_unsolved.checked_sub(1).unwrap(); let updated = self .equations .extract_if(|k, _| k & single != 0) .collect::<Vec<_>>(); for (k, v) in updated { if k != single { let new_val = v.checked_sub(val).ok_or(SystemErr)?; self.checked_insert(k & !single, new_val)?; } else if v != val { return Err(SystemErr); } } Ok(()) } // Eliminate all variables involved in equations that are sum(xs) = 0. fn eliminate_nulls(&mut self) -> Result<(), SystemErr> { let mut nulls = self .equations .extract_if(|k, v| *v == 0 && k.count_ones() > 1) .fold(0, |a, (k, _)| a | k); while nulls != 0 { let pos = nulls.trailing_zeros(); let single = 1 << pos; self.eliminate_single(single, 0)?; nulls &= !single; } Ok(()) } // Eliminate all variables involved in equations with a single variable. Returns true if it // eliminated anything. 
fn eliminate_singles(&mut self) -> Result<bool, SystemErr> { let singles = self .equations .extract_if(|k, _| k.count_ones() == 1) .collect::<Vec<_>>(); let has_singles = !singles.is_empty(); for (single, val) in singles { self.eliminate_single(single, val)?; } Ok(has_singles) } // Tries to generate equations by substracting one from another. Returns true if it generated // anything. fn generate_eqs(&mut self) -> Result<bool, SystemErr> { let mut added = false; let copied_eqs = self .equations .iter() .map(|(k, v)| (*k, *v)) .collect::<Vec<_>>(); for (k, v) in copied_eqs { let new_eqs = self .equations .iter() .filter(|(nk, _)| **nk != k && *nk & k == k && !self.equations.contains_key(&(*nk & !k))) .map(|(nk, nv)| (nk & !k, nv.checked_sub(v))) .collect::<Vec<_>>(); added = added || !new_eqs.is_empty(); for (nk, nv) in new_eqs { self.checked_insert(nk, nv.ok_or(SystemErr)?)?; } } Ok(added) } // Tries applying all simplification steps until nothing more can be done. fn simplify(&mut self) -> Result<(), SystemErr> { loop { self.eliminate_nulls()?; if self.eliminate_singles()? { continue; } if !self.generate_eqs()? 
{ break; } } Ok(()) } fn solve(&mut self) -> Result<(), SystemErr> { self.simplify()?; if self.num_unsolved == 0 { return Ok(()); } let mut min_val = u64::MAX; let mut key_for_min = None; for (&k, &v) in &self.equations { if v < min_val { min_val = v; key_for_min = Some(k); } } let single = 1 << key_for_min.unwrap().trailing_zeros(); let mut min_found = None; for i in 0..=min_val { let mut new_sys = self.clone(); if new_sys.eliminate_single(single, i).is_err() { continue; } if new_sys.solve().is_err() { continue; } min_found = Some(min_found.map_or(new_sys.num_presses, |min: u64| min.min(new_sys.num_presses))); } if let Some(min) = min_found { self.num_presses = min; self.num_unsolved = 0; Ok(()) } else { Err(SystemErr) } } } fn part2(input: &Input) -> u64 { let mut ret = 0; for (mi, m) in input.machines.iter().enumerate() { let mut start_equations = vec![0; m.joltages.len()]; for (i, b) in m.buttons.iter().enumerate() { let mut tmp = *b; while tmp != 0 { let pos = tmp.trailing_zeros(); start_equations[pos as usize] |= 1 << i; tmp &= !(1 << pos); } } let mut equations = HashMap::<u32, u64>::new(); for (i, e) in start_equations.into_iter().enumerate() { equations.insert(e, m.joltages[i]); } let mut system = SystemEq { equations, num_presses: 0, num_unsolved: m.buttons.len() as u64, }; system.simplify().unwrap(); if system.num_unsolved == 0 { ret += 1; } continue; if system.solve().is_err() { m.print(mi); println!(" Current state:"); for (k, v) in system.equations { println!(" {k:013b} = {v}"); } panic!("failed to find a solution"); } ret += system.num_presses; } ret } fn main() { let input = Input::read_from_stdin(); timed_run("Part 1", || part1(&input)); timed_run("Part 2", || part2(&input)); }I'd be curious if anyone finds a nicer solution for part 2, outside of just calling any off-the-shelf linear integer programming solver.
-
Comment on Day 9: Movie Theater in ~comp.advent_of_code
Berdes
The idea was that building an in-memory map of all the points that are inside vs outside would require way too much memory using the provided coordinates. However, the compressed model fits in a 500x500 map, which is trivial to fit in memory. If the compression is correctly done, then a rectangle that is fully inside in the compressed version is also going to be fully inside in the uncompressed version. Thus, it's possible to use the naïve approach of just testing all possible rectangles and checking if they are contained in the perimeter. You just need to be careful to use the real coordinates to compute the area of the rectangle instead of the compressed coordinates.
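A stripped-down version of that compression (the same idea as the remap_coords function in my solution, simplified for illustration):

```rust
use std::collections::HashMap;

// Map each distinct coordinate to a compact value, preserving order, and
// collapsing any gap of 2 or more into a gap of exactly 2, so "there is
// empty space between these two coordinates" survives compression.
fn compress(coords: &[usize]) -> HashMap<usize, usize> {
    let mut sorted = coords.to_vec();
    sorted.sort();
    sorted.dedup();
    let mut map = HashMap::new();
    let mut next = 0;
    for (i, &c) in sorted.iter().enumerate() {
        if i > 0 {
            next += if sorted[i - 1] + 2 <= c { 2 } else { 1 };
        }
        map.insert(c, next);
    }
    map
}

fn main() {
    let map = compress(&[5, 1_000_000, 7, 6]);
    assert_eq!(map[&5], 0);
    assert_eq!(map[&6], 1); // adjacent coordinates stay adjacent
    assert_eq!(map[&7], 2);
    assert_eq!(map[&1_000_000], 4); // huge gap collapsed to a gap of 2
}
```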
-
Comment on Day 9: Movie Theater in ~comp.advent_of_code
Berdes
I thought about the same idea (checking if we're crossing lines), but I feared that it could yield false positives in cases where we have two lines without any gap between them. Something like this:
0,3
2,3
2,10
3,10
3,0
5,0

   1
   |
   2------3
5---------4
|
6

with the inside on the right side.
As is often the case, the inputs tend to have nice properties, not mentioned in the problem statement, that allow for approaches that don't work in the general case.
-
Comment on Day 9: Movie Theater in ~comp.advent_of_code
Berdes
First day where I don't immediately see how to solve the problem efficiently. In the end, my idea of compressing the coordinates for visualization purposes turned out to actually be a key idea to quickly solve part 2.
Unfortunately, I had a stupid bug in my computation of the size of rectangles (doing abs(a-b+1) instead of abs(a-b)+1), and that didn't matter for the examples or for part 1. If it weren't for @lily's posted solution, which allowed me to figure out that none of my rectangles matched the size of the expected answer, I don't know how I would have found this bug.
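To show the bug with made-up numbers: the two formulas agree whenever a >= b, and only differ when a < b.

```rust
// The buggy side length: the +1 happens inside the abs().
fn buggy(a: i64, b: i64) -> i64 {
    (a - b + 1).abs()
}

// The fixed side length: an inclusive range [min, max] has max - min + 1 cells.
fn fixed(a: i64, b: i64) -> i64 {
    (a - b).abs() + 1
}

fn main() {
    // Same result when a >= b:
    assert_eq!(buggy(9, 3), 7);
    assert_eq!(fixed(9, 3), 7);
    // Off by two when a < b:
    assert_eq!(buggy(3, 9), 5);
    assert_eq!(fixed(3, 9), 7);
}
```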
After fixing the bug, I got the right answer in ~9ms.
Another idea for part 2 is that all the outside points are visible from the border of the map. So it's sufficient to keep track of the min and max x for each y, and the min and max y for each x. Then it's possible to check whether the edges of a rectangle are contained within those min/max values. Without coordinate compression, this approach takes ~1s. With coordinate compression, it's down to ~5ms.
Solution (Rust)
use aoc_2025::timed_run; use std::collections::HashMap; use std::io; use std::vec::Vec; struct Point { x: usize, y: usize, } fn side_length(a: usize, b: usize) -> usize { if a < b { b - a + 1 } else { a - b + 1 } } impl Point { fn square_with(&self, other: &Self) -> usize { side_length(self.x, other.x) * side_length(self.y, other.y) } } struct Input { points: Vec<Point>, } impl Input { fn read_from_stdin() -> Input { Input { points: io::stdin() .lines() .map(|l| { let ns = l .unwrap() .split(',') .map(|n| n.parse::<usize>().unwrap()) .collect::<Vec<_>>(); assert_eq!(ns.len(), 2); Point { x: ns[0], y: ns[1] } }) .collect(), } } } fn part1(input: &Input) -> u64 { let mut ret = 0; for i in 0..input.points.len() { for j in i + 1..input.points.len() { ret = ret.max(input.points[i].square_with(&input.points[j])); } } ret as u64 } // Compresses the coordinates such that any gap larger than 2 will end up as a gap of size 2. fn remap_coords<F: Fn(&Point) -> usize>(input: &Input, f: F) -> (HashMap<usize, usize>, usize) { let mut ret = HashMap::new(); let mut coords = vec![]; for p in &input.points { coords.push(f(p)); } coords.sort(); let mut new_coord = 0; ret.insert(coords[0], new_coord); for i in 1..coords.len() { let prev = coords[i - 1]; let cur = coords[i]; if prev == cur { continue; } new_coord += if prev + 2 <= cur { 2 } else { 1 }; ret.insert(coords[i], new_coord); } (ret, new_coord) } fn part2(input: &Input) -> u64 { let (new_x, max_x) = remap_coords(input, |p| p.x); let (new_y, max_y) = remap_coords(input, |p| p.y); let remap_point = |p: &Point| Point { x: *new_x.get(&p.x).unwrap(), y: *new_y.get(&p.y).unwrap(), }; // Create the perimeter in the map. 
let mut map = vec![vec![b'.'; max_y + 1]; max_x + 1]; for i in 1..=input.points.len() { let prev = remap_point(&input.points[i - 1]); let cur = remap_point(&input.points[i % input.points.len()]); if prev.x == cur.x { for y in prev.y.min(cur.y)..=prev.y.max(cur.y) { map[prev.x][y] = b'#'; } } else { for x in prev.x.min(cur.x)..=prev.x.max(cur.x) { map[x][prev.y] = b'#'; } } } // Fill the inside of the perimeter. let mut to_color = vec![if max_x < 100 { Point { x: 1, y: 3 } } else { Point { x: 100, y: 180 } }]; while !to_color.is_empty() { let p = to_color.pop().unwrap(); if std::mem::replace(&mut map[p.x][p.y], b'#') != b'.' { continue; } for np in [ Point { x: p.x - 1, y: p.y }, Point { x: p.x + 1, y: p.y }, Point { x: p.x, y: p.y - 1 }, Point { x: p.x, y: p.y + 1 }, ] { if map[np.x][np.y] == b'.' { to_color.push(np); } } } // Find the largest rectangle that fits fully inside the perimeter. let mut ret = 0; for i in 0..input.points.len() { 'next_point: for j in i + 1..input.points.len() { let pi = &input.points[i]; let pj = &input.points[j]; let new_size = pi.square_with(pj); if new_size <= ret { continue; } let ri = remap_point(pi); let rj = remap_point(pj); let min = Point { x: ri.x.min(rj.x), y: ri.y.min(rj.y), }; let max = Point { x: ri.x.max(rj.x), y: ri.y.max(rj.y), }; for x in min.x..=max.x { if map[x][min.y] == b'.' || map[x][max.y] == b'.' { continue 'next_point; } } for y in min.y..=max.y { if map[min.x][y] == b'.' || map[max.x][y] == b'.' { continue 'next_point; } } ret = new_size; } } ret as u64 } fn part2_slow(input: &Input) -> u64 { let mut max_x_val = 0; let mut max_y_val = 0; for p in &input.points { max_x_val = max_x_val.max(p.x); max_y_val = max_y_val.max(p.y); } // Get the min/max values for each x and y coordinates. This doesn't work in the general case, // but should work with the kind of input we have. 
let mut min_x = vec![max_x_val; max_y_val + 1]; let mut max_x = vec![0; max_y_val + 1]; let mut min_y = vec![max_y_val; max_x_val + 1]; let mut max_y = vec![0; max_x_val + 1]; for i in 1..=input.points.len() { let prev = &input.points[i - 1]; let cur = &input.points[i % input.points.len()]; let min = Point { x: prev.x.min(cur.x), y: prev.y.min(cur.y), }; let max = Point { x: prev.x.max(cur.x), y: prev.y.max(cur.y), }; if prev.x == cur.x { min_y[prev.x] = min_y[prev.x].min(min.y); max_y[prev.x] = max_y[prev.x].max(max.y); for y in min.y..=max.y { min_x[y] = min_x[y].min(min.x); max_x[y] = max_x[y].max(max.x); } } else { assert_eq!(prev.y, cur.y); min_x[prev.y] = min_x[prev.y].min(min.x); max_x[prev.y] = max_x[prev.y].max(max.x); for x in min.x..=max.x { min_y[x] = min_y[x].min(min.y); max_y[x] = max_y[x].max(max.y); } } } // Find the largest rectangle that fits fully inside the boundaries. let mut ret = 0; for i in 0..input.points.len() { 'next_point: for j in i + 1..input.points.len() { let pi = &input.points[i]; let pj = &input.points[j]; let new_size = pi.square_with(pj); if new_size <= ret { continue; } let min = Point { x: pi.x.min(pj.x), y: pi.y.min(pj.y), }; let max = Point { x: pi.x.max(pj.x), y: pi.y.max(pj.y), }; for x in min.x..=max.x { if min.y < min_y[x] || max.y > max_y[x] { continue 'next_point; } } for y in min.y..=max.y { if min.x < min_x[y] || max.x > max_x[y] { continue 'next_point; } } ret = new_size; } } ret as u64 } fn part2_combined(input: &Input) -> u64 { let (new_x, max_x) = remap_coords(input, |p| p.x); let (new_y, max_y) = remap_coords(input, |p| p.y); let remap_point = |p: &Point| Point { x: *new_x.get(&p.x).unwrap(), y: *new_y.get(&p.y).unwrap(), }; let mut max_x_val = 0; let mut max_y_val = 0; for p in &input.points { let rp = remap_point(p); max_x_val = max_x_val.max(rp.x); max_y_val = max_y_val.max(rp.y); } // Get the min/max values for each x and y coordinates. 
This doesn't work in the general case, // but should work with the kind of input we have. let mut min_x = vec![max_x_val; max_y_val + 1]; let mut max_x = vec![0; max_y_val + 1]; let mut min_y = vec![max_y_val; max_x_val + 1]; let mut max_y = vec![0; max_x_val + 1]; for i in 1..=input.points.len() { let prev = remap_point(&input.points[i - 1]); let cur = remap_point(&input.points[i % input.points.len()]); let min = Point { x: prev.x.min(cur.x), y: prev.y.min(cur.y), }; let max = Point { x: prev.x.max(cur.x), y: prev.y.max(cur.y), }; if prev.x == cur.x { min_y[prev.x] = min_y[prev.x].min(min.y); max_y[prev.x] = max_y[prev.x].max(max.y); for y in min.y..=max.y { min_x[y] = min_x[y].min(min.x); max_x[y] = max_x[y].max(max.x); } } else { assert_eq!(prev.y, cur.y); min_x[prev.y] = min_x[prev.y].min(min.x); max_x[prev.y] = max_x[prev.y].max(max.x); for x in min.x..=max.x { min_y[x] = min_y[x].min(min.y); max_y[x] = max_y[x].max(max.y); } } } // Find the largest rectangle that fits fully inside the boundaries. let mut ret = 0; for i in 0..input.points.len() { 'next_point: for j in i + 1..input.points.len() { let pi = &input.points[i]; let pj = &input.points[j]; let new_size = pi.square_with(pj); if new_size <= ret { continue; } let ri = remap_point(pi); let rj = remap_point(pj); let min = Point { x: ri.x.min(rj.x), y: ri.y.min(rj.y), }; let max = Point { x: ri.x.max(rj.x), y: ri.y.max(rj.y), }; for x in min.x..=max.x { if min.y < min_y[x] || max.y > max_y[x] { continue 'next_point; } } for y in min.y..=max.y { if min.x < min_x[y] || max.x > max_x[y] { continue 'next_point; } } ret = new_size; } } ret as u64 } fn main() { let input = Input::read_from_stdin(); timed_run("Part 1", || part1(&input)); timed_run("Part 2", || part2(&input)); timed_run("Part 2 slow", || part2_slow(&input)); timed_run("Part 2 combined", || part2_combined(&input)); } -
Comment on Day 8: Playground in ~comp.advent_of_code
Berdes
Clear ramp up in difficulty compared to the previous days. I like it!
For part 1, I initially used a max heap to keep track of the smallest 1000 pairs seen so far, followed by a DSU/Union Find data structure to keep track of which junctions are in the same connected component and the size of the components. Finally, I used a min heap to find the largest 3 connected components. However, it's much better to replace both heaps by just storing all the elements and using quick select to get the best N elements, since we don't care about their order. In my case, runtime went from ~14ms down to ~3ms when switching to quick select.
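The quick select part relies on the standard library's select_nth_unstable, which partitions the slice around the nth element in O(n) on average and leaves the k smallest elements at the front in no particular order. A sketch with made-up distances (the helper name is illustrative):

```rust
// Return the k smallest values. They come out of the partition unordered;
// sorting here is only to make the result easy to check, and is exactly the
// work quick select lets you skip when order doesn't matter.
fn k_smallest(mut values: Vec<u64>, k: usize) -> Vec<u64> {
    let (front, _nth, _rest) = values.select_nth_unstable(k);
    let mut out = front.to_vec();
    out.sort();
    out
}

fn main() {
    assert_eq!(k_smallest(vec![9, 2, 7, 4, 1, 8, 3], 3), vec![1, 2, 3]);
}
```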
Part 2 can be done with a sort on all the pairs and using a DSU again to figure out when the current connected component includes all the junctions (based on its size). However, this can be further optimized by sorting while iterating over all the pairs and stopping early once all the junctions are connected. This optimization pushed the runtime from ~22ms down to ~4.7ms.
Solution (Rust)
use aoc_2025::timed_run; use std::cmp::Ordering; use std::cmp::Reverse; use std::collections::BinaryHeap; use std::io; use std::vec::Vec; struct Point { x: u64, y: u64, z: u64, } impl Point { fn distance_to(&self, other: &Self) -> u64 { (self.x - other.x).pow(2) + (self.y - other.y).pow(2) + (self.z - other.z).pow(2) } } struct Input { points: Vec<Point>, } impl Input { fn read_from_stdin() -> Input { Input { points: io::stdin() .lines() .map(|l| { l.unwrap() .split(',') .map(|n| n.parse::<u64>().unwrap()) .collect::<Vec<_>>() }) .map(|ns| { assert_eq!(ns.len(), 3); Point { x: ns[0], y: ns[1], z: ns[2], } }) .collect(), } } } struct PointPair { p1: usize, p2: usize, d: u64, } impl PartialEq for PointPair { fn eq(&self, other: &Self) -> bool { self.d.eq(&other.d) } } impl Eq for PointPair {} impl PartialOrd for PointPair { fn partial_cmp(&self, other: &Self) -> Option<Ordering> { self.d.partial_cmp(&other.d) } } impl Ord for PointPair { fn cmp(&self, other: &Self) -> Ordering { self.d.cmp(&other.d) } } impl PointPair { fn clone(&self) -> Self { Self { p1: self.p1, p2: self.p2, d: self.d, } } } struct DsuElement { parent: usize, size: usize, } struct DSU { map: Vec<DsuElement>, } impl DSU { fn new(size: usize) -> DSU { let mut map = vec![]; map.reserve(size); for i in 0..size { map.push(DsuElement { parent: i, size: 1 }) } DSU { map } } fn find(&mut self, i: usize) -> usize { let mut root = self.map[i].parent; if root != i { root = self.find(root); self.map[i].parent = root; } root } // Returns the new root fn join(&mut self, i: usize, j: usize) -> usize { let mut root_i = self.find(i); let mut root_j = self.find(j); if root_i != root_j { if self.map[root_i].size < self.map[root_j].size { (root_i, root_j) = (root_j, root_i); } self.map[root_j].parent = root_i; self.map[root_i].size += self.map[root_j].size; } root_i } } fn part1(input: &Input) -> u64 { let mut smallest_pairs = BinaryHeap::new(); for i in 0..input.points.len() { for j in i + 1..input.points.len() { 
let d = input.points[i].distance_to(&input.points[j]); smallest_pairs.push(PointPair { p1: i, p2: j, d }); if smallest_pairs.len() > 1000 { smallest_pairs.pop(); } } } let mut sets = DSU::new(input.points.len()); for pair in smallest_pairs { sets.join(pair.p1, pair.p2); } let mut largest_circuits = BinaryHeap::new(); for i in 0..input.points.len() { let e = &sets.map[i]; if e.parent == i { largest_circuits.push((Reverse(e.size), e.parent)); if largest_circuits.len() > 3 { largest_circuits.pop(); } } } let mut ret = 1; for (Reverse(size), _) in largest_circuits { ret *= size; } ret as u64 } fn all_pairs(input: &Input) -> Vec<PointPair> { let mut pairs = vec![]; pairs.reserve(input.points.len().pow(2)); for i in 0..input.points.len() { for j in i + 1..input.points.len() { let d = input.points[i].distance_to(&input.points[j]); pairs.push(PointPair { p1: i, p2: j, d }); } } pairs } fn part1_select(input: &Input) -> u64 { let mut pairs = all_pairs(input); let (smallest_pairs, _, _) = pairs.select_nth_unstable(1000); let mut sets = DSU::new(input.points.len()); for pair in smallest_pairs { sets.join(pair.p1, pair.p2); } let mut circuits = vec![]; circuits.reserve(input.points.len()); for i in 0..input.points.len() { let e = &sets.map[i]; if e.parent == i { circuits.push((Reverse(e.size), e.parent)); } } let (largest_circuits, _, _) = circuits.select_nth_unstable(3); let mut ret = 1; for (Reverse(size), _) in largest_circuits { ret *= *size; } ret as u64 } fn part2(input: &Input) -> u64 { let mut pairs = all_pairs(input); pairs.sort(); let mut sets = DSU::new(input.points.len()); for pair in pairs { let root = sets.join(pair.p1, pair.p2); if sets.map[root].size == input.points.len() { return input.points[pair.p1].x * input.points[pair.p2].x; } } panic!("unreachable"); } // Sorts the given slice in-place and calls f on the sorted elements as we go. Early stops if f // returns false, in which case, only the elements seen by f are guaranteed to be sorted. 
fn iterate_sorted<F: FnMut(&PointPair) -> bool>(pairs: &mut [PointPair], f: &mut F) -> bool { if pairs.len() <= 32 { pairs.sort(); for p in pairs { if !f(p) { return false; } } return true; } else { let pivot = pairs[0].clone(); let mut i = 1; let mut large_idx = pairs.len() - 1; while i < large_idx { if pairs[i] < pivot { i += 1; } else { pairs.swap(i, large_idx); large_idx -= 1; } } if pairs[i] < pivot { pairs.swap(0, i); } iterate_sorted(&mut pairs[..i], f) && iterate_sorted(&mut pairs[i..], f) } } fn part2_iterate_sorted(input: &Input) -> u64 { let mut ret = 0; let mut sets = DSU::new(input.points.len()); iterate_sorted(&mut all_pairs(input), &mut |pair: &PointPair| { let root = sets.join(pair.p1, pair.p2); if sets.map[root].size == input.points.len() { ret = input.points[pair.p1].x * input.points[pair.p2].x; return false; } true }); ret } fn main() { let input = Input::read_from_stdin(); timed_run("Part 1", || part1(&input)); timed_run("Part 1 select", || part1_select(&input)); timed_run("Part 2", || part2(&input)); timed_run("Part 2 iterate sorted", || part2_iterate_sorted(&input)); } -
Comment on Day 7: Laboratories in ~comp.advent_of_code
Berdes
When I saw the problem, I could smell what part 2 would involve from a mile away. I feel like the problem could have been made a bit more interesting if it also had beams going at an angle, like a splitter that splits incoming vertical rays into two rays going at 45° and straightens incoming 45° beams.
I experimented with a bunch of small optimizations, like swapping my two vectors instead of discarding one every time, or keeping track of the range of indices that might contain a ray in the inner loop, but neither led to a significant enough improvement to justify the increased complexity.
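The vector-swap variant looks roughly like this (a toy splitting rule on a hypothetical grid, not the actual puzzle logic):

```rust
// Keep two buffers and swap them each row instead of allocating a fresh Vec
// per row; the old front buffer is cleared and reused as the next back buffer.
fn propagate(rows: usize, width: usize) -> Vec<u64> {
    let mut beams = vec![0u64; width];
    let mut next = vec![0u64; width];
    beams[width / 2] = 1;
    for _ in 0..rows {
        next.iter_mut().for_each(|b| *b = 0);
        for i in 0..width {
            // Toy rule: every beam splits to its two neighbors.
            if i > 0 {
                next[i - 1] += beams[i];
            }
            if i + 1 < width {
                next[i + 1] += beams[i];
            }
        }
        std::mem::swap(&mut beams, &mut next);
    }
    beams
}

fn main() {
    // One beam, split twice: 4 beam-counts total across the row.
    let beams = propagate(2, 5);
    assert_eq!(beams.iter().sum::<u64>(), 4);
}
```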
Solution (Rust)
use aoc_2025::timed_run;
use std::io;
use std::vec::Vec;

struct Input {
    width: usize,
    lines: Vec<String>,
}

impl Input {
    fn read_from_stdin() -> Input {
        let lines: Vec<String> = io::stdin().lines().map(|l| l.unwrap()).collect();
        assert!(lines.len() > 0);
        let width = lines[0].len();
        for l in &lines {
            assert_eq!(width, l.len());
        }
        Input { width, lines }
    }
}

fn part1(input: &Input) -> u64 {
    let mut ret = 0;
    let mut beams = vec![false; input.width];
    for l in &input.lines {
        let mut new_beams = vec![false; input.width];
        for i in 0..input.width {
            match l.as_bytes()[i] {
                b'.' => new_beams[i] = new_beams[i] || beams[i],
                b'S' => new_beams[i] = true,
                b'^' => {
                    if beams[i] {
                        new_beams[i - 1] = true;
                        new_beams[i + 1] = true;
                        ret += 1;
                    }
                }
                _ => panic!(""),
            }
        }
        beams = new_beams;
    }
    ret
}

fn part2(input: &Input) -> u64 {
    let mut beams = vec![0; input.width];
    for l in &input.lines {
        let mut new_beams = vec![0; input.width];
        for i in 0..input.width {
            match l.as_bytes()[i] {
                b'.' => new_beams[i] += beams[i],
                b'S' => new_beams[i] = 1,
                b'^' => {
                    new_beams[i - 1] += beams[i];
                    new_beams[i + 1] += beams[i];
                }
                _ => panic!(""),
            }
        }
        beams = new_beams;
    }
    let mut ret = 0;
    for n in beams {
        ret += n;
    }
    ret
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
}
-
Comment on Day 6: Trash Compactor in ~comp.advent_of_code
Berdes
Not the most interesting day for me since this was only about parsing the input the right way, without any shred of thinking to be had on the algorithm side.
What's funny is that it would have been much faster for me to tackle the second part first: the first part is just the second part with the two number-parsing loops swapped.
Solution (Rust)
use aoc_2025::timed_run;
use std::io;
use std::vec::Vec;

enum Op {
    Add,
    Mul,
}

impl Op {
    fn from_str(s: &str) -> Op {
        assert_eq!(s.len(), 1);
        match s {
            "+" => Op::Add,
            "*" => Op::Mul,
            _ => panic!("Bad op {}", s),
        }
    }

    fn neutral_element(&self) -> u64 {
        match self {
            Op::Add => 0,
            Op::Mul => 1,
        }
    }

    fn apply(&self, lhs: u64, rhs: u64) -> u64 {
        match self {
            Op::Add => lhs + rhs,
            Op::Mul => lhs * rhs,
        }
    }
}

struct Input {
    len: usize,
    numbers: Vec<Vec<u64>>,
    operators: Vec<Op>,
    raw_len: usize,
    raw_numbers: Vec<String>,
    raw_ops: String,
}

impl Input {
    fn read_from_stdin() -> Input {
        let raw_lines: Vec<String> = io::stdin().lines().map(|l| l.unwrap()).collect();
        assert!(raw_lines.len() > 0);
        let raw_len = raw_lines[0].len();
        for i in 1..raw_lines.len() {
            assert_eq!(raw_len, raw_lines[i].len());
        }
        let raw_numbers = raw_lines[..raw_lines.len() - 1].to_vec();
        let raw_ops = raw_lines.last().expect("").clone();
        let mut lines: Vec<Vec<String>> = raw_lines
            .into_iter()
            .map(|l| l.split_ascii_whitespace().map(|s| s.to_string()).collect())
            .collect();
        let len = lines[0].len();
        for i in 1..lines.len() {
            assert_eq!(len, lines[i].len());
        }
        let op_line = lines.pop().unwrap();
        Input {
            len,
            numbers: lines
                .into_iter()
                .map(|l| l.into_iter().map(|n| n.parse::<u64>().expect("")).collect())
                .collect(),
            operators: op_line
                .into_iter()
                .map(|op| Op::from_str(op.as_str()))
                .collect(),
            raw_len,
            raw_numbers,
            raw_ops,
        }
    }
}

fn part1(input: &Input) -> u64 {
    let mut ret = 0;
    for i in 0..input.len {
        let op = &input.operators[i];
        let mut tmp = op.neutral_element();
        for ns in &input.numbers {
            tmp = op.apply(tmp, ns[i]);
        }
        ret += tmp;
    }
    ret
}

fn part2(input: &Input) -> u64 {
    let mut ret = 0;
    let mut i = 0;
    while i < input.raw_len {
        let op = Op::from_str(str::from_utf8(&input.raw_ops.as_bytes()[i..i + 1]).expect(""));
        let mut number_end = input.raw_len;
        for j in i + 1..input.raw_len {
            if input.raw_ops.as_bytes()[j] != b' ' {
                number_end = j - 1;
                break;
            }
        }
        let mut tmp = op.neutral_element();
        for j in i..number_end {
            let mut n: u64 = 0;
            for l in &input.raw_numbers {
                let c = l.as_bytes()[j];
                if c != b' ' {
                    n = n * 10 + (c - b'0') as u64;
                }
            }
            tmp = op.apply(tmp, n);
        }
        ret += tmp;
        i = number_end + 1;
    }
    ret
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
}
-
Comment on Day 5: Cafeteria in ~comp.advent_of_code
Berdes Link Parent
So, I managed to implement two solutions using BTreeMap, which required using the nightly build to get access to the Cursor API.
The first approach was to use the key to represent a bound and the value to represent whether it's a starting point or an ending point. When trying to find whether an ID is part of a range, I can query for the largest key that is smaller or equal to the ID. If it corresponds to a starting point, I know my ID is inside a range. One detail with this approach is that I needed to store the ending points with a value + 1 (i.e. the ending point is excluded from the range), otherwise ranges like [5, 5] would collide on the key 5. The side benefit of having excluded ending points is that ranges that are next to each other would still get merged: [3, 5] [6, 8] would become [3, 8].
Using start and end points as keys
#[derive(Debug)]
enum Node {
    Start,
    End,
}

fn merge_ranges_btree_double(ranges: &Vec<Range>) -> BTreeMap<u64, Node> {
    let mut merged = BTreeMap::<u64, Node>::new();
    for r in ranges {
        let mut cursor = merged.upper_bound_mut(Bound::Excluded(&r.start));
        while cursor
            .peek_next()
            .map_or(false, |(nk, _)| nk <= &(r.end + 1))
        {
            cursor.remove_next();
        }
        match cursor.peek_prev() {
            Some((_, &mut Node::Start)) => {}
            _ => cursor.insert_before(r.start, Node::Start).expect(""),
        }
        match cursor.peek_next() {
            Some((_, &mut Node::End)) => {}
            _ => cursor.insert_before(r.end + 1, Node::End).expect(""),
        }
    }
    merged
}

fn part1_btree_double(input: &Input) -> u64 {
    let merged_fresh = merge_ranges_btree_double(&input.fresh_ranges);
    let mut ret = 0;
    for i in &input.inventory {
        match merged_fresh.upper_bound(Bound::Included(i)).peek_prev() {
            Some((_, Node::Start)) => ret += 1,
            _ => {}
        }
    }
    ret
}

fn part2_btree_double(input: &Input) -> u64 {
    let merged = merge_ranges_btree_double(&input.fresh_ranges);
    let mut ret = 0;
    let mut last_start = 0;
    for (&k, v) in merged.iter() {
        match v {
            Node::Start => last_start = k,
            Node::End => ret += k - last_start,
        }
    }
    ret
}

The second approach (likely more sane) was to have the key as the start point and the value as the end point. It's basically the same as having a sorted vector of (start, end) points, except that it's not necessary to pre-sort the list and removing an element in the middle should be cheap (amortized O(1), I think?). Similarly to the previous approach, finding whether an ID is in one of the ranges requires querying for the largest key that is smaller or equal to the ID and checking whether the ID is less than or equal to the end point.
Start points as key, end points as values
fn merge_ranges_btree(ranges: &Vec<Range>) -> BTreeMap<u64, u64> {
    let mut merged = BTreeMap::<u64, u64>::new();
    for r in ranges {
        let mut cursor = merged.upper_bound_mut(Bound::Included(&r.start));
        match cursor.peek_prev() {
            Some((_, pend)) => {
                if r.start <= *pend {
                    *pend = r.end.max(*pend);
                } else {
                    cursor.insert_before(r.start, r.end).expect("");
                }
            }
            None => cursor.insert_before(r.start, r.end).expect(""),
        }
        // Can't hold a mutable borrow of the previous item while still
        // accessing cursor to peek at the next.
        let mut pend = cursor.peek_prev().unwrap().1.clone();
        loop {
            if let Some((nstart, nend)) = cursor.peek_next() {
                if *nstart <= pend {
                    pend = pend.max(*nend);
                    cursor.remove_next();
                } else {
                    break;
                }
            } else {
                break;
            }
        }
        *cursor.peek_prev().unwrap().1 = pend;
    }
    merged
}

fn part1_btree(input: &Input) -> u64 {
    let merged_fresh = merge_ranges_btree(&input.fresh_ranges);
    let mut ret = 0;
    for i in &input.inventory {
        match merged_fresh.upper_bound(Bound::Included(i)).peek_prev() {
            Some((_, end)) => {
                if i <= end {
                    ret += 1
                }
            }
            _ => {}
        }
    }
    ret
}

fn part2_btree(input: &Input) -> u64 {
    let merged = merge_ranges_btree(&input.fresh_ranges);
    let mut ret = 0;
    for (&k, v) in merged.iter() {
        ret += v - k + 1;
    }
    ret
}

The second approach was (unsurprisingly) faster than the first one. Sadly, both were slower than the vector-based approaches I used previously. I don't know if this is just the vector-based approaches being inherently faster because of the memory layout, amount of branching, and data dependencies between instructions, or if it's just the BTreeMap implementation not being super well optimized. The Cursor API is still nightly-only, so I wouldn't be surprised if the implementation was a bit sub-optimal.
Runtime comparison of various approaches
Not really the best benchmark since I only run each thing once with no warmup. I ran the program a couple of times to get a sense of which runtime was normal and which one was just an outlier (I guess due to the computer doing other stuff in the background). The parsing of the input is excluded.
Part 1 direct in 169.504µs
Part 1 merged in 27.85µs
Part 1 bsearch in 27.166µs
Part 1 btree start+end in 49.206µs
Part 1 btree in 37.387µs
Part 2 in 5.801µs
Part 2 btree start+end in 18.164µs
Part 2 btree in 9.65µs
-
Comment on Day 5: Cafeteria in ~comp.advent_of_code
Berdes Link
For part 1, I directly implemented some logic to sort the ranges and merge the overlapping ones. The idea was to be able to binary search the ranges to see if an ID is part of one of the ranges. In the end, the binary search idea was barely faster than a linear scan using the sorted+merged ranges. Maybe an approach based on a binary tree would have been faster?
This approach made part 2 extremely trivial since I already had a list of ranges without any overlap.
Part 1 and 2 (Rust)
use std::fmt::Display;
use std::io;
use std::time::Instant;
use std::vec::Vec;

#[derive(Clone)]
struct Range {
    start: u64,
    end: u64, // inclusive
}

impl Range {
    fn from_str(s: String) -> Self {
        let (str_start, str_end) = s.split_once('-').unwrap();
        Range {
            start: str_start.parse().expect(""),
            end: str_end.parse().expect(""),
        }
    }

    fn contains(&self, i: u64) -> bool {
        self.start <= i && i <= self.end
    }
}

struct Input {
    fresh_ranges: Vec<Range>,
    inventory: Vec<u64>,
}

impl Input {
    fn read_from_stdin() -> Input {
        let mut fresh_ranges = vec![];
        let mut inventory = vec![];
        let mut parse_ranges = true;
        for line in io::stdin().lines().map(|l| l.unwrap()) {
            if line == "" {
                parse_ranges = false;
                continue;
            }
            if parse_ranges {
                fresh_ranges.push(Range::from_str(line));
            } else {
                inventory.push(line.parse::<u64>().expect(""));
            }
        }
        Input {
            fresh_ranges,
            inventory,
        }
    }
}

fn merge_ranges(ranges: &Vec<Range>) -> Vec<Range> {
    let mut sorted = ranges.clone();
    sorted.sort_by_key(|r| r.start);
    let mut merged: Vec<Range> = vec![];
    for r in sorted {
        if let Some(last) = merged.last() {
            if last.end >= r.start {
                let end = &mut merged.last_mut().unwrap().end;
                *end = r.end.max(*end);
            } else {
                merged.push(r);
            }
        } else {
            merged.push(r);
        }
    }
    merged
}

fn part1_direct(input: &Input) -> u64 {
    let mut ret = 0;
    for &i in &input.inventory {
        for r in &input.fresh_ranges {
            if r.contains(i) {
                ret += 1;
                break;
            }
        }
    }
    ret
}

fn part1_merged(input: &Input) -> u64 {
    let merged_fresh = merge_ranges(&input.fresh_ranges);
    let mut ret = 0;
    for &i in &input.inventory {
        for r in &merged_fresh {
            if i <= r.end {
                if i >= r.start {
                    ret += 1;
                }
                break;
            }
        }
    }
    ret
}

fn part1_bsearch(input: &Input) -> u64 {
    let merged_fresh = merge_ranges(&input.fresh_ranges);
    let mut ret = 0;
    for &i in &input.inventory {
        let mut s = &merged_fresh[..];
        while !s.is_empty() {
            let pivot_idx = s.len() / 2;
            let pivot = &s[pivot_idx];
            if i < pivot.start {
                s = &s[..pivot_idx];
            } else if i > pivot.end {
                s = &s[(pivot_idx + 1)..];
            } else {
                ret += 1;
                break;
            }
        }
    }
    ret
}

fn part2(input: &Input) -> u64 {
    let merged_fresh = merge_ranges(&input.fresh_ranges);
    let mut ret = 0;
    for r in merged_fresh {
        ret += r.end - r.start + 1;
    }
    ret
}

fn timed_run<T: Display, F: FnOnce() -> T>(name: &str, f: F) {
    let start = Instant::now();
    let result = f();
    let time = start.elapsed();
    println!("{}: {}\n in {:?}", name, result, time);
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1 direct", || part1_direct(&input));
    timed_run("Part 1 merged", || part1_merged(&input));
    timed_run("Part 1 bsearch", || part1_bsearch(&input));
    timed_run("Part 2", || part2(&input));
}
-
Comment on Day 3: Lobby in ~comp.advent_of_code
Berdes Link Parent
I checked how my implementation compiles and it's indeed what I expected: the inner loop is fully unrolled and each iteration avoids any branching by using a conditional move. This means there is no risk of (failed) speculative execution and the iterations can be efficiently pipelined inside the CPU. Even the vector of 13 elements ends up mostly stored in registers, with only a few positions on the stack because there aren't enough registers. I guess the main point is that my version compiles into something the CPU can execute extremely efficiently, which makes up for the fact that it might require more operations.
If the accumulators could be u8 instead of u64, the compiler could even use vectorized instructions to run many iterations at the same time since there is a vectorized max instruction to replace the conditional moves.
-
Comment on Day 4: Printing Department in ~comp.advent_of_code
Berdes Link Parent
I'm also expecting the difficulty to start ramping up soon, although we might still get 1 or 2 days for which a brute force approach still works.
2020 is the only year I finished all the days, due to time constraints, but from what I remember, the difficulty increases over time, with the only exception being the 25th, which was always super simple. With the shorter schedule, I would expect the 12th problem to be the hardest, since there would be no obvious reason to break the pattern.
-
Comment on Day 4: Printing Department in ~comp.advent_of_code
Berdes Link
The problem itself wasn't super difficult. Even part 2 could likely be implemented by repeatedly iterating over the whole map, removing what could be removed until nothing could be removed. This approach can be improved by having a list of cells whose neighbors have been modified and need rechecking. Even better, keeping track of the number of neighboring rolls + a list of rolls to be removed allows us to directly know if a neighbor is newly removable when removing a roll.
On the Rust side, today was about learning many of the pitfalls of using closures the same way I sometimes use C++ lambdas with auto arguments. At first, I had tons of issues with type inference, where |x, y| data[x][y] doesn't compile even though |x, y| fn(x, y) does, which I find confusing. Then I had to satisfy the borrow checker: if I want to mutate a capture inside a closure, I cannot use that object outside of the closure. My initial solution was to explicitly pass the stuff I wanted to mutate as arguments to the closure, but I later transitioned to a nicer (maybe?) solution where I put my data into a struct and declared methods on the struct. Hopefully I'll get familiar with all those things before I end up in a situation where I need explicit lifetime annotations.
Part 1 and 2 (Rust) after tons of refactoring and cleanup
use std::io;
use std::ops::Range;
use std::vec::Vec;

fn neighbor_ints(x: usize, max: usize) -> Range<usize> {
    let start = if x > 0 { x - 1 } else { 0 };
    let end = (x + 2).min(max);
    return start..end;
}

struct Map {
    data: Vec<Vec<char>>,
    height: usize,
    width: usize,
}

fn make_map(data: Vec<Vec<char>>) -> Map {
    let height = data.len();
    assert!(height > 0);
    let width = data[0].len();
    for x in 1..height {
        assert_eq!(data[x].len(), width, "line {}", x)
    }
    Map {
        data: data,
        height,
        width,
    }
}

impl Map {
    fn foreach_pos<F: FnMut(usize, usize)>(&self, mut f: F) {
        for x in 0..self.height {
            for y in 0..self.width {
                f(x, y);
            }
        }
    }

    fn foreach_neighbor<F: FnMut(usize, usize)>(&self, x: usize, y: usize, mut f: F) {
        for nx in neighbor_ints(x, self.height) {
            for ny in neighbor_ints(y, self.width) {
                if nx != x || ny != y {
                    f(nx, ny);
                }
            }
        }
    }

    fn num_neighbor_rolls(&self, x: usize, y: usize) -> u8 {
        let mut num_rolls = 0;
        self.foreach_neighbor(x, y, |nx: usize, ny: usize| {
            if self.data[nx][ny] == '@' {
                num_rolls += 1;
            }
        });
        num_rolls
    }
}

fn part1(map: &Map) -> u64 {
    let mut ret = 0;
    map.foreach_pos(|x: usize, y: usize| {
        if map.data[x][y] == '@' && map.num_neighbor_rolls(x, y) < 4 {
            ret += 1;
        }
    });
    ret
}

fn part2(map: &Map) -> u64 {
    let mut ret = 0;
    let mut to_remove = Vec::<(usize, usize)>::new();
    let mut num_neighbor_rolls = vec![vec![0 as u8; map.width]; map.height];
    map.foreach_pos(|x: usize, y: usize| {
        if map.data[x][y] == '.' {
            return;
        }
        let num_rolls = map.num_neighbor_rolls(x, y);
        num_neighbor_rolls[x][y] = num_rolls;
        if num_rolls < 4 {
            to_remove.push((x, y));
        }
    });
    while !to_remove.is_empty() {
        let (x, y) = to_remove.pop().unwrap();
        ret += 1;
        map.foreach_neighbor(x, y, |nx: usize, ny: usize| {
            if map.data[nx][ny] == '.' {
                // Empty cells never enter to_remove; skipping them also
                // avoids decrementing their zero counter below zero.
                return;
            }
            let nr = &mut num_neighbor_rolls[nx][ny];
            *nr -= 1;
            if *nr == 3 {
                to_remove.push((nx, ny));
            }
        });
    }
    ret
}

fn main() {
    let map = make_map(
        io::stdin()
            .lines()
            .map(|l| l.unwrap().chars().collect::<Vec<char>>())
            .collect(),
    );
    println!("Part 1: {}", part1(&map));
    println!("Part 2: {}", part2(&map));
}
-
Comment on Day 3: Lobby in ~comp.advent_of_code
Berdes Link
Obvious dynamic programming problem for today. You only need to keep track of the maximum joltage you can get using 1 to n of the previous batteries; when adding one more battery, you update the max of size k if appending the current battery to the max of size k-1 gives a better value.
It's my cleanest implementation so far, but that's mostly because it's just so short.
Part 1 and 2 (Rust)
use std::io;

fn main() {
    let mut part1 = 0;
    let mut part2 = 0;
    for line in io::stdin().lines().map(|l| l.unwrap()) {
        // The index is the number of batteries used for the given max. Index 0
        // will never be modified, but is still useful to avoid the corner case
        // when updating index 1.
        let mut max_vals = [0; 13];
        for c in line.as_bytes() {
            let n = (c - b'0') as u64;
            // Updating in reverse, otherwise max_vals[i] would use the already
            // updated value of max_vals[i-1].
            for i in (1..13).rev() {
                max_vals[i] = max_vals[i].max(max_vals[i - 1] * 10 + n);
            }
        }
        part1 += max_vals[2];
        part2 += max_vals[12];
    }
    println!("Part 1: {}", part1);
    println!("Part 2: {}", part2);
}
-
Comment on Day 2: Gift Shop in ~comp.advent_of_code
Berdes (edited) Link
The problem today felt about as simple as yesterday, with my logic to find invalid IDs even uglier than the stuff I wrote yesterday. At least, I'm getting the hang of Rust's syntax and didn't need to spend much time searching the documentation.
Instead of going through all the numbers in each range, I'm sure it's feasible to iterate over the first half of each number and directly check whether the resulting repeated ID is inside the range. Thinking about it, I feel like this could even yield simpler logic than what I currently have.
Part 1 and 2 (Rust)
use std::io;
use std::str::FromStr;

struct Range {
    a: u64,
    b: u64,
}

struct ParseRangeError {
    message: String,
}

impl FromStr for Range {
    type Err = ParseRangeError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let (a, b) = s.split_once('-').ok_or(ParseRangeError {
            message: format!("Invalid range {}, missing -", s),
        })?;
        let ia = a.parse::<u64>().map_err(|e| ParseRangeError {
            message: format!("Invalid number {}: {}", s, e),
        })?;
        let ib = b.parse::<u64>().map_err(|e| ParseRangeError {
            message: format!("Invalid number {}: {}", s, e),
        })?;
        Ok(Range { a: ia, b: ib })
    }
}

fn is_invalid1(a: u64) -> bool {
    let mut mul = 10;
    while a > mul {
        if a < mul * mul && a > mul * mul / 10 {
            let tmp = a % mul;
            return a == tmp + tmp * mul;
        }
        mul *= 10;
    }
    return false;
}

fn is_invalid2(a: u64) -> bool {
    let mut mul = 10;
    while a > mul {
        let p = a % mul;
        let mut pmul = mul;
        while a > pmul {
            if (a / pmul) % mul != p {
                break;
            } else if a < pmul * mul && a > pmul * mul / 10 {
                return true;
            }
            pmul *= mul;
        }
        mul *= 10;
    }
    return false;
}

fn main() {
    let mut part1 = 0;
    let mut part2 = 0;
    let mut buffer = String::new();
    io::stdin().read_line(&mut buffer).unwrap();
    for range in buffer.trim().split(',').map(|e| e.parse::<Range>()) {
        match range {
            Ok(r) => {
                for n in r.a..r.b + 1 {
                    part1 += if is_invalid1(n) { n } else { 0 };
                    part2 += if is_invalid2(n) { n } else { 0 };
                }
            }
            Err(e) => panic!("Error reading range: {}", e.message),
        }
    }
    println!("Part 1: {}", part1);
    println!("Part 2: {}", part2);
}

Edit: Indeed, it's possible to keep track of the part that is going to be repeated and iterate directly over it. Based on the result, it's not exactly simpler, especially since the second part still needs the is_invalid logic to avoid double-counting numbers in the range. At least, it's mighty fast: it runs in ~1ms including all the overhead of starting the process and parsing the input, while the previous version took ~120ms.
Implementation of the sum of invalid IDs over a range for both parts
fn length(mut a: u64) -> u64 {
    if a == 0 {
        return 1;
    }
    let mut l = 0;
    while a > 0 {
        a /= 10;
        l += 1;
    }
    return l;
}

fn pow(n: u64, e: u64) -> u64 {
    let mut ret = 1;
    for _ in 0..e {
        ret *= n;
    }
    return ret;
}

fn sum_range1(r: &Range) -> u64 {
    let l = length(r.a);
    let mut first_half = if l % 2 == 0 {
        r.a / pow(10, l / 2)
    } else {
        pow(10, l / 2)
    };
    let mut ret = 0;
    loop {
        let n = first_half * pow(10, length(first_half)) + first_half;
        if n > r.b {
            break;
        }
        if n >= r.a {
            ret += n;
        }
        first_half += 1;
    }
    return ret;
}

fn is_invalid2(a: u64) -> bool {
    let mut mul = 10;
    while a > mul {
        let p = a % mul;
        let mut pmul = mul;
        while a > pmul {
            if (a / pmul) % mul != p {
                break;
            } else if a < pmul * mul && a > pmul * mul / 10 {
                return true;
            }
            pmul *= mul;
        }
        mul *= 10;
    }
    return false;
}

fn sum_range2(r: &Range) -> u64 {
    let la = length(r.a);
    let lb = length(r.b);
    let mut ret = 0;
    for num_parts in 2..lb + 1 {
        let mut part = if la % num_parts == 0 {
            r.a / pow(10, (num_parts - 1) * la / num_parts)
        } else {
            pow(10, la / num_parts)
        };
        loop {
            let mut n = part;
            let mult = pow(10, length(part));
            for _ in 1..num_parts {
                n = n * mult + part;
            }
            if n > r.b {
                break;
            }
            // If the part is invalid, the number formed risks being counted multiple
            // times. E.g. 1111 can be decomposed as four 1 or two 11. When this
            // happens, exactly one decomposition will have a part that is also a
            // valid ID.
            if n >= r.a && !is_invalid2(part) {
                ret += n;
            }
            part += 1;
        }
    }
    return ret;
}

Edit 2: actually, things can be pushed even further, since the sum of invalid IDs (for part 1) in the range 123000..456999 is the same as the sum of the numbers 123..456 times 1001, which can be computed in O(1) using the fact that the sum of the numbers from 1 to n (included) is n*(n+1)/2. Part 2 becomes even more interesting, since you also want to exclude all the numbers that are invalid IDs from the sum of consecutive numbers, which can be done with a recursive call to the function that computes the sum of invalid IDs for a given range. Thus, it's not actually necessary to have an is_invalid function for part 2 either. -
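The closed form for part 1 in the comment above can be sketched like this (my own sketch, assuming the range bounds line up with whole half-blocks as in the 123000..456999 example, and lo_half >= 1):

```rust
// Sum of 1..=n in O(1).
fn sum_1_to(n: u64) -> u64 {
    n * (n + 1) / 2
}

// Sum of all part-1 invalid IDs whose half lies in lo_half..=hi_half:
// each ID is h * 10^half_len + h = h * (10^half_len + 1), so the whole
// sum collapses to (10^half_len + 1) * (sum of the halves).
fn sum_repeated(lo_half: u64, hi_half: u64, half_len: u32) -> u64 {
    let mult = 10u64.pow(half_len) + 1; // 1001 for three-digit halves
    mult * (sum_1_to(hi_half) - sum_1_to(lo_half - 1))
}

fn main() {
    // All invalid IDs between 123000 and 456999: 123123 + 124124 + ... + 456456.
    println!("{}", sum_repeated(123, 456, 3));
}
```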
Comment on Day 1: Secret Entrance in ~comp.advent_of_code
Berdes Link
I'm doing it in Rust this year to get some experience using the language. I'm very familiar with C++ and have used Haskell a long time ago, so I'm not (yet?) struggling with new concepts. For now, I'm mostly learning the details of the syntax and figuring out what's available in the standard library.
Parts 1 and 2
use std::io;
use std::str::FromStr;

enum Rotation {
    L,
    R,
}

struct ParseRotationError {
    message: String,
}

impl Rotation {
    fn from_char(c: char) -> Result<Self, ParseRotationError> {
        match c {
            'L' => Ok(Rotation::L),
            'R' => Ok(Rotation::R),
            _ => Err(ParseRotationError {
                message: format!("Cannot parse {} as Rotation", c),
            }),
        }
    }
}

struct Move {
    r: Rotation,
    c: i32,
}

impl Move {
    fn signed_move(&self) -> i32 {
        match self.r {
            Rotation::L => -self.c,
            Rotation::R => self.c,
        }
    }
}

struct ParseMoveError {
    message: String,
}

impl FromStr for Move {
    type Err = ParseMoveError;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s.len() < 2 {
            return Err(ParseMoveError {
                message: format!(
                    "Move should contain at least 2 characters: len is {}",
                    s.len()
                ),
            });
        }
        let mut chars = s.chars();
        let r = Rotation::from_char(chars.next().unwrap()).map_err(|e| ParseMoveError {
            message: format!("Failed to parse rotation: {}", e.message),
        })?;
        let c = chars.as_str().parse::<i32>().map_err(|e| ParseMoveError {
            message: format!("Failed to parse count: {}", e),
        })?;
        Ok(Move { r, c })
    }
}

fn main() {
    let mut part1 = 0;
    let mut part2 = 0;
    let mut position = 50;
    for line in io::stdin().lines() {
        match line {
            Ok(l) => match l.parse::<Move>() {
                Ok(m) => {
                    let prev_pos = position;
                    position += m.signed_move();
                    if position <= 0 {
                        part2 += (-position) / 100 + (if prev_pos == 0 { 0 } else { 1 });
                    } else if position >= 100 {
                        part2 += position / 100;
                    }
                    position = (position % 100 + 100) % 100;
                    part1 += if position == 0 { 1 } else { 0 };
                }
                Err(e) => println!("Failed to parse '{}': {}", l, e.message),
            },
            Err(e) => println!("Error reading line: {}", e),
        }
    }
    println!("Part 1: {}", part1);
    println!("Part 2: {}", part2);
}
-
Comment on An interstellar comet flew past Mars, and spacecraft took pictures in ~space
Berdes Link Parent
Determining that an object is interstellar is probably the easiest thing astronomers found out about 3I/ATLAS. I'm simplifying a bit, but when astronomers see something that moves in the sky (compared to distant stars), they will try to figure out its orbit. In theory, with only 3 observations, you already have enough data to figure out the orbital parameters of the object you're observing, which will tell you if the object is interstellar or not (i.e. whether it follows a normal circular/elliptic trajectory or if it follows a hyperbolic trajectory).
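The hyperbolic test itself is simple once you have an orbit estimate: a single position/speed pair is enough, since the trajectory is unbound (hyperbolic) exactly when the specific orbital energy v²/2 − μ/r is positive. A minimal sketch of my own, with illustrative numbers:

```rust
// GM of the Sun, in m^3/s^2.
const MU_SUN: f64 = 1.327_124_4e20;

// A body is on a hyperbolic (unbound) solar orbit exactly when its
// specific orbital energy v^2/2 - mu/r is positive, i.e. when its speed
// exceeds the local escape velocity sqrt(2 * mu / r).
fn is_hyperbolic(r_m: f64, v_m_s: f64) -> bool {
    v_m_s * v_m_s / 2.0 - MU_SUN / r_m > 0.0
}

fn main() {
    let au = 1.496e11; // meters
    // Earth moves at ~30 km/s at 1 au, well below the ~42 km/s escape
    // velocity there -> bound orbit.
    println!("{}", is_hyperbolic(au, 3.0e4)); // false
    // An object doing 60 km/s at 2 au is far above escape velocity -> unbound.
    println!("{}", is_hyperbolic(2.0 * au, 6.0e4)); // true
}
```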
There are multiple telescopes whose main purpose is to regularly scan the night sky to find asteroids and they have been very successful. We have now identified more than 1 million asteroids in our solar system. Given how much better we're becoming at finding previously unknown asteroids, I would expect that we will find interstellar objects more and more frequently.
-
Comment on Shouldn't somebody *stop* "Meta Superintelligence Labs"? in ~tech
Berdes Link Parent
Is that really the case? When you see how differently people react to harm done to dogs and to humans in movies (or even in reality, in too many cases), I would be cautious about assuming that being kinder to artificial beings would translate into more kindness to other humans.
Computing the number of paths in a directed acyclic graph is a classical DP problem. I used the direct approach of visiting nodes in a lexicographical order, each time updating the value for its children. The lexicographical order ensures that when visiting a node, I've already visited all its parents and its value is not going to change.
For part 1, the value of each node is just the sum of its parents', with the starting node initialized to 1.
For part 2, I used exactly the same approach, except that I'm using 4 values for each node, each corresponding to one type of path: paths going through neither dac nor fft, paths going through dac but not fft, paths going through fft but not dac, and paths going through both. For most nodes, their counters are directly propagated to their children. For dac and fft nodes, the none counter gets propagated to the dac and fft counters respectively and the fft and dac counters respectively get propagated to the both counter.
Both parts are O(n), with n the number of nodes in the graph, so it's super quick to compute, even when using strings as keys/values for the graph (and doing a ton of copies): 60μs for part 1 and 420μs for part 2.
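The four-counter propagation described above can be sketched as follows (my own illustration, not the solution below: the node names other than dac/fft, the tiny example graph, and the precomputed visiting order are made up):

```rust
use std::collections::HashMap;

// Each node carries [neither, dac-only, fft-only, both] path counts.
// `order` must list every parent before its children.
fn count_paths(order: &[&str], edges: &HashMap<&str, Vec<&str>>) -> [u64; 4] {
    let mut counts: HashMap<&str, [u64; 4]> = HashMap::new();
    counts.insert(order[0], [1, 0, 0, 0]); // one empty path at the start node
    for &node in order {
        let mut c = match counts.get(node) {
            Some(c) => *c,
            None => continue, // node unreachable from the start
        };
        if node == "dac" {
            // Passing through dac: neither -> dac-only, fft-only -> both.
            c = [0, c[0] + c[1], 0, c[2] + c[3]];
        } else if node == "fft" {
            // Passing through fft: neither -> fft-only, dac-only -> both.
            c = [0, 0, c[0] + c[2], c[1] + c[3]];
        }
        counts.insert(node, c);
        // Propagate the counters to every child.
        for &child in edges.get(node).map(|v| v.as_slice()).unwrap_or(&[]) {
            let e = counts.entry(child).or_insert([0; 4]);
            for i in 0..4 {
                e[i] += c[i];
            }
        }
    }
    // Counters of the last node = paths from start to end, split by type.
    counts[order[order.len() - 1]]
}

fn main() {
    let edges = HashMap::from([
        ("start", vec!["dac", "fft"]),
        ("dac", vec!["fft", "end"]),
        ("fft", vec!["end"]),
    ]);
    // Three paths: start-dac-end, start-fft-end, start-dac-fft-end.
    println!("{:?}", count_paths(&["start", "dac", "fft", "end"], &edges));
    // prints [0, 1, 1, 1]
}
```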
Solution (Rust)