Berdes's recent activity
-
Comment on Food: Your personal year in review for 2025 in ~food
-
Comment on Linus Åkesson - 8-bit Boléro (2025) in ~music
While it's a pretty interesting interpretation, the answer to the title question (The World's Most Ambitious Chiptune?) is a clear "no" IMHO. I'm at least aware of the Super Rite of Spring by Shabubula (which is critically under-viewed!) that seems way more ambitious to me, and there might be even more out there.
-
Comment on EU drops 2035 combustion engine ban as global electric vehicle shift faces reset in ~transport
You can go even further and get rid of the frame, add a smidge of human power and you end up with an e-bike! This would also reduce the parking space need, reducing distance between destinations, counteracting part of the time lost due to the speed reduction.
-
Comment on EU drops 2035 combustion engine ban as global electric vehicle shift faces reset in ~transport
And that's not restricted to taxis. If parents need to do a return trip (or a significant detour) to bring/pick up their kids from school and other activities, they are also unnecessary overhead.
But I also think that in most cases, driverless taxis should be the last resort solution for cases when a better approach (public transport) cannot work.
-
Comment on AI-designed Linux computer with 843 components boots on first attempt — dual-PCB Project Speedrun was made in just one week and required less than forty hours of human work in ~tech
From what I can see, it's only an "AI" because of the hype around the term. They talk about it doing some placement and routing tasks, which is something that can be done by existing tools. Maybe it's doing a bit more than existing competitors?
About the project they present, I saw some feedback from someone working in the industry (on a different website) and their conclusion was that the project was perfectly scoped to magnify the impact of their tool. For example, they don't start from scratch, but reuse an existing design (which can take 1-2 months of work for this kind of design). They also use a service to get an extra-speedy manufacturing of the board, which most people wouldn't use due to the cost. Testing (beyond just booting an OS) is not included either. So basically, their tool speeds up a small part of a hardware engineer's job when compared to it being done manually. Without a proper comparison against similar tools, their advertisement is pretty meaningless.
-
Comment on Day 12: Christmas Tree Farm in ~comp.advent_of_code
I got baited soooooo hard by this problem. I started implementing a solver for the general problem, believing that maybe the dimensions were small enough that a good implementation could work. Obviously, I had a very quick explosion of states while trying to find a solution for any of the regions. So I implemented some additional logic to try and eliminate states that are strictly worse than other states I generated, which still wasn't enough. So I looked at culling states that had no hope of being filled and early exiting if I found a state that could trivially finish being filled. Magically, those two last optimizations were enough... because every single input immediately fits into one of those categories.
Also, while trying some manual inputs with feasible dimensions, I saw that my solver clearly had some bugs because it could find a solution for a 39x6 region, but not for a 40x6 with the exact same shape requirements. :|
Anyway, if anyone is interested, here is my solution after angrily deleting all the code related to the useless solver (there are still traces of optimizations that are clearly not necessary anymore):
Solution (Rust)
```rust
use aoc_2025::timed_run;
use std::io;
use std::vec::Vec;

// Bitmask of the shape. Bit 0 is the top left, bit 2 the top right and bit 8 the bottom right.
#[derive(Clone, Copy)]
struct Shape(u16);

impl Shape {
    fn parse(ls: &[String]) -> Self {
        assert_eq!(ls.len(), 4);
        let mut ret = 0;
        let mut pos = 0;
        for x in 1..4 {
            assert_eq!(ls[x].len(), 3);
            for c in ls[x].chars() {
                match c {
                    '#' => ret |= 1 << pos,
                    '.' => {}
                    _ => panic!("unexpected char {} in shape", c),
                }
                pos += 1;
            }
        }
        Shape(ret)
    }

    #[allow(dead_code)]
    fn to_string(&self) -> String {
        let mut ret = String::new();
        let mut tmp = self.0;
        for i in 0..9 {
            ret.push(if tmp & 1 == 1 { '#' } else { '.' });
            if i == 2 || i == 5 {
                ret.push('\n');
            }
            tmp >>= 1;
        }
        ret
    }
}

struct Region {
    width: u32,
    height: u32,
    shapes_count: Vec<u32>,
}

impl Region {
    fn parse(l: &str) -> Self {
        let (size, shapes) = l.split_once(": ").unwrap();
        let (width, height) = size.split_once('x').unwrap();
        Self {
            width: width.parse().unwrap(),
            height: height.parse().unwrap(),
            shapes_count: shapes
                .split(' ')
                .map(|s| s.parse::<u32>().unwrap())
                .collect(),
        }
    }

    #[allow(dead_code)]
    fn to_string(&self) -> String {
        format!("{}x{}: {:?}", self.width, self.height, self.shapes_count)
    }

    fn trivial_fit(&self, shapes: &Vec<Shape>) -> Option<bool> {
        let num_trivial_shapes = (self.height / 3) * (self.width / 3);
        let num_required_shapes = self.shapes_count.iter().fold(0, |a, b| a + b);
        if num_required_shapes <= num_trivial_shapes {
            return Some(true);
        }
        let num_required_spots = shapes
            .iter()
            .map(|shape| shape.0.count_ones())
            .zip(self.shapes_count.iter())
            .map(|(a, b)| a * b)
            .fold(0, |a, b| a + b);
        if num_required_spots > self.height * self.width {
            return Some(false);
        }
        None
    }
}

struct Input {
    shapes: Vec<Shape>,
    regions: Vec<Region>,
}

impl Input {
    fn read_from_stdin() -> Input {
        let lines = io::stdin().lines().map(|l| l.unwrap()).collect::<Vec<_>>();
        let mut iter = lines.rsplit(|l| l == "");
        let regions = iter
            .next()
            .unwrap()
            .iter()
            .map(|r| Region::parse(r))
            .collect::<Vec<_>>();
        let shapes = iter.map(|ls| Shape::parse(ls)).rev().collect::<Vec<_>>();
        for r in &regions {
            assert_eq!(r.shapes_count.len(), shapes.len());
            assert!(r.height > 2);
            assert!(r.width > 2);
        }
        Input { shapes, regions }
    }
}

fn part1(input: &Input) -> u64 {
    let mut ret = 0;
    for r in &input.regions {
        match r.trivial_fit(&input.shapes) {
            Some(true) => ret += 1,
            Some(false) => {}
            None => panic!("Cannot trivially solve {}", r.to_string()),
        }
    }
    ret
}

fn part2(_input: &Input) -> u64 {
    0
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
}
```
-
Comment on Day 11: Reactor in ~comp.advent_of_code
Computing the number of paths in a directed acyclic graph is a classical DP problem. I used the direct approach of visiting nodes in a lexicographical order, each time updating the value for its children. The lexicographical order ensures that when visiting a node, I've already visited all its parents and its value is not going to change.
For part 1, the value of each node is just the sum of its parents, initializing the starting node with 1.
For part 2, I used exactly the same approach, except that I'm using 4 values for each node, each corresponding to one type of path: paths going through neither dac nor fft, paths going through dac but not fft, paths going through fft but not dac, and paths going through both. For most nodes, their counters are directly propagated to their children. For dac and fft nodes, the none counter gets propagated to the dac and fft counters respectively and the fft and dac counters respectively get propagated to the both counter.
Both are O(n) with n the number of nodes in the graph, so it's super quick to compute, even when using strings as key/values for the graph (and doing a ton of copies): 60μs for part 1 and 420μs for part 2.
Solution (Rust)
```rust
use aoc_2025::timed_run;
use std::collections::HashMap;
use std::collections::HashSet;
use std::io;
use std::vec::Vec;

struct Input {
    graph: HashMap<String, Vec<String>>,
}

impl Input {
    fn read_from_stdin() -> Input {
        Input {
            graph: io::stdin()
                .lines()
                .map(|l| {
                    let tmp = l.unwrap();
                    let (key, values) = tmp.split_once(": ").unwrap();
                    (
                        key.to_string(),
                        values.split(' ').map(|s| s.to_string()).collect(),
                    )
                })
                .chain([("out".to_string(), vec![])])
                .collect(),
        }
    }
}

fn lexicographical_order_from<T: Clone + Eq + std::hash::Hash, F: FnMut(&T)>(
    graph: &HashMap<T, Vec<T>>,
    start: T,
    mut f: F,
) {
    // First pass to compute the number of parents for each node reachable from the starting point.
    let mut num_parents = HashMap::<T, u64>::new();
    let mut visited = HashSet::<T>::new();
    let mut st = vec![start.clone()];
    while !st.is_empty() {
        let cur = st.pop().unwrap();
        if !visited.insert(cur.clone()) {
            continue;
        }
        for child in &graph[&cur] {
            *num_parents.entry(child.clone()).or_insert(0) += 1;
            st.push(child.clone());
        }
    }
    // Second pass for the actual lexicographical order. Only visiting a node once we visited all
    // its parents.
    st.push(start);
    while !st.is_empty() {
        let cur = st.pop().unwrap();
        f(&cur);
        for child in &graph[&cur] {
            let np = num_parents.get_mut(child).unwrap();
            *np -= 1;
            if *np == 0 {
                st.push(child.clone());
            }
        }
    }
}

fn part1(input: &Input) -> u64 {
    let mut num_paths_to = HashMap::<String, u64>::new();
    let start = "you".to_string();
    num_paths_to.insert(start.clone(), 1);
    lexicographical_order_from(&input.graph, start, |n| {
        let num_to_n = num_paths_to[n];
        for child in &input.graph[n] {
            *num_paths_to.entry(child.clone()).or_insert(0) += num_to_n;
        }
    });
    num_paths_to["out"]
}

#[derive(Clone)]
struct PathData {
    none: u64,
    dac: u64,
    fft: u64,
    both: u64,
}

fn part2(input: &Input) -> u64 {
    let mut num_paths_to = HashMap::<String, PathData>::new();
    let start = "svr".to_string();
    num_paths_to.insert(
        start.clone(),
        PathData {
            none: 1,
            dac: 0,
            fft: 0,
            both: 0,
        },
    );
    lexicographical_order_from(&input.graph, start, |n| {
        let num_to_n = num_paths_to[n].clone();
        for child in &input.graph[n] {
            let cnum = num_paths_to.entry(child.clone()).or_insert(PathData {
                none: 0,
                dac: 0,
                fft: 0,
                both: 0,
            });
            match n.as_str() {
                "fft" => {
                    cnum.fft += num_to_n.none;
                    cnum.both += num_to_n.dac;
                }
                "dac" => {
                    cnum.dac += num_to_n.none;
                    cnum.both += num_to_n.fft;
                }
                _ => {
                    cnum.none += num_to_n.none;
                    cnum.dac += num_to_n.dac;
                    cnum.fft += num_to_n.fft;
                    cnum.both += num_to_n.both;
                }
            }
        }
    });
    num_paths_to["out"].both
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
}
```
-
Comment on Day 10: Factory in ~comp.advent_of_code
Thanks for pointing this out! I fixed the message to avoid confusing other people. Hopefully, I did a good enough job to also avoid having people later confused by your message :)
-
Comment on Day 10: Factory in ~comp.advent_of_code
For your BFS, do you mean that each node in your graph represents a list of values corresponding to the number of times each button is pressed?
If that's the case, won't this mean you're going to explore at least all the nodes [x, y, z, ...] with each variable in the range [0, number of times it needs to be pressed in the optimal solution]? The number of such nodes is going to be the product of the values x, y, z, ... for the optimal solution. Unless I have made a mistake somewhere, the worst machine in my input has a product of number of presses in the optimal solution of about 10 billion. And that's not even including all the nodes that don't lead to an optimal solution. It's not obviously yet in the "longer than the age of the universe" range of runtime, but I wouldn't bet that it's going to finish in a reasonable amount of time either, unless visiting each node is really quick. Also, if the cache doesn't flush all the nodes that are not going to be visited again (sum of [x, y, z, ...] less than what you're currently visiting), you're also likely going to end up with memory problems, since storing those 10 billion nodes times something like 8 buttons means storing at least 80GB of data.
-
Comment on Day 10: Factory in ~comp.advent_of_code
Wow, what an escalation in difficulty!
For part 1, it's clear that the order in which buttons are pressed doesn't matter and that it's useless to press a button more than once since pressing twice is equivalent to not pressing and we want to minimize the number of presses. This problem can be represented very efficiently as a bitset and xor operations, where each bit corresponds to a light.
```
button (0, 1, 3) corresponds to 0b1011
button (1, 4) corresponds to 0b10010 (and not 0b10001, as pointed out by zestier)
pressing both buttons turns on lights for 0b1011 ^ 0b10010 = 0b11001
```

My solution was to iterate over all the combinations of pressed and not pressed for each button and see if the xor of all the pressed buttons matches the required light pattern. I think it would be more efficient to iterate over all the permutations of `n` buttons pressed, increasing `n` until we find a match. But with at most 13 buttons in my input, in the worst case I'll test 8192 patterns, which is pretty fast since I'm only doing some basic bit operations for each iteration. Runtime for this was ~500μs.

For Part 2, I'm converting the problem into a system of equations, with one equation for each joltage. For the `n`th joltage, I have an equation resembling `sum b_k = joltage[n]`, where `b_k` represents the number of times the `k`th button is pressed. The sum is only over buttons that turn on the `n`th light. For example:

```
button 0: (0, 1, 3)
button 1: (1, 4)
button 2: (2)
joltages: {1, 2, 3, 5, 7}
```

gives the following equations:

```
b_0 = 1
b_0 + b_1 = 2
b_2 = 3
b_0 = 5
b_1 = 7
```

Obviously, this example doesn't have a solution, but hopefully the idea is clear. Importantly, all the `b_i` are known to be integers and positive or 0.

I have two ways to know for sure that a variable is equal to a given value:

- If any equation is `something = 0`, then all the corresponding variables are 0.
- If any equation has a single variable, `b_i = x`, then I know the value of this variable.
For all the known values, I can substitute them in the existing equations.
Additionally, I also have some logic to create more equations that use the same form (i.e. a sum of variables without any multiplication) by identifying cases where one equation contains all the variables of another equation. For example:
```
b_1 + b_3 + b_4 = 15
b_3 + b_4 = 10
```

This implies that `b_1 = 5`, which feeds into the logic where I know the value of a variable and can further simplify the system.

This is actually enough to solve a good 40% of the machines in my input. However, that's not enough for all of them. So once I have simplified a system as much as possible, I identify the equation with the smallest constant, pick a variable in that equation and try solving the system assuming the picked variable is equal to any value from 0 to the constant. If I get no solution, the system cannot be solved, and if I get multiple solutions, I keep the one leading to the smallest number of button presses.
Importantly, when doing all the operations on the system, I make sure that I don't have any issues, like an equation with a negative constant, or multiple equations with the same variables but different constants. If this happens, I know the system has no solution.
In terms of implementation, the left side of all those equations can be efficiently represented as bitsets. For example, `b_1 + b_3 + b_4` can be represented as `0b11010`. Finding equations with a single variable corresponds to finding numbers that are powers of 2 (but I just use the count_ones() method). Finding equations containing all the variables of another can also be done with a few bit operations: `a & b == a` means `b` contains all the bits that `a` contains.

One optimization I didn't do is to use a bound on the number of button presses when solving the systems. Once I have a solution, if any partial solve would lead to more button presses than the known solution, there is no need to continue solving the system.
Anyway, with all of this, part 2 runs in ~240ms.
Solution (Rust)
```rust
use aoc_2025::timed_run;
use std::collections::HashMap;
use std::fmt::Display;
use std::io;
use std::vec::Vec;

struct Machine {
    lights: u64,
    buttons: Vec<u64>,
    joltages: Vec<u64>,
}

impl Machine {
    fn from_str(l: &str) -> Self {
        let (lights_str, remaining) = l.split_once(' ').unwrap();
        let (buttons_str, joltages_str) = remaining.rsplit_once(' ').unwrap();
        let lights = lights_str
            .strip_prefix('[')
            .unwrap()
            .strip_suffix(']')
            .unwrap()
            .chars()
            .enumerate()
            .fold(0, |lights, (i, c)| match c {
                '#' => lights | (1 << i),
                '.' => lights,
                _ => panic!(),
            });
        let buttons = buttons_str
            .split(' ')
            .map(|b| {
                b.strip_prefix('(')
                    .unwrap()
                    .strip_suffix(')')
                    .unwrap()
                    .split(',')
                    .map(|i| 1 << i.parse::<u64>().unwrap())
                    .reduce(|a, b| a | b)
                    .unwrap()
            })
            .collect();
        let joltages = joltages_str
            .strip_prefix('{')
            .unwrap()
            .strip_suffix('}')
            .unwrap()
            .split(',')
            .map(|i| i.parse::<u64>().unwrap())
            .collect();
        Machine {
            lights,
            buttons,
            joltages,
        }
    }

    fn print<T: Display>(&self, label: T) {
        println!("Machine {label}:");
        println!("  Lights: {:b}", self.lights);
        for (j, b) in self.buttons.iter().enumerate() {
            println!("  Button {j}: {b:b}");
        }
        println!("  Joltages: {:?}", self.joltages);
    }
}

struct Input {
    machines: Vec<Machine>,
}

impl Input {
    fn read_from_stdin() -> Input {
        Input {
            machines: io::stdin()
                .lines()
                .map(|l| Machine::from_str(l.unwrap().as_str()))
                .collect(),
        }
    }
}

fn part1(input: &Input) -> u64 {
    let mut ret = 0;
    for m in &input.machines {
        let mut min_found = m.buttons.len() + 1;
        for i in 0..2_u64.pow(m.buttons.len() as u32) {
            let mut lights = 0;
            let mut tmp = i;
            while tmp != 0 {
                let pos = tmp.trailing_zeros();
                lights ^= m.buttons[pos as usize];
                tmp &= !(1 << pos);
            }
            if lights == m.lights {
                min_found = min_found.min(i.count_ones() as usize);
            }
        }
        if min_found > m.buttons.len() {
            m.print("failed machine");
            panic!();
        }
        assert!(min_found < m.buttons.len());
        ret += min_found;
    }
    ret as u64
}

struct SystemEq {
    equations: HashMap<u32, u64>,
    num_presses: u64,
    num_unsolved: u64,
}

#[derive(Debug)]
struct SystemErr;

impl SystemEq {
    #[allow(dead_code)]
    fn print(&self) {
        println!(
            "  num_presses: {}, num_unsolved: {}",
            self.num_presses, self.num_unsolved
        );
        let num_ones = self
            .equations
            .iter()
            .fold(0, |a, (&k, _)| a | k)
            .count_ones() as u64;
        if num_ones != self.num_unsolved {
            println!(
                "!!!!Incorrect num_unsolved: cached {}, actual {}",
                self.num_unsolved, num_ones
            );
        }
        for (k, v) in &self.equations {
            println!("  {k:013b} = {v}");
        }
    }

    fn clone(&self) -> SystemEq {
        SystemEq {
            equations: self.equations.clone(),
            num_presses: self.num_presses,
            num_unsolved: self.num_unsolved,
        }
    }

    fn checked_insert(&mut self, k: u32, v: u64) -> Result<(), SystemErr> {
        if let Some(old_val) = self.equations.insert(k, v) {
            if old_val != v {
                return Err(SystemErr);
            }
        }
        Ok(())
    }

    // Eliminate the given variable using the given value.
    fn eliminate_single(&mut self, single: u32, val: u64) -> Result<(), SystemErr> {
        self.num_presses += val;
        self.num_unsolved = self.num_unsolved.checked_sub(1).unwrap();
        let updated = self
            .equations
            .extract_if(|k, _| k & single != 0)
            .collect::<Vec<_>>();
        for (k, v) in updated {
            if k != single {
                let new_val = v.checked_sub(val).ok_or(SystemErr)?;
                self.checked_insert(k & !single, new_val)?;
            } else if v != val {
                return Err(SystemErr);
            }
        }
        Ok(())
    }

    // Eliminate all variables involved in equations that are sum(xs) = 0.
    fn eliminate_nulls(&mut self) -> Result<(), SystemErr> {
        let mut nulls = self
            .equations
            .extract_if(|k, v| *v == 0 && k.count_ones() > 1)
            .fold(0, |a, (k, _)| a | k);
        while nulls != 0 {
            let pos = nulls.trailing_zeros();
            let single = 1 << pos;
            self.eliminate_single(single, 0)?;
            nulls &= !single;
        }
        Ok(())
    }

    // Eliminate all variables involved in equations with a single variable. Returns true if it
    // eliminated anything.
    fn eliminate_singles(&mut self) -> Result<bool, SystemErr> {
        let singles = self
            .equations
            .extract_if(|k, _| k.count_ones() == 1)
            .collect::<Vec<_>>();
        let has_singles = !singles.is_empty();
        for (single, val) in singles {
            self.eliminate_single(single, val)?;
        }
        Ok(has_singles)
    }

    // Tries to generate equations by subtracting one from another. Returns true if it generated
    // anything.
    fn generate_eqs(&mut self) -> Result<bool, SystemErr> {
        let mut added = false;
        let copied_eqs = self
            .equations
            .iter()
            .map(|(k, v)| (*k, *v))
            .collect::<Vec<_>>();
        for (k, v) in copied_eqs {
            let new_eqs = self
                .equations
                .iter()
                .filter(|(nk, _)| {
                    **nk != k && *nk & k == k && !self.equations.contains_key(&(*nk & !k))
                })
                .map(|(nk, nv)| (nk & !k, nv.checked_sub(v)))
                .collect::<Vec<_>>();
            added = added || !new_eqs.is_empty();
            for (nk, nv) in new_eqs {
                self.checked_insert(nk, nv.ok_or(SystemErr)?)?;
            }
        }
        Ok(added)
    }

    // Tries applying all simplification steps until nothing more can be done.
    fn simplify(&mut self) -> Result<(), SystemErr> {
        loop {
            self.eliminate_nulls()?;
            if self.eliminate_singles()? {
                continue;
            }
            if !self.generate_eqs()? {
                break;
            }
        }
        Ok(())
    }

    fn solve(&mut self) -> Result<(), SystemErr> {
        self.simplify()?;
        if self.num_unsolved == 0 {
            return Ok(());
        }
        let mut min_val = u64::MAX;
        let mut key_for_min = None;
        for (&k, &v) in &self.equations {
            if v < min_val {
                min_val = v;
                key_for_min = Some(k);
            }
        }
        let single = 1 << key_for_min.unwrap().trailing_zeros();
        let mut min_found = None;
        for i in 0..=min_val {
            let mut new_sys = self.clone();
            if new_sys.eliminate_single(single, i).is_err() {
                continue;
            }
            if new_sys.solve().is_err() {
                continue;
            }
            min_found = Some(
                min_found.map_or(new_sys.num_presses, |min: u64| min.min(new_sys.num_presses)),
            );
        }
        if let Some(min) = min_found {
            self.num_presses = min;
            self.num_unsolved = 0;
            Ok(())
        } else {
            Err(SystemErr)
        }
    }
}

fn part2(input: &Input) -> u64 {
    let mut ret = 0;
    for (mi, m) in input.machines.iter().enumerate() {
        let mut start_equations = vec![0; m.joltages.len()];
        for (i, b) in m.buttons.iter().enumerate() {
            let mut tmp = *b;
            while tmp != 0 {
                let pos = tmp.trailing_zeros();
                start_equations[pos as usize] |= 1 << i;
                tmp &= !(1 << pos);
            }
        }
        let mut equations = HashMap::<u32, u64>::new();
        for (i, e) in start_equations.into_iter().enumerate() {
            equations.insert(e, m.joltages[i]);
        }
        let mut system = SystemEq {
            equations,
            num_presses: 0,
            num_unsolved: m.buttons.len() as u64,
        };
        if system.solve().is_err() {
            m.print(mi);
            println!("  Current state:");
            for (k, v) in system.equations {
                println!("  {k:013b} = {v}");
            }
            panic!("failed to find a solution");
        }
        ret += system.num_presses;
    }
    ret
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
}
```

I'd be curious if anyone finds a nicer solution for part 2, outside of just calling any off-the-shelf linear integer programming solver.
-
Comment on Day 9: Movie Theater in ~comp.advent_of_code
The idea was that building an in-memory map of all the points that are inside vs outside would require way too much memory using the provided coordinates. However, the compressed model fits in a 500x500 map, which is trivial to fit in memory. If the compression is correctly done, then a rectangle that is fully inside in the compressed version is also going to be fully inside in the uncompressed version. Thus, it's possible to use the naïve approach of just testing all possible rectangles and checking if they are contained in the perimeter. You just need to be careful to use the real coordinates to compute the area of the rectangle instead of the compressed coordinates.
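The gap-preserving compression step can be sketched like this (a minimal standalone illustration, not the exact `remap_coords` helper from the posted solution; the key property is that any gap of 2 or more keeps one spare column, so "inside" space survives compression):

```rust
use std::collections::HashMap;

// Compress coordinates so equal values map together, adjacent values stay
// adjacent, and any larger gap collapses to a gap of exactly 2 (one spare
// column represents the whole gap).
fn compress(coords: &[usize]) -> HashMap<usize, usize> {
    let mut sorted: Vec<usize> = coords.to_vec();
    sorted.sort();
    sorted.dedup();
    let mut map = HashMap::new();
    let mut next = 0;
    for (i, &c) in sorted.iter().enumerate() {
        if i > 0 {
            // Keep a width-1 placeholder for any gap larger than 1.
            next += if sorted[i - 1] + 1 == c { 1 } else { 2 };
        }
        map.insert(c, next);
    }
    map
}

fn main() {
    let m = compress(&[5, 100, 7, 6]);
    assert_eq!(m[&5], 0);
    assert_eq!(m[&6], 1);
    assert_eq!(m[&7], 2);
    assert_eq!(m[&100], 4); // the 8..=99 gap collapsed to one spare column
    println!("ok");
}
```

Rectangles compared in the compressed grid then have the same containment relationships as in the original grid, while areas must still be computed from the original coordinates.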
-
Comment on Day 9: Movie Theater in ~comp.advent_of_code
I thought about the same idea (checking if we're crossing lines), but I feared that it could yield false positives in cases where we have two lines without any gap between them. Something like this:
```
0,3 2,3 2,10 3,10 3,0 5,0

   1
   |
   2------3
5---------4
|
6
```

with the inside on the right side.
As often, the inputs tend to have nice properties not mentioned in the problem statement that allow for some approaches that don't work in the general case.
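For reference, the textbook crossing-count (even-odd) test being discussed can be sketched like this for an axis-aligned polygon. The half-open interval on y handles a ray passing through a shared vertex, but the stacked-edges case in the example above would still need extra care, which this sketch deliberately omits:

```rust
// Even-odd ("crossing number") point-in-polygon test: cast a ray in the +x
// direction and count crossings with the polygon's vertical edges.
fn inside(px: i64, py: i64, vertices: &[(i64, i64)]) -> bool {
    let mut crossings = 0;
    for i in 0..vertices.len() {
        let (x0, y0) = vertices[i];
        let (x1, y1) = vertices[(i + 1) % vertices.len()];
        // Only vertical edges strictly to the right of the point can cross the ray.
        if x0 == x1 && x0 > px {
            let (lo, hi) = (y0.min(y1), y0.max(y1));
            // Half-open interval so a shared endpoint is counted exactly once.
            if lo <= py && py < hi {
                crossings += 1;
            }
        }
    }
    crossings % 2 == 1
}

fn main() {
    // Axis-aligned square with corners (0,0) and (10,10).
    let sq = [(0, 0), (10, 0), (10, 10), (0, 10)];
    assert!(inside(5, 5, &sq));
    assert!(!inside(15, 5, &sq));
    println!("ok");
}
```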
-
Comment on Day 9: Movie Theater in ~comp.advent_of_code
First day where I don't immediately see how to solve the problem efficiently. In the end, my idea of compressing the coordinates for visualization purposes turned out to actually be a key idea to quickly solve part 2.
Unfortunately, I had a stupid bug in my computation for the size of rectangles (doing abs(a-b+1) instead of abs(a-b)+1), and that didn't matter for either the examples or part 1. If it wasn't for @lily's posted solution, which allowed me to figure out that none of the rectangles matched the size of the expected answer, I don't know how I would have found this bug.
After fixing the bug, I got the right answer in ~9ms.
Another idea for part 2 is that all the outside points are visible from the border of the map. So it's sufficient to keep track of the min and max x for all y and y for all x. Then, it's possible to check whether the edges of the rectangle are contained within those min/max values. Without coordinate compression, this approach takes ~1s. With coordinate compression, it's down to ~5ms.
Solution (Rust)
```rust
use aoc_2025::timed_run;
use std::collections::HashMap;
use std::io;
use std::vec::Vec;

struct Point {
    x: usize,
    y: usize,
}

fn side_length(a: usize, b: usize) -> usize {
    if a < b {
        b - a + 1
    } else {
        a - b + 1
    }
}

impl Point {
    fn square_with(&self, other: &Self) -> usize {
        side_length(self.x, other.x) * side_length(self.y, other.y)
    }
}

struct Input {
    points: Vec<Point>,
}

impl Input {
    fn read_from_stdin() -> Input {
        Input {
            points: io::stdin()
                .lines()
                .map(|l| {
                    let ns = l
                        .unwrap()
                        .split(',')
                        .map(|n| n.parse::<usize>().unwrap())
                        .collect::<Vec<_>>();
                    assert_eq!(ns.len(), 2);
                    Point { x: ns[0], y: ns[1] }
                })
                .collect(),
        }
    }
}

fn part1(input: &Input) -> u64 {
    let mut ret = 0;
    for i in 0..input.points.len() {
        for j in i + 1..input.points.len() {
            ret = ret.max(input.points[i].square_with(&input.points[j]));
        }
    }
    ret as u64
}

// Compresses the coordinates such that any gap larger than 2 will end up as a gap of size 2.
fn remap_coords<F: Fn(&Point) -> usize>(input: &Input, f: F) -> (HashMap<usize, usize>, usize) {
    let mut ret = HashMap::new();
    let mut coords = vec![];
    for p in &input.points {
        coords.push(f(p));
    }
    coords.sort();
    let mut new_coord = 0;
    ret.insert(coords[0], new_coord);
    for i in 1..coords.len() {
        let prev = coords[i - 1];
        let cur = coords[i];
        if prev == cur {
            continue;
        }
        new_coord += if prev + 2 <= cur { 2 } else { 1 };
        ret.insert(coords[i], new_coord);
    }
    (ret, new_coord)
}

fn part2(input: &Input) -> u64 {
    let (new_x, max_x) = remap_coords(input, |p| p.x);
    let (new_y, max_y) = remap_coords(input, |p| p.y);
    let remap_point = |p: &Point| Point {
        x: *new_x.get(&p.x).unwrap(),
        y: *new_y.get(&p.y).unwrap(),
    };
    // Create the perimeter in the map.
    let mut map = vec![vec![b'.'; max_y + 1]; max_x + 1];
    for i in 1..=input.points.len() {
        let prev = remap_point(&input.points[i - 1]);
        let cur = remap_point(&input.points[i % input.points.len()]);
        if prev.x == cur.x {
            for y in prev.y.min(cur.y)..=prev.y.max(cur.y) {
                map[prev.x][y] = b'#';
            }
        } else {
            for x in prev.x.min(cur.x)..=prev.x.max(cur.x) {
                map[x][prev.y] = b'#';
            }
        }
    }
    // Fill the inside of the perimeter.
    let mut to_color = vec![if max_x < 100 {
        Point { x: 1, y: 3 }
    } else {
        Point { x: 100, y: 180 }
    }];
    while !to_color.is_empty() {
        let p = to_color.pop().unwrap();
        if std::mem::replace(&mut map[p.x][p.y], b'#') != b'.' {
            continue;
        }
        for np in [
            Point { x: p.x - 1, y: p.y },
            Point { x: p.x + 1, y: p.y },
            Point { x: p.x, y: p.y - 1 },
            Point { x: p.x, y: p.y + 1 },
        ] {
            if map[np.x][np.y] == b'.' {
                to_color.push(np);
            }
        }
    }
    // Find the largest rectangle that fits fully inside the perimeter.
    let mut ret = 0;
    for i in 0..input.points.len() {
        'next_point: for j in i + 1..input.points.len() {
            let pi = &input.points[i];
            let pj = &input.points[j];
            let new_size = pi.square_with(pj);
            if new_size <= ret {
                continue;
            }
            let ri = remap_point(pi);
            let rj = remap_point(pj);
            let min = Point {
                x: ri.x.min(rj.x),
                y: ri.y.min(rj.y),
            };
            let max = Point {
                x: ri.x.max(rj.x),
                y: ri.y.max(rj.y),
            };
            for x in min.x..=max.x {
                if map[x][min.y] == b'.' || map[x][max.y] == b'.' {
                    continue 'next_point;
                }
            }
            for y in min.y..=max.y {
                if map[min.x][y] == b'.' || map[max.x][y] == b'.' {
                    continue 'next_point;
                }
            }
            ret = new_size;
        }
    }
    ret as u64
}

fn part2_slow(input: &Input) -> u64 {
    let mut max_x_val = 0;
    let mut max_y_val = 0;
    for p in &input.points {
        max_x_val = max_x_val.max(p.x);
        max_y_val = max_y_val.max(p.y);
    }
    // Get the min/max values for each x and y coordinates. This doesn't work in the general case,
    // but should work with the kind of input we have.
    let mut min_x = vec![max_x_val; max_y_val + 1];
    let mut max_x = vec![0; max_y_val + 1];
    let mut min_y = vec![max_y_val; max_x_val + 1];
    let mut max_y = vec![0; max_x_val + 1];
    for i in 1..=input.points.len() {
        let prev = &input.points[i - 1];
        let cur = &input.points[i % input.points.len()];
        let min = Point {
            x: prev.x.min(cur.x),
            y: prev.y.min(cur.y),
        };
        let max = Point {
            x: prev.x.max(cur.x),
            y: prev.y.max(cur.y),
        };
        if prev.x == cur.x {
            min_y[prev.x] = min_y[prev.x].min(min.y);
            max_y[prev.x] = max_y[prev.x].max(max.y);
            for y in min.y..=max.y {
                min_x[y] = min_x[y].min(min.x);
                max_x[y] = max_x[y].max(max.x);
            }
        } else {
            assert_eq!(prev.y, cur.y);
            min_x[prev.y] = min_x[prev.y].min(min.x);
            max_x[prev.y] = max_x[prev.y].max(max.x);
            for x in min.x..=max.x {
                min_y[x] = min_y[x].min(min.y);
                max_y[x] = max_y[x].max(max.y);
            }
        }
    }
    // Find the largest rectangle that fits fully inside the boundaries.
    let mut ret = 0;
    for i in 0..input.points.len() {
        'next_point: for j in i + 1..input.points.len() {
            let pi = &input.points[i];
            let pj = &input.points[j];
            let new_size = pi.square_with(pj);
            if new_size <= ret {
                continue;
            }
            let min = Point {
                x: pi.x.min(pj.x),
                y: pi.y.min(pj.y),
            };
            let max = Point {
                x: pi.x.max(pj.x),
                y: pi.y.max(pj.y),
            };
            for x in min.x..=max.x {
                if min.y < min_y[x] || max.y > max_y[x] {
                    continue 'next_point;
                }
            }
            for y in min.y..=max.y {
                if min.x < min_x[y] || max.x > max_x[y] {
                    continue 'next_point;
                }
            }
            ret = new_size;
        }
    }
    ret as u64
}

fn part2_combined(input: &Input) -> u64 {
    let (new_x, max_x) = remap_coords(input, |p| p.x);
    let (new_y, max_y) = remap_coords(input, |p| p.y);
    let remap_point = |p: &Point| Point {
        x: *new_x.get(&p.x).unwrap(),
        y: *new_y.get(&p.y).unwrap(),
    };
    let mut max_x_val = 0;
    let mut max_y_val = 0;
    for p in &input.points {
        let rp = remap_point(p);
        max_x_val = max_x_val.max(rp.x);
        max_y_val = max_y_val.max(rp.y);
    }
    // Get the min/max values for each x and y coordinates. This doesn't work in the general case,
    // but should work with the kind of input we have.
    let mut min_x = vec![max_x_val; max_y_val + 1];
    let mut max_x = vec![0; max_y_val + 1];
    let mut min_y = vec![max_y_val; max_x_val + 1];
    let mut max_y = vec![0; max_x_val + 1];
    for i in 1..=input.points.len() {
        let prev = remap_point(&input.points[i - 1]);
        let cur = remap_point(&input.points[i % input.points.len()]);
        let min = Point {
            x: prev.x.min(cur.x),
            y: prev.y.min(cur.y),
        };
        let max = Point {
            x: prev.x.max(cur.x),
            y: prev.y.max(cur.y),
        };
        if prev.x == cur.x {
            min_y[prev.x] = min_y[prev.x].min(min.y);
            max_y[prev.x] = max_y[prev.x].max(max.y);
            for y in min.y..=max.y {
                min_x[y] = min_x[y].min(min.x);
                max_x[y] = max_x[y].max(max.x);
            }
        } else {
            assert_eq!(prev.y, cur.y);
            min_x[prev.y] = min_x[prev.y].min(min.x);
            max_x[prev.y] = max_x[prev.y].max(max.x);
            for x in min.x..=max.x {
                min_y[x] = min_y[x].min(min.y);
                max_y[x] = max_y[x].max(max.y);
            }
        }
    }
    // Find the largest rectangle that fits fully inside the boundaries.
    let mut ret = 0;
    for i in 0..input.points.len() {
        'next_point: for j in i + 1..input.points.len() {
            let pi = &input.points[i];
            let pj = &input.points[j];
            let new_size = pi.square_with(pj);
            if new_size <= ret {
                continue;
            }
            let ri = remap_point(pi);
            let rj = remap_point(pj);
            let min = Point {
                x: ri.x.min(rj.x),
                y: ri.y.min(rj.y),
            };
            let max = Point {
                x: ri.x.max(rj.x),
                y: ri.y.max(rj.y),
            };
            for x in min.x..=max.x {
                if min.y < min_y[x] || max.y > max_y[x] {
                    continue 'next_point;
                }
            }
            for y in min.y..=max.y {
                if min.x < min_x[y] || max.x > max_x[y] {
                    continue 'next_point;
                }
            }
            ret = new_size;
        }
    }
    ret as u64
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
    timed_run("Part 2 slow", || part2_slow(&input));
    timed_run("Part 2 combined", || part2_combined(&input));
}
```
-
Comment on Day 8: Playground in ~comp.advent_of_code
Berdes Link
Clear ramp up in difficulty compared to the previous days. I like it!
For part 1, I initially used a max heap to keep track of the smallest 1000 pairs seen so far, followed by a DSU/union-find data structure to keep track of which junctions are in the same connected component and of each component's size. Finally, I used a min heap to find the 3 largest connected components. However, it's much better to replace both heaps with plain vectors and use quickselect to get the best N elements, since we don't care about their order. In my case, the runtime went from ~14ms down to ~3ms when switching to quickselect.
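The heap-to-quickselect swap boils down to the standard `select_nth_unstable`; a minimal sketch with illustrative names, not the exact solution code:

```rust
// Keep the n smallest values of `values`, in arbitrary order, without a heap.
// select_nth_unstable partitions in place in O(len) on average.
fn smallest_n(mut values: Vec<u64>, n: usize) -> Vec<u64> {
    if values.len() > n {
        values.select_nth_unstable(n); // values[..n] now holds the n smallest
        values.truncate(n);
    }
    values
}

fn main() {
    let mut picked = smallest_n(vec![9, 2, 7, 4, 1, 8], 3);
    picked.sort(); // the selected elements come back in unspecified order
    assert_eq!(picked, vec![1, 2, 4]);
    println!("ok");
}
```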
Part 2 can be done by sorting all the pairs and using a DSU again to figure out when the current connected component includes all the junctions (based on its size). However, this can be further optimized by sorting lazily while iterating over the pairs and stopping early once all the junctions are connected. This optimization pushed the runtime from ~22ms down to ~4.7ms.
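The early stop is essentially Kruskal-style: feed edges to a union-find in increasing order of distance and bail out as soon as one component spans every junction. A minimal sketch, with an illustrative `Dsu` rather than the exact solution type:

```rust
// Union-find with path compression and union by size.
struct Dsu {
    parent: Vec<usize>,
    size: Vec<usize>,
}

impl Dsu {
    fn new(n: usize) -> Self {
        Dsu { parent: (0..n).collect(), size: vec![1; n] }
    }
    fn find(&mut self, i: usize) -> usize {
        if self.parent[i] != i {
            self.parent[i] = self.find(self.parent[i]);
        }
        self.parent[i]
    }
    // Merges the two components and returns the merged component's size.
    fn join(&mut self, a: usize, b: usize) -> usize {
        let (mut ra, mut rb) = (self.find(a), self.find(b));
        if ra != rb {
            if self.size[ra] < self.size[rb] {
                std::mem::swap(&mut ra, &mut rb);
            }
            self.parent[rb] = ra;
            self.size[ra] += self.size[rb];
        }
        self.size[ra]
    }
}

fn main() {
    let mut dsu = Dsu::new(4);
    // Edges already sorted by weight: stop at the one that connects all 4 nodes.
    let edges = [(0, 1), (2, 3), (1, 2), (0, 3)];
    let mut last = None;
    for &(a, b) in &edges {
        if dsu.join(a, b) == 4 {
            last = Some((a, b));
            break;
        }
    }
    assert_eq!(last, Some((1, 2)));
    println!("ok");
}
```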
Solution Rust
use aoc_2025::timed_run;
use std::cmp::Ordering;
use std::cmp::Reverse;
use std::collections::BinaryHeap;
use std::io;
use std::vec::Vec;

struct Point {
    x: u64,
    y: u64,
    z: u64,
}

impl Point {
    fn distance_to(&self, other: &Self) -> u64 {
        // abs_diff avoids underflow when a coordinate of `other` is larger.
        self.x.abs_diff(other.x).pow(2)
            + self.y.abs_diff(other.y).pow(2)
            + self.z.abs_diff(other.z).pow(2)
    }
}

struct Input {
    points: Vec<Point>,
}

impl Input {
    fn read_from_stdin() -> Input {
        Input {
            points: io::stdin()
                .lines()
                .map(|l| {
                    l.unwrap()
                        .split(',')
                        .map(|n| n.parse::<u64>().unwrap())
                        .collect::<Vec<_>>()
                })
                .map(|ns| {
                    assert_eq!(ns.len(), 3);
                    Point {
                        x: ns[0],
                        y: ns[1],
                        z: ns[2],
                    }
                })
                .collect(),
        }
    }
}

struct PointPair {
    p1: usize,
    p2: usize,
    d: u64,
}

impl PartialEq for PointPair {
    fn eq(&self, other: &Self) -> bool {
        self.d.eq(&other.d)
    }
}

impl Eq for PointPair {}

impl PartialOrd for PointPair {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        self.d.partial_cmp(&other.d)
    }
}

impl Ord for PointPair {
    fn cmp(&self, other: &Self) -> Ordering {
        self.d.cmp(&other.d)
    }
}

impl PointPair {
    fn clone(&self) -> Self {
        Self {
            p1: self.p1,
            p2: self.p2,
            d: self.d,
        }
    }
}

struct DsuElement {
    parent: usize,
    size: usize,
}

struct DSU {
    map: Vec<DsuElement>,
}

impl DSU {
    fn new(size: usize) -> DSU {
        let mut map = vec![];
        map.reserve(size);
        for i in 0..size {
            map.push(DsuElement { parent: i, size: 1 })
        }
        DSU { map }
    }

    fn find(&mut self, i: usize) -> usize {
        let mut root = self.map[i].parent;
        if root != i {
            root = self.find(root);
            self.map[i].parent = root;
        }
        root
    }

    // Returns the new root.
    fn join(&mut self, i: usize, j: usize) -> usize {
        let mut root_i = self.find(i);
        let mut root_j = self.find(j);
        if root_i != root_j {
            if self.map[root_i].size < self.map[root_j].size {
                (root_i, root_j) = (root_j, root_i);
            }
            self.map[root_j].parent = root_i;
            self.map[root_i].size += self.map[root_j].size;
        }
        root_i
    }
}

fn part1(input: &Input) -> u64 {
    let mut smallest_pairs = BinaryHeap::new();
    for i in 0..input.points.len() {
        for j in i + 1..input.points.len() {
            let d = input.points[i].distance_to(&input.points[j]);
            smallest_pairs.push(PointPair { p1: i, p2: j, d });
            if smallest_pairs.len() > 1000 {
                smallest_pairs.pop();
            }
        }
    }
    let mut sets = DSU::new(input.points.len());
    for pair in smallest_pairs {
        sets.join(pair.p1, pair.p2);
    }
    let mut largest_circuits = BinaryHeap::new();
    for i in 0..input.points.len() {
        let e = &sets.map[i];
        if e.parent == i {
            largest_circuits.push((Reverse(e.size), e.parent));
            if largest_circuits.len() > 3 {
                largest_circuits.pop();
            }
        }
    }
    let mut ret = 1;
    for (Reverse(size), _) in largest_circuits {
        ret *= size;
    }
    ret as u64
}

fn all_pairs(input: &Input) -> Vec<PointPair> {
    let mut pairs = vec![];
    pairs.reserve(input.points.len().pow(2));
    for i in 0..input.points.len() {
        for j in i + 1..input.points.len() {
            let d = input.points[i].distance_to(&input.points[j]);
            pairs.push(PointPair { p1: i, p2: j, d });
        }
    }
    pairs
}

fn part1_select(input: &Input) -> u64 {
    let mut pairs = all_pairs(input);
    let (smallest_pairs, _, _) = pairs.select_nth_unstable(1000);
    let mut sets = DSU::new(input.points.len());
    for pair in smallest_pairs {
        sets.join(pair.p1, pair.p2);
    }
    let mut circuits = vec![];
    circuits.reserve(input.points.len());
    for i in 0..input.points.len() {
        let e = &sets.map[i];
        if e.parent == i {
            circuits.push((Reverse(e.size), e.parent));
        }
    }
    let (largest_circuits, _, _) = circuits.select_nth_unstable(3);
    let mut ret = 1;
    for (Reverse(size), _) in largest_circuits {
        ret *= *size;
    }
    ret as u64
}

fn part2(input: &Input) -> u64 {
    let mut pairs = all_pairs(input);
    pairs.sort();
    let mut sets = DSU::new(input.points.len());
    for pair in pairs {
        let root = sets.join(pair.p1, pair.p2);
        if sets.map[root].size == input.points.len() {
            return input.points[pair.p1].x * input.points[pair.p2].x;
        }
    }
    panic!("unreachable");
}

// Sorts the given slice in-place and calls f on the sorted elements as we go. Stops early if f
// returns false, in which case only the elements already seen by f are guaranteed to be sorted.
fn iterate_sorted<F: FnMut(&PointPair) -> bool>(pairs: &mut [PointPair], f: &mut F) -> bool {
    if pairs.len() <= 32 {
        pairs.sort();
        for p in pairs {
            if !f(p) {
                return false;
            }
        }
        return true;
    } else {
        let pivot = pairs[0].clone();
        let mut i = 1;
        let mut large_idx = pairs.len() - 1;
        while i < large_idx {
            if pairs[i] < pivot {
                i += 1;
            } else {
                pairs.swap(i, large_idx);
                large_idx -= 1;
            }
        }
        if pairs[i] < pivot {
            pairs.swap(0, i);
        }
        iterate_sorted(&mut pairs[..i], f) && iterate_sorted(&mut pairs[i..], f)
    }
}

fn part2_iterate_sorted(input: &Input) -> u64 {
    let mut ret = 0;
    let mut sets = DSU::new(input.points.len());
    iterate_sorted(&mut all_pairs(input), &mut |pair: &PointPair| {
        let root = sets.join(pair.p1, pair.p2);
        if sets.map[root].size == input.points.len() {
            ret = input.points[pair.p1].x * input.points[pair.p2].x;
            return false;
        }
        true
    });
    ret
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 1 select", || part1_select(&input));
    timed_run("Part 2", || part2(&input));
    timed_run("Part 2 iterate sorted", || part2_iterate_sorted(&input));
}
-
Comment on Day 7: Laboratories in ~comp.advent_of_code
Berdes Link
When I saw the problem, I could smell what part 2 would involve from a mile away. I feel like the problem could have been made a bit more interesting if they also had beams going at an angle, like a splitter that splits incoming vertical rays into two rays going at 45° and straightens incoming 45° beams.
I experimented with a bunch of small optimizations, like swapping my two vectors instead of discarding one every time, or keeping track of the range of columns that might contain a ray for the inner loop, but neither led to a significant enough improvement to really justify the increased complexity.
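Distilled to its core, part 2 is a per-column counting DP: each row maps the vector of beam counts to the next row's counts, and a splitter adds its incoming count to both neighbours. A self-contained sketch of just that step (illustrative, and assuming splitters never sit on an edge column):

```rust
// One DP step: given a row of the grid and the incoming per-column beam
// counts, compute the outgoing counts.
fn step(row: &[u8], beams: &[u64]) -> Vec<u64> {
    let mut next = vec![0u64; beams.len()];
    for (i, &c) in row.iter().enumerate() {
        match c {
            b'.' => next[i] += beams[i],       // beam passes straight through
            b'S' => next[i] = 1,               // source starts a single beam
            b'^' => {
                next[i - 1] += beams[i];       // splitter: count flows left...
                next[i + 1] += beams[i];       // ...and right
            }
            _ => unreachable!(),
        }
    }
    next
}

fn main() {
    let beams = step(b".S.", &[0, 0, 0]);
    assert_eq!(beams, vec![0, 1, 0]);
    let beams = step(b".^.", &beams);
    assert_eq!(beams, vec![1, 0, 1]);
    println!("ok");
}
```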
Solution (Rust)
use aoc_2025::timed_run;
use std::io;
use std::vec::Vec;

struct Input {
    width: usize,
    lines: Vec<String>,
}

impl Input {
    fn read_from_stdin() -> Input {
        let lines: Vec<String> = io::stdin().lines().map(|l| l.unwrap()).collect();
        assert!(lines.len() > 0);
        let width = lines[0].len();
        for l in &lines {
            assert_eq!(width, l.len());
        }
        Input { width, lines }
    }
}

fn part1(input: &Input) -> u64 {
    let mut ret = 0;
    let mut beams = vec![false; input.width];
    for l in &input.lines {
        let mut new_beams = vec![false; input.width];
        for i in 0..input.width {
            match l.as_bytes()[i] {
                b'.' => new_beams[i] = new_beams[i] || beams[i],
                b'S' => new_beams[i] = true,
                b'^' => {
                    if beams[i] {
                        new_beams[i - 1] = true;
                        new_beams[i + 1] = true;
                        ret += 1;
                    }
                }
                _ => panic!(""),
            }
        }
        beams = new_beams;
    }
    ret
}

fn part2(input: &Input) -> u64 {
    let mut beams = vec![0; input.width];
    for l in &input.lines {
        let mut new_beams = vec![0; input.width];
        for i in 0..input.width {
            match l.as_bytes()[i] {
                b'.' => new_beams[i] += beams[i],
                b'S' => new_beams[i] = 1,
                b'^' => {
                    new_beams[i - 1] += beams[i];
                    new_beams[i + 1] += beams[i];
                }
                _ => panic!(""),
            }
        }
        beams = new_beams;
    }
    let mut ret = 0;
    for n in beams {
        ret += n;
    }
    ret
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
}
-
Comment on Day 6: Trash Compactor in ~comp.advent_of_code
Berdes Link
Not the most interesting day for me since this was only about parsing the input the right way, without any shred of thinking to be had on the algorithm side.
What's funny is that it would have been much faster for me to tackle the second part first: the first part is just the second part with the two number-parsing loops swapped.
Solution (Rust)
use aoc_2025::timed_run;
use std::io;
use std::vec::Vec;

enum Op {
    Add,
    Mul,
}

impl Op {
    fn from_str(s: &str) -> Op {
        assert_eq!(s.len(), 1);
        match s {
            "+" => Op::Add,
            "*" => Op::Mul,
            _ => panic!("Bad op {}", s),
        }
    }

    fn neutral_element(&self) -> u64 {
        match self {
            Op::Add => 0,
            Op::Mul => 1,
        }
    }

    fn apply(&self, lhs: u64, rhs: u64) -> u64 {
        match self {
            Op::Add => lhs + rhs,
            Op::Mul => lhs * rhs,
        }
    }
}

struct Input {
    len: usize,
    numbers: Vec<Vec<u64>>,
    operators: Vec<Op>,
    raw_len: usize,
    raw_numbers: Vec<String>,
    raw_ops: String,
}

impl Input {
    fn read_from_stdin() -> Input {
        let raw_lines: Vec<String> = io::stdin().lines().map(|l| l.unwrap()).collect();
        assert!(raw_lines.len() > 0);
        let raw_len = raw_lines[0].len();
        for i in 1..raw_lines.len() {
            assert_eq!(raw_len, raw_lines[i].len());
        }
        let raw_numbers = raw_lines[..raw_lines.len() - 1].to_vec();
        let raw_ops = raw_lines.last().expect("").clone();
        let mut lines: Vec<Vec<String>> = raw_lines
            .into_iter()
            .map(|l| l.split_ascii_whitespace().map(|s| s.to_string()).collect())
            .collect();
        let len = lines[0].len();
        for i in 1..lines.len() {
            assert_eq!(len, lines[i].len());
        }
        let op_line = lines.pop().unwrap();
        Input {
            len,
            numbers: lines
                .into_iter()
                .map(|l| l.into_iter().map(|n| n.parse::<u64>().expect("")).collect())
                .collect(),
            operators: op_line
                .into_iter()
                .map(|op| Op::from_str(op.as_str()))
                .collect(),
            raw_len,
            raw_numbers,
            raw_ops,
        }
    }
}

fn part1(input: &Input) -> u64 {
    let mut ret = 0;
    for i in 0..input.len {
        let op = &input.operators[i];
        let mut tmp = op.neutral_element();
        for ns in &input.numbers {
            tmp = op.apply(tmp, ns[i]);
        }
        ret += tmp;
    }
    ret
}

fn part2(input: &Input) -> u64 {
    let mut ret = 0;
    let mut i = 0;
    while i < input.raw_len {
        let op = Op::from_str(std::str::from_utf8(&input.raw_ops.as_bytes()[i..i + 1]).expect(""));
        let mut number_end = input.raw_len;
        for j in i + 1..input.raw_len {
            if input.raw_ops.as_bytes()[j] != b' ' {
                number_end = j - 1;
                break;
            }
        }
        let mut tmp = op.neutral_element();
        for j in i..number_end {
            let mut n: u64 = 0;
            for l in &input.raw_numbers {
                let c = l.as_bytes()[j];
                if c != b' ' {
                    n = n * 10 + (c - b'0') as u64;
                }
            }
            tmp = op.apply(tmp, n);
        }
        ret += tmp;
        i = number_end + 1;
    }
    ret
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1", || part1(&input));
    timed_run("Part 2", || part2(&input));
}
-
Comment on Day 5: Cafeteria in ~comp.advent_of_code
Berdes Link Parent
So, I managed to implement two solutions using BTreeMap, which required using the nightly build to get access to the Cursor API.
The first approach was to use the key to represent a bound and the value to represent whether it's a starting or an ending point. When trying to find whether an ID is part of a range, I can query for the largest key that is smaller than or equal to the ID: if it corresponds to a starting point, I know my ID is inside a range. One detail with this approach is that I needed to store the ending points with the value + 1 (i.e. the ending point is excluded from the range), otherwise ranges like [5, 5] would collide on the key 5. A side benefit of having exclusive ending points is that adjacent ranges still get merged: [3, 5] and [6, 8] become [3, 8].
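On stable Rust, the "largest key smaller than or equal to the ID" query doesn't need the cursor API: `BTreeMap::range(..=id).next_back()` does the same lookup. A sketch with illustrative names (`true` marks a start bound, `false` an exclusive end bound):

```rust
use std::collections::BTreeMap;

// Is `id` inside one of the encoded ranges? Look at the largest key <= id:
// if it's a start bound, we're inside a range.
fn inside(bounds: &BTreeMap<u64, bool>, id: u64) -> bool {
    matches!(bounds.range(..=id).next_back(), Some((_, &true)))
}

fn main() {
    let mut bounds = BTreeMap::new();
    // Encodes the single range [3, 5]: start at key 3, exclusive end at key 6.
    bounds.insert(3u64, true);
    bounds.insert(6u64, false);
    assert!(inside(&bounds, 3));
    assert!(inside(&bounds, 5));
    assert!(!inside(&bounds, 6));
    assert!(!inside(&bounds, 2));
    println!("ok");
}
```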
Using start and end points as keys
#[derive(Debug)]
enum Node {
    Start,
    End,
}

fn merge_ranges_btree_double(ranges: &Vec<Range>) -> BTreeMap<u64, Node> {
    let mut merged = BTreeMap::<u64, Node>::new();
    for r in ranges {
        let mut cursor = merged.upper_bound_mut(Bound::Excluded(&r.start));
        while cursor
            .peek_next()
            .map_or(false, |(nk, _)| nk <= &(r.end + 1))
        {
            cursor.remove_next();
        }
        match cursor.peek_prev() {
            Some((_, &mut Node::Start)) => {}
            _ => cursor.insert_before(r.start, Node::Start).expect(""),
        }
        match cursor.peek_next() {
            Some((_, &mut Node::End)) => {}
            _ => cursor.insert_before(r.end + 1, Node::End).expect(""),
        }
    }
    merged
}

fn part1_btree_double(input: &Input) -> u64 {
    let merged_fresh = merge_ranges_btree_double(&input.fresh_ranges);
    let mut ret = 0;
    for i in &input.inventory {
        match merged_fresh.upper_bound(Bound::Included(i)).peek_prev() {
            Some((_, Node::Start)) => ret += 1,
            _ => {}
        }
    }
    ret
}

fn part2_btree_double(input: &Input) -> u64 {
    let merged = merge_ranges_btree_double(&input.fresh_ranges);
    let mut ret = 0;
    let mut last_start = 0;
    for (&k, v) in merged.iter() {
        match v {
            Node::Start => last_start = k,
            Node::End => ret += k - last_start,
        }
    }
    ret
}

The second approach (likely more sane) was to have the key as the start point and the value as the end point. It's basically the same as having a sorted vector of (start, end) pairs, except that it's not necessary to pre-sort the list and removing an element in the middle should be cheap (amortized O(1), I think?). Similarly to the previous approach, finding whether an ID is in one of the ranges requires querying for the largest key that is smaller than or equal to the ID and checking whether the ID is less than or equal to the end point.
Start points as key, end points as values
fn merge_ranges_btree(ranges: &Vec<Range>) -> BTreeMap<u64, u64> {
    let mut merged = BTreeMap::<u64, u64>::new();
    for r in ranges {
        let mut cursor = merged.upper_bound_mut(Bound::Included(&r.start));
        match cursor.peek_prev() {
            Some((_, pend)) => {
                if r.start <= *pend {
                    *pend = r.end.max(*pend);
                } else {
                    cursor.insert_before(r.start, r.end).expect("");
                }
            }
            None => cursor.insert_before(r.start, r.end).expect(""),
        }
        // Can't hold a mutable borrow of the previous item while still accessing the cursor to
        // peek at the next.
        let mut pend = cursor.peek_prev().unwrap().1.clone();
        loop {
            if let Some((nstart, nend)) = cursor.peek_next() {
                if *nstart <= pend {
                    pend = pend.max(*nend);
                    cursor.remove_next();
                } else {
                    break;
                }
            } else {
                break;
            }
        }
        *cursor.peek_prev().unwrap().1 = pend;
    }
    merged
}

fn part1_btree(input: &Input) -> u64 {
    let merged_fresh = merge_ranges_btree(&input.fresh_ranges);
    let mut ret = 0;
    for i in &input.inventory {
        match merged_fresh.upper_bound(Bound::Included(i)).peek_prev() {
            Some((_, end)) => {
                if i <= end {
                    ret += 1
                }
            }
            _ => {}
        }
    }
    ret
}

fn part2_btree(input: &Input) -> u64 {
    let merged = merge_ranges_btree(&input.fresh_ranges);
    let mut ret = 0;
    for (&k, v) in merged.iter() {
        ret += v - k + 1;
    }
    ret
}

The second approach was (unsurprisingly) faster than the first one. Sadly, both were slower than the vector-based approaches I used previously. I don't know if this is the vector-based approaches being inherently faster because of the memory layout, the amount of branching and the data dependencies between instructions, or if it's just the BTreeMap implementation not being super well optimized. The Cursor API is still nightly-only, so I wouldn't be surprised if the implementation was a bit sub-optimal.
Runtime comparison of various approaches
Not really the best benchmark, since I only ran each version once with no warmup. I ran the program a couple of times to get a sense of which runtimes were normal and which were just outliers (I guess due to the computer doing other stuff in the background). The parsing of the input is excluded.
Part 1 direct in 169.504µs
Part 1 merged in 27.85µs
Part 1 bsearch in 27.166µs
Part 1 btree start+end in 49.206µs
Part 1 btree in 37.387µs
Part 2 in 5.801µs
Part 2 btree start+end in 18.164µs
Part 2 btree in 9.65µs
-
Comment on Day 5: Cafeteria in ~comp.advent_of_code
Berdes Link
For part 1, I directly implemented some logic to sort the ranges and merge the overlapping ones. The idea was to be able to binary search the ranges to see if an ID is part of one of them. In the end, the binary search was barely faster than a linear scan over the sorted and merged ranges. Maybe an approach based on a binary tree would have been faster?
This approach made part 2 trivial since I already had a list of non-overlapping ranges.
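The hand-rolled binary search in the solution can also be expressed with the standard `partition_point`, which finds the first merged range that doesn't end before the ID. A sketch with an illustrative `Range` type (not the exact one from the solution):

```rust
// Illustrative stand-in for the solution's Range type.
struct Range {
    start: u64,
    end: u64, // inclusive
}

// `merged` must be sorted and non-overlapping, as produced by merging.
fn in_merged(merged: &[Range], id: u64) -> bool {
    // Index of the first range that doesn't end before `id`; every earlier
    // range ends too early to contain it.
    let idx = merged.partition_point(|r| r.end < id);
    idx < merged.len() && merged[idx].start <= id
}

fn main() {
    let merged = [
        Range { start: 3, end: 5 },
        Range { start: 10, end: 12 },
    ];
    assert!(in_merged(&merged, 4));
    assert!(!in_merged(&merged, 7));
    assert!(in_merged(&merged, 12));
    assert!(!in_merged(&merged, 13));
    println!("ok");
}
```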
Part 1 and 2 (Rust)
use std::fmt::Display;
use std::io;
use std::time::Instant;
use std::vec::Vec;

#[derive(Clone)]
struct Range {
    start: u64,
    end: u64, // inclusive
}

impl Range {
    fn from_str(s: String) -> Self {
        let (str_start, str_end) = s.split_once('-').unwrap();
        Range {
            start: str_start.parse().expect(""),
            end: str_end.parse().expect(""),
        }
    }

    fn contains(&self, i: u64) -> bool {
        self.start <= i && i <= self.end
    }
}

struct Input {
    fresh_ranges: Vec<Range>,
    inventory: Vec<u64>,
}

impl Input {
    fn read_from_stdin() -> Input {
        let mut fresh_ranges = vec![];
        let mut inventory = vec![];
        let mut parse_ranges = true;
        for line in io::stdin().lines().map(|l| l.unwrap()) {
            if line == "" {
                parse_ranges = false;
                continue;
            }
            if parse_ranges {
                fresh_ranges.push(Range::from_str(line));
            } else {
                inventory.push(line.parse::<u64>().expect(""));
            }
        }
        Input {
            fresh_ranges,
            inventory,
        }
    }
}

fn merge_ranges(ranges: &Vec<Range>) -> Vec<Range> {
    let mut sorted = ranges.clone();
    sorted.sort_by_key(|r| r.start);
    let mut merged: Vec<Range> = vec![];
    for r in sorted {
        if let Some(last) = merged.last() {
            if last.end >= r.start {
                let end = &mut merged.last_mut().unwrap().end;
                *end = r.end.max(*end);
            } else {
                merged.push(r);
            }
        } else {
            merged.push(r);
        }
    }
    merged
}

fn part1_direct(input: &Input) -> u64 {
    let mut ret = 0;
    for &i in &input.inventory {
        for r in &input.fresh_ranges {
            if r.contains(i) {
                ret += 1;
                break;
            }
        }
    }
    ret
}

fn part1_merged(input: &Input) -> u64 {
    let merged_fresh = merge_ranges(&input.fresh_ranges);
    let mut ret = 0;
    for &i in &input.inventory {
        for r in &merged_fresh {
            if i <= r.end {
                if i >= r.start {
                    ret += 1;
                }
                break;
            }
        }
    }
    ret
}

fn part1_bsearch(input: &Input) -> u64 {
    let merged_fresh = merge_ranges(&input.fresh_ranges);
    let mut ret = 0;
    for &i in &input.inventory {
        let mut s = &merged_fresh[..];
        while !s.is_empty() {
            let pivot_idx = s.len() / 2;
            let pivot = &s[pivot_idx];
            if i < pivot.start {
                s = &s[..pivot_idx];
            } else if i > pivot.end {
                s = &s[(pivot_idx + 1)..];
            } else {
                ret += 1;
                break;
            }
        }
    }
    ret
}

fn part2(input: &Input) -> u64 {
    let merged_fresh = merge_ranges(&input.fresh_ranges);
    let mut ret = 0;
    for r in merged_fresh {
        ret += r.end - r.start + 1;
    }
    ret
}

fn timed_run<T: Display, F: FnOnce() -> T>(name: &str, f: F) {
    let start = Instant::now();
    let result = f();
    let time = start.elapsed();
    println!("{}: {}\n in {:?}", name, result, time);
}

fn main() {
    let input = Input::read_from_stdin();
    timed_run("Part 1 direct", || part1_direct(&input));
    timed_run("Part 1 merged", || part1_merged(&input));
    timed_run("Part 1 bsearch", || part1_bsearch(&input));
    timed_run("Part 2", || part2(&input));
}
-
Comment on Day 3: Lobby in ~comp.advent_of_code
Berdes Link Parent
I checked how my implementation compiles and it's indeed what I expected: the inner loop is fully unrolled and each iteration avoids any branching by using a conditional move. This means there is no risk of failed speculative execution and the iterations can be efficiently pipelined inside the CPU. Even the vector of 13 elements ends up mostly stored in registers, with only a few positions on the stack because there aren't enough registers. I guess the main point is that my version compiles into something the CPU can execute extremely efficiently, which makes up for the fact that it might require more operations.
If the accumulators could be u8 instead of u64, the compiler could even use vectorized instructions to run many iterations at the same time since there is a vectorized max instruction to replace the conditional moves.
-
Comment on Day 4: Printing Department in ~comp.advent_of_code
Berdes Link Parent
I'm also expecting the difficulty to start ramping up soon, although we might still get 1 or 2 days for which a brute force approach still works.
Due to time constraints, 2020 is the only year in which I finished all the days, but from what I remember, the difficulty increases over time, with the only exception being the 25th, which was always super simple. With the shorter schedule, I would expect the 12th problem to be the hardest, since there would be no obvious reason to break the pattern.
The best food I had this year was a black cod dish, where the fish had been marinated in a miso sauce and then cooked over a flame (a Japanese technique called robatayaki). This was part of a tasting menu where every single dish was delicious, even though I clearly had my favorite.
My favorite meal was something I started cooking this year. Not sure how to describe it exactly. It's kind of like a curry, but with way less sauce than a typical curry. I'm using onions, carrots, tomatoes, fresh turmeric, cayenne pepper, paprika and nutmeg for the base (cooked long enough that they basically form a sauce), tempeh for the protein, and near the end I add some bell pepper, zucchini and mushrooms, all three cooked just enough to not be raw while keeping their texture. I always eat this with rice, of course. Interestingly, my first try didn't have the tomatoes, and I was super happy when I figured out that what I felt was missing was just some acidity, which can easily be added with something like tomatoes.
On the baking side, I learned to make bread. My first attempt went surprisingly well. I used basic white flour and instant yeast (both far from ideal), but the result was still better than the garbage that Dutch supermarkets have the audacity to call "bread". Since then my technique has improved, and I now usually use a mix of spelt and whole wheat flour, with a sourdough starter and a bit of instant yeast to make sure I still always get a good result.
I had a minor disaster with the dish described above when I tried fresh peppers as a replacement for the cayenne powder. The mix I bought included a habanero pepper, and even just half of it made the dish almost too spicy for me to eat. It tasted like a raging fire, with a slight hint of what I normally expect from the dish.