
# Day 22: Sand Slabs

## Today's problem description: https://adventofcode.com/2023/day/22

Please post your solutions in your own top-level comment. Here's a template you can copy-paste into your comment to format it nicely, with the code collapsed by default inside an expandable section with syntax highlighting (you can replace `python` with any of the "short names" listed in this page of supported languages):

````
<details>
<summary>Part 1</summary>

```python
Your code here.
```

</details>
````

Okay, I did do some more of these, but writing up solutions turned out to be very time consuming, so I stopped a couple of weeks ago. Some of those do have some pretty crazy *asymptotically faster* solutions though (have you ever done string matching with polynomial multiplication?), so if there's time/interest I might go back and fill in the details for some of them. This is also definitely going to be my last one done live, though, since I'm going to be going to a place with a worse time zone and no easy access to a coding computer.

## Part 1 Solution

Part 1 is basically already a two-part problem, where you first have to build a graph representation of the blocks and only after doing that do you get to tackle the problem of what to do with it.

We should first define what graph we're trying to build. Namely, from the problem it sounds like we should pay attention to which blocks support each other, so we'll build a graph where every node is a block and there is a directed edge block A -> block B if A is resting on top of B.

For the first sub-part, once we have a decent representation of the blocks we can just build it from the ground up in a simple way. Namely, we can keep a 2D representation of the top-down view, and build up from the ground what the tower looks like as each block falls down onto it.

Before we start we do need to know what order to consider the blocks in, since this building-up approach assumes that we know what relative order blocks will show up in the tower. The simplest way to do this is to order the blocks by increasing Z-coordinate of the bottom part of the block, so we can just go with that (you might want to prove to yourself that this works, even if it seems obvious).

The next thing to note is that the blocks do, in fact, fall, so their Z-coordinates are not necessarily fixed. This means that in our top-down view we're going to use the max Z-coordinate of every block in the shadow of each new block in order to determine what height we end up at.

Finally, even if a block overlaps another block in the top-down view, that doesn't mean the obscured block will support the upper block. This is pretty easy to see, but it means that we have to be careful in code, since we'll need to check the height for each block underneath. You can do this in a variety of ways (by the way, you might want to be careful about multi-edges! The surface where two blocks touch might be many squares), but it's important that it gets done.
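The whole building-up pass can be sketched like this (a sketch only, assuming the puzzle's `x1,y1,z1~x2,y2,z2` input format; all function and variable names here are made up for illustration):

```python
from collections import defaultdict

def build_support_graph(lines):
    """Settle blocks bottom-up and return (block count, support graph)."""
    blocks = []
    for line in lines:
        a, b = line.split('~')
        x1, y1, z1 = map(int, a.split(','))
        x2, y2, z2 = map(int, b.split(','))
        blocks.append(((x1, y1, z1), (x2, y2, z2)))

    # Consider blocks in order of increasing bottom Z-coordinate.
    blocks.sort(key=lambda blk: min(blk[0][2], blk[1][2]))

    # Top-down view: each (x, y) cell holds the current top height and
    # the id of the block occupying that height.
    top = defaultdict(lambda: (0, None))
    rests_on = defaultdict(set)  # edge i -> j: block i rests on block j

    for i, ((x1, y1, z1), (x2, y2, z2)) in enumerate(blocks):
        cells = [(x, y) for x in range(min(x1, x2), max(x1, x2) + 1)
                        for y in range(min(y1, y2), max(y1, y2) + 1)]
        # The block settles on the max height in its shadow.
        land = max(top[c][0] for c in cells)
        for c in cells:
            h, owner = top[c]
            # Only blocks actually at the landing height support us; the
            # set deduplicates multi-edges from wide contact surfaces.
            if h == land and owner is not None:
                rests_on[i].add(owner)
        new_top = land + abs(z2 - z1) + 1  # settled top of this block
        for c in cells:
            top[c] = (new_top, i)
    return len(blocks), rests_on
```

On the example input this produces exactly the support structure described in the puzzle text (A supports B and C, B and C both support D and E, and so on).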

Now that we've actually built our graph, we can move on to the second sub-part: how can we use it to figure out which blocks are safe to disintegrate?

If we disintegrate some block B, then all blocks resting on B also need to be supported by another block. In our graph, this means that for every other block C with an edge C -> B, if C -> B is C's only outgoing edge then we can't safely destroy block B. Conversely, if every such block C has another outgoing edge, then after destroying B each C is still held up by whatever block is on the other end of that edge, so it is safe to destroy B.

Repeating this logic for every block gets us the full answer.
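In code, that repeated check boils down to just a few lines (a sketch, assuming a `rests_on` mapping from each block id to the set of block ids it rests directly on, as described above):

```python
def count_safe_to_disintegrate(num_blocks, rests_on):
    """Count blocks that are not the sole support of any other block."""
    sole_supports = set()
    for supports in rests_on.values():
        if len(supports) == 1:
            # This block's only support cannot be safely removed.
            sole_supports |= supports
    return num_blocks - len(sole_supports)
```

Note that this is where the unique-supporters caveat matters: if `rests_on` stored lists with multi-edges instead of sets, a block touching its only support in two places would look like it had two supports.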

## Part 2 Solution

Since we already built the graph in Part 1, it's nice that we can just reuse it for Part 2.

The main thing here is that we need to handle the propagation of falling blocks. In particular, even if a block has more than one support, if all of those supports fall then the block will still fall. This actually isn't that hard to track: for each block we count how many of its supports have fallen, and once all of a block's supports have fallen, that block falls as well.

Luckily we can process the blocks falling in any order, since we will get to all of them eventually.

We can then simply repeat this process for every block and add it up to get the full answer.
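A minimal sketch of that propagation, again assuming the `rests_on` mapping from Part 1 (the helper names are made up for illustration):

```python
from collections import deque

def count_falls(start, rests_on, supported_by):
    """How many blocks fall (besides `start`) if `start` is removed?"""
    fallen = {start}
    queue = deque([start])
    while queue:
        b = queue.popleft()
        for above in supported_by.get(b, ()):
            # A block falls once every block it rests on has fallen.
            if above not in fallen and rests_on[above] <= fallen:
                fallen.add(above)
                queue.append(above)
    return len(fallen) - 1  # don't count the disintegrated block itself

def part2(num_blocks, rests_on):
    # Invert the graph once: supported_by[b] = blocks resting on b.
    supported_by = {}
    for b, supports in rests_on.items():
        for s in supports:
            supported_by.setdefault(s, set()).add(b)
    return sum(count_falls(b, rests_on, supported_by)
               for b in range(num_blocks))
```

The queue order doesn't matter, which is exactly the "process the blocks falling in any order" observation above.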

## Part 2 Bonus

The algorithm we described in Part 2 runs in O(VE), since for each starting block it could potentially check off every edge and vertex individually.

But what if I told you it's not *asymptotically optimal*? This one starts with an interesting observation and leads into graph theory that people smarter than me did a couple of decades ago.

Namely, let's try to answer a seemingly obvious question: Given two blocks A and B, if we remove block A, will block B fall?

We basically just answered this in our solution to Part 2, but we did that by looking at the blocks being removed. What if we looked at the block doing the falling? Namely, using our graph, is there a way for us to determine from block B which blocks beneath it will cause it to fall?

Starting our search from block B, we imagine that we're travelling down those edges to the ground. If there's a path that avoids block A, then block B is supported, since this path can remain standing even if A (or any block not in that path) is removed.

However, if every path from block B to the ground goes through A, then clearly it will fall when block A is removed.

This gives us a new way to answer our question:

For blocks A and B, B will fall when A is removed if and only if every path from B to the ground goes through A.

And now for the connection with graph theory: this sounds a lot like the definition of a *dominator node*: a node A dominates a node B if every path from the entry node to B must go through A.

This is *almost* the relationship between the blocks A and B we had from before, but we can make it exactly the same relationship by reversing all the edges in the graph (so instead of B -> A when B rests on A, with edges going towards the ground, edges go up from the ground). Then, in this reversed graph, B will fall when A is removed only if A dominates B when entering from the ground (which we can represent as its own node).

With this connection established we can finally start doing the real weird logic leaps.

In particular, node dominators actually follow a strict ordering. Namely, if a node X has two dominators A and B, either A is a dominator of B or B is a dominator of A.

The basic proof idea is to start at the entry node and look at paths to X, all of which must contain both A and B in some order. If one path has A before B and another has B before A, then we can create a new path that takes the first path's route to A and then follows the second path from A to X, avoiding B entirely. That contradicts B being a dominator of X, so every path must order A and B the same way.

This means that we can arrange the dominators in a tree structure rooted at the entry node (if you play around with nodes under the ordering rule above, you'll see that every node must have at most one parent), where A -> B if A dominates B, and the full set of nodes dominated by A is the subtree rooted at A.

Now instead of directly calculating the dominators for each node, we can instead calculate the tree (uncreatively named the dominator tree) and use subtree sizes to calculate the full answer.
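To make the connection concrete, here's a sketch that computes dominators on the reversed graph and sums their counts. It uses the naive fixpoint computation of dominator sets rather than a fast dominator-tree algorithm, so it's for illustrating the idea, not the speedup; `rests_on` is the same support mapping as in Part 1:

```python
def part2_via_dominators(num_blocks, rests_on):
    """Sum, over all blocks B, the number of blocks whose removal drops B."""
    ground = -1
    # Reversed graph: edges go up from a virtual ground node, so the
    # predecessors of each block are the blocks (or the ground) it rests on.
    preds = {b: (rests_on.get(b) or {ground}) for b in range(num_blocks)}
    nodes = [ground] + list(range(num_blocks))
    # Classic dataflow: Dom(n) = {n} ∪ intersection of Dom(p) over preds.
    dom = {b: set(nodes) for b in range(num_blocks)}
    dom[ground] = {ground}
    changed = True
    while changed:
        changed = False
        for b in range(num_blocks):
            new = {b} | set.intersection(*(dom[p] for p in preds[b]))
            if new != dom[b]:
                dom[b] = new
                changed = True
    # Removing any proper dominator of b (other than the ground) makes b
    # fall, so each block contributes |Dom(b)| - 2 (minus itself and the
    # ground) to the total.
    return sum(len(dom[b]) - 2 for b in range(num_blocks))
```

The per-block dominator counts are exactly the dominator-tree depths, so summing subtree sizes over the tree gives the same answer.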

Finally, because of course people thought of this, dominator trees can be calculated very quickly (via the Lengauer-Tarjan algorithm, an analysis that's now literally over 40 years old), in O(E * α(E,V)) time, where α is the wonderfully slow-growing and weird-to-describe inverse Ackermann function.

## Part 1/2 Code

Today was...fine. Spent a long, long, long time on a stupid bug that wasn't covered by the test case.

Here's an alternate test case that covers the issue I had, in case anyone else is stuck and wants a slightly more robust test case to check against:

## Test input

The answer should be 5.

Anyway, other than that it was fine. I misread part 2 twice: first I gave it the best-case answer, then I thought I was supposed to be exploding things sequentially, and then I realized it's just the sum instead of the best case. It was very easy given how I'd done part 1.

Here's my Python code including a screenshot of my scratch paper. Although, I didn't exactly follow the steps I planned to in the end.

Sorry if this is resurfacing bad debugging memories, but how does that even work? Both of the two new bricks are load-bearing? If `1,2,11~1,4,11` starts above all the others, how can it be load-bearing? (I think I'm missing the same edge case as you were, thanks for sharing the extra example!)

Haha all good! The very last one isn't load-bearing, but the second-to-most-recent one is. Specifically, it's bearing the weight of the last one, from (1, 2) to (1, 3). This gives us the test case of one block supporting another in two different places, yet being its only support. If you don't do a check for *unique* supporters (I think it's `blocks_below` in my code), then you will double-count it!

Ah gotcha, I had missed that the first new brick (the lower of the two) caused one of the seven bricks from the original example to become load-bearing when it previously wasn't. So the new count of 5 was composed of 4 from the existing set of 7, and 1 from the new pair.

And yes, that fixed it!

Fantastic, congrats & I'm so glad this data sample was worth it!

Not thrilled with my solution today; everything went fine I suppose, but I mostly just recognized that the scale of input (number of bricks, size of bricks, initial height, etc.) would permit avoiding optimizations so I didn't try to get too creative with anything.

Essentially, I stored each brick as a list of all of its coordinates and the state of the field as two maps: one from a coordinate to which brick ID (if any) was there, and another from a given brick ID to its coordinates. Dropping the bricks initially was fine, as I just sorted by their z value and dropped each one a single step at a time.

I suspect there's some clever graph-based approach for part 2, tracking relationships and identifying which bricks rely on which others. Having already written functions to remove bricks and check for touching bricks, I just simulated the whole thing for each brick.

## Haskell Solution