# Day 22: Sand Slabs

Please post your solutions in your own top-level comment. Here's a template you can copy-paste into your comment to format it nicely, with the code collapsed by default inside an expandable section with syntax highlighting (you can replace `python` with any of the "short names" listed on this page of supported languages):

``````
<details>
<summary>Part 1</summary>

```python
```

</details>
``````

1. Hvv

Okay, I did do some more of these, but writing up solutions turned out to be very time consuming, so I stopped a couple of weeks ago. Some of these do have some pretty crazy asymptotically faster solutions though (have you ever done string matching with polynomial multiplication?) so if there's time/interest I might go back and fill in the details for some of those.

This is also definitely going to be my last one done live, though, since I'm going to be going to a place with a worse time zone and no easy access to a coding computer.

Part 1 Solution

Part 1 is basically already a two-part problem, where you first have to build a graph representation of the blocks and only after doing that do you get to tackle the problem of what to do with it.

We should first define what graph we're trying to build. Namely, from the problem it sounds like we should pay attention to which blocks support each other, so we'll build a graph where every node is a block and there is a directed edge block A -> block B if A is resting on top of B.

For the first sub-part, once we have a decent representation of the blocks we can just build it from the ground up in a simple way. Namely, we can keep a 2D representation of the top-down view, and build up from the ground what the tower looks like as each block falls down onto that tower.

Before we start we do need to know what order to consider the blocks in, since this building-up approach assumes that we know what relative order blocks will show up in the tower. The simplest way to do this is to order the blocks by increasing Z-coordinate of the bottom part of the block, so we can just go with that (you might want to prove to yourself that this works, even if it seems obvious).

The next thing to note is that the blocks do, in fact, fall, so their Z-coordinates are not necessarily fixed. This means that in our top-down view we're going to use the max Z-coordinate of every block in the shadow of each new block in order to determine what height we end up at.

Finally, even if a block overlaps another block in the top-down view, that doesn't mean the obscured block will support the upper block. This is pretty easy to see but means that we have to be careful in code, since we'll need to check the height for each block underneath. You can do this in a variety of ways (by the way, you might want to be careful about multi-edges! The surface where two blocks touch may span many squares) but it's important that it gets done.
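As a rough sketch of this build-up step (hypothetical `Block`/`settle` names for illustration, with blocks stored as inclusive min/max ranges per axis; the real code is at the bottom of this comment):

```cpp
#include <bits/stdc++.h>
using namespace std;

// One block: inclusive [lo, hi] ranges on each of the x, y, z axes.
struct Block { int xlo, xhi, ylo, yhi, zlo, zhi; };

// Settle blocks onto a top-down height map in order of ascending bottom
// Z-coordinate, recording for each block the set of blocks it rests on.
vector<set<int>> settle(vector<Block>& blocks) {
    sort(blocks.begin(), blocks.end(),
         [](const Block& a, const Block& b){ return a.zlo < b.zlo; });
    map<pair<int,int>, pair<int,int>> top; // (x,y) -> (top height, block id)
    vector<set<int>> supports(blocks.size());
    for (size_t b = 0; b < blocks.size(); b++) {
        Block& blk = blocks[b];
        // Highest settled block anywhere in this block's top-down shadow.
        int rest = 0; // ground level
        for (int x = blk.xlo; x <= blk.xhi; x++)
            for (int y = blk.ylo; y <= blk.yhi; y++)
                if (top.count({x, y})) rest = max(rest, top[{x, y}].first);
        // Drop the block so its bottom sits at rest+1.
        int fall = blk.zlo - (rest + 1);
        blk.zlo -= fall; blk.zhi -= fall;
        // Any block whose top touches our new bottom is a supporter; the
        // set de-duplicates multi-square contact with the same block.
        for (int x = blk.xlo; x <= blk.xhi; x++)
            for (int y = blk.ylo; y <= blk.yhi; y++) {
                auto it = top.find({x, y});
                if (it != top.end() && it->second.first == blk.zlo - 1)
                    supports[b].insert(it->second.second);
                top[{x, y}] = {blk.zhi, (int)b};
            }
    }
    return supports;
}
```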

Now that we have actually built our graph, we can move on to the second sub-part: how can we use it to decide which blocks are safe to disintegrate?

If we disintegrate some block B, then every block supported by B still needs to be supported by another block. In our graph, this means that for every other block C with an edge C -> B, if C -> B is C's only outgoing edge then we can't destroy block B. Conversely, if every such C has another edge, then after destroying B each of those blocks can rest on whatever block is on the other side of that edge, so it is safe to destroy B.

Repeating this logic for every block gets us the full answer.
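That per-block check can be sketched like so (hypothetical `on_top`/`under` adjacency lists standing in for the graph and its reverse):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count blocks safe to disintegrate: block b is safe iff every block
// resting on b has at least one other supporter.
// on_top[b] = blocks resting directly on b; under[b] = b's supporters.
int count_safe(const vector<vector<int>>& on_top,
               const vector<vector<int>>& under) {
    int safe = 0;
    for (size_t b = 0; b < on_top.size(); b++) {
        bool ok = true;
        for (int c : on_top[b])
            if (under[c].size() == 1) { ok = false; break; } // b is c's sole support
        safe += ok;
    }
    return safe;
}
```

On the puzzle's seven-brick example (A supports B and C; B and C support D and E; D and E support F; F supports G) this yields the expected answer of 5.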

Part 2 Solution

Since we already built the graph in Part 1 it's nice that we can just reuse it for part 2.

The main thing here is that we need to track the propagation of falling blocks. In particular, even if a block has more than one support, if all of those supports fall then the block will still fall. This actually isn't that hard to track: for each block we count how many of its supports have fallen, and once all of a block's supports have fallen, that block falls as well.

Luckily we can process the blocks falling in any order, since we will get to all of them eventually.

We can then simply repeat this process for every block and add it up to get the full answer.
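As a standalone sketch of that counting scheme (essentially the same BFS as `get_num_falling` in the full code at the bottom of this comment):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Number of OTHER blocks that fall when `source` is disintegrated.
// on_top[b] = blocks resting directly on b; under[b] = b's supporters.
// A block joins the cascade once ALL of its supporters have fallen.
int num_falling(int source, const vector<vector<int>>& on_top,
                const vector<vector<int>>& under) {
    vector<int> fallen_supports(on_top.size(), 0);
    queue<int> q;
    q.push(source);
    int fell = 0;
    while (!q.empty()) {
        int u = q.front(); q.pop();
        fell++;
        for (int v : on_top[u])
            if (++fallen_supports[v] == (int)under[v].size())
                q.push(v); // last support gone: v falls too
    }
    return fell - 1; // don't count the disintegrated block itself
}
```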

Part 2 Bonus

The algorithm we described in part 2 runs in O(VE) since it could potentially check off every edge and vertex individually.

But what if I told you it's not asymptotically optimal?

This one starts with an interesting observation and leads into graph theory that people smarter than me did a couple decades ago.

Namely, let's try to answer a seemingly obvious question: Given two blocks A and B, if we remove block A, will block B fall?

We basically just answered this in our solution to Part 2, but we did that by looking at the blocks being removed. What if we looked at the block doing the falling? Namely, using our graph, is there a way for us to determine from block B which blocks beneath it will cause it to fall?

Starting our search from block B, we imagine that we're travelling down those edges to the ground. If there's a path that avoids block A, then block B remains supported, since that path can remain standing even if A (or any block not in that path) is removed.

However, if every path from block B to the ground goes through A, then clearly it will fall when block A is removed.

This gives us a new way to answer our question:

For blocks A and B, B will fall when A is removed if and only if every path from B to the ground goes through A.
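As a sanity check, this criterion can be tested naively per pair of blocks (hypothetical `under` adjacency lists and an `on_ground` flag assumed here):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive check: does B fall when A is removed? Equivalently, does every
// path from B down to the ground pass through A?
// under[b] = b's direct supporters; on_ground[b] = b touches the ground.
bool falls_without(int b, int a, const vector<vector<int>>& under,
                   const vector<bool>& on_ground) {
    // DFS downward from b, never stepping on a; if we can still reach a
    // grounded block, some support path survives and b stays up.
    vector<bool> seen(under.size(), false);
    stack<int> st;
    st.push(b);
    seen[b] = true;
    while (!st.empty()) {
        int u = st.top(); st.pop();
        if (on_ground[u]) return false; // found a path to ground avoiding a
        for (int v : under[u])
            if (v != a && !seen[v]) { seen[v] = true; st.push(v); }
    }
    return true; // every downward path ran through a
}
```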

And now for the connection with graph theory: this sounds a lot like the definition of a dominator node:

In computer science, a node d of a control-flow graph dominates a node n if every path from the entry node to n must go through d.

This is almost the relationship between blocks A and B we had before, and we can make it exactly the same relationship by reversing all the edges in the graph (so instead of edges pointing towards the ground, with B -> A when B rests on A, edges point up from the ground).

Then, in this reversed graph, B will fall when A is removed if and only if A dominates B when entering from the ground (which we can represent as its own node).

With this connection established we can finally start doing the real weird logic leaps.

In particular, node dominators actually follow a strict ordering. Namely, if a node X has two dominators A and B, either A is a dominator of B or B is a dominator of A.

The basic proof idea is to start at the entry node and look at paths to X, which all must contain A and B in some order. If one path has A before B and another has B before A, then we can create a new path that takes the first path's route to A and then follows the second path from A to X, avoiding B entirely; this contradicts B being a dominator of X, so all paths must visit A and B in the same order.

This means that we can arrange dominators in a tree structure rooted at the entry node (if you play around with nodes under the ordering rule above, you'll see that every node must have at most one parent), where A -> B if A dominates B, and the full set of nodes dominated by A is the subtree rooted at A.

Now instead of directly calculating the dominators for each node, we can instead calculate the tree (uncreatively named the dominator tree) and use subtree sizes to calculate the full answer.
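To make the connection concrete, here's the simple textbook fixpoint computation of dominator sets (not the fast Lengauer-Tarjan algorithm; `dominators`/`chain_total` are made-up names) on the reversed graph with a virtual ground node 0. Summing, for each block, its count of strict non-ground dominators reproduces the Part 2 total:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Textbook data-flow iteration on the reversed graph (edges point up
// from ground node 0): Dom(n) = {n} union the intersection of Dom(p)
// over all predecessors p. Assumes every node is reachable from 0.
vector<set<int>> dominators(int n, const vector<vector<int>>& preds) {
    set<int> all;
    for (int i = 0; i < n; i++) all.insert(i);
    vector<set<int>> dom(n, all);
    dom[0] = {0}; // the ground/entry node dominates only itself
    bool changed = true;
    while (changed) {
        changed = false;
        for (int v = 1; v < n; v++) {
            set<int> d = all;
            for (int p : preds[v]) {
                set<int> tmp;
                set_intersection(d.begin(), d.end(),
                                 dom[p].begin(), dom[p].end(),
                                 inserter(tmp, tmp.begin()));
                d = tmp;
            }
            d.insert(v);
            if (d != dom[v]) { dom[v] = d; changed = true; }
        }
    }
    return dom;
}

// Part 2 total: B falls when A is removed iff A (not ground, not B
// itself) dominates B, so sum each node's strict non-ground dominators.
int chain_total(int n, const vector<vector<int>>& preds) {
    auto dom = dominators(n, preds);
    int total = 0;
    for (int v = 1; v < n; v++)
        total += (int)dom[v].size() - 2; // exclude v itself and the ground
    return total;
}
```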

Finally, because of course people thought of this, dominator trees can be calculated very quickly: the Lengauer-Tarjan algorithm (at this point literally over 40 years old) runs in O(E * α(E,V)), where α is the wonderfully slow-growing and weird-to-describe inverse Ackermann function.

Part 1/2 Code
``````cpp
#include <bits/stdc++.h>
using namespace std;

vector<string> split(string s, string delim) {
    vector<string> ans;
    while (s.find(delim) != string::npos) {
        int idx = s.find(delim);
        ans.push_back(s.substr(0, idx));
        s = s.substr(idx + delim.size());
    }
    if (!s.empty()) {
        ans.push_back(s);
    }
    return ans;
}

// BFS the chain reaction: a block falls once all of its supports have fallen.
int get_num_falling(int source, const vector<vector<int>>& graph, const vector<vector<int>>& graph_reverse) {
    int ans = 0;
    queue<int> q;
    q.push(source);
    vector<int> degree(graph.size());
    while (!q.empty()) {
        int u = q.front(); q.pop();
        ans++;
        for (const auto& v : graph[u]) {
            degree[v]++;
            if (degree[v] == (int)graph_reverse[v].size()) {
                q.push(v);
            }
        }
    }
    return ans - 1;
}

int main() {
    string s;
    vector<vector<pair<int,int>>> block_coords;
    map<int,vector<int>> z_coord_blocks;
    while (cin >> s) {
        auto coords = split(s, "~");
        auto coords_low_str = split(coords[0], ",");
        auto coords_high_str = split(coords[1], ",");
        vector<int> coords_low, coords_high;
        for (const auto& u : coords_low_str) {
            coords_low.push_back(stoi(u));
        }
        for (const auto& u : coords_high_str) {
            coords_high.push_back(stoi(u));
        }
        block_coords.emplace_back();
        auto& row = block_coords.back();
        for (int i = 0; i < 3; i++) {
            row.emplace_back(coords_low[i], coords_high[i]);
        }
        z_coord_blocks[coords_low[2]].emplace_back(block_coords.size() - 1);
    }
    vector<vector<int>> graph(block_coords.size());
    vector<vector<int>> graph_reverse(block_coords.size());
    map<pair<int,int>,int> gob; // top-down view: (x,y) -> topmost block id
    for (const auto& [_, block_ids] : z_coord_blocks) {
        for (const auto& idx : block_ids) {
            int max_z_coord = 0;
            set<int> mos; // candidate supporters in this block's shadow
            for (int i = block_coords[idx][0].first; i <= block_coords[idx][0].second; i++) {
                for (int j = block_coords[idx][1].first; j <= block_coords[idx][1].second; j++) {
                    if (gob.count(pair<int,int>(i, j))) {
                        int p = gob[pair<int,int>(i, j)];
                        max_z_coord = max(max_z_coord, block_coords[p][2].second);
                        mos.insert(p);
                    }
                }
            }
            int fall = block_coords[idx][2].first - (max_z_coord + 1);
            block_coords[idx][2].first -= fall;
            block_coords[idx][2].second -= fall;
            for (int i = block_coords[idx][0].first; i <= block_coords[idx][0].second; i++) {
                for (int j = block_coords[idx][1].first; j <= block_coords[idx][1].second; j++) {
                    gob[pair<int,int>(i, j)] = idx;
                }
            }
            for (const auto& p : mos) {
                if (block_coords[p][2].second + 1 == block_coords[idx][2].first) {
                    graph[p].push_back(idx);
                    graph_reverse[idx].push_back(p);
                }
            }
        }
    }
    int ans = 0;
    for (int i = 0; i < (int)graph.size(); i++) {
        int not_falling = 1;
        for (const auto& v : graph[i]) {
            if (graph_reverse[v].size() == 1) {
                not_falling = 0; break;
            }
        }
        ans += not_falling;
    }
    cout << ans << '\n';
    ans = 0;
    for (int i = 0; i < (int)graph.size(); i++) {
        ans += get_num_falling(i, graph, graph_reverse);
    }
    cout << ans << '\n';
}
``````
2. [5]
RheingoldRiver

Today was...fine. Spent a long, long, long time on a stupid bug that wasn't covered by the test case.

Here's an alternate test case that covers the issue I had, in case anyone else is stuck & wants to check against a somewhat more robust test case:

Test input

``````
1,0,1~1,2,1
0,0,2~2,0,2
0,2,3~2,2,3
0,0,4~0,2,4
2,0,5~2,2,5
0,1,6~2,1,6
1,1,8~1,1,9
1,1,10~1,3,10
1,2,11~1,4,11
``````

Anyway, other than that it was fine. I misread part 2 twice and gave it the best-case input, and then thought I was supposed to be exploding things sequentially, and then I realized it's just the sum instead of best-case. Was very easy given how I'd done part 1.

Here's my Python code, including a screenshot of my scratch paper, although I didn't exactly follow the steps I planned to in the end.

1. [4]
jzimbel

Sorry if this is resurfacing bad debugging memories, but how does that even work? Both of the two new bricks are load-bearing? If `1,2,11~1,4,11` starts above all the others, how can it be load-bearing?

(I think I'm missing the same edge case as you were, thanks for sharing the extra example!)

1 vote
1. [3]
RheingoldRiver

Haha all good! The very last one isn't load-bearing, but the second-to-most-recent one is. Specifically, it's bearing the weight of the last one, from (1, 2) to (1, 3). This gives us the test case of one block supporting another in two different places, yet being its only support. If you don't do a check for unique supporters (I think it's `blocks_below` in my code), then you will double-count it!
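To illustrate the pitfall in isolation (a minimal sketch; `unique_support_count` is a made-up helper, not code from either of our solutions): if you collect the supporter found under each occupied cell of a block into a plain list, a block touched in two places by the same supporter looks like it has two supports, so a sole support wrongly appears redundant. Deduplicating fixes it:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Given the supporter id found under each occupied cell of a block,
// count DISTINCT supporters: two contact squares over the same block
// below must still count as one supporter.
int unique_support_count(const vector<int>& supporter_per_cell) {
    set<int> uniq(supporter_per_cell.begin(), supporter_per_cell.end());
    return (int)uniq.size();
}
```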

1. [2]
jzimbel

Ah gotcha, I had missed that the first new brick (the lower of the two) caused one of the seven bricks from the original example to become load-bearing when it previously wasn't. So the new count of 5 was composed of 4 from the existing set of 7, and 1 from the new pair.

And yes, that fixed it!

1. RheingoldRiver

Fantastic, congrats & I'm so glad this data sample was worth it!

1 vote
3. scarecrw

Not thrilled with my solution today; everything went fine I suppose, but I mostly just recognized that the scale of input (number of bricks, size of bricks, initial height, etc.) would permit avoiding optimizations so I didn't try to get too creative with anything.

Essentially, I stored each brick as a list of all of its coordinates and the state of the field as two maps: one from a coordinate to which brick ID (if any) was there, and another from a given brick ID to its coordinates. Dropping the bricks initially was fine, as I just sorted by their z value and dropped each one a single step at a time.

I suspect there's some clever graph-based approach for part 2, tracking relationships and identifying which bricks rely on which others. Having already written functions to remove bricks and check for touching bricks, I just simulated the whole thing for each brick.

``````haskell
module Main (main) where

import AOCTools (splitOn, splitBy, unique)
import Data.List (sortOn)
import qualified Data.Map as Map
import Data.Map (Map, (!))
import Data.Maybe (mapMaybe)

main :: IO ()
main = do
    input <- getContents
    putStrLn $ "Part 1: " ++ show (solve1 $ parseInput input)
    putStrLn $ "Part 2: " ++ show (solve2 $ parseInput input)

type Coord = (Int, Int, Int)
type Brick = (BrickID, [Coord])
type BrickID = Int

getZ :: Coord -> Int
getZ (_, _, z) = z

parseInput :: String -> [Brick]
parseInput input = zip [0..] $ map parseBrick $ lines input

parseBrick :: String -> [Coord]
parseBrick str = brick where
    (start, end) = splitOn "~" str
    brick = buildBrick (toCoord start) (toCoord end)
    toCoord coordStr = case map read (splitBy "," coordStr) of
        [x, y, z] -> (x, y, z)
        _ -> error "invalid coordinate format"

buildBrick :: Coord -> Coord -> [Coord]
buildBrick (x1, y1, z1) (x2, y2, z2)
    | x1 == x2 && y1 == y2 && z1 == z2 = [(x1, y1, z1)]
    | x1 /= x2 = [(x, y1, z1) | x <- [min x1 x2 .. max x1 x2]]
    | y1 /= y2 = [(x1, y, z1) | y <- [min y1 y2 .. max y1 y2]]
    | z1 /= z2 = [(x1, y1, z) | z <- [min z1 z2 .. max z1 z2]]
    | otherwise = error "invalid brick"

solve1 :: [Brick] -> Int
solve1 bricks = length result where
    (settledBricksMap, brickLocationMap) = dropBricks bricks
    result = filter (canRemove settledBricksMap brickLocationMap) (Map.elems brickLocationMap)

dropBricks :: [Brick] -> (Map Coord BrickID, Map BrickID Brick)
dropBricks bricks = (settledBricksMap, brickLocationMap) where
    sortedBricks = sortOn (\(_, coords) -> minimum (map getZ coords)) bricks
    (settledBricks, settledBricksMap) = foldl dropBrick ([], Map.empty) sortedBricks
    brickLocationMap = Map.fromList [(fst b, b) | b <- settledBricks]

canRemove :: Map Coord BrickID -> Map BrickID Brick -> Brick -> Bool
canRemove settledBricksMap brickLocationMap brick = result where
    brickCells = snd brick
    settledBricksMap' = foldr Map.delete settledBricksMap brickCells
    aboveBrickIDs = mapMaybe (`Map.lookup` settledBricksMap') (snd (riseOne brick))
    bricksAbove = map (brickLocationMap !) aboveBrickIDs
    result = not . any (canDrop settledBricksMap') $ bricksAbove

dropBrick :: ([Brick], Map Coord BrickID) -> Brick -> ([Brick], Map Coord BrickID)
dropBrick (settledBricks, settledBricksMap) brick
    | canDrop settledBricksMap brick = dropBrick (settledBricks, settledBricksMap) (dropOne brick)
    | otherwise = (settledBricks', settledBricksMap') where
        settledBricks' = brick : settledBricks
        (brickID, brickCells) = brick
        settledBricksMap' = foldr (`Map.insert` brickID) settledBricksMap brickCells

canDrop :: Map Coord BrickID -> Brick -> Bool
canDrop settledBricksMap brick = not . any invalidCell . snd . dropOne $ brick where
    settledBricksMap' = foldr Map.delete settledBricksMap (snd brick)
    invalidCell :: Coord -> Bool
    invalidCell (x, y, z) = z <= 0 || (x, y, z) `Map.member` settledBricksMap'

dropOne :: Brick -> Brick
dropOne (brickID, brickCells) = (brickID, map (\(x, y, z) -> (x, y, z - 1)) brickCells)

riseOne :: Brick -> Brick
riseOne (brickID, brickCells) = (brickID, map (\(x, y, z) -> (x, y, z + 1)) brickCells)

solve2 :: [Brick] -> Int
solve2 bricks = sum (map numDropped (Map.elems brickLocationMap)) where
    (settledBricksMap, brickLocationMap) = dropBricks bricks
    numDropped b = countDrop settledBricksMap [b] - 1
    countDrop :: Map Coord BrickID -> [Brick] -> Int
    countDrop _ [] = 0
    countDrop remainingBricksMap (brick : toDrop) = result where
        brickCells = snd brick
        remainingBricksMap' = foldr Map.delete remainingBricksMap brickCells
        aboveBrickIDs = mapMaybe (`Map.lookup` remainingBricksMap') (snd (riseOne brick))
        bricksAbove = map (brickLocationMap !) (unique aboveBrickIDs)
        droppable = filter (canDrop remainingBricksMap') (filter (`notElem` toDrop) bricksAbove)
        result = 1 + countDrop remainingBricksMap' (toDrop ++ droppable)
``````
1 vote