As I was reading I was struggling to figure out why someone would be interested in counting these. Obviously "because it's there" is always a reasonable answer to such questions. But this is a hard enough problem that usually you'd want some justification for this effort.
I think it is a great example of how we might begin counting "uncountably finite" kinds of things.
One would want to do exactly this type of analysis when building a hash function of sorts.
Any other ideas how one would use this information?
Absurd idea: theoretically you could use this work to build a JSON validator for input strings below a certain length, by generating every valid string and storing them (or their hashes) in a lookup table. The validator would have ridiculously low computational complexity, though the real-world speed would probably be poor due to the enormous size of the table, and it might take an unrealistic amount of memory to run. But there could be limited useful applications for this, especially if you were able to restrict the input to ASCII.
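To make the idea concrete, here is a toy sketch in Python. The alphabet and length cap are hypothetical placeholders I picked so the table stays tiny; the whole point of the article is that the real table would be astronomically large. Validity here is delegated to json.loads, standing in for whatever parser defines "valid".

```python
import json
from itertools import product

# Hypothetical restricted ASCII alphabet and a deliberately tiny length cap.
ALPHABET = '0123[]{}"'
MAX_LEN = 3

def is_valid_json(s: str) -> bool:
    try:
        json.loads(s)
        return True
    except ValueError:
        return False

# Precompute every valid string up to MAX_LEN over the alphabet.
valid = {
    "".join(chars)
    for n in range(1, MAX_LEN + 1)
    for chars in product(ALPHABET, repeat=n)
    if is_valid_json("".join(chars))
}

def validate(s: str) -> bool:
    """O(1) validity check for strings up to MAX_LEN, via set lookup."""
    return s in valid
```

After the one-time enumeration, each query is a constant-time set membership test, which is the "ridiculously low computational complexity" part; the memory cost is the catch.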
That is an absurd idea, lol. I really don’t think there are any practical use cases for this, but that’s okay. It’s just a fun thing to think about. People, or at least some people with certain mathematical inclinations, like to think about navigating impossibly large but finite possibility spaces. It’s a reminder that all the answers to all of the mysteries of the universe are just there, waiting to be found. It doesn’t have to have real world applications, it’s brain candy.
It took me a minute to realize the author was not talking about the values of strings as defined in the JSON spec (anything inside of "..." pairs), but "JSON blobs encoded as strings" also known as JSON documents.
The article explores the number of valid JSON documents of several (small) sizes. Neat!
Edit: am wrong

"also known as JSON documents"

Acktshually, a JSON document is something else. In the article the author says:
Some valid JSON strings for N = 4 would be 'null', 'true', '1234', '"ab"' and '[{}]'
A JSON primitive such as "null" is not a JSON document. A document must be either an array or object at the top level.
I'm not sure that's true. According to the spec, a JSON document is an element, which is a value surrounded by optional whitespace, and value can be a primitive, an array, or an object.
Unless you are using "JSON document" in a very specific way here, but I've not come across it before like that.
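For what it's worth, real parsers agree with the "any value" reading. A quick check with Python's json module (standing in for JSON.parse here; both accept the same documents for these cases) shows every one of the article's N = 4 examples parsing cleanly, bare primitives included:

```python
import json

# The article's example strings for N = 4, including bare primitives.
docs = ['null', 'true', '1234', '"ab"', '[{}]']

# json.loads raises ValueError on invalid input; none of these do,
# so primitives at the top level are accepted as complete documents.
parsed = [json.loads(d) for d in docs]
print(parsed)  # [None, True, 1234, 'ab', [{}]]
```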
My takeaway was that the author considers it a valid string if JSON.parse() accepts it. Which it does with primitives like "null".

Oh no. Well I thought I'd read that somewhere.