“Imagine searching for a single needle hidden among a thousand haystacks scattered across 10 football fields — and finding it in under two hours. That’s the level of precision and speed delivered by the STAR system. Using AI, microfluidics, and robotics, it combs through a seemingly lifeless semen sample to identify and isolate a single viable sperm cell, making the impossible possible for men once told they had no chance of biological fatherhood.”
Thought this story was pretty neat, especially given the hot-button issue of AI usage these days. Infertility is a hell that's hard to fathom unless you've experienced it, so I'm all for new tech that makes conceiving easier or more viable.
Alternate articles: One and Two.
This is a great example of the type of AI usage that few would have issues with. It’s pure upside. It’s not mass processing intellectual property without permission, it’s not threatening anybody’s job, and there’s no fallout to contend with.
It’s more like namespace collision. When a term in the common vernacular nebulously covers everything from linear regression, which middle schoolers learn, to LLMs, there are going to be all kinds of properties which are not shared between them.
Yeah, I did some digging to get into what exactly they're using "AI" for, and all that came up was variants of this high-level press-conference reporting style. This article from the Columbia site from a year ago is the only source I could find that wasn't a recent mediocre news article: https://www.columbiadoctors.org/news/columbia-fertility-looks-stars-help-men-infertility
And all I could find in it that was relevant to the "AI" usage was this:
“The STAR team is a collaboration of research scientists, clinicians, and experts in diverse areas, including microfabrication, machine learning, and robotics.”
The "AI" in the title is most likely bogstandard boring machine learning, pattern matching on images using swaths of training data. The use of the term "AI" in the story draws a connection to LLMs like chatgpt, and an association with their push towards an (any minute now, we swear!) AGI, but it's unwarranted, it's not AI like an LLM is, it's not using chat bots, it's not using the major names we've heard before, it's pattern matching on images.
It sounds like a great breakthrough, and a great use of the wider field of machine learning and artificial intelligence, but the term "AI" being thrown around bugs me; I think too many readers are going to see it, connect it with something like ChatGPT, and create a false link between very different uses (and expectations) of a broad field.
I feel like it can’t go both ways with the terminology. Either “AI” is a legitimate term for modern ML/neural net systems and it applies to their use in scientific work as well, or it shouldn’t be used at all (which would be my absolute preference, but I think that ship has long since sailed).
It doesn’t seem fair to me to accept it as a term, inaccurate and misleading as it may be, and to limit it to only LLMs - especially since they’ve got a perfectly cromulent general name already: LLMs.
AI should be about the what, not the how. AI represents attempts to create facsimiles of intelligent (as in, reward-maximizing) agents. Are the pacman ghosts not AI just because they are hardcoded? I don't think so. They are AI. AI that "sucks" is AI nonetheless.
When I took "AI" as an undergrad class, we learned about counterfactual regret matching and alpha-beta pruning and kalman filters.
Whether or not a neural network is AI should depend on what you're doing with it. I really don't think a simple classifier, for instance, is AI, no matter how big the matrices are.
Modern chatbot LLMs I would say are closer to AI than not, as users treat them as intelligent agents in how they interact with them.
Honestly I'd just ban the term "AI" from any philosophical or political discussions, because it really is too diluted to be reliable in an exchange of ideas. It's been around for decades and its meaning has changed many times.
Genuinely not trying to nitpick, but wouldn’t ML inherently be reward maximising? That’s partially where the grouping in my mind is stemming from.
Either way, in an ideal world I’d probably be agreeing with you on the “what, not how” - I distinctly remember a professor introducing Lisp as “a language for AI” way back when, so we’re probably coming from a similar baseline here. But I’m trying to reconcile the way people talk and think about AI now with what I know of the technology and its capabilities - and to that end, for the sake of communication and understanding with most audiences, I think it’s clearer to broadly group deep neural nets together rather than excluding classifiers and including pacman ghosts.
You have to take in the full context of the term: a reward-maximizing agent. An agent exists in the context of an environment that evolves on its own.
So 512x512x3 matrix in -> hotdog boolean out? Not AI. It may be optimizing the L2 distance between its predictions and the training labels, but there's no evolving environment and no reward.
(World State, Reward Count) -> (Action)? AI.
LLMs are kinda fuzzy in this context, but you can consider the current context window the environment. It's at least closer than a hotdog detector. Or image generators (which are all derived from denoising models).
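In code, the split might look something like this (a toy sketch; every name here is made up, and the env methods are just assumed for illustration):

```python
# Toy sketch of "classifier vs. agent"; all names are invented.
from typing import Any, Protocol

def hotdog_detector(image: Any) -> bool:
    """Pure function: 512x512x3 matrix in, boolean out.
    Nothing it outputs changes what it sees next, so under the
    definition above it's a classifier, not an agent."""
    return False  # stand-in for a real model's prediction

class Agent(Protocol):
    """(World State, Reward) -> (Action): its choices feed back into
    an environment that keeps evolving between decisions."""
    def act(self, world_state: Any, reward: float) -> Any: ...

def run_episode(agent: Agent, env: Any) -> float:
    # env is assumed to expose reset/done/step, gym-style.
    state, reward, total = env.reset(), 0.0, 0.0
    while not env.done():
        action = agent.act(state, reward)  # decide from state + reward so far
        state, reward = env.step(action)   # the world responds and moves on
        total += reward
    return total
```

The hotdog detector never enters that loop; the pacman ghosts, Stockfish, and (squinting at the context window) an LLM chatbot do.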
Fair - I can respect the line of thinking, but I think you'd struggle to get a general audience on board with the idea that Midjourney or Sora aren't under the umbrella of AI.
I don't think it's possible to change the public's mind on it; that's not how language works. That's the definition I think has clear, useful boundaries and aligns with historical use in computer science. But as to its use overall, it is what it is. Natural language will natural language.
That being said, the current state of the word "AI" is that it's essentially useless, because it's still in use in all these senses; people still call video game bots "AI", for instance, even though they involve no machine learning. Stockfish is still "Chess AI". All machine learning is often bucketed under AI.
So Stockfish, pacman ghosts, linear regression on your TI-83, the hot dog detector, Midjourney, and ChatGPT all end up vernacularly labeled "AI".
But the worst part is that many people have "strong feelings" about AI. You often see people say "I don't want to buy anything with AI in it!" or "I hate all AI". I'm guessing pacman ghosts aren't being included here. But with no one definition of AI, and one VERY EXPANSIVE "average" vernacular use, who knows what they're even trying to say?
Makes sense to me, and I think we're definitely coming from a similar place on this one - I've definitely been frustrated more than a few times with the "what the hell are people actually trying to communicate?!" question given how arbitrary the terminology has become.
I think I've relented a little more towards common usage on what I'm allowing the word to mean in my mind, but I can absolutely see a reasonable argument either way.
AI has been used in this field for so long to refer to such a wide variety of techniques, machine learning and otherwise, that I think it's a lost cause to fight for people not to use it for those things anymore. For now, all AI is going to get associated with GenAI specifically, but I suspect broad use of the term "AI" will outlive this particular trend.
The article mentions being "inspired by astrophysics processes of identifying novelties in space"; there is a lot of automated image processing and classification going on in that field. I think it would be hard to pinpoint a single paper as the source.
Yeah, there's a ton of excellent AI that has nothing to do with LLMs or image generation. The example I use to explain to folks how AI differs from LLMs is the linear regression they did on their TI-83, and how it is a basic form of AI. Folks generally remember enough that they grasp the concept of minimizing error and weighted parameters, and how that's extrapolated to a whole mess of text for LLMs, for instance. And then I bring up examples like this of how AI can be good.
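For anyone who wants it spelled out, the calculator's LinReg boils down to ordinary least squares. A minimal sketch of the same computation (the example data is made up):

```python
# Ordinary least squares, the math behind the TI-83's LinReg(ax+b):
# choose slope a and intercept b minimizing sum((y - (a*x + b))**2).
# Scaled up to billions of weighted parameters and a loss over text,
# the same "minimize error" idea is what trains an LLM.
def linreg(xs: list[float], ys: list[float]) -> tuple[float, float]:
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: a = cov(x, y) / var(x), b from the means.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

a, b = linreg([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8])
print(f"y = {a:.2f}x + {b:.2f}")  # y = 1.94x + 0.15
```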
These folks may still use AI for all of it, but at least they understand the namespace collision and the communication issues about it. They go from thinking it's all AI to knowing a good question is "what kind of AI?"
Medical "ai" is likely where the most progress is made for humanity, with actual benefits. It's just better at analysing and predicting based on large datasets than a tired doctor. The future sees...
Medical "ai" is likely where the most progress is made for humanity, with actual benefits.
It's just better at analysing and predicting based on large datasets than a tired doctor. The future sees computers dig through data, then presents a possible diagnosis, and a doctor will then confirm. Much better than having a doctor labour over CT scans over and over until they may or may not find something.
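A hypothetical sketch of that "model flags, doctor confirms" workflow (the model, threshold, and data shapes are all made up for illustration):

```python
# Hypothetical triage sketch; nothing here is a real clinical system.
from dataclasses import dataclass

@dataclass
class Scan:
    patient_id: str
    pixels: object  # stand-in for the actual CT volume

def model_score(scan: Scan) -> float:
    """Stand-in for a trained classifier's estimated probability of pathology."""
    return 0.0  # a real model would return a learned estimate

def triage(scans: list[Scan], threshold: float = 0.8) -> list[Scan]:
    # The computer digs through every scan and surfaces the likely positives;
    # the doctor reviews this short list instead of every image.
    return [s for s in scans if model_score(s) >= threshold]
```

The model never gets the final say; it just decides what reaches the top of the doctor's pile.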
“Whereas the typical sperm count might range from 15-200 million per milliliter of semen, azoospermia leads to only 2-3 sperm cells or no measurable sperm count at all.”
I see where the field of haystacks metaphor comes from now. Wow. 2.
Ouch!
“So, the options have typically been either to use donor sperm or to try undergoing a painful surgery where a portion of the testes is actually removed and they look in the testes to try to find sperm.”
I can see where the AI comes in for this now: take millions and millions of pictures and have the computer try to "click on the hydrant and the motorcycle", as it were. Very exciting technology for couples.
Coming soon to a captcha near you: select all the live sperm!
Jokes aside, yeah, this is really cool. Also it’s using microfluidics, which is a fascinating branch of technology. Thought Emporium has a cool video about making microfluidics with Shrinky Dinks which explains the technology.
Unironically, I would be far happier to click on live sperm to help couples with infertility than to be free training labour for tech greed.
Thanks for the link recommendation!
https://youtu.be/eNBg_1GPuH0?si=7khhfXZeYYGzr7X9