I'm still processing my thoughts and I'm not sure where to begin, maybe writing this will help. Particularly as an educator (English, français langue étrangère & mathematics) I am seeing the drastic ramifications LLMs have had on critical thinking skills and language abilities of students (arguably the way schools operated in my region during lockdown is also partially responsible for this but I digress). There is already drastic ramifications of the technology, even in its infancy, within multiple domains.
Now, I can agree that I am deeply fascinated by the prospect of some form of synthesized consciousness. What isn't touched on adequately (at least for my views on AI) is how LLMs, as they stand, are fundamentally incapable of thought, and by extension of 'original' thought. That capacity for thought remains a huge part of what makes human writing and art important to me, and it links well to the author's emphasis on context, process, and intent. In many ways it is part of the joy of analyzing literature or creating art for me, and it is also why I'd love to pursue a master's & PhD in Old Anglo-Norman writing. Back on topic, however: can we truly say this is the solution to the crisis?
I wish I could truly abide by such a creed, a uniform declaration outlining the problem and the solution, yet I find myself at an impasse. The trajectory we are on is so uncertain, and lawmakers are not reacting quickly enough to AI (nor do some even desire to); there is frankly no coherent future I can divine from what we know.
Maybe that was a bit on the rambling side, and it certainly underlines my hatred of uncertainty. But I'm hoping my thoughts, together with the blog post, can start an interesting conversation :)