Honestly? I can't say that I have. If I get lost, it's usually because I don't have context.
Cool extension, though.
Sometimes they’re just really badly written. As someone who mostly grew up with Wikipedia existing, one thing I’ve had to reconcile as I’ve grown older is that a lot of Wikipedia is just… really poorly written.
I have seen more and more stuff that looks like it's been written by AI.
Great that it works, but to be frank, there are also a variety of oddities I saw in your code.
You have a stylesheet, but at various spots you're also randomly injecting inline CSS. You use constants for some things but not for others (things like hex color codes). In your background script you have a listener that only outputs something to the console. That might be there for debugging purposes, but it's unclear and seems an odd leftover.
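For what it's worth, the pattern being described usually looks something like this; a hypothetical sketch with invented names, not the extension's actual code:

```typescript
// Hypothetical sketch of the inconsistency described above; the names
// are invented for illustration, not taken from the extension's code.

// One named constant, a single source of truth for the color.
const WIKI_BLUE = "#3366cc";

// Before: a magic hex value injected as inline CSS at the call site,
// duplicating a color that should live in one place.
function styleButtonBefore(el: { style: Record<string, string> }): void {
  el.style["background"] = "#3366cc"; // duplicated literal, inline style
}

// After: toggle a class whose rule lives in the stylesheet
// (Set<string> stands in for a DOM classList so this runs anywhere).
function styleButtonAfter(el: { classList: Set<string> }): void {
  el.classList.add("simplify-button"); // .simplify-button sets the color
}
```

The cleanup is mechanical: every hard-coded hex and inline style moves behind one constant or one stylesheet rule.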
This is just from a quick look I had on my phone; I'm not really in a position to do an extremely detailed look over your code.
As I said, great that it works. But given you clearly leaned heavily into using LLMs to create this, I also suspect this is a good example of LLMs having trouble maintaining consistency across larger projects.
Something I mentioned a while ago in relation to vibe coding.
As I said, I didn't do a complete review of your code. There might be more of the things I mentioned, or this might be the extent of the LLM-introduced oddities.
Just some things to keep in mind, since you now seem to be moving on from personal projects to projects with actual users.
Valid points. The scope of this project (a button on Wikipedia) is low stakes enough that I'm not concerned with the cleanliness of constants. If I wrote it myself, I would be more consistent. That's why a project with higher stakes would only use LLMs to assist. Like you said, we're not at a point yet where LLMs can do everything perfectly.
By the way, there's an official Simple version of Wikipedia! On many pages, you can click the language selector and change the language to "Simple English". It's obviously not as extensive as regular English Wikipedia, but it helps pretty often.
The article mentions that the extension will automatically suggest those if they're available, and auto-generate a simplified version if they're not.
I think it's a very clever idea, and a good use of LLMs. I'd be curious to see the prompt used to adjust the reading level, though.
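The availability check itself is straightforward if the extension uses MediaWiki's standard langlinks query (the article doesn't show the real implementation, so this is a sketch under that assumption). A request like `https://en.wikipedia.org/w/api.php?action=query&prop=langlinks&titles=<title>&lllimit=500&format=json` lists each inter-language link with a `lang` code, and `"simple"` marks the Simple English wiki:

```typescript
// In the default JSON format, each langlink entry carries the language
// code in `lang` and the linked page's title under the "*" key.
interface LangLink {
  lang: string; // e.g. "de", "fr", "simple"
  "*": string;  // title of the page on that wiki
}

// Returns the Simple English title if one exists, otherwise null.
function findSimpleVersion(langlinks: LangLink[]): string | null {
  const hit = langlinks.find((l) => l.lang === "simple");
  return hit ? hit["*"] : null;
}

// Example payload shaped like the API response for one page:
const links: LangLink[] = [
  { lang: "de", "*": "Photosynthese" },
  { lang: "simple", "*": "Photosynthesis" },
];
```

If `findSimpleVersion` returns null, that's the branch where the extension would fall back to generating a simplified version itself.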
I wonder how it compares to the LLM just explaining the topic from scratch, though, without the Wikipedia page.
The LLM would probably do a pretty good job, but giving it the Wikipedia article as context helps prevent hallucinations
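That grounding step can be as simple as putting the article text into the prompt and asking for a lower reading level rather than a summary. A minimal sketch; the wording here is invented, since the article doesn't show the extension's actual prompt:

```typescript
// Hypothetical prompt construction: ground the model in the article
// text and ask it to lower the reading level, not to shorten it.
function buildSimplifyPrompt(title: string, articleText: string): string {
  return [
    "Rewrite the following Wikipedia article in plain language,",
    "at roughly a middle-school reading level. Keep all the facts;",
    "do not add anything that is not in the text below.",
    "",
    `Article: ${title}`,
    "---",
    articleText,
  ].join("\n");
}
```

The "do not add anything" line is the anti-hallucination part: the model is asked to rephrase what's in front of it instead of recalling the topic from memory.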
Surely that'll depend on the article. There are plenty of topics on Wikipedia obscure enough that an LLM won't have heard of them, but there are also plenty where an LLM could shine.
You'd be surprised how much extremely specific knowledge LLMs have access to. I think it would be hard to find a Wikipedia page that contains information ChatGPT can't discuss.
We must be talking about different LLMs then. I just pressed Random on Wikipedia a couple of times and asked ChatGPT about the articles that came up, and all of the answers were either “this is not a famous person” (ChatGPT’s way of saying “I've never heard of them”) or a complete hallucination.
I just tried pressing random and then asking ChatGPT about the topic. It got 6 out of 6.
Maybe you tried on an older version when LLMs weren't this good yet?
I imagine that if the topic is obscure enough, LLMs may not know about it. But for those articles I'm also more sceptical of the accuracy of the Wikipedia page, since I imagine they're subject to less scrutiny.
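That spot-check is easy to repeat, by the way. This sketch just builds the standard MediaWiki random-page query; the parameter names are the real API's, but the test procedure itself is the manual one described above:

```typescript
// Build the MediaWiki query that returns n random main-namespace
// article titles, mirroring the manual "press Random" spot-check.
function randomArticlesUrl(n: number): string {
  const params = new URLSearchParams({
    action: "query",
    list: "random",
    rnnamespace: "0", // namespace 0 = articles only, no talk/user pages
    rnlimit: String(n),
    format: "json",
  });
  return `https://en.wikipedia.org/w/api.php?${params.toString()}`;
}
```

Fetching that URL returns a JSON list of titles to paste into ChatGPT, which makes the "6 out of 6" experiment reproducible with a fresh sample each time.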
Yup, my extension checks for that first
Yes, I have to say this has been incredibly beneficial for me and others. It's a fantastic alternative to LLM summaries, since Simple articles are written and edited by humans.
I know one study found that 51% of the summaries AI produced for them contained significant errors. I wouldn’t trust AI summaries as a result. But making the switch to the Simple article is very helpful.
Source: https://www.bbc.com/news/articles/c0m17d8827ko
Interesting! This is partially why I prioritize using Simple English Wikipedia. However, the prompt instruction for the LLM isn't to "summarize" but to explain in simpler terms. Hope that helps.