TwinTurbo's recent activity
-
Comment on Is chain-of-thought reasoning of LLMs a mirage? A data distribution lens. in ~tech
-
Comment on New Android phones, stock or flash? in ~tech
TwinTurbo I'm interested to know your thoughts on OnePlus vs Samsung. What did you like/dislike most about each?
I keep seeing screenshots of UI/UX bugs and inconsistencies in the OnePlus software, whereas One UI feels like a much more mature and stable piece of software.
-
Comment on Redditors of Tildes .. what is the thing you can live without? in ~tech
TwinTurbo Apart from your first and last point, everything else could be fixed or worked around by using a good third-party client or old.reddit on desktop. This is what reddit is to me; without them, it’s not a place I want to be…
-
What coffee have you been brewing at home recently?
Have you recently come across some nice beans? What roasters do you usually buy from? What's your recipe and what does your coffee taste like? Espresso and filter both welcome.
38 votes -
Comment on What types of content do you want to see on Tildes? in ~tildes
TwinTurbo /r/coffee and /r/espresso taught me a lot of things during the lockdowns. I'd love to see them here!
Topic-specific news is another one. Reddit is a great place to find news on computing, Android/iPhone, politics, health, and so on, and it would be great to get that here.
This is my intuition, too. I remember from reading about DeepSeek-R1 that a lot of the "reasoning" value came from being able to stuff a really long CoT inside the context window. I bet Gemini Pro, with its 1M-token context window, does the same, as you can see from the "thoughts" in the web interface, which sometimes go on for quite a while before the answer begins.
I'm not saying anything bad about this study, but I think it's important to keep in mind that findings on small LLMs from just a few years ago don't necessarily generalise to the newest, largest models available today.