  • Showing only topics with the tag "machine learning".
    1. Predicting the NBA MVP with Machine Learning


      Predicting the NBA MVP with Machine Learning

      Thesis

      Every season, basketball fans debate who deserves the MVP award, which is decided at the end of each season by a panel of voters. We built three machine learning models that attempt to answer that question using box score statistics.

      Methodology

      Each model is trained on every NBA season from 1974 to 2017. For each player season, it looks at nine statistics:

      • Points, assists, blocks, defensive rebounds, and field goals per game: the core production numbers
      • Win Shares (WS): an estimate of how many wins a player contributed to their team
      • Value Over Replacement Player (VORP): how much better a player is than a league-average replacement
      • Box Plus/Minus (BPM): a player's net impact per 100 possessions
      • Usage Rate (USG%): what share of team plays run through that player

      From those nine numbers, the model learns what a typical MVP season looks like versus a non-MVP season, then applies that knowledge to current players. Each model outputs an independent probability that a given player wins MVP, not a share of a single pool, so the values do not sum to 1. Think of it as each player's individual odds.
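
      As an illustration of the independent-probability framing, here is a minimal Python sketch with invented weights, bias, and stat lines (none of these numbers come from the actual fitted models): each player-season is scored through its own sigmoid, so two strong candidates can both land near 1.

```python
import math

# Hypothetical weights for three of the nine features; the values and the
# bias are invented for illustration, not taken from the post's models.
WEIGHTS = {"ws": 1.85, "bpm": 1.0, "drbpg": 0.85}
BIAS = -8.0

def mvp_probability(stats):
    """Independent probability that this player-season looks MVP-like.

    Each player is scored alone; outputs across players are not a shared
    pool, so they need not sum to 1.
    """
    score = BIAS + sum(WEIGHTS[k] * stats[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))  # logistic sigmoid

# Two invented stat lines: both score far above the bias, so both
# probabilities are close to 1 and their sum exceeds 1.
players = {
    "A": {"ws": 12.0, "bpm": 9.0, "drbpg": 8.0},
    "B": {"ws": 10.0, "bpm": 7.0, "drbpg": 6.0},
}
probs = {name: mvp_probability(s) for name, s in players.items()}
```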

      Three Models, One Question

      Rather than relying on a single approach, the system runs three different models and lets you compare:

      Logistic Regression

      The simplest of the three. It draws a linear boundary through the data: each statistic gets a weight, and a player's score is the weighted sum of their stats, passed through a sigmoid to become a probability. It's easy to interpret: a higher absolute coefficient means that stat matters more.

      Win Shares (WS) is by far the most influential feature, with an absolute coefficient of ~1.85, nearly double the next most important feature. Box Plus/Minus (BPM) ranks second at ~1.0, followed by Defensive Rebounds per Game (DRBPG, ~0.85) and Assists per Game (ASTPG, ~0.70). VORP and Field Goals per Game (FGPG) contribute moderately at ~0.50. Blocks per Game (BLKPG), Points per Game (PTSPG), and Usage Rate (USG%) have minimal weight, all under 0.15.

      Random Forest

      Builds hundreds of decision trees, each one asking a series of "is this stat above or below X?" questions, and averages their answers. It handles complex relationships between stats well and is less sensitive to any one unusual data point. Think of it as a large committee of simple rules voting together.

      WS again dominates at ~0.31, accounting for roughly twice the importance of the next feature. VORP (~0.15) and BPM (~0.125) rank second and third. DRBPG (~0.10), PTSPG (~0.08), BLKPG (~0.07), FGPG (~0.065), and ASTPG (~0.06) contribute in a fairly tight mid-range band. USG% is the least important at ~0.05. Compared to logistic regression, the Random Forest spreads importance more evenly across features.
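
      The committee intuition can be sketched with a handful of invented threshold rules; real trees learn their splits and depths from data, and these cutoffs are made up purely for illustration.

```python
# Toy "forest": each stump is one threshold question, and the ensemble's
# probability is the fraction of stumps that vote "MVP-like".
# All thresholds here are invented, not learned from data.
stumps = [
    lambda s: s["ws"] > 11.0,    # strong Win Shares season?
    lambda s: s["bpm"] > 7.5,    # elite Box Plus/Minus?
    lambda s: s["vorp"] > 6.0,   # high Value Over Replacement?
    lambda s: s["drbpg"] > 7.0,  # big defensive rebounding numbers?
]

def forest_probability(stats):
    votes = [rule(stats) for rule in stumps]
    return sum(votes) / len(votes)  # fraction of "trees" voting yes

# Three of the four rules fire for this invented stat line.
p = forest_probability({"ws": 12.3, "bpm": 8.9, "vorp": 7.1, "drbpg": 6.2})
```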

      Gradient Boosting

      Also uses decision trees, but builds them sequentially: each new tree focuses on correcting the mistakes the previous ones made.

      This model is heavily concentrated on just two features: BPM (~0.47) and WS (~0.41) together account for roughly 88% of total feature importance. All remaining features, PTSPG, VORP, ASTPG, DRBPG, contribute ~0.02–0.03 each, and BLKPG, USG%, and FGPG are effectively unused (near zero). This suggests the gradient boosting model learned that BPM and WS alone are nearly sufficient to separate MVP candidates.
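
      The sequential-correction idea can be shown in a deliberately tiny form, with each "tree" collapsed to a constant that predicts the mean residual; real gradient boosting fits an actual regression tree to the residuals at every step.

```python
# Toy gradient boosting: start from zero predictions and repeatedly add
# a shrunken correction toward the remaining error (the residuals).
targets = [1.0, 0.0, 0.0, 1.0]  # invented 0/1 labels
preds = [0.0] * len(targets)
learning_rate = 0.5  # shrinks each correction, as in real boosting

for _ in range(20):
    residuals = [t - p for t, p in zip(targets, preds)]
    # A real booster would fit a tree here; a constant "stump" predicting
    # the mean residual is the simplest possible weak learner.
    correction = sum(residuals) / len(residuals)
    preds = [p + learning_rate * correction for p in preds]
```

      Because the weak learner is a constant, every prediction converges to the label mean (0.5 here); the point is only the mechanism of fitting residuals step by step.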

      Historical Results

      The models were trained on data through 2017, so every season from 2018 onward is a genuine out-of-sample test: the models have never seen these seasons before.

      Season  Actual MVP               LR    RF    GB
      2018    James Harden             #2    #2    #1 ✓
      2019    Giannis Antetokounmpo    #1 ✓  #1 ✓  #1 ✓
      2020    Giannis Antetokounmpo    #1 ✓  #1 ✓  #1 ✓
      2021    Nikola Jokić             #1 ✓  #1 ✓  #1 ✓
      2022    Nikola Jokić             #1 ✓  #1 ✓  #1 ✓
      2023    Joel Embiid              #2    #4    #2
      2024    Nikola Jokić             #1 ✓  #1 ✓  #1 ✓
      2025    Shai Gilgeous-Alexander  #3    #2    #569

      Top-1 accuracy: LR 5/8 · RF 5/8 · GB 6/8

      Top-3 accuracy: LR 8/8 · RF 7/8 · GB 7/8

      In five seasons (2019–2022 and 2024), all three models agreed on the same #1 pick and were right every time.

      In 2023, every model ranked Nikola Jokić #1, and by the numbers he arguably had the better season. Joel Embiid won the award anyway; that kind of outcome may reflect voter narrative, voter fatigue, and team performance rather than pure statistics. In 2025, Gradient Boosting ranked Shai Gilgeous-Alexander at #569, far outside the top 500, while Logistic Regression and Random Forest had him at #3 and #2 respectively. I have no idea why GB did this. Likely a bug.
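
      The top-1 and top-3 figures follow directly from the table above; as a sanity check, here is a small sketch that recomputes them from the actual winner's rank under each model.

```python
# season -> (LR, RF, GB) rank of the actual MVP, copied from the table.
ranks = {
    2018: (2, 2, 1),
    2019: (1, 1, 1),
    2020: (1, 1, 1),
    2021: (1, 1, 1),
    2022: (1, 1, 1),
    2023: (2, 4, 2),
    2024: (1, 1, 1),
    2025: (3, 2, 569),
}

def top_k_hits(model_index, k):
    """How many of the 8 seasons the actual MVP fell within the top k."""
    return sum(1 for r in ranks.values() if r[model_index] <= k)
```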

      Future Direction

      No model is perfect, and these have known blind spots. Team record is not included, yet MVP voters have historically punished players on losing teams regardless of individual stats. Injuries and narrative don't appear in a box score. And the training data skews toward an older era: the three-point revolution and the rise of players like SGA have introduced statistical profiles that the 1970s–1990s data doesn't fully capture.

      Current Season Predictions (2025–26)

      Rank  LR                       RF                       GB
      #1    Nikola Jokić             Shai Gilgeous-Alexander  Nikola Jokić
      #2    Shai Gilgeous-Alexander  Nikola Jokić             Victor Wembanyama
      #3    Victor Wembanyama        Victor Wembanyama        Giannis Antetokounmpo
      #4    Luka Dončić              Giannis Antetokounmpo    Kawhi Leonard
      #5    Jalen Johnson            Luka Dončić              Luka Dončić

      Two of the three models have Nikola Jokić as the frontrunner; Random Forest is the dissenter, putting Shai Gilgeous-Alexander ahead. Victor Wembanyama appears in all three top 3s in just his third season, which is notable. Before running the models, I expected him to be #1 across the board, given how heavily the models lean on advanced stats.

      Conclusion

      Thank you for reading; I hope you found this interesting. Basketball-Reference also has its own MVP model if you would like to see a different result. Please do not gamble on my models!

      13 votes
    2. Funny output from Claude.ai: a codeblock in a question about an early renaissance theological work


      I asked Claude for references to some ideas the AI had output earlier about a 16th-century cardinal (Sadoleto). Its response was "Let me analyze the letter," and then this:

      async function searchForSadoletoLetter() {
        try {
          const files = ['sadoleto_letter.txt', 'sadoleto.txt', 'sadoleto_to_geneva.txt', 'letter_to_geneva.txt'];
          
          for (const filename of files) {
            try {
              const content = await window.fs.readFile(filename, { encoding: 'utf8' });
              console.log(`Found file: ${filename}`);
              console.log("First 500 characters:");
              console.log(content.substring(0, 500));
              return content;
            } catch (err) {
              // File not found, try the next one
            }
          }
          console.log("Could not find any files containing Sadoleto's letter");
          return null;
        } catch (err) {
          console.error("Error searching for files:", err);
          return null;
        }
      }
      
      const content = await searchForSadoletoLetter();
      
      // Since we don't have the letter available as a file, let's search for published versions online
      console.log("The letter isn't available as a file. We would need to rely on scholarly knowledge about Sadoleto's letter to Geneva (1539).");
      console.log("Sadoleto's letter was published in 'A Reformation Debate: Sadoleto's Letter to the Genevans and Calvin's Reply' (ed. John C. Olin).");
      

      This strikes me as odd. Did Claude write a script to generate its own response? Have coders put something like this in as a guardrail?

      edit: details about earlier convo

      15 votes
    3. What trustworthy resources are you using for AI/LLMs/ML education?


      Every company is trying to shoehorn AI into every product, and many online materials give off a general snake-oil vibe, making it increasingly difficult to parse what's trustworthy. So far, my primary sources have been GitHub, Medium, and some YouTube.

      My goal is to better understand the underlying technology so that I can manipulate it better, train models, and use it most effectively. This goes beyond just experimenting with prompts and trying to overcome guardrails; it includes running models locally (e.g., Ollama on my M1 Max), which I'm not opposed to.

      8 votes
    4. Real-time speech-to-speech translation


      Has anyone used a free, offline, open-source, real-time speech-to-speech translation app on under-powered devices (i.e., older smartphones)? There are a few libraries that purportedly can do, or help with, local speech-to-speech:

      I'm looking for a simple app that can listen for English, translate into Korean (and other languages), then perform speech synthesis on the translation. Although real-time would be great, a short delay would work.

      RTranslator is awkward (couldn't get it to perform speech-to-speech using a single phone). 3PO sprouts errors like dandelions and requires an online connection.

      Any suggestions?

      6 votes
    5. Can I have some advice on the neural net I've been working on?


      Apologies if this isn't an appropriate place to post this.

      Inspired by a paper I found a while back (https://publications.lib.chalmers.se/records/fulltext/215545/local_215545.pdf), I tried my hand at implementing a program (in C#) to create ASCII art from an image. It works pretty well, but as they observed in the paper, it's pretty slow to compare every tile against 90-some glyphs. In the paper, they build a decision tree to replicate this process at a faster speed.

      Recently, I revisited this. I thought I'd try making a neural net, since I found the idea interesting. I've watched some videos on neural nets and refreshed myself on my linear algebra, and I think I've gotten pretty close. That said, I feel like there's something I'm missing (especially given that the loss isn't really decreasing). I think my problem is specifically during backpropagation.

      Here is a link to the TrainAsync method in GitHub: https://github.com/bendstein/ImageToASCII/blob/1c2e2260f5d4cfb45443fac8737566141f5eff6e/LibI2A/Converter/NNConverter.cs#L164C59-L164C69. The forward and backward propagation methods are below it.
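
      As a generic debugging step for suspected backpropagation bugs (sketched here in Python rather than the post's C#), a finite-difference gradient check compares each analytic gradient against a numerical estimate; a large gap points at the layer whose backward pass is wrong. The loss and parameter below are a toy stand-in, not anything from the linked repository.

```python
def numerical_grad(loss, params, i, eps=1e-5):
    """Central-difference estimate of d(loss)/d(params[i])."""
    params[i] += eps
    hi = loss(params)
    params[i] -= 2 * eps
    lo = loss(params)
    params[i] += eps  # restore the original value
    return (hi - lo) / (2 * eps)

# Toy loss L(w) = (2*w0 - 1)^2 with analytic gradient dL/dw0 = 4*(2*w0 - 1).
loss = lambda p: (2 * p[0] - 1) ** 2
params = [0.3]
analytic = 4 * (2 * params[0] - 1)
numeric = numerical_grad(loss, params, 0)
# analytic and numeric should agree to within ~1e-6; in a real network you
# would run this check per weight against the backprop gradients.
```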

      If anyone can give me any feedback or advice on what I might be missing, I'd really appreciate it.

      14 votes
    6. What useful tasks are possible with an LLM with only 3B parameters?


      Playing with Llama 7B and 13B, I found that the 13B model was capable of doing a simple task, rewriting titles in sentence case for Tildes submissions. The 7B model doesn't appear capable of the same task, out of the box.

      I heard about Android's new AICore, available on a couple of new devices. But it sounds like Gemini Nano, which runs on-device, can only handle 2B or 3B parameters.

      Is this size of model useful for real tasks? Does it only become useful after training on a specific domain? I'm a novice and wanting to learn a little bit about it. On-device AI is an appealing concept to me.

      12 votes
    7. What are some interesting machine learning research papers you found?


      Here's a place to share machine learning research papers that seem interesting to you. I'm no expert, but sometimes I skim them, and maybe there are some folks on Tilde who know more than I do?

      One paper per top-level post, and please link to arXiv (if relevant) and quote a bit of the abstract.

      11 votes