6 votes

When Machine Learning Tells the Wrong Story

1 comment

  1. skybrian

    From the blog post:

    Since our talk, every few months, I’ve gotten the urge to write a blogpost about the paper. Among other cool things described in the paper, we…

    • Implemented a powerful machine-learning-assisted side-channel attack that can be pulled off in any modern web browser
    • Demonstrated for the first time in the literature that system interrupts, a low-level mechanism that your operating system uses to interact with hardware devices, can leak information about user activity
    • Learned a valuable lesson about the dangers of applying machine learning toward hardware security research

    I think some of these lessons are widely applicable, even outside of hardware security research.
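    To make the measurement idea in those bullets concrete, here is a rough sketch of the kind of counting loop such attacks rely on. It is not the paper's code (the real attack runs as JavaScript in the browser), and the window and trace lengths are placeholder values.

    ```python
    import time

    # Sketch of the measurement primitive: count how many loop iterations fit
    # into each fixed time window. Interrupts and other activity steal CPU time,
    # so busy periods show up as dips in the per-window counts; that trace of
    # counts is what a classifier is later trained on.

    WINDOW_MS = 5        # length of one measurement window (placeholder value)
    NUM_WINDOWS = 2000   # 2000 windows * 5 ms = a 10-second trace

    def record_trace(window_ms=WINDOW_MS, num_windows=NUM_WINDOWS):
        trace = []
        window_s = window_ms / 1000.0
        for _ in range(num_windows):
            count = 0
            deadline = time.perf_counter() + window_s
            while time.perf_counter() < deadline:
                count += 1           # busy-work that competes for the CPU
            trace.append(count)      # low count means the CPU was busy elsewhere
        return trace

    if __name__ == "__main__":
        trace = record_trace()
        print(f"mean iterations per window: {sum(trace) / len(trace):.0f}")
        print(f"minimum (busiest window):   {min(trace)}")
    ```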

    This would become the biggest lesson of our eventual research paper: in a machine-learning-assisted side-channel attack such as this one, if a model can reliably predict user activity, it proves the presence of a signal, but not the cause of that signal. Even though Shusterman et al.’s model could identify the correct victim website 91.4% of the time, that didn’t necessarily mean that their model was picking up on contention over the CPU cache. And the implications of getting this wrong can be big: researchers look at papers describing attacks when building defenses that make our computers safer. A more thorough analysis was needed in order to properly identify the side channel, which we set out to provide.
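    A rough sketch of the kind of control experiment that lesson calls for, again not from the paper: train the same classifier on traces recorded with the hypothesized mechanism present and with it removed, and compare accuracy. The data here is synthetic stand-in data and the classifier is an off-the-shelf scikit-learn model.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # If the classifier stays accurate even when the hypothesized cause of the
    # signal is removed, the high accuracy was never evidence for that cause.

    rng = np.random.default_rng(0)

    def make_traces(n_sites=10, per_site=30, length=500):
        # Placeholder synthetic "traces"; in a real experiment these would be
        # recorded per victim website under the stated condition.
        X, y = [], []
        for site in range(n_sites):
            base = rng.normal(size=length)
            for _ in range(per_site):
                X.append(base + rng.normal(scale=0.5, size=length))
                y.append(site)
        return np.array(X), np.array(y)

    # Condition A: traces recorded normally (hypothesized mechanism present).
    X_a, y_a = make_traces()
    # Condition B: traces recorded with that mechanism disabled
    # (here just another synthetic set, for illustration only).
    X_b, y_b = make_traces()

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc_a = cross_val_score(clf, X_a, y_a, cv=5).mean()
    acc_b = cross_val_score(clf, X_b, y_b, cv=5).mean()

    print(f"accuracy with mechanism present: {acc_a:.2f}")
    print(f"accuracy with mechanism removed: {acc_b:.2f}")
    # If acc_b stays close to acc_a, the model is keying on something else.
    ```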

    1 vote