The author's reasoning for releasing a vulnerability like this is of particular interest (taken from the README):
I like VirtualBox and it has nothing to do with why I publish a 0day vulnerability. The reason is my disagreement with the contemporary state of infosec, especially of security research and bug bounties:

1. Waiting half a year for a vulnerability to be patched is considered fine.
2. In the bug bounty field, these are considered fine:
   - Waiting more than a month for a submitted vulnerability to be verified and a decision made on whether to buy it.
   - Changing the decision on the fly. Today you find out the bug bounty program will buy bugs in some software; a week later you come with bugs and exploits and receive "not interested".
   - Not having a precise list of software a bug bounty program is interested in buying bugs for. Handy for bug bounties, awkward for researchers.
   - Not having precise lower and upper bounds on vulnerability prices. Many things influence a price, but researchers need to know what is worth working on and what is not.
3. Delusions of grandeur and marketing bullshit: naming vulnerabilities and creating websites for them; holding a thousand conferences a year; exaggerating the importance of your own job as a security researcher; considering yourself "a world saviour". Come down, Your Highness.

I'm exhausted by the first two, therefore my move is full disclosure. Infosec, please move forward.
Man, I get why the author is frustrated, but I don't think publishing 0-day exploits to GitHub READMEs is gonna fix that. Honestly, I feel it hurts the reputation of security researchers more than anything else.

If a researcher tries to responsibly disclose first and gets no traction, then I understand public disclosure, but an attempt should be made. Not all companies ignore this stuff, and it's not fair to just assume they will.
Responsible disclosure is a trap, honestly, and more or less entirely pushed by the side creating the bugs in the first place. Most academic security researchers dislike it and think the same.

EDIT: Caveat, a few, but not most, of the "bigger" organisations like it. This is generally considered to be so because it gives them more leverage against small-scale security researchers and makes it less practical to be one.
Isn't the ultimate goal of security research the safety of end users? It's not security for security's sake. If there is even a small chance that a company will patch a vulnerability, we have a responsibility to the end users to give them that chance.

Publicly disclosing 0-days is a last-ditch effort to protect users by forcing the company to fix an exploit when the danger of not fixing it outweighs the danger of public disclosure.
See Thomas Ptacek's opinion on this:
https://news.ycombinator.com/item?id=12309035