11 votes

Topic deleted by author

10 comments

  1. envy
    Link

    I have no problem with large companies.

    I do have a problem with any company that continually exhibits anti-competitive/anti-consumer behavior.

    To me, the obvious answer is to break up the company. Your Experian data was stolen? Break Experian up.

    If a CEO had to choose between a little less profitability and a real risk of losing half the company, which do you think they would choose?

    13 votes
  2. [5]
    Akir
    Link

    This is just the wheel of time making another revolution. The article mentions the famous Xerox Alto computer, invented at Xerox but whose ideas were commercialized by a relative upstart called Apple. Xerox is not the only big company that had a research department making groundbreaking inventions. AT&T had one - remember Bell Labs - and so did RCA. An unwritten truth about disruptive innovations is that they often happen because big companies are investing in the research required to produce them.

    The potential problem with this is that corporations get to choose what their researchers work on, and that often leads them to work on projects that are perceived to be the most profitable. Right now, that's largely AI, which I personally think is a bit of a dead end.

    6 votes
    1. [4]
      ali
      Link Parent

      Right now, that's largely AI, which I personally think is a bit of a dead end

      In which ways do you think AI is a dead end? There’s still so much to be discovered.

      1 vote
      1. [3]
        stu2b50
        Link Parent

        Not OP, but AI nowadays just means machine learning. Which is cool, words change, but "AI" carries implications that aren't really true of it. Machine learning can be a great tool, and it has been used very successfully ever since the 1400s. Yes, the 1400s.

        Linear regression is one type of machine learning, and you do it in middle school. LSLR (least-squares linear regression) is one of the first things you'll learn in an ML class, because it has a nice closed form, exposes you to the pseudoinverse, and you can quite quickly derive it from MLE (maximum likelihood estimation).
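
        To illustrate that closed form, a rough sketch (assuming numpy; the toy data below is made up for the example) looks like this:

        ```python
        import numpy as np

        # Toy data: 50 samples, 2 features, plus a column of ones for the intercept.
        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
        y = X @ np.array([1.0, 2.0, -3.0]) + rng.normal(scale=0.1, size=50)

        # Closed-form least squares: beta = (X^T X)^{-1} X^T y,
        # computed here via the Moore-Penrose pseudoinverse.
        beta_hat = np.linalg.pinv(X) @ y
        print(beta_hat)  # roughly [1, 2, -3]
        ```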

        But it's not a hammer for all nails. We are not really any closer to general AI. Even the most advanced neural networks, in reality, have little to do with neurons and more to do with linear regression.
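
        To make the "more to do with linear regression" point concrete, here is a hypothetical sketch (numpy assumed; the weights are placeholders) of a dense layer, which is just an affine map followed by a nonlinearity:

        ```python
        import numpy as np

        def dense_layer(x, W, b):
            # One "layer of neurons": a linear-regression-style affine map plus a ReLU.
            return np.maximum(0.0, W @ x + b)

        # A "deep" network is just a composition of these affine-plus-nonlinearity steps.
        x = np.array([0.5, -1.2, 3.0])
        W1, b1 = np.ones((4, 3)), np.zeros(4)
        W2, b2 = np.ones((2, 4)), np.zeros(2)
        print(dense_layer(dense_layer(x, W1, b1), W2, b2))
        ```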

        5 votes
        1. [2]
          Comment deleted by author
          Link Parent
          1. stu2b50
            Link Parent

            This allows it to use tools like layering of logistic regressions

            I don't understand this separation of "machine learning" and "statistics". It's not like there are separate factions competing with each other or something.

            Every ML researcher has had a traditional statistics background, and many of them straight up are statistics PhDs.

            And in the ML world, explainability is highly valued. That's partly why decision trees blew up a few decades ago: they're incredibly clear to read.
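
            As one illustration of that readability (a minimal sketch, assuming scikit-learn; the dataset choice is arbitrary):

            ```python
            from sklearn.datasets import load_iris
            from sklearn.tree import DecisionTreeClassifier, export_text

            iris = load_iris()
            tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

            # The fitted tree prints as plain if/else rules a person can read and audit.
            print(export_text(tree, feature_names=iris.feature_names))
            ```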


            And those massive layered networks can only be feasibly trained with modern computers using things like backpropagation and stochastic gradient descent, methods that would not have been used in the 1400s.

            1400s, no, but gradient descent is not new by any means. It is one of the simplest forms of optimization for convex problems, and models like lasso or logistic regression can be (though not necessarily efficiently) solved for with gradient descent.
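
            For instance, plain gradient descent on the (convex) logistic regression loss looks something like this (a rough sketch, assuming numpy; the step size and iteration count are arbitrary):

            ```python
            import numpy as np

            def sigmoid(z):
                return 1.0 / (1.0 + np.exp(-z))

            # Toy binary-classification data.
            rng = np.random.default_rng(0)
            X = rng.normal(size=(200, 3))
            y = (X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=200) > 0).astype(float)

            # Gradient descent on the average logistic loss: w <- w - lr * gradient.
            w = np.zeros(3)
            lr = 0.1
            for _ in range(500):
                grad = X.T @ (sigmoid(X @ w) - y) / len(y)
                w -= lr * grad
            print(w)  # should roughly recover the signs/ordering of the true weights
            ```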

            Tbh the only exceptional part of gradient descent on neural networks is that it works surprisingly well for how completely and utterly unfounded it is, since neural network cost functions are about the least convex things imaginable.

            2 votes
        2. Akir
          Link Parent

          That's the gist of it, but it's not the only reason. And to be honest, "dead end" might have been too strong a choice of words. Like I said, there is a lot of money being spent on "AI", and like you said, we're not actually that much closer to general AI.

          ML in particular is a real dead end. The thing that is supposed to be genius about ML is that it works much like the human brain does. But when we see an object, we are constantly updating our personal mental models as we go; ML might identify an object correctly most of the time, but it's all fuzzy logic, and you will always need a person to correct it when it is wrong, no matter how much data it has been trained on.

          Autonomous cars are supposedly able to detect pedestrians with ML, but the existence of fashion means there is a broad range of things to match against, and that range is continually growing. How on earth are we supposed to rely on tech companies to continually upgrade their models in the long term if we can't even depend on them to support thermostats and light bulbs?

          Also, for philosophical reasons, I don't think we will ever make an AI that thinks the same way a human does (short of building exact-replica brains), nor do I think that making human-like AI is a wise idea. And no, it's not because I fear the AI apocalypse; it's because I think the way humans think is itself flawed. Human rationality is an oxymoron; logic and reason don't work because human thought is subjective by nature.

          3 votes
  3. [4]
    skybrian
    Link

    I'm more worried about the widespread skepticism towards all tech companies, large or small. It used to be that a hot new startup that seemed to be doing things right would get a lot of fans. It's not clear where people think innovation should come from, if not from young people trying to make something new?

    1 vote
    1. babypuncher
      Link Parent

      It's not clear where people think innovation should come from, if not from young people trying to make something new?

      Some people fear change in general and don't really want innovation. They just want things to stay the way they are. It's hard to blame some of them; look at all the coal workers being displaced by the growth of innovative clean energy alternatives.

      2 votes
    2. [2]
      joplin
      Link Parent

      It's not clear where people think innovation should come from, if not from young people trying to make something new?

      What does the age of the people making it have to do with making something new?

      1 vote
      1. skybrian
        Link Parent

        Sure, it could be anyone.

        1 vote