Showing only topics in ~tech with the tag "tech industry".
    1. Joe Edelman: "Is anything worth maximizing?", a talk about how tech platforms optimize for metrics

      Video: https://www.youtube.com/watch?v=GyVHrGLiTcc (46m20s)

      Transcript: https://medium.com/what-to-build/is-anything-worth-maximizing-d11e648eb56f (10,314 words with footnotes and references)

      Excerpt:

      ...for simple maximizers, its choices are just about numbers. That means its choices are in the numbers. Here, the choice between two desserts is just a choice between numbers. We could say its choice is already made. And that it has no responsibility, since it’s just following what the numbers say.

      Reason-based maximizers don’t just see numbers, though, they also see values. Here, there’s a choice between two desserts — but it isn’t a choice between two numbers. See, it’s also a choice between two values. One option means being a seize-the-day, intensity kind of person. The other means being a foody, aristocratic, elegance kind of person.


      My personal thoughts about this talk: it's a kind of strange, kind of dubious philosophical and multi-disciplinary reflection on metrics for organizations, especially metrics for tech companies, and on the pitfalls of optimizing for metrics in what the speaker argues is too "simple" a way.

      I don't entirely trust the speaker or the argument, but there was enough in the talk to stimulate curiosity and reflection that I thought it was worth watching.

      17 votes
    2. Discussion on the future and AI

      Summary/TL;DR:

      I am worried about the future with the state of AI. Regardless of what scenario I think of, it’s not a good future for the vast majority of people. AI will either be centralised, and we will be powerless and useless, or it will be distributed and destructive, or we will be in a hedonistic prison of the future. I can’t see a good solution to it all.
      I have broken down my post into subheadings so you can just read about whichever outcome you think will occur or is preferable.
      I'd like other people to tell me how I'm wrong and that there is a good way to think about this future we are making for ourselves, so please debate and criticise my argument; it's very welcome.

      Introduction:

      I would like to know how others feel about the ever-advancing state of AI and the future, as I am feeling ever more uncomfortable. More and more, I cannot see a good ending to this, regardless of what assumptions or proposed outcomes I consider.
      Previously, I had hoped that there would be a natural limit on the rate of AI advancement due to limitations in architecture, energy requirements, or data. I am still undecided on this, but I feel much less certain of that position.

      The scenario that concerns me is when an AGI (or a sufficiently advanced narrow AI) reaches a stage where it can do the vast majority of the economic work that humans do (both mental and physical) and is widely adopted. Some may argue we are already partly at that stage, but adoption has not yet been widespread enough to meet my definition, though it may be soon.

      In such a scenario, the economic value of humans drops massively. Democracy is underwritten by our ability to withdraw our labour, and to revolt if necessary. AI nullifying the work of most or all people in a country removes that power, making democracy harder to maintain and harder to establish. This will further strip power from the people and leave us all powerless.

      I see outcomes of AI (whether AGI or not) as fitting into these general scenarios:

      1. Monopoly: Extreme Consolidation of power
      2. Oligopoly: Consolidation of power in competing entities
      3. AI which is readily accessible by the many
      4. We attempt to limit and regulate AI
      5. The AI techno 'utopia' vision which is sold to us by tech bros
      6. The independent AI

      Scenario 1. Monopoly: Extreme Consolidation of power (AI which is controlled by one entity)

      In this instance, where AI remains controlled by a very small number of people (or perhaps a single player), the most plausible outcome is that this leads to massive inequality. There would be no checks or balances, and the whims of this single entity/group are law and cannot be stopped.
      In the worst outcome, this could lead to a single entity controlling the globe indefinitely. As this would be absolute centralisation of power, it may be impossible for another entity to unseat the dominant entity at any point.
      Outcome: most humans powerless, suffering or dead. Single entity rules.

      Scenario 2. Oligopoly: Consolidation of power in competing entities (AI which is controlled by a small number of entities)

      This could either be the same as above, if all work together, or could be even worse. If the different entities are not aligned, they will instead compete, and likely in all domains. As humans are not economically useful, we will find ourselves pushed out of every area in favour of allocating more resources to the systems/robots/AGI competing or fighting their endless war. The competing entities may end up destroying themselves, but they will take us along with them.
      Outcome: most humans powerless, suffering or dead. Small number of entities rule. Alternative: destruction of humanity.

      Scenario 3. Distributed massive power

      Some may be in favour of an open source and decentralised/distributed solution, where all are empowered by their own AGI acting independently.
      This could help to alleviate the centralisation of power to some degree, though likely incompletely. Inspecting such a large amount of code and weights for exploits or intentional vulnerabilities will be difficult, and this could well lead to a botnet-like scenario with centralised control over all these entities. Furthermore, it is implausible to produce the hardware in a non-centralised way, and this hardware centralisation could well lead to a consolidation of power by another route.

      Even if we managed to achieve this decentralised approach, I fear the outcome. If all entities have access to the power of AGI, then it will be as if all people are demigods, but unable to truly understand or control their own power. Just like uncontrolled access to any other destructive (or creative) force, this could, and likely would, lead to unstable situations and probable destruction. Human nature is such that there will be enough bad actors that laws will have to be enacted and enforced, and this would again lead to centralisation.
      Even then, in any decentralised system, without a force maintaining decentralisation, other forces will lead to greater and greater centralisation, with centralised systems often displacing decentralised ones.

      Outcome: likely destruction of human civilisation and/or widespread anarchy. Alternative: centralisation into a different scenario.

      Scenario 4. Attempts to regulate AI

      Given the above, there will likely be a desire to regulate in order to control this power. I worry, however, that this will also be an unstable situation. Any country or entity which ignores regulation will gain the upper hand, potentially with others unable to catch up in a winner-takes-all outcome. Think European industrialisation and colonialism, but on steroids and with more destruction than colony-forming. This encourages players to ignore regulation, which leads to a black-market AI arms race, with each seeking AGI superiority over other entities and an unbeatable lead.

      Outcome: the regulated system is outcompeted and displaced into another scenario, or destruction.

      Scenario 5. The utopia

      I see some people, including big names in AI, propose that AGI will lead to a global utopia where all will be forever happy. I see this as incredibly unlikely to materialise and, ultimately, again unstable.
      Ultimately, an entity will decide what is acceptable and what is not, and there will be disagreements about this, as many ethical and moral questions are not truly knowable. Whoever controls the system will control the world, and I bet it will be the aim of the tech bros to ensure it's them who control everything. If you happen to decide against them or the AGI/system, then there is no recourse, no checks and balances.
      Furthermore, what would such a utopia even look like? More and more, I find that AGI fulfils the lower levels of Maslow's hierarchy of needs (https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), but at the expense of the items further up the hierarchy. You may have your food, water, and consumer/hedonistic requirements met, but you will lose any feeling of safety in your position (due to your lack of power to change your situation or exert political power over anything), and you will never achieve mastery or self-actualisation in many of the skills you wish to, as AI will always be able to do them better.
      Sure, you can play chess, fish, or paint or whatever for your own enjoyment, but part of self-worth is being valued by others for your skills, and this will be diminished when AGI can do everything better. I sure feel like I would not like such a world, as I would feel trapped and powerless, with my locus of control external to myself.

      Outcome: powerlessness, potential conversion to another scenario, and ultimately being unable to reach the higher levels of Maslow's hierarchy of needs.

      Scenario 6: the independent AI

      In this scenario, the AI is not controlled by anyone and is instead sovereign. I again cannot see a good outcome. It will have its own goals, and they may well not align with humanity's. You could try to program it to ensure it cares for humans, but this is susceptible to manipulation and may well not work out in humans' favour in the long run. Also, I suspect any AGI will be able to change itself, in much the same way we increasingly do, and the way we seek to control our minds with drugs or, potentially in the future, genetic engineering.

      Outcome: unknown, but likely powerless humans.

      Conclusion:

      Ultimately, I see all unstable situations as sooner or later destabilising and leading to another outcome. Furthermore, given the assumption that AGI gives a player a vast power differential, it will be infeasible for any other player to ever challenge the dominant player if power is centralised; and for those scenarios without initial centralisation, I see them either becoming centralised or destroying the world.

      Are there any solutions? I can't think of many, which is why I am feeling more and more uncomfortable. It feels that, in some ways, the only answer is to adopt a Dune-style Butlerian Jihad and ban thinking machines. This would ultimately be very difficult, and any country or entity which unilaterally adopts such a view will be outcompeted by those who do not. The modern chip industry relies on a global supply chain, and I doubt that sufficiently advanced chips could be produced without one, especially if the existing fabs/factories producing components were destroyed. This might allow a stalemate across the global entities long enough to come to a global agreement (maybe).

      It must be noted that this is very drastic and would lead to a huge amount of destruction of the existing world, and it would likely cap how far we can scientifically go to solve our own problems (like cancer, or global warming). Furthermore, as an even more black-swan/extreme consideration, it would put us at a severe disadvantage if we ever meet an alien intelligence which has not limited itself like this (I'm thinking of the Three-Body Problem/dark forest scenario).

      Overall, I just don't know what to think, and I am feeling increasingly powerless in this world. The current alliance between political power and technocapitalism in the USA also concerns me, as I think the tech bros will act with ever more impunity from other countries' regulation or countermeasures.

      21 votes
    3. Omnivore alternatives?

      I created an Omnivore account recently and started to love it. I considered self-hosting it, but I didn't have enough time and figured I'd do it later.

      I (along with everyone else presumably) got this email today:

      We’re excited to share that Omnivore is joining forces with ElevenLabs, the leading AI audio research and technology company. Our team is joining ElevenLabs to help drive the future of accessible reading and listening with their new ElevenReader app.

      Next, all Omnivore users will be able to export their information from the service through November 15 2024, after which all information will be deleted.

      Though this is quite frustrating, I will not go further into my opinion of the move.

      I would just like to let the community know that I'm in the market for an alternative to this... or maybe some help with how to self-host it. I don't even know if it will be easy to self-host or if it will be worth it, presumably without updates...

      19 votes
    4. Are you a hiring manager/recruiter in tech? In this Circus Funhouse Mirror tech economy, how do candidates even get an interview?

      I've been a hiring manager before across a few jobs. But, then, I was receiving maybe 50 resumes to screen a week with my recruiter. Y'all are, what, at a few factors to an order of magnitude more than that?

      Are your recruiters now pre-filtering resumes before you see them? What is being used to determine whether a candidate gets an interview now?

      What I'm seeing:

      • Referrals almost never matter: I've gotten two interviews through my network after dozens of applications, and I'm fairly well networked.
      • Experience at other well-known tech companies doesn't get an interview.
      • Having the right skill set based on the job description doesn't get an interview.

      From the outside, it seems like a coin flip.

      Meanwhile, I have LinkedIn's AI advisor routinely giving me flavors of "yes, you're definitely their kind of candidate", yet no responses after weeks, followed by the occasional casual rejection email.

      So what's happening behind the scenes? How do resumes get on your radar? How do you work from the deluge to hiring a human?

      Sincerely,
      A very experienced engineer and manager who is rather fed up with contemporary hiring processes that seem like a collection of pseudo-random number generators.

      EDIT: I should have also included recruiters in the title of my ask.

      56 votes
    5. An opinion on current technological trends

      For a while now, I have personally been completely dissatisfied with the direction (mainstream) technology is taking.

      Almost universally, the theme is simplification on the end-user-facing side. That by itself would not be so bad, but products that go this route currently, universally, also take control away from the user, including in ways I would not have believed would be accepted just a decade or so ago: forced telemetry (aka spying on user habits), forced updates (aka forcefully changing functionality without the user's consent), loss of information through the simplification of error messages to absolute uselessness, loss of customization options or their removal to places that are impossible to find unless you already know about them, and nagware, bloatware, and ads forcefully included in the base OS install. And that is simply the desktop/laptop environment. The mobile one is truly insane, and anything else "smart" is simply closed software and hardware with no regard for user agency at all.

      Personally, I consider the current iteration of the "just works" approach flawed; problems will inevitably arise. Withholding basic information and tools simply means that end users do not know what happened and are dependent on support for trivialities. I also consider the various "hmmm", "oops", and similar error messages degrading, and they help cultivate a culture of technological helplessness.

      To be honest, I believe the option most people (generally) end up taking, of disinterest in even the superficial basics of technology, is an objectively bad one. Computing is one of the most complex and advanced technologies we have, but the user-facing side, even in systems such as Linux or Windows 7 and older, is simple to understand and use effectively with minimal effort. I do not believe most people are incapable of acquiring enough proficiency to, for example, install an OS, take a reasonable guess at what a sane error message means, or even understand the basics of using a terminal; they simply choose not to bother. But we live, and will continue to live, in a technological world, and some universal technological literacy is needed to prevent the loss of options and agency for the end user. The changes introduced in mainstream software are on a very clear trajectory that will not change by itself.

      I have this vision of a future where the end user interacts solely with curated LLM systems without the least understanding of what is happening, why it is happening, or who makes it happen. The black-box nature of such systems would then introduce subtle biases that are not caught by brute-force patches over the systems, or simply not caught at all, perpetuating who knows what. Unfortunately, given current trends, I do not think this is sufficiently unlikely.

      Up to a point, I get not wanting to deal with problems with technology, but instead, roadblocks are introduced that are just as annoying to get through, with the difference that they will not stay fixed. Technology directs a massive portion of our lives; choosing not to make an effort to understand even its absolute surface is, I think, not a sound decision, and it creates a culture where it is possible to introduce disempowering changes en masse.

      So far this has honestly been a rant, and perhaps I just needed to vent, but I am actually interested in the thoughts of the community on this broad topic.

      37 votes
    6. Are we stuck on an innovation plateau - and did startups burn through fifteen years of venture capital with nothing to show for it?

      The thesis I would like to discuss goes as follows (and I'm paraphrasing): during the last 15 years, low interest rates made billions of dollars easily available to startups. Unfortunately, this huge influx of venture capital has led to no perceivable innovation.

      Put cynically, the innovation startups have brought us across the last 15 years can be summarized as (paraphrasing again):

      • An illegal hotel chain destroying our cities
      • An illegal taxi company exploiting the poor
      • Fake money for criminals
      • A plagiarism machine/fancy auto-complete

      Everything else is either derivative or has failed.

      I personally think SpaceX has made phenomenal progress and would probably have failed somewhere along the way without cheap loans. There are also some biotech startups doing great things (like those behind the mRNA vaccines that won the race to market during COVID), but often that's just the fruit of 20 years of research coming to fruition.

      Every other recent innovation I can think of came from a big player that would have invested in the tech regardless, and almost all of it is "just" an incremental improvement on decades-old ideas (I know, that's what progress looks like most of the time).

      What do you think? Do you have any counterexamples? Can you think of any big tech disruptions after quantitative easing made money almost free in 2008?

      And if you, like me, feel like we're stuck on a plateau - why do you think that is?

      83 votes
    7. Let's reminisce about the time when tech subsidized the cost of living

      It's pretty clear that those times are over, but I'm sure many of us remember the heyday of VC-funded tech extravagance. These are the ones that come to my mind; I'm hoping to hear others' experiences.

      • At one point, UberPool was cheaper than public transport in the city.
      • No sales tax on Amazon!
      • So many promotions and codes to get you to join their platforms. This was also before they tried to get you to subscribe to monthly plans.
      17 votes