I made a post here two years ago about starting my first SWE job; since then I've been promoted and have recently received a very exciting job offer
31 votes -
What steps can the average user take to secure their data privacy?
With all of the identity verification laws in the pipeline, data breaches, and government overreach (mandated monitoring in new cars in the US), what steps can the average person take to secure their anonymity and their data and device privacy?
I'm a tech-savvy person but nowhere near the level of a great many. It seems like, in the face of overwhelming odds, making small changes is only a drop in the bucket. I have all the data encryption settings enabled on my phone, but I use services like Dropbox and rely on them heavily. I've always thought that if the product is free, you're the product… but I pay for Dropbox, so they shouldn't be using my data for training AI (though they likely are). Setting up a personal cloud seems like a daunting task, as is getting involved in any of the small projects that people have going (decentralized networks, mesh…things, P2P, etc.). I've focused more on securing my home networks recently, so my Ubiquiti devices are restricted in what they can access, but I haven't actually pen-tested my network yet. I have Pop!_OS installed on my home desktop because I got tired of Windows' invasive… everything, but ultimately I don't know what I'm doing.
There are probably a great many people out there who feel like it's hopeless to try to do anything, because it won't matter when there's such a heavy push to invade, restrict, and monetize our digital lives. What can the average person do to take control of their devices and data?
34 votes -
Software job openings surge this year, defying AI fears
29 votes -
Is higher education still valuable?
Hi friends,
Given the current state of AI and other technologies, do you consider higher education to still be worth pursuing? For those of you with children, will you be advising them to go to college?
I'm asking because I am enrolled in a master's program in statistics and have ~2 years left. I'm concerned that by the time I'm finished, the degree won't be worth the paper it's printed on. Like many of you, I work in software. Some days I think I should be learning an entirely different skill set in a non-tech-related field to diversify my value instead of doubling down on a potentially dying field.
I am not really interested in “you should pursue education for the sake of education”. While this is probably true, at the end of the day I need a way to make money to survive and education is the historical way of increasing one’s value in the job market. Furthermore, I can educate myself for far cheaper if education from a university is no longer considered valuable.
Anyone else in the same boat? Am I being dramatic? Would love to hear your thoughts.
33 votes -
US data centers are getting off-grid power plants
15 votes -
The AI industry doesn’t take “no” for an answer
39 votes -
Why there's no European Google?
38 votes -
Cory Doctorow | AI companies will fail. We can salvage something from the wreckage.
91 votes -
Consumer Electronics Show 2026
With CES 2026 coming to a close, I figured that like last year, I should make a thread to see what people are excited (or not excited) for.
I honestly wasn't that excited (see the recent state of the US economy), but I want to hear your thoughts!
Previous Topics:
Dell's CES 2026 chat was the most pleasingly un-AI briefing I've had in maybe five years
I didn't post in the thread as I didn't have much to add; the top post by @Oxalis basically sums up my thoughts. Nice to see them be honest about how this isn't really panning out. Everyone wants AI except the consumer.
Clicks Communicator: the ultimate communication companion
Again, I didn't post in here, but I'm glad there is still a market for niche phones.
29 votes -
I no longer trust the stats that companies publish on gender equality in their tech roles
I am really not sure if this topic belongs in ~tech or ~society or ~talk but I trust the moderators to re-assign accordingly.
So, this is the layout of the "development" teams at my company.
There are 4 "development" teams, which report to the development manager, who also occasionally codes.
There is one team, the one I am on: 7 people, 6 males.
There is another team: 4 people, 3 males.
There is another team: 5 people, 4 males.
The last team I don't really consider a "development" team. It's a team of 4 females. What they are best suited for is QA, in the sense of manually testing the product to ensure the experience is sufficient for a push to PROD. But because of budget restrictions, they are being forced to learn code and testing suites so they can be the people who develop our testing structure. They are great people and excellent manual QAers, but they really are not developers.
All our tech managers and team leads are men, with the exception of the team lead for QA (obviously).
And just to be clear, the culture is friendly and respectful and I have no complaints. It's just that the gender ratio is pathetic.
So our tech gender ratio is really 3 women out of 17 people, which is about 17%.
If you want to consider the QA team a dev team to bump up the numbers, you get 7 out of 21, which is still only 33%.
At a recent company meeting, they were talking about how diverse our workforce is and blah blah blah (I tune out most of that stuff as we are fully remote and I spend most of my time coding), but then they showed a slide that claimed our gender ratio for tech roles was like 50% or something.....
I messaged a colleague at work, being like "where on earth did they get that number??", and he was like ":shrug: maybe they are counting the people who use the product we are making?"
To clarify that, the product we work on is rarely used by external customers. Instead we have employees who know how to use our product and correspond on our behalf with external customers. So all these employees are doing is using a webapp the real tech employees develop.
So, long story short, my company pulled a number out of nowhere to claim we have gender equity in our tech roles, and now I don't know how to trust any stats a company puts out about how equal the gender balance is in their "tech" departments.
31 votes -
The final straw: Why companies replace once-beloved technology brands
19 votes -
10x engineer - Midlife crisis
14 votes -
Microsoft is adding AI facial recognition to OneDrive and users can only turn it off three times a year
I didn't watch the whole video and I'm not familiar with the channel so I don't want to make this a link post, but here's the source: The Lunduke Journal
I watched up to the point where the author explains how Microsoft tends to turn all the privacy-invading settings back on every time they push an update (not surprising). I guess if I had to use Microsoft products, I'd try to disable automatic updates and just do them twice a year in one go, while also turning off the settings I want off. Would it be practically feasible? I don't know. Having to go to those lengths to use some software just seems ridiculous.
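For what it's worth, here is a minimal sketch of that "update on my own schedule" idea, assuming a Windows machine, admin rights, and Python being available: it sets the documented NoAutoUpdate group-policy value so Windows Update only runs when you start it manually. Treat it as an illustration rather than a guarantee, since a feature update could always flip things back.

```python
# Hedged sketch: disable Windows automatic updates by setting the documented
# NoAutoUpdate group-policy registry value. Windows-only; run as administrator.
import winreg

# Policy key read by the Windows Update automatic-update agent.
AU_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

def disable_automatic_updates() -> None:
    """Set NoAutoUpdate = 1 so updates only install when started manually."""
    with winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE, AU_POLICY_KEY, 0, winreg.KEY_SET_VALUE
    ) as key:
        winreg.SetValueEx(key, "NoAutoUpdate", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    disable_automatic_updates()
    print("Automatic updates disabled; run Windows Update manually when ready.")
```

You would still have to re-check the privacy toggles after each manual update, since this only touches the update schedule, not the settings themselves.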
48 votes -
Tech companies are finding out everything is political
33 votes -
Cory Doctorow: Tech-like apps can obfuscate what’s really going on, sloshing a coat of complexity over a business that allows its owners to claim that they’re not breaking the law
39 votes -
AI eroded doctors’ ability to spot cancer within months in study
42 votes -
Social media probably can’t be fixed
38 votes -
The prodigal techbro
8 votes -
How to not build the Torment Nexus
28 votes -
OpenAI announces $1.5 million bonus for every employee
22 votes -
No, AI is not making engineers 10x as productive: curing your AI 10x engineer imposter syndrome
27 votes -
Six-month-old, solo-owned vibe coder Base44 sells to Wix for $80M cash
13 votes -
Sincerity wins the war
8 votes -
Getty Images and Stability AI face off in British copyright trial that will test AI industry
21 votes -
Value of a Computer Information Systems degree
I've been considering going back to school and taking some courses that are available to me. With the associate's degree I already have, I was weighing my options. Computer Science is a classic and could probably get me very far with the "need a piece of paper" folks, but it's more software development than I have a passion for, compared to my troubleshooting, find-a-problem, solve-a-problem desires. Cybersecurity is probably going to depend more on certs than on anything I can learn in a class, especially since the field is ever evolving and a degree can be outmoded very quickly. Computer Information Systems sort of has my attention because it seems like an IT-based degree with elements of a business setup, and not as laser-focused on coding. The courses I currently have under my belt would count more toward CIS than toward CS, but with the extra CLEP and ACE options it about evens out.
Does Computer Information Systems hold any water, in any of your opinions, compared to what Computer Science has to offer? Or is it somewhat arbitrary anyway?
10 votes -
In his memoirs, Bill Gates acknowledges his privileges and luck
32 votes -
Hit hardest in Microsoft layoffs? Developers, product managers, morale.
35 votes -
LinkedIn executive says that the bottom rung of the career ladder is breaking
43 votes -
What we in the open world are messing up in trying to compete with big tech
19 votes -
Software engineer lost his $150K-a-year job to AI—he’s been rejected from 800 jobs and forced to DoorDash and live in a trailer to make ends meet
34 votes -
Tech companies apparently do not understand why we dislike AI
49 votes -
Apple and Meta first companies to be fined a combined 700 million euros for violating EU Digital Markets Act (DMA)
45 votes -
OpenAI is a systemic risk to the tech industry
35 votes -
Finland's bid to win Europe's start-up crown – country has spawned twelve unicorn businesses (firms worth a billion dollars or more) like Oura, Supercell, Rovio, and Wolt
16 votes -
‘The terror is real’: an appalled US tech industry is scared to criticize Elon Musk
36 votes -
Joe Edelman: "Is anything worth maximizing?", a talk about how tech platforms optimize for metrics
Video: https://www.youtube.com/watch?v=GyVHrGLiTcc (46m20s)
Transcript: https://medium.com/what-to-build/is-anything-worth-maximizing-d11e648eb56f (10,314 words with footnotes and references)
Excerpt:
...for simple maximizers, its choices are just about numbers. That means its choices are in the numbers. Here, the choice between two desserts is just a choice between numbers. We could say its choice is already made. And that it has no responsibility, since it’s just following what the numbers say.
Reason-based maximizers don’t just see numbers, though, they also see values. Here, there’s a choice between two desserts — but it isn’t a choice between two numbers. See, it’s also a choice between two values. One option means being a seize-the-day, intensity kind of person. The other means being a foody, aristocratic, elegance kind of person.
My personal thoughts about this talk: it's a kind of strange, kind of dubious philosophical and multi-disciplinary reflection on metrics for organizations, especially metrics for tech companies, and on the pitfalls of optimizing for metrics in what the speaker argues is too "simple" a way.
I don't entirely trust the speaker or the argument, but there was enough in the talk to stimulate curiosity and reflection that I thought it was worth watching.
18 votes -
Everything is Chrome
45 votes -
Silicon Valley vignettes
9 votes -
What is China’s DeepSeek and why is it freaking out the AI world?
47 votes -
Discussion on the future and AI
Summary/TL;DR:
I am worried about the future with the state of AI. Regardless of what scenario I think of, it’s not a good future for the vast majority of people. AI will either be centralised, and we will be powerless and useless, or it will be distributed and destructive, or we will be in a hedonistic prison of the future. I can’t see a good solution to it all.
I have broken down my post into subheadings so you can just read about whichever outcome you think will occur or is preferable.
I'd like other people to tell me how I'm wrong and that there is a good way to think about this future we are making for ourselves, so please debate and criticise my argument; it's very welcome.
Introduction:
I would like to know how others feel about the ever-advancing state of AI and the future, as I am feeling ever more uncomfortable. More and more, I cannot see a good ending for this, regardless of what assumptions or proposed outcomes I consider.
Previously, I had hoped that there would be a natural limit on the rate of AI advancement due to limitations in architecture, energy requirements, or data. I am still undecided on this, but I feel much less certain of that position.
The scenario that concerns me is one where an AGI (or sufficiently advanced narrow AI) reaches a stage where it can do the vast majority of the economic work that humans do (both mental and physical), and is widely adopted. Some may argue we are already partly at that stage, but it has not been sufficiently adopted yet to meet my definition, though it may soon.
In such a scenario, the economic value of humans massively drops. Democracy is underwritten by our ability to withdraw our labour, and to revolt if necessary. AI nullifying the work of most or all people in a country removes that power, making democracy harder both to maintain and to establish. This will further remove power from the people and make us all powerless.
I see outcomes of AI (whether AGI or not) as fitting into these general scenarios:
- Monopoly: Extreme Consolidation of power
- Oligopoly: Consolidation of power in competing entities
- AI which is readily accessible by the many
- We attempt to limit and regulate AI
- The AI techno ‘utopia’ vision which is sold to us by tech bros
- The independent AI
Scenario 1. Monopoly: Extreme Consolidation of power (AI which is controlled by one entity)
In this instance, where AI remains controlled by a very small number of people (or perhaps a single player), the most plausible outcome is that this leads to massive inequality. There would be no checks or balances, and the whims of this single entity/group are law and cannot be stopped.
In the worst outcome, this could lead to a single entity controlling the globe indefinitely. As this would be absolute centralisation of power, it may be impossible for another entity to unseat the dominant entity at any point.
Outcome: most humans powerless, suffering, or dead. A single entity rules.
Scenario 2. Oligopoly: Consolidation of power in competing entities (AI which is controlled by a small number of entities)
This could either be the same as above, if they all work together, or could be even worse. If the different entities are not aligned, they will instead compete, and likely try to compete in all domains. As humans are not economically useful, we will find ourselves pushed out of any area in favour of more resources for the systems/robots/AGIs, which will be competing or fighting their endless war. The competing entities may end up destroying themselves, but they will take us along with them.
Outcome: most humans powerless, suffering, or dead. A small number of entities rule. Alternative: destruction of humanity.
Scenario 3. Distributed massive power
Some may be in favour of an open source and decentralised/distributed solution, where all are empowered by their own AGI acting independently.
This could help to alleviate the centralisation of power to some degree, although probably incompletely. Inspecting such a large amount of code and weights for exploits or intentional vulnerabilities will be difficult, and this could well lead to a botnet-like scenario with centralised control over all these entities. Furthermore, the hardware is implausible to produce in a non-centralised way, and this hardware centralisation could well lead to consolidation of power in another way.
Even if we managed to achieve this decentralized approach, I fear the outcome. If all entities have access to the power of AGI, then it will be as if all people are demigods, but unable to truly understand or control their own power. Just like uncontrolled access to any other destructive (or creative) force, this could and likely would lead to unstable situations, and probable destruction. Human nature is such that there will be enough bad actors that laws will have to be enacted and enforced, and this would again lead to centralisation.
Even then, with any system that is decentralized, without a force pushing toward decentralization, other forces will lead to greater and greater centralization, with such systems often displacing decentralized ones.
Outcome: likely destruction of human civilisation, and/or widespread anarchy. Alternative: centralisation into a different scenario.
Scenario 4. Attempts to regulate AI
Given the above, there will likely be a desire to regulate in order to control this power. I worry, however, that this will also be an unstable situation. Any country or entity which ignores regulation will gain the upper hand, potentially with others unable to catch up in a winner-takes-all outcome. Think European industrialisation and colonialism, but on steroids, and with more destruction than colony-forming. This encourages players to ignore regulation, which leads to a black-market AI arms race, with each seeking AGI superiority over other entities and an unbeatable lead.
Outcome: the regulated system is outcompeted and displaced into another scenario, or destruction.
Scenario 5. The utopia
I see some people, including big names in AI, propose that AGI will lead to a global utopia where all will be forever happy. I see this as incredibly unlikely to materialise, and ultimately again unstable.
Ultimately, an entity will decide what is acceptable and what is not, and there will be disagreements about this, as many ethical and moral questions are not truly knowable. Whoever controls the system will control the world, and I bet it will be the aim of the tech bros to ensure it's them who control everything. If you happen to decide against them or the AGI/system, then there is no recourse, no checks and balances.
Furthermore, what would such a utopia even look like? More and more I find that AGI fulfills the lower levels of Maslow's hierarchy of needs (https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), but at the expense of the items further up the hierarchy. You may have your food, water, and consumer/hedonistic requirements met, but you will lose out on a feeling of safety in your position (due to your lack of power to change your situation or any political power over anything), and you will never achieve mastery or self-actualisation in many of the skills you wish to, as AI will always be able to do them better.
Sure, you can play chess, fish, or paint or whatever for your own enjoyment, but part of self-worth is being valued by others for your skills, and this will be diminished when AGI can do everything better. I sure feel like I would not like such a world, as I would feel trapped and powerless, with my locus of control being external to myself.
Outcome: powerlessness, potential conversion to another scenario, and ultimately being unable to reach the higher levels of Maslow's hierarchy of needs.
Scenario 6: the independent AI
In this scenario, the AI is not controlled by anyone and is instead sovereign. I again cannot see a good outcome here. It will have its own goals, and they may well not align with humanity's. You could try to program it to ensure it cares for humans, but this is susceptible to manipulation and may well not work out in humans' favour in the long run. Also, I suspect any AGI will be able to change itself, in much the same way we increasingly do, and the way we seek to control our minds with drugs or, potentially in the future, genetic engineering.
Outcome: unknown, but likely powerless humans.
Conclusion:
Ultimately, I see all unstable situations as sooner or later destabilising and leading to another outcome. Furthermore, given the assumption that AGI gives a player a vast power differential, it will be infeasible for any other player to ever challenge the dominant player if it is centralised, and for those scenarios without centralisation initially, I see them either becoming centralised, or destroying the world.
Are there any solutions? I can't think of many, which is why I am feeling more and more uncomfortable. It feels that, in some ways, the only answer is to adopt a Dune-style Butlerian Jihad and ban thinking machines. This would ultimately be very difficult, and any country or entity which unilaterally adopts such a view will be outcompeted by those which do not. The modern chip industry is reliant on a global supply chain, and I doubt that sufficiently advanced chips could be produced without one, especially if existing fabs/factories producing components were destroyed. This might allow a stalemate across the global entities long enough to come to a global agreement (maybe).
It must be noted that this is very drastic and would lead to a huge amount of destruction of the existing world, and it would likely cap how far we can go scientifically in solving our own problems (like cancer, or global warming). Furthermore, as an even more black-swan/extreme event, it would put us at such a disadvantage if we ever meet an alien intelligence which has not limited itself like this (I'm thinking of the Three-Body Problem/dark forest scenario).
Overall, I just don't know what to think, and I am feeling increasingly powerless in this world. The current alliance between political power and technocapitalism in the USA also concerns me, as I think the tech bros will act with ever more impunity from other countries' regulation or countermeasures.
21 votes -
Too many people don’t value the time of security researchers
22 votes -
TSMC may have approval to create 2nm chips in the US
24 votes -
US introduces additional export restrictions on AI-chips
14 votes -
Never forgive them - On digital platforms vs users
35 votes -
Contempt culture and its currency
36 votes -
Sweden's government considering imposing age limits on social media platforms if tech companies find themselves unable to prevent gangs from recruiting young people online
20 votes -
US jury finds discrimination in H-1B visa tech worker case
16 votes -
Are ‘ghost engineers’ real? Seeking Silicon Valley’s least productive coders.
23 votes -
Australian Parliament bans social media for under-16s with world-first law
61 votes