-
Computer chip with built-in human brain tissue gets military funding
39 votes -
‘Not for machines to harvest’: Data revolts break out against AI
40 votes -
ChatGPT can be broken by entering these strange words, and nobody is sure why
56 votes -
Why transformative artificial intelligence is really, really hard to achieve
10 votes -
Apple tests ‘Apple GPT,’ develops generative AI tools to catch OpenAI
17 votes -
AI often mangles African languages. A network of thousands of coders and researchers is working to develop translation tools that understand their native languages
17 votes -
How to use ChatGPT to ruin your legal career
28 votes -
The shady world of Brave selling copyrighted data for AI training
59 votes -
Why AI detectors think the US Constitution was written by AI
35 votes -
AI does not exist but it will ruin everything anyway
32 votes -
The workers at the frontlines of the AI revolution
12 votes -
Inside the white-hot center of AI doomerism: Anthropic
8 votes -
If you wish to make an apple pie, you must first become dictator of the universe
11 votes -
US Federal Trade Commission opens investigation into OpenAI over technology’s potential harms
17 votes -
Portland radio station now has an AI DJ as a midday host
14 votes -
Two authors file a lawsuit against OpenAI, alleging that ChatGPT unlawfully ‘ingested’ their books
36 votes -
Introducing Superalignment
35 votes -
How we could stumble into AI catastrophe
12 votes -
GPT-4 API general availability and deprecation of older models in the Completions API
11 votes -
The actual danger from AI is mostly not what is getting talked about
46 votes -
Inflection AI develops supercomputer equipped with 22,000 NVIDIA H100 AI GPUs
7 votes -
A project that transforms QR codes into functional pieces of generative art
21 votes -
A preview of Humane's AI Pin - TED Talk by Imran Chaudhri
12 votes -
America's first law regulating AI bias in hiring takes effect this week
13 votes -
Google updates its privacy policy to clarify it can use public data for training AI models
44 votes -
“We have built a giant treadmill that we can’t get off”: Sci-fi author Ted Chiang on how to best think about AI
25 votes -
Midjourney version 5.2 adds support for "zoom out" feature
30 votes -
A social network for AI
12 votes -
Boring Report: An app that aims to remove sensationalism from the news and make it boring to read, by utilizing the power of advanced AI language models
66 votes -
A monocle display with open-source hardware from Brilliant Labs
4 votes -
AI camera inspired by star-nosed mole snaps "photos" without taking photos
11 votes -
Google warns its own employees: Do not use code generated by Bard
34 votes -
Why Haidt and Schmidt’s proposed social media reforms are insufficient
4 votes -
Military AI’s next frontier: Your work computer
16 votes -
Inside the AI factory: The humans that make tech seem human
14 votes -
Stack Overflow moderators are striking to stop garbage AI content from flooding the site
45 votes -
Are any AI virtual assistants actually useful?
AI virtual assistants are on the rise, and logically it seems like I could use one to support productivity, small business, neurodivergent accommodations, etc. But when reviewing what's out there, they don't seem super useful.
Otter seems the most useful because it can attend web meetings and record, contextualize screenshares, and sift the transcripts into action items, but it can't attend all webinar services and I'm not sure I can log into it on a corporate platform. Others seem to be able to check a calendar or make a reminder, but nothing I would pay for.
Some use cases might be gathering basic info from clients, scheduling meetings (Calendly can handle this), blocking time for my task lists, writing basic email drafts, adding up expenses each month, sending reminders to customers, etc.
All of this could happen with various tools, but it seems like good territory for an AI virtual assistant.
So, have you found any AI VAs that would be worth paying for? Anything that saves time or makes life easier?
23 votes -
No, GPT4 can’t ace MIT - a critical analysis of “Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models”
17 votes -
Why former Salesforce engineers want to take on Google
6 votes -
Let us show you how GPT works
55 votes -
Anyone can Photoshop now, thanks to AI’s latest leap
12 votes -
Ask_jesus, a Twitch channel wherein an AI-generated Jesus answers questions asked in chat
29 votes -
Reddit CEO praises Elon Musk’s cost-cutting at Twitter, as protests continue to rock Reddit
105 votes -
Anyone know of research using GPTs for non-language tasks
I've been a computer scientist in the field of AI for almost 15 years. Much of my time has been devoted to classical AI: things like planning, reasoning, clustering, induction, logic, etc. This has included (though it has rarely been my focus) machine learning tasks (lots of Case-Based Reasoning). For whatever reason, though, the deep learning trend never really interested me until recently. It felt like researchers were claiming huge AI advancements when all they had really found was an impressive way to store learned data (I know this is an understatement).
Over time my opinion has shifted, and I have been blown away by the boom happening with transformers (GPTs specifically) and large language models. Open source projects are creating models comparable to OpenAI's behemoths with far less training data and far fewer parameters, which is making me take another look at GPTs.
What I find surprising, though, is that they seem to have been applied almost exclusively to language. As far as I understand the inputs/outputs, the language is tokenized before prediction anyway. Why does it seem like (or rather, why does the community act like) the technology can only be used for LLMs?
For example, what about a planning domain? You can specify actions in a domain in such a manner that tokenization would be trivial, with far fewer tokens than raw text. Similarly, you could generate a near-infinite amount of training data via other planning algorithms or simulations. Is there some obvious flaw I'm not seeing? Other examples might include behavior and/or state prediction.
I'm not saying that, out of the box, a standard GPT architecture is guaranteed to succeed at plan learning/planning... but it seems like it should be viable, and no one is trying?
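The tokenization idea in the question above can be sketched in a few lines. This is a purely hypothetical illustration, not code from any existing project: the blocks-world action names and object symbols are made up, and a real setup would also need state and goal encodings plus an actual model. The point is just that a symbolic domain yields a tiny, fixed vocabulary compared to byte-pair tokenization over raw text.

```python
# Hypothetical sketch: tokenizing plan traces from a STRIPS-like
# domain for a GPT-style sequence model. All symbols below are invented.

ACTIONS = ["pickup", "putdown", "stack", "unstack"]
OBJECTS = ["b1", "b2", "b3"]
SPECIALS = ["<pad>", "<bos>", "<eos>"]

# One integer id per symbol: a vocabulary of 10 tokens here, versus
# tens of thousands for a typical subword tokenizer over English text.
VOCAB = {tok: i for i, tok in enumerate(SPECIALS + ACTIONS + OBJECTS)}
INV_VOCAB = {i: tok for tok, i in VOCAB.items()}

def encode_plan(plan):
    """Turn a plan (a list of (action, *args) tuples) into token ids."""
    ids = [VOCAB["<bos>"]]
    for step in plan:
        ids.extend(VOCAB[sym] for sym in step)
    ids.append(VOCAB["<eos>"])
    return ids

def decode_ids(ids):
    """Map token ids back to their symbols."""
    return [INV_VOCAB[i] for i in ids]

plan = [("pickup", "b1"), ("stack", "b1", "b2")]
ids = encode_plan(plan)
print(decode_ids(ids))  # ['<bos>', 'pickup', 'b1', 'stack', 'b1', 'b2', '<eos>']
```

Plans generated by a classical planner or simulator could be encoded this way and fed to a standard decoder-only transformer as next-token-prediction training data, exactly as with text.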
9 votes -
Google’s new AI-powered search tools are not coming for anyone’s job
5 votes -
Google parent Alphabet tells workers to be wary of AI chatbots
5 votes -
Gmail AI can now write emails for you on your phone: how it works
11 votes -
Microsoft launched Bing chatbot despite OpenAI warning it wasn’t ready
16 votes -
Europeans take a major step toward regulating AI
19 votes