How do you feel about AI and the future?
Off the back of all the chatter and conspiracy theories being thrown around because of the OpenAI Boardroom drama, I thought I'd ask how people are feeling about AI in general.
For those of you with your head in the sand, have you heard about AI or interacted with anything knowingly?
Those of you surfing the AI world right now, what are you looking forward to AI doing and are you fearful of where it's leading?
I'll start by saying, as I've said before, that I'm looking forward to living in a world where the start of "I, Robot" is a reality. Robotics and generative AI are going to be amazing, as shown in its infancy by Boston Dynamics. If we stick to the way things are going, we could end up very Star Trek with our very own "Data" androids.
My fear is that the human race isn't ready, and we're too stupid as a whole, so we end up more with T-1s actually being made by the military, and at some point AI becoming cognitive and conscious.
So come on, folks of Tildes, where do you stand?
I'm Senior Leadership in Data Engineering.
I am absolutely exhausted having to explain to digitally / data illiterate people how hard it is to make it 'automate all the boring stuff' when they can't get their data infrastructure sorted. The previous Director of Data & CEO couldn't wrap their heads around "You can't have an LLM access 72 different, endlessly changing spreadsheets and then spit out perfect marketing" at all, and it was boring / exhausting every single month those meetings came round.
Industry still hasn't got its shit together to centralise, govern and maintain its data infrastructure, mostly because it's complicated, costs too much and there is SO MUCH FUD thrown about by consultant types looking for their next leadership gig. Doesn't help that the data 'community' is like a fucking cult at the moment because of it.
It's part of a cycle of products that has been going on for 15 years. First it was virtualised databases, then data lakes, then predictive analytics, then cloud data warehouses, then cloud lakehouses, then generative AI, then LLMs... it's just tech hype, and there are so few actual implementations for the layman that it's useless unless you've got a digitally literate workforce to use it!
There's some great stuff around Copilot and programmers not having to constantly throw knowledge at the wall to work. But my word, LLMs need some serious digital / data literacy lessons before people go out and use them.
My biggest social worry? These tools are owned by corporations. They're not going to share the productivity savings unless you buy from them WHEN they arrive. We need leaders at the top reining in the technology and ensuring that it gets democratised, not centralised by corporations who will simply switch 5000 roles over to AI and then harvest the profits upwards. UBI won't save us from ourselves when we can't feel human through creativity anymore.
People still get ERP wrong. And that's been around since the early '90s.
If you can't comprehend why it's a good idea to model (and adapt) your workflows in an ERP system rather than in spreadsheets and informal meetings, there's nothing more that higher-tech, modern solutions will offer you other than a shinier interface and buzzwords.
"Low Code" is the next evolution of the spreadsheet. Beware.
Low code tools can be awesome (I am unapologetically an Alteryx fanboy).
But businesses would rather buy a tool and bend it around whatever insane process they have than buy a tool and quickly use BAs to change the process for the better. It's ridiculous.
This is the #1 reason for all the spectacular SAP transition failures that cost companies tens of millions or even more.
The First Rule of SAP is that your company changes its processes to fit the SAP model. That's it.
If you try to customise SAP to fit your process, you'll eventually run into a very expensive wall.
Yeah. Place I've recently come from has just about EVERYTHING operational flowing into SAP in an insane spiderweb. The arguments I used to get into around micro-architecture and data provisioning were insane.
New place is moving away from SAP as a data repo and onto a proper cloud warehouse, though they don't know how to do that yet. That's my job, and I thoroughly enjoy it.
I can say from experience at work that one of the things LLMs are incredibly inconsistent about is formatting their outputs properly.
This has been a common problem, but it seems like the techniques for fixing this are known and starting to become more available. The decoder can be changed to never generate tokens that are a syntax error in whatever format you want to hard-code.
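As a toy illustration of that decoder-masking idea (this is just the shape of it, not any real library's API):

```python
def constrained_decode(steps, is_valid_prefix, prefix=""):
    """steps: per-step lists of (token, score) pairs, standing in for model
    logits; is_valid_prefix: predicate over the partial output so far."""
    for candidates in steps:
        ranked = sorted(candidates, key=lambda ts: -ts[1])
        # mask: keep only tokens that leave the output syntactically valid
        allowed = [tok for tok, _ in ranked if is_valid_prefix(prefix + tok)]
        if not allowed:
            break  # a real decoder would backtrack or widen a beam here
        prefix += allowed[0]
    return prefix

# e.g. forcing the output to stay digits-only:
print(constrained_decode(
    [[("x", 0.95), ("7", 0.9)], [("!", 0.9), ("2", 0.8)]],
    lambda s: s.isdigit() or s == ""))  # -> "72"
```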
For OpenAI you can set JSON mode. That's pretty limited (it doesn't require a specific schema) but I'd guess more flexible ways of doing it will become available eventually.
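For what it's worth, a minimal sketch with the current OpenAI Python client, assuming a model that supports JSON mode (the model name here is just an example):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4-1106-preview",  # example; JSON mode is model-dependent
    response_format={"type": "json_object"},
    messages=[
        # JSON mode also requires the word "JSON" to appear in the prompt
        {"role": "system", "content": "Reply with a JSON object."},
        {"role": "user", "content": 'List three colours as {"colours": [...]}.'},
    ],
)
print(resp.choices[0].message.content)  # valid JSON; the schema is still on you
```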
lmao the issues we're encountering at work have been with invalid json... I'll have to check whether we're using json mode or not.
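In the meantime, the blunt workaround we've landed on is validate-and-retry. A rough sketch, where `ask` is a hypothetical stand-in for whatever client call you're using:

```python
import json

def ask_for_json(prompt, ask, max_tries=3):
    """ask: hypothetical callable that sends a prompt to the model and
    returns the raw text of its reply."""
    for _ in range(max_tries):
        text = ask(prompt)
        try:
            return json.loads(text)  # success: hand back parsed data
        except json.JSONDecodeError as err:
            # feed the parse error back to the model and try again
            prompt += f"\nThat was invalid JSON ({err}). Reply with only valid JSON."
    raise ValueError("model never produced valid JSON")
```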
As bad as it sounds, I think the best long-term outcome is if this technology arrives quickly and displaces an enormous percentage of the workforce. A slow burn will allow the companies that control these tools to tighten their grip on every aspect of our lives that they already touch, and to pay off politicians who might try to genuinely regulate things for the sake of the broader public.
I think a catastrophic shift in the workforce and jobs available to people is the only way regulators will genuinely step in, and at that point I hope they go after the root cause (centralised power in the hands of unelected companies, as you mentioned) instead of just band-aid solutions.
I just don't see that happening though.
A big bang is just going to have the current leadership do what the UK did during Covid and stick vast sums of cash into the economy to stop everything imploding. We'd need systemic change and none of the current lot know what that means, let alone what it looks like.
I was reading Bertrand Russell earlier this week and he wrote how we're barrelling towards the system we're in now... And he wrote it in the 40s.
We need to be rid of bullshit jobs and bullshit hours, and we need an acceptance that being alive and enjoying life is what we're here for. But the upper and wealthy classes despise the idea of "poor" people having leisure time. So they don't know how to change the machine to get away from their own expectations and needs.
I view this latest AI boom as kind of like being given a sports car despite having irresponsible driving habits. It's the most fun we've had in a long time, but it's a gift that we should never have received at this stage in our development.
We talk about aligning AI to human goals when we can't even agree on what those goals are.
We talk about how AI will free us from menial work when its datasets are built on the backs of underpaid workers in the developing world.
We talk about it empowering individuals when it is taking computing power away from individuals and placing it in the hands of those with the wealth to own server farms.
I don't know if we can ever reconcile these contradictions.
I'm not sure about the "gift that we should never have received at this stage in our development" angle. I think we will inevitably need to learn about how to use AI from experience, just like we learned everything else.
(One of these experiences is having ChatGPT lie to you. Hopefully it's about something inconsequential.)
Sure, and like a 16yo with a fancy sports car, many will inevitably crash it. Let's just hope the crash doesn't cost them or others their lives in the process of learning.
Fortunately, ChatGPT just outputs text, so in common use, it's much less dangerous than a sports car. Hooking it up to something dangerous is certainly possible and will inevitably happen, but it takes some doing.
I don't think we will have to wait too long for that, considering how much Microsoft is aiming to integrate GPT-4 with Windows 11. The fact that you can already ask Bing Chat to do things like change Edge from light mode to dark mode is a sign of things to come.
I'm too young to remember the 90s, but from what I have read, it seems like we're entering a similar era of computing, where hardware is constantly going out of date due to rapid development, and security is a secondary consideration to launching products ahead of your competitors to try and claim the top spot in unstable market sectors.
Yes, that's a step towards doing something dangerous, but most laptops aren't hooked up to anything safety-related. I guess we'll see what commands they enable? I'd expect it to be nerfed.
Meanwhile, command-line users can install the llm command, which can pipe AI output to whatever Unix command you want. I find it useful. I don't recommend piping it to a shell. That's asking for trouble.
Even if you did, it probably won't delete your hard drive, though? I've occasionally made a mistake that resulted in executing the wrong file as a bash script, and it would result in a bunch of error messages, not any actual damage.
You are probably right, I just don't like the idea of something with fuzzy output like an LLM hooked directly into system functionality. Though I suspect that Windows Server may eventually follow, even if people are not excited about the interface change. Microsoft has a history of being overeager with UI changes that create problems, such as the Windows 8 Metro interface and the Settings app / Control Panel split.
That is an interesting library. How would you say it compares to using something like langchain, or using the API options of llama.cpp and just parsing the JSON to pass to a shell?
Although it works as a Python library too, it’s mostly designed for interactive command line usage, like writing Unix pipelines. I haven’t used langchain so I don’t know how it compares.
The llm command supports plugins for connecting it to local or remote llm services, but I’ve only used it with GPT-4.
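If I'm remembering the library's interface right, the Python side looks roughly like this (which model names work depends on the plugins you have installed):

```python
import llm  # https://llm.datasette.io/ - the same tool as the CLI

model = llm.get_model("gpt-4")  # plugins expose other remote or local models
response = model.prompt("Summarise what a Unix pipeline is, in one sentence.")
print(response.text())
```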
I hate that it's called AI. I'm a science fiction nerd. Calling this current tech AI is confusing people who think that it's smarter than it is.
I don't really find it that useful.
The name 'AI' has been used this way in the academic literature for decades.
To go along with /u/DefinitelyNotAFae though, AI is supposed to mean true artificial cognition. What we call AI at the moment is still, for the most part, a natural-language search engine and response generator.
I know. It still annoys me, and the disconnect between colloquial understanding and academic use feels deliberately taken advantage of by the people selling their products.
I dislike all of it.
It's not particularly helpful in my work. I've played around with it, but it hasn't proven useful.
It's been called a "neural net" for decades, AI is a broad term and if there's one thing I'm reasonably sure about, it's that academics are all nitpicky pedants.
I'm worried for all the obvious reasons. Most of those reasons have to do with capitalism and the unfettered corruption in government (talking about the US mostly) which is now at the level of open bribes of supreme court justices and unlimited dark money funding misinformation about everything.
It won't be regulated properly because most lawmakers don't understand technology and most of the rest are easily bribed. So expect more and more deepfakes and psychological manipulation of the masses. This is besides all the economic disruption it will cause over the next few decades as technical and creative people are replaced by AI.
Whether or not it gets dangerous in a "terminator/skynet" way, it will be used as a tool by the rich and powerful to maintain power.
CuriosityStream has a current "Connections" series by James Burke. There is one episode where he talks about AI. (Mostly he talks about past inventions that led to it. Actually, the connections he gives are very tenuous, but that's a separate issue from the one I'm making.)
Anyway, when he gets to the part about what AI will do, he mostly shows positive things, such as the personal tutors people will have. He also talks about how AI may be given rights some day. But like with most of his connections, he skips the part about how it will be misused by billionaires and other bad actors.
Your post and another also reflect how I feel about AI and honestly how I've started to feel about computing in general. There are people fighting the fight but as usual, once the world understood computers and the internet and saw the power/money in embracing it, they have destroyed many beautiful parts of it. Not saying the 90's web was perfect but I think I liked the actual anonymity of it and that new tech was generally aimed to make things better.
As the black box of what our technology is doing grows ever larger and our population grows ever more distant from understanding it, the ability of black-box tech like AI to shape our reality grows exponentially, in ways that are easily manipulated by a small group of people. Honestly, I guess the truth is we've been manipulated by media before, but it won't take very long until our AI results are littered with advertisements and political opinion.
I think of the amount of effort and money spent to make us angrily and shockingly click on things and do things on the web right now, and am sad at what it could have been, had the motivation behind the designs been to objectively improve our lives rather than drive metrics for cash. I have no doubt AI results will be pushed that way, and now we can't even tell if the part of the data that came back originated from some SEO-ridden garbage rag of a website, like we can with regular search.
The other fear is that our websites will become useless garbage filled up with AI speak. It is already hard enough to figure out what is a bot or an entity paid to put up garbage links/reviews/whatever, and AI is just going to make it even harder to filter out. In the way that spam destroyed our email, AI is going to accelerate the destruction of things like this site, where actual human voices echo their opinions and have discussions.
I mean in the end I have loved the internet and technology and it was the magic in the world for me growing up, but I don't know that we can stem the tide of greed destroying it all for much longer.
For myself, I think the ease of creating spam and adverts and trash on the internet with AI will likely be matched by the ease of using AI to filter or block or remove the same, but it’s yet another layer to the constant arms race that’s been going on forever. Which also means that the people not savvy enough to wield an “AI ad-blocker” in their favour will just be subject to the brunt of it, so my counterargument here still isn’t sunshine and roses by any measure.
It's worth noting that the Republicans are far, far, far more guilty of this than the Democrats.
I like the potential of current LLMs, but I also don't think they are at the level some people like to believe. That said, I do think they are already really useful for various use cases.
For example, I like working on code projects, but not for the sake of code itself entirely. Often I get fed up with the intricate details, or when I simply want to achieve something and most of the time is being spent figuring out what ends up being mundane stuff.
Lately, I have been using ChatGPT and the OpenAI API to basically help me specifically in that regard. I used it to help me round out the design, brainstorm about approaches, tune database requirements, etc. I basically got to the point where I had a proof of concept for all the separate components in a very short amount of time. Then for the implementation it was a similar story. I already had a much more solid idea (technical and functional design, if you will) of how I wanted to implement things than I normally do. And, for most of the things where I would get slowed down normally, I could just turn to the chat. Then by just telling it what part I had trouble with, it would get me back on track in no time.
Having said all that, I couldn't have used it in such a way without any knowledge of programming. Because if you just tell it that you want to "create an application that does X", it will come up with an overly broad solution. All the questions and problems I presented to it were posed from a position where I already knew the language and the platform, and had a general sense of the requirements.
As an extension on that I have also been exploring something else, details here: https://tildes.net/~comp/1c5e/what_programming_technical_projects_have_you_been_working_on#comment-b8bm
tl;dr. In the current form, it can already be a very useful tool. But, it still is a tool and like any tool it isn't useful for everything and you need to know how to use it.
No. No point whatsoever. Not for the things I'm interested in doing. A friend was doing research into garbage collection and asked it about some of the things she was working on, on a lark; the results were laughably bad. Somebody else asked it to explain a famous piece of obscure code; the result was riddled with errors (to its credit, it contained a surprising number of true statements). Somebody else uses it for prose generation when worldbuilding; I'll link my previously expressed thoughts on that.
I have no doubt that it improves the productivity of people who do useless things like writing advertising copy and boilerplate code. But wouldn't it be much better if nobody did those things?
The fashion in which they are produced is also extremely rude.
Humans inherently look for the easier way. It's one of the qualities that's boosted us to the top of the food chain on our planet. A lot of the time, across our civilization, the "easier way" was some sort of process improvement, like the lever to multiply force, or the wheel to create greater efficiency with pulling.
Now technological efficiencies are upon us. IT people, from the help desk all the way back into the most academic labs, look for automation. They don't want to do the same stuff repeatedly if there's a way for the computer to just do it for them. Spreadsheets replaced literal rooms of people doing math. The atomic bomb project was staffed by these people, all sitting there running calculations that decades later you'd just have the computer do for you. Companies used to hire rooms of these people to run accounting and such.
Spreadsheets were a fundamental change in How Things Get Done. AI is the same thing. Probably even more of one, more fundamental, once they get further into it. AI is moving into "fuzzy logic" areas where you can ask it human-like questions and get human-like answers back that are backed up with data points you can trace through, the same as you'd check a human's work.
A lot of copy has been written about people who are upset that "their thing", be it code or art or text or whatever, is on the cusp of being threatened by AI that can do it too. Humans are adaptable, and just like everything that came before that "killed a job category", humans will adapt to this too. But the genie doesn't go back in the bottle, and most people tend to admit that once they get over the yelling and tantrum they inevitably pitch at first.
For a lot of casual, ordinary, normal people, I think one of the biggest areas they're going to see and embrace it in is search. Because if you look through threads and discussions, a lot of folks seem to think GPT and other LLMs have "the information of the universe" at their fingertips, and expect to be able to ask them those complicated human-like questions, like they would if they had an expert on the subject at hand there to ask, and get a correct answer back.
Most people don't seem to either know how to, or want to, skim and parse through results they might get when they punch a query into Google. They expect the Google Box, the summary answer, to be the be-all end-all answer. Again, the same as if a human expert was there. Right now, LLMs are frozen in time with their datasets, and haven't dialed really deeply into finding the right answers.
Once an AI "front-end" can be wired up to the Internet, at an affordable consumer level, similar to the way IBM wired Watson up to play Jeopardy, that's what a lot of people are going to fall in love with. They want "the computer" to be a be-all, end-all know-it-all for them.
They want to be able to run a lot of the thoughts that come to mind through their phone to answer. Not just stuff like "when did that album release" or "what was the name of that movie with that guy where they punched a shark", but fuzzier questions. They want the computer to do all the skimming and parsing for them, and just serve up the answer, Star Trek style. Whether it's straight from the encyclopedia or is something more akin to a subject-matter-expert's response.
And really, it seems like the teams working on AI want that too. So it's definitely gonna be here. At the rate they seem to be charging forward with the technology, it seems like it's just a matter of a few years, probably less than ten, before people will be able to ask ridiculous stuff like "Computer, what could I go do that's fun and costs less than (X) and is close to me now" and have the computer, already "knowing" that person's general likes and dislikes, already "knowing" that person's account balances and available travel methods, come back with specific events, complete with times and costs and everything.
And that's just a random example. Engineers and scientists will push that same logic, used for stupid common purposes, to aid them in doing real engineering and science. Just like out of Star Trek. "Computer, can we redesign this component to withstand (X) and fit within (Y) and be less costly than (Z)?"
That's what's coming. It might already be here in a few labs. Combine all this AI code research with Moore's Law and related axioms for how computer hardware continues to advance in processing power, and inevitably what is at first an expensive prototype eventually scales down in cost and availability for Buffy to stand bored on a street corner asking her phone for that list of things she can go do.
May you live in interesting times. And we do.
I’m sorry, I was with you until this point. Moore’s Law is practically dead as of late. We are hitting fundamental limits with transistor density, which has led to the multi-core CPU heyday we’ve been seeing.
AI models are extremely taxing computationally, and one could argue this current resurgence is due to extra GPU capacity being freed up after the latest crypto-bust.
OpenAI has been losing boatloads of money (even pre-Altman drama) just to process current demand. Hardware is unlikely to find any game-changing gains, so software is left to uncover significant efficiencies. The only way I see that happening, given the brute-force method LLMs currently operate on (Large is part of the name for a reason) is to reduce the very same general purpose abilities it has.
I know LLMs take enormous computing resources to train, but I thought that once one is trained, a home computer is about the level of resources needed to operate it? Though I could be wrong, and I'm willing to be corrected on this.
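At least, the local-inference projects make it look that way. A sketch with the llama-cpp-python bindings, assuming you've downloaded quantised weights (the file name is a placeholder):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# model_path is a placeholder for a locally downloaded, quantised weights file
model = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf")

out = model("Q: Why is the sky blue? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```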
CPUs don't have all that much to do with LLMs; they're extremely parallel so you'll either be putting it on a GPU or such. And GPU performance is still improving exponentially. Not to mention, purpose-designed neural-net chip architectures look to be offering a couple of orders of magnitudes of performance savings.
LLMs haven't been around for very long, and I would expect the software to become significantly more efficient due to algorithmic improvements.
Although, they're barely adequate for some tasks, so a lot of the performance gains will get dumped into making LLMs smarter.
These days I assume any new AI-related product is a cool demo but not practical, until proven otherwise. It’s become easier to create a demo to do just about anything.
Longer term, though, I don’t know what we’ll have in a year, let alone five years.
It’s funny that the unpredictable long term future can include ranges like “next year” but I agree, I have no idea what to expect next from GPT-style models even next month, it’s a very exciting time.
I think the biggest hurdle for AI is that unless there are breakthroughs in reducing computation needs, when the hype settles, people will wise up that the cost/benefit will often still have the scales tipped to older, but simpler-to-maintain systems.
I don't see "simpler-to-maintain" as being a deciding factor if you're not the one maintaining the system. Search engines and web browsers and mobile phone operating systems aren't simple to maintain. People will use hard-to-maintain systems, provided the complexity is hidden from them.
For programmers, maybe a company pays for Copilot subscriptions, but that isn't expensive compared to a programmer's salary. There are IDEs that are more expensive.
From a business point of view, the question is whether the benefit is worth the subscription fee. If not, people will cancel when the novelty wears off and they find they don't use it.
I am speaking as a maintainer of systems. If a system is easier to maintain, it has better ROI. So as the hype dies and cost/benefit comes into play, AI-driven systems will be at a disadvantage unless they genuinely provide a massive improvement in ways that matter.
All these hyped up custom-built AI assistants for websites? I'm betting half of them end up costing way more than just having a well-written FAQ and functional support staff that are empowered to solve issues rather than just meet call-time metrics.
Well, sure, I wouldn't be surprised if 90% of them got shut down. There are a lot of startups that don't seem to be adding a lot of value.
I see a couple of ways it could go: maybe AI gets taken out of the loop? So the documentation writers use it to assist with writing the documentation, but end users just see a search engine. Then there's no API cost in production.
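Something with this shape, say. A deliberately crude sketch (the FAQ content is made up): the LLM assists offline when the docs are written, and serve time is plain old matching with no API cost:

```python
# Offline (run once): an LLM drafts FAQ entries, a human reviews them.
# Serve time: ordinary keyword matching over the reviewed entries.
faq = {
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
    "cancel subscription": "Go to Billing > Plan and choose Cancel.",
}

def search(query: str) -> str:
    scored = [(sum(word in key for word in query.lower().split()), answer)
              for key, answer in faq.items()]
    hits, answer = max(scored)
    return answer if hits else "No match found; please contact support."

print(search("how do I reset my password?"))
```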
Also, OpenAI has made some significant price cuts. I'm guessing they've reduced operating costs? That trend could continue.
My guess is that Microsoft is bankrolling it and subsidizing it heavily until such time the market is entrenched and then they start jacking up the costs.
Hello Microsoft 365 subscriptions, meet Bing Chat licensing costs - you'll have a lot in common.
Yes, it might be subsidized, but that's a strong incentive to actually reduce costs and stop burning lots of money.
Just because there's an incentive doesn't mean they'll succeed. But I expect they do have people working on it, and I expect there are efficiency gains to be had fairly quickly. This isn't like running up against the laws of physics, or fundamental algorithmic constraints.
There are also strong incentives to improve efficiency outside OpenAI, like all the people trying to get language models to run on a laptop, with some success.
No, it's a strong incentive to keep prices below the cost of running until such time as the customer base is entrenched enough that they have to face multi-million-dollar migrations or pay an extra six figures annually.
Much how Amazon drove Diapers.com under.
These aren’t mutually exclusive. Yes, sometimes companies sell things below cost for strategic reasons. But that doesn’t mean they wouldn’t rather reduce their costs, if they could. Lower costs are always better. If you can undercut the competition in a sustainable way, that’s better than doing it in an unsustainable way.
This is my own take as a technical person but not especially focused on the AI.
Calling the current statistics-based generative models AI is misleading. They are powerful and impressive tools, and they will inevitably bring about change, but that change will arise from how they are used.
As far as I know there is not really any artificial general intelligence anywhere in sight. Current models are entirely dependent on human input, both in training and in use and in essence simply remix what was fed into them.
I am more afraid that we will end up with widely used models whose results are rarely checked by the end user, and that the subtle, hidden biases present in datasets scraped from all over will remain, with results that are anyone's guess.
Social media is already bad enough; your own echo chamber, tuned to get your attention, fed with data that contains who knows what, and transformed by a process that is not entirely transparent even to the people who wrote it, is terrifying.
I look at it and see a need to make sure my ass is covered. I have no interest in listening to the world of marketing, so I've taken the time to understand the tools by using them a bit. That way I know what in my life could actually be improved, and I can control the extent to which I engage with the tools, so that I can better judge whether a tool is worth integrating/paying for, and can't be easily persuaded.
One thing I find worrisome is folks socializing with the chatbots. I think it's reckless and irresponsible to bill any of these tools as being able to fulfill these kinds of needs. It's one thing to fuck around on your own, it's very different when the product advertises itself as being some sort of person (like a "virtual girlfriend" and shit like that). I guess if you're a psych researcher, get your tools of analysis ready because I'm sure we're in for some shit with this.
I am excited at the prospect of a sentient machine though. I think there's a shot at it, and I'm super interested in understanding the nature of that intelligence. In the meantime my goal is to understand the bullshit layer as best I can, so I can avoid it and see the cool stuff.
I'm up to my armpits in it, looking at the regulatory environment, the use cases that are getting thrown around, and how the whole system of rules and technology fits together (or doesn't).
The short version is this: the new generative AI is cool and powerful, if a little computationally expensive. The people wielding it (particularly in the private sector) are happily using the flashy AI to confuse users and regulators about who is responsible for the actions taken at training time, and at inference time.
It won't be long until we see the first lawsuit or criminal charge stemming from an AI agent that takes an action on a user's behalf where someone will argue that "It wasn't me, it was the AI!"
There's a line in I, Robot which stands out. I can't remember the exact quote, but it's something along the lines of: if the robot caused the death of the scientist, it would be equivalent to an industrial accident.
That's how AI could turn out, in theory.
As has already been pointed out here, it's got very few niches.
One of those niches is in another very niche sector: industrial 3D printing and Additive Manufacturing.
The general gist is that AI-driven design iteration and creation is great because, if you give it boundaries, it will optimise (a bit like topology optimisation, but without the maths) to the nth degree and give you the best possible design it sees.
With that said, there are still drawbacks in that it is still experimental and extremely expensive.
In my experience as an Engineer who works for a 3D printing manufacturer and service bureau, if you have a specific use case, and its given enough direction, it's very powerful. High efficiency heat exchangers are popular at the minute, as designing them conventionally for 3D printing processes is very challenging: AI generates a design in minutes based on certain processing constraints (minimum feature size that can be made by the printing system, particle size of the material, heat transfer properties, etc).
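If you want the flavour of it, here's a toy of the 'give it boundaries and let it search' loop. This is not how any vendor tool works internally, and the numbers are made up; it's just the general shape:

```python
import random

MIN_WALL_MM = 0.4  # e.g. the thinnest wall the printing system can produce

def score(design):
    # stand-in for a real heat-transfer simulation
    return design["surface_area"] / design["wall"]

best = None
for _ in range(10_000):
    candidate = {"wall": random.uniform(0.1, 2.0),
                 "surface_area": random.uniform(1.0, 100.0)}
    if candidate["wall"] < MIN_WALL_MM:
        continue  # violates the printability constraint, discard
    if best is None or score(candidate) > score(best):
        best = candidate
print(best)
```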
But then you have the situation that most of the big players in the industrial AM sector have this capability, so no one is innovating with it. It's just all coral-shaped heatsinks on everyone's exhibition stands (AI only knows what we teach it, and we know the most efficient transfer mechanisms are found in nature, hence this result). For anyone interested (I can't link on mobile), TOffeeAM is the current choice for most AM companies, and their designs that target heat transfer as the main boundary all look quite organic.
Outside of AM and 3D printing, I don't see much of a positive impact. Just the streamlining of certain processes perhaps.
Edit: I have just fact-checked myself, and I was wrong about TOffeeAM being AI-driven - it is also maths-driven. Leaving my comment up for transparency. Feel free to mark as noise.
I do compilers research. There are very similar applications in the analysis and optimisation of code (the application of ML was demonstrated long before tab9/copilot/chatgpt, though only as research; not really in practice yet). Broadly speaking, there are a lot of cases where we have an optimisation problem which is a very large state-space search (too large to explore exhaustively), and ML can be helpful in deciding what to look at. But this all really has nothing to do with the present 'AI craze'.
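To give the flavour, a toy of that search-pruning idea, where the 'cost model' is a random stand-in for something learned from compilation data:

```python
import random

PASSES = ["inline", "dce", "licm", "unroll", "vectorize"]

def predicted_cost(sequence):
    # stand-in for a trained cost model scoring a pass ordering
    random.seed(hash(tuple(sequence)))
    return random.random()

def beam_search(width=3, depth=4):
    beam = [[]]
    for _ in range(depth):
        candidates = [seq + [p] for seq in beam for p in PASSES]
        beam = sorted(candidates, key=predicted_cost)[:width]  # ML-guided pruning
    return beam[0]

print(beam_search())  # best pass ordering found under the learned cost
```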
I won't be the only one who looks at it from a simple person's view. At the moment, stuff like OpenAI makes it easier to generate answers or summaries to questions based off existing data, right? Isn't AI meant to be something that can think for itself, make its own decisions, and create and resolve within some sort of limits? There should be limits on AI-generated art, music, novels... All of it currently pulls from and meshes together existing original works created by other people.
This is already here. For quick skims of search results, Kagi and Bing both do this. For in depth research, there are tools like waldo.fyi. If you have more specific needs, it’s not hard to roll your own, combining search APIs, web scrapers, and public LLM models like GPT, Claude, or even LLaMA or other models that you can run on your own hardware.
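The roll-your-own version is shorter than people might expect. A rough sketch, where fetch_results is a hypothetical stand-in for whichever search API or scraper you wire in:

```python
from openai import OpenAI

client = OpenAI()

def fetch_results(query: str) -> list[str]:
    """Hypothetical: return text snippets from your search API or scraper."""
    raise NotImplementedError

def answer(query: str) -> str:
    snippets = "\n\n".join(fetch_results(query))
    resp = client.chat.completions.create(
        model="gpt-4",  # or Claude, LLaMA, etc. via their own clients
        messages=[{"role": "user",
                   "content": f"Using only these snippets:\n{snippets}\n\nAnswer: {query}"}],
    )
    return resp.choices[0].message.content
```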
I think we're going to have AI generated movies and TV shows soon. AI image gen is already extremely good and animation is getting better day by day.
I think we're going to have 90% AI generated content (script, voices, video) with a touch of human direction and fine tuning.
I could see something like this on Netflix as early as next year, but whether it's 2 years away or 20 it will happen.
My views are best represented by the XKCD comic "Robot Future." If there's a weak link here, it's not the off chance that computers awaken or request autonomy and become violent if denied. It's that this tech will enable people to do some real harm, like every other tool ever made, but more so. On a tangential note, I find it funny that CEOs are most worried about losing control of their workplace and the levers of power no longer responding to them one day. I'd imagine some of their workers are afraid of them for the exact same reason.