In short, no.
AIs are already cannibalizing each other: using other AI-made images to make more AI images, which other AIs will then use to make still more images, resulting in a continuous corruption where the AI's flaws get amplified over and over with every iteration. A hand with five fingers becomes one with 6, for example, then 8, then 15, and so on.
AIs will keep feeding off of the work of real artists, but they need real artists to survive. They're like parasites that can end up killing their host if they get too big.
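The "flaws amplify with every generation" claim can be sketched with a toy simulation. This is purely an illustrative assumption, not how any real image model is trained: the "model" here is just the mean and standard deviation of a Gaussian, re-fitted each generation to samples drawn from the previous generation's model, so finite-sample estimation error compounds over time.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0  # generation 0: the "real artwork" distribution

for generation in range(20):
    # "train" each new generation on 50 samples produced by the current model
    samples = [random.gauss(mu, sigma) for _ in range(50)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)

# Estimation error compounds: the mean wanders and the variance drifts,
# the toy analogue of fingers multiplying from generation to generation.
print(f"after 20 generations: mu={mu:.3f}, sigma={sigma:.3f}")
```

With more samples per generation the drift slows down but never fully disappears, which is the intuition behind the cannibalization worry.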
I don't think that's how it works though.
An AI like this needs training data, but it doesn't consume an image to make a new one or anything like that. And all the artwork that has ever been created can be used for this, so it doesn't need new data to be constantly generated. That means it would keep doing exactly what it's already doing even if nobody created new artwork anymore. Plus, people will continue to make art even if they can't make money off of it anymore because of AI, so the amount of training data will keep growing regardless.
Realistically, AI will replace some people, similar to how automation in manufacturing etc. replaced some. But odds are that in applications where cost is less of an issue, there will still be a human doing the work, likely aided by AI, since at least in the short to intermediate term humans are probably still better at creating things that fall outside the AI's training data.
More price-conscious applications might become mostly AI-generated though, with human design being a bit of a luxury thing.
The people designing these things are data scientists; they can and will filter those flaws out of their datasets and continue to build improved models over time. Don't count on that corruption saving anyone.
I don't know what the future holds, but at least with text AI, it's not that technical and the information is old. My current job deals with ChatGPT specifically, and it was really cool experimenting with it at first. After a month or so of using it, I found that it is wrong most of the time on technical analysis and very rarely has up-to-date information. I understand it only has data from around June 2021 and before. It's just not viable to replace humans any time soon.
I don't like to assume from one sentence, but it sounds like you might be using it wrong and underestimating it as a result. ChatGPT is bad as a question-answering machine because you never know whether it's hallucinating, but I've had it work really well for text/information processing. That ranges from simple tasks like writing an article bit by bit based on notes I feed into it (saving me about 25% of my time overall) to giving it a longish article I was too lazy to read and asking whether a specific subject is mentioned in it, and if so, for a summary. I've only done that a few times, but it was bang on every time. Interestingly, this seems to work in multiple languages. It also works really well for coding, but that's well known.
To use it as a question-answering machine, Bing AI is probably going to be better because it actually searches the internet, but that one definitely isn't quite there yet. I've had a few cases where it gave me what I wanted immediately after Google failed (or I failed with Google), but on more complex search tasks it often completely fails.
That said, yesterday I had a conversation with ChatGPT 4 (the paid version) about specific concepts in psychoacoustics and it was surprisingly correct and helpful. The issue is that in this use case you have to verify what it says.
I troubleshoot software as my primary job. We have ChatGPT integrated as a tool in our system, similar to ChatGPT-4. Troubleshooting is not very complicated if you can understand the error code and know where to find the problem. But if I ask ChatGPT to find the primary cause of an error, it often tells me to look in places inside the software that don't exist. For example, I'll put in a descriptive error code and ask for a way to fix it, plus references to where ChatGPT found that information so I can verify the answer. The answer it gives me is often to look in an area of the software that doesn't exist, backed by a reference that doesn't exist either.
Using ChatGPT every day for multiple issues, I'd say it gives me helpful, but not totally accurate, answers maybe 20% of the time.
So you're kind of using it as a knowledge base, and it doesn't work. I admit I didn't think of this case when I mentioned programming, but it's not surprising that it suffers from the same problems as using it as a non-programming knowledge base.
I found it invaluable when, for example, working with libraries for the first time, or doing something for the first time in general. It essentially gives me a "hello world" for a technology I knew nothing about, tailored to my exact needs, which saves a ton of time, especially when I use that technology for something less common that the introductory section of the manual doesn't cover. Similarly, it was pretty good at finding bugs in an isolated block of code or explaining what badly readable code does.
Yes, that's correct. I do think it's a very good tool for a lot of reasons. I just don't think it's reasonable for a company to replace humans with AI to do the job. There's an argument to be made about whether that's moral or not, but setting that aside, I think it's simply not capable of doing the same job as a human. Maybe there are some easy jobs an AI could handle, but I wouldn't think there'd be many. I do use it often for personal questions that I trust it with, like taking care of my plants. I think it does a decent job with that, as my plants are nice and healthy :)
I don't believe so, not for a while at least. I'm in a somewhat specialized field, packaging design, but at this point I feel like AI is going to affect artists, photographers, and people who work on conceptual/draft artwork more than designers who work on complicated designs that have to communicate detailed messages.
Whoever makes their money off of stock artwork/photography/commissions is probably going to be the first to see that income stream dry up, as people just run prompts through whatever artwork generators they can find to get draft artwork. For conceptual stuff it's probably "good enough" for most executives, who may not pay attention to the details, but for delivering a final product it's just not there right now (not to say it won't get there, but not anytime soon, I think).
AI artwork falls apart when you start looking too closely at it, and relying on it for actual design with copy and text is laughable at this point. Maybe somebody will figure out a way to unite ChatGPT with Midjourney to create workable artwork with text, but at the moment a lot of what AI outputs just doesn't make sense: it's nonsensical or defies physics/reality.
I DO THINK, though, that we may start seeing waves of AI-generated books hitting Amazon, where almost the entire production of the book (cover and text) is managed by AI. These will get mass-produced to the point where you'll have a hard time picking out crappy AI books from actual human-produced content.
Coincidentally the Behind the Bastards podcast is currently airing episodes about this topic, and the host has written an article about it with the things they discuss.
Given enough time I think most jobs will be made obsolete by AI and/or automation.