• Showing only topics in ~tech with the tag "artificial intelligence".
    1. Are any AI virtual assistants actually useful?

      AI virtual assistants are on the rise, and logically it seems like I could use one to support productivity, small business, neurodivergent accommodations, etc. But when reviewing what's out there, they don't seem super useful.

      Otter seems the most useful because it can attend web meetings, record, contextualize screen shares, and sift the transcripts into action items, but it can't join every webinar service and I'm not sure I can log into it on a corporate platform. Others seem able to check a calendar or set a reminder, but nothing I would pay for.

      Some use cases might be gathering basic info from clients, scheduling meetings (Calendly can handle this), blocking time for my task lists, writing basic email drafts, adding up expenses each month, sending reminders to customers, etc.

      All of this could happen with various tools, but it seems like good territory for an AI virtual assistant.

      So, have you found any AI VAs that would be worth paying for? Anything that saves time or makes life easier?

      23 votes
    2. Anyone know of research using GPTs for non-language tasks?

      I've been a computer scientist in the field of AI for almost 15 years. Much of my time has been devoted to classical AI; things like planning, reasoning, clustering, induction, logic, etc. This has included (but has rarely been my focus) machine learning tasks (lots of Case-Based Reasoning). For whatever reason, though, the deep learning trend never really interested me until recently. It really just felt like they were claiming huge AI advancements when all they had really found was an impressive way to store learned data (I know this is an understatement).

      Over time my opinion on that has changed slightly, and I have been blown away by the boom that is happening with transformers (GPTs specifically) and large language models. Open-source projects are creating models comparable to OpenAI's behemoths with far less training data and far fewer parameters, which is making me take another look at GPTs.

      What I find surprising, though, is that they seem to have only experimented with language. As far as I understand the inputs/outputs, the language is tokenized into bytes before prediction anyway. Why does it seem like the technology can only be used for LLMs (or rather, why does the community act like it can)?
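      The point about tokenization can be made concrete: a transformer only ever sees a sequence of integer token IDs, and nothing about the architecture requires those IDs to have come from natural language. A minimal sketch (using raw UTF-8 bytes as the "tokens", a simplifying assumption; real models like GPT-2 use byte-pair encoding on top of bytes):

```python
# A transformer's input is just a sequence of integer token IDs.
# Byte-level "tokenization" maps ANY input -- language or not -- to
# ints in [0, 255], so the model is agnostic about what they encode.
text = "pick-up(blockA)"  # could just as well be a planning action
token_ids = list(text.encode("utf-8"))  # one token per byte, vocab size 256

print(token_ids[:5])  # [112, 105, 99, 107, 45]
```

      The same integer sequence could encode moves in a game, states in a simulation, or steps in a plan; the model's job is only next-token prediction over those integers.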

      For example, what about a planning domain? You can specify actions in a domain in such a manner that tokenization would be trivial, with far fewer tokens than raw text. Similarly, you could generate a near-infinite amount of training data if you wanted via other planning algorithms or simulations. Is there some obvious flaw I'm not seeing? Other examples might include behavior and/or state prediction.
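      As a hedged illustration of how trivial that tokenization could be, here is a hypothetical blocks-world encoding (the action names, object names, and vocabulary are invented for the sketch): each action and argument becomes one token, so the vocabulary is the handful of domain symbols rather than tens of thousands of subwords.

```python
# Hypothetical blocks-world vocabulary: every domain symbol is one token.
vocab = {"pick-up": 0, "put-down": 1, "stack": 2, "unstack": 3,
         "blockA": 4, "blockB": 5, "blockC": 6, "<end>": 7}

# A plan is a list of (action, argument) steps.
plan = [("unstack", "blockA"), ("put-down", "blockA"),
        ("pick-up", "blockB"), ("stack", "blockB")]

# Flatten the plan into one token sequence a GPT could model
# with ordinary next-token prediction.
tokens = [vocab[sym] for step in plan for sym in step] + [vocab["<end>"]]

print(tokens)  # [3, 4, 1, 4, 0, 5, 2, 5, 7]
```

      Training pairs for such a model could then be generated in bulk by running a classical planner over random problem instances, exactly as the post suggests.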

      I'm not saying that out of the box a standard GPT architecture is a guaranteed success for plan learning/planning... But it seems like it should be viable and no one is trying?

      9 votes