Sorting out your AI options
With so many types of AI emerging, it’s easy to get confused… This post aims to help sort them out: maybe not as neatly as the Hogwarts Sorting Hat, but hopefully well enough to add a bit of clarity.
First, let’s review the basics: Large Language Models (LLMs). These models, like OpenAI’s GPT (the brain behind ChatGPT), are trained on massive amounts of text and are incredibly good at generating human-like language. There are now dozens of these available, and possibly hundreds of smaller, more bespoke language models lurking around the edges.
Here are just a few of the top or most popular LLMs right now:
- GPT-4o – Known for versatile text generation, it’s ideal for content creation and conversational applications. GPT-4o with canvas is the newest variant as of this writing (can’t keep up!).
- Claude 3 Sonnet – Anthropic’s multimodal model, designed for high-complexity tasks with a strong ethical framework.
- Gemini 1.5 Pro – Google’s multimodal model, efficient in integrating text and visual data. This LLM drives Google’s experiments like NotebookLM.
- Mistral – A newer, open-source, multilingual model focused on efficient deployment and scalability for developers. Good for code and complex tasks.
- LLaMA 3.2 – Meta’s latest open-source LLM, popular in research thanks to its customizability and suitability for a wide range of language applications.
This new generation of LLMs shows the growing trend toward multimodal AI, where models process both text and visual data, providing richer responses and more complex capabilities.
LLM-powered applications
LLM-powered applications are, by definition, a type of generative AI. Here’s how it breaks down:
- Generative AI is the broader category of AI models designed to create or “generate” new content, whether that’s text, images, music, or something else.
- LLM-Powered Applications fall under this category because they use a specific type of generative model, the Large Language Model (LLM), to produce human-like language responses based on prompts.
So, every LLM-powered app (like ChatGPT) uses generative AI, but not all generative AI uses LLMs. For example, tools like DALL-E or Midjourney use a different generative approach to create images, not text.
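If you’re curious what “producing a response based on a prompt” looks like in practice, here’s a minimal sketch using OpenAI’s Python client. The model name, prompt, and summarize helper are illustrative assumptions rather than a prescription; any chat-capable LLM API follows the same basic pattern.

```python
# Minimal sketch of an LLM-powered call: send a prompt, get generated text back.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in your environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

def summarize(text: str) -> str:
    """Ask the model to summarize a block of text (illustrative helper)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model would do
        messages=[
            {"role": "system", "content": "You are a concise summarizer."},
            {"role": "user", "content": f"Summarize this in two sentences:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(summarize("Large Language Models are trained on massive amounts of text..."))
```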
Now, let’s dive into the 12 types of AI, moving from more common types of AI to the futuristic, sci-fi ideas on the horizon.
1. LLM-Powered Applications
- What they do: These apps use Large Language Models (LLMs) like GPT to handle tasks like summarizing, transcribing, answering questions, and even creating study guides. They’re basically supercharged text processors, usually customized to do a specific job.
- Examples: ChatGPT for answering questions, Jasper for whipping up content, Grammarly for polishing language, Consensus for summarizing scientific literature… And oh so many more…
2. Retrieval-Augmented Generation (RAG)
- What they do: RAG apps combine the LLM magic with an info-retrieval system. They’re especially handy when you need answers or content specific to something you’ve uploaded, like a document. They pull up the relevant info and then generate responses around it.
- Examples: NotebookLM and Coral shine here; you can upload content, and they’ll generate summaries, study guides, or specific answers based on it. (There’s a rough code sketch of the RAG pattern after this list.)
3. Agent-Based Applications
- What they do: Known as “agents,” these apps act as little helpers that semi-independently handle specific tasks for you, often following a set routine. They don’t think up their own goals but can take a load off by doing repetitive tasks.
- Examples: Scheduling bots, email assistants, and transcription tools like Otter.ai.
4. Generative AI
- What they do: Generative AI is all about creating new stuff—text, images, even audio—based on what it’s learned from data. It’s popular for anything creative where you need original content.
- Examples: DALL-E for image generation, Midjourney for digital art, and AIVA for composing music.
5. Federated AI
- What they do: Federated AI keeps learning across multiple devices or servers without sharing personal data, so it gets smarter but keeps privacy intact. Think of it like on-device learning that doesn’t need to send everything to a central server.
- Examples: Predictive text on your phone or personal recommendations that don’t involve sharing your data with big servers. (A bare-bones federated-averaging sketch follows after this list.)
6. Explainable AI (XAI)
- What they do: XAI is all about transparency. It makes sure that AI decisions are understandable to humans, especially useful in fields like healthcare or finance where you need to know the “why” behind decisions.
- Examples: Diagnostic algorithms in medicine or financial loan assessments where clear reasoning is a must.
7. Autonomous AI (Agentic AI)
- What they do: This one’s a bit more advanced—autonomous AI works on its own, tackling goals and adapting without constant input. It can manage projects or tasks over time, adjusting as needed.
- Examples: Personal AI assistants that not only schedule your meetings but also adapt and take initiative based on your habits and feedback.
8. Artificial Narrow Intelligence (ANI)
- What they do: Also called “Weak AI,” ANI is built for specific tasks. It’s not flexible or able to generalize, but it’s very good at its one job.
- Examples: Chatbots like those in customer service, image recognition software, Netflix’s recommendation engine, or any AI designed with a single purpose in mind.
9. Limited Memory AI
- What they do: Limited Memory AI uses past data or history to make better decisions, but it’s not self-improving without retraining.
- Examples: Self-driving cars that get better based on past trips but still need new training data to make any real upgrades.
10. Theory of Mind AI
- What they do: Not here yet, but Theory of Mind AI would be able to understand things like emotions, beliefs, and intentions, adjusting its behavior based on human moods or needs.
- Examples: Could be applied in future robots or learning systems that change based on how the user is feeling or responding.
11. Artificial General Intelligence (AGI)
- What they do: Also known as “Strong AI,” AGI would understand, learn, and apply knowledge across different tasks, much like a human. It could tackle almost any intellectual job humans can do.
- Examples: This one’s still theoretical, but if achieved, AGI could do everything from research to teaching to art with human-level understanding.
12. Artificial Superintelligence (ASI)
- What they do: ASI is the big one—beyond human intelligence in every way, with creativity, problem-solving, and even social understanding that outpaces ours. It’s a bit of a sci-fi concept for now but would be a game-changer.
- Examples: Hypothetical (think sci-fi); ASI could solve massive global problems or make breakthroughs, but it’s also seen as a potential existential risk.
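As promised under type 2, here’s a rough sketch of the RAG pattern: retrieve the most relevant snippet from your own content, then hand it to the LLM as context for the answer. The tiny keyword-overlap retriever, the sample documents, and the model name are assumptions for illustration; real RAG systems use embeddings and a vector store for retrieval.

```python
# Toy RAG sketch: retrieve a relevant snippet, then generate an answer from it.
# The keyword-overlap "retriever" stands in for embeddings plus a vector store.
from openai import OpenAI

client = OpenAI()

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
]

def retrieve(question: str) -> str:
    """Pick the document that shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda doc: len(q_words & set(doc.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return response.choices[0].message.content

print(answer("What is your refund policy?"))
```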
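And for type 5, here’s a bare-bones sketch of federated averaging, the idea behind Federated AI: each device trains on its own private data, and only the updated model weights, never the raw data, go back to be averaged. The one-parameter model and made-up data points below are purely illustrative.

```python
# Bare-bones federated averaging: devices train locally on private data,
# and only their updated weights are averaged into the shared model.

def local_update(weight: float, x: float, y: float, lr: float = 0.1) -> float:
    """One gradient-descent step on a device's private (x, y) pair for y ~= weight * x."""
    grad = 2 * x * (weight * x - y)  # gradient of the squared error
    return weight - lr * grad

# Each device keeps its own data point; the "server" never sees these values.
device_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

global_weight = 0.0
for round_num in range(20):
    # Devices train locally and send back weights only.
    local_weights = [local_update(global_weight, x, y) for x, y in device_data]
    global_weight = sum(local_weights) / len(local_weights)  # federated averaging

print(f"Learned weight after 20 rounds: {global_weight:.2f}")  # lands close to 2.0
```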
In a world buzzing with different AI types, understanding these models can make it easier to see where they fit into our everyday lives and where they’re headed. From practical tools that help us with writing and scheduling to future possibilities like Theory of Mind and AGI, each type of AI brings something unique to the table.
While we’re still far from ASI or the sci-fi stuff, these developments – and the speed at which they’re evolving – highlight how rapidly AI is transforming the way we live and work.
So, whether you’re exploring current AI tools or simply curious about what’s next, there’s something here for everyone looking to make sense of this fascinating landscape. Hope this helps you as much as it helped me! Consider this sorted… for now…
Leaned heavily on ChatGPT to help sort these out.
Cross-posted to LinkedIn.