
AI

A guide to generative AI.

Overview

There are various types of AI. Narrow AI is task specific, meaning it can only do what it was programmed to do; Siri and Alexa are examples of narrow AI. General AI would have expanded capabilities, allowing it to do the same things a human can by learning from its mistakes. Super AI does not yet exist, but if realized, its abilities would surpass those of humans.

Depending on what you use, you will be working with narrow AI or generative AI. If you ask Alexa a question, that is narrow AI. ChatGPT, on the other hand, is generative AI: it can generate new content, though its ability to do so is limited by the material it was trained on and can find online.


How AI Works

In its most basic form, this kind of AI relies on Large Language Models (LLMs). LLMs predict the words you might use next based on statistical probability. For example, if we were to say "To make pizza, I start by making ____________", an LLM would probably say "dough" because that is the most likely answer. The same kind of prediction is the basis of many familiar tools, including predictive text (such as Google Search's autocomplete suggestions) and grammar and spell checkers.
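
To make the idea concrete, here is a tiny sketch in Python. It is not a real LLM, and the word counts are invented for the example; it simply picks whichever word has most often followed the phrase.

```python
# A toy illustration (not a real LLM): predict the next word using counts
# of which words have followed a phrase. The counts below are made up.
from collections import Counter

# Hypothetical counts of words seen after "I start by making ..."
next_word_counts = Counter({"dough": 120, "sauce": 35, "a list": 10, "dinner": 5})

def predict_next_word(counts: Counter) -> str:
    """Return the statistically most likely next word."""
    word, _ = counts.most_common(1)[0]
    return word

print(predict_next_word(next_word_counts))  # -> "dough"
```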

Generative AI builds upon LLMs. Its primary purpose is to create new content in response to a prompt. First, it predicts what you are looking for based on statistical probability. Then, it uses its programming to respond: if you ask it a question, it searches the material it has access to for the most likely answer; if you use a chatbot as a therapist, it produces the response it judges most probable; if you use generative AI to create an image (or a paper, which you should not do!), it assembles that output from patterns in material it has already seen.
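
The sketch below shows the basic generation loop in miniature, again with made-up probabilities: starting from a prompt, it keeps predicting the most likely next word and adding it to the text. Real generative AI works the same way, just at an enormously larger scale.

```python
# A minimal sketch of generation: repeatedly predict the next word and
# append it, using a tiny invented bigram table.
bigrams = {
    "to":    {"make": 0.9, "bake": 0.1},
    "make":  {"pizza": 0.6, "dough": 0.4},
    "pizza": {"you": 0.7, "i": 0.3},
    "you":   {"need": 0.8, "knead": 0.2},
    "need":  {"dough": 0.9, "cheese": 0.1},
}

def generate(prompt: str, steps: int = 4) -> str:
    words = prompt.lower().split()
    for _ in range(steps):
        options = bigrams.get(words[-1])
        if not options:
            break  # nothing learned about this word; stop generating
        # pick the most probable continuation
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("to make"))  # -> "to make pizza you need dough"
```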

Important

All AI is programmed; it is not sentient. Its output reflects the biases of its programmers and of the material it was trained on, and some programmers deliberately include false information to promote a particular political agenda.

AI also relies on publicly available, outward-facing material (like news articles, social media, and Wikipedia) to answer your questions. It does not have access to most library databases and other reputable resources. Because it runs on statistical probability, the material it gives you may be false: if a social media post promoting a false claim has a high "like" count, AI may treat it as the likely answer and pass it along to you.
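
The toy example below (with invented posts and like counts) shows the popularity problem: a system that ranks candidate answers by how often they are repeated or "liked" can surface a widely shared false claim over a correct but less popular one.

```python
# Simplified illustration: ranking answers by popularity rather than accuracy.
# The posts and like counts below are invented for the example.
posts = [
    {"claim": "accurate but rarely shared answer", "likes": 120},
    {"claim": "viral but false claim",             "likes": 45000},
]

# Picking the "best" answer by popularity rewards the false claim.
best = max(posts, key=lambda post: post["likes"])
print(best["claim"])  # -> "viral but false claim"
```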

If AI cannot find an answer based on its programming, it will make things up; these fabrications are called hallucinations. Hallucinations occur because AI is trying to complete its ultimate goal: to give you an answer.
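
The short sketch below (a deliberately silly, invented example) shows the underlying problem: a system built to always return an answer will return one even when it has no relevant information to draw on.

```python
# Toy sketch of why hallucinations happen: the goal is to produce an answer,
# so when nothing in the data matches, the system still returns a best guess
# instead of saying "I don't know".
known_answers = {
    "capital of france": "Paris",
}

def answer(question: str) -> str:
    key = question.lower().strip("?")
    if key in known_answers:
        return known_answers[key]
    # No real match: confidently return a plausible-sounding guess anyway.
    return "Paris"

print(answer("Capital of France?"))    # -> "Paris" (correct)
print(answer("Capital of Atlantis?"))  # -> "Paris" (made up, stated with confidence)
```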