OpenAI ChatGPT
Large language models (LLMs) are AI models trained on, you guessed it, large amounts of language data (English, Python, HTML, etc.). More specifically, they're trained on Wikipedia, the web in general, and other sources of text and images. With such a large dataset and a mega large model (lots and lots of associations), an LLM is able to form relationships between words, phrases, and images. For example, the word “frog” is going to be associated with “green”, “water”, “slimy”, etc. more often than with “purple” or “desert”. Not really all that scary. What is a little scary is that we don’t control how these models learn these associations. We can tell a model to pay more attention to one feature or another, but when it comes right down to it, we have no idea how the model decides to form large-scale associations. These are huge models with billions or trillions of associations. We humans have a hard time keeping track of what we had for lunch last Monday. That’s not a problem for AI.
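The "frog goes with green, not purple" idea can be sketched with a toy co-occurrence count. This is a minimal illustration, not how an LLM actually learns; the four-sentence corpus and the word counts are made up for the example, and real models learn associations through training rather than by counting.

```python
from collections import Counter

# Toy corpus -- a tiny stand-in for the web-scale text an LLM trains on.
corpus = [
    "the frog sat in the green water",
    "a slimy green frog jumped in the water",
    "the purple cactus grew in the desert",
    "a frog in the water looks green and slimy",
]

def cooccurrence_counts(target, sentences):
    """Count how often each word appears in a sentence alongside `target`."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w != target)
    return counts

counts = cooccurrence_counts("frog", corpus)
# "green" and "water" co-occur with "frog"; "purple" and "desert" never do.
print(counts["green"], counts["water"], counts["purple"], counts["desert"])
# → 3 3 0 0
```

An LLM's associations work on the same intuition, but at the scale of billions of parameters learned from the data rather than a handful of explicit counts.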
XXX
Google Bard