Do you remember your first encounter with a computer? For me, it was an Apple //e, which my parents and I used to play Ultima and Might and Magic.
No matter when you first used a computer, we all developed certain expectations about how computers work, and that knowledge has served us well. Once we learned the ropes, it was easy to shift between programs and machines.
But with generative AI, a lot of these expectations no longer apply, and we need to adopt a new way of interacting with technology to use it effectively. If you've been struggling with AI tools, it might be because you're still using the old approach. So, let's break down the key changes.
AI tools are built for variability, not consistency
Imagine you're writing a formula in Excel. You'd expect the same formula in two cells to give identical results with the same input, right? That's because we're used to computers being consistent: same input, same output.
But with generative AI, the same prompt can give you different responses each time. This inconsistency happens because large language models (LLMs) are based on statistical relationships and most use a probabilistic method to generate responses. It's this random element that means each run will be slightly different from the last.
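Here's a minimal sketch of why runs differ: instead of always picking the single most likely next token, models sample from a probability distribution. The tokens and probabilities below are invented for illustration, but the sampling mechanic is the real reason two identical prompts can diverge.

```python
import random

# Toy next-token distribution for the prompt "The best side for steak is"
# (tokens and probabilities are made up for illustration).
next_token_probs = {
    "fries": 0.4,
    "asparagus": 0.3,
    "salad": 0.2,
    "risotto": 0.1,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Run this a few times: the same "prompt" can yield different continuations.
for _ in range(3):
    print(sample_next_token(next_token_probs))
```

Real models repeat this sampling step once per token, so small early differences compound into very different responses.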
Learn more about what's going on under the hood of generative AI with this article on how ChatGPT works.
Generative AI's unpredictability can be both fascinating and frustrating. If you ask the AI for any kind of analysis—like generating a strategy or recommending the best option—the answer might change every time you ask the question. Unsettling stuff if you're heavily relying on AI for critical decisions, so make sure you don't shut your brain off while using these tools.
For instance, when I asked the tool to help me prioritize six possible creative projects on a free afternoon, I got completely different results each time.
The prompt and three generations were shown as screenshots: each regeneration returned a different prioritization of the same six projects.
While inconsistency might seem like a drawback, it can be hugely beneficial for creative tasks, especially brainstorming—your inputs can provide surprising results or ideas you never would have considered. A lot of AI models even let you set the temperature, which controls the randomness of outputs. Turn that up if you want more creative responses.
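Under the hood, temperature rescales the model's scores before sampling. This sketch uses invented logits for three candidate tokens to show how a higher temperature flattens the distribution, giving unlikely tokens more of a chance:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw scores to probabilities; higher temperature = flatter distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # invented scores for three candidate tokens

low = softmax_with_temperature(logits, temperature=0.5)   # sharper: top token dominates
high = softmax_with_temperature(logits, temperature=2.0)  # flatter: more randomness

print([round(p, 2) for p in low])
print([round(p, 2) for p in high])
```

At low temperature the top token soaks up almost all the probability mass, which is why low-temperature outputs feel more deterministic and high-temperature ones feel more creative.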
Looking ahead, expect to see more and more tools that poll several AI models and aggregate the results for more consistent outputs. But until then, you can use the Regenerate function to check multiple responses and identify commonalities or discrepancies. This can help you understand the range of possible outputs and make a more informed decision.
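Until those aggregation tools mature, you can approximate the idea yourself: regenerate several times and take a majority vote. A sketch of that workflow, where `generate()` is a placeholder you'd wire up to your AI tool of choice (here it just returns canned answers at random):

```python
import random
from collections import Counter

def generate(prompt):
    """Placeholder for a real API call; returns one of several plausible answers."""
    return random.choice(["Option A", "Option A", "Option B"])

def majority_answer(prompt, runs=5):
    """Regenerate several times and return the most common answer plus vote counts."""
    answers = [generate(prompt) for _ in range(runs)]
    counts = Counter(answers)
    answer, count = counts.most_common(1)[0]
    return answer, count, counts

answer, count, counts = majority_answer("Which project should I prioritize?", runs=7)
print(f"Most common answer: {answer} ({count}/7 runs)")
```

A close vote (4 to 3, say) is itself useful information: it tells you the model doesn't have a strong, stable answer, and you should lean more on your own judgment.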
AI can hallucinate and miscalculate
Back to our Excel example: when you type in a formula, how accurate do you expect that built-in function to be? Probably spot-on, right? After all, it's in the program, so you assume it must be accurate. With AI language models, accuracy is no longer guaranteed.
Because AI tools don't actually know anything (they're just predicting the most plausible string of text to follow your prompt), they can hallucinate, or make up information. AI developers are building in safeguards against hallucinations, and there are some prompt engineering techniques you can use to help, but there's still a non-zero chance AI will flat-out make stuff up. Traditional programs were told exactly what to do, step by step; generative AI was given a lot of guidance during training, but it produces its answers on its own.
To safeguard against potential inaccuracies, always double-check crucial information and references, especially for high-stakes decisions. This extra step can help ensure you're not led astray by an AI's confident but incorrect assertion.
AI tolerates errors, while code crashes on tiny mistakes
If you've spent much time programming, you know how a tiny mistake like a misplaced semicolon can cause your entire code to crash. But generative AI is surprisingly tolerant of small errors, including typos and syntax variations. That's because its training data comes from humans, complete with typos and differences in styles and formatting, and its token-based approach makes language processing more flexible and forgiving.
Look what happens when I ask what wind would pair best with a steak—it knows I meant wine.
But don't get too comfortable.
While AI might understand your typo-ridden prompt, precision is still your friend. Unclear inputs can lead to unexpected outputs. To get the best results, aim for clarity and precision in your prompts, even if the AI seems to let you off the hook. It's just like good communication skills with a human—the clearer you are, the better the AI can assist you.
Case in point: I don't want wings with my steak.
As models improve, they'll likely get even better at clarifying ambiguous inputs. Newer models are already starting to ask for clarification if you make a mistake or provide an unclear prompt, giving you a chance to improve your prompting and get better outputs.
Specialized language is no longer required
I remember getting my first computer with a mouse, and playing those little games that were really poorly-hidden tutorials to get you to learn how to move the mouse and double-click. We need something similar here to effectively work with AI tools: practice and patience (ChatGPT kind of feels a little like a game sometimes too, right?).
We're used to working with computers through specialized languages like programming languages, scripting, and formulas, but to use generative AI, we need to bring back our natural language skills. And although prompt engineering is an emerging skill to build, it's a lot easier to get started with AI because the tools already speak your language.
That said, there's still an art to crafting effective prompts. Think of it as learning to be a good conversationalist rather than memorizing a new programming language's syntax. To get the best results, you can use advanced prompt techniques or frameworks. Don't be afraid to experiment: sometimes the most natural way of asking yields the best results, but other times highly creative prompts will get you something surprisingly better.
Looking ahead, I expect prompting to oscillate between specialized prompt engineering techniques and more intuitive, natural language interactions as models continue to evolve.
Generative AI evolves faster than other software
We're used to software that stays pretty much the same—new features are added, bugs are fixed, tweaks are made, but very rarely are there huge overhauls. But with generative AI, new models are coming out faster than we can keep up with, and they can really change the results you get. Updates to existing models can throw your tried-and-true prompts out the window (looking at you, Midjourney).
This happens because of new training data, instructions, or restrictions being added. To keep up with these changes, pay attention to the model you're using. Some platforms let you specify or even go back to older versions if needed. Knowing these options can help you keep your work consistent and understand why your results might change over time.
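If you're calling a model through an API, pinning an exact model version in your request is the simplest way to keep results comparable over time. Here's a sketch of the idea: the payload shape mirrors common chat APIs, and the dated snapshot name is an assumption for illustration, so check your provider's docs for the real identifiers.

```python
def build_request(prompt, model="gpt-4o-2024-08-06"):
    """Build a request payload that pins a specific model snapshot.

    "gpt-4o-2024-08-06" is a dated snapshot name (an assumption here);
    a moving alias like "gpt-4o" may point to different versions over time,
    which can silently change your results.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # reduce (but not eliminate) run-to-run variation
    }

payload = build_request("Summarize this meeting in three bullets.")
print(payload["model"])
```

When a new version does come out, rerun a handful of your standard prompts against both the pinned and latest versions before switching, so you know what changed.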
Automate AI
Even though AI is breaking all the computer rules, it's still possible to build it into your existing software processes. By connecting your AI tools to Zapier, you can pull the power of AI into all the other apps you use. Learn more about how to use AI with automation using Zapier, or get started with one of these pre-made workflows.
Start a conversation with ChatGPT when a prompt is posted in a particular Slack channel
Write AI-generated email responses with Claude and store in Gmail
Send prompts to Google Vertex AI from Google Sheets and save the responses
Promptly reply to Facebook messages with custom responses using Google AI Studio (Gemini)
Zapier is the leader in workflow automation—integrating with thousands of apps from partners like Google, Salesforce, and Microsoft. Use interfaces, data tables, and logic to build secure, automated systems for your business-critical workflows across your organization's technology stack. Learn more.
Related reading: