Parents: The Ideal Prompt Engineers?
March 11, 2024
Art Morales, Ph.D.
When interacting with Generative AI tools like chatbots and image generators, the art of communication often feels like a delicate dance, one that requires patience, precision, and a touch of creativity. Interestingly, those who have navigated the unpredictable waters of parenting toddlers might find themselves surprisingly well-equipped for this task. Here’s why parents of toddlers, past or present, may just be the best prompt engineers we never knew we needed.
AI chatbots, in their current form, share a striking resemblance to toddlers. They’re capable of surprising us with their responses, ignoring our queries, or even outright refusing to do what we ask. The key to effectively communicating with both AI and toddlers lies in understanding their perspective and tailoring our approach accordingly.
Few-shot learning is a technique in which an AI chatbot learns to recognize patterns, make decisions, or follow instructions based on just a few examples provided within the conversation itself, with no retraining. This is not unlike how we teach toddlers new concepts: we show them what to do, give them a couple of examples, and then expect them to apply that knowledge. Parents adept at this teaching method may find they have a knack for crafting prompts that guide AI chatbots more effectively.
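As a concrete sketch of what few-shot prompting looks like in practice, here is how a couple of worked examples can be packed into a chat-style message list before the real question is asked. The role/content message format follows the common chat-completion convention; the polite-rewrite task and the `build_few_shot_prompt` helper are illustrative assumptions, and the actual model call is omitted.

```python
# Few-shot prompting sketch: teach the model a pattern (rewriting blunt
# sentences politely) by showing a couple of examples in the conversation,
# just as we would demonstrate a task for a toddler before asking them to try.

def build_few_shot_prompt(task, examples, new_input):
    """Assemble a chat-style message list containing a few worked examples."""
    messages = [{"role": "system", "content": task}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})            # what we show
        messages.append({"role": "assistant", "content": assistant_text})  # what we expect
    messages.append({"role": "user", "content": new_input})                # the real question
    return messages

examples = [
    ("Give me the report.", "Could you please share the report when you have a moment?"),
    ("Fix this bug now.", "Would you mind taking a look at this bug when you can?"),
]
prompt = build_few_shot_prompt(
    "Rewrite the user's sentence politely, following the examples.",
    examples,
    "Answer my email.",
)
```

The examples do the teaching: the system message states the task once, and the model infers the pattern from the demonstrated pairs rather than from explicit rules.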
One crucial lesson from parenting that applies to AI communication is managing expectations. If you approach a chatbot expecting it to understand and respond like an adult, you’re likely to be disappointed. However, if you adjust your expectations and communicate as you would with a four to six-year-old, you’re more likely to frame your prompts in a way that yields better responses. This doesn’t mean simplifying content to a child’s level but rather being clear, direct, and patient.
Parents who have navigated the stages of toddler development may unconsciously apply strategies that are effective in prompt engineering without realizing it. This includes:
Being specific and clear: Just as with toddlers, vague instructions can lead to unexpected outcomes. Being precise in what you ask ensures that the AI has a clear direction.
Repetition and patience: Similar to teaching a toddler, sometimes it’s necessary to repeat prompts or rephrase questions to guide the AI toward the desired response.
Positive reinforcement: Highlighting what the AI does well can be an effective strategy, akin to encouraging a toddler’s positive behavior.
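The three strategies above can be sketched as a tiny, runnable toy. The `call_model` function is a stand-in for a real chatbot API (an assumption made so the example runs on its own); the point is the loop around it: a specific request gives us something to check, and patient rephrasing tightens the prompt until the answer fits.

```python
# Toy illustration of "be specific", "repetition and patience" applied to prompts.
# call_model is a fake model (an assumption): it only complies once the prompt
# is specific enough, mimicking a chatbot that ignores vague instructions.

def call_model(prompt):
    if "exactly three" in prompt and "bullet" in prompt:
        return "- clear\n- direct\n- patient"
    return "Here are some thoughts on communication..."

def looks_right(reply):
    # Being specific lets us verify the answer: exactly three bullet lines.
    lines = [line for line in reply.splitlines() if line.startswith("- ")]
    return len(lines) == 3

def ask_with_patience(base_prompt, max_tries=3):
    """Repeat and rephrase until the reply matches the requested format."""
    prompt = base_prompt
    for attempt in range(max_tries):
        reply = call_model(prompt)
        if looks_right(reply):
            return reply, attempt + 1
        # Repetition and patience: rephrase with tighter instructions.
        prompt = base_prompt + " Respond with exactly three bullet points, one per line."
    return reply, max_tries

reply, tries = ask_with_patience("List qualities of good communication.")
```

Here the vague first attempt fails the check, and the rephrased, more specific prompt succeeds on the second try, much like repeating a request to a toddler in clearer terms.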
To bring these abstract concepts into a more concrete light, consider a personal story from my own life. Years ago, I encountered a small mystery that perfectly encapsulates the challenges and strategies of communication, not just with toddlers, but with AI as well.
One day, we discovered the letter “L” scribbled on the wall near our stairs. One of my children, aged 4 at the time and whose name starts with “L,” was at the prime age for such artistic endeavors and thus became the prime suspect. I kept asking, “Did you do this?” and every time her answer was a firm “no,” delivered with convincing body language.
To be fair, she wasn’t the only suspect. There was another child (6 years older) in the house whose name began with “M,” and while the evidence seemed circumstantial, I did wonder, was it a hit job? A sibling rivalry setup? Maybe, but “M” was only 10…
In any case, the simplest explanation is often the correct one, but despite the apparent clarity of the situation, L’s steadfast denial started to sow a seed of doubt in my mind. Could I be wrong?
For over an hour, the dance of question and denial continued, her confidence unshaken. It wasn’t until persistence paid off that the truth came out, not through an admission of guilt, but through a slip that became an unwitting confession: “Nobody saw me.”
At that moment, the reality of the situation was laid bare and justice was served (as I tried to contain my laughter).
As I recalled this experience, I couldn’t help but make the connection between trying to understand and interact with not just children, but with AI systems as well. When interacting with either, it is important to be clear in context and persistent in communication, and sometimes one must resort to trickery to “jailbreak” the desired outcome.
The “L incident” mirrors the challenges faced when interacting with AI chatbots. Much like a toddler, an AI might not “admit” to misunderstanding a prompt or failing to provide the correct answer, not out of stubbornness, but because of its programmed reality. The key to breakthroughs in both scenarios lies in persistence and the ability to frame questions or prompts in a way that aligns with the listener’s understanding of reality.
Moreover, this story highlights a vital aspect of communication, whether with toddlers or AI: trust and interpretation. Just as a parent learns to read between the lines of their child’s responses, prompt engineers need to develop a keen sense for interpreting AI responses — recognizing when an AI’s “confidence” might be misplaced and requiring a different approach or additional context to guide it towards the desired outcome.
I was recently working on another blog post about the dangers of building data products without appropriate controls and governance, and as a Simpsons fan, I wanted to use the analogy of the overengineered car that Homer had designed for his brother’s company (this blog is coming soon!). In any case, I wanted to use Dall-E to create an image in the spirit of that car and ran into some obvious guardrails about content policy restrictions. Just as with my daughter, a little bit of persistence allowed me to “jailbreak” it and get the result I wanted.
For those who are interested, here are some highlights of that conversation and the results:
Interestingly, the guardrails applied to the whole idea, but Homer’s character slid past them… there was hope…
I tried to get fancy, but lost the character… I wondered if I could go back…
Ok, that was interesting… I got a character that was clearly in the style, but the colors were wrong…
Since I didn’t mention the Simpsons in the prompt, the guardrails didn’t seem to apply, but he didn’t quite look right… could we do better?
Bingo! The context stayed, but the guardrails didn’t get triggered! I spent a little more time playing to get a nicer picture for the final blog and eventually got something I liked; that image will appear in the future post.
In the end, whether coaxing a confession out of a child or seeking a precise answer from an AI, the lesson is clear: understanding and manipulating the context is paramount. For parents and AI enthusiasts alike, our experiences with toddlers offer invaluable lessons in patience, interpretation, and the subtle art of asking the right questions to get around blockades. As we navigate the complexities of human and artificial communication, these moments remind us that sometimes, the key to understanding lies in seeing the world through another’s eyes — even if those eyes belong to a toddler or an AI chatbot.
As we edge closer to the development of Artificial General Intelligence (AGI), the dynamics of our interaction with AI may evolve. However, for now, treating an AI chatbot with the patience and strategic communication used with toddlers can enhance our experience and effectiveness in prompt engineering. While we wait for a fully trustworthy AGI (and collectively take a seat since it’s going to be a while), it’s important to increase the trustworthiness of the responses. When using RAG (Retrieval Augmented Generation), it is possible to limit the responses to a set of known documents, but we can do better than that.
Implementing confirmation steps that show references and links from trusted sources (and not hallucinated ones) is important, but not always necessary; we should balance the complexity of the interaction with the importance of the result. After all, we don’t subject our kids to a polygraph when they tell us their imaginary friend went on an adventure with them; we hopefully know they didn’t leave their room overnight. Similarly, if the chatbot tells us that the list of best beaches in the world doesn’t include Flamenco Beach in Culebra, PR, some of us may be annoyed (and for good reason), but it’s just an opinion, and it’s OK for it to be wrong!
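The RAG idea described above can be sketched in a few lines: restrict answers to a small set of known documents and attach the source so the reader can verify it. The documents and the word-overlap scoring below are toy assumptions made so the example is self-contained; real systems use embeddings and a vector store for retrieval.

```python
# Minimal RAG-style sketch: answers are grounded in known documents and
# always carry a real (non-hallucinated) source reference, or the system
# admits it doesn't know. Documents and scoring are illustrative assumptions.

KNOWN_DOCS = {
    "beaches.txt": "Flamenco Beach in Culebra, PR is often ranked among the best beaches in the world.",
    "cars.txt": "Overengineered products tend to fail when requirements come from a single opinion.",
}

def retrieve(question):
    """Pick the known document sharing the most words with the question."""
    q_words = set(question.lower().split())
    best_doc, best_score = None, 0
    for name, text in KNOWN_DOCS.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best_doc, best_score = name, score
    return best_doc

def answer_with_reference(question):
    """Return grounded text plus its source, or admit there is no answer."""
    doc = retrieve(question)
    if doc is None:
        return "I don't know; that isn't in my documents."  # never invent a source
    return f"{KNOWN_DOCS[doc]} (source: {doc})"
```

The confirmation step here is the `(source: ...)` suffix: because the citation names an actual document the system retrieved, the reader can check the claim instead of taking the chatbot’s confidence at face value.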
The parallels between communicating with toddlers and AI chatbots offer a unique insight into how we might better interact with emerging technologies. Parents of toddlers, through their everyday experiences, may have unwittingly honed skills that make them adept at navigating the AI landscape. As we continue to explore and develop AI technologies, perhaps there are valuable lessons to be learned from the art of parenting.
In the end, whether dealing with a stubborn toddler or a stubborn chatbot, the key lies in understanding, patience, and the right approach to communication. And who knows? The parents among us might just be leading the way due to their experience with sentient but not always logical beings.