We open this edition of our bi-monthly AI roundup with a prompting question: can you name three AI prompting techniques that regularly improve the quality of AI responses?
With studies increasingly showing that prompts really matter for AI output (so much so that even AI developers themselves are unsure what their models are capable of), we’re also seeing some odd things, as covered in research by VMware authors Rick Battle and Teja Gollapudi.
Their study found that framing prompts as Star Trek scenarios (for sets of 50 math problems, for example: “Command, we need you to…” or asking the AI to start its response with “Captain’s Log, Stardate 2024…”) or as high-stakes thrillers (for sets of 100, such as: “The life of the president’s advisor hangs in the balance…”) consistently produced better results than the same prompts without those framings.
And no one is really sure why.
One theory is that, reflecting patterns in its training data, AI performs better when prompted with urgency or encouragement, which leads us to one of this week’s topics: the quirky benefits (versus real risks) of anthropomorphizing AIs.
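To make the technique concrete, here is a minimal sketch of prompt prefixing as described above. The preamble strings and the `build_prompt` helper are illustrative assumptions, not the exact prompts or code used by Battle and Gollapudi:

```python
# Illustrative sketch of prompt prefixing: prepend a role-play framing
# to an otherwise plain question before sending it to a model.
# These preambles are examples inspired by the study, not its exact prompts.

STAR_TREK_PREFIX = "Captain's Log, Stardate 2024: "
THRILLER_PREFIX = "The life of the president's advisor hangs in the balance... "

def build_prompt(question: str, prefix: str = "") -> str:
    """Return the question with an optional framing preamble prepended."""
    return f"{prefix}{question}"

plain = build_prompt("What is 17 * 24?")
trek = build_prompt("What is 17 * 24?", STAR_TREK_PREFIX)

print(plain)
print(trek)
```

The only change between the two prompts is the preamble; in the study, that small difference was enough to shift measured accuracy on math problem sets.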
In March and April, the first signs of an AI thaw on Wall Street appeared. If 2024 is truly the “show me the money” year for AI, that pressure may be mounting now, but it’s not shared by everyone.
Our focus in this edition: AI usage trends, leadership and regulation, innovations, and a look at the obvious cons (and some pros) of anthropomorphizing our AIs.
[Check out our prior AI roundup to catch up on events from January and February!]
A(n Updated) Look at Usage
Despite the massive number of people using AI every day around the world, updated US stats from the Pew Research Center give us some context: