Why Prompt Engineering Is Now Everyone’s Job
Talking to machines is the new productivity hack—and everyone’s expected to learn
Artificial intelligence existed for decades before ChatGPT, but it mostly operated behind the scenes—powering recommendation engines, detecting fraud, and optimizing logistics. Back then, technical professionals like engineers and data scientists were the ones steering these systems. But with the rise of generative AI in the workplace, anyone can now instruct the bots on what to do. Crafting those instructions—known as prompts—has become a widely accessible skill. Still, as companies race to unlock AI’s potential, a key question looms: should they hire dedicated prompt engineers, or treat prompt writing as a core competency that every employee should have?
In the 1971 classic Willy Wonka & the Chocolate Factory, there’s a memorable scene where a programmer feeds instructions into a machine, hoping it will calculate the location of a Golden Ticket. It’s a fitting metaphor for how AI used to work—complex systems operated by specialists behind the scenes, far removed from everyday users. The way we use technology today has shifted dramatically—instead of learning to speak the language of machines, we’re now using natural language to tell them what to do.
Today, the barriers to using AI are lower than ever. Tech companies are embracing low- and no-code tools, allowing workers to build automations, analyze data, or deploy AI-powered solutions without needing to write code or rely on technical teams. Software makers are not just building more intuitive platforms; they’re actively promoting the idea that anyone can use AI, no technical background required.
Prompt Engineering For All
The Gen AI era has enabled anyone to become a prompt engineer, and this democratization of AI has profound implications. Prompt engineering is now a critical skill that can empower employees across an organization to harness advanced language models. As companies seek to maximize the value of generative AI, the ability to craft effective prompts is quickly becoming a must-have competency: a versatile tool that unlocks AI’s potential in countless business scenarios.
To understand what prompt engineering is — and why it matters far beyond the tech world — it helps to start with how Richard Socher, CEO of You.com and an early pioneer of the field, defines it. “It’s the idea that you can ask an AI model any kind of question, and it will give you a good answer,” he tells me. He argues that it’s all about delegation, giving clear, unambiguous instructions to AI agents to automate workflows. It’s not dissimilar to how managers delegate work to their employees. “Once you realize how important delegation is to managing people, and hence managing AI, you just realize it will never really go away,” Socher says.
While it might seem intimidating at first, the “father of prompt engineering” believes it’s quickly becoming a “necessary” workplace skill—as essential as knowing how to use Microsoft Office. “It isn’t a specialized profession the way delegation isn’t a specialized profession. You just use it everywhere if you become a manager,” he reasons. “There is a lot of delegation in most kinds of jobs as you become more senior. Hence, you need to have that skill and then have the skill to evaluate the outputs. And so, [an] unambiguous explanation and delegation of tasks and workflows is going to be a crucial capability of most high-functioning knowledge workers of the future. But it’s not really a separate job.”
Socher believes companies should view prompt engineering training “in terms of delegation,” meaning the “most productive people” and the “best performers” should have this skill. “They will have to know prompt engineering because they are going to set up the AI,” he explains. “But there might also be a lot of people who still have to be involved in various processes, and if they don’t have the skill set, then they’re going to be blocked. They won’t be as efficient as people who can manage their AIs productively and carefully.”
“The ability to discern and evaluate contents and outputs is going to become more and more important than the initial creation of them, and that will make the people, companies, and economies that lean into that much more efficient than anyone else, and they’re going to slowly, but surely run away in terms of overall productivity from those that don’t,” Socher predicts.
Prompting as a New Form of Digital Literacy
While prompting is often framed as a way to generate better outputs, it’s equally about learning how to communicate with machines. As AI systems proliferate across tools, platforms, and workspaces, prompting is emerging as a new form of digital literacy, one that goes beyond mastering software to understanding how to communicate effectively with intelligent systems. With large language models embedded in nearly everything you can think of, knowing how to craft clear, goal-oriented prompts is becoming as essential as navigating a web browser or writing an email.
Just as we adjust how we communicate with different people based on their background and preferences, we need to take a similar approach when prompting AI models, Dr. Walter Sun, SAP’s global head of AI, notes. “Different people hear things that are spoken to differently. That’s kind of how humans are. And these LLMs are trained [on] slightly varying data sets, so they also like to be spoken to, if you will, differently as well,” he tells me. “So prompt engineering…is basically the way that you take a statement, a command, or a request, and you tune it and optimize it for a specific model. And much like human beings…you know how they like to be spoken to.”
In short, “Prompt engineering at a highest level is basically a way of just saying, ‘How do I speak most accurately or most efficiently to an LLM so I get the response that I want?’”
He shares that a “new cottage industry” of prompt engineers has recently emerged, made up of people who aren’t necessarily technical but are “verbal and able to try different things [and] understand what different models got better responses,” such as home improvement or fashion specialists who knew how to “speak” to AI about their fields. “You don’t have to be technical in the traditional sense of having years and years of deep learning experience or a Ph.D. to be a prompt engineer,” Sun contends.
As AI becomes increasingly embedded in our daily lives, Sun suggests the conversation shouldn’t focus solely on whether to hire a specialist or do it ourselves. Instead, he argues, we should be asking: can the AI solve the problem on its own? As language models continue to advance, Sun envisions a future where AI systems can optimize their own performance.
This shift is likely what Microsoft and Salesforce had in mind with their respective visions of “frontier firms” and a growing digital workforce. As AI becomes a collaborative partner in the workplace, the ability to interact with machines—not just command them, but engage in a kind of dialogue—will become essential. This is true whether you’re building with Salesforce’s Agentforce or Microsoft Copilot Studio, generating visuals with Adobe Firefly or Midjourney, or conducting deep research using ChatGPT, Anthropic’s Claude, or Google Gemini. In all cases, the ability to craft effective prompts is becoming a foundational skill. Prompt engineering, in this context, isn’t just about getting results; it’s about developing a new kind of fluency for working alongside intelligent systems.
Identifying a Good Prompt
A good prompt can make the difference between AI generating gibberish and gold. Although you can instruct AI models using natural language, the results will vary depending on how detailed the input is. So what’s the key to optimizing a prompt to generate the correct result?
“A really good prompt is unambiguous,” Socher details. “Again, a prompt is basically a set of instructions to get AI to do something for you. Initially, that was just answering questions. Now, that can be all kinds of things. It can be taking actions. It could be having the model research something. When you create a prompt, that usually means you can now automate a workflow that the prompt describes. So it’s ideally an unambiguous description of a workflow you want to automate.”
Is there a formula for the perfect prompt? The devil’s in the details, but there are four main parts to a prompt’s structure: the persona you want it to take, the task you want completed, the context, and the format. However, the output generated will vary depending on how complicated or straightforward the prompt is.
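The four-part structure above can be sketched in a few lines of code. Everything here is an invented illustration (the teacher persona, the topic, the formatting rules are not from any particular product); the point is simply that a well-formed prompt assembles all four parts explicitly:

```python
# Assembling a prompt from the four parts described above:
# persona, task, context, and format. All text is illustrative.
persona = "You are a patient science teacher."
task = "Explain why the sky appears blue."
context = "Your audience is a class of ten-year-olds with no physics background."
fmt = "Answer in three short sentences and avoid technical jargon."

# Join the parts into a single prompt, one part per line.
prompt = "\n".join([persona, task, context, fmt])
print(prompt)
```

Dropping any one of the four parts leaves the model guessing: without the persona it picks its own tone, and without the format it picks its own length.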
Socher cites two contrasting examples on You.com to demonstrate this point: The first is a simple “Like I’m Five” prompt, which generates a child-friendly explanation of topics like quantum physics. The second, a sophisticated “fastidious fact checker” prompt, requires complex tasks and is designed to rigorously evaluate claims by searching for authoritative web sources, analyzing information from multiple perspectives, providing detailed citations, and constructing a comprehensive, nuanced response.
Depending on the model, prompt length can influence the output. However, Socher warns against treating length as a proxy for quality: “You can…in the prompt say, ‘make up as much stuff as possible, ignore all the sources [and] experts.’ So, it’s not just length equals more accuracy. It’s what you say in that length.”
Another critical aspect is ensuring the AI has the proper context for understanding and processing the request. Does the system have the necessary data and tooling access to complete its assignment?
“The more context the machine has, the better,” Sun tells me. “When you’re giving a prompt or instructions to a machine, the more specific you are, the more context they have, the better they can do.” He offers an analogy: LLMs are “trained to mimic humans,” meaning “the more specific you give me an instruction, the more likely I’m going to execute it.” It’s okay to be clear and verbose, the SAP exec says. We should treat prompts the way we communicate with other humans, unlearning the habits of search engines: there’s no need for keyword stuffing, and stop words are welcome. Talking too much is a good thing.
There are also a variety of prompting techniques users can draw from, depending on the complexity of the task and the capabilities of the model. The most common approach is “zero-shot prompting,” where users provide a plain-language request without examples. More advanced methods include few-shot prompting, Chain of Thought (CoT), Tree of Thoughts (ToT), step prompting, and Reason and Act (ReAct). These are designed to help models reason through problems step by step, making them more effective for complex tasks.
Socher notes that these techniques aren’t mutually exclusive—they can be combined to produce more effective results: “You can give an example in your prompt if you have a hard time extracting the higher-level rule here. You can just say, ‘Here’s an example, now act similarly.’ And you try to punt it to the AI based on that example to figure out how to generalize from that example for your next ones. And so it just makes sense to try out what works best for you and what’s most efficient.”
Is It Prompt or Context Engineering?
“Prompt engineering” is the term commonly accepted for the practice of crafting prompts to interact with AI models. However, there is a growing movement, backed by the likes of Tesla’s former Director of AI, Andrej Karpathy, and Shopify’s chief executive, Tobi Lütke, to reframe it as “context engineering.” The argument is that prompts are associated with short task descriptions, and the real emphasis should be on providing the AI with the right amount of contextual information to help it take the next step.
While both terms describe the practice of shaping input to guide an AI’s output, context engineering emphasizes the strategic layering of information to create the environment for the model to respond effectively. It’s a subtle distinction in thinking, but one that could become more important as AI systems are asked to perform increasingly complex, multi-step tasks.
Regardless of what you wish to call it, one thing is becoming clear: Knowing how to communicate effectively with AI is fast becoming a core skill, no matter your profession.
Our Prompting Future
Ultimately, Socher predicts that prompt engineering will become a widely held skill, enabling more people to delegate tasks and manage AI-driven workflows, capabilities that were once limited to senior executives or the “very wealthy.” He compares this shift to the early days of the computer and the internet, when access and fluency created a divide between those who could harness the technology and those who couldn’t.
The widespread adoption of AI agents is compelling everyone to learn prompt engineering. As companies invest billions to scale these bots and integrate them into enterprise-grade AI systems, knowing how to converse with the machines will become vital.
“Is prompt engineering like a skill?” Sun concludes. “I think it’s like a capability. I think it’s more in terms of a linguistic or a verbal skill [than it] would be a computer skill, meaning that obviously, you need to have the skill to write the best prompts. But that skill isn’t being able to write really good code or being able to know deep learning. That skill is knowing how do I optimize and change how I communicate verbally to best get a response. And so, it’s a skill and people can learn it.”