In today’s column, I am continuing my ongoing coverage of prompt engineering strategies and tactics that aid in getting the most out of using generative AI apps such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc. The focus this time will be on a vital and systematic way to undertake prompting, referred to as using a Prompt Development Life Cycle (PDLC). This is reminiscent of programming or software engineering making use of a System Development Life Cycle (SDLC).
If you are interested in prompt engineering overall, you might find of interest my comprehensive guide on over fifty other keystone prompting strategies, see the discussion at the link here.
Prompting Off-The-Cuff Is The Norm
Most people tend to compose their prompts via an off-the-cuff sense of intuition.
It goes like this.
There you are, staring at the blank prompt-entry screen in your preferred generative AI app, and you have in mind something that you want to ask of the AI. You start to type the question that is vaguely in your head. Oops, you realize that you probably should provide some helpful context, or else the generative AI won’t realize the nature of the problem being asked.
Do you add the context to whatever you’ve already typed, or do you erase the so-far typed entry and start over?
This is the classic hunt-and-peck kind of prompt development process.
You didn’t especially prepare beforehand. The vague idea was to simply log into the generative AI app and start typing. One way or another, you anticipated that you would come up with a prompt that would get the job done. You might enter a prompt that turns out to be wimpy and have to redo it once you see the response generated by the AI. No big deal, you’ll just rewrite the prompt on the fly.
Now then, some angry pundits will excoriate you for being so lackadaisical. How dare you just come up with a prompt out of the blue. Step away from the AI and first get your act together, they would insist. Methodically and studiously prepare your prompts before you type a single word.
You see, I willingly acknowledge that there are times at which using generative AI via an off-the-cuff prompting approach is perfectly fine. More power to you. But I also urge that you should not merely be a one-trick pony. If you always act wantonly when doing your prompting, you are sadly and regrettably missing out on the full and vital value of using generative AI.
You ought to know how to go beyond the intuition method.
Four Foundational Methods To Prompting
Anyone who seriously considers themselves a prompt engineer should be comfortable with the full range of ways to compose prompts.
In the classes that I teach on prompt engineering, I note that there are four foundational methods:
- (1) Instinctual prompting. Composing prompts off-the-cuff and generally done in real-time as you see fit.
- (2) Mindful prompting. Thinking about a prompt before you write it, being mindful about what you want the prompt to say and do.
- (3) Planned prompting. Planning ahead of time about a prompt and series of prompts that you aim to compose, including writing some of them beforehand while others might be written in real-time based on an overarching predetermined structure.
- (4) Thorough prompting. The pinnacle of prompt development entails a type of life cycle formalization akin to a rigorous systems development life cycle (SDLC), often noted as the PDLC (prompt development life cycle).
I’ll be unpacking those for you as we go along.
This notion of composing prompts can be likened to writing computer programs. There are software developers and computer programmers who write their code by the seat of their pants. Others do so in a systematic and prescribed way. A well-balanced system developer knows the various approaches and chooses the right one for the circumstance at hand.
You might have heard of or maybe used an SDLC (systems development life cycle) if perchance you are someone who has developed new systems from scratch. Even if you’ve never been a developer, you might have been a user who was part of an effort that was undertaken to put together a new system. Perhaps the system developers showed you a series of steps or stages for the development effort. That is considered a kind of SDLC.
SDLCs are usually firmly stipulated as a series of phases, stages, or steps that are supposed to be undertaken when conceiving, designing, coding, testing, and fielding a new system. This is also sometimes referred to as a waterfall model. Typically, you move from one stage or step to the next, like water streaming downriver. In some situations, you might be more iterative and move through those stages quickly and cyclically, a precept underlying agile development.
An often-raised complaint about formal SDLCs is that they customarily assume that the system being devised is big and bulky. With all the paperwork involved, it seems hard to justify all that formalism if the system is small and streamlined at the get-go.
A tongue-in-cheek commentary about big-time SDLCs is that they are like an elephant gun and that it makes little sense to try and rid yourself of an ant with such a potent weapon. As a result, most well-written SDLCs have provisions for a shortened version of the approach, seeking to recognize that there will be times when a quicker method is suitable. Think of this accommodation as an ant-sized variation to match appropriately with targeted ants.
Formality With Aplomb
I bring this up because a question floating around the prompt engineering realm consists of whether using a prompt-oriented SDLC is like that proverbial elephant gun. Prompts are unlike coding in that you usually will only have a handful of prompts at a time, while coding can involve hundreds or thousands of lines of code.
Does the composing of prompts sensibly relate to writing code, or is the analogy overstretched, taking us down a wrongful path of trying to use an SDLC where it ought not to apply?
My answer is that you can indeed make use of a prompt-oriented SDLC, doing so when the situation warrants such an added capability. I loudly proclaim that you do not always need to employ that rigorous path. Again, please use the right tool for the right moment at hand.
What name or moniker should we use to refer to a prompt-oriented SDLC?
The field of prompt engineering is so new that there isn’t an across-the-board widely accepted standard on this. Here are some names that I’ve seen utilized:
- PE-SDLC. Prompt Engineering – System Development Life Cycle.
- PESDLC. Prompt Engineering System Development Life Cycle.
- PDLC. Prompt Development Life Cycle (this is the most common phrasing).
- PODLC. Prompt-Oriented Development Life Cycle.
- PLC. Prompting Life Cycle.
- Etc.
Various researchers and companies are coming up with recommended practices for prompt engineering and they at times specify a proprietary prompt-oriented SDLC. Make sure to look closely at whatever is stated in the methodology. Does it make sense to you? Is it easy to use or overly hard to use? Will the usage be beneficial to you or create undue hardship?
Use PDLC With Common Sense
If you see and fall in love with a PDLC, this does not mean that you must utterly forsake the hunch-based approach to prompting. I don’t want to sound like a broken record on this point, but it is something to keep at the forefront of your mind when composing prompts.
I am reminded of a famous line that I learned when I was starting my career years ago as an AI developer and professional software engineer. I at first wanted to ensure that my code was always of the most supreme quality. One day, we had an issue arise with one of the systems that our company was responsible for keeping up and running. I wanted to spend gobs of time composing just the most beautiful of code that would solve the problem.
Meanwhile, the system was down. Users were going berserk. Bells were going off. Upper management was freaking out. It was a debacle of the first order.
A senior software developer ran up to me, saw what I was laboriously working on, and asked me what the heck I was doing. When I tried to explain that my approach to development was of the highest order, he shook his head, chuckled, and whipped out some spaghetti code that looked ugly but worked. I’ll never forget what he said next. He looked over his glasses at me and, in a stern voice, declared: “There isn’t style in a knife fight.”
Sometimes you need to do whatever you need to do to survive.
That tale of sage wisdom applies to prompting too.
There are times when composing a prompt by whim is perfectly fine. Go for it. There are other times when you ought to set aside the kneejerk approach and be more thoughtful in your composing of a prompt.
Graduated Nature Of Approaches
My four keystone approaches identified above are of a graduated nature.
The first one is the most ad hoc, labeled as Instinctual Prompting. The last one is the most rigorous, which I simply refer to as Thorough Prompting. In between are those that rise from ad hoc to rigorous, namely Mindful Prompting and Planned Prompting. This overall marks a series of graduated approaches that provide increasing levels of dedication when devising prompts.
What is your assignment here?
You should ultimately be comfortable with everything from the ad hoc approach to the most defined and stringent approach. Once you sincerely have all of them under your belt, use the right one at the right time.
There is another famous line that you probably know and that befits this situation. They say that if all you have is a hammer, everything looks like a nail. Sometimes a screwdriver is a much better choice than a hammer. Prompt engineers need to be familiar with the distinctive prompt development approaches and use each as befits the moment at hand.
Digging Into The Determining Factors
How will you know which circumstance befits which one of the four approaches?
I’m glad you asked.
Consider these five key elements:
- (1) Prompt Purpose
- (2) Prompt Limitations
- (3) Prompt Frequency
- (4) Prompt Intended Type
- (5) Prompt Development Method
Let’s briefly explore each of those key elements.
First, you need to consider the purpose of a potential prompt. I’ve previously discussed that most of the time you should have an end goal in mind when using generative AI, see the link here. Are you trying to answer specific questions? Are you trying to solve a particular problem? Are you rummaging around seeking new ideas?
All in all, a session or conversation with generative AI should have a purpose and the aim then is to craft prompts that sufficiently and suitably serve that purpose.
I like to conceive of a “prompt purpose” as consisting of three levels:
- (1) Prompt Purpose
- Level 1. Messing around.
- Level 2. Trying to get a good answer or response.
- Level 3. Seriously delving into something important.
Another factor that plays into determining the right circumstances involves various limitations that you face when using generative AI.
Here’s a short list of some typical limitations:
- (2) Prompt Limitations
- Limit 1. Time to devise a prompt.
- Limit 2. Cost to devise a prompt.
- Limit 3. Time to run a prompt and get a result.
- Limit 4. Cost to run a prompt and get a result.
- Limit 5. Quality of response or result.
As stated above, one limitation is the time that it would take for you to come up with a prompt (Limit 1). If you are in a hurry and need to move quickly, you might not be willing to spend much time thinking about and composing a prompt. There might also be a cost with devising a prompt (Limit 2), such as if you have to do some research or pay someone else to first identify salient facets for the prompt.
I would dare say that the most looming limits are those involving the time to run the prompt (Limit 3) and the charged cost for doing so (Limit 4). Some generative AI apps can be used for free, ergo you probably don’t care how long it takes to run a prompt nor care about the cost since there isn’t a charge or fee involved. But for those using generative AI apps that do charge a fee, you likely need to be mindful of your spending.
There are ways to compose a prompt such that it will run fast. You can also mindlessly word a prompt that turns out to run quite slowly. Besides a delay in getting your response, this is undoubtedly chewing up processing cycles on the server that is running the generative AI. Prompt wording can either pinch pennies or throw pennies away.
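To make the time-and-cost point concrete, here is a minimal sketch of a back-of-the-envelope prompt cost estimator. The four-characters-per-token heuristic and the sample per-token rate are illustrative assumptions of mine, not any vendor's actual tokenizer or pricing:

```python
# Rough prompt-cost estimator -- a sketch, not any vendor's real pricing.
# The 4-characters-per-token heuristic and the sample rate below are
# illustrative assumptions for discussion purposes only.

def estimate_tokens(prompt: str) -> int:
    """Very rough token estimate: roughly 4 characters per token in English."""
    return max(1, len(prompt) // 4)

def estimate_cost(prompt: str, expected_reply_tokens: int,
                  price_per_1k_tokens: float = 0.01) -> float:
    """Estimate the fee for one prompt/response round trip."""
    total_tokens = estimate_tokens(prompt) + expected_reply_tokens
    return total_tokens / 1000 * price_per_1k_tokens

verbose = ("I was wondering, if it is not too much trouble, whether you could "
           "possibly tell me, in as much detail as you see fit, everything "
           "about the history of water rights law in California.")
terse = "Summarize California water rights law in 200 words."

print(f"Verbose prompt: ~{estimate_tokens(verbose)} tokens")
print(f"Terse prompt:   ~{estimate_tokens(terse)} tokens")
```

The wording alone changes the token count, and thus the fee, before the AI has done any work at all.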
Perhaps the most apparent limit is the quality of the response or result (Limit 5).
Here’s what a lot of people do. They hurriedly word a prompt. They hit return. A result comes back that is far afield of what they were hoping to get. They reword the prompt. A result comes back that still isn’t hitting the mark. On and on this goes. The quality of the responses is not necessarily the fault of the AI. It might well be due to a lousy prompt or a crummy series of prompts.
That might be okay with you if you have plenty of personal time to spend. But if you are paying for the cost of using the generative AI, you can readily rack up a lot of charges through this mindless repetitive effort. The odds are that if you had thought carefully beforehand about the prompt, you could get it right either by the first shot or maybe by the second shot. No need for foolishly wasting precious coinage on chicken scratching.
More Factors Involved And An Example
The next factor to consider consists of whether the prompt is going to be used on a one-time-only basis or whether you might want to use the prompt again. That is the frequency associated with the prompt.
I construe this as follows:
- (3) Prompt Frequency
- Frequency 1. One-Time Only.
- Frequency 2. Repeated Usage.
- Frequency 3. Reusable Template.
A rule of thumb is that if a prompt is likely to be used now and then again later on, or possibly be melded into a reusable template, the upfront effort to compose the prompt is warranted. This does not suggest that a one-time prompt doesn’t also deserve upfront thoughtfulness. It can. The other factors will certainly aid in determining that point.
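The reusable-template notion (Frequency 3) can be sketched in code. This is a hypothetical illustration; the template wording and the `build_prompt` helper are my own invention, not a standard prompt-engineering API:

```python
# A reusable prompt template -- a hypothetical sketch. The template text and
# the build_prompt helper are illustrative, not a standard library API.
from string import Template

# Author the prompt once, with named placeholders for the parts that vary.
REVIEW_TEMPLATE = Template(
    "You are an experienced $role. Review the following $artifact and list "
    "the three most important problems, ordered by severity:\n\n$content"
)

def build_prompt(role: str, artifact: str, content: str) -> str:
    """Fill the template so later users never have to re-craft the wording."""
    return REVIEW_TEMPLATE.substitute(role=role, artifact=artifact,
                                      content=content)

prompt = build_prompt(
    role="contract attorney",
    artifact="vendor agreement",
    content="The vendor shall deliver the goods at some point...",
)
print(prompt)
```

The upfront effort of wording the template carefully is paid back every time someone reuses it with new placeholder values.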
Speaking of templates, a prompt can have a variety of intended types.
Here’s my list of intended types:
- (4) Prompt Intended Type
- Type 1. Template usage.
- Type 2. Example usage.
- Type 3. AI devises for you.
- Type 4. Devise from scratch.
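One way to see how these factors combine is a toy decision helper. The mapping from factor levels to the four approaches below is my own illustrative heuristic, not part of any formal PDLC standard:

```python
# Toy helper mapping factor levels to one of the four prompting approaches.
# The mapping is an illustrative heuristic of my own, not a formal standard.

def pick_approach(purpose_level: int, reusable: bool,
                  cost_sensitive: bool) -> str:
    """purpose_level: 1=messing around, 2=good answer, 3=seriously important."""
    if purpose_level == 1 and not reusable:
        return "Instinctual prompting"
    if purpose_level == 2 and not reusable:
        return "Mindful prompting"
    if reusable or cost_sensitive:
        if purpose_level == 3:
            return "Thorough prompting (full PDLC)"
        return "Planned prompting"
    return "Mindful prompting"

# An important question, destined to become a reusable template, under a
# usage cap -- the heuristic steers toward the rigorous end of the scale.
print(pick_approach(purpose_level=3, reusable=True, cost_sensitive=True))
```

A crude helper like this merely dramatizes the judgment call; in practice you weigh the factors in your head rather than in code.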
Those Prompt Intended Types weave into the mosaic that is the intricate set of factors to consider when composing a prompt.
Suppose I was trying to compose a prompt to get generative AI to come up with an answer to a tough and crucial question. I aim to turn the prompt into a reusable template so that other people can later leverage my prompt for similar problem-solving. The generative AI app that I am using has a weekly cap on how much usage I am allowed. If I go over the cap, I start paying an overage fee.
Which of the approaches should I use?
Let’s assess the factors and see how they aid our decision. I will put each selection in bold and include an arrow to showcase the selected consideration based on the end goal of this prompt.
- (1) Prompt Purpose
- Level 1. Messing around.
- Level 2. Trying to get a good answer or response.
- -> Level 3. Seriously delving into something important.
- (2) Prompt Limitations
- Limit 1.