Agentic Workflows are a hot topic in Artificial Intelligence right now. These workflows are really just different approaches to getting from a problem to a solution.
Chain Prompts – Tell the LLM the first thing to do, then the next thing to do, and so on. Later prompts can use the answers from earlier prompts to help find their own answers. This is a good approach when you want to direct the research toward a specific outcome.
Parallel Prompts – This breaks the problem up into independent parts and runs them all at the same time. Because every part runs simultaneously, no part can use the answer from another.
Tools – This allows the client to hand the AI tools that supply information it doesn’t already have. This is good for expanding the abilities of the LLM and giving it new information it wasn’t trained on.
Evaluator/Optimizer – This allows the LLM to evaluate whether its answer is a good one. When you ask a coding agent for code and it gives you code that is incomplete, you can ask it whether that code satisfies your requirements and let it decide if it did a good job.
Pros and Cons:
Chain Prompts – The advantage is that not only is it easy to do, but you can direct the process and get outputs formatted to your liking. The con is that it tends to use the same model for every step, and sending the same context in and getting it back out at each step can get expensive.
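To make the chaining concrete, here is a minimal sketch in Python. The call_llm function and the three research steps are hypothetical stand-ins, not a real client library; wire call_llm to whatever provider you actually use.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with your real LLM client call."""
    return f"(model reply to: {prompt[:40]}...)"

def chained_research(topic: str) -> str:
    # Step 1: tell it the first thing to do.
    facts = call_llm(f"List the key facts about {topic}.")
    # Step 2: the next prompt reuses the earlier answer.
    outline = call_llm(f"Organize these facts into an outline:\n{facts}")
    # Step 3: direct the final step toward the output format you want.
    return call_llm(f"Write a short report from this outline:\n{outline}")
```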
Parallel Prompts – These can use many short inputs to get to the answers, which can save a LOT of time. However, not all problems can be parallelized, because some simply have no good way to be divided up.
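Here is what that can look like in Python, again with call_llm as a hypothetical stand-in. A thread pool fires off all the independent prompts at once and collects the answers:

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with your real LLM client call."""
    return f"(model reply to: {prompt[:40]}...)"

def parallel_research(questions: list[str]) -> list[str]:
    # The questions must be independent: no prompt here can see
    # another prompt's answer, because they all run at the same time.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(call_llm, questions))

answers = parallel_research([
    "Summarize the Q1 sales data.",
    "Summarize the Q2 sales data.",
    "Summarize the Q3 sales data.",
])
```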
Tools – If you have specialized data (a database or live API) that your LLM isn’t trained on, you can add an interface to that information so the LLM knows to query your data and use it to create answers. The limitation is that you need agentic programming to integrate your private data source, which opens up data-security questions. These can be ameliorated with a local LLM, if you’re willing to pay the cost of hosting and maintaining it.
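A minimal sketch of the idea, assuming the same hypothetical call_llm client, a made-up lookup_order function standing in for your private data source, and a simple JSON convention for tool requests (real providers have their own tool-calling formats):

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with your real LLM client call."""
    return f"(model reply to: {prompt[:40]}...)"

def lookup_order(order_id: str) -> str:
    # Made-up private data source the model was never trained on.
    return json.dumps({"order_id": order_id, "status": "shipped"})

TOOLS = {"lookup_order": lookup_order}

def answer_with_tools(question: str) -> str:
    reply = call_llm(
        'Answer the question, or reply with JSON {"tool": ..., "arg": ...} '
        f"to query one of these tools: {list(TOOLS)}.\nQuestion: {question}"
    )
    try:
        request = json.loads(reply)   # the model asked for a tool
    except ValueError:
        return reply                  # the model answered directly
    result = TOOLS[request["tool"]](request["arg"])
    return call_llm(f"Using this data: {result}\nAnswer the question: {question}")
```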
Evaluator/Optimizer – If you’ve ever asked an LLM a question, gotten a partial (or incorrect) answer, and then had to figure that out YOURSELF and ask for a better answer, you’ll appreciate the evaluator/optimizer workflow. In essence, you ask the LLM a question, and it checks its own answer to see whether it got it right. With this model, you’re using the LLM multiple times: to do something, to check whether it’s right, and to optimize (or correct) the answer. It takes your participation out of the equation and makes the computers do all the work. The pro is that you can get highly optimized (correct) answers to your queries. But if you make the computers do all the work, you pay for all the work they do.
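A sketch of that loop in Python, once more assuming the hypothetical call_llm client. The model answers, grades its own answer against the task, and retries until it passes or hits a round limit:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with your real LLM client call."""
    return f"(model reply to: {prompt[:40]}...)"

def solve_check_optimize(task: str, max_rounds: int = 3) -> str:
    answer = call_llm(task)
    for _ in range(max_rounds):
        # Evaluator step: the model grades its own answer.
        verdict = call_llm(
            f"Task: {task}\nAnswer: {answer}\n"
            "Does this answer fully satisfy the task? "
            "Reply PASS, or explain exactly what is missing."
        )
        if verdict.strip().startswith("PASS"):
            break
        # Optimizer step: feed the critique back in and retry.
        # Every round here is another model call you pay for.
        answer = call_llm(
            f"Task: {task}\nPrevious answer: {answer}\n"
            f"Critique: {verdict}\nProduce an improved answer."
        )
    return answer
```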
As you can see, there are several ways to solve your problems with workflows. The decision between them depends on your situation, your resources, and your constraints. If you’re stuck in one line of thinking, take a look at the other workflows and see if your problem can take a different approach.
