TaskGen: A Task-Based Agentic Framework with StrictJSON Outputs for LLMs

December 16, 2024
5 min read

The best time to start with AI was yesterday. The second best time? Right after reading this post. The fastest way? Gen AI Launch Pad’s 6-week transformation.

Join Build Fast with AI’s Gen AI Launch Pad 2025 - your accelerated path to mastering AI tools and building revolutionary applications.

------------------------

What is TaskGen?

TaskGen is an open-source, task-based agentic framework designed for streamlined and efficient task automation. It leverages:

  • StrictJSON Outputs: Ensuring structured and predictable outputs from LLMs.
  • Chain of Thought Reasoning: Enabling logical step-by-step problem-solving.
  • Shared Variables: For seamless communication between different agents or tasks.
  • Retrieval-Augmented Generation (RAG): Incorporating external knowledge retrieval.
  • Asynchronous Capabilities: To handle multiple tasks concurrently.

Why Use TaskGen?

  • Enhanced Automation: Automates complex workflows with minimal intervention.
  • Flexibility: Build agents tailored to specific tasks.
  • Scalability: Manage subtasks and dependencies efficiently.
  • Consistency: StrictJSON ensures outputs are always in the expected format, reducing errors.
  • Extensibility: Integrates easily with other tools and APIs.

TaskGen is particularly useful in industries such as:

  • Customer Support: Automating responses and managing support tickets.
  • Healthcare: Assisting with diagnostics and patient interactions.
  • Education: Personalized tutoring agents.
  • E-commerce: Managing orders, inventory, and customer inquiries.

Setup and Installation

Let's get started by setting up the required libraries and configurations.

Install Required Libraries

To use TaskGen, you need to install the necessary libraries. You can do this via pip:

!pip install taskgen-ai openai

If you're working in a virtual environment, make sure to activate it before installing the libraries.

Configure OpenAI API

You'll need your OpenAI API key to use LLM models within TaskGen. Here's how to configure it:

import os
from taskgen import *
from google.colab import userdata

# Set your OpenAI API key
os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
OPENAIKEY = os.getenv('OPENAI_API_KEY')

If you're not using Google Colab, you can manually set your API key like this:

os.environ['OPENAI_API_KEY'] = 'your-api-key-here'

Defining a Custom LLM Function

You can define a custom LLM function to integrate OpenAI's models with TaskGen. Here's a sample configuration:

def llm(system_prompt: str, user_prompt: str) -> str:
    from openai import OpenAI
    
    client = OpenAI(api_key=OPENAIKEY)
    response = client.chat.completions.create(
        model='gpt-4o-mini',
        temperature=0,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt}
        ]
    )
    return response.choices[0].message.content

This function sends prompts to OpenAI's GPT-4o-mini model and returns the model's response.

Understanding the Parameters

  • system_prompt: Provides context or instructions to the model.
  • user_prompt: The user's input or question.
  • temperature: Controls the randomness of the output (0 for deterministic results).

You can customize this function to use other models, such as gpt-4o or gpt-3.5-turbo, by changing the model parameter.

Example: Building a Psychology Counsellor Agent

TaskGen allows you to create specialized agents. Let's build a Psychology Counsellor Agent that understands and responds to user emotions.

Step 1: Define the Agent

agent = Agent(
    'Psychology counsellor',
    "Helps to understand and respond to User's emotion and situation. Reply based on User Requests for the Conversation",
    llm=llm
)

Here, the agent is defined with a specific role and a description of its purpose.

Step 2: Add Conversation Memory

To maintain context, use a ConversationWrapper with persistent memory:

my_agent = ConversationWrapper(
    agent,
    persistent_memory={
        'User Requests for the Conversation': '',
        'User Emotion': '',
        'Summary of Key Incidents': "Key incidents relevant to understanding User's situation in one line"
    }
)

Step 3: Set Up the Conversation Loop

Now, create a loop for user interaction:

while True:
    user_input = input("User: ")
    if user_input.lower() in ["exit", "quit"]:
        break
    response = my_agent.run(user_input)
    print("Counsellor:", response)

This loop allows continuous interaction until the user decides to quit.

Key Features of TaskGen

1. StrictJSON Outputs

StrictJSON ensures the outputs from the LLM are always structured and predictable. This is particularly useful for:

  • APIs: Ensuring consistent data formats.
  • Data Pipelines: Automating data processing with reliable outputs.
  • Automation Workflows: Reducing errors in automated tasks.

Example of StrictJSON in Action

from taskgen import strict_json

response = strict_json(
    system_prompt='Analyze the sentiment of the following review.',
    user_prompt='The product is amazing! It exceeded all my expectations.',
    output_format={'Sentiment': 'Positive, Negative or Neutral'},
    llm=llm
)
print(response)

2. Chain of Thought (CoT) Reasoning

CoT reasoning allows the agent to break down complex problems into logical steps. This improves accuracy and clarity, especially in tasks like:

  • Problem-Solving: Step-by-step solutions.
  • Diagnostics: Breaking down medical symptoms.
  • Decision-Making: Justifying decisions.
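One common way to encourage this behaviour is through the system prompt. The sketch below shows a hypothetical helper (build_cot_prompt is illustrative, not part of TaskGen's API) that asks the model to reason in numbered steps before answering:

```python
# Hypothetical helper for encouraging chain-of-thought reasoning
# via the system prompt. Not part of TaskGen's API.
def build_cot_prompt(task_description: str) -> str:
    """Compose a system prompt asking the model to reason step by step."""
    return (
        "You are a careful assistant. "
        "Break the problem into numbered steps, reason through each step, "
        f"then give a final answer.\nTask: {task_description}"
    )

prompt = build_cot_prompt("A train travels 120 km in 2 hours. What is its average speed?")
# The prompt could then be passed to the llm() helper defined earlier:
# answer = llm(prompt, "Solve the task.")
```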

3. Shared Variables

Shared variables enable agents to pass information between tasks. This is useful in multi-step workflows where context needs to be maintained.
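Conceptually, a shared variable is a mutable store that travels with the workflow, so later steps can read what earlier steps wrote. The sketch below illustrates the idea in plain Python; the function names are illustrative, not TaskGen API:

```python
# Conceptual sketch of shared variables: a dict passed between task steps.
def extract_order_id(text: str, shared: dict) -> None:
    """First step: pull the order ID out of the message and store it."""
    shared['order_id'] = text.split('#')[-1].strip()

def draft_reply(shared: dict) -> str:
    """Second step: use the stored order ID to compose a response."""
    return f"We are looking into order {shared['order_id']} and will update you soon."

shared_vars = {}
extract_order_id("Customer complaint about order #A1023", shared_vars)
reply = draft_reply(shared_vars)
print(reply)
```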

4. Retrieval-Augmented Generation (RAG)

RAG allows the agent to retrieve external information and incorporate it into responses. This is particularly powerful for tasks that require up-to-date knowledge.
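As a minimal illustration of the retrieval step, the sketch below does keyword-overlap retrieval over a tiny in-memory corpus and injects the best match into the prompt. A real setup would use embeddings and a vector store; the documents and helper names here are illustrative only:

```python
# Minimal RAG sketch: keyword-based retrieval over a tiny corpus.
documents = [
    "TaskGen returns structured StrictJSON outputs.",
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping typically takes 3-5 business days.",
]

def retrieve(query: str, docs: list) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_rag_prompt(query: str) -> str:
    """Inject the retrieved passage into the prompt as context."""
    context = retrieve(query, documents)
    return f"Context: {context}\nQuestion: {query}"

prompt = build_rag_prompt("What is the refund policy?")
# prompt could then be sent via the llm() helper defined earlier
```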

5. Async Capabilities

Async capabilities let you run multiple tasks concurrently, improving efficiency in:

  • Parallel Processing: Handling multiple requests at once.
  • Batch Operations: Processing large datasets.
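The concurrency pattern can be sketched with asyncio. In practice each coroutine would await a real LLM call; here asyncio.sleep stands in for network latency, and the request names are illustrative:

```python
import asyncio

# Conceptual sketch of running multiple tasks concurrently with asyncio.
async def handle_request(name: str) -> str:
    await asyncio.sleep(0.1)  # simulate an LLM call's latency
    return f"processed {name}"

async def main() -> list:
    # gather runs all three requests concurrently and preserves order
    return await asyncio.gather(
        handle_request("ticket-1"),
        handle_request("ticket-2"),
        handle_request("ticket-3"),
    )

results = asyncio.run(main())
print(results)
```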

Advanced Configurations and Troubleshooting

Customizing LLM Parameters

You can fine-tune the LLM parameters to suit your needs. For example:

from openai import OpenAI

client = OpenAI(api_key=OPENAIKEY)
response = client.chat.completions.create(
    model='gpt-4',
    temperature=0.7,  # higher value for more creative responses
    max_tokens=200,   # limit the response length
    messages=[{"role": "user", "content": "Suggest a name for a new AI product."}]
)

Common Issues and Solutions

  1. API Key Errors: Ensure your API key is set correctly.
  2. Rate Limits: Check your OpenAI usage and manage requests accordingly.
  3. JSON Parsing Errors: Validate the output format to match StrictJSON.
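For the third issue, a simple guard is to validate the raw model output before using it. The helper below (parse_strict_output is a hypothetical name, not TaskGen API) checks that the output parses as JSON and contains the expected keys:

```python
import json

# Hypothetical guard against JSON parsing errors in model output.
def parse_strict_output(raw: str, required_keys: list) -> dict:
    """Parse raw model output as JSON and verify the expected keys exist."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        raise ValueError(f"Model output is not valid JSON: {err}") from err
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"Missing expected keys: {missing}")
    return data

result = parse_strict_output('{"Sentiment": "Positive"}', ['Sentiment'])
print(result)
```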


Conclusion

TaskGen offers a powerful framework for building task-based agents that can manage complex workflows efficiently. With features like StrictJSON outputs, shared variables, and async capabilities, TaskGen is ideal for automation projects requiring precise and reliable task management.

Ready to Build Your Own Agents?

Start exploring TaskGen today and revolutionize your task automation workflows!

For more information, check out the TaskGen GitHub Repository.


Resources

  • TaskGen GitHub Repository
  • Build Fast With AI TaskGen Github Repository
  • OpenAI API Documentation

--------------

Stay Updated: Follow Build Fast with AI pages for all the latest AI updates and resources.

Experts predict 2025 will be the defining year for Gen AI implementation. Want to be ahead of the curve?



