
Building chatbot agent to streamline project management

Umbrella is a project management platform tailored for athletes, musicians, creatives, and businesses alike, bringing everything from team collaboration to secure document sharing under one roof. As our user base and the volume of projects and tasks they manage grew, we saw an opportunity to use generative AI to enhance the platform's capabilities and streamline project management workflows.

The challenge was to natively integrate a generative AI chatbot that could assist users in brainstorming ideas, generating project proposals, and performing tasks directly within the chat interface. By enabling users to seamlessly switch between research, ideation, and execution, we aimed to boost productivity and simplify project management.

Implementing the chatbot agent involved key technical domains such as developing an interface to communicate with external AI platforms like OpenAI, creating an agentic system to interpret and execute user requests, and setting up usage monitoring to control AI token consumption and track chatbot performance.
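The usage-monitoring piece can be illustrated with a minimal sketch. The class and field names below are hypothetical, as is the idea of a fixed per-user token quota; the point is simply that every call's prompt and completion tokens are recorded per user and checked against a budget before further chatbot calls are allowed.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UsageMonitor:
    """Tracks per-user LLM token consumption against a quota (illustrative only)."""
    quota: int = 100_000                      # hypothetical token budget per user
    used: dict = field(default_factory=lambda: defaultdict(int))

    def record(self, user_id: str, prompt_tokens: int, completion_tokens: int) -> None:
        # Both sides of the exchange count toward the budget.
        self.used[user_id] += prompt_tokens + completion_tokens

    def allow(self, user_id: str) -> bool:
        """Gate further chatbot calls once the user's budget is exhausted."""
        return self.used[user_id] < self.quota

monitor = UsageMonitor(quota=1_000)
monitor.record("alice", prompt_tokens=400, completion_tokens=300)
print(monitor.allow("alice"))   # still under budget
monitor.record("alice", prompt_tokens=250, completion_tokens=100)
print(monitor.allow("alice"))   # budget exceeded, further calls blocked
```

In production the same counters would also feed dashboards for tracking chatbot performance, not just quota enforcement.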

System requirements

Business requirements

Technical requirements

Scalability

Reliability

Security

Integration

Architecture overview

System components

Supervisor

Worker agents

Load balancer

LLM providers

Monitoring & logging

Database

The data flows from the user to the Supervisor, which routes the request to the appropriate worker agent. The worker agent processes the request, interacting with the necessary tools and the database, and generates a response. The response is then returned to the Supervisor and finally to the user.
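The Supervisor's routing step can be sketched in plain Python. Keyword matching here is a stand-in for the real decision, which is made by an LLM; the agent names are hypothetical.

```python
# Illustrative sketch of the Supervisor's routing step. Names are hypothetical,
# and keyword matching stands in for the LLM-based routing used in production.
WORKER_AGENTS = {
    "task": "task_agent",          # create/update tasks
    "proposal": "proposal_agent",  # draft project proposals
}
DEFAULT_AGENT = "chat_agent"       # general brainstorming / Q&A

def route(query: str) -> str:
    """Pick the worker agent whose domain keyword appears in the query."""
    lowered = query.lower()
    for keyword, agent in WORKER_AGENTS.items():
        if keyword in lowered:
            return agent
    return DEFAULT_AGENT

print(route("Create a task for the design review"))  # task_agent
print(route("What should we name the album?"))       # chat_agent
```

The value of keeping routing in one place is that adding a new worker agent only extends the routing table, leaving existing agents untouched.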

Technical implementation

Core workflows

sequenceDiagram
    participant User
    participant Supervisor
    participant Agent
    participant Tool
    participant MongoDB

    User ->> Supervisor: Send query
    Supervisor ->> Supervisor: Select appropriate agent based on user query
    Supervisor ->> Agent: Send user query with system prompt to matched agent
    Agent ->> Agent: Determine if tool usage is needed based on input
    alt Need to call tool
        Agent ->> Tool: Call tool to process user query
        Tool ->> MongoDB: Set/Get data
        MongoDB ->> Tool: Response
        Tool ->> Agent: Return processed data
        Agent ->> Agent: Generate response based on tool output
    else No need to call tool
        Agent ->> Agent: Generate response directly
    end
    Agent ->> Supervisor: Return user response
    Supervisor ->> User: Return user response

The workflow diagram illustrates the core interaction between the user, Supervisor, worker agents, tools, and the database. The Supervisor analyzes the user's query and routes it to the appropriate worker agent. The worker agent determines whether tool usage is necessary and generates a response, either from the tool's output or directly, depending on the query. The response is then returned to the user via the Supervisor.
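The alt branch in the diagram can be sketched without any framework. Everything here is illustrative: the in-memory dict stands in for MongoDB, and the string-prefix check stands in for the LLM deciding, via function calling, whether a tool is needed.

```python
# Framework-free sketch of the agent branch: call a tool (which touches the
# database) or answer directly. The heuristic and names are illustrative;
# in production the LLM makes this decision via function calling.
db: dict[str, list[str]] = {"tasks": []}   # stand-in for MongoDB

def create_task_tool(title: str) -> str:
    db["tasks"].append(title)              # Tool ->> MongoDB: Set data
    return f"Task '{title}' created"

def run_agent(query: str) -> str:
    if query.lower().startswith("create task:"):        # "need to call tool" branch
        title = query.split(":", 1)[1].strip()
        tool_output = create_task_tool(title)
        return f"Done. {tool_output}."                  # response from tool output
    return f"(answer generated directly for: {query})"  # "no tool" branch

print(run_agent("Create task: Book studio time"))
print(run_agent("Any ideas for the launch event?"))
```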

Technical challenges & solutions

Managing long conversation threads

The UI does not split conversations into separate threads, so long-running chats could exceed the LLM's context limit. To address this, we implemented a cronjob that runs every minute to close any thread whose last message is more than 10 minutes old, and we limited the history sent to the model to the last 25 messages. This prevented context-limit errors and kept conversations manageable.
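The two safeguards could look roughly like this. The data shapes are hypothetical, and the scheduling itself (the per-minute cron trigger) is assumed to live outside this code.

```python
from datetime import datetime, timedelta, timezone

IDLE_LIMIT = timedelta(minutes=10)   # threads idle longer than this are closed
MAX_HISTORY = 25                     # messages sent to the LLM as context

def close_stale_threads(threads: list[dict], now: datetime) -> None:
    """Body of the per-minute cronjob (illustrative structure)."""
    for thread in threads:
        if thread["open"] and now - thread["last_message_at"] > IDLE_LIMIT:
            thread["open"] = False

def context_window(messages: list[str]) -> list[str]:
    """Only the most recent MAX_HISTORY messages reach the model."""
    return messages[-MAX_HISTORY:]

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
threads = [
    {"open": True, "last_message_at": now - timedelta(minutes=15)},  # stale
    {"open": True, "last_message_at": now - timedelta(minutes=3)},   # active
]
close_stale_threads(threads, now)
print([t["open"] for t in threads])                       # [False, True]
print(len(context_window([f"m{i}" for i in range(40)])))  # 25
```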

Maintaining chatbot accuracy/performance while adding functions

As the number of function modules increased, the chatbot's scope expanded, leading to longer system prompts, increased hallucination, reduced accuracy, and a harder-to-maintain codebase. To overcome this, we implemented a supervisor-worker pattern using LangGraph, a library from the LangChain ecosystem, to build a multi-agent AI system. By dividing the AI workload among multiple agents and using a supervisor to orchestrate and route tasks, we reduced hallucination, kept accuracy stable, and improved codebase maintainability even as new chatbot functions were added.

Widget-based display

To address the need for displaying custom UI elements instead of text-only responses in chatbot conversations, we configured the chatbot to respond with HTML widget strings, allowing the frontend to render custom UI elements within the chat. For example, when a user asks to create a task, the chatbot generates an HTML widget string that the frontend renders as a polished UI card containing the relevant task information and a link to the task detail. This enhanced chatbot responses with visually appealing and informative custom UI blocks, improving user experience and comprehension.
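A task-card widget string might be assembled like this. The class names and markup structure are hypothetical; the real widget contract is defined by the frontend. Escaping user-provided values matters here, since the string is rendered as HTML.

```python
from html import escape

def task_widget(title: str, assignee: str, url: str) -> str:
    """Build an HTML widget string for the chatbot to return instead of
    plain text. Class names and structure are illustrative only."""
    return (
        '<div class="chat-widget chat-widget--task">'
        f'<h4>{escape(title)}</h4>'          # escape user input before rendering
        f'<p>Assigned to {escape(assignee)}</p>'
        f'<a href="{escape(url)}">View task</a>'
        '</div>'
    )

print(task_widget("Mix final track", "Alex", "https://example.com/tasks/42"))
```

The frontend detects widget strings in the response and renders them as rich cards, falling back to plain text for ordinary replies.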

Technology stack

Lessons learned

What worked well

  1. Implementing the supervisor-worker pattern using LangGraph allowed us to build a scalable and extensible multi-agent AI system that could handle increasing functionalities without compromising performance.
  2. Leveraging popular AI frameworks like LangChain and platforms like LangSmith accelerated development and provided robust tools for debugging, testing, and monitoring the chatbot agent.
  3. Structuring the chatbot’s responses as HTML widgets significantly enhanced the user experience by enabling visually appealing and informative custom UI elements within the chat interface.

Areas for improvement

  1. Managing long conversation threads remains a challenge due to UI design limitations. In the future, we plan to explore text summarization techniques and implement a more user-friendly thread management system.
  2. While the current implementation handles scalability well, there is room for optimization in terms of resource utilization and load balancing. We aim to investigate advanced load balancing techniques and fine-tune the system architecture.

Future considerations

  1. Building a Retrieval-Augmented Generation (RAG) system to enable the chatbot to access real-time knowledge and provide more up-to-date and contextually relevant responses.
  2. Implementing a feedback system to gather user input and continuously improve the chatbot’s accuracy and performance based on real-world interactions and user preferences.

Conclusion

The implementation of the chatbot agent has significantly streamlined project management workflows within the Umbrella platform. By leveraging generative AI and a multi-agent architecture, we have enabled users to seamlessly brainstorm ideas, generate project proposals, and perform tasks directly within the chat interface.

The scalable and extensible architecture, built using the supervisor-worker pattern and powered by LangChain and LangGraph, allows for future enhancements and the addition of new functionalities without compromising performance. The integration of LangSmith ensures robust debugging, testing, and monitoring capabilities, maintaining the chatbot’s accuracy and reliability.

The successful adoption of the chatbot agent has resulted in increased productivity, improved user satisfaction, and reduced cognitive load for project managers and team members alike. As we continue to iterate and improve upon the chatbot agent, we remain committed to delivering a seamless and intelligent project management experience for our users.