I’m Avneet, a RackN AI/ML Intern, and I have been developing AI integrations within RackN’s systems. I was asked to explore how RackN could best use LLMs to improve customer experience. To answer that, I’ve developed two AI-centric projects for Digital Rebar: a custom-trained large language model designed to generate automation tasks via prompting, and an assistive chatbot designed to help users troubleshoot issues.
AND… I’m delighted to announce that the chatbot is already included in the Digital Rebar UX, where users can test it. How did we do that? Read on!
Which to do: Generative Coding or Chat Bot?
A significant portion of my early time at RackN was intended to be exploratory. For that reason, my initial work went toward an NLP system capable of generating Digital Rebar Platform (DRP) tasks for use in workflows (detailed in this post). Then in mid-June, I took part in a DevOps roundtable that RackN co-hosted with Redmonk. The engaging discussions yielded critical insight into the concerns and goals of tech leaders in the DevOps industry, and into how AI might address those challenges. One concern about Generative AI that arose during the conversation was that generating new code for every problem can produce a surplus of one-off solutions in cases where existing solutions would serve users better.
Based on this insight, we decided to shift our focus to a question-answering chatbot that surfaces existing documentation, task files, and solutions to a customer’s issues. In fact, the prototype is already showing that the chatbot can effectively address these challenges!
Building an Assistive “RackGPT” Chatbot
RackGPT required building a unique Digital Rebar dataset. To collect the documentation, we accessed the RackN repository containing the markdown files for all DRP documentation. As a second source of data, I exported a text list of RackN YouTube titles and links, developed a function to compile the text into a list of addresses with the title as metadata, and transcribed the videos to supplement the dataset with more expertise.
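The exact export format of that video list isn’t shown here; as a minimal sketch, assuming each line holds a tab-separated title and URL, the compiling function might look like:

```python
def compile_video_list(raw_text):
    """Parse a text export of YouTube titles and links into a list of
    {"url": ..., "metadata": {"title": ...}} entries, one per video.

    Assumes one tab-separated "title<TAB>url" pair per line; blank
    lines are skipped.
    """
    videos = []
    for line in raw_text.strip().splitlines():
        if not line.strip():
            continue
        # rsplit guards against titles that themselves contain tabs
        title, url = line.rsplit("\t", 1)
        videos.append({"url": url.strip(), "metadata": {"title": title.strip()}})
    return videos

# Hypothetical sample input, not real RackN data
sample = ("Intro to Digital Rebar\thttps://youtu.be/abc123\n"
          "DRP Workflows Deep Dive\thttps://youtu.be/def456")
for video in compile_video_list(sample):
    print(video["metadata"]["title"], "->", video["url"])
```

The title metadata carried alongside each URL is what later lets transcript chunks be traced back to their source video.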
Solving challenges in tooling and performance
The AI industry is full of emerging frameworks, packages, and tools, so many that choosing the best path forward is difficult. For RackGPT, we considered multiple industry-leading machine learning frameworks, including HuggingFace, PyTorch Hub, OpenAI products, and LangChain. We settled on LangChain as our main platform because it allows complex customizations to be implemented relatively quickly. LangChain also offers custom agents, custom chains, web searching, document querying, and many other tools that can bolster RackGPT’s performance.
Another challenge was evaluating the chatbot’s performance as a Digital Rebar expert. Since I was not an expert in the many tools, techniques, and terminologies, I relied on the RackN team to help evaluate RackGPT’s expertise. RackGPT was intentionally started with a “documentation-as-expertise” focus, both to limit hallucinations and to capitalize on RackN’s thorough documentation and knowledge.
With those challenges handled, I focused on building RackGPT.
When preprocessing the data, we tried multiple splits and variations of the dataset to optimize documentation referencing. The best results came from not splitting the documentation at all: the markdown files are already relatively small, so referencing them whole wasn’t too computationally intensive. We also debated filtering out redundant documentation but elected not to, since doing so could create knowledge gaps.
In order for the chatbot to access expertise efficiently, the team stored the documentation in a local database using HuggingFace’s pretrained embeddings. For this iteration of the chatbot, the team chose ChromaDB, the recommended platform for storage and retrieval with LangChain.
Once the collection was initialized, we iterated through our dataset, adding each item to the collection. For documentation, we used the URL as the item id and the page content as the document itself. For transcripts, we used the video title along with the starting index of each chunk, since we had to split the transcript data into chunks. With the collection loaded, we can simply query with a text input and receive the top n closest matches in the dataset, along with their respective URLs.
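In the real pipeline, ChromaDB and the HuggingFace embeddings do the heavy lifting. As an illustration of the indexing scheme described above (URL as id for documentation, title plus chunk start index for transcripts, top-n retrieval), here is a self-contained sketch that substitutes a simple word-overlap score for the learned embedding distance:

```python
def chunk_transcript(text, size=200):
    """Split a transcript into fixed-size character chunks, keeping each
    chunk's starting index so it can be used in the item id."""
    return [(i, text[i:i + size]) for i in range(0, len(text), size)]

def add_documents(collection, docs, transcripts):
    """docs maps URL -> page content; transcripts maps title -> full text."""
    for url, content in docs.items():
        collection[url] = content                   # URL as the item id
    for title, text in transcripts.items():
        for start, chunk in chunk_transcript(text):
            collection[f"{title}:{start}"] = chunk  # title + start index as id

def query(collection, text, n=3):
    """Return the ids of the top-n closest matches. A real deployment
    ranks by embedding distance; word overlap stands in for that here."""
    words = set(text.lower().split())
    scored = sorted(collection.items(),
                    key=lambda kv: len(words & set(kv[1].lower().split())),
                    reverse=True)
    return [item_id for item_id, _ in scored[:n]]

# Usage with toy data (the URL and title are illustrative, not real sources)
collection = {}
add_documents(collection,
              docs={"https://docs.example/workflows": "workflows chain tasks together"},
              transcripts={"DRP Intro": "this video introduces workflows and tasks"})
print(query(collection, "how do workflows work", n=2))
```

Because the matched ids are URLs or video titles, the chatbot can return not just an answer but a pointer to the source material.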
Designing the Chatbot
As previously stated, the team utilized LangChain to customize the chatbot’s outputs. In the current version, the model uses a single chain for every user query. The team aims to diversify and add multiple other chains and agents in the future; this is discussed further in the next steps section below.
The chatbot takes the user’s input and queries the vector-stored documentation for the most relevant documentation. Then the user’s input, relevant documentation, and chat history are all passed into a custom prompt which is sent to OpenAI’s API for output. The output is displayed and the user can ask a new question.
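The actual prompt template and LangChain wiring aren’t reproduced here; as a sketch of that per-query flow, with stand-in `retrieve` and `llm` callables where the real chain queries ChromaDB and calls OpenAI’s API:

```python
def build_prompt(question, relevant_docs, chat_history):
    """Combine retrieved documentation, prior turns, and the new question
    into a single prompt string (the production template may differ)."""
    docs_block = "\n\n".join(relevant_docs)
    history_block = "\n".join(f"{role}: {msg}" for role, msg in chat_history)
    return ("Answer using only the Digital Rebar documentation below.\n\n"
            f"Documentation:\n{docs_block}\n\n"
            f"Chat history:\n{history_block}\n\n"
            f"User: {question}\nAssistant:")

def answer(question, retrieve, llm, chat_history):
    """One turn of the chatbot loop: retrieve relevant docs, prompt the
    model, and record the exchange so the next turn sees it."""
    docs = retrieve(question)
    reply = llm(build_prompt(question, docs, chat_history))
    chat_history.append(("User", question))
    chat_history.append(("Assistant", reply))
    return reply
```

Keeping the chat history inside the prompt is what lets a single stateless chain support follow-up questions.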
The primary goal of RackGPT has been to lower the barrier of expertise required to use Digital Rebar. RackGPT treats RackN’s current in-depth documentation and knowledge bases as its main source of expertise, which limits hallucinations, a major concern in the Generative AI space. The chatbot will allow DRP users to get quick, informative answers to technical questions without having to search for solutions, which should reduce the number of support tickets submitted and increase user productivity. The RackGPT team aims to continue adding functionality and improvements to the product, discussed further below.
We’re excited about the initial results, and the team has a list of features and enhancements to RackGPT that we would like the opportunity to implement. These include:
- Implement the Preliminary Model version of RackGPT within the assistive chatbot to generate new code when needed
- Integrate external knowledge bases as additional expertise for areas surrounding DRP, such as bare metal, Terraform, Ansible, etc.
- Implement file upload functionality
- Integrate DRP login credentials with RackGPT so that multiple chat sessions can be saved
- Deploy a script that regularly pulls updated documentation so that RackGPT’s knowledge bases stay up to date
An updated list of next steps, fixes, and goals for the chatbot is maintained in the GitLab issue linked here.
I’d like to thank Greg Althaus, Rob Hirschfeld, and the rest of the RackN team for giving me the opportunity to lead this project.