the ai playbook part 1 - integrating llms at work


“the ai playbook” is a mini-book about integrating AI within your software engineering organization. This is part 1, focused on the benefits of doing so. I can’t be sure how long it will be between parts, as the whole thing isn’t written yet. But timeliness is of the essence in this space (things are changing rapidly), so rather than hoard it all until it’s finished, I’m publishing as I go.

One chair, a million screens… is this the future of work?

Integrating LLMs at Work

All organizations are constantly seeking ways to improve efficiency, foster collaboration, and maintain a competitive edge. Large Language Models (LLMs) are changing how teams collaborate, innovate, and operate. This playbook is your introductory guide to harnessing them, with actionable insights toward each of those goals.

AI is already having a huge impact on software engineering, and no, it’s not because it makes engineers obsolete. What it does do is make everyone in the organization more connected to the latest state of the organization and more effective within their role.

It does that today.

It will only improve going forward.

The Problem Space

The larger your organization, the more difficult it can be to stay connected. I worked in Research Engineering at Protocol Labs, a globally distributed company with various groups all working on groundbreaking projects at breakneck speed. The sheer volume of internal documentation, codebases, teams, and project spaces, and the speed at which that documentation became obsolete, was a constant pain.

On my team alone, my efforts to organize the sheer amount of information generated by each of the projects we had going at a time were largely in vain. Meeting notes, Slack conversations, tickets, commits, demos, feature documentation – all of it flowed in constantly, so that half the time any documentation I wrote was out of date before it was even shared.

Additionally, documentation was badly fragmented across services (Notion, Git, Slack, email, docs). This type of info is here, that info is there… it’s chaos! If your role doesn’t touch one tool or another, you might never be fully up to date.

Enter Large Language Models

Tools like Ollama, which let you run LLMs locally or on privately hosted servers, make it possible for companies to get these benefits without handing their data to another company. In the process, they trade LLM subscription fees and per-token charges for increased hardware costs.

I’m using Ollama today to run DeepSeek R1 (the 32-billion-parameter distilled version, which gives roughly 92–95% of the full model’s performance on most benchmarks) on my M3 Max MacBook Pro with 48GB of RAM.

You could spin up a cloud-based server to run the full model, or build a box for your intranet; either way, your staff can then access it over the network, with RAG implemented over all internal documentation and communications.

This playbook explores how integrating LLMs can revolutionize software engineering org practices across three key areas: Instant Internal Knowledge, Automation, and Engineering Aids. It will then delve into how you can implement this system for your organization, how to hire the best people for the AI age, and more!

Let’s first look at some of the key benefits of workplace AI that are viable today.

Instantly Up-to-Date Internal Knowledge

1. Natural Language Queries

Local LLMs enable employees to interact with all the internal data they have access to. Developers can interact with the codebase using plain language, reducing the time spent searching through files manually. For instance, a developer can ask, "What are the main functions in module X?" and receive an instant, concise answer. 

This capability extends beyond codebases to anything your company has posted internally or externally, allowing employees to access information seamlessly. That same developer can ask, “I need a new laptop, what’s the company policy for requisitioning or expensing the purchase of one?”
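A minimal sketch of what powers such a query: a plain-language question sent to a local Ollama server. The model name, default port, and the assumption that Ollama is running locally are all illustrative, not prescriptive.

```python
# Sketch: asking a locally hosted model a plain-language question via
# Ollama's REST API. Assumes an Ollama server on its default port (11434)
# and a pulled model such as "deepseek-r1:32b" -- both are assumptions.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(question: str, model: str = "deepseek-r1:32b") -> dict:
    """Package a natural-language question for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": question, "stream": False}

def ask(question: str) -> str:
    """Send the question to the local server and return the model's reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server):
# print(ask("What are the main functions in module X?"))
```

The same pattern works for the laptop-policy question; only the prompt changes.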

2. Knowledge Sharing Across Teams

By giving your LLM context from your project management software, code repositories, meeting notes, and project documentation, you create an instantly accessible reference for anything you encounter. Acting as a central knowledge base, local LLMs provide uniform context on the entire system, enhancing cross-team collaboration.

This is particularly beneficial in large organizations where teams often work in silos, ensuring that everyone has access to the same up-to-date information. In a world where the README is always out of date, that’s no small thing.
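Under the hood, this kind of shared knowledge base is typically a RAG pipeline: embed your internal documents, retrieve the chunks most relevant to a question, and prepend them to the prompt. The sketch below shows only the retrieval-and-prompt step; in practice the embedding vectors would come from an embedding model (for example via Ollama's embeddings endpoint), so here they are supplied by the caller and the ranking logic is the only real claim.

```python
# Sketch of RAG retrieval over internal docs (meeting notes, tickets,
# READMEs). Embeddings are precomputed elsewhere; this ranks and prompts.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_chunks(query_vec: list[float], chunks: list[tuple[str, list[float]]],
               k: int = 2) -> list[str]:
    """Return the k document chunks whose embeddings best match the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(question: str, context_chunks: list[str]) -> str:
    """Ground the model's answer in the retrieved internal context."""
    context = "\n\n".join(context_chunks)
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {question}"
```

Swapping in a real vector store changes nothing about the shape of the pipeline; it just makes the ranking step scale.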

3. Onboarding & Mentorship

New employees can benefit from fully automated onboarding sessions: getting everything signed that needs signing, setting up the internal accounts they need, help setting up their development environment, tours of the codebase, and more, all facilitated by local LLMs. These virtual guides provide context and highlight key files, accelerating the learning curve for new team members.

4. General Knowledge & Aid

All employees will benefit from the vast knowledge and creativity of your LLM. Have it teach you how to do something, brainstorm 3 new marketing strategies, generate 5 blog post ideas and outline the one you like, or ask it for random information on the fly.

Automation

1. Automated Documentation, Meeting Notes, Action Items

Local LLMs can generate or update documentation automatically, ensuring it remains current and comprehensive. This reduces the manual effort required to maintain documentation, freeing up resources for more critical tasks.  By summarizing meetings, extracting key points, and generating follow-up tasks, local LLMs can help ensure alignment within teams, and so much more.
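The meeting-notes case can be sketched concretely: a prompt that asks for a summary plus machine-readable action items, and a parser for the reply. The prompt wording and the `ACTION:` line format are illustrative assumptions, not a standard.

```python
# Sketch: turning raw meeting notes into a summary plus action items.
# The model is asked to emit action items one per line, prefixed "ACTION:",
# so follow-up tasks can be extracted programmatically.

def summarize_prompt(notes: str) -> str:
    """Build a prompt asking for a short summary and tagged action items."""
    return (
        "Summarize this meeting in 3 sentences, then list action items, "
        "one per line, each starting with 'ACTION:'.\n\n" + notes
    )

def extract_actions(reply: str) -> list[str]:
    """Pull the ACTION: lines out of the model's reply."""
    return [
        line.removeprefix("ACTION:").strip()
        for line in reply.splitlines()
        if line.startswith("ACTION:")
    ]
```

The extracted items can then be pushed straight into your ticket tracker.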

2. Process Automation

Local LLMs can automate various processes such as code reviews, issue triage and tracking, CI/CD, build testing, vulnerability detection, compliance evaluation, and more. This automation can improve efficiency and reduce the likelihood of human error.
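As one example, issue triage becomes reliable when the model is constrained to a fixed label set, so its output stays machine-usable. The severity labels, prompt wording, and fallback choice below are illustrative assumptions.

```python
# Sketch: LLM-assisted issue triage. Restricting the model to known labels
# (and normalizing its reply) keeps the output safe to feed into automation.

SEVERITIES = ("critical", "high", "medium", "low")

def triage_prompt(issue_text: str) -> str:
    """Ask the model to classify a bug report with a bare label reply."""
    return (
        f"Classify this bug report's severity as one of: {', '.join(SEVERITIES)}. "
        f"Reply with the label only.\n\n{issue_text}"
    )

def parse_severity(reply: str) -> str:
    """Normalize the model's reply; fall back to 'medium' if it strays."""
    label = reply.strip().lower()
    return label if label in SEVERITIES else "medium"
```

The same constrain-then-parse pattern applies to code review comments, CI failure classification, and compliance checks.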

Software Engineering

1. Engineering Assistance

Enhance engineer efficiency with real-time suggestions from local LLMs. I use the third-party “Continue” plugin in VS Code (my IDE) to tap my available Ollama model library and avoid a certain ubiquitous third-party service by MSFT 😇, reducing errors and subscription costs while speeding up development cycles.

It can also generate test cases for new features or modifications, ensuring comprehensive testing and reducing the risk of post-deployment issues. Ask it to generate code snippets or entire functions based on natural language descriptions, significantly speeding up the development process. Get instant feedback and suggestions during pair programming sessions, improving collaboration and problem-solving.

2. Quality Assurance

By analyzing code patterns and providing insights into potential bugs, local LLMs aid in efficient debugging, reducing the time spent on troubleshooting. Categorize issues based on severity and type, helping teams prioritize bugs effectively and maintain workflow continuity. By identifying potential security vulnerabilities, local LLMs can contribute to maintaining robust and secure codebases.

Considerations and Challenges: Navigating the Integration

While local LLMs present transformative opportunities, organizations must navigate several considerations:

1. Data Privacy and Security

Implementing local LLMs requires robust data security measures. Tools like Ollama help by processing data locally, minimizing exposure to external threats. Organizations should also establish clear guidelines for data usage and access.

2. Ethical Considerations

As AI becomes integral to workflows, addressing ethical concerns is crucial. This includes ensuring transparency in how LLMs operate and preventing biases in their outputs. Regular audits can help maintain accountability.

3. Cost Considerations

An M3 Max MacBook Pro with 48GB of RAM ain’t cheap. Neither is running a machine that can serve the full model to multiple users. It’s true.

But say you just stuck to local LLMs via Ollama. How much is the MacBook Pro you already buy employees? At my previous two companies:

  • At one, I was asked “What laptop do you want?” (with no further questions asked)

  • The other had a $3,000 budget.

So, okay, if you’re like my previous employer, maybe you raise the budget by $1,500 per employee.

Why would you not spend $62.50/month more ($1,500 extra every two years) to measurably improve the overall engagement and effectiveness of your $200,000/year employee? Please, ask ChatGPT to come up with a satisfactory counterargument; I can’t think of one.
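For the skeptical, the amortization behind that figure, spelled out:

```python
# A $1,500 hardware bump on a standard two-year laptop refresh cycle,
# spread out per month per employee.
extra_budget = 1500      # extra laptop budget per employee, USD
refresh_months = 24      # two-year replacement cycle
per_month = extra_budget / refresh_months
print(per_month)  # 62.5
```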


LLMs are not just another tool; they represent a paradigm shift in how software engineering teams operate. By embracing these technologies, organizations can create more efficient, collaborative, and innovative environments, setting themselves at the head of the pack and preparing for long-term success as the technology grows in capability and efficacy.

In the coming posts I’ll walk you through sample implementations, use cases, and additional organizational concerns. Buckle up, we’re headed to the future, Marty!
