Ollama says "Oh Lord" when shipping updates: A Dora Metrics case study
Table of Contents
- Ollama Dora Metrics: Good Deployment Frequency, Erratic Cycle & Lead Times
- Not-so-great Cycle Time and Lead Time
- Ollama’s Killer Deployment Frequency Saved their Day!
- What can Ollama do to improve their performance?
- Ollama Should Bring More Contributors to their GitHub Community
- Parting Thoughts - Community Growth: Key to Ollama's AI Development
- Further Resources
"Hey Siri, can you play 'Girl I'm Gonna Miss You' by Milli Vanilli?"
"Alexa, can you play the movie, Fight Club?"
We all do it, right? But have you ever stopped to think about how these virtual assistants actually work their magic?
It’s all thanks to LLMs!
Large Language Models (LLMs) are powerful AI tools that handle a variety of natural language tasks, but managing them can be tricky.
That’s where Ollama comes in. Ollama makes running LLMs on your local machine easy and accessible. It simplifies the whole process—from downloading and installing to interacting with different LLMs—so you can explore their potential without needing deep technical know-how or depending on cloud services.
Since Ollama is an open-source project known for its support of LLMs, I decided to dig into its repository and explore how the backend operates.
I analyzed their workflow for the last three months, July through September, and looked into their Dora Metrics using our tool, Middleware OSS. To see how Middleware works, check out our live demo.
Also read: What are Dora Metrics?
Ollama Dora Metrics: Good Deployment Frequency, Erratic Cycle & Lead Times
Ollama may have a smaller community of about 314 contributors, but that doesn't stop them from aiming high. Like many open-source projects with big ambitions, Ollama focuses heavily on pushing new features, dedicating around 40% of their resources to feature development. Recent contributions include:
Add sidellama link by gyopak
Update Docs to Include Tools by royjhan
Fix cmake build to install dependent dylibs by jmorganca
While Ollama’s ambitions continue to soar, they face a challenge common to many open-source projects: over-reliance on key contributors. Team Ollama is currently dependent on proactive reviewers like Alex Mavrogiannis and Daniel Hiltgen for prompt PR responses and merges. This reliance on a few individuals can be risky, especially in an open-source ecosystem where workloads need to be distributed evenly to maintain momentum and avoid bottlenecks. Expanding the pool of active reviewers could help mitigate this challenge and improve efficiency.
Let’s see what their Dora Metrics say:
Ollama's first response, rework, and merge times, however, were disappointing.
In July and August, their first response times were 45.5 hours and 80 hours respectively; by September, first response time had ballooned to 191.3 hours (roughly eight days).
Rework time followed a similar pattern: 40.9 hours in July and 61.8 hours in August, before improving to 26.2 hours in September.
Merge times were also disappointing: aside from July's 27.3 hours, PRs took 77.1 hours to merge in August and 43.6 hours in September.
These numbers fall well outside the benchmarks set by the 2023 State of DevOps Report.
However, outliers exist: PR #5677 was merged in less than 1 minute, and PR #5655 in less than 1 hour.
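These per-PR timings can be derived from the timestamps GitHub records on every pull request. Below is a minimal sketch of the calculation; the PR event data is hypothetical (chosen to mirror September's averages, not pulled from Ollama's repo), and a real analysis would fetch `created_at`, review, and `merged_at` timestamps from the GitHub API:

```python
from datetime import datetime

def hours_between(start_iso: str, end_iso: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps (GitHub API format)."""
    start = datetime.fromisoformat(start_iso.replace("Z", "+00:00"))
    end = datetime.fromisoformat(end_iso.replace("Z", "+00:00"))
    return (end - start).total_seconds() / 3600

# Hypothetical PR events (illustrative, not Ollama's real data):
pr = {
    "created_at": "2024-09-02T10:00:00Z",       # PR opened
    "first_review_at": "2024-09-10T09:18:00Z",  # first reviewer response
    "merged_at": "2024-09-12T04:54:00Z",        # PR merged
}

first_response = hours_between(pr["created_at"], pr["first_review_at"])
merge_time = hours_between(pr["first_review_at"], pr["merged_at"])
print(round(first_response, 1), round(merge_time, 1))  # 191.3 43.6
```

Averaging these durations over every PR merged in a month yields exactly the kind of monthly figures reported above.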
Not-so-great Cycle Time and Lead Time
Unfortunately, much like a game of Jenga, the wobbly foundation of slow first responses, repeated rework, and lengthy merge times has caused Ollama’s cycle and lead times to tumble. The delays keep stacking up, leading to bottlenecks in their software delivery pipeline. Without improving these key metrics, the overall efficiency can crumble, slowing down the project’s progress and impacting their ability to release updates quickly.
In July, their cycle time was 4.7 days.
In August and September, their cycle time was 9.1 and 10.9 days respectively.
Similarly,
In July, their lead time was 5.9 days.
In August and September, their lead time was 9.1 and 10.9 days respectively.
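These cycle times line up almost exactly with the stage timings reported earlier, which suggests cycle time here is effectively the sum of first response, rework, and merge time. A quick sanity check using the monthly averages from above:

```python
# Monthly stage averages in hours, taken from the figures above:
stages = {
    "July":      {"first_response": 45.5,  "rework": 40.9, "merge": 27.3},
    "August":    {"first_response": 80.0,  "rework": 61.8, "merge": 77.1},
    "September": {"first_response": 191.3, "rework": 26.2, "merge": 43.6},
}

for month, t in stages.items():
    total_days = sum(t.values()) / 24  # convert hours to days
    print(f"{month}: {total_days:.1f} days")
# July: 4.7 days, August: 9.1 days, September: 10.9 days --
# matching the reported cycle times.
```

In other words, September's eight-day first response time alone accounts for most of the 10.9-day cycle time: fixing review latency would move this metric more than anything else.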
Ollama’s Killer Deployment Frequency Saved their Day!
In July, they did 152 deployments.
In August, they slowed down a bit but kept the momentum with 98 deployments.
In September, deployments climbed back up to 105.
Despite a smaller pool of around 314 contributors, they maintained an impressive deployment frequency, consistently shipping close to 100 deployments a month while keeping quality high.
How?
They have a robust CI/CD pipeline that automates the integration, testing, and deployment of code changes. This helps scale deployment frequency by ensuring high-quality software delivery through consistent feedback loops and rapid iteration.
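Deployment frequency itself is straightforward to compute once you have deploy timestamps: count events per period. A minimal sketch with illustrative dates (not Ollama's actual release history, which a real analysis would pull from the GitHub Releases or Deployments API):

```python
from collections import Counter
from datetime import date

# Illustrative deploy dates (hypothetical, for demonstration only):
deploys = [
    date(2024, 7, 3), date(2024, 7, 3), date(2024, 7, 18),
    date(2024, 8, 5), date(2024, 8, 21),
    date(2024, 9, 2), date(2024, 9, 10), date(2024, 9, 30),
]

# Bucket deploys by calendar month to get deployment frequency:
per_month = Counter(d.strftime("%Y-%m") for d in deploys)
print(dict(per_month))  # {'2024-07': 3, '2024-08': 2, '2024-09': 3}
```

Run against real release data, this kind of tally is what produces the 152 / 98 / 105 monthly counts cited above.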
Also read: Jenkins Dora Metrics: CI/CD Leader With High Deployments, Slower Cycle Time
What can Ollama do to improve their performance?
Ollama’s strength is their efficient CI/CD pipeline, and they should use it to full advantage. For instance:
1. Automated Testing
CI/CD pipelines automatically run tests on new code submissions. Ollama currently spends a good 10% of its effort on testing; automating more of that work would free that time for more fruitful tasks like features and bug fixes. Automated testing also ensures that bugs and issues are identified early in the development process, reducing the chances of problematic code being merged into the main branch.
2. Faster Feedback Loops
With automated tests and checks, developers receive immediate feedback on their code. This quick turnaround allows them to address issues promptly, leading to faster iterations and reduced rework time.
3. Consistent Code Quality
CI/CD pipelines enforce coding standards and best practices by integrating code quality checks, linting, and static analysis directly into the build. This consistency helps maintain high-quality code throughout the project and reduces rework time.
4. Reduced Merge Conflicts
Frequent integration of small code changes helps minimize merge conflicts: developers regularly synchronize their work with the main codebase, reducing the likelihood of overlapping changes and the complexity that comes with large, infrequent merges. Regular merging also promotes a culture of collaboration and communication, where team members are more aware of each other’s work and can address potential conflicts early, leading to a smoother, more efficient development process. Ollama can also study their own fastest PRs, such as PR #5677 (merged in less than 1 minute) and PR #5655 (merged in less than 1 hour), to understand what made those flows so efficient.
Also read: Bootstrap: Strong Merge and Cycle Times, but First Response Time Needs a Revamp
Ollama Should Bring More Contributors to their GitHub Community
Ollama needs to pull up their socks and do everything possible to bring more developers on board. How can they do it?
Develop a clear README that outlines the project's purpose, installation, and usage. Currently, it only reads, ‘Get up and running with large language models.’ They can do better.
Share the project on platforms like LinkedIn and relevant forums to reach a wider audience.
Engage in open-source communities and events like Hacktoberfest to encourage contributions.
Tag beginner-friendly issues with labels like “good first issue” to guide new contributors.
Invite feature requests from users to encourage them to contribute their ideas.
Respond promptly to pull requests and issues, showing appreciation for contributions.
Celebrate contributors in release notes or social media to recognize their efforts.
Host workshops or meetups to foster collaboration and introduce potential contributors to the project.
With limited resources, dedicating about 40% of their time to developing new features and 30% to fixing bugs can put a lot of pressure on this tight-knit group.
Since Ollama is open-source, contributors often juggle other commitments, making it challenging to maintain a smooth workflow.
The burden of responsibility falls on this small community, which could slow down progress and affect the overall development pace.
Parting Thoughts - Community Growth: Key to Ollama's AI Development
As the world of AI rapidly evolves, projects like Ollama play a crucial role in democratizing access to powerful tools like LLMs. They should encourage more developers to join their open-source community. This will bring stability to their workflow and offer great help to push new features and fix bugs quickly.
If you are facing similar engineering dilemmas, write to us at productivity@middlewarehq.com and we would be happy to help you turn your workflow data into actionable insights. You can also track your Dora metrics yourself, for free, using Middleware Open Source!