In 2024, AI supported the work. In 2025, it redefined which work we take on, and how the work gets done.
But how does a digital product and applied AI partner incorporate AI to optimize its own development process? In our case, the answer can best be summed up in one word: strategically.
We choose to see it as a collaborator and sounding board. We use it to speed up processes so that our team can focus on what they do best—solving high-value problems.
We’ve explored just about every tool out there, but these are the ones that stood out:
- Microsoft Copilot for its development integration and agentic abilities
- Cursor to reduce build time (and for its compatibility with React)
- NotebookLM for compiling notes and documentation from various sources
- DuckDuckGo for rubberducking
- Claude Opus 4.5 for its strong coding capabilities
NotebookLM, for example, has become a daily part of our dev process and is now invaluable to it. It acts as a translation layer between product and tech and helps us maintain a living, breathing project context.
Claude, meanwhile, is useful for filling the gaps. It sits at the higher end of mainstream LLM pricing, but we believe you get what you pay for: our senior developers have noted a marked drop in quality whenever they step away from premium tools like Claude. After all, we’re a digital product partner that moves fast, and older models only slow us down in that last 5% to 10% stretch.
While Claude excels at coding specifically, sometimes you also need a context-free chatbot (which is why DuckDuckGo deserves a special mention too). Explaining code or an issue to a chatbot forces us to verbalize the problem (aka rubberducking). Basically, it becomes a sounding board.
Throughout the product lifecycle, the areas where we’ve seen AI offer the most value are the handoffs and translations between stakeholders, such as between product and engineering, or between engineering and QA. LLMs can handle those little in-between steps cleanly.
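To make that concrete, here’s a rough sketch of what one of those in-between steps can look like: a small script that turns a product brief into draft engineering tickets. It assumes a Node/TypeScript setup and the @anthropic-ai/sdk package; the model ID and the ticket format are placeholders for illustration, not our production pipeline.

```typescript
// Hypothetical helper: turn a product brief into draft engineering tickets.
// Assumes the @anthropic-ai/sdk package and an ANTHROPIC_API_KEY environment variable.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // picks up ANTHROPIC_API_KEY from the environment

export async function draftTicketsFromBrief(brief: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-opus-4-5", // placeholder model ID; check the current docs
    max_tokens: 1500,
    messages: [
      {
        role: "user",
        content:
          "Translate this product brief into a draft list of engineering tickets. " +
          "Give each ticket a title, acceptance criteria, and open questions for the PM:\n\n" +
          brief,
      },
    ],
  });

  // The SDK returns a list of content blocks; keep only the text ones.
  return response.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
}
```

The value isn’t in the prompt itself. It’s that the draft gives product and engineering a shared starting point, and a person still reviews it before anything lands on a board.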
While we’re leaning fully into AI and using it heavily for documentation, it’s never at the cost of human expertise or knowledge. As Doug Glover, Metajive’s tech lead, points out, you still need to read the documentation and understand the code.
In his opinion, “AI is best in the hands of seasoned professionals who know what the expected output should look like. Just about anyone can use it to make things happen, but the scale and complexity it can accomplish is limited without a guiding hand. If you entrust AI to build something from start to finish, the results will lack meaningful depth.”
It’s also important to remember that AI can be wrong. We don’t see this as a con so much as a reason to approach it carefully if we want to use it intelligently.
You need human intervention between AI agents. Without it, hallucinations compound, scale, and ultimately impact client-facing work.
As such, we’ve found the most sensible approach is to apply AI at transition and integration points, with a person in the loop. We avoid taking the output of one AI agent and feeding it straight into the input of another.
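In practice, that looks something like the sketch below: a simplified, hypothetical pipeline (the two AI steps are stubbed out) where a person has to sign off before one agent’s output becomes another agent’s input.

```typescript
// Hypothetical pipeline: an AI draft only reaches the next AI step after a human approves it.
import { createInterface } from "node:readline/promises";

// Stand-ins for real AI calls (e.g. summarize a spec, then draft test cases from the summary).
async function summarizeSpec(spec: string): Promise<string> {
  return `Summary of: ${spec.slice(0, 80)}…`; // placeholder
}
async function draftTestCases(summary: string): Promise<string> {
  return `Test cases derived from: ${summary}`; // placeholder
}

// The human gate: show the draft and wait for an explicit yes.
async function humanApproves(draft: string): Promise<boolean> {
  const rl = createInterface({ input: process.stdin, output: process.stdout });
  console.log("\n--- AI draft ---\n" + draft + "\n----------------");
  const answer = await rl.question("Approve this draft and pass it on? (y/n) ");
  rl.close();
  return answer.trim().toLowerCase() === "y";
}

export async function specToTests(spec: string): Promise<string> {
  const summary = await summarizeSpec(spec);

  // No AI output is handed to the next agent unreviewed.
  if (!(await humanApproves(summary))) {
    throw new Error("Summary rejected; fix it by hand before generating tests.");
  }
  return draftTestCases(summary);
}
```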
At Metajive, we use AI to multiply our available time and our productivity. We reach for it when it clearly offers value. Instead of letting it lead us down rabbit holes, we lead the way.
Our team continues to set personal goals that involve actually learning the tools of the trade. True to Metajive’s values, we’re resisting the temptation to skip the line and go straight to AI. We stay hungry. After all, that’s how great digital products (like Ember, which we built for The Future Laboratory) see the light of day: not by coasting and handing everything over to AI.
Going forward, we plan to make more proactive use of AI in our development system. Think automated checks. Report generation. Automatically generated status updates. Spec listings. QA automation. More rubberducking.
Basically, we want to use AI more so that our team has less repetitive work.
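As a taste of what “automatically generated status updates” could look like, here’s another rough sketch, again with a placeholder model ID and prompt wording that’s ours only for illustration: pull the week’s commits and ask an LLM to draft a plain-English update.

```typescript
// Hypothetical status-update script: summarize the last week's commits for a client-facing draft.
// Assumes @anthropic-ai/sdk, an ANTHROPIC_API_KEY, and that it runs inside the project repo.
import { execSync } from "node:child_process";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

async function weeklyStatusUpdate(): Promise<string> {
  // Raw material: one line per commit from the past seven days.
  const log = execSync('git log --since="7 days ago" --oneline', { encoding: "utf8" });

  const response = await client.messages.create({
    model: "claude-opus-4-5", // placeholder model ID
    max_tokens: 800,
    messages: [
      {
        role: "user",
        content:
          "Turn this git log into a short, plain-English status update for a non-technical client. " +
          "Group related commits and flag anything that sounds risky:\n\n" + log,
      },
    ],
  });

  return response.content
    .map((block) => (block.type === "text" ? block.text : ""))
    .join("");
}

// A person still reads and edits the draft before it goes anywhere near a client.
weeklyStatusUpdate().then((draft) => console.log(draft));
```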
Internal documentation will also get more attention. Creating instruction files for LLMs and committing them to our code repositories will help us enforce the consistency that LLMs currently lack: consistency in how we use AI to do things, and consistency in how the rest of Metajive’s team uses it. It’s just one more way we’re cultivating a thriving company culture at Metajive. We’re excited to keep innovating, supporting our team, and bringing the best products and processes to our clients.
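For the curious, this is the kind of thing we mean by an instruction file. The example below is deliberately generic rather than our actual repo config, and each tool (Claude, Cursor, Copilot) has its own convention for where such a file lives and what it’s called.

```markdown
# LLM instructions for this repository (example)

## Stack
- Next.js + TypeScript; shared UI lives in /components, styling in /styles.

## Conventions
- Prefer functional React components and named exports.
- Never edit generated files under /generated; change the source schema instead.
- New features need a matching test and a one-line entry in CHANGELOG.md.

## Process
- Draft, don't decide: surface open questions instead of guessing at product intent.
```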