Strategy

When Activity Looks Like Progress

Jun 15, 2025


Every engineering leader I’ve worked with wants the same thing: to understand what’s actually happening with their teams.

Sounds straightforward, right? It’s not.

We build elaborate dashboards packed with numbers that look important—story points completed, pull requests merged, code changes submitted. These dashboards are very official, very executive-friendly, and often completely disconnected from whether teams are accomplishing anything meaningful.

I’ve seen engineers juggling five or six concurrent PRs across multiple services, each requiring context shifts, scattered reviews, and partial ownership. On paper, it looks like steady throughput. In practice, it creates cognitive drag, delayed delivery, and fragmented accountability.

The Standard Metrics (And Their Problems)

Let’s start with ticket counts and throughput. Years ago, a team I worked with celebrated closing 200 Jira tickets in a quarter. Sounds productive, right? Except that most were bite-sized tasks, split for optics. Meanwhile, meaningful architectural work stayed untouched. But 200 closed tickets looked great in status reports.

Pull request counts can also distort reality. I’ve seen developers split one feature into multiple tiny PRs—one for the migration, one for the API, one for the UI. It inflates GitHub activity but creates review overhead and fragments the story.

Commit frequency isn’t any better. Some teams even track “days coding” or “days with commits” to guard against a Pareto problem, where 10% of engineers quietly carry most of the load. The intent makes sense: expose invisible disengagement and highlight who’s consistently active.

If done right, commit tracking can help surface hidden work. For example, a sudden drop in commits from a senior engineer might not signal disengagement—it could reflect time spent unblocking teammates, triaging incidents, or mentoring in ways that don’t show up in the repo.

But when commit tracking becomes a proxy for effort, teams start gaming the system. Commits can be shallow—formatting tweaks, comment changes, or work-in-progress pushes. I’ve seen developers inflate activity just to “show up” in the data. Others, meanwhile, might contribute through pairing, mentoring, production triage, or architectural reviews—none of which show up in Git logs.

While commit frequency may seem like a useful engagement metric, it often mistakes motion for progress and punishes the quiet contributors doing high-leverage work behind the scenes.

Story points and velocity deserve special attention.

Velocity was designed to help teams plan realistically. But once it’s used to judge performance, everything warps. Estimates inflate. Complexity gets redefined. Suddenly, we’re gaming a number instead of solving problems. I’ve seen teams spend more time debating whether something is a 5 or an 8 than actually doing the work.

The Illusion of Green Metrics

Front-line managers usually know which teams are performing well and which are struggling. They don’t need a dashboard to tell them that Sarah’s team consistently delivers while Mike’s team always seems stuck.

What they need is insight into why.

Why do Mike’s pull requests sit in review for a week? Why did Sarah’s team ship three features last month but nothing this month, despite steady velocity? Why does every “quick fix” spiral into multi-day debugging?

Instead of useful insight, we get dashboards full of green indicators suggesting everything’s fine—when reality tells a different story.

I sat through a meeting once where a VP of Engineering showed off some impressive charts. Commits up 30%! Velocity steady! Story completion at 95%! Meanwhile, the product team hadn’t shipped a customer-requested feature in months. The metrics said “success.” The outcomes said otherwise.

It’s like evaluating a restaurant by counting orders instead of asking whether customers enjoyed their meals.

What Actually Provides Value

Instead of tracking motion, focus on insight. After working with dozens of teams, I’ve seen what actually helps.

1. Focus on customer value, not code volume

Instead of counting story points or commit activity, ask what problems got solved. What value was delivered?

That reframes the conversation. “We shipped three features that increased engagement by 15% and cut support tickets by 200/month” resonates. “We closed 147 tickets and merged 23 PRs” does not.

2. Track flow metrics that expose friction

Cycle time is especially useful—how long from first commit to production?

It’s not about speed for its own sake. It’s about visibility. One team I supported saw cycle time spike from 3 to 12 days. Velocity stayed flat, masking the slowdown. We discovered senior devs were overwhelmed with review requests, while juniors hesitated to approve anything substantial. With better mentoring and clearer guidelines, cycle time came back down.

Especially in remote or hybrid teams—where coordination costs are higher and async workflows are common—flow metrics help uncover invisible blockers like PRs aging quietly or reviews stalling in Slack threads no one sees.
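As a sketch, cycle time is just the gap between two timestamps per unit of work. The feature names and dates below are invented for illustration; in practice the data would come from your Git host and deployment pipeline.

```python
from datetime import datetime

# Hypothetical per-feature timestamps (illustrative only).
features = [
    {"name": "search-filters", "first_commit": "2025-05-01", "deployed": "2025-05-04"},
    {"name": "billing-export", "first_commit": "2025-05-02", "deployed": "2025-05-14"},
    {"name": "sso-login",      "first_commit": "2025-05-06", "deployed": "2025-05-09"},
]

def cycle_time_days(feature):
    """Days from first commit to production deploy."""
    start = datetime.fromisoformat(feature["first_commit"])
    end = datetime.fromisoformat(feature["deployed"])
    return (end - start).days

times = sorted(cycle_time_days(f) for f in features)
median = times[len(times) // 2]
print(f"median cycle time: {median} days")  # median cycle time: 3 days
print(f"slowest: {max(times)} days")        # slowest: 12 days
```

Tracking the median and the tail separately matters: in the story above, a healthy-looking average hid one 12-day outlier that pointed straight at the review bottleneck.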

3. Measure unplanned work and firefighting

Track how much sprint capacity goes to production issues or “urgent” requests.

I worked with a team that spent 80% of its time on firefighting. But those weren’t surprises—just predictable failures from legacy systems and neglected infrastructure. Once they tracked firefighting time, they could justify fixing root causes. “Spend two weeks fixing this properly, or burn 20 hours a week forever?” Easy choice.
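The arithmetic behind that choice is worth making explicit. A minimal sketch, with invented hour counts, of how firefighting share and the break-even point for a root-cause fix might be computed:

```python
# Hypothetical sprint log: hours by work type (illustrative numbers).
sprint_hours = {
    "planned": 64,
    "production_incidents": 180,
    "urgent_requests": 76,
}

unplanned = sprint_hours["production_incidents"] + sprint_hours["urgent_requests"]
total = sum(sprint_hours.values())
firefighting_pct = 100 * unplanned / total
print(f"firefighting: {firefighting_pct:.0f}% of capacity")  # firefighting: 80% of capacity

# Break-even for fixing the root cause instead of paying the recurring tax.
fix_cost_hours = 80            # roughly two weeks of focused work
recurring_hours_per_week = 20  # ongoing firefighting cost
breakeven_weeks = fix_cost_hours / recurring_hours_per_week
print(f"root-cause fix pays for itself in {breakeven_weeks:.0f} weeks")
```

Once the recurring cost is a number rather than a feeling, the “fix it properly” conversation with leadership gets much shorter.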

4. Monitor developer experience and satisfaction

This isn’t soft. Happy engineers stay longer, contribute more, and solve harder problems. Disengaged ones quietly check out—and replacing them is costly.

Pulse surveys, regular 1:1s, and candid retros work well. Do people feel supported and challenged—or treated like code machines, under constant surveillance?

Low morale eventually shows up in delivery metrics—but often too late.

If you like frameworks: DORA covers delivery health. SPACE adds developer satisfaction and collaboration—factors that quietly shape long-term performance.

Vanity Metrics vs. Diagnostic Metrics

Too many dashboards are filled with what’s easy to measure—regardless of whether it drives action. Here’s a quick reference:

Vanity Metric → Better Diagnostic Metric

  • Pull request count → Cycle time

  • Days coding, commit count → Lead time to deploy

  • Tickets closed → Customer problems solved

  • Story points completed → Delivery predictability or effort-to-impact ratio

  • Total deployments → Change failure rate or MTTR

If a metric doesn’t help you ask better questions or make better decisions, it’s not worth tracking.
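The diagnostic metrics in the last row fall out of the same deploy log that produces the vanity count. A sketch with an invented log (the field names are illustrative, not from any particular tool):

```python
from datetime import datetime

# Hypothetical deploy log; failure fields are set only when a deploy
# caused an incident (illustrative data).
deploys = [
    {"id": 1, "failed_at": None,               "restored_at": None},
    {"id": 2, "failed_at": "2025-06-01T10:00", "restored_at": "2025-06-01T11:30"},
    {"id": 3, "failed_at": None,               "restored_at": None},
    {"id": 4, "failed_at": "2025-06-03T09:00", "restored_at": "2025-06-03T09:30"},
    {"id": 5, "failed_at": None,               "restored_at": None},
]

failures = [d for d in deploys if d["failed_at"]]
change_failure_rate = 100 * len(failures) / len(deploys)

restore_minutes = [
    (datetime.fromisoformat(d["restored_at"])
     - datetime.fromisoformat(d["failed_at"])).seconds / 60
    for d in failures
]
mttr = sum(restore_minutes) / len(restore_minutes)

print(f"change failure rate: {change_failure_rate:.0f}%")  # change failure rate: 40%
print(f"MTTR: {mttr:.0f} minutes")                         # MTTR: 60 minutes
```

“Total deployments: 5” says nothing on its own; “40% of them broke production, and recovery averaged an hour” tells you exactly where to look next.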

Making Metrics Actionable

Numbers don’t solve problems. But numbers paired with thoughtful conversation can spark real improvements.

If cycle time spikes, ask why. Are PRs stalled in review? Is QA backed up? Are requirements changing mid-sprint?

If feature delivery slows, is it because priorities keep shifting? Or because teams are context-switching across too many initiatives?

If team satisfaction drops, is it burnout? Unclear goals? Disconnection from meaningful outcomes?

Good metrics prompt good questions. Good questions lead to better outcomes.

Communicating with Business Leadership

Executives care about outcomes—not engineering throughput.

What did reliability improvements do for churn? What did feature launches do for engagement? What did automation free up?

For example, instead of reporting “47 PRs merged,” frame the result as:

“Reduced onboarding time by 30% through internal tooling improvements”—that’s the kind of language a CFO or CPO can act on.

“99.9% uptime, which correlated with improved customer satisfaction and reduced churn” hits harder than “47 deployments this quarter.”

Same work. Different framing. Better alignment with business priorities.

Starting the Shift

If you’re ready to move away from vanity metrics, start small:

  • Pick one metric that reveals bottlenecks. Try tracking end-to-end cycle time—not just commits.

  • Pair every chart with a question. Instead of “velocity dropped,” ask “what changed in our flow?”

  • Start team retros with outcomes, not activity. What value did we deliver? What got in our way?

  • Run a pilot. Choose one team to experiment with new metrics before rolling them out org-wide.

Small wins build momentum—and help avoid the trap of just swapping one vanity metric for another.

A Simple Test

Before adding any metric to a dashboard, ask:

Will this help someone make a better decision?

If not, drop it, even if it looks good in meetings. Too many dashboards are full of metrics that are easy to measure but hard to use. That’s noise, not insight.

Moving Forward

Activity metrics create the illusion of progress. They reward motion over meaning.

Impact metrics highlight customer value, flow efficiency, and team health. They lead to better decisions, stronger alignment, and healthier teams.

Metrics should be a diagnostic tool—not a scoreboard. They should guide attention, not assign blame.

Once you make the shift—from measuring activity to understanding impact—the obsession with counting outputs doesn’t just feel outdated. It feels irrelevant.

So next time you see a dashboard full of green lights, ask yourself: are we making progress—or just moving?

Let’s talk about your platform challenge.

If your organization is navigating scale under regulatory complexity—or making the shift from reactive delivery to resilient platform engineering—I’d welcome the conversation.
