This is part two of a series. I suggest reading part one first.
So, What is to be Done?
How do we build companies that can adapt in a healthy way? I think it goes back to the attributes mentioned in part 1: self-awareness & the ability to change. We need these at both the individual & organizational level.
Start with People
First, we prioritize hiring people with those attributes. This is, I think, the real answer. Everything else is a workaround. These people will be adept at spotting patterns within the org and advocating for change when needed. As a bonus, such people tend to be remarkably flexible, able to tackle multi-faceted, ambiguous problems.
Second, we balance personal weaknesses through collaboration. Smart organizations build diverse teams; one person's weakness will be another's strength. For example, people resistant to change like consistency, and that is a strength in some situations. That personality type might enjoy maintenance work on a stable product, where they can finely hone their skills to that environment. Another personality type might enjoy the novelty of a new & evolving project.
An aside: this brings me to a common anti-pattern. I often see companies break up high-performing teams, the thought being that spreading the members of that team elsewhere will raise the performance of their new teams. This isn't always misguided, but it ignores the group dynamic. A high-performing team owes as much to the relationships between the team members and their ability to collaborate as it does to the abilities of each individual.
A Sentient Org
Here, I will confess, I am openly brainstorming a bit. I have not yet held a senior leadership position, so I have not tried many of these ideas at scale. I welcome feedback on what follows.
Next, we must build self-awareness at an organizational level. I can think of at least two mechanisms: people's subjective experience & empirical data. The former is easier to gather; the latter is more objective.
I have a theory: most organizational self-awareness is subjective, both inputs and outputs. Therefore, inputs are often skipped and outputs easily dismissed or spun. Let's take a closer look:
Subjective Experience
The tools here are well known:
- Anonymous Pulse Surveys
- Retrospective Meetings
- Shadow Walks (ok, maybe less well known) – have a small group sit in on another team's daily stand-up, planning, or on-call rotation for a day. Observers note mismatches between stated processes and actual behavior.
- Listening tours
- Internal "lessons learned" blogs
- etc.
The problem is typically that these practices are used sporadically, with varying levels of participation, and that outputs often don't result in change. But think of the data we might have to work with if we were consistent...
Meta-Analysis
In the world of peer-reviewed research, there are meta-analyses: studies summarizing the findings of many other studies on a given topic.
In an organization, the "study" might be project retrospectives. How
many companies do you know that do a meta-analysis of their retros?
Let's be honest; most companies don't do enough retros to even consider
it.
Retrospective meetings are an oft-recommended practice. However, in my experience, most companies conduct them
only sporadically, after major failures, and often the findings are not
turned into organizational change.
But if we could use meta-analysis to spot patterns across these data-gathering mechanisms, we might begin to quantify the impact of those patterns.
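To make that concrete, here is a minimal sketch in Python of what a retro meta-analysis could look like. Everything in it (the fields, the themes, the findings) is invented for illustration; the point is that once retro findings are recorded consistently and tagged, cross-project patterns fall out of a few lines of code.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class RetroFinding:
    project: str
    quarter: str
    theme: str      # e.g. "unclear requirements", "flaky CI"
    resolved: bool  # did the follow-up action actually happen?

# Hypothetical findings pulled from retro write-ups; in practice these
# would be exported from wherever your retro notes live.
findings = [
    RetroFinding("billing", "2024-Q1", "unclear requirements", False),
    RetroFinding("search",  "2024-Q1", "flaky CI",             True),
    RetroFinding("billing", "2024-Q2", "unclear requirements", False),
    RetroFinding("mobile",  "2024-Q2", "unclear requirements", False),
    RetroFinding("search",  "2024-Q2", "flaky CI",             False),
]

# A theme that recurs on one team is a team problem; one that recurs
# across many teams is an organizational problem.
theme_counts = Counter(f.theme for f in findings)
projects_per_theme = {
    t: {f.project for f in findings if f.theme == t} for t in theme_counts
}
unresolved = Counter(f.theme for f in findings if not f.resolved)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} findings across "
          f"{len(projects_per_theme[theme])} projects, "
          f"{unresolved[theme]} unresolved")
```

The "unresolved" count is the interesting one: it hints at whether findings ever turn into change.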
Objective Measures
Here, I suggest we take a cue from Software Engineering itself. Modern software systems not only perform a business objective, but include mechanisms to measure how well the software itself is functioning. We can apply some of those ideas to business processes.
Observability
In a hosted software system, we typically use observability tools. That is, we have real-time indicators of system performance. How quickly is our application responding to customer requests? What is the current database load? How long are background tasks taking to run?
What indicators could we track for people systems? Most "metrics" focus on business outcomes, e.g. X revenue in Y timeframe, not root causes. I propose tracking process metrics (a sketch of how a couple of these might be computed follows the list):
- What percentage of proofs of concept (POCs) resulted in the project being pursued?
- Similarly, what percentage of estimated projects were pursued?
- What is the average lifespan of a product or feature?
- How often can new features be released?
- What percentage of releases require human intervention?
- How happy are members of the org? (Happy people work harder.)
- Do people like their colleagues?
- Customer-found Defect (CFD) occurrence rate
- Lead time distribution
- Time for a new hire's laptop to be fully configured
- Engineering time required for each release
- Dollars spent on manual human testing before each release
- etc.
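As a sketch of how a couple of these might be computed, assuming you can export records from your project tracker (all names and numbers below are invented placeholders):

```python
from statistics import median, quantiles

# Hypothetical POC records exported from a project tracker.
pocs = [
    {"name": "poc-payments",   "pursued": True},
    {"name": "poc-ml-search",  "pursued": False},
    {"name": "poc-onboarding", "pursued": True},
]

# Days from work started to work shipped, for recent changes.
lead_times_days = [3, 5, 2, 14, 4, 7, 21, 6]

poc_conversion = sum(p["pursued"] for p in pocs) / len(pocs)
print(f"POC conversion rate: {poc_conversion:.0%}")

# Report lead time as a distribution, not a single average: the p90
# shows what the slow cases look like, which an average hides.
p50 = median(lead_times_days)
p90 = quantiles(lead_times_days, n=10)[-1]
print(f"Lead time: median {p50} days, p90 {p90:.1f} days")
```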
We could make quite a long list, but the point is that we often don't monitor process performance. Instead, we focus on individual performance, often using silly metrics like lines of code written or minutes per day the mouse is wiggled, because we don't actually know what to measure.
A/B Testing
Did you know that the Amazon/Instagram/Gmail you see may be different from what many other people see? I don't mean just the content, but the structure of the page, the colors, etc. These companies are constantly trying different things with test groups and tracking the effects on product usage.
Yet I rarely see this in org structures. Most orgs have a consistent structure from top to bottom, with minor differences where some managers tweak things. What could we learn if we tried different structures & compared the results?
The Ability to Change
This is, perhaps, the hardest part, in part because it sometimes requires leaders to come to terms with their own weaknesses, which may be rippling through the organization. This is why it is so crucial that we surround ourselves with people who balance those weaknesses... back to self-awareness.
I think one mechanism for change is to make the above metrics public within the company, quantified in monetary terms, and ideally tracked in real time. For example, "we spent X dollars last year on new hires getting their laptops configured." Or, "we can release no faster than once every X weeks, which causes issue Y." Those hard numbers allow findings to be compared for business value and, if improvement initiatives are undertaken, allow the improvements to be quantified. In my experience, many process improvement initiatives are dead on arrival because the payoff is highly subjective. Making the metrics public & quantifiable leaves little for people to argue about.
It strikes me that this sounds a bit like OKRs, but in my experience, OKRs are typically stated as desired business outcomes without much data around root causes of the issues preventing those outcomes. I suppose I am proposing connecting the two.
All organizations have weaknesses. Those with the ability to spot & correct them will thrive; those that do not can only hope to get lucky.