Jeffrey Hammond, vice president and principal analyst at Forrester, is one of the leading experts on app development and delivery technologies. In a recent webinar Skuid hosted, Jeffrey explained why teams designing and building apps must focus on the user.
To compete in today’s market, companies need to deliver customer-focused, digital-first experiences, fast.
In 2019, IDC predicted, “By 2023, over 500 million digital apps and services will be developed and deployed using cloud-native approaches – the same number of apps developed in the last 40 years...This explosion of new digital apps and services will define the new minimum competitive requirements in every industry.”
But meeting these market demands while staying laser-focused on the customer is easier said than done. Siloed organizations, stage-gate processes, fragmented tool-chains, and a lack of shared understanding make it hard for product, design, and development teams to find common ground.
And customer centricity isn’t just for external apps—it applies to internal applications, too. As the company’s primary interface for serving customers, internal apps must deliver a delightful experience: as goes the employee experience, so goes the customer experience. App experience affects every customer touchpoint—for good or ill.
In light of these challenges, Jeffrey answers five critical questions that app designers, builders, and leaders should be considering.
1. How should app dev and delivery teams measure the success of an application?
JH: Great question. We've been researching this topic to update a research piece called "Agile Metrics That Matter." What we're seeing is that business value, and specifically customer experience, should be at the core of your measurement efforts.
It sounds simple, but so few development teams measure things like "How much revenue did our app generate?", "Is our net-promoter score going up?", "How many new customers did our app help acquire?", or "How many customers used our service at least ten times in the past 28 days?" In our surveys of business decision-makers, we constantly see that improving customer experience is one of the most important priorities they have, but it’s not filtering down to app development teams.
We often hear teams object by saying "Well, we don't sell what we build" or "It's an internal app." OK, but you can still find business value if you try. Consider Amazon: 90% of the services its engineering teams build don't get exposed externally, but they still measure those services on their reuse level inside the organization. The philosophy behind that is simple: the more teams use a service that one team produces, the more valuable it is, and thus deserving of support.
Whatever your metric for business value, make an effort to define it and measure it. It could be the number of customers or employees using it, the revenue it generates, the costs it saves, the increase in orders it generates—but make the connection.
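One way to make that connection concrete is to compute a usage metric like the one mentioned above ("How many customers used our service at least ten times in the past 28 days?"). Here is a minimal Python sketch; the event-log shape and field names are illustrative assumptions, not from the interview:

```python
from collections import Counter
from datetime import datetime, timedelta

def engaged_customers(events, now, min_uses=10, window_days=28):
    """Count customers with at least `min_uses` events in the
    trailing `window_days`-day window.

    `events` is an iterable of (customer_id, timestamp) pairs,
    e.g. rows pulled from an app's usage log.
    """
    cutoff = now - timedelta(days=window_days)
    uses = Counter(cid for cid, ts in events if ts >= cutoff)
    return sum(1 for count in uses.values() if count >= min_uses)

# Illustrative usage with synthetic data:
now = datetime(2024, 1, 29)
events = [("alice", now - timedelta(days=d)) for d in range(12)]  # 12 uses
events += [("bob", now - timedelta(days=d)) for d in range(3)]    # 3 uses
print(engaged_customers(events, now))  # only alice clears 10 uses -> 1
```

The point is less the code than the habit: pick a definition of "engaged," automate it, and track it release over release.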
There’s also a set of secondary indicators that teams can use to measure the health of their application-delivery processes. While valuable apps can be built in the midst of chaos, we find that healthy teams and processes consistently correlate with good outcomes. These indicators generally break down into four types:
- the quality of what they've built
- the efficiency with which they've built it
- their progress during the time they built it
- the health and engagement level of the team itself
If indicators in all these areas are good, and you're generating business value, you're on the right track.
2. What convinced you that human-centered design was a game-changer? Tell us your story.
JH: In some ways, it's related to the measurement question above. I've been fortunate to be involved in some amazingly successful software products in my career. But I've also seen some not-so-successful efforts, including those with 10x more budget and staff. The most successful product I was part of launched with ~10 pages of documentation, and when the company tried to rebuild it, they spent millions more and failed.
What was the difference? Our small team that built the original product was in the trenches with customers, doing nightly builds, getting real-time feedback, and shaping the product with our customers. In some ways, we were doing agile development before the manifesto was even a thing. Ever since I saw how budget and bodies have little correlation with software success if development isn't done right, I've been a convert.
I think the belief has only strengthened over time as I've seen how anyone can embrace design-thinking principles. I spent a few years mentoring a team of high-school kids who were part of the FIRST robotics competition. If you aren't familiar with it, let me share a few details.
To compete in a challenge against other teams, these kids had to build a functional, 120-pound robot in six weeks. During that effort, I saw these students embrace divergent and convergent thinking, use simulation and rapid prototyping to create amazing embedded software and hardware, and build driver interfaces to control the robot. They used practices like Kanban and daily stand-ups.
My takeaway from the whole thing? If a 15-year-old can embrace the principles of design thinking, then anyone can and should.
3. What happens when you focus on little-d design before or without doing the work of big-D design? Do you have any good stories about this?
JH: You get the Juicero, the greatest example of Silicon Valley stupidity to date. If you aren't familiar, that team created a beautiful, expensive product with a highly curated experience. But you could accomplish the same end goal by squeezing one of the product's juice packets by hand like a big mayonnaise packet. The company focused so much on the little-d aspects of the experience that it missed the big-D question—does anyone want or need what we're building?
Opposite the Juicero case is the quote often attributed to Henry Ford: "If I'd asked customers what they wanted, they would have said ‘a faster horse.’" But even here, if you dig into the detail, there's an explicit understanding of the customer. Ford bet that early customers would give up their desire for different colored Model Ts and buy black cars as long as this transformative product was cheap enough that the average customer could afford to buy it.
I think the message in both cases is the same:
Unless you watch how customers use your product and unless you engage with them, you'll likely over-engineer or miss what's transformative about what you are building. When you do, little-d design becomes lipstick on a pig.
4. How much user input is too little or too much?
JH: I think of it in terms of both frequency and volume. Both are important, but if I could only have one, I'm biased toward frequency because it allows me to course-correct earlier based on new data.
In terms of volume, I think you need to collect enough input to see recurring patterns in the data. One point of data is an anecdote, two points make a line, and multiple points make a trend. Once you have high confidence in a trend, speed of response takes precedence over waiting for more data to further increase that confidence, especially if you can run a test to confirm the trend.
Now there's a big assumption here that my OODA loop is tight enough that I can actually act and deliver new capability fast enough to take advantage of my observations. But if my time-to-value is weeks or months, then it's the limiting factor on the breadth of data I collect. In other words, I don't see any harm in continuing to validate my inputs with additional data to increase my confidence if I only get four or six chances a year to release new capability.
But with modern software delivery techniques and DevOps automation practices, there's no reason I shouldn't be able to get my OODA loop down to days or even hours. In that world, you have to be willing to act on limited inputs, knowing you might not always get it right—which is why multivariate testing and the ability to recover from an incorrect action become so important. Low-code platforms are another way to tighten the OODA loop, because they allow you to quickly capture ideas, create prototypes, get real-time customer feedback, and then act on what you’ve learned.
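Acting on limited inputs is safer when each release is framed as a test. As one illustration of the kind of check such testing relies on, here is a minimal two-proportion z-test comparing conversion rates between two variants; the function name and the sample numbers are hypothetical, and a real experiment platform would handle this for you:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate
    differ from variant A's? Returns the z statistic; roughly,
    |z| > 1.96 indicates significance at the 5% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: A converts 120 of 1,000 visitors,
# B converts 150 of 1,000.
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2))
```

Paired with the ability to roll back quickly, a check like this lets a team ship on a trend, confirm or refute it with live traffic, and course-correct within days instead of quarters.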
5. How do you find the balance between researching user desirability and having users exert too much control over the design process? And what does this look like from your experience?
JH: Going back to the Henry Ford example, I think it's important to have a vision, a mission for the problem you are trying to solve, and a model of how you will solve it. But you also need the humility to accept that your model might not be perfect or even correct, and that your understanding of the problem might be flawed as well.
As you interact with users, you'll see their models and their understanding of the problem, and when the differences crop up, that's where it gets interesting.
Here's the reality: their mental models may be just as flawed as yours, or even more so. And every user's model is unique. The work is finding the intersection between your model of the problem and proposed solution and their model of the problem: what they're willing to try to solve it, and how much time, money, or effort they're willing to spend to adapt their behavior and assumptions to match yours.
If I have to spend hundreds of dollars to get a glass of cold-pressed juice that I can just as easily get by squeezing a packet, I'm probably not going to do that. But if I can change my mobility and expand my horizons for my career, where I live, and how I work, I might be willing to spend $500, learn how to drive, and get licensed by a state authority.
I think the balance between users and development teams comes down to this normal process of testing your models, testing your assumptions, and responding to the data that comes back—even (and especially) when it surprises you.
When you get to the point where you're no longer getting surprised regularly, that's a good sign. On the other hand, you need to be prepared to reject the edge cases in the name of efficiency and speed. Otherwise, you end up getting caught in a trap where an application can have too many features or ways of doing things.
Be on the lookout for feature bloat. There was a period where this happened in Microsoft Office; it seemed like there were three ways to do anything in Word. Another thing to watch out for is creating sub-products or configurations because now you have to maintain a more extensive set of products—and you've created a bunch of technical debt in the process.
I think when you pare it down to the nub, you have to regularly challenge your assumptions (not constantly, but regularly). Does the problem still exist? Does our understanding of it make sense? Do our models for solving it resonate with the bulk of our target users? What are the acceptable substitutions for what we've created? Adjust the balance of your user research in direct proportion to your ability to answer questions like these.