The Difference Engine is an insight-driven product strategy company.



The Cynefin Framework

By Farrah Bostic | @farrahbostic | 10 Jul, 2014


I was recently introduced to this framework for sense-making, known as Cynefin. The video above is worth a look, and David Snowden is an articulate, concise speaker on what the framework is all about.

In essence, the framework is about beginning from disorder and determining, based on the facts, what kind of conditions you are operating under, and therefore which type of practice you need in order to cope with those conditions. Starting from data, rather than from the framework, is critical to its utility as a sense-making model.

In many respects this has me thinking about the framework for thinking that many common law trained lawyers use. You begin with the facts - those that are ascertainable either through direct evidence or common acceptance by parties and witnesses. These facts begin to take on a pattern - cause and effect, liability and remedy, intent and outcome, responsible party and aggrieved party. The patterns start to point you in the direction of a general principle - an actual statute or a precedent in case law - that you must then work out how to apply to these facts.

The interesting bits are in distinguishing facts so that you can apply principles, or distinguishing principles so you can marshal the facts. It was always my impression as a law student that the former was preferable to the latter - it tended toward more just results, especially in the long term, when the results of this case might be taken as precedent for deciding the next one.

But this may be precisely where the case study tradition of the law departs so starkly from the case study tradition of business schools. Law and precedent are meant to be both durable and flexible. They are meant to take the long view, to ensure just results, not just in this case but in subsequent, similar cases. And because the law is driven by general principles both of precedent and of remedy, we find that past experience can help us reason by analogy to predict future outcomes.

Lawyers also have a method by which cases are tracked and cited, and conflicting cases have to be reckoned with in order for judges to decide a case.

Business schools seem not to feel that this approach to the case study method applies to them, both because of the standard financial maxim that past results are not a predictor of future performance, and because the business school case study method tends to rely on cherry-picking cases.

Nevertheless, a sense-making approach, where facts lead and frameworks follow, appeals to me. It suggests there may yet be such a thing as precedent in business, and that facts may matter at least as much as intuition.





Designing for Data - Why I'm Speaking at #StrataConf

By Farrah Bostic | @farrahbostic | 11 Feb, 2014


This week I’ll be speaking at O’Reilly’s StrataConf in Santa Clara, both as part of the Data Driven Business Day on Tuesday, February 11 and as a keynote speaker on Wednesday, February 12. I’m excited to be part of the event, and thrilled to be working with Alistair Croll and his team as they put on what looks to be an event full of both big and useful ideas, and of some heavy-hitters when it comes to data and analytics.

So what’s a person like me, decidedly not a data scientist, doing there? What can I possibly have to tell a convening of many of the most data-oriented and data-savvy minds in technology and business?

Two things, really: numbers won’t count themselves, and people are data. Two lovely titles for talks, but what do they mean?

Those numbers won’t count themselves

I’m particularly excited about the Data Driven Business Day because I want to see how people define data-driven businesses. We are increasingly helping clients to ‘design for data’ - to anticipate and plan for better KPIs. We think whatever you choose to measure, it should help you make decisions.

I doubt very much that anyone really disagrees with that notion. But how we actually measure frequently tells another story.

Before starting The Difference Engine, we frequently encountered marketing clients who dealt in what the Lean Startup movement (specifically Eric Ries and Dave McClure) would call “vanity metrics” - metrics that make you look good, but don’t help you make decisions. These metrics might include a million likes on Facebook at one end of the spectrum, or “top 2 box on brand preference scores with competitor x” at the other.

We also frequently encountered brand managers who had very little data about their end users, their actual customers. Large consumer electronics brands often lack CRM systems that reach below the sales channel and so rely upon segmentation studies to guess about customers and prospects; networks and television shows mainly depend on Nielsen for viewer data but often don’t know who is really watching and why; B2B brands sometimes hold onto outdated beliefs about the motivations of small business owners or IT decision makers.

We used to believe this was simply because marketing is cut off from sales and operations, or that managers are tradition-bound to use tracking studies, segmentation, or attitudes & usage studies. There’s a lot that our clients do “because tradition”.

But as we work more with product managers, CEOs and even boards of directors, whether in established businesses or in startups, we find that the real tradition behind this dependency on 3rd party data sources is poor data design.

They don’t have direct data because they didn’t set themselves up to measure it. A consumer electronics company that does not sell direct to consumers can be (mostly) forgiven for not having a robust CRM system for analyzing end users, because it doesn’t directly transact with them (except over warranties or after-sales service).

But the digital marketing or product team is rarely similarly forgiven. Just the word “digital” seems to imply measurement - it’s numbers, after all. Everything can be counted now, so too often, digital marketers and product developers assume everything is being counted. But what’s worse is that they frequently aren’t sure what they should measure, how they should test, which customers they should care about, or what “conversion” even means for their business.

When you’re not sure what to count, you count likes. You count click-through rates. You count page views or time-on-site. You look at the numbers that are available. What gets counted is what counts, instead of the other way around.
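To make that contrast concrete, here's a minimal sketch in Python. The event log and event names are invented for illustration; the point is simply the difference between counting a number that happens to be available (page views) and counting one that's tied to a decision (how many people who signed up went on to buy).

```python
from collections import Counter

# Invented event log: (user_id, event) pairs, the kind of thing any analytics export contains.
events = [
    ("u1", "page_view"), ("u1", "signup"), ("u1", "purchase"),
    ("u2", "page_view"), ("u2", "page_view"),
    ("u3", "page_view"), ("u3", "signup"),
]

counts = Counter(event for _, event in events)

# The number that's available: raw page views. Easy to count, hard to act on.
page_views = counts["page_view"]

# A number tied to a decision: of the people who signed up, how many went on to buy?
signed_up = {user for user, event in events if event == "signup"}
purchased = {user for user, event in events if event == "purchase"}
conversion = len(purchased & signed_up) / len(signed_up) if signed_up else 0.0

print(f"page views: {page_views}")
print(f"signup-to-purchase conversion: {conversion:.0%}")
```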

Despite all the time marketers have spent measuring ad recall, brand like-ability, brand preference, intent to purchase, and despite all the effort spent haggling over reach, frequency and impressions, these numbers may not stand for anything meaningful to customers or their behavior. And what’s irrelevant to customer behavior is - most likely - irrelevant to business results.

People are data, too.

Businesses take comfort in numbers. Numbers seem dispassionate; numbers seem steady. The fickleness and fecklessness of people are scrubbed out of data. You can trust the data. The data will tell you what to do.

But that’s not really true. For most people, binders full of tabulated data will never translate, Neo-peering-into-the-Matrix-like, into action or intent. A survey will not “tell you what to do”. We need people to analyze the data, to interpret it, and to make recommendations based upon it.

Even more importantly, we need people who know how to measure the right things, who know how to design for better data. And to do that, we need to understand the underlying system of a market, a product or a customer segment. We need to not only know who people are by correlating their media habits with their brand preferences (a probably useful method of buying media); we need to know how people actually purchase products in the category, why they buy them, how they use them, and what else they use that we might see as ‘competition’. We need to know what people tend to do right before they cancel a service; we need to know what they tend to do right after they buy or sign-up or register or like or mention.
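As a rough sketch of what that cancellation question might look like in practice (the event histories and event names below are made up), you could look at which behaviors most often show up in the window just before someone cancels:

```python
from collections import Counter

# Made-up per-user event histories, in chronological order.
histories = {
    "u1": ["signup", "use_feature_a", "support_ticket", "support_ticket", "cancel"],
    "u2": ["signup", "use_feature_a", "use_feature_b", "use_feature_b"],
    "u3": ["signup", "support_ticket", "price_page_view", "cancel"],
}

WINDOW = 2  # how many events before the cancellation to examine

precursors = Counter()
for events in histories.values():
    if "cancel" in events:
        cancel_index = events.index("cancel")
        precursors.update(events[max(0, cancel_index - WINDOW):cancel_index])

# Which behaviors most often immediately precede a cancellation?
print(precursors.most_common())
```

None of this replaces talking to people; it just tells you where to look and what to ask about.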

In short, quant research should never be undertaken in isolation. It should depend on a sound qualitative understanding of the world. Much like the natural philosophers-cum-scientists of the 19th century and before, we need great hypotheses to do great experiments.
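To put a little flesh on "great hypotheses to do great experiments": a hypothesis that comes out of qualitative work can be framed as a comparison you can actually test. Here's a minimal sketch in plain Python of a two-proportion z-test; the hypothesis and every number in it are invented for illustration.

```python
from math import sqrt, erf

# Invented hypothesis: customers who get a follow-up call in their first week renew more often.
called_renewals, called_total = 132, 400      # group that got the call
control_renewals, control_total = 104, 400    # group that did not

p_called = called_renewals / called_total
p_control = control_renewals / control_total
p_pooled = (called_renewals + control_renewals) / (called_total + control_total)

# Two-proportion z-test: is the difference bigger than chance alone would explain?
std_err = sqrt(p_pooled * (1 - p_pooled) * (1 / called_total + 1 / control_total))
z = (p_called - p_control) / std_err
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

print(f"renewal rates: {p_called:.1%} vs {p_control:.1%}, z = {z:.2f}, p ≈ {p_value:.3f}")
```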

When it comes to product strategy, we like to think of this qualitative grounding as empathy building. We’re used to businesses taking an “if we build it they will come” approach to product and marketing development. When customers don’t, in fact, flock to the brand or the product, businesses try to optimize their existing behavior. This explains, at least in part, the rise of programmatic ad buying, analytics software, the popularity of dashboards and user surveys, and endless A/B testing. But where we think these tools fall short is in developing deep understanding and insight, in fostering empathy with customers, and in turn, helping managers develop better ‘instincts’ and ‘intuition’.

We’re all - marketers and product developers alike - trying to find the signal in the noise. Too often, we’re mistaking one for the other. But we don’t have to. We can get smarter about the underlying systems, measure better indicators, understand correlation and causation better. We can have more empathy for our customers. We can show better judgment, commit to our ideas, and make better decisions.

So how do we do it?

We believe in talking to people, but also in understanding the gaps between what people say they do and what they do. We believe in understanding the real life experience of a product or service or brand. So we watch, we listen, we investigate, we discover.

But these conversations are not enough. We help clients translate what we learn from real people into hypotheses we can test, and we collaborate with clients to tie these hypotheses to measurable actions. We set up clients with qualitative research that is part of the design process, not separate from it. We get the whole team involved in getting continuous customer feedback. We design research that helps us really understand and identify a problem, and we work with product and brand design teams to solve those problems. We help our clients develop a set of KPIs that are meant to drive action.

Based on understanding the business, the competitive environment, and the customer experience, we helped one of our clients make two key decisions: pay staff a little better to attract talent; give raises or bonuses to staff who get positive customer and management feedback. That’s it. It sounds simple, because it should be. And in our work with this client, we’d built enough trust in the process and the data that they made those decisions during our presentation to their board of directors.

We’re a product-strategy company. The strategy has to come from somewhere. We think it comes from empathy with customers and prospects, deep understanding of the underlying system that a business or brand lives in, great data design, and an eye for opportunity.

We say, let's be data pragmatists. Let's use data - in all its forms - to learn what to do next.




New Year, New Roadmap

By Farrah Bostic | @farrahbostic | 09 Jan, 2014


2014 is off to an exciting start. As last year wound to a close, we made some very important decisions. Chief among them was welcoming Laila Forster to The Difference Engine. Her experience in operations, strategic client services, project management and integrated marketing is going to put some serious muscle behind our goal of "operationalizing strategy".

Part of the decision to bring Laila on was also a decision to clarify the focus and vision of The Difference Engine. While research is the vehicle for much of the work we do, this is not a market research company.

An insight-driven product strategy company.

Over time, we realized that, both by design and as an inevitable result of the relationships we have with our clients, we have three distinct work-streams.

Regardless of the work-stream, the types of questions our clients ask are often quite similar: Who are (or should be) our customers? How do we best reach and serve this new market? How can we grow?

While the questions sound deceptively simple, the answers are often quite complex. We believe the answers to these questions often lie in the ways that businesses create new value for customers.

In fact, we believe "creating new value" is a pretty good definition of innovation.

How are we different?

We love it when the ideas we help develop become real products, services and experiences.

We're not seers or soothsayers or trendspotters or futurists, and we're also not McKinsey-style consultants.

So we don't leave clients with a nice looking presentation full of half-baked concepts and highly stylized observations.

And we won't leave them with reams of graphs and charts to decipher.

We create tools for them to use, roadmaps to follow, KPIs to adopt, research-based resources to tap, and prototypes to build.

We design these assets to work within existing workflows and cultures.

We train clients on how to use them, how to adapt them, and how to bring others on board.

We set our clients up to succeed.

Create your own tools.

As a product-strategy company, we think it's important to understand exactly what it's like for our clients to develop a new product or service. And we like to muck about on the internet.

So, we're in the process of designing software that will serve as an important tool for our research practice. Once we've road-tested it on a few projects, we'll make it more widely available - first, to our clients, and then to others.

We've got a working prototype today, but we need to spend some time thinking about the user experience, the workflow, and essential design decisions. When we've got some screenshots to show you, we'll start posting them here.

In the meantime, if you'd like to read a bit more, feel free to poke around the site, or have a look at our latest credentials presentation.


