Jenni Romaniuk on better brand health 

The Ehrenberg-Bass professor explains the principles and pitfalls of brand health tracking

When it came time for Jenni Romaniuk, international director of the Ehrenberg-Bass Institute for Marketing Science, to write a new book, she thought about Adam Grant.

Romaniuk had heard the popular science author say that before he decides to write about something he asks himself whether he’d be happy talking about it for the next two years, and she thought: ‘Okay, that’s a good criterion.’

That is not the only reason that Better Brand Health: Measures and Metrics for a How Brands Grow World exists, of course. For another thing, Romaniuk was eminently qualified to write the book. Maybe even uniquely so. She is one of the world’s foremost experts on brand growth and distinctive assets, and she has spent a decade both practising and studying brand health tracking.

With one foot in the private sector and the other in academia, Romaniuk could get to grips with the pointy end of brand health tracking – designing questionnaires for companies, analysing results, etc – and then direct research to fill in the gaps whenever she was unsatisfied with the level of knowledge.

Better Brand Health brings together Romaniuk’s practical insights and research findings and grounds them within the established framework of brand growth to create a comprehensive guide to measuring people’s attitudes and memories. 

We spoke to her about some of the core concepts in her book, and some of the ways that marketers get it wrong when they set out to measure brand health.

Let’s start off simple. What are brand health metrics?

Brand health metrics are where we try to capture the effect we’ve had on category buyers’ memories. There’s a whole range of different metrics under that umbrella, but it’s all about getting a window into the category and how what’s been going on in the marketplace has changed how people think and feel about brands. So brand health metrics, in the broadest sense, are anything dealing with memory.

One of the things you set out early on in the book is the mantra, ‘design for the category, analyse for the buyer, report for the brand’. Can you explain what that means and why it's important?

Basically, it points to three of the things that people get wrong or misunderstand.

‘Design for the category’ means you should have a brand-health tracker that any brand in your category would be happy to use. It shouldn't be just about you. If another brand in the category, whether it's a bigger brand or a smaller brand, would not use it, then you’ve got biased measurement. You don’t want that because you don’t know where your brand is going to be in the future. You might be a big brand now, but imagine you launch another brand in the category – then you’ve got to look at it through a small lens, and you’re going to have to design a totally new tracker and that seems a bit counterproductive, particularly when we know how brands compete. Your biggest competitors are the bigger brands.

‘Analyse for the buyer’. It's amazing how often people will ask for cuts by gender, age, life cycle, economic state, and not realise that the differences between them are trivial. Most of those differences are actually driven by the number of buyers or non-buyers of the brand you have in that segment.


The biggest difference between someone giving a more positive response or a more negative response is whether or not they've had past experience with the brand. So that's the big thing we've got to control for: whether or not someone's a brand user.

The third aspect, which is to ‘report for the brand’, basically draws on things like the law of double jeopardy, which is that small brands are penalised twice: they have many fewer users who are slightly less loyal which, turned into metrics, means that loyalty-score expectations for a small brand are different to those for a big brand. And it’s the same with your expectations for brand awareness and your expectations for word-of-mouth scores. If you don’t control for that, it’s very easy as a small brand to go, ‘Oh, we’re not doing anything right’, and miss when you’re successful.

It’s dangerous for big brands as well. They get complacent because they score well on everything. I’ve been to brand health meetings where the brand has been going, ‘Everything’s good [...] but our sales are down. Why is this not showing up?’ And that’s because they’re not controlling for brand size when interpreting the metrics.
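To make the ‘analyse for the buyer’ and ‘report for the brand’ points a little more concrete, here is a minimal sketch in Python (pandas), using invented data and hypothetical column names rather than anything taken from the book: scores are split by brand users versus non-users before any demographic cut, and a loyalty-type score is read against a simple size-based expectation in the spirit of double jeopardy, rather than against a single category norm.

```python
import numpy as np
import pandas as pd

# 'Analyse for the buyer': one row per respondent for a single brand. Report
# users and non-users separately, because past experience with the brand drives
# most of the difference in scores, not demographics.
respondents = pd.DataFrame({
    "is_user":  [True, True, True, False, False, False, False, False],
    "agrees":   [1, 1, 0, 1, 0, 0, 0, 0],   # agreed with some image statement
    "age_band": ["18-34", "35-54", "55+", "18-34", "18-34", "35-54", "55+", "55+"],
})
print(respondents.groupby("is_user")["agrees"].mean())   # the split that matters
print(respondents.groupby("age_band")["agrees"].mean())  # often just mirrors the user mix

# 'Report for the brand': compare each brand's loyalty-type score with the
# category-wide relationship between size and loyalty (double jeopardy),
# rather than with a single blanket norm.
brands = pd.DataFrame({
    "brand":       ["Big", "Mid", "Small", "Tiny"],
    "penetration": [0.45, 0.25, 0.12, 0.05],  # share of category buyers who use the brand
    "loyalty":     [0.62, 0.55, 0.49, 0.44],  # e.g. % of its users who say they would buy again
})
slope, intercept = np.polyfit(brands["penetration"], brands["loyalty"], 1)
brands["expected"] = intercept + slope * brands["penetration"]
brands["vs_expected"] = brands["loyalty"] - brands["expected"]
print(brands)
```

A small brand sitting below the big brands’ raw scores but above its own size-based expectation is, on this reading, doing fine; a big brand sitting below its expectation is the one that should worry.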

You said that brands look too much at characteristics, like age and economic state, when the most important factor is whether someone is a buyer or a non-buyer. But if you have lots of non-buyers among certain demographics, doesn’t that suggest that characteristics do play a part?

Not necessarily. You’ve got to remember that young people, old people, they have very similar brains. There’s a little bit of difference when you get older, but how people access and process and use information is very similar. What changes is what they’ve been exposed to. So if your media plan has actively avoided reaching older buyers, they’re going to have weakened memory structures, and that’s probably going to translate to weaker sales as well.

So, often when we see differences in age or gender, we’ve artificially caused them because of how we have done our marketing.

Which measure, if it starts to decline, should set alarm bells ringing for CMOs or brands?

It’s not just about whether the measure declines, it’s from whom. The thing about metrics is having the right context to interpret them. I no longer think about blanket metrics. I think about metrics according to buyers and non-buyers and size.

So, I would say share of mind among buyers for big brands, and mental penetration among everybody – but particularly non-buyers – for small brands. If they went down, I would be a little bit worried.
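For readers unfamiliar with those metrics, here is a rough sketch of how measures like these can be computed from a respondent-by-brand-by-category-entry-point grid. The data, brands and entry points are invented, and the book’s exact operationalisation may differ; the point is only the shape of the calculation: mental penetration as the share of category buyers who link a brand to at least one entry point, and share of mind as the brand’s share of all such links.

```python
import numpy as np

# links[r, b, c] = 1 if respondent r linked brand b to category entry point c.
# Rows are four invented respondents; the two brands are "X" and "Y"; the four
# columns stand for invented entry points such as 'quick lunch' or 'late night'.
links = np.array([
    [[1, 0, 1, 0], [0, 0, 0, 0]],   # respondent 1
    [[0, 1, 0, 0], [1, 1, 0, 0]],   # respondent 2
    [[0, 0, 0, 0], [1, 0, 1, 1]],   # respondent 3
    [[1, 0, 0, 0], [0, 0, 0, 0]],   # respondent 4
])

for b, brand in enumerate(["X", "Y"]):
    brand_links = links[:, b, :]
    # Mental penetration: share of respondents linking the brand to at least one entry point.
    mental_penetration = (brand_links.sum(axis=1) > 0).mean()
    # Share of mind: the brand's share of all brand-entry-point links made in the sample.
    share_of_mind = brand_links.sum() / links.sum()
    print(f"Brand {brand}: mental penetration={mental_penetration:.2f}, share of mind={share_of_mind:.2f}")
```

Both figures can then be split by buyers and non-buyers of each brand, in line with the ‘analyse for the buyer’ principle above.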

In the book, you talk about a common misconception people have about brands and you use an analogy about Uber drivers and packages. Could you explain that misconception and how it shapes marketers’ attitudes and actions?

Basically, what I said was that the brand is not the package, the brand is the Uber driver delivering the package.

This is partly what led to the whole research I did on category entry points. I noticed that brand attributes in brand health trackers were all about the brand – is the brand trustworthy? Is the brand reliable? And I thought, ‘Okay, I sort of understand why that might be useful. But what about the buyers and their lives? Where does this fit in?’ Because buyers don't buy a brand because it's trustworthy. They buy a brand because they're hungry or for a whole heap of other reasons. Don't get me wrong. If you're not trustworthy, they might not buy you, but then they’ll buy a different brand.

Think of something like what Tesla’s gone through. It used to be like, ‘Oh, it’s the best electric car’. And then they had a few slip-ups, and now there are more competitors, and people are going, ‘Maybe I’ll go for something else because I’ve now got doubts about whether or not it will do what it’s supposed to do.’ But it wasn’t that they were buying it before because it was absolutely the most trustworthy thing. They just didn’t have doubts about it.

And so when we make a tracker about brands and not about buyers, that [assumes] the destination is the brand.

So it’s a bit like an update of the old adage that customers don’t want a hammer…

…People don’t want a drill bit, they want a quarter-inch hole or whatever it is. Yeah, it’s a bit ‘Marketing 101’, but I think we often get caught. I've had the same discussion about distinctive assets. People often get really excited and think their goal is to build distinctive assets, and it’s like, ‘No, distinctive assets are tools that you use to brand’.

Your goal is not to build distinctive assets, your goal is to build good branding tools that you can then use to make sure that your house doesn’t flood… that’s where my analogy breaks down.

It seems that unprompted brand awareness gets a bit of short shrift in the book. Why is that?

Unprompted measurement, in general, isn’t particularly good, whether it’s brand attributes or brand awareness.

Top of mind awareness is particularly bad because it has all the bad biases. It is biased against the audience that we most want to know about (non-users) and against small brands.

A bit of history: those measures came into use in the 1960s, when we started measuring things about brands. But our whole concept of memory – associative network theory, our understanding of how retrieval works – wasn’t properly integrated into marketing thinking until the 1980s. So we’ve held on to this measure that was [created] at a time when we knew very little about how memory works.

Prompted brand awareness has a role in terms of knowing if a brand is a member of a category or not. So, how many non-buyers of your brand who are buyers in the category know that you offer something in that category? Because if they don’t know you offer a particular product, you’ve got no chance of being bought.


So there’s no benefit to being the first brand that people recall when they’re asked to list brands within a category?

Yeah, why would there be? Wouldn’t it be better to be the last brand thought of than the first?

To get the benefit of some sort of recency bias?

Yeah, exactly. The only reason [being the first brand recalled] makes any logical sense is if you're the only brand [recalled]. But even then, sometimes people will go, ‘Oh, I can only think of this. I'll go on Google and find something else.’ 

Our working memory – that’s the main part of our memory where we hold ideas in our head – can hold more than one concept at a time. The magic number is four, plus or minus three. So the maximum is thought to be seven, but realistically, three or four is probably all you can hold in your head. Usually, after about three or four, we start bunching information together. When someone gives you an eight-digit phone number, it’s much easier to remember it as four sets of two-digit numbers because holding eight numbers in your head in order is really tough.

I sort of joke that it would be better to be last [in a list of recalled brands], but it actually doesn’t really matter. The reality is that once they are in your brain, you can choose between any of them.

There are some cases where [being first] is better but not when it comes to memory. And the nature of associative networks is that once we start remembering something, we remember other stuff as well. Our brain just doesn’t shut off after one recall. So it just didn’t make any logical sense to me, this primacy of top of mind.

Are there any generalities about how memory works that you’ve discovered in your research that are relevant to marketers?

The thing to remember is, people never give you everything that’s in their memory. Even if they wanted to, they can’t. Our ability to access things from our long-term memory is limited, and we don’t even realise it. So the idea that you’re getting a census of any part of memory is…there’s no evidence to support that. So we want to make it as easy as possible for people to give us associations.

Is there anything to be said for the concept of brand love?

I don’t know why people keep persisting with these things. Well, I do know why – they’re psychologically comfortable. It’s nice to think that you’re working on a brand that's loved.

What we’ve got to be wary of is that these things come around every 10 years or so. If you look back 10 years ago, people were talking about brand relationships and using marriage as an analogy – that consumers marry brands and have affairs on the side with other brands. We’ve gone through this so many different times [with] the same old concepts that have been shown to have no basis. Can we just get a bit smarter and out of this spiral?

There’s a lot of information available online. Is it possible to build a working picture of your brand’s health just by measuring this stuff?

We did look at this because, you know, if there’s some way to get out of interviewing people, that would be great. But the big problem we have is that our lives are not online – only a small, biased part of them is.

So much of our day is ordinary and mundane and still involves brands, but it’s not something that we share online.

Even the vast majority of word of mouth is still offline; it’s between two people having a conversation.

So until the online world becomes less biased, I can’t see how we can do it.

Now, there are also some other problems with things like attribution. When you’re talking about word of mouth, it has two sides to it – the giver and the receiver. You need to know about both to understand how impactful that can be. Often online you don’t understand who the giver is. You also don’t understand who the receiver is. So it’s really hard to judge how useful that is for the brand, just from an online context.

Having said this, I saw Daniel [Hochuli] from LinkedIn had posted a thing where he tried to get category entry points using ChatGPT, and I will say it was better than I expected it to be. He did write a good prompt, so I think that was part of it. And he was choosing [a product] like CRM software, which I imagine is something people do search a lot for, because it’s a big purchase. I’m not sure it would work for, say, toothpaste.

So you could maybe use [AI] to generate potential category entry points. But that’s very different from understanding how you’re performing on them, and I can’t see any way that you can do that using current online data. So there are some areas where there’s potential, but more about generation of ideas than quantification, or any form of formal measurement.

You say in the book that brands should not assume that CSR or purpose-y attributes are relevant without evidence. In your experience, are they especially predictive of anything?

I’m not a subscriber to individual attributes [as] drivers. I think a lot of our choice is driven by the accumulation of knowledge.

That particular comment [about CSR], if I remember rightly, was made in the context that sometimes people in brand attributes [surveys] will put things like, ‘is a socially responsible brand’ and ‘cares for the community’.

Sometimes you might want to put them in because you’re reporting on it to the board or whatever. And they can be category entry points. But if you ask consumers most of the time, in most categories, [CSR] will not be a key driver of purchase. But that’s not a reason not to do it. Just because it might not be the primary thing consumers want from the category doesn’t mean you shouldn’t create sustainable brands and use as few resources as possible. That to me is a different conversation because that’s about being a good corporate citizen and ensuring that you can have a business because there is an environment, a society and an economy 100 years down the line.

We see this in all of the ‘do good’ type things. If you take something like healthiness, there are some food categories where it’s really high, and some where it’s really low. If you’re in one of the categories where it’s really low, you can keep making your food better but just be aware that, in that category, that’s not what people are going for first.

Some of our best work on category entry points has been highlighting to companies where they’ve been winning the battle but losing the war. They’re doing really well on a category entry point, but it doesn’t come up very often for many people. And if that’s the case, you’re wasting a lot of messaging real estate on something that’s only going to get you a limited return.
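As a toy illustration of that ‘battle versus war’ check (with invented numbers, not figures from the Institute’s work), a brand’s performance on each category entry point can be read alongside how often that entry point actually comes up for category buyers; the weighted figure is a rough indication of how many buyers a given entry point can really deliver.

```python
# Toy illustration with invented numbers: for each category entry point (CEP),
# pair how often it arises for category buyers (prevalence) with how many of
# those buyers link it to our brand (linkage). Strong linkage on a rare CEP is
# winning the battle but losing the war.
cep_scores = {
    # CEP: (prevalence among category buyers, linkage to our brand among them)
    "quick lunch":   (0.60, 0.15),
    "with the kids": (0.35, 0.20),
    "late night":    (0.08, 0.70),   # high linkage, but the occasion rarely comes up
}

for cep, (prevalence, linkage) in cep_scores.items():
    reach = prevalence * linkage   # rough share of category buyers this CEP can deliver
    print(f"{cep:14s} prevalence={prevalence:.2f} linkage={linkage:.2f} weighted reach={reach:.2f}")
```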

You talk about being wary of faddish metrics, but are there any new brand health measures that you think are worthwhile and will stay the course?

I’m going to shout out to my colleague, Dr Ella Ward, who’s developed a measure of portfolio cohesiveness that I think is a really interesting way of tackling this. It’s a new empirical approach to doing it, and I think it’s got a lot of legs. I think it’ll be really useful because so many organisations struggle with building cohesiveness, particularly as brands become bigger, and they release more variants.

Is portfolio cohesiveness just what it sounds like? Having a suite of products that all make sense in relation to each other?

Portfolio cohesiveness is about making sure that the visual identity of your brands, particularly in packaged goods, [is consistent].

So often these end up fragmenting as brands launch new products and try to make each one look different from the core, and the portfolio becomes a hodgepodge mess.

Next time you’re in a supermarket, just have a look at a brand and the range of things it offers, and you’ll see there are some that don’t look even remotely related to the others in the family.

So [Ward’s] measure is about identifying and quantifying those, and also, when you’re launching new brands, working out whether they’re adding to or subtracting from cohesiveness. I think it’s an incredibly useful thing to do.
