Episode 44

bonus
Published on:

6th Aug 2024

A Deep Dive on B2B SaaS Reporting - Justin guests on "Beyond the Pipeline"

It’s August and I’m taking some time off this month to travel and recharge a bit. 

But I didn’t want to leave you hanging for a whole month, so I thought it would be a great time to share some recent podcasts where I’ve been featured as a guest. 

First up, we have an episode on Beyond the Pipeline with Vivin Vergis, where we do a deep dive into reporting for B2B SaaS orgs. 

This is a tough, thorny, sometimes painful topic, but Vivin asked some really great questions and we explore how to tell better stories with data, create a culture of objectivity, prioritize ad-hoc requests, and a whole bunch more. 

Let’s dive into the episode.

--------

In this episode of Beyond the Pipeline, host Vivin welcomes Justin Norris, Director of Marketing and BDR Operations at 360Learning and host of the RevOps FM podcast. Justin shares his journey into operations, transitioning from an English major to a pivotal figure in marketing operations. They dive deep into the challenges of reporting in B2B SaaS, discussing concepts like reporting fatigue, the importance of storytelling in data presentation, and handling impulsive reporting requests.

Justin emphasizes the need for a cultural shift towards objective data analysis and the role of ops in being accountable for business performance. Tune in to gain valuable insights on managing reporting requests, addressing cognitive biases, achieving a single source of truth, and avoiding reporting fatigue in B2B SaaS.

Timestamps:

[00:02] Introduction and Justin’s Journey into Operations

  • Justin shares his unique path from being an English major to becoming a pivotal figure in marketing operations.

[03:32] Reporting Fatigue in B2B SaaS

  • Discussion on the challenges of data overload and how reporting fatigue sets in within organizations.

[07:22] Storytelling with Data

  • The importance of creating a narrative around data and how effective communication can alleviate reporting fatigue.

[08:23] Handling Impulsive Reporting Requests

  • Strategies for filtering and prioritizing reporting requests from different teams to avoid unnecessary work.

[14:38] Enabling Self-Serve Reporting

  • Tips on empowering teams to generate their own reports and the role of ops in making tools accessible.

[19:35] Common Reporting Tools and Their Limitations

  • Comparing the effectiveness of tools like Salesforce, Looker, and Tableau for self-serve and advanced reporting needs.

[27:58] Cognitive Bias in Reporting

  • Addressing the impact of biases like confirmation bias in reporting and the importance of maintaining objectivity.

[35:45] Taking Action on Data Insights

  • The critical role of follow-through on data insights and establishing a feedback loop for continuous improvement.

[39:49] Achieving a Single Source of Truth

  • Challenges and strategies for creating a single source of truth in organizations and the trade-offs involved.
Transcript
Vivin Vergis:

Welcome to the seventh episode of Beyond the Pipeline podcast.

2

:

Today's episode is all about getting

reporting right in B2B SaaS companies.

3

:

We've all come a long way when it comes

to reporting, thanks to some cutting

4

:

edge tech out there, but like anything

else, it's not the tech, but the people.

5

:

People and the process behind the tech

that really determine if you're getting

6

:

the right data and the right insight.

7

:

To discuss this with me today is

Justin Norris, director of Marketing

8

:

Ops at 360 Learning and host of the

Match acclaim, rev Ops FM podcast.

9

:

This is a topic that I'm sure a lot

of you all in ops would relate to.

10

:

I certainly did, and it's definitely

helped me structure a lot of my

11

:

thinking around building reports.

12

:

Let's get right into it.

13

:

Justin, welcome to the show.

14

:

So glad

15

:

Justin Norris: to

16

:

Vivin Vergis: be here, Vivan.

17

:

Justin Norris: Thank you for having me.

18

:

Vivin Vergis: Yeah, my pleasure.

19

:

Justin, like every other episode,

I started with a question,

20

:

which is not really the topic.

21

:

And the question is very simple is how

did you end up in operations, right?

22

:

And it's a story that is unique

for every guest and would love

23

:

to hear your part as well.

24

:

Justin Norris: I think like

many operations folks, I

25

:

ended up here accidentally.

26

:

Uh, I I was an English major by trade and

learned fairly quickly that, uh, you've

27

:

limited options outside of academia with

English in terms of, you know, a set

28

:

career path that it prepares you for.

29

:

So I went into copywriting, I thought

I'd try my hand at business copywriting

30

:

and from there got into marketing,

was very interested in marketing.

31

:

The psychological aspect,

understanding customers.

32

:

So really grounded myself in that

and then moved into a startup where I

33

:

was the third employee, uh, the first

marketing hire and really wearing all

34

:

the hats from a marketing point of view.

35

:

You know, re skinned the, the

product, helped hire SDRs, tooling,

36

:

systems, demand gen channels.

37

:

So I was doing it all, which was a

great sort of bootcamp and education and

38

:

all of the fundamentals of marketing.

39

:

But I found that I kept being inclined

towards systems, you know, like

40

:

I just loved getting my hands on

tools, got my first Marketo instance.

41

:

This was, you know, well over, uh, it

was 13 years ago now, I would say, and

42

:

just really was, was drawn in that area.

43

:

Even though operations, marketing

operations, at least wasn't even a thing

44

:

that I could articulate at that time,

but I kind of fell into it that way.

45

:

And then at a certain point in time,

you know, you hit a threshold of

46

:

like, How much you can contribute

within the organization that you're

47

:

in, within the place that you're in.

48

:

So at that point in time I said, uh,

let me go over to the consulting side

49

:

and, uh, and really specialize at this,

which I did for about seven years.

50

:

And then at another point in time

I was like, actually now I'd like

51

:

to be closer to the business again.

52

:

So I moved back in house from there.

53

:

Vivin Vergis: Got it.

54

:

Yeah, I think I think one thing

that's common is everyone starts

55

:

off in a startup doing everything

where operations is just one part.

56

:

And that's how they

stumble upon operations.

57

:

And that's how I did as well,

where you end up doing everything

58

:

operations is just a part.

59

:

And then, you know, you see the amount of

gains that you can do with and actually

60

:

operations is One area that not really

everyone wants to get into and, you know,

61

:

I think it solves a lot of problems.

62

:

These are problems that no one else

wants to solve and you probably want to

63

:

get into and, you know, start looking at

tools, processes and start solving them.

64

:

And that's, that's what

really got me into operations.

65

:

Justin Norris: And how else do you know?

66

:

I mean, I, I do think that's the

great thing about being in a startup

67

:

is that, uh, You really have no

idea when you start your career, you

68

:

start working, like what you actually

like, you might have some thoughts.

69

:

Maybe some people have a set

career path, like doctor,

70

:

lawyer, accountant, or whatever.

71

:

But if you're just in this general

mix of businessy stuff, until you

72

:

try your hand at things, you don't

really know what you love doing.

73

:

And then you find those things.

74

:

So yeah, it's great to

have that opportunity.

75

:

Vivin Vergis: All right.

76

:

So moving on to the topic

of the day, Justin, which is

77

:

reporting in B2B SaaS companies.

78

:

And, uh, you know, I think there's

no doubt that data and reports,

79

:

they're paramount when it comes to

making decisions in any organization,

80

:

not just B2B SaaS companies.

81

:

But I feel like nowadays there is a lot of

data and reports are being thrown around.

82

:

I feel like there's almost like an

information overload or maybe, you know,

83

:

what we call like a reporting fatigue.

84

:

When do you think the sets in and

in my personal opinion, I've seen

85

:

this in, you know, early stage

companies where you have to go to

86

:

two, three tools to find your reports.

87

:

You have multiple.

88

:

Teams coming in with different

standards of reports and, uh, you don't

89

:

really have a single repository and

even to get the easiest of answers,

90

:

it becomes very difficult, right?

91

:

So do you see reporting

fatigue setting in?

92

:

And when do you think, or when

do you put a stop to generating

93

:

reports day in and day out?

94

:

Justin Norris: I don't think we can

ever stop the generating of reports,

95

:

but I think the way that we communicate.

96

:

And share that information within the

company, uh, can have a big impact

97

:

on that, that feeling of fatigue.

98

:

I'm an English major, as I said, so

I am not a person who can look at

99

:

a dense slide covered with KPIs and

just like, understand it instantly.

100

:

I can, I can find my way, but

it takes an effort for me.

101

:

I work with people who can

look at those dense slides and

102

:

be like, Oh yeah, and see it.

103

:

And I, I envy that ability,

um, but it takes effort for me.

104

:

But I, I think even for me, For most

general business consumers, that approach

105

:

of just like dense KPIs, slide after

slide, after slide, we go to sleep.

106

:

It's very boring.

107

:

It isn't helpful typically.

108

:

Uh, there's no sense of

priority or hierarchy, uh, to

109

:

information a lot of the time.

110

:

There's a long time, very well known

data thought leader, Abhinash Kaushik,

111

:

worked for Google, or maybe still does.

112

:

He talks a lot about data puking, where

you're just like, and it's sort of a

113

:

disgusting analogy, but it's accurate.

114

:

So I think the job of the skilled

analyst, or the business communicator,

115

:

is to create that story around the data.

116

:

And I almost liken it to Archaeology,

because if you think about archaeology,

117

:

you know, we go into the ground or we

look at all sorts of, uh, different data

118

:

points, ice cores, pollen samples, you

know, there's lots of different things you

119

:

can look at in archaeology, but you, you

start with those facts and then you need

120

:

to paint a picture, and if you think about

like National Geographic documentaries

121

:

or those sorts of documentaries that

are made for a mass audience, they do

122

:

a really good job Not necessarily the

accuracy of how they interpret facts I'm

123

:

talking about here, but just creating

a story that the average person can get

124

:

into and can understand, uh, or at least

that vision or that picture of the past.

125

:

So I think I would liken that to what the

analyst or the business communicator needs

126

:

to do with their data to tell that story.

127

:

Focus on a few core KPIs.

128

:

You can do a deeper dive into things to

illustrate a problem or highlight how

129

:

you arrived at something, but it has to

have that storytelling framework in mind.

130

:

And when I think of the people that I

work with that are the most effective at

131

:

doing this, my boss is really good at it.

132

:

Uh, our head of marketing.

133

:

Her background is product marketing,

you know, so storytelling, uh,

134

:

or our COO, he's from management

consulting, former McKinsey.

135

:

So again, very strong on communication.

136

:

So I think it's, that is the ability

that's critical and that can then

137

:

alleviate that fatigue because I think the

fatigue comes from just being bombarded

138

:

by numbers without context, without story.

139

:

Vivin Vergis: Yeah, absolutely.

140

:

I think that also connects to the

way an ops person grows, right?

141

:

Because if all you're doing is creating

reports on HubSpot and Salesforce and

142

:

just giving it to people to, you know,

analyze and get insights out of it, you're

143

:

just being the person who builds those

reports and not able to analyze them.

144

:

Right.

145

:

And I think that's also like

anyone who's out there listening.

146

:

I think if you're not able to analyze,

give stories out of the data that

147

:

you're preparing for your team.

148

:

I think that just keeps you

at a very low level at ops.

149

:

I think the next level of ops where

you need to really build is, you

150

:

know, creating those docs where you

really have a story built out and

151

:

sharing those with the consumers

of the report, mostly leadership.

152

:

And if they do have doubts, then

the second level could be the raw

153

:

data that they probably want to

dive into and get more insights on.

154

:

I think, yeah, I mean,

that's a great call out.

155

:

And I think.

156

:

That's something that I've learned

over the due course of time as well.

157

:

All right.

158

:

So Dustin, I think the next bit that I

really want to get to is as ops folks,

159

:

all of us get hit with a lot of requests

from across different teams, right?

160

:

Especially within marketing, especially

if you're marketing ops, you have

161

:

content reaching out, you may have.

162

:

The digital team reaching out

to you and a lot of other teams

163

:

reaching out to you to prove ROI

of their efforts and initiatives.

164

:

And that's okay, right?

165

:

Because as ops folks, you're the center of

the platforms and systems and reporting,

166

:

and that's okay for people to reach out.

167

:

But I feel like some reporting might

be very impulsive in nature, right?

168

:

Someone somewhere is probably talking

about, Hey, you know, this data

169

:

point would be really great to have.

170

:

And that data you need, or that

request immediately hits ops

171

:

and ops starts working on it.

172

:

Now these impulsive or these.

173

:

Reports are used.

174

:

One time and then never used again,

impulsive reports that are probably

175

:

required at that point in time and

not even looked at when the report

176

:

is shared as an ops person, what

are the right kind of questions you

177

:

should ideally be asking to filter

the requests that come through, right?

178

:

Is it even worth your time to

be looking at all the reporting

179

:

requests that comes through?

180

:

Justin Norris: Yeah,

that's a good question.

181

:

I think there's different types

of requests and it's important to

182

:

understand sort of what a domain of the

business or the request is related to.

183

:

One of the core things we all have as

a business, right, is this operational

184

:

rhythm of, uh, regular, repeatable

reporting, like things like funnel

185

:

metrics, channel performance, revenue,

that happens, you know, on a weekly,

186

:

bi weekly, monthly, whatever, cadence.

187

:

And those things should

really be standardized.

188

:

And they should have, you know,

the right drills so you can go

189

:

in deeper into the information.

190

:

And you don't want those things

changing too regularly because I

191

:

think the predictability and the

familiarity there is important.

192

:

So that type of reporting, I think of

it as a product in the sense that, uh,

193

:

it is, uh, something that ops builds

and maintains for the organization.

194

:

It has new features.

195

:

So maybe it's a feature

request for that product.

196

:

So in that case, it's, you know,

how important is it, how urgent

197

:

are we actually going to use it?

198

:

You know, you, you stress test it

in all those ways, uh, and should it

199

:

be incorporated into that product?

200

:

The second The way I might think about

it is, or the second sort of domain

201

:

that a request could relate to, in

my mind, is performance management.

202

:

So this is a case where

something is wrong, something

203

:

is broken in the revenue engine.

204

:

It needs to be fixed.

205

:

And the team is digging deeper than

usual to try to isolate that problem.

206

:

And this is an area where I

think, at least at the scale of

207

:

organization that I typically work

at, like the startup scale up.

208

:

Space, you want to be very reactive.

209

:

You don't want to stand in the way and

be, you know, you know, we're behind on

210

:

opportunities this month on pipeline, but

I don't think I can prioritize, you know,

211

:

it's, it's not a, it's not a good look.

212

:

It's not a good career.

213

:

I don't think it's the

right business decision.

214

:

We're behind.

215

:

It has to be a hands on.

216

:

And then, ideally, these can

become standardised over time.

217

:

And then, ideally, these can

become standardised over time.

218

:

So as you go through that routine

a few times, say, all right, if

219

:

performance is down, this is the

20 step process that we follow.

220

:

We look, you know, we navigate

down through all the different

221

:

layers of the funnel.

222

:

And so we can standardise those reports

as well and make those less ad hoc.

223

:

And then I guess the third domain

or the third type of request is

224

:

more, I think, kind of like what you

were alluding to in your question.

225

:

Kind of innovation type requests.

226

:

These are like the, the what if, or

I wonder, or we're blue skying it.

227

:

They're new initiatives.

228

:

They're new ideas.

229

:

And those can have a wide range of

urgency and impact associated with them.

230

:

Could be a report that maybe never gets

even looked at by the time it's built.

231

:

The person's already moved on

in their mind to something else.

232

:

And so you, you do need to be

really rigorous there, like

233

:

you said, to evaluate that.

234

:

And.

235

:

I would look at, you know, timeline.

236

:

Is this, is there an upcoming event?

237

:

And we want to pull a report of a

certain type of prospect that's at that

238

:

event, so that we can do some outreach.

239

:

And there's a very specific

need associated with it

240

:

that has a clear outcome.

241

:

Makes sense.

242

:

If it's just like a general, what

if, it's hard to understand, um,

243

:

who's asking for it plays into it.

244

:

Quite frankly, there's, there's always

that aspect that needs to be considered,

245

:

but ultimately, you know, you, you, you

perform some kind of impact analysis.

246

:

What decisions are we going to make?

247

:

What activities are we going to do?

248

:

As a result of that info, and

then if it can't be done right

249

:

away, you know, you don't have to

say no, but you can backlog it.

250

:

And quite often that backlogging

is a forcing function by

251

:

the time you get back to it.

252

:

So is this still relevant in the,

like, actually, no, I'm, I'm good.

253

:

So sometimes that allowing it to

mature a little bit can be helpful.

254

:

Vivin Vergis: Yeah, I got it.

255

:

And I think, uh, you know, to your

point where, you know, you end up.

256

:

Saying no, or let's say deprioritizing

it for some reason or the other, right?

257

:

I think enabling self serve

reporting is also a great way to

258

:

ensure that, I mean, it's a win win

situation for both parties, right?

259

:

Because one, it reduces

dependencies on ops.

260

:

Second, you get your data

faster to get moving, right?

261

:

And while, you know, self serve is

great for both the sides, I think it's

262

:

mostly an ops to enable that, right.

263

:

In order to make sure that, and, and

a lot of things, practically what I've

264

:

seen is people are not really good with

tools and it's not their fault, right?

265

:

Maybe the tool by itself is intuitive,

but the way that you've set up

266

:

the data architecture within the

tool might be so complex that it's

267

:

probably hard for people to understand

which property do I need to pull?

268

:

Which object is it that I need to create?

269

:

What kind of report types I have?

270

:

It becomes very difficult for a very.

271

:

normal person who's probably used

bare minimum of the tool to understand

272

:

how to create reports or how to

look at data within the tool, right?

273

:

So what would be, let's say, you know,

and it mostly comes down to enablement,

274

:

but if you do want to start, if you're

just stepping into a company, you want

275

:

to start enabling users to do many things

on their own, including reporting, where

276

:

do you start in terms of enablement?

277

:

Justin Norris: Yeah,

that's such a big question.

278

:

I mean, I agree with you

that self serve is the ideal,

279

:

particularly for more mature.

280

:

What you want to avoid.

281

:

I don't know if you have those, uh, self

checkout things at the grocery store,

282

:

you know, where they have, you have

just like a screen and as a shopper, a

283

:

grocery shopper, you can just like check

your own items, bag, your own items.

284

:

The challenge I see with those

is that they're always breaking.

285

:

They're never working quite well.

286

:

And so there's always like a store

employee there that's constantly

287

:

having to go between the different self

checkout things and like help people.

288

:

They're frustrated.

289

:

So it's like we're trying to self

serve, but ultimately probably

290

:

they're still saving some time.

291

:

And if you just have one

or two items, it's fine.

292

:

Uh, but they're not fun to use and

they're frustrating and it still

293

:

involves a lot of time from the employee

in that case to go around and solve

294

:

problems and, you know, resolve issues.

295

:

So you have to make sure that whatever

system you're setting up actually

296

:

does enable true self service.

297

:

And you have to know what is the

escalation point where you bring in

298

:

an analyst or the data team or, or

some, you know, Resource above that,

299

:

where the self serve isn't appropriate.

300

:

The few things I think about in terms

of actually enabling that to happen.

301

:

So number one is like, where do you do it?

302

:

I don't know if this is a controversial

take or not, but a lot of people like

303

:

to rag on Salesforce reporting, but I

think it has one of the most powerful

304

:

reporting engines has ever built.

305

:

A lot of organizations, at least the ones

that I've worked with are on Salesforce.

306

:

I think that is probably the ideal

starting point for, for self serve.

307

:

You know, I've worked with like tools like

Looker on and off for more than 10 years.

308

:

I've yet to see one where, uh, anyone

beyond, you know, an analyst type

309

:

profile, um, Could use it comfortably.

310

:

Just something to do with

bumping into limitations.

311

:

Still the interface feels technical.

312

:

Even most sales people or sales

leaders can create a Salesforce report.

313

:

Plus you're close to the data.

314

:

That's another key point.

315

:

If you ever had the experience like

looking at a report and you're like, I

316

:

want to see what's behind that number

and you click and like nothing happens,

317

:

it's a very frustrating experience.

318

:

Uh, and Salesforce, usually

everything is drillable.

319

:

You can go down to the row level.

320

:

You have that trust that I understand

what this data is comprised of.

321

:

Uh, you, users are already working there.

322

:

You can go into a record and take actions.

323

:

The gap between action and insight.

324

:

It's very close.

325

:

So I actually start there for

at least for most organizations,

326

:

obviously there's limitations in

terms of bringing in other data.

327

:

Uh, and so then there's like

the hard skills aspect, like how

328

:

do I build a Salesforce report?

329

:

And I think that's the easiest

part of the problem to solve.

330

:

Because there's trailhead courses,

people can learn how to do that.

331

:

The challenge beyond that,

and you touched on it.

332

:

And then the last one is understanding the

data that you're actually working with.

333

:

And this one is definitely, uh, well,

there's some shared responsibility,

334

:

but I think this is an ops problem.

335

:

We need to get those

irrelevant fields out there.

336

:

You know, most people work in a Salesforce

org that's now over 10 years old.

337

:

I do.

338

:

And.

339

:

Again, the archeology example, you

know, there's different layers, like

340

:

you can almost see fields related

to different periods of the business

341

:

where people had a certain idea or

vision of how they wanted to operate.

342

:

They created fields for that

and then they changed their

343

:

minds three or four years later.

344

:

New people came in, they

created other fields.

345

:

And so all that history is just there.

346

:

And if you're a user that's like.

347

:

I don't know, region or number

of employees or country.

348

:

Like you can have like six different

fields for each of those data points.

349

:

You have no idea how they're populated.

350

:

No idea where the data comes from.

351

:

No idea which one is accurate.

352

:

So we need to do our

job and clean that up.

353

:

Um, we need to ensure that

labels are up to date.

354

:

Like I work for a French company.

355

:

So some of the older fields, the

labels are actually in French.

356

:

I can figure it out.

357

:

I can translate them, but there's

significant friction there.

358

:

So having up to date labels, clear

nomenclature, description fields, help

359

:

fields, having that actually populated,

um, our team now does a great job of,

360

:

uh, Whenever a new field is created, they

will link in the description field back

361

:

to the original request so we can get all

that business context, uh, surrounding

362

:

that field and ultimately having a

glossary for users and then knowing how

363

:

to filter as well as like the next thing.

364

:

And this is the risk with

self serve reporting.

365

:

You've probably seen this too, where

like five different users, like.

366

:

Um, and then they're like,

Hey, my report is broken.

367

:

It doesn't add up.

368

:

And of course it's because they've

all filtered on region in a slightly

369

:

different way or filtered by a team

or they don't know how to identify

370

:

new versus repeat business in the

same way or opportunities credited to

371

:

marketing your sales in the same way.

372

:

So I think the distillation of

that is if your data is a mess.

373

:

Nothing works.

374

:

And then creating like a safe environment

where everything's kind of labeled.

375

:

Everything is clear, um, and then just,

you know, enabling the users on the

376

:

tool, which I think is the easiest part.

377

:

Vivin Vergis: Got it.

378

:

And the other thing that I would probably

add is a lot of the requests that I get is

379

:

from, Possibly already a report when the

request comes in, just that the person is

380

:

not aware that the report already exists,

that addresses the same use case, right?

381

:

So I think just like a data glossary,

you could probably have a report

382

:

defining what each report does.

383

:

Justin Norris: Absolutely.

384

:

We just, it's funny.

385

:

I just, uh, built.

386

:

That as I, my colleagues on the

sales ops side, they'd created

387

:

like a reporting library, which

I thought was an amazing idea.

388

:

Um, so we just did that for the BDR

side and yeah, it creates that clarity

389

:

of like, here are the list of reports.

390

:

Here's what we maintain.

391

:

Here's what we are responsible for.

392

:

You go and build something else.

393

:

That's great.

394

:

But we can't, we can't own

the, the accuracy of that.

395

:

Vivin Vergis: Yeah, got it.

396

:

And to your point where you were talking

about, you know, building dashboards and

397

:

reports out of, you know, Platforms like

local studio or tableau, you know, sure.

398

:

It's not something that you can double

click into and, you know, figure out

399

:

what's underneath the data, but do

you think at what point do you think

400

:

a company can move from, let's say

system related reporting, like HubSpot

401

:

Salesforce into a more complex system

of Snowflake, Fivetran, Tableau, you

402

:

know, all the SQLs, do you think it's

an upgrade or a downgrade or at what

403

:

maturity level Do you think a company

should probably think of moving into

404

:

a more complex reporting system?

405

:

Justin Norris: I think it probably has

still has to happen at a relatively

406

:

early stage because despite all the

great things that I just said about Um,

407

:

Salesforce reporting, you inevitably do

run into limitations, whether that's with

408

:

the data model or your product data that

you want to splice it with is not there.

409

:

And you don't necessarily want to be

pushing all of that data into Salesforce.

410

:

So I think it's probably as early

as you can get the skill set

411

:

internally to build and maintain that

infrastructure, which realistically 100

412

:

people, if not a little bit earlier.

413

:

It's never been easier

to build out that stack.

414

:

I mean, you can literally go in

less than a day and spin up five

415

:

Tran and snowflake and Tableau, um,

spend a bit of money, but you can

416

:

get it all going and start bringing

in data and stitching it together.

417

:

Like it's just never

been easier to do that.

418

:

And that was not the case like eight

years ago, eight or nine years ago.

419

:

And I remember really thinking about

like, Oh, I just, because I was in the

420

:

consulting side is I would love if we

could offer like a cloud based completely

421

:

cloud based sort of BI solution.

422

:

And those tools were kind of out

there, but it was not nearly as common.

423

:

As it is right now, but you need,

you need that maturity to be able to,

424

:

to be able to build those reports.

425

:

And so probably the way I think a

bit, if we come back to like the

426

:

different domains, the operational

rhythm reporting in those dashboards,

427

:

probably living off of the warehouse

and living in snowflake and some ad

428

:

hoc reporting potentially there, but

more often than not for like truly.

429

:

And then having a data governance

process and, and team and sort of

430

:

council that makes sure that the data

is clean and consistent between systems.

431

:

Vivin Vergis: Got it.

432

:

Got it.

433

:

And just one.

434

:

question is how important do you think

it is for Ops folks to know languages

435

:

like SQL and you know, learning

how to use visualization tools?

436

:

You know, I'm pretty sure there are

a lot of data analysts who's going to

437

:

be, you know, they're helping you out

with these, but do you think it adds

438

:

on to your skill as an Ops professional

to be knowing those languages?

439

:

Justin Norris: I mean, I definitely do. Then again, I guess I don't think it's that important, because I don't really know SQL myself. I know it well enough that every time I need to do something, I'll go and Google it. I guess now I would use ChatGPT to write the SQL query for me. But all those skills are an asset. All those skills make you more dangerous. I think it just depends on what level you want to work at.

And you may find this the same, I'm curious if you relate to this, but I have always found that I flex to fill the gaps that are around me. So if we're like, oh, we need to do this thing and nobody knows how to do it, okay, I will figure that out. But right now I have a strong data team that I work with, so I have no need and no real incentive. And perhaps at the stage of my career that I'm at, it's not my main focus to be adding that to my resume. But yes, I absolutely think it's an asset. Maybe it becomes less necessary with the rise of AI and tools like that, where you might be able to structure your data with natural language queries.

Vivin Vergis: Yeah, I absolutely relate to that part where there is a gap that you have to depend on some other team to fill, and that's something that drives me to start learning. I mean, I didn't know SQL until very recently, when I had to depend on an agency or someone else to help me out with certain queries. Every time I had to reach out to them, it was a pain: the lead time to turn things around, making them understand the business side of what you're really trying to achieve, because they're coming in only with the tech skill, right? So I think the fact that you understand the business, with some skills in terms of tech, even if it's very basic SQL, really helps you turn things around faster, and also keep the context about the business while you're bringing in those technical skills.

Justin Norris: Necessity is the mother of invention, right?

Vivin Vergis: Absolutely. Yes, absolutely. I think the next topic is something that I've thought about quite a lot, and I've experienced this in my previous roles as well: biases when it comes to reporting, right? These might not be biases that you really want to bring into your reports. A common kind of bias I see, and data is something that ultimately depends on who's building the report, is this: if I want to prove something, and I think it's called confirmation bias, if I'm not wrong, if I really want to make sure a campaign looks pretty in the eyes of leadership, I can do it regardless. Even if the core data says otherwise, there's always a way you can make a data point look better than it is. And there are different ways to weave data around the story you're trying to tell. Just like you said, you can build a story around how your campaign worked out really well. I've done it in my previous role as a program manager, where I wanted to make sure my numbers came out right. You're not fudging data, you're not faking data, but you could still say it in a very different way that looks pretty.

As an ops person, I think it's very important to make sure that you don't take sides. You need to bring up the actual picture, because it helps everyone around it. It helps you take the right decisions. It helps leadership understand what's actually happening on the ground, right? How do you, as an ops person, make sure these biases don't creep in? And how do you make sure the right kind of data is being presented?

Justin Norris: Yeah, we're all biased, like you said. And it's funny: look at any social or political issue today, and everybody has data to support their side. You can find data about why you should eat meat. You can find data about why you shouldn't eat meat. On almost every topic there are people on both sides, and they all have their data points.

I think the most important thing is cultural, and you as an ops person can influence it, but honestly it goes beyond just ops. It's: do we have a commitment as a company to objectivity, to rationality, to understanding reality? And is there an ability to put forward different points of view and to dissent without consequences? If you have that, it doesn't mean you necessarily don't have bias, but it means you have a kind of dialectical process through which people can challenge each other. Say, I don't know, this assumption doesn't seem right. What about this? What about that? And you can work towards a shared and hopefully more accurate picture of reality.

If you don't have that culturally, you're going to be in trouble, because you're going to be the person saying, well, what about this? What about that? And then you're going to get shut down by the CXO, or by the CEO, who's only interested in hearing the thing that supports his or her point of view. So finding a company that has that, I think, is really important. You've got to check the environment that you work in, to the extent that you can. Not everybody can these days. It's hard, but to the extent that you can, choose where you're going to invest your time. I feel really fortunate that rationality is a big part of the values and the work methodology where I work. It doesn't mean we always agree, but everyone is free to put forward their point of view and a fact-based perspective on why they see things the way they do.

I think the cognitive bias tends to be in the "so what," because the metrics usually are pretty factual. We have so many leads. We have so many opportunities. Is this good or bad? What are the consequences? Sometimes a KPI can rise or fall and someone may make a big deal out of it and start a fire alarm. And you're like, well, actually, if we look at it in context, the drop is not meaningful, it's not statistically significant. So reasonable people can disagree, I think, is the other thing. But we need that back-and-forth process to sharpen the sword and figure out what the truth is as much as we can.
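That "is the drop actually meaningful" question can be made concrete. As a minimal sketch (the function name and the numbers are illustrative assumptions, not from the episode), a two-proportion z-test is one simple way to ask whether a period-over-period conversion drop is distinguishable from noise:

```python
# Sketch: did a conversion KPI really drop, or is it noise?
# Uses a one-sided two-proportion z-test. Illustrative only.
from math import sqrt, erf

def kpi_drop_is_significant(x1, n1, x2, n2, alpha=0.05):
    """x1/n1: conversions/leads last period; x2/n2: this period.

    Returns True only when the observed drop is unlikely to be
    random variation at the given significance level.
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)            # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # One-sided p-value for a drop, via the standard normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return p_value < alpha

# 12.0% -> 11.1% conversion looks like a dip on a dashboard,
# but at this sample size it is well within noise.
print(kpi_drop_is_significant(120, 1000, 105, 950))
```

With the first pair of numbers the "fire alarm" would be a false one; only a much larger drop (say 120/1000 down to 60/1000) clears the bar.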

Vivin Vergis: Yeah, absolutely agree. On the culture part, I think the bias creeps in because you probably have an unconscious feeling that, hey, if I don't show data that looks good for a particular campaign, I get shot down. Because of that fear, you end up looking for data that makes your campaign, or anything really, look good.

The other part, and I've felt this in a lot of companies, is the reporting structure of ops teams, right? When marketing ops is aligned to marketing and sales ops is aligned to sales, and you have a pipeline problem where you're dipping in almost all metrics, marketing comes up with the story that makes marketing look good, saying, hey, you've done this, and I think sales efficiency is going down. And sales ops comes with a different version of the story, saying that marketing is probably not giving us good quality leads, right? So I think the reporting structure for ops teams, and I probably don't have an answer here, falls apart there, where you're taking sides. Maybe that's where a central ops team, or a central structure where ops reports into a CRO, gives you a holistic view of the organization and the pipeline without taking sides. Do you have a comment there, in terms of structure possibly having a say in these biases?

Justin Norris: Yeah, it absolutely does. The first thing I'll say, on your point about everybody wanting to report data that makes them look good: I think that's true, but there's a weird career hack I've noticed, where you can actually build a lot of credibility and trust by reporting data that isn't favorable to you. I don't want to say data that makes you look bad; obviously you don't want to show up and be like, I'm an idiot, I don't know what I'm doing. It's more that if you're running a program and it's not going well, being the first person to step up and say, we are behind, here's my analysis, here's why, builds so much trust. If someone is constantly all good news and "I can do no wrong," are you really going to trust that person? We all know that nothing is perfect, but if someone steps up to the plate and takes accountability, accountability and responsibility go so, so far. So building a culture where that's normalized is great, and it alleviates a lot of the pressure to be like, I'm right, we're good, you're bad. That's very immature behavior, in my opinion.

And then, coming to the different teams: certainly unification helps, and I am a big believer in that, but I don't work in a centralized ops structure myself, and we don't have that problem. I think that's in part because of the commitment to rationality, and then shared systems and shared data models. If I'm reporting out of Marketo, which has a different data model and a different view of the world than what my sales team and sales ops counterparts use, then we're never going to see eye to eye. When it comes to core business metrics, we have those standardized, and at that point there really is one version of the truth in terms of the facts. The interpretation can vary, but in my experience, and maybe I'm fortunate, we are hardest on ourselves. And I think that's really how it should be, rather than trying to avoid accountability. So again, a lot of it is cultural, and I think it starts there, the more that we talk about that.

Vivin Vergis: Got it. Makes sense. And I think you also alluded to data and insights. Every company has data, and every company has a team that gets insights out of that data. It's almost a commodity right now. What I feel is rare is taking action on those insights: being able to make changes based on them, and also creating a feedback loop between the data team and the team that really needs to act on them. Have you been in any situations where you ask for accountability on the insights that you generate? It can't be that we just keep generating insights and you see no action, no feedback loop in terms of what has happened from those insights, and what more data is required to make them crisper, so that the follow-up action works out well. Have you encountered a situation like that, or have you set up a feedback loop at any time during your career?

Justin Norris: Yeah, definitely. I think this is the single most critical issue. I mean, there are many critical issues, but without this part, the taking action, we don't have any impact. It's all kind of a waste of time. So it takes us to the heart of the matter: what are the roles and responsibilities within that process?

I think prioritization is a really important piece here, coming back to that reporting fatigue question, because we live in this universe of infinite information. There are so many different data points we could look at, and if you have too many things, you won't take action on any of them. It's like, oh, this is interesting, and that's interesting, and you're just kind of wandering around. One of the techniques, I learned it from my COO, but I think it's a technique that's out there, is this notion of a KPI tree, where you look at the core things we're trying to do as a business. All right, we're trying to produce revenue. Well, what are the key things that lead to revenue? We have so many leads, so many opportunities, so many closed-won opportunities. And then you continue to break those down. What leads to an opportunity? Did we follow up with the lead? Did we have a meeting? Did they attend the meeting? You keep breaking it down and breaking it down, and you end up with all the KPIs that are directly related to those core things, in a logical sequence and a logical hierarchy. Once you have identified that these are the KPIs that actually matter, it doesn't mean there's nothing else that's potentially interesting, but these are the ones you need to really own to make the business work.

Then there's an accountability process. I think we've all had the frustrating experience of trying to do this in an ad hoc way. But if it's structured, where you have regular meetings, where you've got to stand up in front of your peers as an executive or a leader or a manager of some kind and say, yes, I did this, or I didn't do that, that's the sort of thing that motivates people to act. Or if their boss is talking about it in a one-on-one, or if they have to present on it at an all-hands. So again, culture: building that culture of shared accountability, and a regular business readout of those core KPIs.

We actually have a philosophy, to some extent, of manual reporting, which again is counterintuitive: having people fill out spreadsheets of things. It's like, well, why, when we have the tool right here? But in doing that, it brings people closer to their data. It creates a certain friction that forces people to look at certain KPIs. There are still frustrating situations, and the functional business owners need to own the action, but I think ops can be a forcing function. And that's another way that ops can, like you said, elevate itself from just being the reporting desk, where it's like, here's your report, kind of like delivering a pizza, to being a more strategic, more impact-oriented function that says, hey, what did we do with this? Is this KPI moving? Are we taking this action? It's a really powerful shift, I've found.
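The KPI tree described above is just a nested "what leads to what" structure. As a rough sketch (the metric names are illustrative assumptions, not a transcription of any real tree), it can be modeled and walked like this:

```python
# Minimal KPI-tree sketch: each node is a metric, its children are the
# drivers that lead to it. Flattening the tree yields the KPIs worth
# owning, in their logical hierarchy. Metric names are illustrative.
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    drivers: list["KPI"] = field(default_factory=list)

    def flatten(self, depth=0):
        """Return every KPI in the tree as (depth, name) pairs."""
        rows = [(depth, self.name)]
        for child in self.drivers:
            rows.extend(child.flatten(depth + 1))
        return rows

revenue = KPI("revenue", [
    KPI("closed-won opportunities", [
        KPI("opportunities created", [
            KPI("leads followed up"),
            KPI("meetings held", [KPI("meetings booked")]),
        ]),
    ]),
])

# Print the tree indented by hierarchy level
for depth, name in revenue.flatten():
    print("  " * depth + name)
```

Breaking revenue down this way gives a bounded, prioritized KPI list rather than an infinite universe of interesting-but-unowned data points.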

Vivin Vergis: Got it. And even when it comes to ops teams, like you said, if you're able to nail down your KPIs, what matters to the business the most, and you gear your insights towards those KPIs, those core metrics that matter, then you can also prioritize your insights, and your effort in finding the insights that affect those outcomes the most, right? Because not every insight is worth acting upon. You might have an insight that seems great, but if it doesn't directly impact those core metrics or KPIs that you're after, then you're probably wasting your effort in finding it. So I think that's also one way to prioritize the way you look at data, or the insights that you're trying to find.

Justin Norris: Yeah. That's great. Totally true.

Vivin Vergis: Yeah. And I think we'd definitely be missing out, especially since we're talking about reporting, if we skipped this topic: the single source of truth. Everyone talks about this, everyone chases this, but very rarely have I found someone finally saying, hey, you know what, we have a single source of truth. In your experience, is it even worth the effort? Sure, you probably don't want to navigate between five or six different tools to get data. But honestly, I feel there's always going to be some level of data that you have to manually report, or go through some number of tools to find. What's the best balance there? Or, like you say, is there a way to do this in a frictionless way?

Justin Norris: The answer is probably a bit different at different stages of a company, right? I've never tried to create a single source of truth at a 10,000-person company or a 100,000-person company, so those are probably completely different challenges. But in the context of a sub-5,000 or sub-1,000 person company, I don't know what the exact number is, but at a company of that size, I think it's an achievable thing. It kind of depends on what we mean, though. If we mean the single source of truth is going to be the magic eight ball that answers any question we ever have, with every data point we could ever possibly want, so that we will never need to look at another source tool, that's probably not realistically achievable. By which I mean, yeah, you could do it, but the cost and effort of doing it would way, way outweigh the benefit.

So, coming back to the different domains of reporting: the operational rhythm, troubleshooting, and then more innovative things. Things that are operational rhythm should definitely be in the single source of truth. And again, it's never been easier to do that on a technical level. Get Fivetran, get Snowflake, get your Tableau or your Looker, and you're off to the races; you can pay with a credit card. Where it gets sticky and challenging is the definitions, the enablement of people on how to use it, and the rigidity of data warehouses, where every new question that is not already baked in now requires data team work. Oh, we've got to build that into our data model. We've got to write that query. So with that power comes a certain rigidity.

I've yet to see a system that can just suck in data and then provide unlimited flexibility, effortlessly stitching together those relationships. Maybe we're going to get closer to that. But today, in my experience, even trying to do something as simple as blending Salesloft data and Salesforce data, saying, I want to bring in all the activities and such from Salesloft, and then I want to bring in all the opportunity information from Salesforce, even that can be such a big project, because there are subtle differences. Oh, a meeting over here is like this, and a meeting over here is like that. APIs are not all completely consistent or logical. So, oh, we can't filter this like this, so now we've got to suck in everything, then go to this other endpoint, get this other data point, and filter by that. It's all doable, but you're spending money and human time doing it. So yes, a single source of truth, but for what? You have to make hard choices and prioritize what you want there, and decide where you say, actually, we're just going to report on this out of the source system, the point tool, and that's okay.
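The blend described above, activity data from one tool joined onto opportunity data from another, looks trivial on paper. A hedged sketch of the happy path (every column name here is a hypothetical stand-in; real Salesloft and Salesforce exports differ, which is exactly where the "subtle differences" pain comes from):

```python
# Sketch: count meetings per account from an activity export, then
# left-join onto an opportunity export so every opportunity survives
# even when no activity matched. Column names are hypothetical.
import pandas as pd

activities = pd.DataFrame({
    "account_id": ["A1", "A1", "A2"],
    "type": ["meeting", "email", "meeting"],
})

opportunities = pd.DataFrame({
    "account_id": ["A1", "A2", "A3"],
    "opp_name": ["Acme renewal", "Globex new biz", "Initech upsell"],
    "amount": [50_000, 120_000, 30_000],
})

meetings = (activities[activities["type"] == "meeting"]
            .groupby("account_id").size()
            .rename("meetings").reset_index())

blended = opportunities.merge(meetings, on="account_id", how="left")
blended["meetings"] = blended["meetings"].fillna(0).astype(int)
print(blended)
```

In practice the hard part is upstream of this join: agreeing on what counts as a "meeting" in each system and reconciling the two APIs before the data ever reaches a shape this clean.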

Vivin Vergis: All right. So Justin, I think that brings us to the end of this segment. We'd love to ask you a few more questions, but I think that's all the time we have. I'd love to do the last segment with you, though, which again has nothing to do with the topic: three very random questions. Please feel free to answer them in any way you'd like. My first question is: what's one truth that very few people agree with you on with respect to ops?

Justin Norris: I don't know to what extent people don't agree with me about this, but I don't hear as many people talking about it: ops should be accountable for business performance. That's just something that's normalized in the culture where I work, but I know a lot of ops teams, or I've heard this attitude expressed, say, well, we do X, Y, Z. We deliver the systems, or we fulfill your tickets, or we give you your campaign; whether it works or not is sort of on you. You can understand having that separation of powers, but I think it's a very limiting mindset for ops. So I'm actually a big believer that we need to be partners in performance management, to be a kind of coach, to challenge things where they don't make sense, and to have that as part of our role.

Vivin Vergis: Yeah, absolutely. I think tying ops outcomes to the business is one of the biggest things that will help ops understand their value, and obviously help the business understand how important ops is to the entire outcome of the business, right? My second question: there's a lot of talk about what's going to change in the next five to ten years, what new technologies are going to come out. What do you think will not change in the next five years?

Justin Norris: Yeah, I'm going to go out on a limb and say that I think most teams will still have their data in a mess in five years. It's probably not a very bold prediction, but it's not something that AI can easily fix. Maybe it can help in some ways, but since AI is based on the data that you provide to it, if the underlying data itself is a mess, how easily can we fix that? Maybe we will see some improvements there, but honestly, I think it's a huge challenge. It requires discipline, there are never enough resources, and it's hard to prioritize data debt payback projects. So I think we will continue to struggle in this area.

Vivin Vergis: The last question for the day, Justin: what quality do you think is most critical for ops folks? Creative thinking, communication, or speed of execution? The last time I asked this, my guest also added another option, which is learning to say no. So which one do you think works best?

Justin Norris: Oh, am I allowed to add other options too? Or do I have to pick? Yes, of course.

Vivin Vergis: I'll probably pass it on to the next guest.

Justin Norris: I'll stick with your original three. I mean, I think they're all important. If I had to pick one, I would say communication, because if you can't communicate, then none of the other things matter. You can be as creative as you want, but if you can't communicate, you won't get buy-in for your ideas. You can execute quickly, but if you can't communicate, nobody will know about it properly. You won't be recognized, and you won't be able to get people on board with what you're trying to do. So I think communication is a prerequisite for all the other things, especially as you move from being an individual contributor into leadership and beyond. So I'm going to pick that one.

Vivin Vergis: Got it. Got it. All right, so I think that wraps up our time for the day, Justin. Thanks a lot for doing this, and thanks for the amazing insights. I've personally learned a lot again. Like I said, I think we could have gone on for longer, but it has to end somewhere. So again, it was a privilege to host you, and I would love to connect again sometime.

Justin Norris: Me too. Thank you so much. I really enjoyed it.

Vivin Vergis: Cheers. Thanks.


About the Podcast

RevOps FM
Thinking out loud about RevOps and go-to-market strategy.
This podcast is your weekly masterclass on becoming a better revenue operator. We challenge conventional wisdom and dig into what actually works for building predictable revenue at scale.

For show notes and extra resources, visit https://revops.fm/show

Key topics include: marketing technology, sales technology, marketing operations, sales operations, process optimization, team structure, planning, reporting, forecasting, workflow automation, and GTM strategy.

About your host


Justin Norris

Justin has over 15 years of experience as a marketing, operations, and GTM professional.

He's worked almost exclusively at startups, including a successful exit. As an operations consultant, he's been a trusted partner to numerous SaaS "unicorns" and Fortune 500s.