Episode 58

Published on:

21st Mar 2025

How AI Agents Really Work - Daniel Vassilev

AI agents are everywhere in conversation right now—but what actually makes them work? It’s not just slapping a large language model into a workflow and calling it a day. Under the hood, real agentic systems operate differently. They make decisions. They adapt. They break out of rigid if-this-then-that logic and enter something closer to human judgment.

In this episode, I talk with Daniel Vassilev, co-founder of Relevance AI, a platform purpose-built for building and deploying true agents. We dig deep into how agentic systems are structured—from core instructions to tool orchestration—and how that foundation changes what’s possible. Daniel explains the difference between automation and autonomy in clear, practical terms that any builder, founder, or operator can understand.

We also explore real-world use cases: where agents shine today, where they fall short, and how teams are already using them to 10x output without ballooning headcount. Whether you’re dabbling in LLM workflows or ready to rethink how your company works entirely, this conversation will level up your mental model.

If you’ve been wondering where the hype ends and the real architecture begins—this is the episode.

About Today's Guest

Daniel Vassilev is Co-Founder and Co-CEO of Relevance AI, a platform for developing commercial-grade multi-agent systems to power your business. With a background in software engineering, he previously created, grew, and monetised two apps to a combined 7 million users, reaching #1 on the App Store's top free chart.

Key Topics

  • [00:00] - Introduction
  • [01:31] - Defining agentic AI
  • [03:28] - AI in linear workflows vs. agentic systems
  • [08:19] - How agents work under the hood
  • [11:24] - Always-on agents
  • [13:43] - Selecting the right tasks for agentic AI
  • [17:42] - Copilot vs. Autopilot
  • [22:44] - Are there tasks we should never delegate to AI?
  • [25:03] - Coolest use cases
  • [34:30] - Agent memory and continual improvement
  • [37:55] - Compounding effect of agent teams
  • [41:39] - Relevance the company and platform

Learn More

Visit the RevOps FM Substack for our weekly newsletter:

Newsletter

Disclosure: I am using an affiliate link for Relevance AI, which means I earn a small bonus if you sign up through my content.

Transcript
Justin: I feel like 2025 is the year that AI agents really explode as a topic, and potentially as a reality for many companies too. But the challenge that I see in this is that the topic is really not well defined. There's kind of this vague notion that it has something to do with putting AI into workflows or giving it tools. And something, something, something, it takes over everyone's jobs. All well and good, but how do you actually make AI agents that save you time and labor? Because anyone who has tried to do this knows that it's not just as simple as, you know, spinning up ChatGPT, giving it a mission, and letting it run wild. It still requires some kind of architecture, building, debugging, thinking. And it also requires some kind of platform or environment to do that building in.

So today's guest has seen this need, and he's co-founded a company called Relevance AI. They're a platform for AI agents, a really cool product. I actually recently became a customer so I could explore this topic further, and I've enjoyed digging into it. Daniel Vassilev, welcome to the show.

Daniel: Hey, Justin, thanks for having me.

Justin: I have personally been excited by this topic in a way that I don't think I've really felt since I first got into, like, you know, marketing automation and got the ability to just create simple if-this-then-that workflows. To me, it feels like the next generation of that. And I'm just curious if you could start us off by giving your definition, at least, of what an AI agent or what agentic AI is, just so we have this common frame of reference for everyone that's listening.

Daniel: Yeah, absolutely. I mean, the simple way we like to think about agentic AI is whether it can make decisions that are dynamic. Can it handle non-deterministic workflows? Right. So if we think about software and software systems, they're largely defined by algorithms. The if-this-then-that you described tends to be fairly fixed and rigid rules you can put in place to make decisions. Agentic systems are a lot more like human systems in the sense that, given instructions and given context at a decision point, they could make a variety of decisions, and those decisions don't necessarily need to be predefined, and those decisions can be based on some sort of qualitative judgment in addition to a quantitative judgment.

For us, the real difference and appeal of agentic AI is when it fully enables teams. How can it accelerate our ability to make decisions and take actions without being constrained purely by the number of people you have on your team, or purely by the number of hours you have in the day? What does that work look like? We're building Relevance in particular to help accelerate that journey and make it so that teams are absolutely unleashed, they have the ability to execute on ideas, and hopefully we remove a little bit of that constraint we have today, which is, oh, if only we had an extra person to help us do this.

That's kind of where we see agentic AI taking its place. And only if it's able to handle those dynamic decisions. If you're still kind of stuck in rule-based decisions, then that's much more akin to software and software systems of the past. Um, I know there's a lot of marketing noise out there, but I think once this year wraps up and we start being clearer as an industry about what agentic AI is, that'll be the main differentiator.

Justin: Let's drill down on that dynamic quality that you isolated as kind of the essential quality of an agent. It's also, I think, key to how a lot of people are talking about agents, that they're autonomous in the way that they're decision-making. And there's a little bit of mystification around that, because when I've gone into some agentic workflows, or ostensibly agentic workflows, that people have shared on LinkedIn or wherever, it's still very much rules-based with, like, an AI step. Instead of just a deterministic calculation, you've got an AI model doing something within that workflow, which is awesome, but that's not aligning with the definition that you've given. So what is the key, from your point of view, that enables a workflow to be agentic in that way? Like, what is required, maybe from a technical perspective, for that to happen?

Daniel: I mean, what you're describing there is a little bit of an in-between stage between software systems and trying to incorporate large language models. You're right, a lot of the traditional workflow automation has achieved that by adding a new step that lets you also plug in an LLM. Now, that to me doesn't quite qualify as agentic. The reason for that is quite simple: you're basically introducing a new tool step into existing workflow automation. In the same way you have API steps, you might have a trigger from some other system, you might have some code step, and then you might add an LLM step. You're still broadly within that workflow automation space, and now you just have this ability to generate output from an LLM. So I think that's kind of what you're describing. That's a lot of what's in the market at the moment, and we actually have that ourselves within our tool builder. That's something very different to our agent builder, and for us, that's really powerful, because tools, and workflow automation tools in general, let us create kind of repeatable steps for repeatable workflows. And now you can add LLMs as part of that, like you can a code step, like you can a Python step, but that's different to agentic. So that's one way of using an LLM.

When we're talking about agentic capabilities and agents in general, the way we think about it is, okay, so what happens if you have 50 of these tools? And you could use any of those 50 at any one time, depending on some context. And how do you decide which tool to use when, not based on some linear workflow, but based on real decision-making?

So when we're doing our jobs, like, let's say my job to be done today is, you know, I've got some recruiting work to do after this. We're currently rapidly recruiting. We're screening lots of candidates, we're doing washouts, and I know I need to submit my feedback for a lot of these different people we've interviewed and also, like, accept meetings for new ones. When I do that, I'm working across maybe five to ten different systems, right? I'm jumping into Slack, I'm jumping into email, I'm jumping into my calendar, I'm jumping into my ATS, and so forth. And the steps that I'm doing for those different jobs to be done can really vary. They can vary based on the candidate. They can really vary based on some instruction I've been given by someone on my team. And the magical thing about me at the moment, like, really why I'm valuable in that process, is because I can decide, hey, I need to do this like this, and then do this over there.

So if we think about agents from that lens, right, that's when they're truly powerful: when you can give them a set of these systems, a set of tasks that they can achieve, instructions on how to achieve them. And then they can go out and actually make decisions on how to execute them and actually decide, you know what, now I need to go to the ATS, I need to do something there. Then we need to go to Google Calendar. And this really goes beyond that traditional linear workflow automation style experience that you're describing, because we're no longer just defining a flow. We're now really letting it decide the flow. We're letting it decide how to plan its activity and how to execute this.

The way I like to describe it, for a lot of people, to kind of demystify this, is: just think about hiring someone new on your team, like a junior employee. I say this to prospects all the time. Like, could you hire me tomorrow, put me into a meeting room, and on the whiteboard sketch out for me the way I'm going to be doing my job, the decisions I have to make, and what it takes to do really well, right? And maybe even teach me how to use the software, if we're using some sort of software. Can you do that? And if the answer to that is yes, then we can train an agent to do that, typically, right? And that's the process that we're going to follow with an agent. It's less as if I'm coming in and you're going to write out for me, here are the 10 things I do, click those same things every time. It's much more along the lines of: here's your job, here's how to do it. Here are the decisions you need to make, here are the systems you have.

And I think once we start thinking of agents from that perspective, and less about the technology, like what framework you're using or what LLM you're using, it becomes a lot easier to suddenly understand the difference between agentic systems and software systems, uh, where workflow automation, even with LLM capabilities, is very much still a software system. Agentic systems are starting to become closer to human systems.
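The tool-selection behaviour described here, an agent choosing among many registered tools based on context rather than following a fixed flow, can be sketched in a few lines. Everything below (`call_llm`, the tool names) is a hypothetical stand-in for illustration, not Relevance AI's actual implementation:

```python
# Hypothetical sketch: an agent picks from a registry of tools via an LLM
# decision at each step, instead of following a predefined linear workflow.
# `call_llm` is a stub, not a real model call or a Relevance AI API.

def call_llm(prompt: str) -> str:
    # Stub: a real system would send `prompt` to a language model and
    # parse its chosen tool. Here we fake the choice so the sketch runs.
    return "search_ats" if "candidate" in prompt else "send_email"

TOOLS = {
    "search_ats": lambda task: f"searched ATS for: {task}",
    "send_email": lambda task: f"drafted email about: {task}",
}

def agent_step(task: str) -> str:
    """One decision point: ask the model which tool fits, then run it."""
    prompt = f"Task: {task}. Available tools: {sorted(TOOLS)}. Pick one."
    choice = call_llm(prompt)
    tool = TOOLS.get(choice, TOOLS["send_email"])  # fall back safely
    return tool(task)

print(agent_step("screen this candidate"))  # routes to the ATS tool
```

In a real agent this step would run in a loop, with each tool's result fed back into the context for the next decision.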

Justin: In your platform, one of the things I really like is the way it's fleshed out. You kind of define an agent: it has a core set of instructions, and then it has access to tools, kind of mirroring what you just described, all the different things that it can do. The tools themselves, like you said, are almost like mini workflows where it can make API callouts to other systems. It can do various things, can scrape the web. What I want to understand is, when that agent is triggered, is it really just like, you know, some of the new thinking models, like o3 or something like that, deep research or some of these models where you can really see the chain of reasoning that it's doing? Is that kind of what the agent component of it does, where it gets a request and then it sort of makes a plan? And then as part of that, it starts pinging those tools and doing those different things? Or if that's not it, what is happening under the hood when one of these agents gets an input?

Daniel: Yeah, I mean, we use large language models for that decision-making. If we think of large language models less in terms of ChatGPT and more as a fundamental technology that provides reasoning capabilities, then you start understanding how large language models can be leveraged. So a large language model basically can be given some context, and then it can generate some output. And if you use that correctly, you can actually have it make really good decisions for you.

And so under the hood, we would use a variety of models. Some of them would be, you know, thinking models. And each one of these models can provide different kinds of benefits, kind of advantages over different ones. So some might be better performing, i.e. they can handle higher levels of reasoning, but they might cost more. Others might be faster and cheaper, but maybe handle simpler use cases. So what ends up happening is we basically leverage these models to make a decision: okay, so this is what's happened so far, these are the instructions we have, what should we do next? And then based on that decision-making and reasoning ability, we can then take another action, and then, you know, our system can orchestrate that.

We like to think of ourselves essentially as an agent operating system, right? In the same way that your team has Windows, your company is now going to have an agent operating system, and our interface is both the IDE for creating those agents on the agent OS, and it's also the analytics and monitoring and governance, so you can see what's happening in the operating system. And with those two things, it means that you can give tasks to agents, and then under the hood they'll start making decisions. You can see how it's made those decisions, which actions it's taken. It gives an update, using the models again, on why it's made those decisions and what it's actually finished doing when it completes a task.

But ostensibly it all comes down to just leveraging the models at decision-making points. If you just start thinking of everything you do that way, even me, when I'm doing, you know, the CV review, every single time I'm probably stopping for a second and making a decision. And that's what we're leveraging the large language model to help us achieve. Obviously today, that level is different to where it's going to be in a year or two from now. Today, we recommend it predominantly for tasks that are easier and simpler, i.e. something that you might hire a more junior employee for. But as the capabilities of the models expand, the capabilities of the agents will expand as well.
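The trade-off described here, stronger but costlier models for harder reasoning and faster, cheaper ones for simple cases, can be sketched as a tiny dispatcher. The tier names and the complexity heuristic below are illustrative assumptions, not how any particular platform actually routes:

```python
# Hypothetical sketch of routing each decision point to a model tier by task
# complexity. Tier names and the heuristic are made up for illustration.

def estimate_complexity(task: str) -> int:
    # Toy heuristic: multi-step or very long tasks count as more complex.
    return task.count(" then ") + (1 if len(task) > 120 else 0)

def pick_model(task: str) -> str:
    """Return a model tier for this decision point."""
    if estimate_complexity(task) == 0:
        return "small-fast-model"   # cheaper, handles simple cases
    return "large-reasoning-model"  # costlier, higher levels of reasoning

print(pick_model("tag this lead"))
print(pick_model("read the transcript, then score it, then update the CRM"))
```

Production systems would use a richer signal than string heuristics, but the shape is the same: estimate difficulty, then spend compute accordingly.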

Justin: I want to touch on use cases, but just quickly before we go there, in terms of how an agent process or workflow, whatever you want to call it, gets kicked off, it seems to me there's a variety of ways that could happen. There could be, like, a user chatting with the agent that starts something and making a request. It could be an API request from another system. Is there such a thing yet, within your platform or that you're aware of just in general, of agents that are kind of like always on? Like, always scanning, looking, evaluating data and then performing according to a set of instructions they've been given, or scheduled in a batch, like a data quality agent that, you know, once an hour, comes and looks at all the new leads that have been created and cleans up their data and merges any duplicates, or something like that?

Daniel: Definitely. I mean, we have exactly such a use case internally, and I think it's the best way to highlight this. We have an agent that every single day, at the end of the day, will go through all of our call transcripts from sales, and then for each transcript, it extracts a bunch of information and pushes it to a Notion database that we have, which is very much formatted in a way that works for us. That's what we use as part of enablement. So reps can go in every single day; if they have a question about, like, how do I handle pricing, or onboarding, or our implementation, they can filter by that, they can see how other people have asked or answered similar questions, and improvements they could have made based on what the agent suggested. Our RevOps team can have a look at statistics, you know, the percentage of questions coming in this week about, say, implementation has increased, so maybe we need to improve our collateral there.

So in that situation, we've got an agent that basically is constantly ready to receive new transcripts, process that data, and submit it, and that happens on a daily cadence. You can have all sorts of triggers, right? Those triggers could be time-based, and time-based could be every 30 seconds, right? Like, you could just be doing this job every 30 seconds. It could be, as you mentioned, from other software integrations, i.e. Slack messages come through, user messages come through. It could be an email being received, a calendar event that has just been triggered or started, and so forth. So absolutely, we already have agents that are constantly working for us. Um, and those are, I guess, always on; it just depends maybe on how frequent those activations are.
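A time-based trigger like the ones described (daily, or even every 30 seconds) can be sketched as a simple schedule check. The `ScheduleTrigger` class below is a hypothetical illustration, not a real platform API:

```python
# Hypothetical sketch of a time-based trigger: check whether a schedule is
# due and, if so, fire the agent's job. Event-based triggers (Slack, email,
# calendar) would call the same job from a webhook instead of a clock.

import time
from dataclasses import dataclass

@dataclass
class ScheduleTrigger:
    interval_seconds: int
    last_run: float = 0.0  # epoch seconds of the last activation

    def due(self, now: float) -> bool:
        return now - self.last_run >= self.interval_seconds

def run_if_due(trigger: ScheduleTrigger, agent_job, now: float):
    """Fire the agent's job when the schedule comes due; else do nothing."""
    if trigger.due(now):
        trigger.last_run = now
        return agent_job()
    return None

daily = ScheduleTrigger(interval_seconds=86_400)  # once a day
result = run_if_due(daily, lambda: "processed today's call transcripts",
                    now=time.time())
print(result)
```

Whether an agent feels "always on" then comes down to how short the interval is, or how often events arrive.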

Justin: That makes sense. Diving into use cases, I'll share a quick anecdote, just something that was an unlock for me, and then I would love for you to comment on it and also just expand on the wide variety of use cases I'm sure you're seeing within your customer base. I had a member of my team leave late last year, uh, who had been there for a while. And so as part of her offboarding, we did kind of an inventory of all the work. We obviously knew about the big strategic projects, the OKRs, uh, but we really wanted to see, like, what is it that's taking up 40 hours a week? And really itemizing that out, just as part of evaluating, you know, what should this role look like? What does it look like today? What should it look like in the future?

And it was really eye-opening for me, because it highlighted how much of the work is not necessarily these big strategic things. There's a lot of granular work, things that are not yet predictable enough to fully automate, but are not necessarily, like, very complicated or requiring very senior skills. They just require a certain level of judgment. Could be, you know, evaluating a record in a CRM and making a decision about it: what channel do we attribute it to, et cetera, providing that sort of input. And it hit me that that is a great place for AI to play. You don't want it to, like, come in and create your strategic vision necessarily. Maybe you disagree with that, but that level of work is just not quite totally deterministic, but it's also still relatively straightforward. Please react to that and tell me if you agree or disagree, and what are the cool things that you're seeing out there?

Daniel: I think this answer will change over time. I think currently, I do agree. I think right now the state of the technology makes it better for handling more of those tasks, tasks that are not necessarily the strategy, not necessarily the vision; it's all about execution. And especially when that includes time or volume, it can just outcompete every single day of the week, because it doesn't scale with traditional resources. It scales with compute. And when people think about this, I don't think enough people really just stop for a moment and reflect on how powerful that is right now, right? We have all these websites in the world, and if they get more traffic, they can just add more servers and they can handle that traffic. Imagine that same concept being applied to the work that your organization is doing. It's so difficult to fathom the results and outcomes of that.

And what I tend to really stress to people is, don't think about the technology as just doing a little bit more of the same. Think about it: what will your business look like if you could do 100x more? Because I think what we're about to encounter is a significant increase in the amount of value, uh, that we can generate globally from a business perspective, right? When you think about the goods and services being produced, I think we're going to produce better services, better goods, at a better cost. Um, and I think as a result of that, that's absolutely going to increase the value we generate, whether you measure that through GDP or through something else. I just think we're going to live through an absolute explosion in opportunity.

That being said, today agents are really capable for those tasks that have a well-known, well-defined process, i.e. you could teach 50 people how to do this and they could all do a good job. If it's something that is still not working very well and needs to be figured out, it might be better off doing it in a more traditional sense today, not leveraging automation. The reason for that is, even if you have a process that doesn't work, you wouldn't hire 50 people to run that process. You would probably just have one person, two people maybe, figure out that process. And so for any work like that, that's very exploratory, trying to figure something out, I think you should not use agents. But for work that is something you could build a large team for, something that you could scale up, then agents are a great place to start thinking about deploying them.

Now, will that change someday? Probably, right? The more inputs you have, the better decisions you can make. There's a world in which I can see agents being able to look at every single data point in your business to help you make better decisions. And I think we're not too far from that. But I think today, the best way to approach this is less strategy, more execution.

Justin: Going into that future vision that you just sketched out, it seems to me that AI in general, or large language models in general, have both advantages and disadvantages versus human cognition. One of the advantages that you cited, obviously, is just the ability to take in a vastly larger context window than a human can easily take in or retain. Like you said, looking at every data point in the business, maybe seeing patterns, calling things out that, from our limited vantage point, we just don't have enough space in the brain for. Perhaps, arguably, a limitation versus human cognition is that it's kind of inherently derivative. It's all based on a corpus of information that it's processed and is continually recreating. So can it be truly, uh, original? As the technology improves, would we ever completely outsource certain elements of strategy to AI, or will it always be a sidekick, a copilot, in that process?

Daniel: I think we will. I think we definitely will. And we published this in 2023. There's a series, and we published an article, Beyond Copilot, that kind of stated how we think that, realistically, copilot is a very short-term trend, and a lot of it won't be around, I don't know, five years from now, right? The reality is, every single place that autopilot can do the job better, people are going to prefer it, right?

Like, think about it this way. If you had the choice in your company, or in your team, to have five people sitting around you, and all they could do is sit around your desk and wait for you to turn to them, ask them something, and then they'd reply back to you. Or you could have five desks around you with those same five people, and you're all working and collaborating together, and when you need something done, you can delegate it to someone else. They can go off and do it themselves, and come back when it's complete. Which of the two would you prefer? Obviously, it's the second one, which is why companies today are built with teams that are autonomous. They can delegate work. They can achieve things. We value autonomy. We reward it. And we don't just have, you know, many assistants to one person. I think it's exactly the same, uh, when we think about agents and copilot versus autopilot.

To clarify, when I say autopilot, I don't mean something that doesn't involve humans, right? I still think human-in-the-loop plays a really important part. And in fact, when you delegate work between people, it goes between people, right? So if you delegate some work to an agent, even if it could complete that task on autopilot, there is still a touchpoint that then goes back to the human, whether that's an approval process, whether that's an escalation for help, whether that's just completing the task and handing it over to the next touchpoint.

So when we think about autopilot and copilot, the future that I really see, and that honestly we've been building towards for a few years now, is that world where you can delegate tasks that can be done autonomously. You still have a first-class experience for approvals, right? Because businesses are built on approval methods. One of the most common questions I get asked is, how do you make sure it does the right thing? And I honestly just ask, how do you make sure your team does the right thing, right? Engineers have pull requests that get code reviews. The sales team has deal reviews and people watching the calls giving feedback and improvements. You have all these processes built into an organization today that are all about approvals. And from our perspective, as part of the definition of an AI workforce, it actually finishes with a human workforce. So we think you need to have kind of the best-in-class experience when it comes to using and working with your agents. And I stress that because I want to be really clear: autopilot does not mean without humans. Autopilot simply means you can delegate work to it, and it's more useful and functional to you than just an assistant you can, you know, have an in-out, in-out sort of experience with, which is what we believe copilot is. It just enables you to be a little bit more efficient.

What does the world look like if you could be 100x more productive? And that 100x could be a thousand x at the click of a button. That's what autopilot means to us, rather than kind of these incremental gains that you can use as a tool, because for us, realistically, copilot is still part of this trend of the past. Like software: yes, it's really useful. Yes, it's given us so many benefits, but at the end of the day, all of those benefits are just productivity boosts. And at some stage, when you want to do more, whether that's higher quality, whether that's more volume, you're still limited by headcount. Autopilot changes that. Autopilot should ideally enable someone with an idea to be able to execute something phenomenal and magnificent, beyond the capabilities of an individual person.

And that's the future I'm really excited about. Because imagine if we've all got that capability. Like, what could we create then? What better services, what better products can we be creating? Um, and it's not just about, you know, like, um, kind of SaaS. You can see this being applied to medicine. You can see this as part of education. You can see the cost of these things going down and, globally, what that means for people. So I'm extremely optimistic about that direction. The thing we're focused on, it's not copilot, right? That's something that I think we'll see, in the next few years, quickly become less and less relevant in more areas.

Justin: Since we're talking about the future, let's look a little bit further down that path. AI is increasingly going to take over that lower end, that more junior end of work that we talked about. Then, as models get better, presumably it's going to climb, so to speak, and take on more senior-level tasks. Is there an endpoint? I want to think about the role of AI in strategy. Are there tasks you think we should just never delegate because they're too important and a human has to do them? Or what is the role of the human in this future of work besides, you know, approving the work that AI does, in other words?

458

:

Daniel: As long as AI is demonstrably good at the task, there's no reason we shouldn't leverage it, again with the caveat that we delegate to it. But just because we've delegated some work to it doesn't mean that we don't have responsibility to be part of that process, part of any decision-making and actions that happen beyond that.

So when I think about an end state, I don't know what that end state is. But one thing that I fundamentally believe is that every single time we've had the opportunity to do more as a society, as humans, we tend to take it, right? We rarely think to ourselves, you know what? We've done enough now. We were able to manufacture more of this. Let's just stop here. Inevitably, more factories come up, things become more efficient, and now we've got people working on that. And I have a really strong inclination that agentic AI, and AI more broadly, will just be part of that journey.

The only difference here, from my perspective, is that the opportunity and scale of these new capabilities will be just extraordinary. I think that's the difference: it's the scale. But at the end of the day, if you just equip people with stronger tooling and better capabilities, my instinct is we're going to try to achieve more rather than say, you know what, now we can do everything we did 10 years ago. I just think that goes against human nature. And I think society and the way we've built our system tend to incentivize trying to operate and create more services, goods and things like that. Right or wrong, that is the system we have created. And so I'm particularly bullish and optimistic from that perspective.

490

:

Justin: So if we zoom back to today, what are some of the cool things that people are doing? I've seen some of the templates and examples that are available in your platform, but either internally or in your customer base, what are some really interesting things people are doing today that maybe can spark inspiration for people listening to this?

496

:

Daniel: Yeah, so we're obviously very lucky that we have quite a large number of customers in the sales and marketing space. RevOps tends to be a team really well positioned to benefit from an AI workforce. It's interesting: because of the kind of work RevOps has traditionally done, because it sits near the subject matter experts but also has the more technical expertise on hand, it's a really great place for both fostering and adopting AI agents.

The thing about Relevance, right? When we built Relevance, I had a decade of experience in automation. I'd built another company before this, with millions of users across our products, and I was very lucky to get to build and work with a lot of machine learning models, albeit quite different ones from today's, but for very practical reasons. And the thing that was very clear to me from that whole experience was that automation projects rarely fail just because of the technology. One of the biggest reasons automation projects fail is that you don't fully understand the unique workflows and organizational wisdom that go into that process.

And so instead of building an engineering framework, we said, well, let's build an agent operating system for the subject matter expert. Let's build the ability for the people who are the experts in this to train their agents. And if you just do a simple exercise here and ask yourself: who at the moment hires salespeople? Who trains salespeople? Who hires RevOps people? Who trains RevOps people? Is it engineers? Is it data scientists? Or is it salespeople and RevOps people? If we live in a world where, right now, the subject matter experts are training and hiring subject matter experts, then to us it just feels very natural that they are going to be the same people training and hiring these agents, and also probably managing them, right? Because who knows how to manage those agents better than those subject matter experts?

That's a really core tenet of our platform and of how we've built our product. We're definitely not where we want to be yet; we're still more technical than we want to be, but we're rapidly working towards making that as simple as possible for as many people as possible. But that's why we're not an engineering framework.

537

:

And the reason I say this is because when we think about use cases in a lot of teams, RevOps nicely straddles that in-between status at the moment: subject matter expertise plus technical acumen. And so I think for a lot of your audience, this is the perfect time to get started with agents. You've now got good tooling, whether it's Relevance or the broader ecosystem, more available to you today. It's a matter of when, not if, and I think early adoption is always the right strategy. And RevOps for me is particularly a place that can become the internal experts. We've already seen people becoming the AI workforce manager in their organization, because they're basically helping all the different business units build and deploy these agents, giving them that internal expertise.

And some of these cases we've seen, right? Let's take RevOps. We've seen some really trivial use cases. I don't even want to start with the flashy ones, because there's just so much stuff in the organization that is so valuable, even if it's not necessarily the flashiest. For example, we had one customer with, I think, over 100,000 accounts, and it was an absolute mess in there. It was just duplicates, old versus new, wrong statuses. And this was causing a great deal of frustration for the sales team because it really made their job harder. The only options they had were, one, pay an extremely large sum of money to basically a BPO to go through every single account one by one and manually check and review it, which was going to take, I think, months to complete. Or, two, because they were already a customer of ours for another use case, build an agent, deploy that agent and get it done in less than a week.

When you think about what that means to the business: first, we just saved a whole bunch of money. We saved a whole bunch of time. And when we think about the sales team being able to be productive, we've just saved them six months. That was phenomenal, because the agent could go at each account, look at it like a human, determine what's a duplicate, then go search in Salesforce for other records, clean it up and put it together.
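The loop Daniel describes, inspect each account, make a human-style judgment, merge, can be sketched roughly as below. This is an illustrative outline only, not Relevance's implementation: `llm_is_duplicate` stands in for an LLM judgment call, and the in-memory list stands in for a real CRM API.

```python
# Hypothetical sketch of an account-dedup agent loop.
# llm_is_duplicate() is a stand-in for an LLM judgment call
# ("are these two CRM accounts the same company?").

def normalize(name: str) -> str:
    """Cheap key used here only to simulate a fuzzy judgment."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def llm_is_duplicate(a: dict, b: dict) -> bool:
    # Placeholder for the LLM call; a real agent would compare
    # names, domains, addresses, and activity history.
    return normalize(a["name"]) == normalize(b["name"])

def dedupe(accounts: list[dict]) -> list[dict]:
    merged: list[dict] = []
    for acct in accounts:
        dup = next((m for m in merged if llm_is_duplicate(acct, m)), None)
        if dup is not None:
            # Merge: keep the newer non-empty field values.
            dup.update({k: v for k, v in acct.items() if v})
        else:
            merged.append(dict(acct))
    return merged

accounts = [
    {"name": "Acme Corp", "status": "old"},
    {"name": "ACME Corp.", "status": "active"},
    {"name": "Globex", "status": "inactive"},
]
cleaned = dedupe(accounts)
print(len(cleaned))  # 2
```

The point of the sketch is the division of labor: a cheap pass shortlists candidates, and the expensive human-style judgment (here faked by `llm_is_duplicate`) decides each merge, which is exactly where rules-based tools tend to fall short.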

575

:

So when we think about agents and use cases, don't feel the need to go for some pie-in-the-sky thing. You can start small. And when I say start small, I mean this might not sound like the sexiest idea, but boy, it's impactful. That's something I'd really encourage people to keep in mind.

581

:

But then, one of the ways we also use Relevance ourselves: we have a small sales team at the moment, and we're very lucky that we get a lot of inbound requests, a very large volume of both signups on our product and booked demos, and it's really hard to handle that volume. So at the moment we have a fleet of agents dedicated to basically treating every single inbound signup that comes in: qualifying it, maybe asking some questions, and then determining where to route it, whether it goes to our sales team, to a partner, to self-serve signup, something like that, right? We might need at least 10 people on staff to handle that volume, and they couldn't be in one geo if we wanted to hit our SLAs for how fast we want to reply. There would need to be multiple geos, and you can just think about how difficult that is to build out as a process. But because we've got these agents, they're doing that job for us extremely effectively. In fact, so effectively that I'm often on calls with people who have come in through that channel, and they ask us, do these agents work? And then I have to remind them that they came in through an agent and weren't aware of it.

So that's another example. Internally as well, our lifecycle marketing agent is something that's really popular. I recently shared on LinkedIn a post that someone made analyzing the email they got from the agent. Every single time someone signs up, and we asked ourselves: what would it look like if we could do the things we did at the beginning of our company, where we could message every single person individually, look at who they are, and help them get started? Could we achieve this? And so that's when we created the lifecycle marketing agent, not just because we wanted to do better lifecycle marketing, but because we explicitly wanted to start asking ourselves: can we do one-on-one customer success for every single signup? Can we live in that world? Is this what agents can enable us to do? And that was the first iteration of that hundred-x future that we believe in.

617

:

We've obviously got people doing outbound messaging, creating sequences for their team: doing research, creating sequences, putting it into their outreach so reps can send out more personalized messaging. And again, not thinking about that as a spray-and-pray tool, but thinking about: what does the top rep in this company do? Where are they researching? If they had an extra hour per lead, where would they go? How can we create an agent that mimics that? So now the agent can create really good research for the team and really good content for them to send out. That's very much a theme in what we talk about with prospects: don't just spray and pray on these sorts of things; find the top-quality human work you can do and execute that.

And that's why Relevance is really good, actually, because it's counterintuitive. In the past, horizontal SaaS tended to have lots of use cases but stay shallow in all of them. With an AI workforce builder platform, the counterintuitive thing is you can now train an agent on your very niche and specific workflow to execute things the exact way you do, versus a vertical agent being a little bit more rigid in terms of what it can do, how it does it, how it integrates with Salesforce, how it does X and Y, and you can't really train it to do things yourself. And I think in the future world we're going to live in, and this applies to AI in general, AI needs to wrap itself around your process, not force your team and organization around its process, the software's process, which has been the way we've done things in the past. I think we're going to be a lot more flexible, and the AI workforce builder platform unlocks that today for a lot of these use cases.

646

:

Justin: Yeah, two things in response. First, I could not agree more about the value of automating not even the little things, but the more mundane things. Everyone likes to share these flashy use cases and big complicated flowcharts on LinkedIn, but quite often the things that consume an inordinate amount of our time, especially in RevOps, are the duplicate accounts. And I really want to see the architecture of what that client built, because I've been thinking about that exact use case and how to solve it, because it's such a pain point. I mean, we have RingLead, we have tools, but it's very, very difficult to safely dedupe your entire database purely rules-based. There are so many exceptions where a human can look at something and be like, yeah, clearly this is a duplicate, clearly this is not, but it's really hard to wrap rules around that. So the flashy stuff is cool, but I agree that so much of the benefit right now, at least from where I sit, is in those little things.

662

:

And number two, I just want

to say, I think you've done a

663

:

good job at positioning your

platform for your target audience.

664

:

Cause I found you guys, I guess

as many people do, I was looking

665

:

into agent platforms and I looked

at a variety and, you know.

666

:

like crew AI to take an example of

a competitor of yours, but it, very

667

:

clearly seemed engineer oriented.

668

:

And then when I looked at your

platform, like, Oh, this is built

669

:

for, like, I'm not a developer.

670

:

I am a technical ops person.

671

:

I'm comfortable with APIs and comfortable

with Jason, et cetera, but I don't

672

:

really code at least not very well, a

little bit with the help of, chat GBT.

673

:

I'm like, this was built for me.

674

:

it makes sense.

675

:

And so.

676

:

I will say I think that you've done a

good job creating an interface and a

677

:

mental model that works for my profile,

which seems to be your target audience.

678

:

I want to drill into something you said around training and memory, because this is something I want to understand better as it relates to agents. People talk about, oh, you can train your agents and they learn. But how does that actually happen? Quite often, a lot of the agentic AI I've interacted with doesn't even have context from message to message. I was interacting with part of Salesforce Einstein the other day, with all of its resources, and it literally did not have context between messages. Each message was one shot, you get this one chance. In other parts, it does seem to retain context between messages, but not between sessions. So how do you think about memory and knowledge and training and making it better, aside from just somebody going in and manually updating the instructions?

692

:

Daniel: Yeah, on the Salesforce point, not to take a cheap shot, but I don't think I've spoken to a single Salesforce admin who hasn't been disappointed by the over-promises of Einstein and where it's ended up. We'll see if Agentforce lands in a similar bucket.

But look, that's a really difficult problem. That's the first thing I'd say: making agents have the cognitive abilities that we have, beyond just reasoning, is a big challenge. And there are many ways you can approach this. There are two things we do at Relevance. One, which we're actually about to launch and is currently in early access with a bunch of our enterprise customers, is that whenever agents complete tasks in Relevance, we have a lot of heuristics as to whether that task was successful. I.e., either there's some feedback loop, maybe someone successfully booked a meeting when they came in inbound, so that's a successful outcome, or maybe there's something else we can look at from the flow to determine whether that task was successful. We've got all these heuristics. And so every time a task gets completed, we have the opportunity to, one, improve the instructions of the agent, or two, actually improve the underlying model. So now we're training the model, every single time a task has been completed successfully or not, to make better decisions for that specific use case.
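The two improvement channels Daniel mentions, rewriting the agent's instructions versus improving the underlying model, can be pictured as a simple outcome-routing loop. This is a sketch under assumptions, not Relevance's pipeline: `task_succeeded` is a made-up heuristic (the "meeting booked" signal from the conversation), and `learn_from_task` just illustrates routing outcomes into one channel or the other.

```python
# Illustrative sketch: route each completed task into one of two
# improvement channels based on a success heuristic.
# All names here are hypothetical, not a real product API.

from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    instructions: str
    finetune_examples: list = field(default_factory=list)

def task_succeeded(outcome: dict) -> bool:
    # Example heuristic from the conversation: did the inbound
    # lead successfully book a meeting?
    return outcome.get("meeting_booked", False)

def learn_from_task(agent: AgentRecord, transcript: str, outcome: dict) -> str:
    if task_succeeded(outcome):
        # Successful runs become training data for the model ("the brain").
        agent.finetune_examples.append(transcript)
        return "model"
    # Failures become an edit to the instructions ("the onboarding handbook").
    agent.instructions += "\nNote: review this failed run: " + transcript[:40]
    return "instructions"

agent = AgentRecord(instructions="Qualify inbound signups and book demos.")
print(learn_from_task(agent, "Lead asked about pricing; demo booked.",
                      {"meeting_booked": True}))   # model
print(learn_from_task(agent, "Lead routed to the wrong team.",
                      {"meeting_booked": False}))  # instructions
```

The split mirrors the brain-versus-handbook analogy Daniel uses a moment later: one channel changes the weights, the other changes the written guidance the agent is given.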

718

:

And so those are the two channels through which we're approaching this from a product perspective that automates it for all our customers. In the future, these things will just continuously get better. One of them speaks more to improving the agent's brain, and the other speaks a little more to improving its onboarding handbook. That's how we're approaching this if I think about it from a very human-workforce perspective. For example, when escalations happen in Relevance, when the agent says, hey, I don't know how to do this, Justin, can you help me? I've just had someone ask this question and I don't know how to answer it. We also have the ability that, when people provide that intervention, the AI either updates the instructions or the memory and knowledge base that the agent has.

So we've got those different channels to help improve it, but I agree with you, it's something that is still not as good as it can get. It's improving every month right now, and some of the stuff we're shipping for that piece is extraordinary. It's really easy to forget that we're just in the early innings of what the technology can do, right? We're fortunate that we've been developing this product for maybe just under two years with real customers; we had one of the first agentic use cases ever live for a customer on autopilot. So we've got a bit of a head start, but the reality is that as an industry we're scratching the surface. And so I think we're going to see a lot of improvements, with those two I mentioned being really big ones where I expect to see huge performance boosts for our customers.

745

:

Justin: Talking a little bit more about design patterns of agents and how they work together: there's this notion of agent teams. There's just something inherently fun and interesting about this idea of supervisors and workers and people with different subject matter expertise. But aside from the whimsy of it, why not just have a monolithic agent that does everything?

752

:

Daniel: That's a really good question. It is fun. Part of me, you know, it is nice seeing your team of agents executing work and talking to each other and completing tasks. But the more serious answer ties back again to the human workforce, right? Is there a single person in your company that can do everything? Does that exist? The answer is probably not. And if there is, man, that's an impressive person. But the reality is we can't; we all have to specialize somewhere. That same principle applies to agents. What is your agent going to specialize in? Now, the difference between humans and agents is that agents at the moment are a bit narrower in the scope of what they can specialize in, but the analogy holds true. We have agents that specialize in tasks. They have spikes of capabilities in order to keep their performance really high, because if you have a monolithic agent, as you described, trying to do too much, your performance will inevitably suffer.

The second benefit is actually really interesting to me, and that's, again, tying it back to the human workforce. When you have a team of people completing a piece of work, you've got multiple checkpoints to reduce mistakes and errors. Because if you were to delegate some work to me, and I were to complete it and give it back to you, that review process would inherently bring out the "hey, I've made a mistake here." And that applies to agents as well as they work with each other and delegate work. If an agent has had a hallucination, which is maybe how we define a mistake from an agent's perspective, and which is a big concern for a lot of people, then when you hand that work to another agent, with the context, with the citations and so on, that other agent, because it has a completely different set of instructions and a completely different set of context, is very unlikely to make that exact same hallucination. And so that produces the second benefit of reducing errors and hallucinations.

:

And then the third benefit, which I

actually think is the most important

790

:

one and why, I highly recommend if

you're thinking about, you know, to

791

:

your audience about an AI strategy.

792

:

Think about an AI Workforce Builder

versus a vertical solution because

793

:

your agents quickly compound that one

agent that you built that specialize in

794

:

prospect research could now be applied

to help you dedupe your database, could

795

:

be applied to lifecycle marketing,

could be applied to an account based

796

:

marketing campaign, can help you qualify

inbound leads and so on and so on.

797

:

And so you quickly get this compounding

effect where these agents that

798

:

you're creating can be deployed

from many different use cases.

799

:

And the only difference is.

800

:

You just construct them

slightly differently, but you've

801

:

already created that agent.

802

:

You already know it works really well for

doing research for your kind of business,

803

:

and you can deploy it in many spaces.

804

:

And so that compounding effect for

organizations that get this right

805

:

will be extremely significant,

and will generate huge amounts of

806

:

value, for the business and ROI.

807

:

So, That's kind of the way

we think about irrelevance.

808

:

It's like why teams are even so critical.

809

:

and that's obviously going

to evolve slightly, right?

810

:

Like as agent capabilities get better,

maybe you can specialize some agents,

811

:

to be a little bit more generalized.

812

:

And then you can specialize some more

to be even deeper on that topic and can

813

:

go even like at the higher level to it.

814

:

And so it just gives you that really

great ability to mimic what happens

815

:

to their organizations, to maximize

performance, reduce errors, and also set

816

:

yourself up to benefit from compounding

effects of having many, many agents.

817

:

Justin: The modularity you described is one I hadn't thought about, but it's true. It makes the work you're doing more reusable. And if you have an agent that's very well trained at a particular task, being able to just plug it in in different contexts is really valuable.

In the few minutes we have left, I want to dive a little deeper into the platform and the company and your experience as a founder. As has probably been clear to anyone listening so far, I'm a fan. I really just like what you're doing. And when I watched the videos your co-founder did explaining it, I don't know, I just really vibed with this platform for some reason. So I'm curious: what did you set out to do two years ago, and how has it evolved?

832

:

Daniel: Yeah, look, I think the thing that our customers and users tend to resonate with a lot about Relevance is that we are really building towards something. We are not just trying to chase a trend every month and pivot the whole direction of the product to satisfy one requirement. We had a lot of success in sales throughout 2024, and it's very tempting, as a product selling a lot to sales teams, to say to yourself, hey, I want to build a dedicated experience for sales teams, I want to verticalize. But fundamentally we have very strongly held beliefs when it comes to our vision, about that subject matter expertise, about moving towards autopilot. We know that if we want to deliver the best product to our customers and give them the best ROI from agents, we have to build in this direction. And I think that's enabled us to make some really good decisions that have led to really great results.

And you'll see that consistently throughout the messaging, right? So much of what we do is inspired by the human workforce. It's such a simple concept, but you see this light bulb moment happen in a lot of people when I communicate and respond to their questions with analogies to what they currently do. I find that extremely helpful for people to then be like, oh, okay, that actually makes a lot more sense now. That's really practical; that's how I can deploy this for myself and really get the benefit of all this technology. So I think that's one thing, and I'm glad it's resonating with you, but I think that's one reason why people resonate with us. If you read back on what we said about Copilot at the time, so much of that is still true today, even when, back then, people were like, what are you talking about? It wasn't necessarily something a lot of people believed, but I think it has paid dividends for us today.

866

:

But in terms of us as a company: as I said, we spent a lot of time in automation. My co-founder Jackie previously worked with me on the previous company we built together, which had millions of users. He then went on to lead machine learning for a large corporate. We actually first started looking at vector embeddings because we saw there was a significant shift in capabilities as machines started to understand data, and we knew that, combined with some of the model work we'd been doing and the way the models were improving, it felt like there would be a moment in time soon where the capabilities of machines would start mimicking humans, and that would enable automation to succeed in a way it hadn't before.

And for context, we've been really passionate about automation. We saw the benefits in our own business, the things we could achieve, but also in my daily life; I just think about all the quality-of-life things we have because of automation. So I feel really strongly about this. And then when we saw those two opportunities come together, and this was around the time, I guess, that GPT-3.5 launched, we were like, okay, the AI Workforce vision really came together for us, and we started building towards that.

It's been a really exciting journey so far. We're very lucky to have some amazing customers, from small startups to public companies. We're very lucky to be able to deliver a lot of value, but more importantly, we've got a lot of work ahead of us to keep delivering on that promise. And as I said, in the next few months, like six months, the barrier to entry for creating agents in Relevance is going to keep dropping, and we've got some really exciting releases to make that possible, because I really want to see everybody be able to create agents to help them. I think it's just going to be one of those technologies that, once we have it, we'll think to ourselves, how the hell did we do things before this?

900

:

Justin: As far as I can tell, you and Jack are both technical co-founders who come from a computer engineering background. Has it been organic, in the sense of, you know, finding fit as you develop? It seems like you have a pretty engaged community. I'm in your Discord; a lot of people in there. Have you thought about this deliberately in terms of how you're positioning yourselves? What's the thinking there?

Daniel: We've tried to be quite intentional about positioning. I think we can always do better. This year in particular, one of the main things we've set ourselves on the mission is to make sure that everyone who's looking at agents knows about Relevance. Not only can we give them the best product, but I also think it's important that we're in the right conversation. So that's personally one of my major goals for this year.

Because, you know, we're getting some of the most organic mentions on LinkedIn and on YouTube. We're getting some of the most branded search queries, and we're ranking extremely highly for a lot of key SEO keywords. We've got a lot of that going for us, but I think this year is the opportunity for us to share Relevance and the AI workforce mission and vision that we have. So I'm personally excited about that, and it has been organic.

I mean, last year for us was when we really started commercializing our product. And that was an interesting transition because, as you mentioned, we're both very technical co-founders, and the other Dan is very technical as well. So we don't have a great marketing background per se, but what Jack and I fortunately had is the experience of marketing products with millions of users. Not just once, but twice, and then a couple of other products that had hundreds of thousands of users. So we've always been interested in and understood the importance of marketing.

So I think we've always tried to carry that through. The new thing we introduced as a team last year was more of that enterprise motion, really leveling up the business to be able to handle those enterprise engagements. Not only building the better software and tooling that enterprises require, but also, you know, making sure we build a team that is enterprise ready.

We opened up an office in San Francisco, so we're based in San Francisco now, to help engage our customers in North America better. We're hiring some of those talented and brilliant people who've built the RPA/BPA versions in large enterprises in the past, and deploying them onto our team so we can give that same level of expertise and guidance to our customers. So we've also done a lot of work around building the right team to help us engage those customers.

But it's still early days, and we're actively hiring at the moment. If anyone in your audience is interested in looking at those opportunities, please check out the website; we're hiring basically across every single team in order to help capture this moment in time and deliver agentic AI to as many companies and people as possible this year.

Justin: I think that's all we have time for today, but this was just super, super interesting. So thank you. I wish you folks the very best, and we'll check in with you again sometime in the future, I hope.

Daniel: Thank you so much. Thanks, Justin. And thanks for the opportunity to share more with the RevOps community.

About the Podcast

RevOps FM
Thinking out loud about RevOps and go-to-market strategy.
This podcast is your weekly masterclass on becoming a better revenue operator. We challenge conventional wisdom and dig into what actually works for building predictable revenue at scale.

For show notes and extra resources, visit https://revops.fm/show

Key topics include: marketing technology, sales technology, marketing operations, sales operations, process optimization, team structure, planning, reporting, forecasting, workflow automation, and GTM strategy.

About your host

Justin Norris

Justin has over 15 years of experience as a marketing, operations, and GTM professional.

He's worked almost exclusively at startups, including a successful exit. As an operations consultant, he's been a trusted partner to numerous SaaS "unicorns" and Fortune 500s.