Unveiling The Future Of AI written by John Jantsch read more at Duct Tape Marketing

Marketing Podcast with Kenneth Wenger

Kenneth Wenger, a guest on the Duct Tape Marketing Podcast

In this episode of the Duct Tape Marketing Podcast, I interview Kenneth Wenger. He is an author, a research scholar at Toronto Metropolitan University, and CTO of Squint AI Inc. His research interests lie at the intersection of humans and machines, ensuring that we build a future based on the responsible use of technology.

His newest book, Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI, explains the complexity of AI, demonstrating its potential and exposing its shortfalls. Kenneth empowers readers to answer the question: What exactly is AI?

Key Takeaway:

While significant progress has been made in AI, we are still at the early stages of its development. Current AI models are primarily performing simple statistical tasks rather than exhibiting deep intelligence. The future of AI lies in developing models that can understand context and differentiate between right and wrong answers.

Kenneth also emphasizes the pitfalls of relying on AI, particularly the lack of understanding behind a model’s decision-making process and the potential for biased outcomes. The trustworthiness and accountability of these machines are crucial to develop, especially in safety-critical domains where human lives could be at stake, such as medicine or law. Overall, while AI has made substantial strides, there is still a long way to go in unlocking its true potential and addressing the associated challenges.

Questions I ask Kenneth Wenger:

  • [02:32] The title of your book, Is the Algorithm Plotting Against Us?, is a bit of a provocative question. So why ask this question?
  • [03:45] Where do you think we really are in the continuum of the evolution of AI?
  • [07:58] Do you see a day where AI machines will start asking questions back to people?
  • [09:25] You have both layperson and math in the title of the book. Could you give us the layperson’s version of how AI actually does that?
  • [15:30] What are the real and obvious pitfalls of relying on AI?
  • [19:49] As people start relying on these machines to make decisions that are supposed to be informed, a lot of times the predictions could be wrong, right?

More About Kenneth Wenger:

  • Get your copy of Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI.
  • Connect with Kenneth.

More About The Agency Certification Intensive Training:

  • Learn more about the Agency Certification Intensive Training here

Take The Marketing Assessment:

  • Marketingassessment.co

Like this show? Click on over and give us a review on iTunes, please!

John Jantsch (00:00): Hey, did you know that HubSpot’s annual inbound conference is coming up? That’s right. It’ll be in Boston from September 5th through the eighth. Every year inbound brings together leaders across business, sales, marketing, customer success, operations, and more. You’ll be able to discover all the latest must-know trends and tactics that you can actually put into place to scale your business in a sustainable way. You can learn from industry experts and be inspired by incredible spotlight talent. This year, the likes of Reese Witherspoon, Derek Jeter, and Guy Raz are all going to make appearances. Visit inbound.com and get your ticket today. You won’t be sorry. This programming is guaranteed to inspire and recharge. That’s right. Go to inbound.com to get your ticket today.

(01:03): Hello and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Kenneth Wenger. He’s an author, research scholar at Toronto Metropolitan University, and CTO of Squint AI Inc. His research interests lie at the intersection of humans and machines, ensuring that we build a future based on the responsible use of technology. We’re gonna talk about his book today, Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math, and Pitfalls of AI. So, Ken, welcome to the show.

Kenneth Wenger (01:40): Hi, John. Thank you very much. Thank you for having me.

John Jantsch (01:42): So, so we are gonna talk about the book, but I, I’m just curious, what, what does Squint AI do?

Kenneth Wenger (01:47): That’s a great question. So, Squint AI, um, is a company that we created to, um, do some research and develop a platform that enables us to, um,

(02:00): Do, do AI in a more responsible, uh, way. Okay. Okay. So, uh, I’m sure we’re gonna get into this, but I touch upon it, uh, in the book in many cases as well, where we talk about, uh, AI, ethical use of AI, some of the downfalls of AI. And so what we’re doing with Squint is we’re trying to figure out, you know, how do we create an environment that enables us to use AI in a way that lets us understand when these algorithms are not performing at their best, when they’re making mistakes and so on. Yeah,

John Jantsch (02:30): Yeah. So, the title of your book, Is the Algorithm Plotting Against Us?, is a bit of a provocative question. I mean, obviously I’m sure there are people out there that are saying no, and some are saying, well, absolutely. So why ask the question then?

Kenneth Wenger (02:49): Well, because I actually feel like that’s a question that’s being asked by many different people, actually with different meanings. Right? It’s almost the same as the question of whether AI poses an existential threat. It’s a question that means different things to different people. Right. So I wanted to get into that in the book and try to do two things. First, offer people the tools to be able to understand that question for themselves, right, and figure out where they stand in that debate, and then second, um, you know, also provide my opinion along the way.

John Jantsch (03:21): Yeah, yeah. And I probably didn’t ask that question as elegantly as I’d like to. I actually think it’s great that you ask the question, because ultimately what we’re trying to do is let people come to their own decisions rather than saying, this is true of ai, or this is not true of AI . Right.

Kenneth Wenger (03:36): That’s right. That’s right. And, and, and again, especially because it’s a nuanced problem. Yeah. And it means different things to different people.

John Jantsch (03:44): So this is a really hard question, but I’m gonna ask you: where are we really in the continuum of AI? I mean, people who have been on this topic for many years realize it’s been built into many things that we use every day and take for granted. Obviously, ChatGPT brought on a whole other spectrum of people that now, you know, at least have a talking vocabulary of what it is. But I remember, you know, I’ve had my own business 30 years. I mean, we didn’t have the web, we didn’t have websites, you know, we didn’t have mobile devices that certainly now play a part. But I remember as each of those came along, people were like, oh, we’re doomed. It’s over. Right. So currently there’s a lot of that type of language surrounding AI, but where do you think we really are in the continuum of the evolution?

Kenneth Wenger (04:32): You know, that’s a great question, because I think we are actually very early on. Yeah. I think that, you know, we’ve made remarkable progress in a very short period of time, but we’re still at the very early stages. You know, if you think of AI, where we are right now versus where we were a decade ago, we’ve made some progress. But I think, fundamentally, at a scientific level, we’ve only started to scratch the surface. I’ll give you some examples. So initially, you know, the first models were great at really giving us some proof of this new way of posing questions, you know, neural networks essentially. Yeah, yeah. Right. They’re very complex equations. Uh, if you use GPUs to run these complex equations, then we can actually solve pretty complex problems. That’s something we realized around 2012, and then between 2012 and 2017, progress was very linear.

You know, new models were created, new ideas were proposed, but things scaled and progressed very linearly. But after 2017, with the introduction of the model that’s called the Transformer, which is the base architecture behind ChatGPT and all these large language models, we had another kind of realization. That’s when we realized that if you take those models and you scale them up, in terms of the size of the model and the size of the dataset that we use to train them, they get exponentially better. Okay. And that’s when we got to the point where we are today, where we realized that just by scaling them, again, we haven’t done anything fundamentally different since 2017. All we’ve done is increase the size of the model, increase the size of the dataset, and they’re getting exponentially better.

John Jantsch (06:14): So, so multiplication rather than addition?

Kenneth Wenger (06:18): Well, yes, exactly. Yeah. So the progress has been exponential, not just a linear trajectory. Yeah. But again, the fact that we haven’t changed much fundamentally in these models, that’s going to taper off very soon, is my expectation. And now, where are we on the timeline, which was your original question? I think if you think about what the models are doing today, they’re doing very elementary, very simple statistics, essentially. Mm-hmm. So the idea of these models being called artificial intelligence, right, I think it’s a bit of a misnomer sometimes. I agree. And it leads to some of the questions that people have. Um, because there isn’t much deep intelligence going on; it’s just statistical modeling, and very simple at that. And then, where are we going from here? What I hope the future is, I think things are gonna change dramatically when we start getting models that are able not just to do simple statistics, but are able to understand the context of what it is they’re trying to achieve. Yeah. And are able to understand, you know, the right answer as well as the wrong answer. So, for example, they’re able to know when they’re talking about things they know and when they’re kind of skirting around this gray area of things they don’t really know about. Does that make sense? Yeah,

John Jantsch (07:39): Absolutely. I mean, I totally agree with you on artificial intelligence. I’ve actually been calling it IA. I think it’s more informed automation, is kind of how I look at it, at least in my work. Do you see a day where, you know, prompts asking questions, you know, that’s kind of the street use, if you will, of AI for a lot of people. Do you see a day where it starts asking you questions back? Like, why would you wanna know that? Or what are you trying to achieve, uh, by asking this question?

Kenneth Wenger (08:06): Yeah. So the simple answer is yes, I definitely do. And I think that’s part of what achieving a higher level of intelligence would be like. It’s when they’re not just doing your bidding, not just a tool. Yeah, yeah. Uh, but they kind of have their own purpose that they’re trying to achieve. And so that’s when you would see things like questions essentially, uh, arise from the system, right? When they have a goal they wanna get at, and then they figure out a plan to get to that goal, that’s when you can see the emergence of things like questions to you. I don’t think we’re there yet, but yeah, I think it’s certainly possible.

John Jantsch (08:40): But that’s the sci-fi version too, right? I mean, where people start saying, you know, the movies, it’s like, no, no, Ken, you don’t get to know that information yet. I’ll decide when you can know that .

Kenneth Wenger (08:52): Well, you’re right. I mean, the way you asked the question was more like, is it possible in principle? I think absolutely, yes. Yeah. Do we want that? I mean, I don’t know. I guess that’s part of, yeah, it depends on what use case we’re thinking about. Uh, but from a first principles perspective, yeah, it is certainly possible. Yeah. To get a model to

John Jantsch (09:13): Do that. So I do think there are scores and scores of people whose only understanding of AI is: I go to this place where it has a box, I type in a question, and it spits out an answer. Since you have both layperson and math in the title, could you give us sort of the layperson’s version of how it does that?

Kenneth Wenger (09:33): Yeah, absolutely. So, well, at least I’ll try, lemme put it that way. A few moments ago, when I mentioned that these models essentially are very simple statistical models, that phrase itself is a little bit controversial, because at the end of the day, we don’t know what kind of intelligence we have, right? So if you think about our intelligence, we don’t know whether at some level we are also a statistical model, right? However, what I mean by AI today, in large language models like ChatGPT, being simple statistical models is that they’re performing a very simple task. So if you think of ChatGPT, what they’re doing is they are trying, essentially, to predict the next best word in a sequence. That’s all they’re doing. And the way they’re doing that is that they calculate what are called probability distributions.

(10:31): So basically, for any word in a prompt or in a corpus of text, they calculate the probability that the word belongs in that sequence. Right? And then they choose the next word with the highest probability of being correct. Okay? Now, that is a very simple model in the following sense. If you think about how we communicate, right? You know, we’re having a conversation right now. I think when you ask me a question, I pause and I think about what I’m about to say, right? So I have a model of the world, and I have a purpose in that conversation. I come up with the idea of what I want to respond, and then I use my ability to produce words and to sound them out to communicate that with you. Right? It might be possible that I have a system in my brain that works very similarly to a large language model, in the sense that as soon as I start saying words, the next word that I’m about to say is the one that is most likely to be correct, given the words that I just said.

(11:32): It’s very possible that’s true. However, what’s different is that at least I already have a plan of what I’m about to say in some latent space. I have already encoded, in some form, what I want to get across. How I say it, the ability to produce those words, might be very similar to a language model. But the difference is that a large language model is trying to figure out what it’s going to say as well as coming up with those words at the same time. Mm-hmm. Right? Does that make sense? So it’s a bit like they’re rambling, and sometimes if they talk for too long, they ramble into nonsense territory. Yeah. Yeah. Because they don’t know what they’re going to say until they say it. Yeah. So that’s a very fundamental difference. Yeah.
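Kenneth’s description of next-word prediction can be sketched in a few lines of Python. This toy bigram model is purely illustrative, far simpler than the Transformer architecture behind ChatGPT, but it shows the same core step he describes: compute a probability distribution over candidate next words, then pick the most likely one. The tiny corpus and helper names here are invented for the example.

```python
from collections import Counter, defaultdict

# A toy corpus; a real language model trains on billions of words.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(word):
    """Probability distribution over the next word, given the current one."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def predict_next(word):
    """Choose the most probable next word (greedy decoding)."""
    dist = next_word_distribution(word)
    return max(dist, key=dist.get)

print(next_word_distribution("the"))  # cat ≈ 0.67, mat ≈ 0.33
print(predict_next("the"))            # 'cat'
```

Greedy decoding like this is what makes the model decide what to say as it says it: each word is chosen only from the words already emitted, with no overall plan, which is exactly the rambling behavior Kenneth describes.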

John Jantsch (12:20): I have certainly seen some output that is pretty interesting along those lines. But, you know, as I heard you talk about that, I mean, in a lot of ways that’s what we’re doing: we’re querying a database of what we’ve been taught, the words that we know, in addition to the concepts that we’ve studied, uh, and are able to articulate. I mean, in some ways, me prompting or me asking you a question works similarly. Would you say

Kenneth Wenger (12:47): The aspect of prompting a question and then answering it, it’s similar, but what is different is the, the concept that you’re trying to describe. So, again, when you ask me a question, I think about it, and I come up with, so I, again, I have a world model that works so far for me to get me through life, right? And that world model lets me understand different concepts in different ways. And when I’m about to answer your question, I think about it, I formulate a response, and then I figure out a way to communicate that with you. Okay? That step is missing from what these language models are doing, right? They’re getting a prompt, but there is no step in which they are formulating a response with some goal, right? Right? Yes. Some purpose. They are essentially getting a text, and they’re trying to generate a sequence of words that are being figured out as they’re being produced, right? There’s no ultimate plan. So that, that’s a very fundamental difference.

John Jantsch (13:54): And now, let’s hear a word from our sponsor, Marketing Made Simple. It’s a podcast hosted by Dr. JJ Peterson and is brought to you by the HubSpot Podcast Network, the audio destination for business professionals. Marketing Made Simple brings you practical tips to make your marketing easy and, more importantly, make it work. And in a recent episode, JJ and April chat with StoryBrand certified guides and agency owners about how to use ChatGPT for marketing purposes. We all know how important that is today. Listen to Marketing Made Simple wherever you get your podcasts.

(14:30): Hey, marketing agency owners, you know, I can teach you the keys to doubling your business in just 90 days or your money back. Sound interesting? All you have to do is license our three-step process that’s gonna allow you to make your competitors irrelevant, charge a premium for your services, and scale, perhaps without adding overhead. And here’s the best part: you can license this entire system for your agency by simply participating in an upcoming Agency Certification Intensive. Look, why reinvent the wheel? Use a set of tools that took us over 20 years to create. And you can have ’em today. Check it out at dtm.world/certification. That’s DTM world slash certification.

(15:18): I do wanna come to what the future holds, but I want to dwell on a couple things that you dive into in the book. Other than sort of the fear that the media spreads, what are the real, you know, and obvious pitfalls of relying on AI?

Kenneth Wenger (15:38): I think the biggest issue, and, I mean, the real motivator for me when I started writing the book, is that it is a powerful tool for two reasons. It’s very easy to use, seemingly, right? Yeah. You can spend a weekend learning Python, you can write a few lines, and you can transform, analyze, and parse data that you couldn’t before, just by using a library. So you don’t really have to understand what you’re doing, and you can get some result that looks useful, okay? Mm-hmm. But hidden in that process, right, the fact that you can take large amounts of data, modify it in some way, and get some result without understanding what’s happening in the middle, has huge repercussions for misunderstanding the results that you’re getting, right? And then if you’re using these tools in the real world, right?

(16:42): In a way that can affect other people. For example, you know, let’s say you work in a financial institution and you come up with a model to figure out who you should approve for a credit line and who you shouldn’t. Now, right now, banks have their own models, but sure, if you take the AI out of it, traditionally those models are thought through by statisticians, and they may get things wrong once in a while, but at least they have a big picture of what it means to, you know, analyze data. What are the repercussions of bias in the data? How do you get rid of it? All these things are things that a good statistician should be trained to do. But now, if you remove the statisticians, because anybody can use a model to analyze data and get some prediction, then what happens is you end up denying and approving credit lines for people with repercussions that could be, you know, driven by very negative bias in the data, right?

(17:44): Like, it could affect a certain section of the population, uh, negatively. Maybe there’s someone that can’t get a credit line anymore just because they live in a particular neighborhood. Mm-hmm. You know, there’s many reasons why this could be a problem,

John Jantsch (17:57): But wasn’t that a factor previously? I mean, certainly neighborhoods are considered, you know, as part of the, you know, even in the analog models, I think.

Kenneth Wenger (18:06): Yeah, absolutely. So, like I said, we always had a problem with bias in the data, right? But traditionally, you would hope two things would happen. First, you would hope that whoever comes up with a model, just because it’s a complex problem, has to have some statistical training. Yeah. Right? And an ethical statistician would have to consider how to deal with the bias in the data, right? So that’s number one. Number two, the problem that we have right now is that, first of all, you don’t need to have that statistician. You can just use the model without understanding what’s happening, right? Right. And then what’s worse is that with these models, it’s very difficult, traditionally, to understand how the model arrived at a prediction. So if you get denied either a credit line or, as I talk about in the book, bail, for example, in a court case, uh, it’s very difficult to argue, well, why me? Why was I denied this thing? And then if you go through the process of auditing it, again, with the traditional approach where you have a statistician, you can always ask, so how did you model this? Why was this person denied in this particular case? In an audit, mm-hmm, with a neural network, for example, that becomes a lot more complicated.

John Jantsch (19:21): So what you’re saying is one of the initial problems is that people are relying on the output, the data. I mean, even, you know, I use it in a very simple way. I run a marketing company, and we use it a lot of times to give us copy ideas, give us headline ideas, you know, for things. So I don’t really feel like there’s any real danger in there, other than maybe sounding like everybody else in your copy. Uh, but you’re saying that, you know, as people start relying on these to make decisions that are supposed to be informed, a lot of times predictions are wrong.

Kenneth Wenger (19:57): Yes. So the answer is yes. Now, there’s two reasons for that. And by the way, let me just go back to say that there are use cases where, of course, you have to think about this as a spectrum, right? Like, yeah, there are cases where the repercussions of getting something wrong are worse than in other cases, right? So, as you say, if you’re trying to generate some copy and, you know, it’s nonsensical, then you just go ahead and change it. And at the end of the day, you’re probably gonna review it anyway. So that is probably a lower cost. The cost of a mistake there will be lower than in the case of, you know, using a model in a judicial process, for example. Right? Right. Now, with respect to the fact that these models sometimes make mistakes, the reason for that is the way these models actually work, and the part that can be deceiving is that they tend to work really well for areas in the data that they understand really well.

(20:56): So if you think of a dataset, right? They’re trained using a dataset, and for most of the data in that dataset, they’re gonna be able to model it really well. And so that’s why you get models that perform, let’s say, 90% accurate on a particular dataset. The problem is that for the 10% where they’re not able to model really well, the mistakes there are remarkable, in a way that a human would not make those mistakes. Yeah. So what happens in those cases is, first of all, when we’re training these models, we say, well, you know, we get a 10% error rate in this particular dataset. The one issue is that when you take that into production, you don’t know that the incidence rate of those errors is gonna be the same in the real world, right?

(21:40): You may end up, uh, being in a situation where you get those data points that lead to errors at a much higher rate than you did in your dataset. That’s one problem. The second problem is that if your use case, your production application, is such that a mistake could be costly, like, let’s say, in a medical use case or in self-driving, then you have to go back and explain why the model got something wrong, and it is just so bizarrely different from what a human would get wrong. That’s one of the fundamental reasons why we don’t have these systems being deployed across safety-critical domains today. And by the way, that’s one of the fundamental reasons why we created Squint: to tackle specifically those problems, to figure out how we can create a set of models or a system that’s able to understand specifically when models are getting things right and when they’re getting things wrong at runtime. Because I really think it’s one of the fundamental reasons why we haven’t advanced as much as we should have at this point. It’s because when models work really well, uh, when they’re able to model the data, well, then they work great. But for the cases where they can’t model that section of the data, the mistakes are just unbelievable, right? It’s things like humans would never make those kinds of
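One common way to detect the "model is guessing" situation Kenneth describes is to measure how spread out the model’s output probabilities are. The sketch below is a generic technique, not necessarily what Squint AI does: it computes the Shannon entropy of a prediction’s probability distribution and flags near-uniform (low-confidence) predictions for human review. The threshold value is an arbitrary choice for the example.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def flag_uncertain(probs, threshold=0.5):
    """Flag a prediction for human review when the model's output
    distribution is too close to uniform to be trusted."""
    return entropy(probs) > threshold

confident = [0.97, 0.02, 0.01]   # sharply peaked: model is sure
uncertain = [0.40, 0.35, 0.25]   # nearly flat: model is guessing

print(flag_uncertain(confident))  # False
print(flag_uncertain(uncertain))  # True
```

A sharply peaked distribution passes through while a nearly flat one is flagged, which is one simple way to route uncertain predictions to a human reviewer in safety-critical settings.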

John Jantsch (23:00): Mistakes. Yeah, yeah, yeah. And obviously, you know, that has to be solved before anybody’s gonna trust sending, you know, a manned spacecraft, you know, guided by AI or something, right? I mean, when human life is at risk, you know, you’ve gotta have trust. And so if you can’t trust that decision-making, that’s certainly gonna keep people from employing the technology, I suppose.

Kenneth Wenger (23:24): Right? Or using them, for example, to help, as I was saying, in medical domains, for example, cancer diagnosis, right? If you want a model to be able to detect certain types of cancer, given, let’s say, biopsy scans, you wanna be able to trust the model. Now, any model, essentially, you know, is going to make mistakes. Nothing is ever perfect, but you want two things to happen. First, you wanna be able to minimize the types of mistakes that the model can make, and you need to have some indication when the quality of the model’s prediction isn’t great. Yeah. And second, once a mistake happens, you have to be able to defend that the reason the mistake happened is because the quality of the data was such that, you know, even a human couldn’t do better. Yeah. We can’t have models make mistakes that a human doctor would look at and say, well, this is clearly, yeah, incorrect.

John Jantsch (24:15): Yeah. Yeah. Absolutely. Well, Ken, I wanna thank you for taking a moment to stop by the Duct Tape Marketing Podcast. Do you wanna tell people where they can find and connect with you, and then obviously where they can pick up a copy of Is the Algorithm Plotting Against Us?

Kenneth Wenger (24:29): Absolutely. Thank you very much, first of all, for having me. It was a great conversation. So yeah, you can reach me on LinkedIn, and for a copy of the book, you can get it both from, uh, Amazon as well as from our publisher’s website, workingfires.org.

John Jantsch (24:42): Awesome. Well, again, thanks for stopping by. Great conversation. Hopefully we’ll run into you one of these days out there on the road.

Kenneth Wenger (24:49): Thank you.

John Jantsch (24:49): Hey, and one final thing before you go. You know how I talk about marketing strategy, strategy before tactics? Well, sometimes it can be hard to understand where you stand in that, what needs to be done with regard to creating a marketing strategy. So we created a free tool for you. It’s called the Marketing Strategy Assessment. You can find it at marketingassessment.co, not .com, .co. Check out our free marketing assessment and learn where you are with your strategy today. That’s marketingassessment.co. I’d love to chat with you about the results that you get.

This episode of the Duct Tape Marketing Podcast is brought to you by the HubSpot Podcast Network.

HubSpot Podcast Network is the audio destination for business professionals who seek the best education and inspiration on how to grow a business.



Embracing Your Entrepreneurial Superpower Being Unemployable written by John Jantsch read more at Duct Tape Marketing

Marketing Podcast with Alysia Silberg

Alysia Silberg, a guest on the Duct Tape Marketing Podcast

In this episode of the Duct Tape Marketing Podcast, I interview Alysia Silberg. She is a leading venture capitalist in Silicon Valley, where she mentors tech startups and helps them go public. She is the CEO & General Partner of the investment firm Street Global.

Her online radio show, Global Fireside Chats, brings together global industry titans to share insights on our fast-changing world. Furthermore, Alysia is a UN Women Empower Women Global Champion and an international board director with sovereign wealth fund experience.

Her first book, Unemployable: How I Hired Myself, details her life story and serves as a guide to financial freedom, helping to change your mindset from “I can’t” to “I can.”

Key Takeaway:

Alysia changes the narrative around being “unemployable” and relates it to entrepreneurship and finding one’s superpower in business. Being unemployable is something to be proud of, as it often reflects the mindset and qualities of an entrepreneur that can lead to innovation and generate change. She emphasizes the importance of owning one’s uniqueness, taking risks, embracing curiosity, and seizing the opportunities presented by the digital revolution.

The current business environment, which Alysia describes as a “modern-day renaissance,” is a time for innovation and new opportunities. It’s important to leverage the power of AI and digital tools to start and grow a business and to develop each person’s superpower.

Questions I ask Alysia Silberg:

  • [01:52] Tell me a little bit about the artwork from the book cover.
  • [03:07] Your book launch party was at a roller rink. How did that come about?
  • [04:13] Why is the book called Unemployable?
  • [07:20] Can you name a particular instance in your career where you felt like “This is going to work, this is like what I should be doing”?
  • [08:58] You talk about superpowers and finding your superpower. Does your superpower have a name?
  • [10:00] Back in South Africa you got shot, what did that story mean to your journey?
  • [11:53] Is there anything about what’s going on right now in the current business environment that you think makes this a strong time?
  • [15:25] What’s the first step you tell people to acquire the mindset you talk about?
  • [18:23] What are your thoughts on the idea that there are proven business models and you don’t have to create a whole new thing from zero?
  • [19:41] Based on where you see where we are today, what’s work going to look like in 10 years?

More About Alysia Silberg:

  • Get your copy of Unemployable: How I Hired Myself
  • Connect with Alysia and follow her on Instagram

More About The Agency Certification Intensive Training:

  • Learn more about the Agency Certification Intensive Training here

Take The Marketing Assessment:

  • Marketingassessment.co

Like this show? Click on over and give us a review on iTunes, please!

John Jantsch (00:00): Hey, did you know that HubSpot’s annual Inbound conference is coming up? That’s right. It’ll be in Boston from September 5th through the 8th. Every year, Inbound brings together leaders across business, sales, marketing, customer success, operations, and more. You’ll be able to discover all the latest must-know trends and tactics that you can actually put into place to scale your business in a sustainable way. You can learn from industry experts and be inspired by incredible spotlight talent. This year, the likes of Reese Witherspoon, Derek Jeter, and Guy Raz are all going to make appearances. Visit inbound.com and get your ticket today. You won’t be sorry. This programming is guaranteed to inspire and recharge. That’s right. Go to inbound.com to get your ticket today.

(01:03): Hello and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Alysia Silberg. She’s a leading venture capitalist in Silicon Valley, where she mentors tech startups and helps them go public. She is the CEO and general partner of the investment firm Street Global. Her online radio show, Global Fireside Chats, brings together global industry titans to share insights on our fast-changing world. She is a UN Women Empower Women Global Champion and an international board director with sovereign wealth fund experience. We’re gonna talk about her first book, Unemployable: How I Hired Myself. So Alysia, welcome to the show.

Alysia Silberg (01:47): Hi John. Very excited to be joining you. Thanks for having me.

John Jantsch (01:50): So listeners can’t see this, although folks watching the video in the show notes obviously will, but you have a picture of the artwork from the cover behind you there in frame, and I wanted to start there because I just absolutely love it. So tell me a little bit about it. I mean, frankly, it’s a work of art.

Alysia Silberg (02:08): Thank you. Um, very excited to hear you say that. So you talk about being unemployable, you talk about the future of AI. I know these are themes we’ll be chatting about today, but I had five designers trying to come up with what it meant to be unemployable, and no one could convey that in imagery. And one of my founders, who has an ed tech startup focusing on AI in Minneapolis, said, let me sit down, let me take the book, let me put it in OpenAI’s design platform and see what happens. And this is what the AI came up with. It’s the essence of a founder’s journey, and it’s interpreting it, you know, that drive and ambition, all of it embodied, and in this situation it happens to be my image. But I’m very excited that we created that connection with the AI and it turned out the way it did.

John Jantsch (02:55): Yeah, it’s kind of a block, almost like a Japanese block print illustration. It’s really fabulous. Okay, another totally unrelated topic to the book, sort of. I also love that you were doing a book launch party at a roller rink. How did that come about?

Alysia Silberg (03:11): Well, you know, the book is about finding your superpower, and superpowers are often unexpected, and we discover them in the weirdest of ways. For me, it was a pair of pink roller skates at five years old. I wanted them more than anything on earth and I couldn’t afford them. No one in my family could afford them. And I had to figure out how I could get these pink roller skates, and I built a business, and you’ll read all about it in the book. Crazy, wild. Only founders understand what it means to want something so badly. And I never wore those roller skates, ever. I treasured them cuz they reminded me of what’s possible, the dreamer, you know, anything is possible. And so the idea of having a roller skating party for the book was the only thing I could do to honor each of our journeys. For me it’s roller skates. For you it was probably something else, but once a founder, always a founder, and it’s just the way it’s meant to be.

John Jantsch (04:05): Kind of sounds like a load of fun too. So there’s that. So why unemployable? I mean, that name specifically, as opposed to entrepreneur? You know, obviously you’re trying to convey something maybe a little deeper.

Alysia Silberg (04:26): Absolutely. So I was trained as an actuary and I went for a career aptitude test at a bank. And I was, you know, on my way thinking I’d join, you know, a big bank, and I was told I was unemployable in that aptitude test, and I was devastated. I was like, what do I do now? And I took it as an insult, and at the time it was, you know, it wasn’t a compliment, and it took me decades to own that. And what I do today now is I’m a researcher and I’m an investor. Like, those are the things that make me the entrepreneur that I am. Right. And it was very tough choosing the title. I did a ton of research because everyone kept saying, but you’re not unemployable. How can you say you’re unemployable? And I’m like, well, actually I am.

(05:07): And it’s okay. It’s something to be proud of. The most important creators in history were basically unemployable. To create innovation and change in these things, you’ve gotta be able to just live in a different way to many people and take risks. But there’s a lot of bravery around that title, and I hope it honors the founder’s journey. So for everyone out there that feels unemployable, as I say, I’ve learned to own it and be the queen of unemployable. And I know my family members are like, have you lost your mind? But it’s my truth.

John Jantsch (05:39): No, it’s funny, I relate to it maybe in a little different sense. I’ve owned my own business for 30 years. I worked for somebody for about five years and said, you know, anybody can run a business. But there was a sense of feeling unemployable, though it was probably more of a self-esteem issue than an I-can-go-out-there-and-conquer-the-world thing. And I wonder, it’d be interesting, you’ve come to this place you’re in right now, but I’m wondering if a lot of entrepreneurs start with that same view a little bit, regardless of what it turns into.

Alysia Silberg (06:12): Absolutely. I think I suffered from huge imposter syndrome, and the ironic part was it was that bank who didn’t want me, and because of my imposter syndrome I decided no one wanted me. So when job offers came, I was like, whoa, I don’t feel I belong here because there’s something wrong with me. Versus, I’m a born and bred founder and this is what I do. Like you, you’ve been running a company for a very long time, and I think that’s what I hope to get out of the book: each person has something unique, and instead of hiding from it and saying I have to conform to what everybody expects me to do, rather say, okay, AI is bringing all this change. People are gonna lose their jobs, things are gonna be very different. Let me own my superpower, let me build a business. And even if I do feel a bit like an imposter, even now, I still feel like an imposter. I still have to work on it a lot. It’s okay. You will find customers that will support you just the way you are, and you can build something really cool, as you have done.

John Jantsch (07:12): So, you have started, have you lost track of how many companies over the years? The number doesn’t matter.

Alysia Silberg (07:18): Too many,

John Jantsch (07:18): Many. But uh, I’m wondering if you could, in hindsight, as we always do, kind of go back and think about a time, maybe it was one company or maybe it was a number of companies, where you felt like, this is gonna work, this is what I should be doing. And that moment was not just validating but actually drove you forward. Can you think of a particular instance?

Alysia Silberg (07:43): For sure. I think it was the company that we built that brought us to the US in the first place, where it was connecting the dots and we were solving a problem for our customer. So it was a very early voice analytics platform, which was helping salespeople sell better, long before sales enablement became this very ubiquitous thing. And there was so much intensity coming at us from the market, where they wanted something better that wasn’t available. That even though everyone in South Africa said to us, you’re mad, what are you doing going to America for a sales app? The idea that there was a probability of greater than, let’s say, 10% that we would build something extremely valuable, that was enough of, I don’t know, just a spark of, you know, I’m gonna do this no matter what, and whatever happens, I’m doing this and I’m gonna make it work. But absolutely, that one was just clear, and I think I use that to look at startups today, where when I can see something that’s gonna happen, you wanna be on that journey cuz it’s so exciting.

John Jantsch (08:44): Yeah. And then, and I mean this in a positive way, it becomes like a drug, right? You recognize it the next time, like, I want that high again. Right?

Alysia Silberg (08:51): Absolutely. Absolutely. It’s addictive.

John Jantsch (08:57): So, so you talk a lot in this book about superpowers and finding your superpower. I’m curious, does your superpower have a name?

Alysia Silberg (09:05): I’m obsessed with pattern recognition, and I think growing up people saw me as a freak. Like, it was very tough growing up cuz I was so different to everyone around me in South Africa. And I think that’s why I work so well with the AI, because it’s so much better at pattern recognition than me. And faster, absolutely, for sure it’s better. I’ve gotta admit, I’ve done a lot of work on my ego, a lot of humility, but I’m obsessed with finding patterns and stuff, which is really cool when it comes to investing.

John Jantsch (09:38): It’s interesting, I’ve for years told people that my superpower is curiosity, and I think that’s probably very related to pattern recognition. A lot of times, you know, I will read a book about architecture and get my best ideas even though I have nothing to do with architecture, you know? Right. I think there’s a lot to that. Let’s go back to South Africa. You were very young and you had an incident where you got shot, or almost got shot. I obviously think the story bears on where you are today, but what did that story mean to your journey?

Alysia Silberg (10:18): Absolutely, and I think it was a pivotal moment in the sense that I saw an environment that just made no sense to me. And I was very young, and I saw the people around me, where they chose to live in an environment that they believed made sense to them because they were fearful of going and, as you say, being curious enough to try something that was better, even though it was very scary. For me, I had no choice. From that moment onwards, I knew I was gonna come to America, and it never mattered what went wrong, what obstacle was thrown in my way. Like, as you read the book, you’ll see the number of times where I had visa troubles, and I never gave up on the American dream. Where you say curiosity, the idea that you can live in a place where the sky is the limit for founders and you can build to your heart’s content, and there’s so much support available, and you’ll always find an investor, you’ll always find customers, you’ll always find team members. I didn’t grow up in that environment. And so that moment that happened, even though it was the most terrifying thing to ever happen in my life, I still have the scar to this day, and I could have had it removed, but I chose to keep it because it’s a reminder of where I came from, and to feel a sense of gratitude for where I am, and to never take for granted the luck of actually being here.

John Jantsch (11:44): Is there anything about this moment in time that makes it like, now is when you should jump, now is when you should do whatever you’ve been thinking about doing? Is there anything about what’s going on right now in the current business environment that you think makes this a strong time?

Alysia Silberg (11:59): Absolutely. I think, you know, I’m a student of the Renaissance. I’ve studied it in depth, and we are living at the most exciting time in history. You know, many people are very frightened; economically, politically, there’s a lot happening. But this is a time of great excitement, and I think there are many people who fear the AI revolution, and yes, there will be a lot of change in terms of jobs and all these things, but ultimately they will change for the better. But going back to superpowers, why I felt it was so important to get the book into as many people’s hands as possible is, I know what it’s like to have no money. I know what it’s like to be frightened. I know what it’s like to be poor. I’ve experienced those things, and you don’t wanna be sitting in your job thinking, what’s gonna happen to me?

(12:46): What’s gonna happen to my kids? Versus thinking, okay, this may happen to me, but instead of sitting waiting for it, I’m gonna take my life into my own hands. I have something of value that I can offer the world. How do I leverage the power of, let’s say, the internet? There are 3 billion people online. So with the combination of your superpower and the power of the internet, you can easily start a business on the side and you can grow it. And the fact that you don’t need to know how to code anymore, the fact that you don’t need to know how to do all these things because the AI is so easy, anyone can use it now. It’s a matter of, you said it, curiosity, coming from a place of like, okay, I’m gonna learn this. This isn’t difficult. Like, it’s there and it’s there for the taking. And I think the longer people wait, purely because it’s new and a little bit scary for many people, the more you get left behind, versus saying, okay, we are living through a modern-day renaissance and it’s coming out of the US, let me participate, let me do it. And in no time, at the speed things are going, you’ll never, ever look back, that much I can assure you of.

John Jantsch (13:50): And now let’s hear a word from our sponsor, Marketing Made Simple. It’s a podcast hosted by Dr. JJ Peterson and is brought to you by the HubSpot Podcast Network, the audio destination for business professionals. Marketing Made Simple brings you practical tips to make your marketing easy and, more importantly, make it work. And in a recent episode, JJ and April chat with StoryBrand certified guides and agency owners about how to use ChatGPT for marketing purposes. We all know how important that is today. Listen to Marketing Made Simple wherever you get your podcasts.

(14:27): Hey, marketing agency owners, you know, I can teach you the keys to doubling your business in just 90 days or your money back. Sound interesting? All you have to do is license our three-step process that’s gonna allow you to make your competitors irrelevant, charge a premium for your services, and scale, perhaps without adding overhead. And here’s the best part: you can license this entire system for your agency by simply participating in an upcoming agency certification intensive. Look, why reinvent the wheel? Use a set of tools that took us over 20 years to create. And you can have ’em today. Check it out at dtm.world/certification. That’s dtm.world/certification.

(15:11): I guarantee you, when this book comes out and you’re out speaking to groups or speaking to individuals, somebody’s gonna come up to you and say, okay, your talk was brilliant, I’m so inspired. But like, what’s the first step?

Alysia Silberg (15:27): Absolutely. I’ve tried to take what I’ve learned over the last decade with AI and simplify it in a way that empowers anyone, right? I’m obsessed with being a teacher, and I believe in radical open-mindedness. I hope that what I’ve done is going to help many people. So I have a daily AI newsletter that’s free, and it’s got different sections to it. One of the sections I love the most is tools. And there are all these different tools there, and when people start reading it, they’ll be like, she’s insane, she expects me to understand this stuff. But bear with me. As I say, I’ve taught statistics, finance, financial maths. Go and look at the tools every day, just read about the tools, and in the beginning it’ll be a bit new and a bit scary, and there’s videos included, there’s all kinds of things.

(16:10): And give yourself, let’s say, seven days, then 10 days, just reading it. And by the end of that you’ll start noticing, hey, this is not that difficult. Okay, I wanted to build a website. Instead of going the usual route of all the difficulties of building a website, there’s actually an AI tool that I can use for free or next to free, and I can get an AI tool to build that website for me. And as you watch it build itself, and you’re giving it the parts that it needs to build it, you’ll see it’s actually incredibly fun. You have no idea how much fun AI is. It’s like a form of magic. Like, I use it for text messages, for, you know, you’re tired, you’re worn down, you’ve had a busy day, you’re trying to convey something, but you’re like, my brain is saturated. And the fact that this machine can take what you’re trying to say and just make those micro adjustments so that you’re conveying the right thing, but your tone, where you don’t wanna come across as worn down and tired and all these things, you wanna be, I’m happy and I’m happy to be talking to you, and it can do that for you.

(17:07): It’s these tiny things where you don’t have to start at the most advanced stuff. You can start at the basics and just build up and find other people that are interested. That’s been huge for me. Where you build a peer group of people who are like, I’m really interested in it, why don’t we talk about it? Like, one of my friends is doing music and AI, and he’s spending all his time composing music, and he’s like, well, can you send me your music? I’m like, I’m embarrassed. He’s like, I’m embarrassed of my music too. But I’m like, okay, let’s share music and see where this goes. And we’ve got this whole AI music group that we’re creating. So I think it’s, again, taking something you are really interested in and saying, can I have more fun with this? Can I do more with this? And then finding other people who can play with you. It’s a lot about playing.

John Jantsch (17:49): Yeah. You know, I started my business before we had the internet, you know, as a marketer. I tell that to groups sometimes and they’re like, what? I don’t get how, right? And I really think that what happens is that with every new tool that comes along, a lot of people get obsessed with the tool itself as opposed to how the tool can be applied to an already proven model. And I think that’s where people miss it. You know, I have been licensing our methodology of work for many years, and I see AI as a great tool to actually apply to that licensing model as well. And I wonder what your thoughts are on that kind of idea: that there are proven business models, you don’t have to create a whole new thing, you can just use these tools to do it in a different way. It’s pattern recognition in a different way, isn’t it?

Alysia Silberg (18:35): Absolutely. Like, what you’re doing would be a brilliant use case, and I’d love to talk to you sometime offline, where it’s so much fun to take what you’ve created and say, okay, where are the biggest problems you, as the creator with mastery, have over your business? Where are those things that you really don’t wanna be spending your time on, versus these other things you do wanna spend your time on? And how do we use the AI to give you that time back so that you can spend your time on the thing you love most within your business? And it’s so easy. That’s the part that blows the person’s mind, when you actually start doing it. Whether it’s accounting, whatever it may be, you just don’t wanna be doing it. And the idea that you can outsource it, and it happens so quickly, you’re like, wow, I have like 30% of my time available that I didn’t have. How do I use that 30%? And that’s an interesting problem to have. So I would love to talk to you more about figuring out how we can play together on your stuff, because that would be cool.

John Jantsch (19:32): Awesome. Let’s do it. Uh, I want you to go beyond where we are today and, you know, take the crystal ball for what it’s worth and say, based on where you see we are today, what’s work gonna look like in 10 years?

Alysia Silberg (19:46): I’m a contrarian and so

(19:51): I think people are going to have, we’re gonna have all these tools, they’re gonna be working for us, and I think everyone will have a lot more freedom. I think the machines will be doing all the stuff no one wants to do, which I think is really cool. I think we’ll also go into a very creative period in history, again like the Renaissance, where things that people just didn’t have time to do, they will have time to do. A lot of people around me spend a lot of time thinking about universal basic income. These kinds of things are important to think about in terms of the future. You know, I’ve had an interesting experience on my own team, where we started bringing in digital workers, so like AI avatars. And it’s been very interesting, because you think about the team, and the team is creating these avatars, and my team was like, okay, what kind of demographic do we want?

(20:41): What age do we want the avatar to be? All these things. They were literally designing these avatars, which is where it lands us. At the same time, I’m fascinated by what young people have to say about this. So I engage, even for the book especially, I engage a ton with people in the 17-, 18-, 19-, 20-year-old range, and they want a lot of in-real-life engagement. They want what we always had. As you said, you built your business before the internet, you knew what it was like to do everything in person, and they crave that engagement. Mm-hmm. So I think at the same time as the machines will do the work, and substantial wealth will be created across the board for people because the machines are doing all the work, I think people may engage very differently, in a way that will, you know, I look at movies from the sixties, and you almost want that nostalgic feeling of life being simpler.

(21:38): And I have a feeling a lot of that will come back, where, why do you have to spend all your time in front of a machine if, you know, I can hang out with you in person cuz I’m not stuck to my machine doing all my work? So I don’t know how it will play out, but I think ultimately things will be better. But that comes down to regulation too, in terms of just, you know, managing the AI really, really well, cuz it is so powerful and it learns so well, no matter how curious we are. It’s a very smart learner.

John Jantsch (22:05): You know, it’s interesting when you talk about being a student of the Renaissance. You know, prior to factories being created, people didn’t work like they do today. They didn’t work nine to five or whatever; they spent great chunks of time just hanging out in salons and doing things. So in some ways, I think about what is possible if we change the mindset of the factory, so to speak. I think there is a possibility that this actually aids a return to a more human existence. Which is sort of contrary, isn’t it?

Alysia Silberg (22:40): I fully agree with you. I can sense how desperate people are. Like, you know, I spend a lot of time thinking about mental health and those things, and people crave that kind of world, and there’s no reason why we can’t partner with the machines to give that kind of life to everyone. Where people do have more time to, like, I’m really enjoying this conversation. If neither of us were working, we could be hanging out having this conversation in our own salon with people like us, and the creativity and the things that can come out of it. We’ve seen the last 500 years defined by that time in history. We can define the next 500 years by this time.

John Jantsch (23:20): Yeah. Alysia, we could talk a long time about this stuff, but we are out of time for today’s episode. I’d love for you to invite people to connect with you or find out more, however you would like to invite them, and obviously pick up a copy of Unemployable.

Alysia Silberg (23:36): Absolutely. Please, I’ve discounted Unemployable to 99 cents on Amazon because I wanted to get it into as many founders’ hands as possible. So please go and buy the book and review it. And if you think it sucks, I’m radically open-minded. You can tell me it sucks, and I’d love to know why, cause, you know, there’s always a kernel of truth in all criticism, and I’m a founder who loves to learn from their customers. So please buy the book, let me know what you think, connect with me on social media. I love hearing from other founders and creators, and the newsletter’s free. I’d love to share the newsletter so your founders and everyone in your community can subscribe. And again, if they’ve got questions, just email me back. I’ve got a team of people dedicated to it. So if there’s stuff they feel is missing that they wanna learn more about, I’m very passionate about really changing the world when it comes to, you know, the changes taking place. And so I love hearing from people just like us.

John Jantsch (24:25): Awesome. Well again, thank you so much for taking a few minutes to stop by the podcast and hopefully we’ll run into you one of these days out there on the road in real life.

Alysia Silberg (24:33): I would love it. Thank you very much for hosting me. I loved every minute of it.

John Jantsch (24:38): Hey, and one final thing before you go. You know how I talk about marketing strategy, strategy before tactics? Well, sometimes it can be hard to understand where you stand in that, what needs to be done with regard to creating a marketing strategy. So we created a free tool for you. It’s called the Marketing Strategy Assessment. You can find it at marketingassessment.co. Not .com, .co. Check out our free marketing assessment and learn where you are with your strategy today. That’s just marketingassessment.co. I’d love to chat with you about the results that you get.

This episode of the Duct Tape Marketing Podcast is brought to you by the HubSpot Podcast Network.

HubSpot Podcast Network is the audio destination for business professionals who seek the best education and inspiration on how to grow a business.

I’ve recorded more than 270 videos (give or take) since I started my short-form video journey about six months ago. I’m often asked about my process for recording short-form videos, and I feel like this is a good time to do a reveal.

Truthfully, I didn’t feel all that comfortable saying much during most of this journey since I knew that I was still so early in figuring it all out. I was learning and evolving, and I knew that whatever process I had was likely to change as soon as I talked about it.

At this stage, though, I feel pretty good about the process I’m following. It’s consistent. I’m sure plenty will still change to how I do things over the next year and beyond, but the rate of change will likely slow. I’ve found what I like and what makes me efficient for now.

Below is my current process for recording my short-form videos. It’s not perfect. But it’s what works for me.

If you want to learn more about what I do and how to get started with short-form video, check out my new training course!

Scripting

I know there are a lot of theories on this, but I mostly don’t script anything.

Occasionally, I’ll write down some thoughts and I’ll even write out bullet points. But I truly think my videos are more conversational if I don’t have a script.

I have a goal in mind, of course. I know what I’m going to talk about. And then I talk.

I’ll get to that a bit more later.

Lighting

I won’t get into all of the details about lighting here because that’s a separate topic of its own, but here are three primary things that I do…

1. I turn on my two LED panels that are mounted on either side of my desk.

2. I turn off my ceiling lights.

3. I turn on my RGB light that’s laying on the floor for a splash of color to the walls and ceiling.

The RGB is unnecessary, but I’ve found it adds a little bit of style and variation. My videos are bound to look very similar to one another, so switching up the color every day tends to help.

My Camera (My Phone)

Some people are surprised to hear this, but the primary video source is my phone. It’s not a special DSLR or fancy camera.

Of course, I do use an iPhone 14 Pro, and I can see a noticeable improvement over my old iPhone 12. I also use the Cinematic video setting to make the background blurry and give it a bit of professional polish.

I perch my phone on a small tripod in front of me on my desk. I use the rear-facing camera because it produces slightly better quality. To do that, I connect my iPhone to my MacBook and open up QuickTime. I start a new movie recording and select my phone as the camera so that I can see what I’m recording from there.

Many have reached out and told me that I don’t need to do it that way, but I do because I have an older MacBook from 2016. Newer ones can mirror to the laptop without hooking them up.

I use a little remote that once came with a tripod to start recording. It couldn’t have cost more than $10.

Beyond that, I line everything up so that my desk doesn’t appear in the shot and my eyes land on the top horizontal line of the grid tool.

Screenflow Recording

While I record with my phone, I also record with Screenflow, a desktop video editing app.

Many people ask what I use for editing, and I’m always hesitant to tell them. The truth is that you can do the same thing with so many different apps, it really doesn’t matter. Use what you’re comfortable with. I’ve used Screenflow for more than a decade.

I record at least my external mic from Screenflow. This will replace my phone audio to improve the quality. If I’m doing a tutorial, I’ll also record my screen.

What that means is that we’ll have at least two files to work with when I’m editing. Since there will be audio from my phone’s video file, I will line up the phone’s audio with the audio from my external mic before deleting the phone’s audio file.
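If you like to script things, the same audio-swap step can be done outside an editor with a tool like ffmpeg. This is just a rough sketch of that idea, not what Screenflow does internally; the file names and sync offset below are hypothetical, and it assumes ffmpeg is installed:

```python
# Sketch: keep the phone's video stream but use the external-mic recording
# as the soundtrack, via an ffmpeg command. File names are hypothetical.

def build_ffmpeg_cmd(video_path, mic_audio_path, out_path, offset_s=0.0):
    """Build an ffmpeg command that drops the phone's audio and maps in
    the external-mic track, shifted by offset_s seconds to line them up."""
    return [
        "ffmpeg",
        "-i", video_path,             # phone video; its audio will be ignored
        "-itsoffset", str(offset_s),  # shift applied to the next input (the mic)
        "-i", mic_audio_path,         # external mic recording
        "-map", "0:v:0",              # take video from input 0
        "-map", "1:a:0",              # take audio from input 1
        "-c:v", "copy",               # keep the video as-is, no re-encode
        "-shortest",                  # end when the shorter stream ends
        out_path,
    ]

cmd = build_ffmpeg_cmd("phone.mov", "mic.wav", "final.mp4", offset_s=1.25)
# import subprocess; subprocess.run(cmd, check=True)  # uncomment to run
```

Lining up the two audio tracks by eye (or by a clap at the start of the take) gives you the offset; after that, the phone’s audio track can be deleted just like in the editor.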

The Sit-Down

Here’s something you need to understand: While the final product is a clean edit, it was anything but that when I sat down and hit record.

View this post on Instagram

A post shared by Jon Loomer (@jonloomer)

I could create a blooper reel for every sit-down I do. I’m not kidding.

I know that this is one of the things that keeps people from creating videos. They see the final polish and they think that’s how it looked when it was recorded. That can be intimidating.

My original video files are anywhere from three to six minutes. And really, three and six almost never happen. It’s usually between four and five.

To recap from earlier, I set up my phone in front of me and hit record. I have a general idea of what I want to say, but I don’t script anything out. I just talk.

To simplify: in my head, I’m really only thinking about the next sentence I want to say. I’m not all that worried about the full message.

I know I’m not alone in this, but I fumble with my words a lot. I often need to attempt to say the same sentence five or six times. It’s frustrating, but that stuff gets edited out.

I say a sentence, and then pause and think. I say another sentence, pause, and think. I repeat it if it comes out awkwardly. Sometimes I say sentence fragments and pause and then complete the sentence. I can splice the fragments together later.

Editing

I won’t get into the details of my editing here, but just a few quick points.

I will cut that three to six-minute rough file down to no more than 60 seconds. No exceptions. You’ll find that many of my videos are 59+ seconds. This is not by accident.

After lining up the files, I make a first-pass edit. This is when I just clean up the mistakes and pauses.

Of course, that almost always results in a video that will be longer than a minute (typically 10 to 20 seconds over). That means I need to go back through and prioritize. This is where I take out parts that aren’t completely necessary to the message.

Once I get it down to 60 seconds, I then add a final polish with jump cuts and zoom video actions.

The final step is that I export that file and import it into CapCut to add the captions.

Scheduling and Publishing

I publish my videos to six different platforms (links to my profile for each):

  • TikTok
  • Facebook Reels
  • Instagram Reels
  • YouTube Shorts
  • LinkedIn
  • Pinterest

It’s a lot. I’ve experimented with various scheduling and repurposing apps, but I’m currently back to scheduling almost all of them natively. I just don’t find that scheduling takes enough time to justify a separate app.

There’s always the debate about whether third-party apps hurt your reach, too. Personally, I only have suspicions about YouTube Shorts. Whether a coincidence or not, I’ve seen that my videos consistently perform far better when scheduled or published natively from YouTube.

Not that my videos do great on YouTube. But I was regularly seeing 50 views or fewer per video when using a third-party app, and I consistently see more than that when scheduling natively.

Everything else is trial and error and personal preference. I publish at least once per day, and it’s typically in the morning.

Your Turn

Do you have any questions about my short-form video recording process? Ask them below.

The post My Process for Recording Short-Form Videos appeared first on Jon Loomer Digital.

Facebook lead ads have a reputation for generating low-quality leads. In many cases, they’ve deserved that reputation. But, there are several steps that you can take to help improve the quality of your leads from Facebook lead ads.

I often have people ask me whether they should use Facebook lead ads or send people to a landing page on their website. The truth is that either approach can work. And both approaches have their own built-in advantages and disadvantages.

Don’t avoid Facebook lead ads due to presumed lower-quality leads. Make these forms work for you.

Use this as a guide…

Quality Lead-Building 101

In theory, it makes sense that the default Facebook lead form would produce lower-quality leads than the typical landing page. It’s not all that difficult to explain. It’s truly Quality Lead-Building 101.

It doesn’t matter what approach you use to build leads. The easier you make the process, the more leads you should expect. But that ease of completion comes at the cost of quality.

If you create an extremely simple landing page with a brief explanation and a form that only requests an email address, you can expect to get more leads than a long landing page with a form that requests a first name, last name, and additional details about your business in addition to the email address.

Facebook lead forms have an advantage related to volume because of two things:

  1. They keep potential leads on Facebook
  2. They prefill basic contact information

These factors result in less friction. Less friction will lead to greater volume. And again, that will almost always lead to less quality (a trade that can be worthwhile).

Do we just throw Facebook lead ads away as a result? Of course not. Keeping people on Facebook (or Instagram) is still valuable. You don’t have to worry about the website experience or page load. Pre-filled fields can be beneficial, too.

If you want to increase the quality of the leads generated from these forms, the answer is simple: Add more friction.

Here are some simple ways to add more friction without making your forms difficult to use, with the goal of improving the quality of leads you generate.

1. Do Not Use ‘More Volume’ Form Type

Facebook Lead Form Type

When you create a lead form, the first step is to choose a form type. By default, “More Volume” will be selected.

You should know by now that this may not lead to the highest quality leads. The form will be simpler with fewer steps. The goal, from Meta’s point of view, will be to make the form as streamlined as possible to get you the most leads.

Since this is the default selection, you’ll need to make that change from the start. “Higher Intent” is a good option. It adds a review step to prevent accidental submissions.

But you can do better than that.

2. Use ‘Custom’ Form Type

A rather recent addition is the Custom form type. You can read more about this form type in my tutorial here.

When you select the Custom form type, you can add more information and context to your form. This not only adds friction, but it also gives the potential lead a clearer idea of whether they should complete the form.

The Custom form provides some stylistic enhancements like a color scheme.

Facebook Lead Form Intro

The intro section allows you to highlight a few benefits of your product or service.

Facebook Lead Form Intro

And then you can add up to four sections to build the story of your brand or product.

Facebook Lead Form Build Your Story

Here’s an example of what the final product looks like…


3. Add More Questions

The more information you demand, the fewer leads you should expect. But, of course, that’s not necessarily a bad thing.

This added friction could result in a potential low-quality lead abandoning your form. But you can also ask questions that are important to you and that help a potential lead realize they aren’t the right fit.

While you can ask more questions that will pre-fill answers from a user’s profile (like first name, last name, and email address), a better option is a question that requires a typed answer.

Facebook Lead Ad Form Questions

When adding questions, consider short-answer.

Facebook Lead Form Short Answer

4. Use Lead Filtering

You have one more option, which might just be the best way to control the quality of your leads: Multiple Choice with Lead Filtering.

If you ask a multiple-choice question, you can turn lead filtering on.

Meta Lead Ads Lead Filtering

Once you do, a column for “Lead Filter” will appear.

Meta Lead Ads Lead Filtering

Depending on the user’s answer, you can determine whether they are a “lead” or “not a lead.”

Meta Lead Ads Lead Filtering

If they are “not a lead,” the form will send them directly to a message for non-leads.

Meta Lead Ads Lead Filtering

They will not complete the form, which means you will not receive that person’s contact information.

Find the Right Mix

Of course, this doesn’t mean that you should do all of these things. Sure, if you create a Custom lead form with the maximum number of steps, eight short answer questions, and a lead filtering question, whatever leads you get are likely to be high quality.

And you’re also not going to get many leads.

There’s a balance here between quality, volume, costs, and lead value. If all you’re doing is collecting leads for your newsletter, there’s no reason to increase friction and cut down on volume.

But if a quality lead is extremely valuable and you’ll assign a sales team to call them, you’ll want to do all you can to make sure that the sales team is focused on quality leads and not wasting their time.

You may want to start with less friction and see what you get from it related to volume and quality. Then make adjustments accordingly.

Watch Video

I recorded a video about this, too. Watch it below…

Your Turn

What do you do to increase the quality of leads generated from Facebook lead forms?

Let me know in the comments below!

The post How to Get High Quality Leads from Facebook Lead Ads appeared first on Jon Loomer Digital.

Did you miss our previous article…
https://www.sydneysocialmediaservices.com/?p=6431

Advantage Campaign Budget (formerly Campaign Budget Optimization or CBO) is an option when you create a Meta ads campaign. Should you turn it on?

Advantage Campaign Budget

In this post, we’ll explore:

  1. How Advantage Campaign Budget works
  2. Eligibility requirements
  3. How to set it up
  4. Ad set spend limits
  5. Best practices
  6. When you should use it

There’s lots to cover here. Let’s go…

How it Works

The standard campaign setup utilizes individual ad set budgets. Let’s assume that you have three ad sets…

  • Ad Set 1: $20
  • Ad Set 2: $20
  • Ad Set 3: $20

Each ad set has its own budget.

But when Advantage Campaign Budget is turned on, the budget is set within the campaign. If it’s turned on for the example above, your campaign budget might be $60. Meta can then distribute your budget optimally to get you the best results.

This simplifies the process of determining how much you should budget for each ad set. Instead of forcing the algorithm to spend $20 per ad set, Advantage Campaign Budget may distribute it on a particular day like this:

  • Ad Set 1: $30
  • Ad Set 2: $10
  • Ad Set 3: $20

If Ad Set 2 isn’t performing well, Meta can spend less on that ad set; if Ad Set 1 is outperforming the others, more budget can be moved to it. This is also a fluid process. The amount of budget dedicated to each ad set can change on a day-to-day basis.
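Conceptually, the reallocation above can be sketched as a score-weighted split. This is purely an illustration; Meta’s actual optimization is far more complex and not public, and `distributeBudget` and the performance scores are hypothetical:

```javascript
// Illustration only -- not Meta's actual algorithm. Distribute a campaign
// budget across ad sets in proportion to a simple performance score
// (e.g., a recent conversion rate).
function distributeBudget(campaignBudget, adSets) {
  const totalScore = adSets.reduce((sum, a) => sum + a.score, 0);
  return adSets.map((a) => ({
    name: a.name,
    budget: Math.round((a.score / totalScore) * campaignBudget),
  }));
}

const split = distributeBudget(60, [
  { name: "Ad Set 1", score: 3 }, // outperforming the others
  { name: "Ad Set 2", score: 1 }, // underperforming
  { name: "Ad Set 3", score: 2 },
]);
// → Ad Set 1: $30, Ad Set 2: $10, Ad Set 3: $20
```

The point is simply that the split follows performance rather than a fixed $20 per ad set, and it can change as the scores change day to day.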

Eligibility Requirements

In order to use Advantage Campaign Budget, the following must be true:

1. There are at least two ad sets within the campaign.

2. The same budget type will be utilized for all ad sets (daily or lifetime).

3. A common bid strategy will be utilized across all ad sets.

4. If using the Highest Volume bid strategy, the same optimization event (Performance Goal) will be utilized across all ad sets.

5. Standard delivery will automatically apply.

If you want to create ad sets that differ by budget type, bid strategy, performance goal, or delivery, you’ll need to utilize ad set budgets.

How to Set it Up

Once you turn on Advantage Campaign Budget, it will look like this…

Advantage Campaign Budget

You can set either a daily or lifetime budget. Just remember that this budget applies to all of the ad sets within the campaign. So, if you’d typically use three $20 ad set budgets, you’ll likely want to use a $60 budget here.

Advantage Campaign Budget

If you use a lifetime budget, you can run ads on a schedule (dayparting).

Advantage Campaign Budget

The schedule will be set within the ad set, so you can customize this by ad set.

Advantage Campaign Budget

The bid strategy that applies to all ad sets is determined within the campaign.

Advantage Campaign Budget

Depending on the objective, you will have the typical options available:

  • Highest Volume or Value (Value if this is your Performance Goal for a Sales campaign)
  • Cost Per Result Goal
  • ROAS Goal (if Value is your Performance Goal)
  • Bid Cap

None of these settings are unique to Advantage Campaign Budget. So if you wouldn’t normally use them when using ad set budgets, don’t worry about it.

Ad Set Spend Limits

Maybe you are required to spend a certain amount on an audience or you want to prevent Meta from pushing too much of the budget to one ad set. You can control this with ad set spend limit minimums and maximums.

After you turn Advantage Campaign Budget on, check the box for ad set spend limits within Budget & Schedule in the ad set. This is available regardless of whether you use daily or lifetime budgets.

Advantage Campaign Budget Ad Set Spend Limits

As you can see in the graphic above, Meta can’t guarantee that your minimum will be met, but it will attempt to spend it.

Advantage Campaign Budget

And the warning makes it clear that it is not recommended that you apply both a minimum and maximum ad set spend limit as it will restrict the algorithm.

There is a deeper philosophical discussion to be had regarding ad set spend limits that we’ll need to cover in another post. But overall, I don’t recommend using them. In fact, even Meta doesn’t recommend using them.

Advantage Campaign Budget

If you don’t trust Meta to optimally distribute your budget using Advantage Campaign Budget, just use ad set budgets. No judgment.

Best Practices

Meta has several recommendations to get the most out of Advantage Campaign Budget, but here are a few of the highlights…

1. Keep audience sizes similar between ad sets.

Oftentimes, advertisers will have multiple ad sets within a campaign for cold and warm audiences. This approach is not ideal for Advantage Campaign Budget. In all likelihood, most of the budget will be distributed to the larger audience. Larger audiences are preferred here, so use similarly large audiences for each ad set.

2. Limit changes.

Whenever you add a new ad set to your campaign, there will be a two-hour re-adjustment period. Additionally, any significant changes to the campaign settings will restart the learning phase. Advantage Campaign Budget involves optimization across the entire campaign rather than ad sets performing independently, so changes make a bigger impact. If you’re going to make changes, make them in bulk and limit them.

3. Don’t pause underperforming ad sets.

This is typical practice when utilizing ad set budgets. But remember that the algorithm is constantly adjusting. There is no need to pause underperforming ad sets since Meta will simply spend less on them. Pausing will mess up the optimization.

4. Analyze results at the campaign level.

Get out of the habit of looking at your results at the ad set level. All that matters is how the campaign performs.

5. Trust it.

Overall, if you’re going to use Campaign Budget Optimization, you need to go all in. Trust it. Keep your hands off. Allow it to do its work and optimize without your interruptions or restrictions.

When Should You Use It?

You may be able to piece this together from everything in this post, but the ideal situation to use Advantage Campaign Budget is when…

1. You’re creating multiple ad sets for similarly sized audiences.

2. You have no need to customize the bid strategy, budget type, or performance goal by ad set.

3. You trust the optimization from Advantage Campaign Budget and will have a hands-off approach.

While you could technically use this with smaller and warmer audiences if those audiences are similar sizes, the ads algorithm tends to do best with more volume to work with. This is ideal for colder audience targeting when you have a common approach across ad sets.

Watch Video

I recorded a video about this, too. Check it out below…

Your Turn

What’s your experience been with Advantage Campaign Budget?

Let me know in the comments below!

The post Advantage Campaign Budget Best Practices appeared first on Jon Loomer Digital.


Inspiration for how to be a better marketer can come from just about anywhere, including under the sea. Or should I say: Under The Sea, the best song from the 1989 Disney classic The Little Mermaid.

In the song, the crab Sebastian spends nearly three minutes singing to the mermaid Ariel about why she should not attempt to live on land. When we look at the song through a B2B marketing lens, it becomes a stark case study in common mistakes that any marketer should avoid.

There are four mistakes Sebastian makes in the course of this specific song that B2B marketers can learn a lesson from. Let’s take a look at each one and what they can teach us as marketers.

Mistake #1: He has the wrong message

Sebastian clearly did not consider his audience when preparing the message for this song. He centers his messaging around how staying under the sea will help her not get eaten by humans. The only problem is: humans don’t typically eat mermaids.

Instead, he may have considered a more practical message around how she would not be able to walk or breathe on land. While it’s true that Ariel eventually does get some spiffy human legs after trading her voice for them, messaging around the practical safety considerations earlier on in her decision making process may have resonated and caused her to reconsider.

The lesson for marketers: always make sure you understand your audience’s actual pain points. Don’t assume their problems are the same as your problems!


“Always make sure you understand your audience’s actual pain points. Don’t assume their problems are the same as your problems!” — Art Allen @punsultant
Click To Tweet


Mistake #2: He has the wrong messenger

I am not the first to make this point, but your dad’s coworker is probably the last person you’d listen to for advice on how to deal with the boy you like. Instead, Sebastian should have recruited someone Ariel would have been more likely to listen to. This is influencer marketing — or, in this case, maybe it’s finfluencer marketing.

Sebastian might have recruited Ariel’s good friend Flounder to deliver the “don’t try to go live on land” message. After all, Flounder also has an interest in keeping Ariel under the sea: he doesn’t want to lose his friend. Going with a more trustworthy source may very well have yielded better results.

The lesson for marketers: you (or your client) may not always be the best messenger. Consider an influencer marketing campaign to add credibility to your message.




Mistake #3: He doesn’t pivot

About two thirds of the way through the song, Flounder whispers in Ariel’s ear and they both leave. In the marketing world, we would call this real-time analytics: he can actually see his audience disengaging. But he’s too busy having fun with his song to notice.

The result of this is that he spends a full third of his campaign messaging to an audience that isn’t even there.

If he had noticed Ariel leaving, he could have pivoted. Whether changing up his message, his messenger, or some other aspect of his appeal, he shouldn’t have continued with the campaign as it was.

Lesson for marketers: pay attention to your analytics! If your audience isn’t responding to your campaign, dig into the data to figure out why and pivot accordingly.




Mistake #4: He doesn’t use generative AI

Yes, this one may seem a little unfair. But we’re halfway through 2023 here, and generative AI has matured and more than proven its value to marketers. There are no excuses for leaving generative AI out of your marketing toolbox — not even being a cartoon crab.

Sebastian might have used a generative AI tool like ChatGPT to brainstorm a list of concepts for how to present his argument. He also could have used a tool like Midjourney to create compelling visual aids to help tell his story.

Regardless of how he used it, the amount of help with ideation, concept revision, and content creation that generative AI tools offer very well may have provided him with what he needed to present a more compelling message.

Lesson for marketers: learn how to use generative AI tools. They are not a replacement for the hard work of making great marketing content, but they are invaluable for working more efficiently and effectively.




Don’t Make a Big Mistake

The world of B2B marketing may not be quite as magical as life under the sea, but in both places it’s easy to make mistakes. Those mistakes aren’t inevitable, though, and putting in a little bit of thought before, during, and after any campaign can help make sure your clients have a happy ending.

Learn more about crafting great B2B content experiences with our new free guide, Marketing with Intent: The Future of SEO & B2B Search Traffic.

Marketing with Intent: The Future of SEO & B2B Search Traffic

The post 4 B2B Marketing Lessons from Disney’s The Little Mermaid appeared first on B2B Marketing Blog – TopRank®.


I’ve been in search of a Facebook Conversions API alternative to the API Gateway that works seamlessly with Google Tag Manager. I may have found it in Zaraz.

To be clear, the API Gateway works great. But I’ve heard consistently that its AWS hosting prices many small businesses out. Truthfully, I’d love to find a more affordable option myself if it’s available.

Zaraz appears to fit the bill in every way for me. Not only does it utilize triggers in many of the same ways as Google Tag Manager, but it may not cost a thing.

I’m still testing. We’re in the process of figuring out deduplication. But let me explain what excites me about Zaraz…

What is Zaraz?

Zaraz is a third-party tool manager built by Cloudflare. I’m not going to try to explain the technical capabilities of the tool because that’s not my expertise.

What I do know is that among its capabilities is sending web events for Facebook Conversions API. And not only is it built by Cloudflare, but it’s built into it.

In other words, if you already pay for Cloudflare for caching (like I do), you can set up the Conversions API without spending another penny.

Zaraz

Something I didn’t immediately understand is that Zaraz isn’t a replacement for Google Tag Manager’s client-side events. Zaraz will only send the server-side events.

We’ll need to address that later.

Triggers

One of the powers of Zaraz is the triggers. If you’ve created custom events with Google Tag Manager before (one of my absolute favorite things), you’ll be right at home here.

In fact, it’s infinitely easier with Zaraz. Zaraz triggers are based on rules.

Zaraz Triggers

Much of this requires some knowledge of CSS. I’m going to skip that and go straight to the easy and powerful stuff.

You can create triggers using a timer. For example, you can have a trigger fire once a visitor has spent 60 seconds on a page.

Zaraz Timer Trigger

Or you can create a trigger using scroll depth. For example, fire a trigger when a visitor scrolls at least halfway down a page.

Zaraz Scroll Depth Event

These are two triggers that I use for my “quality traffic” custom events that are set up in Google Tag Manager.

Here’s an example of the scroll depth trigger with Google Tag Manager…

And a timer in GTM…
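Under the hood, a scroll-depth trigger boils down to a simple calculation. A minimal sketch, for illustration only (GTM and Zaraz implement all of this for you; `scrollDepthPercent` is a hypothetical helper):

```javascript
// How far down the page has the visitor scrolled, as a percentage?
function scrollDepthPercent(scrollY, viewportHeight, pageHeight) {
  const scrollable = pageHeight - viewportHeight;
  if (scrollable <= 0) return 100; // page fits entirely in the viewport
  return Math.min(100, Math.round(((scrollY + viewportHeight) / pageHeight) * 100));
}

// In the browser, you'd wire it up roughly like this:
// window.addEventListener("scroll", () => {
//   const depth = scrollDepthPercent(window.scrollY, window.innerHeight,
//     document.documentElement.scrollHeight);
//   if (depth >= 50) {
//     // fire the 50% scroll trigger (e.g., a dataLayer push)
//   }
// });
```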

Events

Once you have your triggers, you can create events in Zaraz. No coding is necessary.

Here’s what it looks like to create an event for a 3-minute page visit, which fires when the 3 Minute trigger happens.

Zaraz API Event

That’s it. So incredibly simple.

Testing

I want to make this point quickly because I don’t want anyone to be confused like I was. When you test these events, they will not appear in the Facebook Pixel Helper.

Some people may know that. I took it for granted because when I used the API Gateway, all of the same events were being sent both client-side and server-side. So, I didn’t realize that only the client-side events appeared.

To test server-side events, you’ll need to go to the Testing area of Events Manager.

Server-Side Testing

Deduplication

As mentioned at the top, events created with Zaraz will only be server-side (API). You will need another method to fire client-side pixel events. I use Google Tag Manager.

Since events will fire from both locations independently, we are presented with an issue. If a 50% scroll depth event fires from Zaraz and from Google Tag Manager for the same scroll from the same user, how does Facebook know that it’s the same event?

First, you could theoretically run most of your events server-side and only use Google Tag Manager for events that Zaraz can’t create. I wouldn’t consider myself an expert on this, but my understanding is that Zaraz can’t replicate events I’ve created in Google Tag Manager for plays of my podcast player or embedded YouTube videos.

That’s not necessarily the best practice, though. If possible, you want to send the event both server-side and client-side and then deduplicate them.

I have Joel Hughes and his team from Glass Mountains helping me with that part of it (strong recommendation if you need their help). The solution appears to be related to an external_id and other technical stuff that is way over my pay grade.

Once I get that sorted out, I will provide details on how deduplication was accomplished so that you can do it, too.

Watch Video

I recorded a video about this, too. Check it out below…

Your Turn

Have you experimented with Zaraz? What do you think?

Let me know in the comments below!

The post Testing Zaraz to Set Up Facebook Conversions API appeared first on Jon Loomer Digital.

I’ve heard from a handful of advertisers recently who were experiencing issues with their Meta events not firing properly. In each case, the problem was caused by using “URL equals” when setting up events. The solution is simple: Use “URL contains” instead.

In this post, let’s talk about the many times this choice comes up for advertisers. And then I’ll explain why “URL equals” is causing problems and the best practices for using “URL contains.”

When Does This Come Up?

It comes up a lot, frankly…

1. Creating a Custom Conversion.

Create Custom Conversion

The default rule for Custom Conversions is based on URL.

2. Creating a Website Custom Audience.

Create a Website Custom Audience

When you create a Website Custom Audience for people who visited specific web pages, you’ll need to make this choice.

3. Creating Standard Events with the Event Setup Tool.

Create Event with Event Setup Tool

When you create an event based on URL using the Event Setup Tool, the default logic will be “URL equals.”

4. Third-Party Tool Integration.

You’ll see this outside of the Meta-branded tools as well. An example is Google Tag Manager, which is a tool that I use to manage the pixel. When creating a page view trigger, you’ll need to decide between “URL equals” and “URL contains” there as well.

Google Tag Manager Page View Trigger

The Problem with URL Equals

If you use “URL Equals,” the event will only fire when the URL equals exactly what you put into the text field.

Here’s an important clarification from Meta:

We only count a conversion when the URL exactly matches what you put in the URL field for your custom conversion. If someone lands on a version of the URL with any additional text beyond what is pasted into the URL field (for example, UTM parameters, http vs. https, or even an extra “/” at the end) we won’t count the conversion.

There are so many potential issues that can arise here…

1. Mistyping: If you typed the URL manually and didn’t add the closing “/”, the event won’t fire.

2. www: Does “www” actually appear in the URL? Whether or not you include it in this rule will matter.

3. SSL: If it’s possible that people can access your website via HTTP in addition to HTTPS, the event won’t always fire.

4. UTM Parameters: Whether manually added or automatically injected, the URL may be transformed so that it does not match your rules.

With so many potential mistakes, “URL equals” should only be used in specific cases when you know you want to exclude any variations of the URL (usually for testing purposes).

Best Practices and URL Contains

Meta actually recommends that you use “URL contains.” If you’ve been using “URL equals” in any of the situations outlined above, you are likely losing events.

Before you set these up, follow these steps…

1. Go Through the Conversion Process. Most advertisers will grab the URL for the confirmation page without much thought. But actually go through the process of completing a conversion to reach that confirmation page. Don’t assume what the URL will be.

2. Use “URL Contains.” Yeah, you knew that.

3. Grab the Minimum Portion of URL. Meta recommends using “the minimum portion of the URL needed to distinguish this page from any other pages on your website.” The danger of “URL contains” is that it could potentially include multiple URLs. There’s a rather simple solution for that.

This is not a good use of “URL contains”…

URL Contains

The above rule will capture any URL that has “thank-you” in it. This could conceivably include any confirmation page on your website if you use “thank-you” on those pages.

domain.com/thank-you/
domain.com/product-1-thank-you/
domain.com/blog/why-you-should-say-thank-you/
domain.com/thank-you-for-your-help/

But be careful. Let’s stick with the example of a confirmation page that includes “thank-you” in it. Using “thank-you/” wouldn’t solve everything either…

For the same reason that “/thank-you” wouldn’t…

In both cases, there could be multiple URLs that contain that text, but with something different before or after.

The solution, in most cases, is “/thank-you/”…

By adding the “/” to both sides, you clarify that there can’t be additional text before or after it. The only exception would be if that “thank-you” path could exist on multiple domains or subdomains in which your pixel fires. If that’s the case, you’ll want to include the domain — and maybe more if you run into the issue of subdomains (rare situation).
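The logic can be demonstrated with plain string checks. This is just a sketch that mirrors the matching behavior, using the placeholder domain.com URLs from above:

```javascript
// Why "URL equals" is fragile, and why "/thank-you/" (contains, with a
// slash on both sides) is the safer rule.
const rule = "https://domain.com/thank-you/";

const equalsMatch = (url) => url === rule;
const containsMatch = (url) => url.includes("/thank-you/");

// UTM parameters break an exact match but not a contains match:
equalsMatch("https://domain.com/thank-you/?utm_source=fb");   // false -- conversion lost
containsMatch("https://domain.com/thank-you/?utm_source=fb"); // true

// The slashes on both sides keep unrelated pages out:
containsMatch("https://domain.com/product-1-thank-you/");     // false
containsMatch("https://domain.com/thank-you-for-your-help/"); // false
```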

Your Turn

Do you use “URL contains” when creating your events and audiences based on URL?

Let me know in the comments below!

The post Do Not Use URL Equals for Meta Events and Audiences appeared first on Jon Loomer Digital.

One of the biggest challenges that Meta advertisers face is running ads that build quality leads that are likely to buy. The wide variation in costs and conversion rate by country is a primary source of this frustration. But there is a solution.

In this post, we’ll discuss the dilemma before I lay out my approach to help you run Meta ads for leads that effectively distribute your budget to countries that are likely to convert.

Problem #1: Cost Variation by Country

First, let’s accurately define the problem so that you understand what we are attempting to solve here.

Most advertisers understand that when running Meta ads to build leads, you should not target worldwide (all countries). While this is generally accepted, it’s important to understand the factors that can make global targeting problematic. Then, we’ll cover some solutions to counter these problems.

There’s a wide variation in costs to reach people by country.

The CPM (Cost Per 1,000 Impressions) can be under $1 for some countries and 30 times that or more in others. The ultimate CPM, of course, will depend on many factors. But the difference between reaching people in the US and India, for example, is significant.

The rate of engagement or lead completion often has a narrow range by country.

The difference in costs isn’t a problem in and of itself. The problem begins because that same disparity doesn’t exist in level of engagement or rate of lead completion. The result is that if you optimize for a lead, the algorithm will do all it can to get you the most leads at the lowest cost. And since Meta treats any lead as a good lead, you can bet that if you target worldwide, the vast majority of your budget will be spent on the cheapest countries, since that’s the easiest way to get the most leads.

The rate of conversion from lead to paying customer by country is imbalanced.

In theory, getting all of your leads from the cheapest countries to reach isn’t an issue either. Instead, that could be an efficient way to find new customers. That is, of course, if leads convert to paying customers at a similar rate regardless of country.

But that’s not the case. You will likely find that leads will have a wide variance in conversion rate to paying customer depending on the country (among other factors). Some of the most expensive countries to reach are often the countries most likely to convert.

Problem #2: Narrow Focus on Potential Customers

Problem #1 is why many advertisers will focus their budgets on a core group of countries (like the US, UK, Canada, and Australia). While these are generally some of the most expensive countries to reach, leads from them also tend to be more likely to become paying customers.

This approach, though, generates a couple more issues…

First, that increased cost makes the profitability of lead building far more challenging. You will spend more per lead, and it’s more important that you get a good rate of conversion from the leads that you get.

Second, this assumes that you will only get paying customers from these four countries. That’s often not the case. In an attempt to make your ads more effective, you’ve abandoned countries that have potential to lead to paying customers.

Now that you understand the problems, let’s get to a multi-step solution that you can apply…

1. Research Where Your Paying Customers Live

This is important, especially if you’re an established brand with a history of paying customers to pull from. Actually go through your database, and you might be surprised by what you find.

I ran reports, and I have paying customers in about 100 countries. It’s actually pretty amazing!

Of course, it’s probably best not to commit to targeting a country that only has one or two paying customers, especially if you’ve been running a business for a while like I have.

There’s no rule to this, but I made a cutoff at about 20 paying customers. This is my minimum for dedicating budget in a country to build leads.

This left me with 40 countries in all that I can target.

2. Uncover General Costs Per Country

You’ll understand why this is important in a minute. But we need to get a general idea of how much it will cost you to reach each country.

This is going to be imperfect, but it doesn’t need to be perfect. You need a general idea. There are undoubtedly benchmark reports that you can use for this, but I decided to do some manual work on my own.

The first thing I did was run a custom ad report for my ad account using the Country breakdown. Note that I did this within the custom ad reports instead of Ads Manager since it allows you to view this across your entire account instead of focusing on a single campaign.

Of course, this is imperfect since it relies on your data and the CPM costs may vary depending on objective and other factors. But I still find this valuable.

I also used the approach of creating a draft ad set with a $100 daily budget and selecting one country at a time to see how Meta projects impressions.

This will again be imperfect, as you can see from the wide range of impressions. But I used the top of the range for each country to have a consistent point of comparison.

You now have an account-specific CPM and projected impressions per $100 spent to give you an idea of costs to reach a country. If one number feels particularly off, go with the one that seems more accurate.

3. Group Countries by Projected CPM

Now that we have a couple of data points per country, let’s start grouping them together. Our main goal is to prevent wide variations in costs so that the algorithm doesn’t prefer one country over another for the CPM reason alone.

I created five groups in all. Since about half of my customers come from the US, I decided to make it one of my groups by itself.

Here’s an example of the second group…

These are the other most expensive countries to reach (beyond the US) of potential paying customers, according to my imperfect research.
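One way to automate the grouping is to band countries so that the cheapest and most expensive CPMs within a group stay within roughly 2x of each other. This is a sketch with hypothetical CPMs and country codes; the 2x ratio, the `group_by_cpm` helper, and the figures are my assumptions, not from the post:

```python
# Hypothetical per-country CPM estimates (USD) from step 2.
cpms = {"US": 15.0, "IE": 11.0, "BR": 8.0, "PH": 5.0, "IN": 2.0}

def group_by_cpm(cpms, ratio=2.0):
    """Group countries so CPMs within a group stay within `ratio` of each other."""
    groups = []
    # Walk countries from most to least expensive.
    for country, cpm in sorted(cpms.items(), key=lambda kv: -kv[1]):
        # Join the current group if this CPM is within ratio of its most
        # expensive member; otherwise start a new group.
        if groups and cpm * ratio >= groups[-1][0][1]:
            groups[-1].append((country, cpm))
        else:
            groups.append([(country, cpm)])
    return groups

groups = group_by_cpm(cpms)
print([[c for c, _ in g] for g in groups])  # [['US', 'IE', 'BR'], ['PH'], ['IN']]
```

The exact ratio is a judgment call; the point is simply to keep the algorithm from seeing a dramatically cheaper country inside the same ad set.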

4. Create an Ad Set for Each Group

This progression of steps should start to make sense. We are grouping countries together by similar CPM costs so that the algorithm won’t prefer one country over the other. While we don’t demand equal distribution within an ad set, we still want each country to have a chance.

Here are my five ad sets…

By the name, you can see what my approach is here. I actually used website custom audiences and engagement custom audiences, but I also turned on Advantage Custom Audience to allow the algorithm to expand beyond those groups. This is actually the first time I’ve used Advantage Custom Audiences. In most cases, I go completely broad for something like this.

Otherwise, everything is pretty straightforward here. All placements, no manual bidding.

5. Establish Ad Set Budgets

The goal here should be to get the same number of leads per ad set (at least, that’s my goal; yours may differ). Of course, that doesn’t mean using the same budget for every ad set, since the costs will vary widely by country group.

Look at it this way… I projected that I can reach about 17 times more people when targeting Group 5 than when targeting the US. I set a $40 daily budget for the US, thinking that should get me to at least 50 leads per week. In theory, I only need to spend about $2.35 to get the same number of leads from Group 5.

I went with $3 for Group 5 because even that seems insane. But I can tell you that, incredibly, that’s enough to produce the number of leads I want from that group.

You can use a formula, but remember that the numbers we’re using for CPM are rough estimations. So feel free to use a bit of your gut here, too.
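If you do want a formula, the back-of-the-napkin math above looks like this. The numbers come from the example ($40/day for the US, Group 5 reaching about 17 times more people per dollar); the function and parameter names are mine:

```python
def scaled_budget(baseline, reach_multiple):
    """Budget needed to reach roughly as many people as the baseline ad set,
    where reach_multiple = how many times more people a dollar reaches here."""
    return baseline / reach_multiple

us_daily_budget = 40.0            # US ad set, from the example
group5 = scaled_budget(us_daily_budget, 17)
print(round(group5, 2))           # 2.35 → rounded up to $3/day in practice
```

Since the CPM inputs are rough, treat the output as a floor and round up, as I did with Group 5.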

Here’s what I’m rolling with…

6. Monitor Distribution and Adjust if Necessary

Because these groups are based on some rough projections, it’s quite likely that we’ll run into an issue with imbalanced distribution. Again, we don’t want distribution among countries to be equal within an ad set. We just want to make sure that every country has a chance. Due to population and rate of goal completion, distribution will vary regardless.

What we want to watch for is a country that’s getting nearly all or barely any of the budget. If that happens, check the CPM to see if that may be the cause. We do that by using the breakdown by country in Ads Manager.

I wouldn’t overreact to small sample size results. Allow your ad sets to run for at least a week before making any changes to the composition of countries. When you do make those changes, the learning phase will restart.

If a country isn’t getting enough budget to bring you any leads, consider moving it to the next cheapest country group. On the flip side, if a country is eating up an ad set budget and the CPM is the lowest within the group, consider moving it to the next most expensive ad set.

But I wouldn’t micromanage this. The main thing is that every country is at least generating some leads. You’ll drive yourself crazy if you demand equal distribution. If equal distribution truly matters to you, just set up an ad set for each country (which I’d only consider with much higher budgets).

A Simplified Version

If the above approach confuses you, let’s consider a much simpler variation.

Assume that instead of 40 countries, you have paying customers in five. For argument’s sake, those five countries have vastly different CPMs. Let’s use this example of countries:

  • United States ($15)
  • Ireland ($11)
  • Brazil ($8)
  • Philippines ($5)
  • India ($2)

Quite the collection of countries! The CPMs are entirely hypothetical to prove a point.

Since these CPMs are all over the map, you probably shouldn’t put them into the same ad set, or the majority of your budget will be spent in India. While India has generated paying customers, you may want to be sure that you also get leads from the United States and other countries on the list.

To accomplish this, you’ll create multiple ad sets. The budget you use for each ad set should be somewhat proportionate to the differences in CPM. With a goal in mind of generating 50 leads per week per ad set and an assumed cost of $5 per lead in the US, we would start with a daily budget of $50 for the United States (this is a starting point with no math behind it).

We can then assemble our other budgets.

  • United States: $50
  • Ireland: $37
  • Brazil: $27
  • Philippines: $17
  • India: $7

In theory, this could help us get approximately the same number of leads per week from each of these countries that are sources of paying customers.
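The budgets above are simply proportional to each country’s CPM, anchored on the $50 US starting point. A short sketch, using the hypothetical CPMs from the example (the dictionary names are mine):

```python
# Hypothetical CPMs from the example above (USD).
cpms = {"United States": 15, "Ireland": 11, "Brazil": 8, "Philippines": 5, "India": 2}

us_budget = 50  # chosen starting point for the US ad set

# Scale each country's daily budget proportionally to its CPM so every
# ad set buys roughly the same number of impressions per day.
budgets = {c: round(us_budget * cpm / cpms["United States"]) for c, cpm in cpms.items()}
print(budgets)
# {'United States': 50, 'Ireland': 37, 'Brazil': 27, 'Philippines': 17, 'India': 7}
```

Equal impressions per ad set is only a proxy for equal leads, but it’s a reasonable starting point before real results come in.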

Find What Works for You

This is all a bit of an experiment for me, so I’m by no means an expert on this approach. But I can tell you that the early returns have been exciting. It’s a nice balance of high-volume cheap leads and lower-volume expensive leads, but they all have the potential to lead to paying customers.

It’s possible that you have far fewer than 40 countries to target. Don’t feel like this needs to be a long list. I only included 40 because I have data showing I should.

It’s also possible that you have a very high budget and you can create more groups. At the extreme, you’d create one ad set per country. Since you’d need an adequate budget to exit the learning phase for every country, that’s going to cost much more than grouping similar countries.

How you do this is up to you. But, experiment and have fun with it!

Your Turn

Have you tried out a similar approach to grouping countries? What do you do differently?

Let me know in the comments below!

The post Strategically Target Countries for Quality Leads with Meta Ads appeared first on Jon Loomer Digital.

If you’ve been a Facebook advertiser for a while, you’ve heard this more times than you can count. You may want to make a change. But it’s extremely risky. Do not touch a Facebook ad that’s working.

First, let’s discuss the origins of this advice. Then, allow me to share my own story of how I took the risk and lost. Finally, we’ll discuss the problem at hand and what you should do.

Why Not?

The algorithm is touchy. If the planets align to give you great results, don’t do anything to disrupt that.

Virtually every advertiser has their own story. They made a minor change. Thought nothing of it. And those great results disappeared.

This isn’t just a theory; it’s an actual thing to be concerned about. Of course, the root causes aren’t clear, so we’re never entirely certain about how, when, or why this happens.

One very likely connection is the Learning Phase. This is the period of time after an ad set is launched or a significant change is made, during which the algorithm learns. This is when your results are the least stable.

Facebook Ads Learning Phase

This is the most likely cause in the vast majority of these situations. You had achieved stable, optimal results. You then made an edit that restarted the Learning Phase, and suddenly that stability was lost.

My Sad Story

It really doesn’t matter how long you’ve been advertising. No matter how many times you’ve been burned by this, you’ll do it again.

It’s not that we like danger. It’s that there are often so many good reasons to make changes that we just can’t help ourselves.

My example is a lead ads campaign. It performed pretty well, and it was doing everything you’d want a lead ads campaign to do. The results were improving nearly every day.

Cost Per Lead Per Day

Everything was going great. It seemed as though the Cost Per Lead could conceivably get better.

But there was one problem, and it had virtually nothing to do with the campaign itself. I was testing an application to sync leads to my CRM. Due to some beginner ignorance, I missed a step somewhere and most of these leads were labeled as “unmarketable.”

That, of course, is a bad thing. If I can’t even email these leads with the thing they requested, they aren’t really leads at all.

So, while I sorted out this issue, I switched to Zapier, the software I normally use for CRM syncing. But, I didn’t like the idea of syncing to the same lead form. I wanted a clean break from what wasn’t working properly to what was.

So, I duplicated the lead form and renamed it. Didn’t change anything to the campaign, ad set, or ad otherwise. The form would look the same.

And then, this happened…

Cost Per Lead Per Day

Okay, that’s bad. My Cost Per Lead multiplied by four, but even that is misleading. This switch happened mid-day, and after it, leads virtually dried up.

I then compounded the problem. Okay. Leads stopped coming in. Let’s just switch it back to the old form and pretend that this never happened. The algorithm will be able to go back to where it was, right?

Yeah, no…

Cost Per Lead Per Day

So, just days earlier, the Cost Per Lead was dipping under $1.50 and seemed to be on the way to going even lower. Now, I have to spend about $20 to get even one lead.

Needless to say, these were disastrous decisions on my part. While I had hoped this “minor” edit wouldn’t restart the Learning Phase or negatively impact my results, it was a gamble.

I lost that gamble in a big way.

The Problem

The issue here is that my situation is common. There are so many reasons that an advertiser might want to make a very minor edit. But doing so is such a significant risk.

And while I totally understand why major edits can tank your results, it makes no sense why this one would. Sure, the algorithm is stupid and doesn’t realize that the form looks exactly like the old one. In theory, it could be a completely different form.

But it wasn’t. And this AI stuff is supposed to be so much smarter now. Why not continue optimizing the ads as it was? If the results tank from staying on that same track, obviously my change was significant and the systems should need to re-learn. But there was no reason for the algorithm to re-learn here.

This could have just as easily been a minor text edit. You fix a typo. But that fix could change everything, and not in a good way.

There has to be a way to make this system more stable. Or make it smarter at detecting significant and insignificant edits.

Meta could also stand to provide some clarity regarding significant edits that will result in re-entering the Learning Phase. This is Meta’s horrendous explanation of when budgetary changes restart the Learning Phase…

Edits that Trigger the Learning Phase

If you increase your budget from $100 to $101 (something that will never happen), you’re fine. But if you increase it 10-fold, expect learning to restart. Yeah, no kidding.

Let’s consider the very reasonable situation when this is going to come into play. An advertiser tests out a campaign with a $50 per day budget. Amazing results. Of course, you will want to scale.

You’re in luck! Increase to $51, and you should be fine. Yeah, that’s not helpful.

What could you safely increase your budget to without worry? We don’t know. We only have theories.

Some advertisers have suggested a slow 15% to 25% increase, but it’s obvious that this isn’t set in stone. We have no idea what increase is safe, and it sure would be helpful if Meta provided that clarity.
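For what it’s worth, even that unconfirmed community rule of thumb compounds quickly. A sketch, assuming 20% increases (the 20% figure is one point in the suggested 15% to 25% range, not anything Meta has confirmed):

```python
budget = 50.0
steps = []
for _ in range(4):                 # four consecutive 20% bumps
    budget = round(budget * 1.20, 2)
    steps.append(budget)
print(steps)  # [60.0, 72.0, 86.4, 103.68] → roughly doubled in four increases
```

So even cautious scaling can double a budget fairly fast; the open question is how long to wait between bumps, and Meta doesn’t answer that either.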

What Should You Do?

Obviously, I’m a bad teacher here because I’ve been doing this for over a decade and I’m still getting burned. But, here are some thoughts…

1. Plan for great results.

What I mean by this is that it’s better to start with a budget that is too high than too low. If you aren’t getting great results at $100 per day, come down to $50 per day. If that restarts the Learning Phase, so what? Your results weren’t great anyway.

2. Be okay with low-budget results.

If you roll with a $20 daily budget, though, understand that you may be stuck there. Be okay with that. Because it’s better to get great results at $20 per day than crappy results when you attempt to increase that budget to $50.

3. Review before publishing.

Make certain that everything is set up the way you want it. The right copy, creative, targeting, and optimization. You do not want to change anything a few days from now. Get it sorted out before it’s approved.

4. Don’t touch it.

Don’t change the ad. Don’t change the form (ugh). Don’t throw another ad set or ad into the mix. No edits.

Unless…

5. Make changes when they’re needed.

Look, if something isn’t working great, who cares? Make whatever edits or additions you want. If the results get worse, you didn’t lose anything because you were on the verge of stopping this campaign anyway.

Your changes and edits should be last-ditch efforts to save a campaign. Otherwise, start over with a new one.

Watch My Video

I recorded a video about this, too. Check it out below…

Your Turn

Have you been burned by this? What do you think?

Let me know in the comments below!

The post Do Not Touch a Facebook Ad That’s Working appeared first on Jon Loomer Digital.