
The Next Frontier: AI, Automation, and the Future of Work in Multifamily with Ben Infantino, AI Engineer at Apartment SEO
Co-Hosts: Ronn Ruiz and Martin Canchola, Co-Founders of Apartment SEO
Guest: Ben Infantino, AI Engineer at Apartment SEO
Martin: Welcome back to The Multifamily Podcast with Ronn and Martin, powered by apartmentseo.com. Today, we’re diving into a topic that’s moving faster than almost anything in history: artificial intelligence. Our guest today is Ben, an AI engineer with Apartment SEO who’s been right in the thick of these changes, from the big bang moment of ChatGPT to the new era of GPT-5, Agent mode, and the future of work. We’re going to unpack what this all means, not just for tech, but for the multifamily industry itself. Ben, welcome to The Multifamily Podcast.
Ben: Thank you. It’s so good to be here. I really appreciate the invite, and I am super excited to dive into everything, so I’m ready to go.
Ronn: Ben, we’re super excited. Thank you for coming on the journey with us at Apartment SEO, and thank you for joining us on the podcast. And honestly, I cannot wait for the audience to hear what you’ve got to say. I know we have a small segment, but it’s going to be jam-packed. So before we get started, let’s start with your story. I wanted to have you share a little bit about your background. What pulled you into the world of AI engineering? I didn’t even know there was an MBA or a master’s in that from Johns Hopkins University, of all places. How did you begin?
Ben: Absolutely. So I graduated with a bachelor’s in computer science from UCF in 2020, and at that point I was looking at the job market, and the jobs that were available were things like website developer and back-end developer. I’d had an internship before in website development, and I knew it wasn’t really my thing. So as I was looking through those jobs, I saw some machine learning positions, and for context, I’ve always loved automation. I’ve developed so many small projects, I even automated a game, just because I’ve always been fascinated by automation, so machine learning was the most interesting. And from that point, I started looking at the jobs, and they all required a master’s. And so within 30 seconds I’m like, all right, I’m going to go for a master’s. It was pretty much that instantaneous. From that point, I just started looking, sent out some applications, got accepted into Johns Hopkins, and finished the program in August of last year. And it’s been awesome. But going to what you said about not even knowing there was a master’s program for AI, AI has actually been around since the fifties, and a lot of these techniques and different ways of doing things have been pretty well known, and the field has grown over time. But then, of course, you’ve got ChatGPT, and we’ll dive into that 100 percent, but that is newer, and it’s built on the foundation of everything that was laid before. So when I was going through the program, a lot of it was the older theoretical stuff, but we also touched on a lot of the newer things as well. So I got to see a good mix of the way things were done in the past and the way things are done now. And now, it’s never been a better time. It’s super exciting. There’s so much going on, and I know we’ll get into all of it, but it really is a magical moment.
Martin: Definitely. I mean, people talk about the pace of change in AI as a Cambrian explosion, or like a big bang moment. It feels like new breakthroughs are really happening weekly, daily, every minute of the day. What do you think about that?
Ben: That is pretty much exactly how I would describe it. What happened is the technology that ChatGPT is based off of showed promise back in the day. If you use any of these tools and you ask it a question, you get a coherent answer back. It wasn’t always that case, though; back when it was still developing, you would see resemblances of a good answer, but now the answers are really good. And so what that did is it basically provided tools for the rest of the industry and set the standard. Now everybody is taking those tools and they’re able to build on top of them and create their own workflows and their own designs. Basically, they have a hammer, and now they can go build their houses. Everybody’s taking a different approach to it, so it seems like there are new research papers every week, from students in labs who are trying to push the frontier on a specific topic to people building new workflows. There are so many developments. You’ve got so many different model providers, from ChatGPT to Gemini to Claude, and they all specialize in different things. So essentially, we just have all the Lego blocks, and people are building what they want. Every week there’s something new, and it really is that kind of explosion, built off of the advancements that we’ve seen over the last few years.
Ronn: Explosion, and you’re part of it, you’re part of the explosion.
Ben: It’s super fun.
Ronn: Yeah, so as someone who obviously works hands-on with the technology, how do you keep up? And for professionals in multifamily, for example, where AI isn’t really their main job, what would you say is the best way for them to stay ahead without being completely overwhelmed by all of it?
Ben: That’s a great question. So what I found, personally, is that the place where you get the most signal is Twitter. I actually made a brand-new Twitter account and only followed machine learning and AI people, so that the algorithm would tune itself to that and I’d be able to see and follow everything. Because there are so many people out there, it’s hard to know who to follow if you don’t already know who they are. So I built this account, followed all the machine learning people that I knew, and over time was able to grow that. I check Twitter pretty much daily. As soon as a new model drops, I see it. As soon as new benchmarks are out, I see it. I will say, though, this is definitely more on the technical side of things; you’ll see a lot of results and papers and benchmarks and stuff like that. So it’s really good for staying in the loop on the technical side. For non-technical people, it’s a little bit harder, but you don’t have to stay as involved in it. The main things would be knowing when the big releases come out. For example, the switch from GPT-4 to GPT-5, that’s something you would want to be on top of. Or knowing, when a new model is released, what its capabilities are. The benchmarks aren’t as important; you’ll learn what the models do as you use them. But really, the way that I see it going in the future is pretty much how we have it here at Apartment SEO, where I’m the AI engineer. I stay in the weeds of it. I look at everything daily, and as soon as I see something where it’s like, oh, we can use this, I bring it to the team, start talking to the necessary people, and work on the process to get it actually implemented. I’m kind of that filter: I stay in the weeds, and then I bring the necessary information back to the rest of the company. So that’s how I see it going forward, so that you don’t have to spend as much time looking through all the boring technical stuff.
Ronn: And I can speak from personal experience: I think it’s been helpful to have that filter, because there’s a lot of noise out there, right? And there clearly is a lot of excitement. Martin, it almost reminds me of when we started Apartment SEO, right? To your point, Ben, AI has been around since the fifties; well, SEO had been around prior to Apartment SEO too. Spoiler alert, I know a lot of people thought we brought it to the table. But it feels like the same movement now, right? SEO was changing weekly, daily, and look where it’s at. And then, going back to AI and all things LLMs, it’s changing the game for us as well.
Ben: It’s an evolution.
Ronn: Right? Yeah. So I get how exciting it must be to be in your seat, where you’re kind of like, I think I’m onto something. I’m onto a good wave, right?
Ben: Absolutely. There’s one other thing I would say on that, and you kind of alluded to it before, but the space is moving so fast that it really can be information overload. So while I do try to stay on top of Twitter, I also back off at times. It’s kind of a balance, right? Because it moves so fast, it’s impossible for any one person to always stay on top of it. You have to find that balance between being tuned in and also unplugging, so you’re not constantly bombarded, because you can also feel like you’re missing the boat and that people are doing things you’re not. There’s a sense of FOMO there, but at the end of the day, everybody is still learning this. So I would basically just say, treat it like a marathon, not a sprint. Go at your own pace, because these things are going to be changing for a while; things are only going to speed up from here.
Ronn: Yeah, and I know we’re going to get to it later on, about the changes and how to keep up. So I think that’s great advice. I love the advice on Twitter. I think that’s a great way to segment it. So thank you for that.
Ben: Absolutely.
Ronn: But let’s rewind a bit. Many people call the release of ChatGPT the big bang moment for AI, the point where it went mainstream, obviously. So again, 1950s to 2020-whatever, right? We’re like, oh, this is new. But why was that moment so transformative?
Ben: Yeah, so kind of piggybacking off what I said before, I’m going to back up here, because this should help people understand how these tools work. ChatGPT is built off an architecture where, essentially, the more data you can feed it, the better it’ll do. If you have a small amount of data, you get okay results. But as soon as you train it on the entire internet, now you get amazing results. And that was ultimately what ChatGPT did: they had the biggest data set, they trained on all of the internet, so they just had the best model. That was GPT-3.5, which was pretty much what ChatGPT was. But you had GPT-1 and GPT-2 before that. With GPT-1, you would ask a question and, instead of getting a random jumbled answer, you would see something that resembled an answer that made sense, right? With GPT-2, you had a bigger data set, and okay, now it’s actually putting sentences together. And then GPT-3.5 rolls out, effectively ChatGPT, and they train it on the whole internet, and now it’s like, wow, if I ask it a question, I get a really coherent answer back. But it wasn’t perfect. I remember when ChatGPT first rolled out, the first thing I wanted to do with it was write code, and I started using it to write code, and it would give you stuff that was good, but it wasn’t great. It was effectively a demo; you could build toys with it. Fast forward from then until now, and they’ve just refined all of that. Now it’s no longer just demo stuff. Now you have the tools you need to put things into production, although putting things into production still requires a ton of engineering; it’s not as simple as it may seem, but it’s possible. So basically, all of the work that was done previously led to that ChatGPT moment where it was really, actually usable, and it had practical value that anybody could see. Even if you were not technical, you could just go use it and instantly tell that, hey, there’s something here. That was the moment that got everybody’s attention and sparked all the curiosity. But what’s funny is that the quality we had in GPT-3.5 when it first came out felt like magic, yet if you were to go back to those results now, it would feel like, what is this? This is terrible, because things have gotten so much better since then, which speaks to the pace at which things are evolving.
Ronn: Absolutely.
Martin: Now, Ben, could you define what the Turing test is? And how do you feel about ChatGPT when it first launched, did it get past that Turing test?
Ben: That’s a great question. So for those of you who don’t know, the Turing test is basically this: you have two people and you put a wall in between them, so you can’t see who’s on the other side. One person is asking questions, and the other person is responding from the other side. To pass the Turing test, a computer or an algorithm must be able to respond to the point where the person asking the questions cannot tell if it’s a human or a machine. So effectively, the machine is so good that, without knowing who’s responding, you think it’s a human. However, there are a lot of caveats to that, right? Who’s judging the responses? Is it an average person? Is it somebody with deep technical experience who knows what to look for? What questions are you asking? What’s the quality of the results you’re looking for? So there’s the Turing test, and there are different views on it, so whether the current AI has passed it or not, you’ll get different answers from different people depending on how they classify it. My opinion is, if you know what the models’ output looks like, you can sometimes catch on to it; for instance, they’ll generate a lot of hyphens, or they’ll use certain words. So if I was to see it, I would probably have a better eye for it. But somebody who’s never used it might think, oh, that’s a human. So whether we’ve passed it or not depends on who you ask and the criteria they put around it. Some people would say yes, and other people might say no.
Martin: Yeah. And I would definitely say, in the world of AI now, with the scams out there, and just, you know, how gullible people are on the internet, I feel like it definitely is passing the Turing test, because, oh my god, it’s just taking over right now.
Ben: Yeah.
Martin: So we’ve heard all the buzz about GPT-5. What should we expect as we move into the GPT-5 era and beyond? I mean, I even heard Sam Altman mention that by GPT-7 he might be handing over the reins of the company to AI. So what’s going on here?
Ben: So with GPT-5, every model from three to four to five got better, right? It performed better on the benchmarks, was more useful, was able to answer more accurately. But the big change with GPT-5 was that they basically did away with the model selection. Before GPT-5, if you went to the model dropdown, you would see o3, o3 Pro if you had it, o4-mini, GPT-4.1, GPT-4.5, all these models, and they all did different things. But the naming was so bad; you would think o3 is better than o4-mini, but it wasn’t, right? It was just super complicated and made it difficult for people to use. With GPT-5, they did away with that and switched to an auto mode, which picks the model for you automatically. This can be hit or miss depending on how you’re using it. For me, I like more control; I like being able to select the model. But if you never changed it, and you’re somebody who was always on the default model, you might find this beneficial, because now it might route you to a better model. So you have the auto mode; you have the thinking mode, which will automatically use its reasoning, so it’ll think a little bit longer and give you a better answer; and then you’ve got the Pro mode, which will do that, but even more. Effectively, they just made it easier for people to interface with it. Instead of all those complicated models, you’ve got an auto-select, a think-pretty-hard mode, and a think-really-hard mode. It just made it easier for people to use.
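For readers who work with the API rather than the ChatGPT app, here is a minimal sketch of what explicitly "picking the model and how hard it thinks" can look like in code, as opposed to letting the app's auto mode route the request. It uses the official openai Python SDK; the specific model name and the reasoning_effort values are assumptions based on current documentation and may differ by account or SDK version.

```python
# Minimal sketch: choosing a model and a "thinking" level yourself via the API,
# instead of relying on the ChatGPT app's auto routing.
# The model name and reasoning_effort value below are assumptions and may vary.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-5",              # assumed model identifier
    reasoning_effort="high",    # assumed parameter: roughly "think harder" vs. "answer fast"
    messages=[
        {"role": "user", "content": "Summarize this week's leasing inquiries in three bullets."}
    ],
)

print(response.choices[0].message.content)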
Ronn: So that way users at all levels could use it.
Ben: Exactly, yep.
Ronn: So here’s a question I’ve been wanting to ask. Like with many shiny objects, do you think AI development will plateau at some point? Or are we just at the beginning of this exponential growth?
Ben: That’s a great question, and the answer is, nobody really knows. This is a pretty heated debate in the space right now: whether AI is hitting a scaling wall. Effectively, there are a few pieces that determine the pace of AI. You’ve got the algorithms, so going from the way we did things before to the transformer architecture, which is what ChatGPT was based off of. You’ve got data, and at this point we’ve pretty much used up most of the available data; the models have already been trained on all of the internet, so data is pretty much maximized. In the industry, a lot of research is being done on synthetic data generation, so we can continue to generate high-quality data to train these models. And then you’ve also got computational power. Now you’re seeing NVIDIA, OpenAI, xAI, Google, Microsoft, all of them building not just data centers but super data centers with hundreds of thousands of chips in them. xAI stood up a data center with 100,000 chips, and their goal is to get to a million, and then I think 5 million or 50 million. It’s just crazy amounts of computation.
Martin: It was 50 million, yeah. I was just showing Ronn a little thing Elon was mentioning; he’s going to be spending $10 trillion on chips over the next decade, yeah.
Ben: Yeah. And the reason for that is twofold. Number one, these models take a lot of power to train, but number two, when people use them, right? Any time you use ChatGPT and it has that thinking process, or it’s giving you an answer, that’s also using computation. So you need enough computation to be able to train the model, but then also enough computation for the people using the model, and as more people use it, you’re going to need more computational power. The last piece of it is electricity, which is going to be paramount to AI, but that’s something that will hopefully work itself out with nuclear and all the other options. So yeah, whether it’s going to plateau or not, nobody really knows. Data has kind of plateaued; computation is scaling exponentially at this point. The algorithms, in my opinion, are the most important piece. Right now we’re in a phase where large language models have all the attention, and they’re great, but I don’t know how far they’ll be able to be pushed, because of the nature of large language models: if you ask the same question twice, you’ll get a different answer. They’re not deterministic, they’re stochastic. So there’s a question around how far we can push large language models, given that that’s underlying their nature, and there might be other algorithms that are more deterministic and will give us better results in other areas. Large language models have a lot of attention right now, so I don’t know if that’s going to slow down research outside of them. But to answer your question, it’s hard to say whether it’s plateaued. If you look at the benchmark results over time, though, it does look like it’s going exponential at the moment.
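To make the deterministic-versus-stochastic point concrete, here is a small, self-contained Python sketch. The four-word vocabulary and the scores are made up for illustration; the point is that greedy decoding always picks the same next token, while sampling-based decoding can return different answers to the same prompt.

```python
import numpy as np

# Toy next-token scores over a tiny vocabulary (illustrative values only).
vocab = ["lease", "rent", "tour", "apply"]
logits = np.array([2.1, 1.9, 0.4, 0.1])

def softmax(x, temperature=1.0):
    z = (x - x.max()) / temperature      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Deterministic decoding: always pick the highest-scoring token.
greedy = vocab[int(np.argmax(logits))]

# Stochastic decoding: sample from the probability distribution,
# so repeated runs can produce different tokens.
rng = np.random.default_rng()
probs = softmax(logits, temperature=1.0)
sampled = [vocab[rng.choice(len(vocab), p=probs)] for _ in range(5)]

print("greedy (same every time):", greedy)
print("sampled (can vary run to run):", sampled)
```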
Martin: Real quick, I had a question. Can you define AGI for the audience? And when, what year, do you think this might happen? Because from my understanding, once we hit AGI, that’s a whole other moment, right? That could be huge for humanity and us as a whole.
Ben: Yeah, and there’s actually something after AGI, which is ASI. AGI stands for artificial general intelligence, and ASI stands for artificial superintelligence. To answer the question, there is no clear definition of AGI; everybody defines it differently, and there’s no standardized definition for it. But from what I’ve seen, it’s basically when the models are capable enough to handle a large part of the economic work that humans do, right? So once these models become way more economically viable and start to take a bigger role in the economy, that’s what some people identify as AGI. Whether we’re at that moment or not is up for debate; some people say yes, some people say no. What do I think? I couldn’t give you a timeline, because it’s impossible to know; you could wake up tomorrow and there could be a whole new model out there, it’s just changing that fast. And then ASI, again, doesn’t have a standardized definition, but effectively it’s when one AI is smarter than all humans combined. When you have an AI that is smarter than the capability of every human put together, that’s one definition I’ve heard for ASI.
Ronn: Wow. Is that being built because we’re putting all our human knowledge and intelligence into it?
Ronn: I mean, there’s a race right now, right? There’s a race. All these companies are racing to try to achieve all that.
Ben: Yeah, but the key with ASI is, when you think of being smarter than everybody on planet Earth, that also implies being smarter than the most specialized specialist of all the specialists, right? So you’d have an AI that can do better research, better drug development, better law, better whatever industry you want to put in there, than the best of the best of the best on planet Earth. So it’s a much higher bar. And how do you quantify that? What if it’s better at medicine but it’s not better at law, right? Does that count as ASI yet? So there are still a lot of open-ended questions as to how we define these things, but ASI is the point at which we are basically no longer smarter than the AI. The AI is smarter than every human put together, and by its nature it’ll be able to do things way faster than we would. You’d have a superintelligent AI that can work a million times faster than we do, if not more.
Ronn: Crazy. So one of the hot topics right now is Agent mode, where AI doesn’t just give you an answer, it takes actions on your behalf. In practical terms, what does that actually mean?
Ben: Good question. So you’ve seen some tools get released in ChatGPT over time. Originally, you just asked it a question and got an answer. Then they added the ability for it to do research, so it actually goes and looks on the internet for the answer, and then they expanded that to make it even better. But ultimately, they released an Agent mode, and effectively an agent is just an AI that can take actions on your behalf. With ChatGPT’s new Agent mode, it’ll open up a browser window on your computer like you would, and you can see it go to the navigation bar, type in whatever website it wants to go to, and start clicking around and taking actions like you would. So you could say, hey, book me a flight, or go buy this for me on Amazon, or go do research on this and then send this person an email, right? Now it can actually take actions on your behalf and do things beyond just giving you answers and results. With that being said, though, we are still very much in the early innings of this. One use case: we’ve been adopting the Agent mode within Apartment SEO to help with research, basically finding information that’s public online. But inevitably that’ll expand to where you can give it login credentials and it could, say, sign into Facebook for you, or sign into your Google Ads profile, or sign into Amazon and check out for you, and do more. So agents’ capabilities are still in the early innings, but they’re developing quickly. One thing with agents, though, is that the more control you give over, the more security plays a role, because there are more attack vectors for data to slip out, and for the AI to do things you don’t want it to do, so you’ve got to keep guardrails there. So agents effectively just take actions on your behalf, and we’re in the early innings of it, but it’s developing rapidly, and they’ll be able to do more and more things for us.
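Under the hood, most agent products boil down to a loop: the model proposes an action, the software executes an approved tool, and the result is fed back until the model decides it is done. Here is a toy sketch of that pattern in Python; the call_model function and the two tools are hypothetical stand-ins rather than any real product's API, and the step cap and tool whitelist illustrate the kind of guardrails mentioned above.

```python
# Toy agent loop: the model proposes an action, the code runs a whitelisted tool,
# and the result is fed back until the model returns a final answer.
# call_model and the tools below are hypothetical stand-ins, not a real product API.

def search_listings(query: str) -> str:
    return f"(pretend search results for '{query}')"

def send_email(to: str, body: str) -> str:
    return f"(pretend email sent to {to})"

TOOLS = {"search_listings": search_listings, "send_email": send_email}

def call_model(history):
    # Stand-in for a real LLM call that returns either a tool request or a final answer.
    if not any(step["role"] == "tool" for step in history):
        return {"action": "search_listings", "args": {"query": "2-bed apartments near downtown"}}
    return {"final": "Here is a summary of what I found and the email I drafted."}

def run_agent(task: str, max_steps: int = 5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):            # guardrail: hard cap on the number of actions
        decision = call_model(history)
        if "final" in decision:
            return decision["final"]
        tool = TOOLS[decision["action"]]  # guardrail: only whitelisted tools can run
        result = tool(**decision["args"])
        history.append({"role": "tool", "content": result})
    return "Stopped: step limit reached."

print(run_agent("Research availability and draft an outreach email."))
```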
Martin: That’s exciting. Now, how close do you think we are to everyday workers having an AI agent that operates almost like a digital employee? And what safeguards do we need in place to make sure it’s safe and secure?
Ben: That’s a good question. Depending on your level of expertise and your familiarity with these tools, you could effectively be doing that today. I use ChatGPT all day, every day, and it’s part of my everyday workflow. Now, the agents are really good at some things, but they’re not capable of doing everything yet, so depending on the type of work you’re doing, the agents may or may not be able to help you. But I’m 100 percent positive that everybody could find some value from ChatGPT in their day-to-day, whether it’s planning, writing emails, or whatever you would use the tool for. As far as how far we are from everybody having their own agent, that’s definitely something the industry is trying to solve, but it has a lot of prerequisites, like computational power, being able to serve it at scale, and also building the infrastructure around it so that the tools are actually useful. So I don’t know if we’re at the tipping point of everyday people using agents. But if you’re already using ChatGPT and it’s part of your daily workflow, that’s pretty much where the cutting edge is at the moment, and then you find ways to leverage the agent tools within that. There are a lot of companies working on those personalized agents, where you have one agent that follows you around day to day, kind of like that employee you mentioned. A big piece of this is setting up the infrastructure, because inevitably, the way work is going, you’ll have agents doing work on your behalf while you validate and manage an orchestra of agents. That’s being developed, but we’re not quite there yet.
Ronn: That’s crazy. So I have a question that I think is in the back of everyone’s mind. Do you believe AI is more likely to replace jobs, or will it mostly transform them into something new? I know I get asked that a lot.
Ben: Yeah, this is, again, a very heavily debated question. Ultimately, it depends on the timeline, right? Because if you were to fast forward, say, 100 years from now, at that point you’d probably have AI doing most everything for us. But the question is, when does that happen, and what does it look like in the short term? Let’s use YouTube as an example. YouTube didn’t exist until the internet was created. But if you were at the invention of the internet and you had asked somebody, hey, what do you think this is going to do, and somebody had said, oh, there’s going to be a video-sharing platform where people are going to get paid to make money as influencers, they’d look at you as if you were crazy and say, what are you talking about? So the internet gave rise to YouTube, which nobody could have predicted at the time. And that’s the same thing happening with AI right now: we don’t really know what the future is going to hold, because as these tools get better, what’s outside the realm of possibility now might be something that happens in the future. Whether it’s going to take jobs or modify jobs, I think it’s going to do both. On a long enough timeline, everything will be run by AI at some point; when that is, who knows. In the short term, it’ll help increase productivity as more people use it. But on the flip side, companies might have to hire fewer people. If you’re a team of 10 people, and now everybody’s leveraging these AI agents and you can do the work of 100 people, you don’t need to hire more people, so there are effectively fewer jobs there. You’re not really firing people, but you’re not hiring as many. On top of that, we are seeing layoffs at big tech companies like Google and Microsoft and IBM. That was partly driven by the over-hiring during COVID; combined with the efficiencies that AI is giving them, they’re reducing headcount, since they don’t need as many workers after over-hiring during COVID, and AI is playing a role in that, so now they just need to hire fewer people. So in the short term, it modifies jobs, in my opinion, and you might start to see some slowdown in hiring, perhaps. In the long term, more jobs are going to be automated by AI. But that doesn’t mean people are going to be out of work, because what’s the YouTube of AI going to be? What’s that next platform shift where you can find a way to make money that nobody would have thought possible today? The other piece of it is, if it does take everything, I’ve seen a lot of talk about UBI, universal basic income, because if AI is going to take everybody’s job at some point, who knows when, what does that look like for work, right? How do you support yourself? So I’ve seen UBI start to come up a lot more and be heavily debated for a post-AI, post-ASI world, like what does that look like? And these are things that are still heavily debated to this day.
Ronn: Yeah, a lot of speculation, I’m sure, right?
Ben: Exactly.
Martin: AI for President 2028.
Ronn: There you go. So Ben, before we let you go. This is a great question, if you had to leave our listeners with one key takeaway about AI’s future. What would it be? That’s a big…
Ben: Don’t fight it, embrace it. AI is scary. It’s scary even for me, and I’m an AI engineer; AI could do my job at some point, right? AI is kind of the one skill that can do any other skill. If you have enough data, if you have enough computation, if the algorithms support it, you can have an AI do anything that we could do. It’s kind of like a universal skill automator, effectively. And with that, really no jobs are safe at the end of the day. But it’s more along the lines of, how can you upskill yourself and level yourself up and use these tools to surf that wave, instead of trying to fight back against it? It can be scary. There’s a lot of uncertainty; we don’t know what the future holds, but everybody feels that. Even the people who are the most advanced in the AI world are feeling it too. So at the end of the day, what I would suggest is just start using the tools and learn them. There’s never been a better time; there’s never been an easier way to learn, right? These tools can help you learn about the tools. You can use ChatGPT to help you use ChatGPT better. You can use ChatGPT to help you understand what ChatGPT is, how it works, and what it’s good at. The more you use the tools, the more you’ll see where the rough edges are, what they’re good at and what they’re not good at, and that’s something you just have to experience. So my advice, to keep it simple: don’t fight it, embrace it. Start using the tools so you can learn them, understand them, and work them into your workflow, so you can see the benefits as well.
Ronn: Oh, I love that. So many more questions, right Martin.
Martin: Yeah, totally, totally. Any final thoughts, Ronn, before we wrap it up?
Ronn: No, again, I just love speaking with you, Ben. We’re so privileged to have you. Internally, you’ve helped revolutionize a lot of what we do and a lot of how we think. I’m super excited for the future, for us, for everyone. Obviously, our big focus is primarily multifamily and other SMBs, so I can’t wait to take them on the journey with us. Thank you.
Ben: Likewise. Yep, and I’ve said it before, and I’ll say it again: I love my job. I have the best job in the world, and I could not be happier. So I’m super grateful for the opportunity, and I’m excited for what the future holds.
Ronn: Yes, can you put that online?
Ben: You’re silly, right? I’ll buy a billboard. I’ll put it on the billboard.
Martin: I love it, man. Thank you. Thank you so much for joining us today and giving us a peek into everything AI; I think it really helped our listeners understand what’s going on at a high level, and even at a deeper level too. Don’t forget, everyone who’s listening, subscribe to The Multifamily Podcast, leave us a review, and as always, you can learn more about driving more results, more organic traffic, and paid traffic to your communities at apartmentseo.com. Until next time, I’m Martin with my co-host Ronn, and we’ll see you on the next episode. Bye, everybody.
Ronn: Cheers.
