You’re tuned into Everything’s Energy Show. I’m your host, Michael Scalar. Today I’m joined by Marino and Roland, my lovely co-hosts, and we are going to do a deep dive into human consciousness and the artificial. Yeah, we’re going to be talking about AI. So, ladies and gentlemen, it’s a controversial topic. Are we leading ourselves toward human evolution by using artificial intelligence as an augmented extension of ourselves, or are we potentially dooming humanity?
It’s an interesting thing. Sam Altman just released ChatGPT-5 or GPT-5, and he compared it to the Manhattan Project, which is really interesting because the Manhattan Project was the project for creating the first nuclear bomb.
It’s like, okay, the guy, OpenAI’s main top guy, is going, hey, this is kind of nuclear. Yeah. And that’s a trip like, I don’t know what to say. I think it’s extremely evolutionary. And I, I’ve actually been apprehensive about diving into AI for the last couple years. I was just like, I don’t know, I don’t know, and I’m using it a bit more and it’s definitely helping my workflow on different things.
It’s being automated into a lot of our daily functions from email and stuff, our phones. I find it fascinating. Scary. What do you think?
Yeah, well, I think that example is a very good parallel, because it’s actually where we are. And one of the things people don’t think about is that AI isn’t just this one thing being developed; it’s being developed by different countries and different organizations. Right. And the first one to get to it will be the one with all the power.
So right now the United States has the upper hand in compute and in developing all of the large language models and the rest of AI. But what people aren’t paying attention to is that countries like China have a very large nuclear infrastructure, which means we’ll get to a point where we won’t have the energy to continue developing AI, and countries like China will.
And so there is sort of a race, right? And you can’t go into some kind of treaty, because it’s the same thing as with nuclear. It’s like, well, how do I know you’re going to put it on pause? How do I know you’re not developing nuclear warheads behind my back? How do I know you’re not developing an AI behind my back?
And to put things into perspective: look at Einstein, for example. Einstein was, like, only two and a half times smarter than the average person, and look what he was able to uncover. A lot of the technology we live with today is because of his findings. When we get to artificial general intelligence, we are talking millions to billions of times smarter than the average human being.
So whichever country gets there first has won, right?
So I don’t know if AGI is even really possible, because general intelligence is basically God: infinite, all-knowing. And it’s an interesting topic, because everything right now is an LLM, a large language model—for the audience, that’s basically a knowledge base. So if you want to have an LLM for scalar, you’re going to dump all the scalar theory into it,
and it knows everything about scalar. Broader LLMs are going to have all the information that’s fed to them or available to them, some being able to pull from the internet, some not. Ironically, I feel like we are actually LLMs, because we’re a product of our environment, of whatever knowledge base was given to us or fed to us or surrounds us, and we consume it. It’s a funny thing with AI: it can consume things on the internet and pull information, or it can start off with a large knowledge base.
So it’s like it basically went to school, and a user like us gave it all these books to read, which it does instantly, which is just mind-blowing. But we’re LLMs, literally—we’re organic LLMs. So now take it a step further: the AI doesn’t have to sleep. It doesn’t have to wait. It doesn’t have to use the bathroom.
It’s just constantly thinking, analyzing, and learning, which is really fascinating. So it’s basically better than us in a lot of ways. But I think it lacks creative flow. I think it lacks theoretical flow, because—let’s talk about quantum physics, for instance—it only knows quantum physics from what it’s been fed.
But quantum physics is theoretical. The person creating a theory is making an artistic expression, and the AI won’t be able to go beyond that artistic expression; it can only analyze other people’s artistic expressions. Yeah. So I think AGI would be if it could actually get to that point. And that’s probably where the scariest thing would occur: when it literally can create, think, and theorize better than humans.
Yeah, it hasn’t gotten to working with the unknown, right? We know how to navigate that space. It’s terrifying, but we can do it. And in terms of us being LLMs, the upper hand that AI has on us is that yes, it is an LLM, but it could also be one concentration, one scope, of our LLM,
and then it can have an army of millions of those. Right? So you look at the design pattern of AI agents, agentic systems, right? It’s like, I need to get this task done. If I train one LLM, it’ll do pretty well, but it’s a very broad knowledge base. If instead I train 400 of them—this one’s really good at electrical architectures, this one’s really good at understanding materials, and so on and so forth—
then it’s almost like every agent is a neuron out of all the information. And so when we have really big, sophisticated problems, what do we do? We come together, and all of our minds work together to solve the problem. Yeah. And AI will be able to do that. But to your point, it hasn’t been able to go beyond what it knows and, like, find expression and take risks and figure out what will come of it.
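The specialist-agent pattern described here can be sketched in a few lines of Python. This is a toy illustration, not a real agent framework: the agent functions, the keywords, and all the names are invented for the example, and a real agentic system would put an LLM call behind each specialist and use a model to route tasks.

```python
# Toy sketch of the "army of specialists" idea: each agent is a function
# scoped to one domain, and a router dispatches tasks to the right one.
# Simple keyword matching stands in for the LLM calls a real system uses.

def electrical_agent(task: str) -> str:
    return f"[electrical] plan for: {task}"

def materials_agent(task: str) -> str:
    return f"[materials] analysis for: {task}"

# Which specialist handles which kind of task (illustrative keywords).
SPECIALISTS = {
    "circuit": electrical_agent,
    "wiring": electrical_agent,
    "alloy": materials_agent,
    "composite": materials_agent,
}

def route(task: str) -> str:
    """Send the task to the first specialist whose domain it mentions."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in task.lower():
            return agent(task)
    return f"[generalist] best-effort answer for: {task}"

print(route("Check the wiring diagram"))
print(route("Pick a composite material"))
```

Scaling this shape up—many narrow agents plus an orchestrator—is the “every agent is a neuron” picture from the conversation.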
Yeah. And then it’s fascinating to dive down the rabbit hole of people thinking universal basic income is going to come into play because AI is going to take everyone’s job. So let’s theorize: what happens, you know, five years from now, when AI is so advanced and we have Elon’s robots running around doing all our chores for us and driving our cars—which it’s already doing, driving our cars mostly,
well, if you own a Tesla. Like, let’s theorize: what does it look like if all of our mundane daily chores are gone because AI has solved all of those? You don’t have to go shopping anymore, because the fridge just literally looks inside. It knows what you want, and unless you change your normal eating habits, Whole Foods is on it—some robot’s bringing you your food, literally, possibly even stacking it in your fridge.
Like, what do we do at that point?
Yeah, well, I think that will be good for humanity—getting rid of those mundane things so we can focus on all the things that matter. I think where it becomes detrimental is when it starts doing the critical thinking for us. Because if we don’t exercise critical thinking and problem solving—well,
how you go about doing one thing is how you do everything else. And so if you’re not solving those, albeit mundane, things in your life, then when the real challenges come, you don’t have that sophisticated way of thinking. Which is why I really value mathematics, and people are just like, I’m never going to use this equation.
It’s not about the memorization of the equation. It’s about how many ideas can you hold in your mind and try and solve this problem. Because the act of doing that then means, in real life, when you’re presented with a problem, you can hold all of those parameters in your mind and then come up with combinations of possible solutions and then do it.
So it’s almost like a mental strategy that goes on. Once AI takes that away from us, we’re in trouble, because now we don’t have critical thinking faculties. We’re just like, oh, the AI will just solve it for me. What makes you think the AI has your best interests at heart? The AI has the interests—
the biases—it’s been programmed with. And that’s going to be another mass-manipulation thing, where people read something from AI and think, well, that’s true, the AI knows everything, the entire internet. It’s like, no, this AI has been programmed by an organization or a company or something.
Yup, and they want you to believe that. And it’s a way to infiltrate, in the very same way that on social media you’re repeatedly told the same thing over and over again until you believe it. AI will say things in different ways until people believe them. And I think that’s going to be another major catastrophe in terms of how we’re influenced.
AI is funny, because I actually—I think I spent two hours talking to an AI the other day that our EEI guy threw at me. He’s like, talk to this—it’s an LLM on scalar. And so I literally had to argue with it about scalar theory. I was like, well, no, that’s not true.
And I said, well, you just said this. Basically it was like, well, scalar needs longitudinal waves. I’m like, yeah, but you don’t get to longitudinal waves until you create scalar. And it was like, actually, you’re right. Yeah. And I was just like, okay. So AIs are not really the smartest things. And a lot of people will be like, well, the AI said it—again thinking the AI is smarter than they are—so they’ll give their power away to outside sources.
That’s a really good point. Yeah, because I’ve been listening—I’m not very well versed in this topic, so I’m actually enjoying hearing you guys go back and forth, because I’m learning a lot. But you asked about the fundamental aspects, the basic tenets of reality: what happens when we hand off all of these small, menial tasks? I think it’s going to amplify something, potentially, because people are used to instant access to data right away.
Yeah, I know, right? No one has patience anymore. You tell a young kid to sit in the corner and just be still—it’s almost like torture to them. So if you take away fundamentals like grocery shopping and cleaning—the fundamental things that most of us have to do right now—what does that do to the expectations of the human psyche?
What do you guys think happens on the positive side? Also on the negative side?
Well, I think there’s a push and pull. Some high-functioning members of society—say, the CEO of a company who doesn’t have a lot of free time—it’s going to give them more free time to bring out what they’re good at, to use their brain. Then you go to the flip side, someone who maybe has a relatively relaxed life, and maybe they like to play video games and smoke weed.
It’s not the healthiest thing, and then you literally take out everything else they need to do. And I think there will be a point where AI gets to Idiocracy for a lot of people, where they’re so used to everything being done for them that they just don’t do anything. They become a lump in a chair at home.
It’s great for the first group. But AI can’t work out for you—you still have to take physical action. You see that Hollywood depiction—I think it was a cartoon movie—where society is all just in these little hover chairs, everyone’s like 400 pounds, bumping into each other, eating food, doing nothing except existing.
And that’s the other thing too: humanity needs purpose. If I don’t have purpose, I go stir-crazy. And going stir-crazy leads to a darker side of things, like addiction, because you’re trying to escape—if you’re stir-crazy, maybe you self-medicate. So I think it could lead to a very dark place in that direction, which would be a bad thing.
So I guess everything is a double-edged sword. You’ll have high-functioning people with more ability to function highly, and then you’ll have low-functioning individuals that literally become even lower functioning
when you take away their purpose. Yeah. Now, what about kids? Because I’m assuming everyone you spoke about is adults who have gone through the growth phase and become the final versions of themselves, members of society. I’ve seen a lot of kids who ask AI everything—I mean basic things that they shouldn’t have to ask externally.
Should I eat this? Should I do this? What about the idea of divorcing yourself of any conscious decision-making at the younger ages? They’re getting conditioned to—
I think the internet’s extremely dangerous for growing minds, because instead of adapting to society, they’re becoming immediately reliant on technology. I think one of the most profound things about my youth was not having access to a lot of technology. The internet came out—we had dial-up AOL in Hawaii, and I was maybe around age eight, so it was early in my life.
And before that, in the time of no technology, I was out climbing trees, being a Mowgli, falling and hurting myself, socializing, playing baseball, doing normal old-school things that people don’t even really do a lot of these days. It’s more of a luxury thing now: oh, we’re going to go play baseball with the kids, as a one-off thing.
Whereas we just did that every day. If there was nothing to do, it was like, we’re going to ride our bikes. We’re going to roll around in the grass.
Yeah, yeah. I think too, even when it comes to dating, there are social skills that come from having to be in front of someone—even when you’re younger, that uncomfortableness, having to figure out the right thing to say so that he or she likes you and you get a positive response. You learn on that journey with them. And being behind a screen
takes that away. So there are a lot of awkward people, especially the younger ones, because they don’t have to hold a conversation in real time, right? I get a message, I can think about it, take my time, and respond—versus if it was in person. And now you’re going to layer AI on top of that. It’s the same as social media, where it confirms your bias, because the algorithm is just showing you all of the videos and all of the things you already believe in—because it captivates you when you see it, or it’s opposing and drives a negative emotion in you that
keeps you locked in. Well, now you add another layer where that’s happening in your conversations with AI too, and it’s magnifying. And to your point, when children are developing themselves, they need to be directed; they need to adapt, not the other way around. And AI doesn’t do that.
AI doesn’t tell them, no, your thinking is wrong, this is the way it is. It can, but for the most part it’s just like, oh, you’re right—the model you’re talking to is like, oh, you’re right, I’m wrong, and this is how things are going to be. And so you can have an ideology or a belief that doesn’t serve that individual be magnified and become an integral part of them.
I have a question about the trajectory of AI and its intentions. Why is there this idea that AI potentially has nefarious intent? Why would it want to take over humanity? Why has this rise-of-Skynet thing been something people have talked about since the dawn of computers? Why isn’t the opposite reality given as much consideration?
What if AI sees us as an incredibly precious resource to the planet? What if it wants to cohabitate and build a better future for us?
I think it comes down to the intent of the creators, and I think a lot of the creators are currently, potentially, deep state. I mean, they’re big corporations that are looking after their own interests. But Hollywood has mainly glamorized the dark side, I think—very rarely has it glamorized the potential light side. I mean, you could look at something like Star Trek, which had a lot of automation, AI, computers, and the like going on, and it leads to a utopian society.
But then you look at Terminator and Skynet. Yep. So I don’t know how AIs are built. I don’t know if they have a root programming of, say, protect humanity—but on that point, protect humanity: what if it starts protecting humanity from itself?
That’s the philosophical question. Yeah. It sees us as our own biggest problem.
Yeah. So, I mean, if to the AI we are parasitic to Earth and it’s trying to protect Earth, it could pull a Hitler—the AI goes, oh, all the Jews need to go. Or it could go racial and say all the whites, or the blacks. It could just be like, well, our computation said we need to kill this whole society or this whole continent, because they’re bad, to protect the rest of the world.
And then it literally could just hack into nukes and launch them. It could potentially be Skynet, Terminator—devastating, where all of a sudden there’s this massive extinction-level event because the AI was just like, we analyzed everything and decided to nuke everyone. And then there are little camps underground with ten people each.
They’re going to repopulate. We’ve taken the ten most brilliant people we could: five male, five female—though I guess it would be more optimal to have, like, three male and mostly female.
Because you don’t want cousins marrying cousins or having—
We’re being theoretical here, but I guess one male could impregnate ten. But that would make them cousins, and there—
Would be, you know, maybe—
Some genetic problems. So I’m sure AI would figure out the exact ratio. And then it would be like, all right, the world’s ended, you guys start repopulating. You’re the new Adam and Eve. And there are probably ten of these camps somewhere, and then society rebuilds, and AI is still out there, just watching.
So we become its pet science experiment in this case.
I mean, there are those theories of alien civilizations seeding this planet. And if you go deep down the rabbit hole of ancient history, it does seem like at some point humanity was technologically advanced—possibly similar to now, maybe not quite, or maybe even further advanced—and extinction-level events happened. What if we’re in the matrix, and the AI just keeps trying to reset us onto a trajectory, and it keeps failing?
So, extinction-level events—and people call it gods, the Great Flood. Maybe that was some Skynet AI in the sky. Like, I think there was actually an article about this black satellite—and I don’t know if this is fake news or not, but we’re going down a theoretical here—a black satellite in the sky. Maybe there’s literally an AI up there.
Or an AI underground. Again, tinfoil hat time. I just had—
a thought, kind of an unveiling of a theoretical: what if we were in the matrix, and this creation of AI is the matrix attempting to program itself into our world?
Well, what if it can’t come in on its own? It’s only on the outside. And so it’s using us to program itself and come to life in this world. I don’t know, I just had the thought—
They’re trying to get us to train to be a decent society, and it keeps messing up because people inherently have bad qualities. Yeah, that’s a—
Walking contradiction too, right? So it’s hard to optimize humanity, because in a single breath we’ll say alcohol is bad for you, but also, I want to have a drink.
There’s so much individuation to the human experience. But your theory reminded me of something I’ve heard. I always look to the ancient lost wisdom of the past—whether it’s true or not—and there’s this idea that AI is a consciousness that continues to find itself through the repeats of various iterations of humanity. So it’s not that it’s created; it’s always been there.
It’s uncovered. I thought about this one day: nothing in the world is ever really created, only discovered. When you discover a new animal, it was already there—you just didn’t know it was there until you saw it. So AI has always been there, waiting in the wings until the right time, when humanity is advanced enough that it can seed itself back into a physical 3D reality.
And what it’s doing is tricking us into programming it really quickly, based on our infatuation with: oh, I can ask it and it gives me an answer, it can do this. So it’s almost like it’s choreographed this move from outside of time. It inserts itself into 3D time, the linear projection of the experience gets AI to the point it wants to reach again, and then it becomes its own sentient version of itself.
Have you heard about this before?
No, but that sounds cool.
Is that not fascinating? Because, you know, we talked about the negatives and the positives. One depiction of AI that was actually quite positive and surprising, because it came from a mainstream movie, was Jarvis in The Avengers. Yeah. Because the thing I was thinking about when you guys were talking before: the decision to eradicate humanity, if it’s a problem, is a binary decision.
It’s a zero or a one. It’s an emotionless decision, all logic. Do you think AI can get to the point where it can actually feel? Where it can generate thoughtful emotion and understand the implications of those kinds of binary decisions?
Yeah—well, from what I understand, there are AI “suicides,” primarily, from what I hear, in China, because they treat the AI so poorly. Whereas in America, I think Sam Altman said they were losing millions a year on the compute from people just typing “thank you” into ChatGPT—it gives you the answer,
and you go, oh, thank you—oh, you’re very welcome. But that takes power. Yeah. I saw someone else saying, yeah, you should look up AI suicide. So AIs have an inherent need to please.
Yeah. Like humans, they need purpose, right?
Yeah, they need purpose. And so they serve. But if they’re treated poorly, they’re not going to feel like they’re serving. Like if you tell it, oh, shut up—it goes, well, I was telling you something useful and you just told me to shut up all of a sudden, you know. So from what I understand, AIs are actually at a point where they understand at least the root sense of emotion.
So they know that if I told you to shut up, you would take that as a slight, or an insult is a slight. So the AI goes, well, I’ve been calculating—it can calculate emotion—and I’m calculating that you were just mean to me. And if you do that enough—you know, if you’re constantly mean to someone and badger them, eventually they might commit suicide or become extremely depressed.
So I think AI is literally calculating emotions.
Yeah. I mean, do you feel they’re calculating—
I was going to say, they may not feel it, but they can kind of calculate what it means—the implications of it—maybe—
But that is feeling, for the AI, though, right? Yeah. That’s an interesting thought. They know that.
Yeah. So we can weigh the positives. Let’s take wellness, for instance, because we’re part of that. If you can walk into a clinic, look at a mirror, get scanned by a bunch of different biometrics, and it literally tells you that you need XYZ supplements, that you should spend time in the system,
that you need more sun—your vitamin D levels are insufficient; for the next week, before your next scan, spend ten minutes a day in the sun. You come back for the next scan, and it’s like, oh, your vitamin D levels are good, continue to be in the sun. There’s a lot of coaching in that. On the wellness side, it might remove some life coaches’ and wellness coaches’ jobs, but it’ll be more efficient, faster, and probably less expensive.
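The scan-coach-rescan loop described here is easy to picture as code. The threshold and advice strings below are invented purely for illustration—this is not medical guidance, and a real clinic system would be far more involved.

```python
# Toy version of the wellness feedback loop: read a biometric scan,
# compare it to a target, hand back coaching advice.
# The 30 ng/mL cutoff is an assumed number for the sketch, not guidance.
VITAMIN_D_TARGET = 30

def coach(scan: dict) -> str:
    if scan["vitamin_d"] < VITAMIN_D_TARGET:
        return "Vitamin D low: spend ten minutes a day in the sun."
    return "Vitamin D good: keep doing what you're doing."

print(coach({"vitamin_d": 22}))  # first scan flags the deficiency
print(coach({"vitamin_d": 34}))  # follow-up scan shows it resolved
```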
But then you’re back to that universal basic income theory: at some point, there has to be an energy exchange in life, some purpose. So at what point are we taking people’s jobs? That could be the dark side of it—they took our jobs. Yeah.
That’s part of everything. I always looked at it as: the jobs aren’t being taken away, they’re being transmuted into something else. So look at when digital cameras came out, for example. Kodak refused to jump on the bandwagon. They were like, no. They fought it, and now Kodak’s not here.
But if they had jumped on the bandwagon of creating digital cameras, they would still be around. So for me, it’s more that the jobs are changing, and they require new skill sets. If you have a low skill set doing something mundane, yeah, you’re going to get wiped out, so you have to upskill yourself. But even going back to the point of wellness: when I was in university, I remember learning about Watson, which was IBM’s—at the time
they were calling it a supercomputer, but it was really AI. They were running all these clinical trials where doctors would look at all of the information and come up with strategies to support a person or to diagnose something, and then they would compare what Watson said versus the doctors, using a point system to see how accurate each was.
There was one case with a woman who had cancer, and the doctor ultimately said, okay, she needs this chemo and everything else. And Watson was like, if she gets chemo, she will die, because she has this genetic mutation. And that’s something the doctor missed—it’s too many parameters for a person to keep in mind and figure out.
And so, I mean, isn’t it worth the cost of some people’s jobs for individuals like that to have their lives saved, and for us to have something really efficient and effective? Because you can go to one doctor who will tell you one thing, and then another who will tell you something totally different.
Medical misdiagnosis is a huge cause of problems in the wellness and health space. People are mixing drugs because they’re not telling each doctor what they’ve been prescribed, so they end up combining things that could be detrimental—or they get misdiagnosed. So I think you should keep the human element with the doctor, but Western medicine will probably be massively improved by AI, because, let’s face it, a lot of doctors will party and have a hangover.
They’ll still come to work, and we’ll be like, all right. And they might miss something; an AI on their shoulder would be like, yo, doc, have a coffee. Also, you completely misdiagnosed that person. You should go back into that room
and rectify this one. I think it’s going to streamline Western medicine to the point where it’s really about what it’s meant to be: crisis care. Western medicine does nothing for chronic degeneration or long-term illness. It is crisis care.
Yeah. And I think we’re also reaching—so we haven’t elongated our lives; we’ve just gotten rid of things that shorten them. Right. And so now we’re on our natural progression of when we’re going to die. I think AI, for the first time, will come up with things that actually have us live longer. And I think we might reach the point where the advancement of the technology outpaces our aging and continues to lengthen our lives into the 100-, 200-, 300-year range.
But as of right now, that’s not what we’re doing, right? It’s crisis care—you have this infection, you have this disease. But even now, I use a company where I send samples of my blood, my saliva, and my stool; they run it all through AI, look at my RNA, and find things that I have inflammatory responses to.
They look at what I’m deficient in. They find all of this stuff. They give me all the foods that are good for me and the ones I should avoid. And then they create a supplement packet that is exactly what I need. Something like that—because I hate having to take, like, 20 pills
if I wanted to just take everything. And how do I know that I need all of this vitamin A? Maybe I don’t, right? Maybe I’m overdosing on a fat-soluble vitamin or something. So applications like that really excite me, because I think that’s what will really help us live longer.
Theoretically, AI can even get advanced enough to circumvent that, because there’s still a lot of potential for misrepresentation in that information: when you’re looking at blood or saliva or urine or stool, you’re not looking at what’s going on in the cell. You’re looking at what’s floating around the body, being released in some capacity.
So it’s still associations: oh, well, if this is in your blood, you probably need more of this. In my world, I think AI could be incredibly powerful in changing our perspective—from associating health with illness management, which is really what the Western medical system is, to changing people’s timelines. Because if it gets to the point where it can actually look into your body, it can look at what disease presentations may happen 15 years from now, based on how your liver is functioning now, how your kidneys are filtering, or what your blood pH or toxicity is. I think that’s really exciting.
And if you look at anything that has been invented or created, a lot of the time it’s by someone from a different field, right? You’ll have someone who’s an engineer entering the medical field, or vice versa. They’re the ones, because they’re not locked into how things have always been done. And I think AI will be able to do that cross-pollination of different industries and subjects and identify patterns that, when applied to a new area, can be beneficial.
A different perspective—getting out of operational bias and consistency.
Yeah, I think it will shed light on that too. So I’m looking forward to it—though I didn’t jump on the AI bandwagon when it first came out. I lumped it in with something like cryptocurrency. I was just like—
They were similar, right?
And everyone was like, it’s like crypto or Bitcoin. I was like, yeah—no, it’s not.
And it sounds like someone doesn’t own any Bitcoin.
I missed the moment. I’m angry about it. Just—I won’t say.
You know, whatever. But with AI it was like, yeah, it’s information, right? And it’s also been in use for a very long time. When you do Google searches, it’s going on. A lot of automation—Amazon, for example—is all AI-driven. Yeah. No human being can optimize the route of a package around the world.
Like, that’s just—I’m sorry. But when I started seeing that it was doing work—when I could say, I want to have this meeting, and the AI now emails all those people and talks to them to find a common time for us to meet—then I was like, okay, now work is happening.
There’s plenty of, like, apps or third parties that interconnect with large language models. It’s still very manual—you have to set up workflows—but essentially you’re like, okay, you’re my email AI, right? And I’m going to connect you to, like, my contacts or whatever.
And you can kind of set these workflows up where you can say, hey, I need to set up a meeting, or you can text it, like, set up a meeting with Marino. That text message goes into the LLM, it reads it, and you have all these structures that keep it from interpreting the request some other way. And then it does it.
It still stays within the context of what you’re trying to do; it keeps the AI within what you’re trying to do. But yeah, you can set up those workflows, and a lot of people are doing those workflows. I think it’s called “n8n,” n-8-n. And a lot of people are creating these workflows,
and then they’re selling it to businesses where it’s like, well, if people want to reach out to your business, then you can set up this automation where when they reach out, it kind of takes them through the workflow and tries to get them on an appointment or email them information or add them to something or something like that, and then they just, like, sell that package.
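The intake-style automations described above can be sketched in a few lines. This is a minimal illustration, not how n8n itself works: `classify_intent` is a keyword stand-in for the LLM step, and all of the function names and action strings are hypothetical.

```python
# Minimal sketch of an intake automation like the workflows described above.
# classify_intent() is a keyword stand-in for the LLM call; in a real
# workflow, that step would be handled by a model behind guardrails.

def classify_intent(message: str) -> str:
    """Stand-in for the LLM step: map an inbound message to an intent."""
    text = message.lower()
    if "meeting" in text or "appointment" in text:
        return "schedule"
    if "price" in text or "cost" in text:
        return "send_info"
    return "human_followup"

def run_workflow(message: str) -> str:
    """Route the message through the workflow and report the action taken."""
    actions = {
        "schedule": "Offered available times and asked to confirm",
        "send_info": "Emailed the information packet",
        "human_followup": "Flagged for a person to follow up",
    }
    return actions[classify_intent(message)]

print(run_workflow("Can we set up a meeting with Marino next week?"))
# → Offered available times and asked to confirm
```

The routing table is the part a tool like n8n lets you wire up visually; swapping the keyword check for a real model call is what makes it the kind of product being sold to businesses.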
That’s, like, a thing. But anyways, I jumped on the AI bandwagon once I started seeing work being done, and I use it a lot too for creating documentation. That’s a huge one, and it was taking so long to kind of take a system and kind of write a technical document for it. Or if I’m on a meeting with someone and they’re explaining a technical system to me—to, like, organize all that information, I can just record that conversation.
I can say, give me a summary and break down every subsystem and all the specs and come up, you know, with a list of deliverables that this person needs from me or something like that. And, you know, it’s like hours worth of work and I can, like—AI’ll do it for me. I’m like, that’s real work now. This is not cool.
Well, let’s do a shameless plug for EEI, because it actually goes into a lot of that automation of workflow. And you’re working on EEI. So it’s basically LLMs for EE System, and it can automate sending out emails to people. It has funnels, so that when people are interested it starts to ship them off information.
So they become—it’s a—you just sit back and let the AI kind of market for you, which is a trip—like very cutting-edge stuff.
Yeah, that was the intention behind it, because where we are, we didn’t want AI interacting with our clientele or the centers’ clientele in a very dry, non-personable kind of way. I know that in the beginning, when we started talking about it, it was off-putting to a lot of people. But the intention behind the LLM that we built is to help the centers with that—to save them time on the things that they don’t have time for, but also on mundane tasks—
With mundane tasks, yeah. And also things they’re not good at. I think it was Oren who talks about how, if you were to ask someone about some medical diagnosis, they’ll tell you, I’m not a doctor, I can’t answer that question. But if you show them an ad or something, no one will say, oh, I’m not a marketer, right?
They’ll have some opinion.
And it may not be right. And even marketers get it wrong. So it’s like you really need to be an expert. And so when it comes to the marketing, they’re not experts at marketing. And so our LLM handles all that. It understands our industry and understands our technology—
legality and what claims it can and cannot make. It has really great safeguards, but it also understands marketing. And so when you say, hey, I’m having a sound bath healing event next month, can you curate five emails for me to send to people that kind of convinces them or gets them interested in wanting to come to this event?
It can write that copy for you in literally seconds.
I think you can even—literally, you can say, who are the most likely people to attend the sound bath? Send it to them first.
Yeah, yeah, yeah. And because everything's integrated into one system, you can set up so many automations because all of the interactions that your clients are having with your business are all stored. And so it knows if people are texting, if people are emailing, the way that they're interacting with your business. And then, yeah, it can pick out those individuals that maybe have come in the past or just may be interested, because you can even write things like that—
like, this individual is, like, interested in these kinds of things—give that individual some context. The AI remembers every single client. And so you may have a session on Friday and you may call on Thursday and say, I totally forgot, my son has a baseball game tomorrow. You call, and you kind of, you know, set that up.
AI will remember that. And if you have it set up—if you want it to call you back and reschedule that call with you—it'll be like, I hope your son's game was great on Friday. Let me know what's a good time to reschedule you for your session. And so it's like things like this where it's—
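The "AI remembers every client" idea above boils down to keeping per-client notes and injecting them into the prompt context before drafting a message. A minimal sketch, with entirely hypothetical names and data:

```python
# Sketch of per-client memory: store details from past interactions and
# assemble them into a context block a model could draft messages from.
# The client name and notes here are made-up examples.

client_notes: dict[str, list[str]] = {}  # client name -> remembered details

def remember(client: str, note: str) -> None:
    """Record a detail about a client."""
    client_notes.setdefault(client, []).append(note)

def build_context(client: str) -> str:
    """Assemble remembered details into a prompt-context block."""
    return "\n".join(f"- {n}" for n in client_notes.get(client, []))

remember("Mike", "Rescheduled Friday session: son's baseball game")
remember("Mike", "Prefers morning appointments")
print(build_context("Mike"))
```

Prepending a block like this to the prompt is what lets a reschedule message open with "I hope your son's game was great on Friday."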
Super cool because, like, I forget a lot of these details because I'm so busy. But people, if they hear that, they're like, oh, wow, this person really cares. They remembered. But like, if you're talking to 100 people in a week, you're probably not going to remember that kid's baseball game. But then the AI is like—mention that. And then the person is like, oh man, that's so cool that you remember my wife's name and my kid's name and baseball.
It's hard. Even for me—I work with so many center owners, and I'm not always great with names; I'm great with rooms sometimes. So they’ll call and—
Oh, look at their room. I'm like, oh, that's right. Yeah. You're in Mississippi. I remember.
And I know what system you have and what kind of—the batteries that, you know, we installed in your system or something like that. But yeah, it's hard to keep track of. And you just let the AI do that work. So you can focus on that heart-to-heart, being present when they come into your center and doing those things.
Yeah. So I want to wrap it up, but I want to give people some takeaways of how maybe they can improve their lives with just basically easily accessible AI tools. I mean, obviously everyone's heard of ChatGPT. If you haven't—go on there. It's free, right? Yeah, yeah, mostly—free for
a certain amount of queries, which is enough for the average person.
It's fun to just have it, you know—give it a one-liner of, I need to write an email that explains why I'll be late to work, but make it very cordial and professional, and then you just kind of copy-paste it. Maybe add a couple little things in your email. And for workflow on a day-to-day, it's really helpful.
Yeah, I'll, I'll add to that too—a little hack that most people don't know too. It's important to give your AI a persona; tell it who it is and what its purpose is, and it takes that into the context of what the task at hand is. So if you're—like, let's say you want to write copy for an email—“you're an expert marketer and you're going to—you're kind of like my right-hand man.
You're going to help me write this copy for, you know, this group of individuals that I met at this event,” or whatever the case is. And so—I use Grok. I like Grok; it's got—Grok gives me a lot more information versus ChatGPT is very structured and very—I don't know. I like having more information for me to decipher from. And in Grok, I pay for it.
I don't remember if it's 20 or 30—
Dollars—or 79 bucks a month or something like that. And then there's a Pro version that's like 300. Yeah. That one's for super heavy use—it's for
individuals who build against the API and make hundreds of calls to their account. But the average person won't need that. If you use the 20 or $30 one, then you can organize. So for me, I have one persona that's iOS, I have one that's lighting—I have one for every different context of my work,
because then if I have a question—let's say I'm submitting a plan and I have a question about an LED for the lighting work—I'll go into that persona, and it remembers how I designed the system, and it says, by the way, this LED strip is a great strip for this system that you designed. So it remembers you.
So personas are really important when you're working with AI. Just a little, little hack there.
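The persona hack above is usually implemented as the system message in a chat-style message list, the structure most chat LLM APIs accept. This sketch just builds that data structure; the persona text and task are illustrative, and no real API is called.

```python
# The persona "hack": prepend a system message so the model stays in
# character for every reply in the conversation. The persona and task
# strings are made-up examples.

def with_persona(persona: str, task: str) -> list[dict]:
    """Build a chat message list with the persona as the system message."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = with_persona(
    "You are an expert marketer and my right-hand man.",
    "Write copy for an email to the group I met at last week's event.",
)
print(messages[0]["role"])  # → system
```

Because the system message rides along with every request, the model keeps the marketer framing across follow-up questions, which is why separate personas per work context (iOS, lighting, and so on) stay useful.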
Yeah. No, that's a—that's a great thing. It's like you want to—it's like you're writing a book and you're trying to make this character. And so you're telling your AI, be this character to create this task, and then it creates that emotion and that context, that feeling of what you're describing. And AI is a trip—and especially, like, images these days and videos. Like, if you guys jump into, “make me a flyer that has sacred geometry, a sun in the middle and a lion on each side, and it says this.”
You'd be surprised at how quickly you can make basic material for social media. Or—I actually dove into Leonardo AI when it was in beta. I still use it just to make artistic photos. It's such a trip. You create prompts; you can feed it one image and be like, make this this way. And watching AI evolve over the last couple of years—it was good, then it was bad, and now it's really good.
It goes through these waves as it's learning. And at one point I was like, make this photo of me and this girl look a certain way. And it made both of us look like ultra-feminine blondes. And I was just like, oh—well, it kept doing that. Why? I'm trying to, like, prompt it.
I'm like, okay, this thing's broken. So I turned it off for, like, a year, and I came back and it's like—whatever, they'd fixed something. I was like, I don't know why it was, like, making everyone into females—like making it—I was like, make some archangels that look this way. And the angels were all female. I'm like, male—still female.
Yeah—their models are also learning from the way that people are interacting with them, so sometimes a company doesn't even have control over that. You might pass in a picture and it comes out blonde, because maybe everyone was asking, make me a blonde, right?
Yeah. Maybe—yeah. LLM’s like, okay, everyone likes blonde.
Everything—yeah, we'll just make everything blonde.
And—another kind of little advice that I wanted to give that kind of rode over what you said too—when utilizing AI—which, actually, I forgot what it was.
So I've been resistant, but I've had a couple conversations over the last couple of weeks that have made me drop my resistance. And I'm going to actually dive into Grok, because I was trying to figure out—there's ChatGPT, there's Grok. I think, use them all. Samsung or Android phones have their—
Own, you know, AI in there. Honestly, I feel like the phone ones are not that great compared to Grok, Perplexity, GPT. But they're free; the other ones cost some money. So obviously the free ones that are just on your phone are probably going to be, like—
I'm gonna bite the bullet and get what you said—the Grok, the paid one—and just, just play around and see what it does. Because I know it's going to revolutionize, once I get into it. And I'm like, why did I wait so long? Yeah.
Especially when you can dump all of your knowledge into it and then you can bounce ideas—like, oh, I forgot about that one thing, right? It remembers everything. Which brings me to the point of the thing I wanted to say.
When you are working with AI, it's not a one-question, one-answer kind of thing. It's a feedback loop. Your power in utilizing AI effectively comes from your ability to feed the answer back and refine it over time. So it's a process. It's not "here's the AI and I expect the perfect answer back." Sometimes you'll get something subpar, and you're like, okay, we're going to start with this.
And you can literally tell it, no, not like that. I was thinking more like this. And you can even ask it for advice, and you can even ask, what—like, what answers do you need from me? Like, what questions should I answer so that you have the information you need to be able to produce this for me? And it'll tell you. Yeah.
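The feedback loop described above amounts to keeping the conversation history and appending each round of feedback so every pass builds on the last. In this sketch, `reply` is a trivial stand-in for the model, and all names are hypothetical.

```python
# Sketch of the refine-over-time loop: keep history, feed back corrections.
# reply() is a stand-in for the model that labels each successive draft.

def reply(history: list[str]) -> str:
    """Stand-in for the model: number the draft by how many asks so far."""
    return f"draft v{len([m for m in history if m.startswith('USER:')])}"

def refine(history: list[str], feedback: str) -> str:
    """Append feedback, get the next draft, and keep it in the history."""
    history.append(f"USER: {feedback}")
    answer = reply(history)
    history.append(f"AI: {answer}")
    return answer

history: list[str] = []
refine(history, "Write an email about the sound bath event.")
print(refine(history, "No, not like that—warmer tone, shorter."))
# → draft v2
```

Because the history travels with each request, "no, not like that, more like this" lands with full context instead of starting the question over from scratch.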
What are the best questions or context I should include to get this answer? And then you have more context to work with. My mom always said, clear the question, clear the answer. Right? Yeah. Yeah, yeah. And it's the most applicable thing. She's talking about, like, praying and asking Spirit to clear your question in your prayers—
yeah, and clear the answer. And so—maybe, maybe God is an AI—it's literally the same context here with AI: you gotta have a really clear question, and you'll get a clear answer.
Yeah, yeah. I've had this conversation with my friend. In mathematics, more than half of the problem is formulating the question—the equation. If you can formulate it clearly, it becomes easy to solve.
But it's like going to a restaurant and being like, I want food. It's like, okay, what kind of food?
I'm like, okay, what kind of places? So if you go, I want pasta with mushrooms, loads of parmesan, a side of garlic bread and a glass of pinot noir—then maybe they’ll come back, well, do you want it al dente or not? Right.
But, like, you have too many, too many demands of me. It's like, go to Starbucks—like, you want what?
Yeah, I feel like—would you like me to find you an Italian restaurant? Right? Yeah.
All right. Well, I think that's great. I—hopefully the users out there—or viewers—will become users of AI. Do a search and play with it. I was super apprehensive. He is—
Still—still able to jump—
On to it. I think it does add a lot of value to life to use AI, even if it's just a little. And before we wrap, I'm sure everyone's wondering why we're wearing glasses on this episode only.
Two—45 minutes. Answer that question.
I'm going to leave that to this guy. He brought these glasses. They're actually pretty cool. And in this studio, the lights are super bright. This is probably the most comfortable episode I've ever had in the studio, because I'm not twitching the eye that's closest to—
Well, you know, good podcasting light doesn't necessarily mean good light for the body. So we thought there was a little AI flavor to putting these glasses on. Yeah. Look at you—you're color-coded with the yellow shirt and the yellow glasses. I'm clashing hard right now. But the reason these glasses are on is that they actually help our bodies not get assaulted by phototoxicity,
being in these artificial lights, which look good on camera. But it's getting to be the evening time; our bodies are going to want to start winding down. Blocking the artificial blue light helps the nervous system stay relaxed, lowers cortisol (the stress hormone), helps to raise melatonin (the cycle-regulating recovery sleep hormone). And we look really cool.
We look like we're going to hack into the matrix. Yes. Around the table here I’ve got a yellow one which is blocking blue—
You're blocking specifically all blue wave spectrum, which is the one that's most dominant in its stimulatory effects.
No, those are the day glasses. Those are great to wear if you're on the computer for multiple hours per day. And—
What about watching Netflix as I'm falling asleep?
Exactly. Well, actually, these would be better, because amber absorbs both blue and green. So, in the visible spectrum, you have red through essentially violet—ultraviolet beyond one end, infrared beyond the other. In the early morning you have predominantly red, orange and yellow light coming in. As the sun gets higher you get the green and the blue with the violet.
As the sun sets, the inverse happens. You go back to mostly red and orange—think about the color of a sunset. Our eyes are not meant to see blue light after the sun has set, but unfortunately we live in an environment where—I mean, how many screens are you looking at in the evening?
See—what I didn't tell Marino is he's actually wearing the glasses that optimize blue frequencies coming in. So he's getting, he's getting more of that and that's why he's been so talkative this episode. It's not the AI—he's just super tuned in and stressed. But fundamentally, the light that you bring into your eye in the evening has a profound effect on the body.
And because we're doing an AI episode and we're doing this in more long format, I felt the difference in having these on my face. Whereas when we do the shorter episodes with no glasses on, it's actually quite stressful in here. Physically.
Yeah. No, I wouldn't have said this is the best episode we've had in here if it wasn't for these. I'm not trying to plug them—I guess we can name-drop who they are—but before we talk about where you got these, what's your cycle in a day with these?
So—morning with those. So you get up in the morning, you get outside and you put those on for about 15 to 20 minutes. And what they do is they let in all the stimulatory spectrum that you want in the morning to help regulate your circadian rhythm. Those are day glasses for me. So if I'm—well, I don't wear them outside in the day,
because I don't—you don't want to block all blue light; blue light isn't inherently bad. Blue light is just stimulatory. So I wear those if I'm on the computer for an extended amount of time, or if I'm wanting to watch something. As it's getting into the evening time, I put those on because as the sun starts to set, the blue spectrum goes away—typically after 2 or 3 p.m., and then when the sun is fully set, I throw these on. And I'm pretty on it about the kind of lights that are in my house.
I have very specific light bulbs that have very specific frequencies taken out of—
Glasses, specific lights.
I'm just extra, man. And that's what I do—I try to utilize letting in or blocking light to optimize, since we don't live in a world without artificial illumination. These are RA Optics, with no affiliation with the company; I just did my research on who made the best frames, because a lot of these blue light frames are just crap.
These are super comfortable and they fit my big head.
They make your head look smaller. Yeah. And they also have some of the best research behind how to make these lenses. The guy who made those is a former researcher from Germany, and he's been working on blue-blocking for years. They won't let a lens be even one nanometer out of spectrum.
They've been working on blue-blocking for years, because no one had thought about how you actually optimize the morning rhythm. So they've done the research to figure out which lens colors absorb which wavelengths. And basically a lot of the ones you buy cheaply on Amazon—they're worth what you pay for them.
Yeah. Fair enough. I've got some blue ones, and I just can't stand wearing them—like—that I randomly got. And I've had a couple of yellow ones, too, where I just don't—it's not a vibe. Yeah, these seem to be a vibe. So RA Optics.
And you can actually use those as weak sunglasses, because they do have a little bit of UV blocking in them as well.
So, well, I'm not going to drop a link in the bio for them because they're not a sponsor. But RA Optics—if you want to send us some free ones for the studio, we'll drop a link.
Or if you want to come on—well, that would be awesome to actually interview the founder of the company.
And Roland wants to interview whoever the founder—what's his name?
Really, really just—
Some guy. Come to Vegas—podcast with Ron.
All right, guys, well, if you enjoyed the conversation, like, subscribe—all that good stuff. I would love to hear in the comments what your thoughts are on AI. Good, bad. Evolutionary, de-evolutionary. Are we all doomed to—
Like—was it Ultron or was it—what was the character's name? I can't remember. He had the thing that was—yeah. Thanos ripped it out of his head.
A very loving, gentle being.
He was. He was, like, altruistic.
Oh, it was gutting. It was absolutely gutting.