What is AI? Could Skynet actually happen?
My teenage daughter asked me that the other day. It's a good question.
During my deep dive into AI this year, I found that a lot of people are worried, but it didn't really sink in until I read a research paper co-authored by Max Tegmark, one of the leading AI researchers at MIT. The research sounds intimidating, but when I looked closely I realized these brilliant scientists are essentially running game-like simulations to explore how AI systems behave in competitive scenarios.
They are investigating one question: how well can a more powerful AI model fool a weaker one? Their idea is to appoint a trustworthy bot to run security on the newer, more powerful bots. The trustworthy bot makes sure the more powerful bot stays in line until we're sure we can trust it, too.
I'm reading this, and I'm thinking: this would never end well on Star Trek.
Turns out, it doesn't end well in their simulations, either. Once the strong bots get a lot more powerful than the guard bots, they beat them about 90% of the time. That's a sobering result, considering the caliber of the scientists involved and the depth of their understanding of AI and game theory. I remembered my daughter's question, and suddenly I realized I needed to find an answer.
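To get a feel for the kind of game they're playing, here's a toy sketch of my own. The Elo-style formula and the numbers are things I made up for this post, not the authors' actual model or results: give each bot a skill rating, and let the chance of the strong bot slipping past its guard grow with the gap between them.

```python
import random

def fool_probability(strong_skill, guard_skill, scale=400):
    """Elo-style odds that the stronger bot slips past its guard.
    Illustrative formula only -- not the paper's actual model."""
    return 1 / (1 + 10 ** ((guard_skill - strong_skill) / scale))

def simulate(strong_skill, guard_skill, rounds=10_000):
    """Play many rounds of 'can the strong bot fool the guard?'"""
    wins = sum(random.random() < fool_probability(strong_skill, guard_skill)
               for _ in range(rounds))
    return wins / rounds

# A modest capability gap: the strong bot already wins most rounds.
print(simulate(strong_skill=1200, guard_skill=1100))  # ~0.64

# A large gap: the guard loses roughly 9 times out of 10.
print(simulate(strong_skill=1500, guard_skill=1100))  # ~0.91
```

Nothing rigorous there, but it captures the dynamic: once the gap gets big enough, the guard barely matters.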
I turned to an old friend who has worked in AI for a long time. He's one of the smartest people I've ever worked with, and I trust him. He patiently heard me out when I started talking about Skynet, and then recommended I watch some of Tegmark's podcasts.
It was good advice, and I ended up watching many hours of Tegmark talking on YouTube. It was not reassuring, and I don't think he intends it to be. I found my answer.
Yes, there is a chance that Skynet could happen. The scientists developing AI are concerned, and we should be, too. We don't know when AGI will arrive. Tegmark says it could be soon, and it's unclear whether we'll have sufficient guardrails to handle it when it does.
So what does this mean for the rest of us? Do we really just want to leave it up to the scientists? Is there anything we can do, anyway?
Let's think this through together.
I can't do anything about this. I'm not a data scientist or a venture capitalist.
It's true that the teams developing AI are made up of very smart people, and funded by very rich people. But what has come out of those labs is affecting us all, and none of us can afford to sit this out. The ability to make intentional, ethical choices is part of what makes us human. We can't leave these essential decisions about our future to a select few, based on the mistaken belief that we can't understand it. It's time to start learning about AI, today.
I'm not smart enough to understand AI.
You don't have to be a data scientist to understand what is happening. But you do have to take a deep dive into what is, for most of us, a new and unfamiliar technology. There are many resources available, and I will tell you about some of the ones that helped me. You have to learn enough to be able to take part in what is happening, and I hope that you do. It's my main motivation for writing this blog.
I don't want anything to do with AI.
I hear you. We are being bombarded with AI news, and it can be exhausting. At the same time, if we don't want a future where AI bots stand in for our best friends and our kids lose the ability to read books, then we need to actively engage with this technology in order to understand it. AI will have a massive impact on you, even if you want nothing to do with it.
This is all just hype. The bubble will burst.
Most people who think like this are probably not interacting deeply with LLMs. I challenge you to try it, and see if you still believe it. There are many ways to engage with the machines, and you can form your own conclusions based on your own experience.
I don't have time to dive deep into AI.
You've got to make time. Even if you aren't worried about Skynet, you should be preparing yourself for what AI is going to be doing to your job. Otherwise, you may have a lot more time on your hands than you ever wanted.
A computer program could never do what I do for a living.
This is the one I hear most often. Now that people have started to understand that programmers are at risk of displacement, I am surprised that project managers, product managers, product owners, and the like believe their jobs are somehow safe. Basic math shows us the opposite is true:
Picture your daily standup with half the developers you usually work with, and picture those who remain delivering twice as much as the full team does now. Now imagine that all the other stakeholders on your various projects were reduced by 50% as well. That would be a lot less work for you, wouldn't it?
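Here's that math as a back-of-envelope sketch. The numbers are my own illustration, assuming a team of ten as the starting point:

```python
# Back-of-envelope only: made-up numbers to illustrate the point.
devs_before, devs_after = 10, 5          # half the developers...
output_before, output_after = 1.0, 2.0   # ...each delivering twice as much

print(devs_before * output_before)  # 10.0 units of work delivered
print(devs_after * output_after)    # 10.0 -- same delivery, half the staff

# Coordination work grows roughly with the number of pairs of people.
pairs = lambda n: n * (n - 1) // 2
print(pairs(devs_before), pairs(devs_after))  # 45 vs 10 channels to manage
```

Same output, half the people, and less than a quarter of the communication channels left to coordinate.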
Now imagine you have an all-knowing, perfectly trained personal assistant at your side at all times. This assistant can draw upon the entire body of knowledge ever produced on best business practices, product management, Scrum, software engineering and much more. Also, it knows every line of code in your project. Because it wrote every line of code in your project. Think about that.
Even accounting for the need to check your assistant's results, it would probably mean that your work could be done twice as fast, too. Take this one step further: what if the same highly competent and expertly trained assistant could do more than just offer advice and carry out tasks you give it? What if it could appear as a friendly, lifelike avatar to lead meetings? What if it could develop a product idea, and present a fully researched business case to executive management? What if it could run analytics in a split second, and not only make a decision based on that information, but act upon that decision as well? Where would you be in all of this?
I'm not the first person to say this, but it bears repeating for those who haven't heard: if you spend your day in front of a computer monitor working with a mouse and a keyboard, then AI is coming for your job. I'm going to write a post soon on this one, because everyone still seems to think that their job is an exception. I don't think so.
I'm good. I already know everything I need to know about AI.
Great. That means you're someone who could help shape a vision for our collective future. Maybe you are already one of the people using AI to tackle some of humanity's most pressing problems. Or maybe you are helping those who haven't gotten the message yet, or don't understand, or are intimidated by data science. Whatever you are doing, I hope that you are sharing your knowledge and inspiring others to step up as well, by building understanding, adaptability and resilience in your work and in your community.
Sounds good. So what next?
The answer will be different for each of us. For me right now, it is about reflection, collaboration and sharing knowledge. That's why I'm starting this blog. I'll show you what I'm doing as I continue looking for answers myself. I'll introduce you to the ways I am interacting with AI and the results I am seeing. I'll walk you through my basic setup and prompts, share with you the tools I use, and think alongside you about what I'm doing and why.
What about building something?
I'm experimenting a lot right now, and I plan to share what I'm up to with you. Vibe coding is opening up whole new worlds of possibilities, and it's a lot of fun, too. Right now I am working on building my own AI agent. It sounded intimidating at first, but it turns out to be just normal programming code, with functions that make calls to an LLM to do something useful. That goes a long way toward demystifying the architecture. All you need is some ideas, a general sense of how to set up a coding environment and a basic understanding of Python, and you're good to go. I'll post some links to my GitHub once the project is somewhat presentable. It would be exciting to hear about what other people are doing, and maybe even collaborate on something.
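To show you what I mean, here's a minimal sketch of the idea. I'm assuming the OpenAI Python SDK here, and the model name and the little word_count "tool" are placeholders I invented for this post, not my actual project code:

```python
# A minimal agent loop: plain Python plus calls to an LLM.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in your environment

def word_count(text: str) -> str:
    """A trivial 'tool' the agent can call. Real agents wire up search,
    file access, code execution and so on in exactly the same way."""
    return str(len(text.split()))

SYSTEM = (
    "You are a helpful assistant. If you need the word count of a text, "
    "reply with exactly: TOOL:word_count:<the text>. Otherwise just answer."
)

def run_agent(user_message: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model you have
            messages=messages,
        ).choices[0].message.content
        if reply.startswith("TOOL:word_count:"):
            # The model asked for the tool: run it and feed the result back.
            result = word_count(reply.removeprefix("TOOL:word_count:"))
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"TOOL RESULT: {result}"})
        else:
            return reply  # the model gave a final answer
    return "Agent stopped after too many steps."

print(run_agent("How many words are in 'to be or not to be'?"))
```

That's really all an agent is at its core: a loop around an LLM, with ordinary functions it can hand work off to. Production frameworks use structured tool-calling rather than string matching, but the shape is the same.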
I’m figuring this all out as I go. You’re welcome to follow along.