When I began studying computing in 1993, the world was slower, heavier, and more deliberate. We used monochrome monitors that strained the eyes. Computers were bulky. Floppy disks were fragile. And the idea of personal access to a machine was a luxury.
At our institute in Colombo, Sri Lanka, we were allowed one hour of computer time per week. Wednesday. Sixty minutes to type, debug, and run our code. Because that hour was so precious, we spent hours, sometimes days, writing code by hand on special coding paper. The rest of the week was whiteboards, theory, and the mind’s imagination. That one hour was pure gold.
I still remember writing a space-shooting game in GW-BASIC. It felt like building a small universe. GW-BASIC was a simple language, but to us it was the gateway to creation. You had to plan each line. Test it in your head. Hope it would hold together when the machine finally came on. That kind of pressure makes you precise.
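To give a sense of what that planning felt like, here is a minimal GW-BASIC sketch. It is not the game I actually wrote, just the skeleton such a game hangs on: poll the keyboard, move the ship, redraw, loop.

10 ' Illustrative ship-movement loop, in the spirit of that old game
20 CLS : X = 40                        ' start the ship mid-screen
30 K$ = INKEY$                         ' poll the keyboard without waiting
40 IF (K$ = "A" OR K$ = "a") AND X > 2 THEN X = X - 1
50 IF (K$ = "D" OR K$ = "d") AND X < 78 THEN X = X + 1
60 LOCATE 24, X - 1 : PRINT " ^ ";     ' redraw; the spaces erase the old position
70 IF K$ <> CHR$(27) THEN GOTO 30      ' loop until ESC is pressed
80 END

Eight lines, and every one of them had to be right on paper before the machine was ever switched on, because there was no second hour that week.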
And yet, despite the limitations, we created. Despite the barriers, we built.
Back then, only institutions could afford computers. A full machine in a home was rare. Before that era, there were only mainframes: machines the size of rooms, owned by governments, banks, or powerful corporations. And with them came languages like COBOL, built for those behemoths. My first exposure to COBOL felt like touching industrial power.
In 1975, the Altair 8800 put the personal computer within reach. That moment changed everything. Suddenly, computing power could move beyond the institution. It began to trickle down into schools, labs, businesses, and eventually homes. And when laptops became mass-produced, we didn’t just unlock convenience. We unlocked participation. People used them to manage homes, recipes, businesses, and futures.
That shift, from exclusive power to accessible creation, is what we are witnessing again today.
Artificial intelligence is not new. It has been quietly evolving in labs, behind APIs, in data centers, unseen by most, much like the old IBM mainframes. It existed, but it was not available. Until now.
The rise of generative AI is the second personal computing revolution. Except this time, the machine is invisible. It is not a black box on your desk. It is a prompt window in your browser. And instead of learning code, you learn how to ask better questions.
That is what makes this moment so important.
People often ask me what AI will replace. That is the wrong question. The better one is what AI will enable. Just as the first personal computer allowed a student like me to build a game by hand, generative AI today allows a teenager to build a business, write a book, compose a song, or analyze global markets, without needing a technical degree or enterprise funding.
We are seeing this already. A designer in Jakarta creates logos with the help of image generators and sells globally. A teacher in Nairobi uses ChatGPT to write lesson plans and translate them into local dialects. A single mother in Ohio starts a side hustle by training an AI assistant to help her write grant applications for nonprofits.
These are not hypotheticals. They are daily realities.
Much like the early days of personal computing, the biggest transformation is not in the technology itself. It is in who gets to use it. And how they use it to rewire their world.
Of course, there are fears. Every shift brings them. Some worry AI will make us lazy. Others fear job losses, ethical breakdowns, or loss of control. These fears are valid. But history shows that when power becomes accessible, creativity tends to outpace collapse.
This is a call not to fear the shift, but to step into it.
We are back in a moment where the gates are opening. You do not need a lab. You do not need ten engineers. You need curiosity, intention, and the willingness to experiment. This is the new one-hour window. Except this time, you do not have to wait until Wednesday.
Ben
Technologist, Builder, and Student of Intelligence — Both Artificial and Eternal