When OpenAI’s ChatGPT was released to the public in November 2022, it sparked a frenzy of hype and panic about the capabilities of generative artificial intelligence (AI)—a form of AI that creates original content in response to a user prompt.

The software’s ability to understand natural language and to offer creative responses was not just amazing but downright uncanny. It could write compelling essays and working computer code. It could play word games and tell jokes. It could listen to your mental health struggles without judgment and offer therapeutic advice. It was, it appeared, truly intelligent.

That enthusiasm was met just as quickly with fear. Within days, commentators were forecasting the demise of entire professions, from software developers to journalists to doctors, not to mention the artists and writers whose work was used to train the AI without their consent. Meanwhile, technology scholars came out of the woodwork to warn about the threat to humanity posed by an artificial intelligence that could soon outsmart, overcome and discard its fleshy captors.

Investors, smelling the next big tech gold rush, poured billions of dollars into OpenAI and other Silicon Valley startups. In the months that followed, Google and other big tech companies rushed out ChatGPT competitors of their own. Some employers even tried to replace their workers with AI, as in the case of call centres that shifted to ChatGPT-powered text support.

Caught off guard, some governments around the world raced to control the technology. The EU and U.S. led the way with attention-grabbing regulations that, various critics argued, would simultaneously strangle progress and do nothing at all to rein in unsafe AI development.

Through it all, workers, students and other curious individuals began playing around with ChatGPT and its ilk. And the more they played, the more aware they became of these tools’ limitations.

These AI tools were prone to making things up—what experts variously call “hallucination” or “confabulation.” They reflected biases in their training data. They were unaware of current events. They wrote arguments and code that looked good but didn’t hold up to scrutiny.

As users began to understand how the algorithms behind tools like ChatGPT function—by probabilistically guessing one word at a time based on patterns in their training data—the sheen of intelligence also began to fade. The software wasn’t “thinking” after all. Far from understanding what it was talking about, ChatGPT was just stringing together words in a coherent but fundamentally unintentional way.
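To make that one-word-at-a-time process concrete, here is a deliberately crude sketch in Python. It is illustrative only: it “trains” on a tiny made-up corpus by counting which words follow which, whereas ChatGPT relies on a massive neural network trained on a large portion of the internet. But the generation loop is the same in spirit: sample the next word probabilistically from patterns in the training data, append it and repeat.

```python
import random
from collections import defaultdict

# A toy "language model": real systems like ChatGPT use enormous neural
# networks over subword tokens, not word-pair counts, but the loop at the
# bottom mirrors the basic idea of probabilistic next-word generation.

training_text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog sat on the mat"
)

# "Training": count how often each word follows each other word.
follower_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current, following in zip(words, words[1:]):
    follower_counts[current][following] += 1

def next_word(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    followers = follower_counts[word]
    return random.choices(list(followers), weights=list(followers.values()), k=1)[0]

# Generation: one probabilistic guess at a time, fed back into the loop.
word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the rug and the cat"
```

A loop like this can produce fluent-looking text with no notion of meaning or intent, which is precisely why the output of far more sophisticated versions can sound sensible while being completely wrong.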

One year after the release of ChatGPT, it’s tempting to dismiss both the hype and the panic that surrounded the popular emergence of generative artificial intelligence. Neither the utopian nor dystopian visions of an AI-powered world have come to pass. And, in retrospect, that should have come as little surprise given the typical trajectory of novel technologies.

The internet itself is an illustrative example. Though the basic consumer infrastructure existed in the 1990s, it took decades for many individuals and institutions to fully integrate it into their lives and processes. Even then, many of the predicted casualties of the world wide web are very much alive today. Their business models may have been forced to adapt, but for all the Googles, Airbnbs and Bitcoins out there, we still have libraries, hotels and banks.

And yet, the internet did transform the world in many ways, for better or worse—the stunning, unprecedented shift to remote work and learning during the Covid-19 pandemic being a particularly striking example. It is now difficult to imagine life without the internet. And that’s precisely why dismissing generative AI entirely at this stage would be a mistake.

First of all, tools like ChatGPT are already having a real-world impact, and nowhere more so than in schools. Teachers have been plagued by AI-written submissions—a novel and discreet form of plagiarism—without the tools or training to address the problem. This is a context where the downsides of AI, such as inaccurate information, are harder to trace. Meanwhile, the upsides, in terms of fast and free homework generation for students, are incredibly alluring.

The education system may yet stand to benefit from generative AI. For example, a personalized expert tutor on every single subject that could work in tandem with real teachers is an enticing prospect. But an education system blindsided by ChatGPT’s unregulated release hasn’t had enough time to figure out how to make it work in a responsible and effective way. In addition to concerns about accuracy, there are serious outstanding risks related to security, privacy, accountability and the inherent safety of these tools, especially when it comes to their use by students. Until a legislative framework is in place to mitigate these risks, it will be very difficult for educators to make the most of the technology.

In other sectors, there is anecdotal evidence of workers successfully adopting ChatGPT. Early studies have found that workers who use AI tools, flawed as they may be, tend to get more done and to produce higher-quality work than those who don’t. That generative AI can help many workers be more efficient without fully replacing them is an encouraging finding at this stage. Whether workers will ultimately reap the benefits of that increased productivity, or whether over the long term they’ll be expected to do the work of multiple people for the same pay, may be a source of labour disputes moving forward.

The second reason not to dismiss generative AI is that transformative technologies simply take time to reshape the real world, and the ChatGPT era is still young. As with the internet, it will be years before employers and other big institutions figure out how to fully integrate generative AI into their processes, both technically and culturally.

Indeed, even if you eschew the AI hype now, every word processor and email client will soon have an AI co-pilot that helps you write and edit your work. Some already do. Familiarity with AI tools will likely be as essential a professional skill as using Microsoft Office, even if the technology never gets better than it is today.

But of course it will get better, which is the third reason to take generative AI seriously. The technology is evolving extremely quickly. Since its initial release, ChatGPT has gained the ability to look things up online and to both “see” and create images, among other improvements. AI tools can now read and parse large documents, interpret spreadsheets and perform many other complex analytical and creative tasks that were previously seen as impossible. The sheer number of companies working to develop artificial intelligence technologies also makes it unlikely that, regardless of the pace of development, the cat will ever be put back in the bag.

Predictions vary, but some experts believe that within the next decade AI tools could displace around a third of the hours currently worked by people in developed economies such as ours. That’s neither a dystopian nor utopian prediction, since the distributional consequences will have more to do with policy than technology. If history is any indication, the widespread adoption of novel technologies will create many new jobs that don’t exist today. But whether the economic benefits are shared or concentrated in the hands of capital will depend on our regulatory and taxation schemes.

We may nevertheless be confronting a labour upheaval comparable to other major technological innovations of the past hundred years, such as the introduction of computers into offices. The question we should be asking ourselves, however, is not whether ChatGPT is going to replace workers directly—in the vast majority of cases, it can’t—but how we are preparing for a future where generative artificial intelligence plays a large and growing role in our economy and society.

How can unions and employers support workers as AI becomes integrated into workplaces? How can governments regulate AI to mitigate the risks of erroneous and harmful content as well as poor data management practices? How can educators prepare young people to use AI tools while thinking critically about their outputs? And how do we ensure the public benefits from a technology that is largely owned and controlled by private U.S. tech firms?

The initial generative AI hype may have been overblown, but we have a lot of work to do to prepare for what comes next.

Thank you to Mischa Terzyk and Mia Travers-Hayward for their feedback on earlier drafts of this commentary.