New technology often first appears on the desktop as an experiment. Gradually it becomes a useful aid, and eventually an important part of everyday life. Over the past couple of years, that is exactly what has happened with artificial intelligence. Not long ago, generative AI belonged mainly to research labs and technology companies’ development projects. Today, it is used constantly for tasks such as drafting reports, compiling background briefings, producing analyses and, increasingly, supporting decision-making.

At the same time, so-called AI agents are developing: systems that do not merely answer questions or carry out individual tasks, but can coordinate work, assign tasks and track their progress. They are no longer only part of workflows; they are beginning to take over functions that have traditionally belonged to management. At the same time, the need is growing to understand what risks this development brings with it – and for whom.

“The development of AI is marked by exponential growth. Capabilities can improve in a very short time in ways that are difficult to predict in advance,” says Anna Katariina Wisakanto, a researcher in AI safety.

Photo: Pentti Hokkanen/Flaming Star Oy

Wisakanto works at the international think tank Center for AI Risk Management & Alignment, which focuses on AI safety and risk management. She is also a founding member and chair of the Finnish Center for Safe AI, a network that promotes research on AI safety. Her work sits at the intersection of technology and society.

“At the heart of my research is a question that is both technical and societal: how can we understand and measure the risks of AI in a way that genuinely serves decision-making?”

A transformation that affects thinking itself

It is easy to think of AI as new software or simply a more efficient tool. Wisakanto considers that far too simplistic a way to understand what is changing.

“Earlier technologies automated tasks, but AI – and generative AI in particular – automates decision-making and thinking.”

As a result, the technology is moving into areas that have traditionally been seen as central to expert work: interpreting information, weighing alternatives and structuring complex wholes. At the same time, AI’s technical logic differs from that of earlier digital systems. In the case of large language models, for example, instructions and data cannot be separated from one another.

“With language models, instructions cannot be separated from data. That is not a bug; it is a fundamental property of the technology.”

In practice, this means that any document or message processed by AI can also steer its behaviour, whether intentionally or unintentionally.

In addition, AI operates through natural language, which is always ambiguous and context-dependent. For these reasons, adopting AI is not the same as acquiring a new information system.

“If we do not understand that difference, we will plan the adoption of AI as though it were just another IT system. That would be a serious mistake,” Wisakanto says.

When AI supports expert work

The effects of AI are already visible in many organisations. Virpi Hotti, a leading specialist at the State Treasury, works on the use of data and analytics and is also involved in developing the State Treasury’s AI solutions. She points out that AI often functions above all as an aid to experts.

According to Virpi measurable benefits of the AI has not yet been reported on a national level.

“AI is especially helpful in situations where you have to start from a blank page and work through a large body of information. In these kinds of preparatory tasks carried out by experts, it is very useful.”

This can mean, for example, reviewing extensive materials, preparing background work for reports or mapping out alternatives. As a result, work that previously took days or weeks can now be completed much more quickly. Even so, the effects of AI are difficult to measure precisely.

“If we are talking about measurable benefits, there still have not been very many reported at the national level,” Hotti says.

The estimates vary widely. “At times you see estimates of efficiency gains of as much as 70 to 90 per cent, but, for example, the Bank of Finland’s estimates have been far more modest, around one per cent at the GDP level.”

So AI’s significance is not yet showing up as major economic leaps. For now, its impact has more to do with the gradual transformation of expert work. Some routines are being automated, while at the same time the need for interpretation and evaluation is increasing.

Expertise is changing, responsibility is not

According to Hotti, AI is changing not only tools, but also the nature of expertise itself.

“I often say that an expert shines by borrowed light until their own lights come on.”

The idea is that expertise is built by engaging with existing knowledge, research, reports and discussions. Gradually, this gives rise to one’s own understanding. AI can help in that process, but it can also short-circuit it. The danger is that too much is handed over to AI.

“If you are simply shining with the help of AI without having personally engaged with the body of knowledge on which you are making a claim, that changes the nature of expertise quite radically,” Hotti says.

Security is no longer just a technical issue

AI is significantly reshaping the way we think about security. In the past, information security focused largely on preventing attacks by anticipating and identifying threats, building firewalls and keeping outsiders away from systems. According to Wisakanto, that is no longer enough.

“Instead of assuming that every error and misuse can be prevented, we need to move to a paradigm in which damage is contained on the assumption that something will go wrong.”

In practice, this means that systems must be designed to withstand errors and misuse. Perfect protection is not a realistic goal; what matters is how extensive the consequences can become. Security is no longer merely a technical issue. It is increasingly tied to how organisations build processes, authority structures and oversight.

In other words, systems must be designed so that the effects of an error or misuse cannot spiral out of control.

“AI is becoming, in part, a psychological and organisational problem,” Wisakanto says.

Attacks, too, no longer resemble purely technical breaches. More and more, they follow the logic of social manipulation, drawing on familiar tactics such as appeals to authority, a sense of urgency and persuasive language. When all of this is combined with AI’s ability to generate credible content, the boundary between technical and human risk begins to blur.

AI is also a question of power

According to Wisakanto, discussion about the risks of AI does not always focus on what matters most. Public debate tends to revolve around two extremes: AI as an almost limitless source of efficiency, or AI as a threat to all humanity.

Yet between these opposing views lies a broad terrain where AI’s real effects become visible: everyday work, organisational practices and the structures of decision-making. It is precisely in these places that both the greatest risks and the greatest opportunities of AI are found, and yet they are often discussed less.

“In Finland, the discussion is often very productivity-centred. We talk about efficiency, and too little about what is lost as AI advances and how power structures are changing.”

Wisakanto stresses that AI is not merely a tool for increasing efficiency. It is also changing how power is distributed within organisations and across society. Beyond the technology itself, the question is also who defines the terms of its use and who ultimately makes the decisions.

“AI is power. It affects who controls information, who defines the rules of operation and who decides the terms on which systems function.”

Digital sovereignty in the age of AI

The power of AI is not limited to individual decisions or organisations. It is also tied to the systems and platforms on which work increasingly depends. Hotti raises a perspective that has recently become much more central in public debate as well: digital sovereignty.

In practice, the issue is how dependent organisations and societies are on external technologies and services, and what happens if their operating logic, availability or terms change.

With AI, this becomes especially pronounced. “Many of the most widely used systems are built on global platforms whose direction of development cannot be directly shaped by an individual organisation or even a state. At the same time, decision-making, information processing and work processes are increasingly beginning to rely on these systems,” Hotti says.

Digital sovereignty is therefore bound up with questions of the extent to which an organisation retains the capacity to understand, control and, when necessary, change the systems it uses. This has become an increasingly central problem as technological development becomes ever more tightly entangled with geopolitical tensions and competition between states.

Organisations facing something new

According to Wisakanto, the organisations that succeed are not those that commit themselves to a single vision of where technology is heading. In a rapidly evolving environment, precise forecasting is often impossible, and focusing on it too much can even slow down the ability to respond. What matters more is being able to recognise alternative paths of development and adapt when change occurs.

“The organisations that do best are not the ones trying to predict where a given technology will end up or when, but the ones building the capacity to adapt quickly.”

But the ability to adapt does not emerge out of nowhere. It requires something that Wisakanto calls resilience. This does not mean only preparing for risks; it means having enough margin to withstand disruption while preserving the ability to function under uncertainty, learn quickly and change course when necessary. Resilience is above all an organisational quality. It is a way of building structures, culture and decision-making so that they can withstand change.

“Resilience is a basic prerequisite for an economy that is ready for the future.”

At the same time, organisations must clearly define responsibilities and roles. As AI takes part in work more and more often, it is not enough to know what is being done. It is also necessary to understand who makes decisions and on what basis. Without that, responsibility can quietly become blurred.

“We need to constantly define where the human decides, where the machine decides, and why.”

Now is the time to shape the outcome

Earlier in her career, Wisakanto studied at the University of Cambridge how automated systems affect people’s capacity for decision-making. One thing in particular worries her: technological change may affect how people experience their own role in work and society. If AI begins to guide work without people noticing, there is a risk that they will increasingly see themselves as users of systems rather than those who direct them.

“What worries me is that people’s sense of their own agency may weaken,” Wisakanto says.

The same idea also arises in discussions about expertise. According to Hotti, AI can support experts in their work, but it cannot replace the understanding on which expertise is built.

“Anything you present in your own name without source references is your own claim,” Hotti reminds us.

According to Wisakanto, we are living at a moment when it is still possible to influence the direction development will take. She sees it as a positive sign that problems linked to AI are now being taken seriously and openly discussed. Finland, in particular, has strong starting points, she says.

“Finland has a lot of opportunities here, especially because we are such a highly digitalised, high-trust society. We genuinely have the conditions to show direction even at the global level in terms of what kinds of solutions can be built.”

But that opportunity will not remain open indefinitely. Technological development is moving fast, and many solutions begin to solidify into practice before they have been properly assessed or questioned. What appears to be an experiment today may already be part of everyday life tomorrow. That is why Wisakanto believes it is important for organisations to act now.

What kind of society do we want to build?

Ultimately, this is about something larger than technology alone. Questions related to AI are also questions about direction, values and the kind of role we want to leave to human beings alongside these systems.

“The question is not how we make use of AI, but what kind of society we want to build – and whether it has the resilience to withstand what is coming.”

That is precisely why security depends on how we relate to uncertainty. Do we try to eliminate every risk, or do we learn to live and operate with them?

If there is one thing to remember about the use of AI, Wisakanto’s answer is clear:

“Treat your AI as an insider, and plan for a bounded failure.”

Security does not come from everything going right. It comes from preparing for the possibility that something will go wrong. “Instead of asking how we prevent this, ask what is the worst that could happen if it does happen – and whether that is acceptable.”

****

Text: Paula Minkkinen
Cover photo and Virpi’s photo: Henni Purtonen
Anna Katariina’s photo: Pentti Hokkanen/Flaming Star Oy

Anna Katariina Wisakanto is an AI safety researcher at the international think tank Center for AI Risk Management & Alignment. She is also a founding member and chair of the Finnish Center for Safe AI network. Her work lies at the intersection of technology and society, with a particular focus on how the risks of AI can be understood, measured and managed in ways that support decision-making.

Virpi Hotti is a leading specialist at the State Treasury, where she works on the use of data and analytics as well as the development of AI solutions. She is involved in advancing the use of AI in public administration, particularly as a support for expert work. Her work highlights the practical application of AI, the transformation of work, and questions related to expertise and responsibility.

Lue seuraava artikkeli

Milestones