Artificial intelligence shakes up the media sector: Scholar and entrepreneur Peter A. Bruck talks about measures that media houses and journalists can take to keep abreast of the latest developments.
Media companies need to invest in AI technology and use it for cost reduction and productivity advantages
DW Akademie: We have been living with ChatGPT and generative AI for one year now. Can we already see concrete effects on the media?
Peter A. Bruck: The phenomenon of computerized text generation in media is not new. It is called robot journalism. For more than ten years it has been used for standardized texts, that is, for routine information with no particular news value: examples include weather reports or sports results in lower soccer leagues.
What is new now is that we are seeing much more complex texts created with large language models and generative AI. These are texts that respond to instruction and interaction. Until a year ago, we had not seen this interactive dimension at the consumer level. In other words, we are at the beginning of a new era of content and text generation on a mass scale.
This is driven by a dynamic tech industry that is offering automated or algorithmic content at no or very little cost and on a very broad scale. Generative AI increases the productivity of machine writing and especially the range of rapid news production at minimal expense and free of spelling or math errors.
This is having a significant impact on the business models of established media. ChatGPT also has a major impact on the competencies expected of media professionals, who will have to learn how to use and master these new tools for those texts which require human insight, judgment, and creativity.
How is generative AI changing media business models?
The fundamental questions of media economics need to be discussed in an entirely new way, especially when it comes to identifying how journalistic media can create and economically sustain a public sphere based on democratic discourse, i.e. facilitate citizen participation, exchange and social decision-making.
Media companies need to invest in adapting to generative pre-trained transformer (GPT) technology and use it for cost reduction and productivity advantages. But there need to be new public revenues for those players that fulfill the journalistic functions of reasoned reporting, accountable judgment, and editorial opinions, something generative AI cannot produce.
In addition, the regulation of generative AI providers needs to be addressed. Such companies use journalistic media output to feed data into their algorithmic models and to run their business. They should be required to pay for their use of this content or be taxed so that revenues flow back to journalistic media houses so that they can continue to provide the public with up-to-date information. Maintaining functioning journalistic media systems is a task for society as a whole.
What would new business models for media look like?
Journalistic media need to invent new forms of hybrid financing, for example. AI is accelerating the process of moving away from journalistic services predominantly financed by advertising revenue and reader payments.
A good example of alternative revenue models is the media company Rappler in the Philippines. Rappler is driven by uncompromising journalism and enriched by communities of action. Founded by Nobel Peace Prize laureate Maria Ressa, the media company supports journalism by providing technological and brand services to client organizations, including larger corporations.
In another example, new models for journalistic start-ups can be based on selling skills and insights in the form of services that manage the complexity of creating and distributing key content via digital channels for clients. Future journalistic media companies must generate revenue via multiple services and in several markets.
We also need to look closely at how public service broadcasting can be further developed and, above all, how cooperation and mutual complementarity can be achieved. As a society, we need trustworthy institutions that create factuality. The costs for producing this must be shared so that public discourse can take place on a common factual basis. Factuality is a high-quality journalistic product. The provision of a common ground of facts is essential for any democratic society.
How will the relationship between journalists and audiences change as a result of AI?
In democratically constituted societies, we attach particular importance to the authorization of media content by authentic and competent authors, i.e. by someone who is professionally qualified as a journalist and adheres to professional rules to which she or he is ethically bound.
We are now entering a phase where a text is no longer tied to an author. This relationship is dissolving. The traditional links of individual people or a community of speakers, and the knowledge and values they share are disappearing.
We therefore need to look closely at how citizens as users interact with artificially generated content. After all, a communication act always aims to shape the relationship between sender and receiver. It is about informing, but also about changing attitudes, arousing emotions, or prompting action. In other words, there is intentionality and accountability. These are essential features of human communication.
It is important to be able to distinguish in the future: who is my counterpart and who is speaking? If a dialog takes place via a digital agent instead of a real person or author, we need to know that. Communication between actual people must be protected.
How can we trust the reality generated by AI at all?
Trust depends on the extent to which media content can be checked for human authenticity. Here it is very important that watermarking, the technical stamping of content, takes place at the very moment of artificial content generation and not afterwards.
Labeling must take place at the moment of generation; metadata must be created and remain associated with the generated content. These metadata must provide information about the machine generation and manipulation that has taken place. This must be strictly enforced.
What options does generative AI offer in societies with limited freedom of the press or other deficits in providing people with reliable information?
Large language models have different use cases. One example is automatic translation from one language into another. With instantly available translation, people gain access to entirely new knowledge spaces. If this is transparent and comprehensible for individual users, then AI can empower people and has great educational and even emancipatory potential.
At the same time, it is important that the diversity of languages on the Internet continues to increase. In general, it can be observed that new technologies initially have a dissolving effect on authoritarian state structures because these structures usually react slowly. This means that there is a certain phase in which people can use new technologies for self-determination, to participate more, and to improve their own situation.
But this advantage will not last forever and state authorities will also make use of AI. We should then expect an increase in surveillance, and above all the falsification of historical knowledge. It is therefore vitally important to empower people by promoting an understanding of the plurality of technological possibilities of making themselves heard. We must keep fighting for freedom of expression.
Peter A. Bruck is Chairperson of the Board of Directors of World Summit Award and President of the International Center for New Media in Salzburg.
Interview: Julius Endert