In 2008, the writer Nicholas Carr published in The Atlantic an essay with the provocative title Is Google Making Us Stupid? What the Internet Is Doing to Our Brains. Carr’s thesis, later developed in the book The Shallows: What the Internet Is Doing to Our Brains, is that the internet has a harmful effect on cognitive functions, reducing our capacity for concentration and for deep reflection on content. According to the author, the web has been a “godsend,” allowing research that once took days to be completed in minutes, but this comes at a price. In his own words: “I have an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry... my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do... the Net seems to be chipping away my capacity for concentration and contemplation.”

Carr’s concerns had already been voiced by the literary critic Sven Birkerts in The Gutenberg Elegies (1994), where he argued that the new “electronic culture” was eroding reading skills. Along the same lines, the developmental psychologist Maryanne Wolf, in Proust and the Squid: The Story and Science of the Reading Brain (2007), warned that the volume, immediacy, and speed with which information is delivered on the web harm children’s cognitive development, reducing attention span and exploratory thinking.

The critical view of the psychological effects of new knowledge technologies has now shifted from reading skills to interpersonal relations within work teams. An article published last September in the Harvard Business Review by Kate Niederhoffer, a social psychologist at the University of Texas at Austin, shows the negative impact that inadequate use of generative AI (GenAI) can have on team productivity and relational dynamics. The study revealed that 95% of companies that adopted GenAI technologies see no return on their investment. The reason is that GenAI tools are being used to produce, with little effort, work that merely appears to be of quality, thereby overloading colleagues.

The phenomenon was termed workslop and defined as “AI-generated content that masquerades as good work but lacks substance and contributes little to the task,” transferring the workload to colleagues downstream. The task is outsourced first to the technology and then to another person, who must clarify objectives, correct mistakes, decode content, and often redo the work, generating tension in personal relationships and in the team environment. More than 15% of respondents reported having suffered the effects of workslop. The phenomenon is most frequent among peers (40%) but also occurs, in both directions, between managers and their teams.

Beyond productivity losses, workslop carries emotional and relational costs, because it affects motivation and how colleagues perceive one another. Many recipients feel confused or irritated by the work they receive and come to judge its senders as less creative, less capable, less trustworthy, and even less intelligent. Almost one third say they are unwilling to collaborate again with the person who sent them the work. It is likely that GenAI, when misused, encourages simplification, procrastination, and disengagement from tasks, while discouraging collaborative relationships.

Several studies have confirmed the negative effects of misusing AI in teamwork. An experiment by Sonia Shaikh and Ignacio Cruz (AI in Human Teams: Effects on Technology Use, Members’ Interactions, and Creative Performance under Time Scarcity, 2022) showed that teams working under time pressure rely more heavily on intelligent assistance technologies, which results in poorer performance on creative tasks and fewer face-to-face interactions.

Another investigation, conducted by Yinuo Qin’s team at Columbia University (Perception of an AI Teammate in an Embodied Control Task Affects Team Performance, Reflected in Human Teammates’ Behaviors and Physiological Responses, 2025), revealed that teams in which people collaborate with AI agents achieve worse results than all-human teams, especially as task complexity increases. In this case, the introduction of AI disrupts team dynamics and reduces communication and commitment to the task, with adverse physiological and behavioral effects.

Another experimental study, by Fabrizio Dell’Acqua’s team at Harvard Business School (Super Mario Meets AI: Experimental Effects of Automation and Skills on Team Performance and Coordination, 2025), concluded that introducing AI agents into a team reduces coordination, trust, and individual effort, and harms performance, especially in less-skilled teams.

Some research reaches more optimistic conclusions, such as that by Alan Dennis’s team at Indiana University (AI Agents as Team Members: Effects on Satisfaction, Conflict, Trustworthiness, and Willingness to Work With, 2023). In this case, the data suggest that AI agents are well accepted in teams and can contribute to perceptions of integrity and to the reduction of conflict, although job satisfaction is not as high as in all-human teams.

There is, therefore, consistent evidence that integrating GenAI into work teams can create problems of cooperation and coordination; reduce trust, individual effort, and satisfaction with group processes; lower intrinsic motivation; and foster boredom and disengagement. Teams highly dependent on GenAI are less creative, exercise less critical thinking, and offer less emotional well-being.

Mitigating these effects, so as to harness GenAI’s potential as a collaborative, value-creating tool rather than letting it become a way of “rushing work and passing it on to others,” is a new challenge for team leaders. To reduce the negative effects of GenAI, it is important to clearly define objectives, roles within the team, and norms for the use of AI agents; to adapt the tools to the needs of the work and provide user training; and to treat AI outputs as starting points or suggestions, subject to human creativity, group debate, critical evaluation, and responsible decision-making. GenAI provides content, accelerates processes, recognizes patterns, simulates scenarios, and supports communication, but it is up to people to create, validate, decide, and take responsibility for the results.

Luís Caeiro, Professor at CATÓLICA-LISBON