AI Is Watching, But Are We Watching Back?

Every day, we use social media, search engines, and smart apps without thinking too much. We scroll, click, share, and accept terms and conditions without reading them. What many people don’t know is that behind these systems are powerful algorithms deciding what we see, what we miss, and what we believe. These decisions are not always fair or neutral. In fact, they are often shaped by business interests, political goals, or hidden agendas. This silent control is growing stronger, and we must start asking serious questions before it’s too late.

One major fear is that freedom of speech is quietly being replaced by algorithmic filtering. You may think you’re seeing a wide range of opinions online—but you’re often being shown only what fits your profile. The systems learn what you like, what you believe, and what keeps you engaged. Then they show you more of that, creating a bubble around you. Over time, you stop hearing other views. You may not even know that your opinions are being shaped by software. This is not free thinking—it’s programmed thinking.
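To make that feedback loop concrete, here is a small, purely illustrative sketch in Python. The items, topics, and scoring rule are invented for the example, and no real platform works exactly like this, but the basic loop is the same: what you click gets reinforced, and the reinforced topics crowd out everything else.

```python
from collections import defaultdict

# Illustrative sketch only: the items, topics, and scoring rule are invented.
# Real recommender systems are far more complex, but the feedback loop is similar.

items = [
    {"id": 1, "topic": "politics_a"},
    {"id": 2, "topic": "politics_b"},
    {"id": 3, "topic": "sports"},
]

profile = defaultdict(int)  # topic -> number of past clicks

def rank_feed(items, profile):
    # Items from topics the user already engaged with float to the top;
    # unfamiliar viewpoints sink, even if they might be valuable.
    return sorted(items, key=lambda item: profile[item["topic"]], reverse=True)

def simulate_click(item, profile):
    # Every click reinforces the profile, so the next ranking is even narrower.
    profile[item["topic"]] += 1

# A single click on one kind of story is enough to push similar stories up next time.
simulate_click(items[0], profile)
print([item["id"] for item in rank_feed(items, profile)])  # the clicked topic comes first
```

Run this toy loop a few more times and the feed narrows quickly; that narrowing, at a much larger scale, is the bubble.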

Another fear is that many people have no idea how their data is being used. Most online services collect huge amounts of personal information—location, behavior, contacts, health habits, even private messages. This data is stored, analyzed, and used in ways that are often unclear. Sometimes, the results feel almost magical—but in a disturbing way. My wife has often said something that many people experience these days: “I was just thinking about this product… I never searched it, but it suddenly appeared on my screen.” She usually says it with surprise, and I smile. She thinks I don’t believe her. But I do, and I try to explain: this isn’t magic, and they’re not reading your mind. But they are reading your behavior, very, very well. And the truth is, this happens because algorithms are predicting what we want based on our behavior, location, device connections, and even the people around us.
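For the curious, here is a tiny, hypothetical sketch of how such a prediction could be made. The signals, weights, and threshold are invented for illustration and are not taken from any real advertising system, but they show how ordinary behavioral data can add up to a confident guess without anyone reading your mind.

```python
import math

# Illustrative sketch only: the signals, weights, and threshold below are invented
# for this example and are not taken from any real advertising system.

def predict_interest(signals, weights, threshold=0.5):
    """Combine behavioural signals into a single 0..1 interest score."""
    score = sum(weights[name] * value for name, value in signals.items())
    probability = 1 / (1 + math.exp(-score))  # squash the weighted sum
    return probability, probability > threshold

# No search needed: lingering on related posts, having been near the shop,
# and contacts who viewed the product can already add up to a confident guess.
signals = {
    "seconds_on_related_posts": 0.8,   # normalised dwell time on similar content
    "visited_store_location": 1.0,     # phone was recently near a relevant shop
    "contacts_viewed_product": 0.6,    # share of close contacts who viewed it
}
weights = {
    "seconds_on_related_posts": 1.2,
    "visited_store_location": 0.9,
    "contacts_viewed_product": 1.5,
}

probability, show_ad = predict_interest(signals, weights)
print(f"predicted interest: {probability:.2f}, show the ad: {show_ad}")
```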

What makes this more worrying is that very few people or companies control these powerful systems. Decisions that affect millions—or even entire countries—are sometimes made behind closed doors by small groups. These decisions are not voted on. There is no public debate. And there is very little accountability. In this way, we are moving toward a society where digital power is held by the few, and the many have no say. This is not how democracy should work.

During crises, such as pandemics or elections, this power becomes even more dangerous. Information spreads fast, and it’s hard to tell what is true. Algorithms push emotional content, even if it’s false, because it gets more clicks. And when the system rewards anger or fear, society becomes more divided. This creates an environment where truth loses value, and trust in institutions breaks down.

Some say that we should stop worrying and let innovation grow. But this thinking can be risky. We need innovation, yes, but we also need rules to protect people. In Europe, new laws such as the AI Act are being made to guide how AI can be used safely. These laws are not perfect, but they are a step in the right direction. They focus on human rights, transparency, and accountability. Some people complain that rules slow down research. But what’s the point of fast progress if it leads to harm or loss of freedom?

We also need to talk about what’s coming next. Many people are not aware of how fast AI is entering their lives: in hiring, banking, health care, education, and even law enforcement. AI can help, but it can also discriminate, exclude, or make errors that hurt real people. If we let AI systems make decisions without proper checks, we risk building a society where humans serve machines, not the other way around.

It is often said that we must “educate people”, but this is not so easy. Many are already overwhelmed by work, stress, or simply don’t have the time or energy to keep up with AI developments. Telling people to “learn AI” is not enough. Instead, we must start small and meet people where they are. For adults, public media such as TV and radio, along with local community programs, can play a big role. Short, clear messages about how AI affects jobs, data, or daily life can open minds. Workplaces can offer short training during working hours, not to turn people into AI experts, but to help them ask the right questions.

For teenagers and young adults, schools and universities must go beyond teaching how to use technology; they must teach how to question it. Critical thinking, ethics, digital responsibility, and awareness of algorithmic bias should become part of the classroom. AI is shaping their future, so they must learn how to shape AI. This is not just the job of teachers; it requires new materials, support, and collaboration with researchers and developers who understand both the risks and the possibilities.

And with children, the work starts even earlier. Even young kids today use voice assistants, tablets, and AI-driven games. Through stories, cartoons, and simple examples, they can be shown the difference between human decisions and machine decisions. This is not about creating fear; it’s about planting seeds of responsibility. If the next generation grows up asking “Why is this happening online?” or “Who made this decision?”, then we are already on the right path.

We are at a turning point. The digital world is expanding fast, and we must choose which path to take. One path leads to a fair, open society where technology supports freedom. The other leads to hidden control, silent manipulation, and a loss of public power. If we want AI to reflect values like freedom, equality, and trust, we need to make sure those values are at the heart of how it is developed and used. That means investing in the right direction: supporting ethical innovation, funding meaningful research, and helping both local and international talent shape the future of AI.

Here in the EU and the Nordic region, we are not just well positioned to lead; we have a responsibility to lead. With our strong focus on human rights, digital trust, and democratic values, we can set a global example for how AI should serve society. This is a moment to act boldly, to support those building transparent and inclusive technologies, and to turn vision into impact. The opportunity is real, but only if we recognize it in time.

At the GPT Lab, we are thinking ahead. Our team is exploring how AI is shaping our world, and what we can do to make it more fair, open, and safe. We want to share what we learn, not just with experts, but with companies, communities, and anyone who’s interested. We believe that by working together, we can ask better questions, find better answers, and build AI that truly serves people. Our team is ready to help find these solutions and make sure no one is left behind.

Author

Muhammad Waseem

Vice-Head of GPT-Lab Tampere & Postdoctoral Research Fellow
