- The Role of AI in Controlling Information
- Does Political Propaganda Exist in Search Engines Too?
- How Politics Is Manipulating What Content You See
- Corporate Donations Are Shady and One-Sided
- The Responsibility of Truth Lies With You
AI's relationship with Big Tech companies, and its influence on political narratives through social media and search platforms such as Facebook and Google, has drawn intense attention, particularly in the run-up to the US presidential election. The dramatic sequence of events, from the assassination attempt on Donald Trump and Joe Biden's withdrawal from the race to Kamala Harris' candidacy, has made it hard to separate truth from the lies spreading across the media space, causing widespread confusion. The interplay of AI, search algorithms, and political propaganda now raises greater concerns than ever about censorship, bias, and the reliability of the information released to the public.
The Role of AI in Controlling Information
To start with, social media companies such as Meta (Facebook) increasingly rely on AI to moderate content, including identifying false information and extremist propaganda. Facebook claims that a significant share of terrorist content is detected and removed by its AI algorithms. These claims are frequently met with skepticism, however: critics point out that the platform is still packed with extremist content and that the algorithms struggle to understand complex contexts, causing them either to over-censor or to miss harmful material entirely.
The problem is made worse by the fact that AI systems can be tricked by minute modifications to text or images designed to evade detection. AI can help with content moderation, but it cannot fully replace human oversight, especially around one of the most important political events of 2024: the US presidential election.
In the first episode of Season 1 of the Mind University podcast, "AI: Friend or Foe," we discuss the dark side of AI and how it is being used to amplify human-created propaganda and persuade people to adopt a specific point of view. Read the full "AI: Friend or Foe" article here.
Does Political Propaganda Exist in Search Engines Too?
Public perception is significantly shaped by the algorithms used by search engines like Google. For example, a search for Donald Trump reportedly returned mostly information about Kamala Harris, while a search for Kamala Harris returned no news about Donald Trump at all, a clear sign of bias in action.
Even Elon Musk has commented on this striking discovery from his X account.
These political biases stem from the subjectivity of search algorithms and their AI-assisted ability to sway political narratives by favoring some information over other information. They can distort people's perceptions of political figures and events and, in turn, influence voters.
The monetization strategies of platforms such as Facebook and Google contribute to the spread of misinformation and the "clickbait" phenomenon. These businesses have come under fire for intentionally supporting disinformation campaigns by using ad systems that put engagement above accuracy. Extreme and false information therefore frequently gains greater exposure and has the potential to distort political realities by influencing public discourse.
How Politics Is Manipulating What Content You See
When platforms yield to political pressure, AI-driven content moderation can also turn into censorship. Facebook's recent admission that it removed an iconic "photo of the year" image of President Trump shows how difficult it is to strike a balance between free speech and the removal of unfavorable content. The image was "mistakenly" classified as offensive by the company's AI systems, which underscores the risks of relying on such platforms to find trending news.
AI has worsened the spread of misinformation during the current election cycle by fueling a condition known as the "liar's dividend," in which the abundance of fakes lets political actors dismiss genuine material as false and spread their own narratives more easily, swaying public opinion. This makes it harder for the general public to distinguish fact from fiction, especially when the facts are damaging to the opposing side, the Democrats in this case.
Corporate Donations Are Shady and One-Sided
It is impossible to ignore Big Tech companies' financial influence in politics. Historically, political action committees (PACs) and employees connected to large tech companies have supported Democratic candidates. For example, Google donors gave Joe Biden's campaign about $1.7 million. Democrats also received contributions tied to other Big Tech companies: $749,410 from Amazon-affiliated donors, $576,988 from Facebook Inc.-affiliated donors, $537,630 from Apple Inc. donors, and $848,667 from Microsoft Corp. donors. Surprisingly, in the 2020 election cycle, none of these companies' donors ranked among the top donors to Donald Trump's campaign.
Given this record of financial support, these companies are effectively shaping political narratives and outcomes. Although the companies themselves are not allowed to give money directly to campaigns, their employees' combined contributions have a significant impact on political dynamics. And because that support flows to Democratic candidates, it fuels perceptions of bias in how these platforms use AI to handle advertising and content.
The Responsibility of Truth Lies With You
As important elections draw near, the reliability of AI and of the algorithms used by companies like Facebook and Google remains a controversial topic. These systems have the potential to improve content moderation and combat misinformation, but their current limitations and biases raise serious issues. The interplay of AI, search engine algorithms, and political propaganda calls for a critical look at how information is selected and propagated.
We must all stay alert, use fact-checking resources, and carefully evaluate the information we come across online. The integrity of election results depends on the public's ability to access factual information rather than biased and heavily manipulated content, which can affect our lives in ways far more serious than we expect.