Digital Trust Does Not Exist.

But Is Everyone a Liar?

Society

/

Technology

/

Deceit

Writers: Petrina Harper & Ross A. Mcintyre

/

Art Direction: Micah Parker
As we navigate the complexities of the future, trust will become more than a value—it will be the foundation upon which all successful and sustainable technologies are built. In a world where change is the only constant, trust will be the anchor that keeps us grounded and connected. It will also be the currency of personal and professional success. As technology evolves at an ever-increasing pace, we humans must adapt to maintain integrity. By examining the evolution of online deception, what to anticipate in the future, and the human impact, we can build a broader understanding of the past, present, and future of a trend we can never entirely ignore.

In the future, the most valuable currency will be trust. Once it's lost, it's very hard to restore. – Satya Nadella, CEO of Microsoft

For those of us working in technology, the need to be security-savvy has always been critically important – and it perhaps comes a little easier than for some of our non-tech friends and family. From ISO certifications to mandatory security training, it’s part of our jobs to keep data safe and recognize attempts to steal it. But the ever-increasing sophistication of scams, combined with the influx of AI technologies, means that everyone – even the savviest – needs to be even more vigilant.

Over time, scams have evolved, and some older ones are now well-known, perhaps even laughable – think of your “boss” asking for iTunes vouchers via a random email address or phone number that isn’t theirs, or the troubled Nigerian prince reaching out, promising vast riches in exchange for help with his financial situation. Sadly, these schemes can still be convincing, and modern iterations continue to dupe new and unsuspecting targets. It can be useful to share recent scams that almost fooled you in social situations, just to raise awareness. It is often said that “safety is an illusion; danger is the reality” – and it does pay to be alert for bad actors.

“Safety is an illusion; danger is the reality”

Cons have undeniably become more sophisticated over the past few years, and many couple technological exploitation with human psychology. Spear-phishing is an evolution of phishing that tailors seemingly legitimate messages to individual targets – couple this with generative AI and you have a system that can dynamically respond in a manner appropriate to the situation and the person being imitated. Business Email Compromise (BEC) features attackers posing as company executives and initiating fraudulent wire transfers – the FBI has reported billions of dollars lost to BEC scams.

The prevalence of data breaches is a large contributing factor. High-profile breaches at Equifax, Marriott, and Facebook reveal that even well-established companies struggle to protect user data. Another factor is the lack of transparency and control around personal data – companies might share or sell user data without explicit consent (or the consent is buried in a 15-page End User License Agreement). Finally, inadequate company security practices explain why some of these scams make it to your company laptop. When users experience such incidents, their trust in digital experiences diminishes.

High-profile breaches at Equifax, Marriott, and Facebook reveal that even well-established companies struggle to protect data.

The Age of Manipulation

For good or ill (and I have trouble seeing the good), we are firmly inside the borders of an Age of Manipulation, Misinformation, or Disinformation – maybe all three. The ascendance of deepfake technology – “synthetic media that has been manipulated” – that uses AI to create convincing fake audio and video clips poses a new threat to digital trust. Such technology could allow greater transmission of misinformation, more extensive fraud, and a media environment in which it is difficult to distinguish between what is real and what is fabricated.

As digital technologies evolve, so do methods for exploiting them. Digital trust violations – AI-assisted spear-phishing, Business Email Compromise, deepfake-enabled fraud, and identity theft among them – are apt to increase and grow more advanced and effective as cybercriminals become more sophisticated.

To mitigate these extant and emerging threats, cybersecurity practices, user education, and regulatory and ethical frameworks are essential. Systems capable of dynamically responding to threats could more effectively safeguard data and maintain digital trust.

Such technology could allow greater transmission of misinformation, more extensive fraud, and a media environment in which it is difficult to distinguish between what is real and what is fabricated.

A Sucker Born Every Minute

Are you among the 68% of people who believe they can identify scams? A recent report from the Global Anti-Scam Alliance (GASA) found a whopping 25.5% of people surveyed from around the globe lost money to hustles or identity theft across 12 months.

A few you may have seen in the past year include:

  • Worldwide, trusted celebrity figures have been used as unwitting front people for ad swindles or to let people know they’ve "won" phony competitions.
  • Facebook/Instagram ad posts with emotive calls-to-action (e.g., “We have to close our business down!”). Often, if you dig a little deeper, you will find they are not legitimate, and on-page comments from other users frequently call out the fraud.
  • Text messages from the “courier,” the “bank,” or the “tax department” featuring near-perfect fake URLs. These deserve extra scrutiny, as they often pair urgent wording with off-brand URLs.

Consider these few recommendations:

  • A call to action with a limited time to respond is the number one way people get into trouble – scammers exploit that sense of urgency to get individuals to act without thinking.
  • Carefully check email addresses both before and after the @ sign, to confirm that they do not have any red flags such as random numbers or weird domains.
  • Do your friends, family, and colleagues usually sign off their text messages with their names? Unless it is your Luddite relative who consistently does it, it might pay to be suspicious - especially if it is from a new number.
  • Never click in-message links; instead, use search engines to find the right URLs for investigation.
  • Take a moment to check out social media profiles, groups, or vendors and look for anything that might be out of the ordinary and a red flag – often scammers only have a few likes and followers, and the profiles all link up to each other if you go digging.
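The sender and URL checks recommended above can be sketched as simple heuristics. This is a minimal, illustrative sketch – the suspicious-TLD list, regex patterns, and function names are assumptions for demonstration, not a real detection tool:

```python
import re
from urllib.parse import urlparse

# Illustrative list of top-level domains often seen in abuse reports (an assumption).
SUSPICIOUS_TLDS = {"zip", "top", "xyz"}

def sender_red_flags(address: str) -> list[str]:
    """Flag common red flags in an email sender address."""
    flags = []
    local, _, domain = address.partition("@")
    if re.search(r"\d{3,}", local):
        flags.append("long digit run before the @ sign")
    tld = domain.split(".")[-1].lower()
    if tld in SUSPICIOUS_TLDS:
        flags.append(f"unusual top-level domain: .{tld}")
    if re.search(r"\d", domain):
        flags.append("digits in the domain (possible look-alike, e.g. 'paypa1')")
    return flags

def url_red_flags(url: str, expected_domain: str) -> list[str]:
    """Compare a link's host against the brand domain you expect."""
    flags = []
    host = urlparse(url).hostname or ""
    if host != expected_domain and not host.endswith("." + expected_domain):
        flags.append(f"host {host!r} does not match expected {expected_domain!r}")
    if "-" in host and expected_domain.split(".")[0] in host:
        flags.append("brand name embedded in a look-alike hyphenated host")
    return flags
```

For example, `sender_red_flags("boss8472@paypa1-support.xyz")` raises several flags, while a link to `www.paypal.com` checked against `paypal.com` raises none. Real scams are far more varied, so heuristics like these complement – never replace – the habit of pausing before you click.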

The Allure of Cynicism

What about trust concerns that are not so grave – perhaps people you know who are just trying to get likes and reposts in order to build a following? The misrepresentation of self on social media has a few different angles: people who build a façade that paints their life in strictly rosy tones, and those who leverage deepfakes for revenge porn or something similar. Not the same type of person, but it is productive to look at two concepts that can be driven by untruthfulness to negative effect.

Let us look at the psychological effects misrepresentation on social media can have. We all know that person – the one who presents a self on social media that is radically misaligned with their true situation. At first, it is astonishing to see the dichotomy, but one quickly realizes that the deception (often viewed as potentially harmless) can be the result of psychological issues or mental illness. We would not want to take everything at face value if we want to protect that person IRL. Many youths experience inadequacy and low self-esteem due to the discrepancy between their real self and their online persona.

Researchers Hui-Tzu Grace Chou and Nicholas Edge found in a 2012 study that many people perceive others as happier and more successful than they are, based on what they see on social media. So this has been going on almost as long as social media has been around – we cannot expect it to stop anytime soon. While some teens are sanguine about the negative impact AI may have on them, educators have a different perspective: 69% predict that AI will have a negative impact over the next decade, and 24% expect it to be “very negative.” Can advancing technology make things worse? Sorry to be cynical, but of course it can. We cannot ever assume that what we see on social media is objectively accurate.

All of it makes cynicism extremely attractive – and potentially well-reasoned.

Now, let us look at deepfakes: any image, video, audio, or text that purports to represent an individual but is created as a knowing misrepresentation using a combination of artificial intelligence and machine learning. It is a problem for teens, a problem for free and fair elections, a problem for Taylor Swift. Deepfakes are a distinct threat not because of the technology used to create them, but because of human psychology. People have a natural tendency to believe what they see. As such, deepfakes do not need to be perfect, or even believable, to be effective in promulgating mis/disinformation. We have already seen deepfakes in the ramp-up to the 2024 U.S. election. Given the impact the technology is likely to have, it is difficult to see where legitimate applications exist, or how any benefits could outweigh the potential damage.

All of it makes cynicism extremely attractive – and potentially well-reasoned.

What’s On the Other Side?

So, how can we wade through the various forms of online deception without becoming round-the-clock cynics? Should we even try to resist? For the people you care about, education is most important – warn those who are unfamiliar with digital chicanery about some of the more contemporary schemes, and go over the various methods scammers and bad actors use.

In the future, we can expect digital trickery to advance rapidly. Unfortunately, there will always be a portion of society that is of ill intent, and it is reasonable to expect they will utilize the scale and quality of AI tools to reach new targets with different approaches. The more people that can be targeted, the more likely scammers are to find that one person who will fall for it – just make sure that person is not you, or someone you care about. As we head into some potentially tumultuous elections this year across the world, we can expect to see both traditional scammers and bad-actor governments utilizing the latest in tech to spread false information and influence opinion.

Perhaps the only sane response is to adopt frameworks like Zero Trust Architecture (ZTA). It is a framework adopted by IT (Information Technology) security groups that amounts to “never trust, always verify.” In its original manifestation, it is oriented primarily towards authenticating users, assuming nothing about identity even if the user comes from a connected, permissioned network. Applied to more abstract, informational, or conceptual pursuits, it requires checking the veracity of anything important you consume – especially if you intend to promulgate that content. Believing something false without sharing it outwardly at least contains the potential damage (except to the individual). The “never trust, always verify” mantra also applies to information offered by generative AI tools such as ChatGPT. Such systems are known to “hallucinate” – to offer content that is not true or accurate. As these tools gain traction, be sure to verify valuable information.
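The “never trust, always verify” principle can be illustrated with a toy authorization check that deliberately ignores network origin. The names and in-memory allow-lists here are hypothetical stand-ins; a real ZTA deployment relies on an identity provider, a device-posture service, and a policy engine:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str            # credential presented with every request
    device_id: str             # device making the request
    from_internal_network: bool

# Hypothetical stand-ins for an identity provider and a device-health check.
VALID_TOKENS = {"tok-alice"}
HEALTHY_DEVICES = {"laptop-42"}

def authorize(req: Request) -> bool:
    """Verify identity and device posture on every request.

    Note that from_internal_network is deliberately ignored: under
    Zero Trust, coming from the corporate network grants nothing.
    """
    return req.user_token in VALID_TOKENS and req.device_id in HEALTHY_DEVICES
```

The design point is the ignored field: a request with a stolen or invalid token is denied even when it originates inside the permissioned network, which is exactly the assumption perimeter-based security gets wrong.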

While digital technologies offer immense benefits and conveniences, they also introduce risks that must be actively managed. Building digital trust requires more than just advanced technology; it also demands robust legal frameworks, transparent practices, and a commitment to ethical standards. If vulnerabilities exist and are exploited by scammers, complete digital trust remains out of reach.

In the future (as well as the here and now), learn to pause, don’t take things at face value, and do a little research.

As for this article, do you trust that I wrote it myself? Or is it something produced by generative AI and a skillful prompt? We'll let you be the judge.

Who (or what) do you think wrote this article?