In the wake of The Facebook Files and Frances Haugen’s exposure of the company’s internal failings, the word “algorithm” has become inseparable from terms like “hate speech,” “fake news,” and “conspiracy theories.” People talk about “the algorithm” as if it were some all-knowing, all-powerful figure: the God-like presence that controls each platform, about which they have minimal knowledge and over which they have minimal control. They are just victims.

How did we get here?

To truly understand why social algorithms have had the impact they have on our society, it’s important to start at the beginning.

Too Much Content, Not Enough Space

All social media platforms face the same two big questions: How do we attract new users? And how do we keep them coming back?

The second question is paramount, because it doesn’t matter how many new users you attract if a large percentage of them don’t stay on the platform. For many platforms, this meant showing users things that interested them and made them feel connected to their friends digitally: status updates, photos, comments, ‘Likes,’ and other engagement-oriented media. And back in the late 2000s, all of this was organized in a chronological feed. You saw what was happening most recently at the top of your feed, and moved backward in time as you scrolled down.

This was back when the wants and needs of users were at the heart of the product.

I do believe social media platforms, in their early days, started off with good intentions. These platforms wanted to connect friends and family. They wanted to give everyone a voice (and later, a megaphone). The problem (and it happens over and over again, like clockwork) arises when the wants and needs of users fall second to the wants and needs of the platform, the founders, the executive team, and, most importantly, the investors.

From the mid-2000s through the decade that followed, these platforms added more and more and more functionality, not necessarily because that was in the best interests of users, but because the platform needed to constantly prove it could re-engage users’ attention. Suddenly, you didn’t just see what your friends were posting, you saw what their friends (who weren’t your friends) were posting and ‘Liking.’ And then you saw what brand pages, celebrities, and influencers were posting (because they tricked you into Liking their pages). And then you saw recommendations of people you might want to friend or follow, even if you had never met, seen, or heard of them before.

All of these features crammed so much content, so much noise, into each user’s feed that suddenly the user didn’t know what to pay attention to, and the platform didn’t know what the user wanted.

And along came social algorithms.

The Rise of Algorithmic News Feeds

Social media algorithms were the solution to a problem called “too much content.” As the News Feed became busier and busier, the platforms decided to build an algorithm to choose, out of all the possible options, what users should and shouldn’t see. It was, in theory, a good solution to the problem.

But why was the user’s timeline full of garbage in the first place?

The unfortunate answer is: these products were no longer designed for the wants and needs of the user. Instead, features were added (and added, and added) with one central goal, one guiding North Star: to bring users back to the platform and keep them engaged. And in the short term, it worked. Until it became a problem, and users had so much content to consume that they wondered where their friends and family had gone. To which these social media platforms quickly took the moral high ground and said, “We hear you. So we created an algorithm to show you the things you care most about.”

They created the problem, and then created the solution.

Weaponizing Algorithms

Since the social media industry’s move from chronological to algorithmic News Feeds, we have learned the consequences that come with algorithmic decision making. Algorithms can be gamed. Algorithms can be misused and abused, to the point where, if enough people ‘Like’ something, you will see it, whether you like it or not. This is how content spirals out of control (and hate speech spreads, and elections come into question).

For example:

Mark Tonkelowitz, the Facebook Engineering Manager who spoke of the benefits of the algorithmic News Feed in 2011, had it right: the most interesting stories will always make their way to the top.

But we now know: this isn’t always a good thing.

It’s not the most “interesting” stories that make their way to the top of your News Feed (the word “interesting” implying “valuable”), but the most emotional. The most divisive. The ones with the most Likes, Comments, and Shares, and the ones most likely to spark debate, conflict, and anger. Either that, or the content a brand was willing to spend the most money sponsoring. All of which reveals a disconcerting conclusion: as a user of these platforms, forced to see what the algorithm and brands want you to see, you have no rights. Each and every day, the privacy of your “personal News Feed” is being invaded. And the abuse has gone on for so long that users of social media have learned to love it, or at least live with it: the modern-day Stockholm Syndrome.
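
To make that contrast concrete, here is a minimal sketch of the two feed models discussed above, written in Python. The Post fields, the weights, and the scoring function are illustrative assumptions, not any platform’s actual ranking formula.

```python
# Illustrative sketch only: field names and weights are assumptions,
# not a real platform's ranking system.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    timestamp: float        # seconds since epoch
    likes: int
    comments: int
    shares: int
    sponsored_spend: float  # ad dollars behind the post; 0 for organic content

def chronological_feed(posts: List[Post]) -> List[Post]:
    # The late-2000s model: newest first, nothing else considered.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_ranked_feed(posts: List[Post]) -> List[Post]:
    # The algorithmic model: posts that attract the most reactions
    # (or the most ad spend) rise to the top, regardless of when they
    # were posted or how much you care about the author.
    def score(p: Post) -> float:
        return p.likes + 2 * p.comments + 3 * p.shares + p.sponsored_spend
    return sorted(posts, key=score, reverse=True)
```

The same set of posts produces two very different feeds: the first is ordered by time alone; the second by whatever provokes the most engagement or carries the most ad spend.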

We see that as a big problem.

HalloApp’s Chronological Feed

My co-founder, Michael Donohue, and I started HalloApp in 2019.

We were concerned about the ways social media had changed, and how the world was changing as a result. What had originally been created as a tool for people to connect, keep in touch, and even build new relationships, had become the equivalent of a digital mall. You weren’t there to see what your friends and family were up to, or have meaningful conversations. You were there to be tricked, triggered, and distracted long enough to sell another dollar of advertising revenue—your emotions manipulated at scale, and your News Feed filled with content you didn’t consent to seeing.

And since algorithmic News Feeds have become the status quo for all major social networks, what’s the alternative?

We started working on HalloApp because we believed there was tremendous value in giving the power back to the users, and creating an environment where real relationships are protected by privacy.

Our mission is to build a product where tools are left to the discretion of each individual user—not addictive engagement loops disguised as “value adds.”

All of these things, together, add up to a completely new and different definition of what a “personal News Feed” should look like: one that is chronological and designed exclusively for the user’s best interests, protecting the digital relationships you have with your real friends and family members, in private.

We call HalloApp the first real-relationship network because there is no algorithm stuck between you and your friends and family.