  • rajasankar

The challenge with open sourcing Twitter’s decision-making algorithm


Can Twitter’s decision-making algorithm, which decides what tweets to show, really be open sourced?


Many AI and other technical experts maintain that it cannot, because the algorithm depends on too many parameters. To get to the root of their argument, let’s first consider how the algorithm works on a technical level.


If I follow 100 Twitter accounts, and each account posts 5 tweets a day, then I would only have 500 tweets in my feed. At such a low number, the tweets could just be displayed chronologically and it would be pretty manageable for me to read through them all.


Imagine, though, that I follow 5,000 accounts. Even if each of these accounts only posts 10 tweets per day, my feed would still fill up with 50,000 tweets — simply too much content for me to parse through on my own. This is why recommendation systems have become so important to our information economy today.
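To make the contrast concrete, here is a toy sketch (the accounts, tweets, and timestamps are invented for illustration): with a small feed, a plain reverse-chronological sort is all the "algorithm" you need.

```python
from datetime import datetime, timedelta

# A toy feed: each tweet is (author, text, timestamp).
# All data here is invented for illustration.
now = datetime(2023, 4, 1, 12, 0)
feed = [
    ("alice", "Good morning!", now - timedelta(hours=3)),
    ("bob", "Shipping a new feature today.", now - timedelta(hours=1)),
    ("carol", "Lunch break.", now - timedelta(hours=2)),
]

# With a few hundred tweets, newest-first order is a perfectly usable feed.
chronological = sorted(feed, key=lambda t: t[2], reverse=True)
for author, text, ts in chronological:
    print(ts.strftime("%H:%M"), author, "-", text)
```

At 50,000 tweets a day, though, "newest first" stops being useful, and some notion of relevance has to take over.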


For example: when I search for something on Google, I usually wind up with hundreds of thousands (if not millions) of results. For so many results to be useful, they have to be ranked according to relevance. Otherwise, I would either spend all day trying to locate the results I find relevant, or I would give up before I even started.


Google’s ranking algorithm works because the entire internet essentially provides the necessary parameters. Each website that is able to be found on Google is public, and can be compared to all other public websites based on these relevance criteria (which may include presence of keywords searched, number of visitors, and more).
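As a rough illustration of that idea, public pages can be scored on signals like keyword matches and traffic. To be clear, the signals, weights, and pages below are invented; Google's real formula is far more complex and, as noted next, secret.

```python
# Toy ranking over public pages. Signals and weights are invented for
# illustration only; this is not Google's actual algorithm.
pages = [
    {"url": "a.com", "text": "cheap flights to tokyo", "monthly_visitors": 90_000},
    {"url": "b.com", "text": "tokyo travel guide and flights", "monthly_visitors": 500_000},
    {"url": "c.com", "text": "gardening tips", "monthly_visitors": 2_000_000},
]

def relevance(page, query_terms):
    keyword_hits = sum(term in page["text"] for term in query_terms)
    # Popularity acts as a tie-breaker, scaled down so keyword matches dominate.
    return keyword_hits + page["monthly_visitors"] / 10_000_000

query = ["flights", "tokyo"]
ranked = sorted(pages, key=lambda p: relevance(p, query), reverse=True)
print([p["url"] for p in ranked])
```

Note that every input here is public: anyone can read the page text and estimate its traffic, which is what makes this style of ranking possible at all.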


Though the theory of ranking is well-known, Google has resisted pressure to reveal its specific ranking algorithm, saying that specifying the parameters would allow website creators to game the system.


How does this tie back to our Twitter scenario?


In the same way that Google’s search results are ranked according to relevance, my feed of 50,000 tweets must also be ranked according to what I would most likely find relevant.


Twitter is functionally still very different from Google, though.


  • Users on Twitter are served the latest tweets in a newsfeed from accounts they follow. On Google, users must perform a keyword search to find the latest content matching their queries.

  • Tweets are essentially static pieces of content: they cannot be edited, only deleted or hidden. This differs from Google’s source of content, websites, which are constantly being updated.

  • Every tweet is a data point within Twitter’s private database, whereas the world of websites that Google draws upon for search results exists outside of Google. Because of this, Google must crawl the internet every day to update the content it returns when users perform a search.


That last point is actually what makes this discussion about open sourcing Twitter’s algorithm so interesting.


There is currently no algorithm or method available to rank, or recommend, private text data without human intervention.


Take, for example, organizations that need to search through private databases of documents. Today, they have two options: turn toward an open source tool like Solr or ElasticSearch, which is free but limited in functionality, or opt for custom AI/ML models, which are expensive, require training, and do not explain why they rank or display results the way they do.


Even with their limitations, these search tools still work because they determine relevance through one key parameter: the keywords that are searched. At the most basic level, a search result can be sorted as “relevant” or “not relevant” based on whether it contains the user’s keywords.
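A minimal version of that keyword test looks like the sketch below (the documents are invented; real tools like Solr and ElasticSearch layer more refined scoring, such as BM25, on top of this same core idea):

```python
# Minimal keyword-based relevance: a document is "relevant" if it contains
# the searched keywords, and documents with more matches rank higher.
# Documents are invented for illustration.
documents = [
    "Quarterly budget report for the finance team",
    "Meeting notes from the product design review",
    "Finance policy update: travel budget limits",
]

def matches(doc, keywords):
    text = doc.lower()
    return sum(kw.lower() in text for kw in keywords)

keywords = ["budget", "finance"]
relevant = sorted((d for d in documents if matches(d, keywords) > 0),
                  key=lambda d: matches(d, keywords), reverse=True)
print(relevant)
```

The meeting notes never appear in the results, because the user's keywords give the system an explicit signal of what counts as relevant.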


The same principle applies to any non-text content that can be recommended, too. Relevance for products on Amazon, photos on Instagram, and videos on TikTok is determined through keyword search (where a search function is available) or the user’s history of interacting with (liking, watching, rating, purchasing, sharing…) similar content.
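For content that can be labeled with a clear idea, that interaction-history approach reduces to simple overlap counting. A sketch (items, tags, and history are all invented for illustration):

```python
# Toy interaction-based recommender: score each candidate item by how many
# of its tags overlap with things the user has already engaged with.
# All items and tags are invented for illustration.
user_history = {"gaming keyboards", "mechanical switches", "cat videos"}

candidates = {
    "RGB keyboard unboxing": {"gaming keyboards", "rgb"},
    "Sourdough starter guide": {"baking"},
    "Kitten compilation #42": {"cat videos", "pets"},
}

def score(tags):
    # Count shared tags between the item and the user's history.
    return len(tags & user_history)

recommended = sorted(candidates, key=lambda item: score(candidates[item]),
                     reverse=True)
print(recommended)
```

The catch, as the next paragraphs argue, is that this only works when each item can honestly be boiled down to tags like "cat videos" in the first place.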


Written content is much more challenging, though. AI/ML may guess how to rank a tweet or a Facebook post, but it will run into some challenges. For one, it is not always as easy to boil a text post down to a single idea, like “gaming keyboards” or “cat videos,” that a user may have interacted with in the past.


Even if the post can be reduced to a single idea, however, that idea may not be explicitly found as a keyword in the text post. Most people do not start a post by writing, “This post is about ______.”


Likewise — assuming a post can be reduced to a single idea — that idea may be an abstract one, like an ideology.


In short, truly accurate recommendations for text content today require a human to provide input.


In the absence of a simple ranking method for tweets, it remains a mystery which parameter should weigh most heavily when determining relevance. Should it be the author’s number of followers? Their number of tweets? Their frequency of tweeting? The number of impressions or retweets the tweet has received?
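Any ranking built from those parameters ends up looking like a weighted sum, and the weights are the mystery. Here is a sketch in that spirit; the parameters are the ones listed above, but the weights are entirely invented and nothing here reflects Twitter's actual code:

```python
# A hypothetical tweet-relevance score as a weighted sum of the candidate
# parameters discussed above. The weights are entirely invented: that
# arbitrariness is exactly the problem being described.
WEIGHTS = {
    "author_followers": 0.000001,
    "retweets": 0.5,
    "impressions": 0.0001,
}

def tweet_score(tweet):
    return sum(WEIGHTS[key] * tweet[key] for key in WEIGHTS)

tweets = [
    {"id": 1, "author_followers": 2_000_000, "retweets": 3, "impressions": 40_000},
    {"id": 2, "author_followers": 800, "retweets": 900, "impressions": 15_000},
]

ranked = sorted(tweets, key=tweet_score, reverse=True)
print([t["id"] for t in ranked])
```

Change the weights and the ranking flips, which is the point: without a principled way to choose them, any particular choice looks arbitrary.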


We just don’t know yet.


The fact that these parameters seem arbitrary is one of the reasons some people do not believe that Twitter’s decision-making algorithm can be open sourced. Others point out that it would be easy to warp the algorithm to favor certain types of content over others. Still others theorize that there is actually no algorithm at all, but rather simple code that makes random decisions about which tweets to recommend.


Unfortunately, the discussion about whether the algorithm can be open sourced tends to focus solely on why it should or should not be. Much less attention is paid to how the algorithm actually works, even though understanding the "how" is vital to answering the "why."


Social media platforms must acknowledge the problem caused by not having a standardized and logical way to recommend text content. Not only does it reduce the value of a user’s experience on the platform — as they are not able to find or engage with the content that would be truly relevant to them — but it also puts the companies into an awkward position when the topic of open sourcing algorithms comes up.


Only once the platforms admit this problem can the discussion on how to fix it start.
