Technology gives more of us opportunities to create things.
No longer does an individual need to possess all of the expertise of a craftsperson to create a quality product. This is particularly evident with digital products, which can be produced and reproduced at an absurd scale. For instance, I’m putting these words next to digital drawings I created and adding the end product (this post) to the interweb, where another human being may find and engage with it. Several million other people are doing the same (but with different words next to probably different pictures), which means that someone poking around the interwebs is very unlikely to stumble across any given piece of content. This creates a need for something to tame the information overload and make decisions easier (we are, after all, pretty picky about how we spend our attention and where we choose to burn through brain energies by thinking really hard).
We create algorithms that help us make those choices and alleviate some of that brain strain. Instead of weighing every aspect one might consider when choosing among jams to purchase, a shopper could simply trust an algorithm to make the best choice for them.
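The jam idea can be sketched in a few lines. Everything here is made up for illustration – the attributes, the weights, and the scoring rule are hypothetical, not any real recommender:

```python
# A minimal sketch of "let the algorithm pick the jam": instead of a
# shopper weighing every attribute, a scoring function makes the choice.
# The jams, ratings, and weights below are all invented for this example.
jams = [
    {"name": "strawberry", "price": 4.0, "rating": 4.2},
    {"name": "fig",        "price": 6.5, "rating": 4.8},
    {"name": "marmalade",  "price": 3.0, "rating": 3.9},
]

def score(jam):
    # Hypothetical rule: reward high ratings, lightly penalize high prices.
    return jam["rating"] - 0.25 * jam["price"]

best = max(jams, key=score)
print(best["name"])  # → strawberry
```

Notice that the shopper never sees the weights. Whoever sets that `0.25` has quietly decided how much price matters relative to quality.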
Once a company releases an algorithm that churns through user data looking for patterns (which jams get the most clicks?), it has created a set of rules by which content is filtered and sorted. Anyone who can play within those rules can figure out how to get the highest score (without necessarily playing fair). This means companies that want their products to perform well may purchase favor with an algorithm (e.g. the ads that pop up during searches) or employ other means to influence the outcome (e.g. click farms and botnets).
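That “get the highest score without playing fair” move is easy to see in a toy model. Suppose a hypothetical platform ranks products purely by click count (an assumption for this sketch, not a description of any real system):

```python
# Sketch of gaming a click-based ranking. The platform's only rule is
# "most clicks wins," so anyone who can manufacture clicks wins.
clicks = {"strawberry": 120, "fig": 95, "marmalade": 40}

def ranking(clicks):
    # Sort product names by click count, highest first.
    return sorted(clicks, key=clicks.get, reverse=True)

print(ranking(clicks))  # honest ranking: strawberry, fig, marmalade

# A click farm inflates one product's count...
clicks["marmalade"] += 200
print(ranking(clicks))  # ...and the ranking obediently flips
```

The algorithm isn’t broken here; it’s doing exactly what its rules say. That’s the point: the rules define the game, and the game can be played.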
As long as success is measured by revenue (ads tend to generate revenue while users tend not to, except in subscription cases), the metrics will not account for intangible costs such as unfairness and unintended consequences.
When we address the problem of too much content on the internet with machine-learning algorithms that filter content and provide recommendations, we end up with content that looks and feels homogeneous (same-ish), because the rules the algorithm applies tend to filter out anything that doesn’t look and feel like the most popular stuff.
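The homogenizing effect can be sketched too. In this toy feed (again, invented posts and an invented rule, not any real platform’s algorithm), the filter only surfaces items whose style matches the current most-popular item:

```python
# Sketch of homogenization: a filter that keeps only posts matching the
# style of whatever is most popular right now. All data is invented.
posts = [
    {"title": "cat meme",    "style": "meme",  "clicks": 900},
    {"title": "dog meme",    "style": "meme",  "clicks": 850},
    {"title": "long essay",  "style": "essay", "clicks": 120},
    {"title": "niche comic", "style": "comic", "clicks": 60},
]

# Find the style of the single most-clicked post...
top_style = max(posts, key=lambda p: p["clicks"])["style"]

# ...and filter the feed down to posts that share it.
feed = [p["title"] for p in posts if p["style"] == top_style]
print(feed)  # → ['cat meme', 'dog meme']
```

The essay and the comic never had a chance: the rule doesn’t judge their quality at all, only their resemblance to what’s already winning.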
So, in a way, we’ve come full circle. Technology enables more of us to create different stuff, which is essentially an amalgam of all of the stuff we’ve personally encountered – I engaged with a lot of XKCD (https://xkcd.com) and The Oatmeal (https://theoatmeal.com). More technology is created to help sift through all of the stuff we’re creating. That technology faces all sorts of hurdles: success metrics that aren’t necessarily aligned with ethics, incentives to concentrate users in a few content areas to generate ad revenue, and blind spots about what the algorithms are actually doing. We then end up with content that might all look and feel the same, along with other unintended consequences.