
Bright Futures, Dark Edges: An Analysis of Techno-Optimism’s Incompatibility with The Developing World

  • Writer: Gabriella Fedetto
  • Jul 2
  • 21 min read

“We believe the techno-capital machine is not anti-human – in fact, it may be the most pro-human thing there is. It serves us. The techno-capital machine works for us. All the machines work for us.” ~ Marc Andreessen, Techno-Optimist Manifesto, 2023

 

Optimism in the digital age has become a proxy for recklessness. If we are optimistic about what will happen—if we simply anticipate good outcomes—we mistake naivety for hopefulness and ignorance for clairvoyance. The Silicon Valley mindset of “move fast and break things” has done exactly what it says on the tin. This approach to progress is well represented by Marc Andreessen’s late-2023 publication The Techno-Optimist Manifesto. By equating growth with progress, techno-optimists have created a system which, not through malice but through blind faith, threatens to swallow the world.

It is easy to look at technological advancement and limit our imaginations to what is happening in Silicon Valley. When we imagine the latest technology, we imagine it integrated into some already pristine first-world metropolis. We imagine pre-existing infrastructure. We presuppose access to a stable internet connection and Wi-Fi. We assume that basic needs have been met. The Techno-Optimist Manifesto celebrates technology as the thing that, at every turn, has solved every problem. 

It is difficult to imagine cutting-edge tech products and solutions being integrated into the capital cities of fragile democracies in Sub-Saharan Africa—or Starlink being financially accessible to communities in remote villages living off less than a dollar a day. It is ludicrous to imagine electric vehicles in places with inconsistent electricity. It seems absurd that we would send laptops to rural schools before we guarantee running water and working sanitation. It is important that we sit in this absurdity. Why do we struggle so much to imagine these places having the technology we take for granted in the developed world? The answer is simple. The technology is not designed for them. Technology is designed in wealthy nations, by wealthy people, for wealthy nations.

Under traditional capitalism, the interests of the wealthy (people and nations) have dehumanized people in the Global South. Those in the Global North have failed to imagine the implications of their actions on people who neither look nor live like them—who they only see through skewed media representations. This has always been a problem. It is why the wealth gap between rich and poor nations is so vast. It is why exploitation and xenophobia have persisted. It is why there are such dire structural asymmetries between the developed and developing world. I believe it is also why democracy and human rights causes have often faltered in the developing world.

Though rooted in the same belief systems and capitalist dogma, the propagation of Big Tech and other tech companies into the developing world is something new. I believe technology is not a reflection but a magnification of the interests and beliefs of its founders and owners—subconscious or otherwise. In this paper I will discuss how the unique qualities of technology, compared to the hierarchies and powers of traditional capitalism, amplify existing inequities and attitudes towards the developing world—and the new risks this presents as the global market continues to shift away from traditional capitalism. I will refer primarily to two texts: the aforementioned Techno-Optimist Manifesto and Yanis Varoufakis’ recent book Technofeudalism: What Killed Capitalism. The system of ideas in Andreessen’s manifesto will act as a surrogate for the mindset of most tech companies’ founders and leadership—who I believe to be acting mostly in good, though misguided, faith. Varoufakis’ book is a profound characterization of the transformation of capitalism under technology—though it is not specifically focused on the developing world.

This paper will focus largely on the African continent—often seen as the “next frontier” for tech companies as they aim to capture the attention of untapped populations. I was born in Zimbabwe and left during the financial and political instability of the mid-2000s. I grew up in Johannesburg, South Africa before coming to Stanford for college—Silicon Valley, what you could call “The Seat of the Empire”. My background should mostly explain my interest.

I have identified six key qualities, unique to Big Tech and to digital technology in general, that differentiate its impact on the Global South from that of traditional capitalism:

1.     It is designed to be unregulated.

2.     Where it appears the most democratic, it is the most oppressive.

3.     It trades in time and attention.

4.     It blurs the distinction between consumer and product.

5.     It brokers information.

6.     It is an experiment.

Following an analysis and breakdown of each of these points and their specific links to the tenets of techno-optimism, I will discuss the root of the issues surrounding these characteristics and propose a way forward—a path to genuine techno-optimism.

 

 

IT IS DESIGNED TO BE UNREGULATED.

            The tech industry was founded on supposedly libertarian ideologies, and many of the most prominent tech founders and executives have advertised these ideals. Mark Zuckerberg, for example, coined the motto “move fast and break things”, has clearly prioritized rapid innovation over caution and moderation, and has consistently centered free speech in his public statements. Marc Andreessen, the author of the Techno-Optimist Manifesto and co-founder of Andreessen Horowitz, the venture capital firm responsible for investments in Facebook, Airbnb, Instagram, Lyft, Robinhood, and numerous others, expresses radical libertarian beliefs in his manifesto: “We believe that freedom of speech is essential to a free and prosperous society, and that attempts to censor, suppress, or control speech are fundamentally at odds with human flourishing” (Andreessen, 2023).

            Prominent digital platforms are manifestations of this ideology. Digital platforms, especially social media platforms, have labeled themselves not as curators but as intermediaries: neutral virtual spaces upon which people can build social media presences, read the news, create and consume content, buy and sell, and communicate with one another. This removes responsibility from the company and moves the onus onto the users, allowing a reasonable-sounding argument for denying responsibility for a digital landscape that the companies ultimately control. The only responsibility they claim is an ethical duty to preserve free speech—that the purpose of their platform is to allow people to upload and engage largely without regulation.

Though the largest catastrophes resulting from a lack of regulation have involved Facebook, the problem is not limited to it, or even to Meta. Twitter, prior to its acquisition by Elon Musk, mishandled COVID-19 misinformation during the pandemic. Parler became the primary organizing platform for disgruntled Trump supporters, culminating in the January 6 U.S. Capitol riot. YouTube amplified extremist content like Alex Jones’ InfoWars conspiracy theories. And, most recently, Elon Musk has scaled back moderation efforts at X in the name of free speech.

            Of course, since the platforms’ inceptions, moderation has, often begrudgingly, been introduced due to pressure from government and activist groups. The failures of moderation on sites like Facebook are a result of some combination of a lack of resources, incompetence, or willful ignorance and carelessness. It is telling that the catastrophes caused by failures in moderation have been recorded primarily in the developing world.

            The release of Facebook’s own internal research by whistleblower Frances Haugen forced the company to admit that the platform had been used to incite violence and propagate hate speech against Rohingya Muslims, a minority group in Myanmar. Though Facebook eventually admitted that it lacked the ability to moderate in local languages, it first defended its moderation policies: it strove to cultivate free speech, and it did not want to interfere too much in politics. Again, the platform was a neutral actor whose goal was to “stay out of it”.

            When moderation is an afterthought, and moderation for non-European languages is an afterthought to an afterthought, it is those farthest from Silicon Valley who suffer the most from failures and refusals to regulate platforms. This is where the libertarian “move fast and break things” approach begins to wreak the most havoc. Knowingly deploying “disruptive” technologies into already fragile democracies is provably irresponsible.

 

WHERE IT APPEARS THE MOST DEMOCRATIC, IT IS THE MOST OPPRESSIVE.

            A lack of external regulation means those in power at tech companies establish themselves as the ultimate regulators—hidden behind the complexity and scale of their own projects. Despite the appearance of embracing Enlightenment values and citing “reason, progress, [and] freedom”, the freedom they describe is their own: the supreme concentration of power granted to them by their own platforms (LaFrance, 2024). Ultimately, they have the final say about who thrives on the app and who does not, who is banned or shadow-banned and who is boosted.

            The recommendation algorithm can be used as an excuse to shield companies from accountability. It is true that recommendation algorithms typically amplify what is trending or what they anticipate users will engage with, based on behavior. This raises the question: who decides which behaviors the recommendation algorithm rewards? Simply put, these recommendation algorithms are designed to keep you scrolling for as long as possible (Harari & Harris, 2019). The algorithm recommends content that perpetuates that behavior: content that keeps you engaged the longest and keeps you coming back. It does not matter how you engage or how you come back. If self-loathing keeps you on your phone more (it does), then self-loathing it shall encourage.
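The logic above can be sketched as a toy ranking function. This is a deliberately simplified illustration, not any platform's actual code: the `Post` fields, the weights, and the example numbers are all invented for the sketch. The point is the shape of the objective: rank purely by predicted time-on-platform, with no term anywhere for accuracy, health, or well-being.

```python
# Toy sketch of an engagement-maximizing ranker. Illustrative only:
# real platforms use large learned models, but the objective has the
# same shape, i.e. maximize predicted attention.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_watch_seconds: float  # model's guess at dwell time
    predicted_return_lift: float    # model's guess at "comes back tomorrow"

def engagement_score(post: Post) -> float:
    # Nothing here asks whether the content is accurate or healthy;
    # outrage and self-loathing score highly if they hold attention.
    return post.predicted_watch_seconds + 30.0 * post.predicted_return_lift

def rank_feed(candidates: list[Post]) -> list[Post]:
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm-news", 20.0, 0.01),
    Post("outrage-bait", 95.0, 0.08),
    Post("friend-update", 35.0, 0.03),
])
print([p.id for p in feed])  # → ['outrage-bait', 'friend-update', 'calm-news']
```

Notice that the inflammatory post wins without anyone having written "prefer inflammatory content": it wins simply because it is predicted to hold attention longest, which is the only thing the objective measures.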

            In an interview with WIRED in 2018, Tristan Harris (the founder of the Center for Humane Technology and a former design ethicist at Google) and Yuval Noah Harari (the prominent political philosopher and historian) discuss this phenomenon. They describe it as human neurobiology being “hacked” by recommendation algorithms for the benefit of the platform—working to maximize ad revenue and data collection.

            What is dangerous is that this is not obvious. Social media appears to simply curate your interests and show you only what you want to see—seemingly optimizing how you consume information related to the things you are passionate about. But it is not exactly about passion.

Similar to Myanmar, Facebook’s internal research showed that its failure to moderate content during the Tigray conflict in Ethiopia escalated the existing violence in the region. It propagated hate speech towards minority ethnic groups and misinformation about the conflict and served as a coordinating platform to mobilize individuals to participate in violence. Facebook admitted, only after Haugen leaked the data, that they did not have sufficient moderation capabilities for local Ethiopian languages like Tigrinya, Oromo, and Amharic. Without this moderation, the recommendation algorithm did what it does best: drove engagement. It is well known that platform users engage with negative and inflammatory content more and for longer than benign or positive content.

A recommendation algorithm prioritizing engagement—at its root designed to benefit data brokers, advertisers, and third parties—combined with the landscape of a region in crisis directly contributed to the deaths, displacements, and suffering resulting from ethnic violence in Tigray.

 

IT TRADES IN TIME AND ATTENTION.

Under traditional capitalism, people spend money. Now, people spend time and attention. Suddenly, even the poorest people on earth have currency to burn—if unknowingly. There are data brokers hungry for their information and platforms wanting their engagement.

Consider Zimbabwe. The average household monthly income was US$75 in 2021, dropping to US$57 in 2022 (ZimVAC, 2022). Countries like this have historically been a low priority for companies who deal only in money. But companies no longer need users to pay them to make money; they only need users to spend their time and attention on the platforms.

As of January 2024, 67.4% of the Zimbabwean population did not have access to the Internet (Kemp, 2024), while mobile phone penetration was reported to be 95.9% as of 2022 (Matarise, 2024). People have phones with cellular connections. Now the African continent, and much of the Global South, is the “new frontier” as the demand for digital services increases.

There is a reason why so few people in Zimbabwe are connected to the Internet, though, beyond lack of infrastructure and coverage. Data is expensive, given the monthly income of the median household in Zimbabwe.

Realizations such as this resulted in the birth of projects like Free Basics or Internet.org, which I believe to be largely well-intentioned but misguided, as many interventions in the developing world are.

Photograph: Manjunath Kiran/AFP/Getty Images

Internet.org saw Meta partner with cell service providers in developing nations to provide everyone on a given network with affordable access to a pared-down version of the Internet, facilitated through the app “Free Basics”. The initiative launched in 2013; by 2018, 100 million people were using Internet.org (TechCrunch, 2020).

It is difficult to believe that undertaking the Free Basics project was entirely noble. Currency in the attention economy is people’s time and attention. By allowing free access to the platform, Meta added tens of millions of users, who brought with them their time, attention, and data. Free Basics was an effective method to link the population of the developing world to data brokers—tapping into this demographic previously out-of-reach of data collection and Big Tech.

Net neutrality concerns were raised too. Having a single platform act as (and be presented as) the entire Internet is misleading to begin with, and it establishes a captive audience. This directly contradicts Andreessen’s manifesto: “Our enemy is speech control and thought control – the increasing use, in plain sight, of George Orwell’s 1984 as an instruction manual.” (Andreessen, 2023) When that platform is a social media platform, suddenly the Internet is only biased content; you do not have access to an independent search engine on which you can fact-check the things you have read. The Telecom Regulatory Authority of India (TRAI) launched an investigation into Free Basics, culminating in the service being banned in India (BBC, 2016). In 2017, the NGO Global Voices published the report “Free Basics in Real Life”, concluding that the service elevated Western, largely corporate content and did not link people to useful information as it had promised (Global Voices, 2017).

Free Basics—still operating in 63 countries in Africa, Asia, and Latin America—has widely been branded as a weapon of digital colonialism. The way Free Basics operates, the way it serves to control the flow of knowledge and information into communities in the Global South, and the way it promotes Western content and therefore Western ideals all make the comparison to settler colonialism obvious.

 

IT BLURS THE DISTINCTION BETWEEN CONSUMER AND PRODUCT.

The main thesis of Technofeudalism is that the nature of the digital economy has transformed global markets into something less like traditional capitalism and more like Medieval feudalism. Varoufakis characterizes all users whose online activity and data are collected and sold to third parties as the equivalent of unpaid serfs in the feudal period (Varoufakis, 2024). In long-exploited countries and communities, the more direct comparison is to indentured labor, slavery, and settler colonialism.

Laborers in the developing world not being compensated fairly for their labor is, unfortunately, a staple of globalization. It is a staple of capitalism—and even of “softer” versions of capitalism like social democracy—which is well known for outsourcing labor to the Global South. Under globalization, workers must survive amidst low wages and poor working conditions while being completely alienated from the products of their own labor, which are typically exported overseas or sold to the wealthiest class of the country. Under technofeudalism, they are alienated from their own digital footprint—their data and online activity. The user of any platform is now both consumer and product. They relinquish their data, largely unknowingly, in exchange for content which itself manipulates their desires and behaviors (Varoufakis, 2024).

There are undeniable parallels between data theft under digital colonialism and land theft under settler colonialism. Discussions of Data Commons, which serve to return data and control to their original owners, echo conversations about land expropriation and reform following the fall of Apartheid in South Africa and the first democratic elections in 1994. “To own our minds individually, we must own cloud capital collectively,” Varoufakis states. “The land must be shared among those who work it,” states the African National Congress’s Freedom Charter. (ANC, 1955)  

I acknowledge that data reform seems much more abstract than land reform. In checking “accept all cookies”, I have asked myself if it really mattered. It is easy to become complacent when we do not see the direct effects of how our data is used, or how much companies benefit from buying and selling it. It is important to remember that our data is a representation of our identity. The notion that we have “nothing to hide” is not relevant. Even using the data merely to manipulate our behaviors and direct our future actions on the platforms is a consequence of our data belonging to another entity. Data ownership sits at the intersection of safety, privacy, and our right to consent.

It is also difficult to imagine ourselves and our information as commodities. But if companies do, we should too. The profit margins of Facebook and Google should tell us how important our data really is. Of Meta’s $134 billion 2023 revenue, advertising made up $131 billion (Statista, 2023). For Google, advertising made up $237.86 billion of its $305.63 billion total revenue (Statista, 2023).

While the ability to sell our data to brokers would restore some of our autonomy over our information, it is important to note the discrepancies in the “value” of data, depending on your demographic (including your geographic location). Legal scholar Tim Wu writes briefly about this in his analysis of the legal blind spots in regulating the attention economy. Simply put, some data is worth more than others. (Wu, 2019) Average Revenue Per User (ARPU) by region showed that Facebook earns $68.44 per user per quarter in the United States and Canada, $23.14 in Europe, $5.52 in Asia and Oceania, and a meagre $4.50 per user in the rest of the world. (Meta, 2023) It is obvious that a huge component of this is affluence, which determines purchasing power.
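A quick back-of-the-envelope calculation on the figures cited above makes both points concrete: advertising, and therefore user data and attention, dominates these companies' revenue, and a user's "value" varies enormously by region. The numbers are taken as reported in the cited sources.

```python
# Ad revenue as a share of total revenue (2023 figures cited above)
meta_total, meta_ads = 134e9, 131e9
google_total, google_ads = 305.63e9, 237.86e9
print(f"Meta ad share:   {meta_ads / meta_total:.1%}")      # 97.8%
print(f"Google ad share: {google_ads / google_total:.1%}")  # 77.8%

# Meta's quarterly average revenue per user (ARPU) by region, in USD (Meta, 2023)
arpu = {
    "US & Canada": 68.44,
    "Europe": 23.14,
    "Asia & Oceania": 5.52,
    "Rest of world": 4.50,
}
ratio = arpu["US & Canada"] / arpu["Rest of world"]
print(f"US/Canada user vs rest-of-world user: {ratio:.1f}x")  # 15.2x
```

By these figures, a North American user generates roughly fifteen times the ad revenue of a user in the "rest of world" bucket. The gap measures advertisers' assumptions about purchasing power, not the volume or sensitivity of the data extracted.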

These values serve as yet another proxy for the inequities between the Global North and South. But just because someone’s data earns a company less ad revenue, is it fair to say that data is worth less—especially when data and identity are so closely linked? A user who spends 8 hours on their phone in Africa provides the same amount of time, data, and attention as someone who does so in North America. The same volume of data is still extracted and stored—its future value not guaranteed. There are equal amounts of digital labor without compensation, as Varoufakis would characterize it.

 

IT BROKERS INFORMATION.

            Further, the “value” of data is not rooted exclusively in its ability to drive ad revenue. Putting a dollar amount on any individual’s personal information discounts the potential consequences of the misuse of that information. This is especially prevalent in places where there are elements of one’s private life that can be weaponized.

The Zimbabwean government has a history of intimidation and state violence towards political opponents, often along ethnic or tribal lines, to maintain power. The 2008 election was “held against the backdrop of widespread killings, torture and assault of perceived opposition supporters.” (Amnesty International, 2008) Over 2,000 members of the opposition Movement for Democratic Change (MDC) were arrested, over 200 people were killed in politically motivated violence, and tens of thousands of Zimbabweans were displaced from their homes. This repression of opposition ensured Robert Mugabe’s continued dictatorial grasp on Zimbabwe—a country of which he had been prime minister and then president since its 1980 independence. Mugabe was only ousted in a coup in 2017, and his party ZANU-PF remains in power as of 2024.

Zimbabwe has enacted a variety of “phone-tapping bills”, beginning with the Postal and Telecommunications Bill of June 2000, which allowed the government to “eavesdrop on communication between individuals and companies”. (United Nations OCHA, 2000) I remember being on phone calls with my grandmother in Harare and hearing strange echoing and sometimes other voices during the phone call—which my mother said was a result of the call being monitored. Other than anecdotally, I have no concrete evidence of this.  

Zimbabwe is not alone in its undemocratic practices. The V-Dem Institute’s democracy index ranks Zimbabwe as the 36th most democratic country in Africa, of 56. In world rankings, it is ranked 126th out of 179. (V-Dem, 2024)

Government access to detailed citizen information—especially information relating to identity and political views—has caused, and will continue to cause, the suppression of political opposition, political instability, and ultimately violence. Access to the user data collected by Facebook, for example, could prove catastrophic. Detailed political profiles, used in the United States for targeted political ads and campaigning, would allow such a government to efficiently identify opponents—even those not outwardly supportive of the opposition.

Beyond this, Zimbabwe has a number of oppressive laws, including a potential 15-year prison sentence for homosexuality. Social media platforms have often been celebrated (rightfully so) as means of connecting closeted queer youth in intolerant environments. But if the Internet is the one place LGBTQ+ kids can be themselves, then it is online that they are most likely to be outed and punished if their activity gets into the wrong hands. Even if tech companies vow not to sell to governments, they sell to data brokers who sell to third parties. Data is bought, sold, and resold. And maintaining power and quashing resistance are powerful incentives for authoritarian governments to gain access to that data.

Another risk for vulnerable populations is “function creep”. “Function creep” refers to the unintended and often unpredictable change in what information is used for. This could be something as simple as cameras in an office being installed to ensure security, but eventually being used to investigate employees or track their attendance. A much broader data collection campaign in a humanitarian crisis can have widespread, dire consequences.

The United Nations High Commissioner for Refugees (UNHCR) and World Food Program (WFP) collect biometric data from refugees. These agencies initially use this data to ensure that aid is distributed fairly by preventing fraud. Humanitarian agencies in Bangladesh collected data from Rohingya Muslim refugees following the Rohingya genocide in Myanmar. This information was shared with Myanmar authorities, putting the already vulnerable Rohingya refugee population in even more danger, especially if they were extradited to Myanmar for any reason. (Thomas, 2018)

In the current crisis in Gaza, aid distribution has used a smart card system linked to biometric data. It has been alleged that these smart cards are a small component of a larger ecosystem of Israeli surveillance on the Palestinian people. (Institute for Middle East Understanding, 2021)

In humanitarian crises, the lack of consent over one’s data being collected and shared is amplified. Varoufakis describes data as the “rent” we pay to continue to participate in our social networks and access information (Varoufakis, 2024). Circumstances in which data must be collected to receive basic necessities like food or medical attention are the extreme version of that rent. Populations in the developing world and in crisis zones, who are at the highest risk and whose data is the most sensitive, are the groups forced to choose between the certainty of starvation and the risk of further persecution. The practice is coercive, especially given the susceptibility of these systems to function creep.

 

IT IS AN EXPERIMENT.

Because the development of digital technology has been limited to the last few decades, it is difficult for anyone, even the founders and CEOs of major platforms, to predict its effects. It is not only an experiment but, in many ways, a double-blind one. We are all learning as we go—and we have already seen some of the widespread, unintended consequences of data collection and recommendation algorithms, among other things.

For instance, Instagram has been shown to have detrimental effects on the mental health of teenage girls—once again according to Meta’s internal research. It “makes body image issues worse for 1 in 3 girls” (Haugen, 2023). It contributes to higher rates of anxiety and depression. Its simplest features—likes, comments—have become quantifiable proxies for self-worth.

Some consequences of the great tech experiment are not entirely unintended. And though no one fully understands the entire system, some actors obviously have more power and control over it than others—as seen in Facebook’s 2012 mood manipulation experiment. The study, titled “Experimental evidence of massive-scale emotional contagion through social networks,” manipulated the feeds of 689,000 users without their explicit consent (Meyer, 2014). Users were shown fewer positive or fewer negative posts, which made them more likely to create negative or positive posts, respectively. This is an example of a platform exerting its power and influence over its users to “see what happens”, causing intentional negative emotional outcomes in hundreds of thousands of people.

The techno-optimist states: “Our enemy is the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable – playing God with everyone else’s lives, with total insulation from the consequences.” (Andreessen, 2023) Yet these are precisely the sorts of activities—social engineering and experiments—being deployed on the rest of the world by “expert” engineers in Palo Alto. Comparisons have been drawn between technocrats and Italian futurists like F. T. Marinetti, whose maxim was “Creation, not contemplation”. Marinetti is even named in Andreessen’s manifesto as one of the “patron saints of techno-optimism”. Marinetti followed his Futurist manifesto with a Fascist one, co-authoring the 1919 Fascist Manifesto and celebrating Benito Mussolini.

Simply put, the wealth concentration and lack of diversity in Silicon Valley further separate those who engineer and deploy digital technologies, in their new ivory tower, from the rest of the world. The concentration of expertise in the United States further isolates lawmakers in developing economies, who are left unable to effectively manage the propagation of digital technology into their countries.

 

 

WHAT IS ACTUAL TECHNO-OPTIMISM?

            It is important to recognize that the problems with technology are not the fault of the technology itself. As Andreessen says, “technology works for us”—it shows us our reflection, though magnified many times over. So, the bad news is what the bad news always is: to save the world, people must become better. The technocrats and the elite especially—who show no sign of relinquishing any money or power any time soon. It is also naïve to invest control in centralized governments, which cannot or will not effectively regulate powerful digital technologies. The European Union’s landmark General Data Protection Regulation (GDPR) has proved this insufficiency: data protection authorities in EU member states are too underfunded and under-resourced to enforce the GDPR effectively, especially in the face of the money and power of Big Tech companies. Investigations are slow, and legislation quickly becomes obsolete as technology develops.

What this does prove, though, is that digital technology works, and works well, for those whose interests are programmed into it. That, I believe, is the good news.

            I believe there is a version of real techno-optimism that honors modern technology’s power and potential without accepting the present power structure that controls it.

            I came to Stanford because I strongly believed that technology was the way to uplift the developing world—to counteract inequity, to resolve the power imbalances caused by the legacy of colonialism. It is rapid, far-reaching, and extremely powerful—more influential than anything we have experienced in our lifetimes. I believe there is something about each of the uniquely dangerous characteristics discussed in this paper that also illustrates the potential of technology.

            It is designed to be unregulated, and so it is protected from oppressive state actors. Libertarianism is not necessarily incompatible with progress—nor with socialism or Marxism. Yanis Varoufakis identifies as a libertarian Marxist, which, by popular American definitions of libertarianism, sounds contradictory. However, true libertarianism focuses on self-management and autonomy and maintains a suspicion of state power. Particularly in fragile political landscapes, humanitarian crises, or oppressive dictatorships, prioritizing these aspects of libertarianism in our approach to technology could prove essential in liberating marginalized populations and protecting people from persecution. Unfortunately, the present digital landscape has traded state power for the monopolistic corporate power of Big Tech companies.

            Where it appears the most democratic, it is the most oppressive, but only because it favors the interests of the oppressive elite. This oppressive quality comes from recommendation algorithms trained to prioritize behaviors that addict people to their phones by driving engagement. Instead of encouraging behaviors that maximize ad revenue or longevity at the behest of data brokers and advertisers, it is plausible that we could prioritize behaviors that maximize positive real-world community action and accurate information consumption. This would be especially effective in mobilizing and informing populations, which over time may help stabilize fragile democracies and empower people through education.

            It trades in time and attention, which can connect people to resources and information at low cost. While I am skeptical of platforms that trade in attention and consume our time, attention itself is obviously not a bad thing—in fact, it is an inevitability. What matters is where that attention is directed. In many elections, for example, one of the hardest things to do is simply get the median voter’s attention. During the pandemic, vaccine skepticism and COVID misinformation were results of user attention being directed to the wrong places. Redirecting attention to accurate, verified information would be a powerful tool for reaching untapped communities.

            It blurs the distinction between consumer and product, yet it has the potential to empower people in the digital age. As with land reform, the key to reforming power in the developing world is the redistribution of resources. Just as the ANC strives to return land seized from Black South Africans during colonialism and Apartheid, returning data to its users, in the developed and developing world alike, empowers previously oppressed groups, even allowing financial rewards when people sell their own data.

            It brokers information, control over which could be returned to communities. The reclamation of data also represents a reclamation of identity and autonomy in a way that land reform does not. Varoufakis advocates for ownership of and bargaining power over one's own data, and for the establishment of Data Commons, where data is owned, collected, managed, and shared by the communities to which it belongs. Information can be weaponized by repressive regimes to target marginalized populations, as with the persecution of homosexuality in Zimbabwe. This shows the importance of marginalized populations being the sole arbiters of their own information and of having a system of consent for its propagation. The collection of data is not inherently bad: a community can use its own data to optimize its cities for efficiency and energy consumption, for example.

            It is an experiment. It is not set in stone, and nothing is truly guaranteed. With transparency and informed consent, it is possible that our data can be used to make social media platforms work for us. With a project as widespread, world-connecting, and scalable as the online world, we could use its power to improve everyone's lives, not just those of the wealthy elite. In many ways, it already has. It has saved LGBTQ+ youth in oppressive environments by connecting them to one another. It has connected people to unprecedented libraries of tools and information. Because of the vast reach of digital technology, I believe there has never been a greater chance of magnifying the best things about people, rather than the worst.

            Marc Andreessen’s Techno-Optimist Manifesto presents many ideas that I believe do not align with reality. Given its recent publication and Andreessen Horowitz’s portfolio of investments, it is hard not to perceive many of its statements as hypocritical and delusional. Optimism can coexist with caution and care, and I do not disagree with all of his ideas; I still hold some, often fleeting, optimism about the potential of technology. In his list of “enemies,” he names monopolies, cartels, centralized power, social engineering, authoritarianism, corruption, and regulatory capture. In my imagination, these are the enemies of a true techno-optimism, one that departs from free-market capitalist ideals. The central failure of his piece is that he never identifies himself and his investments among his own enemies.


Reference List:

1.     African National Congress. (1955). The Freedom Charter. https://www.anc1912.org.za/the-freedom-charter-2/

2.     Amnesty International. (2008). Violence and coercion mark Zimbabwe’s election. https://www.amnesty.org/en/latest/news/2008/06/violence-and-coercion-mark-zimbabwe039s-election-20080627/

3.     Andreessen Horowitz. (2024). A16Z Portfolio: Builders We’ve Backed. https://a16z.com/portfolio/

4.     Andreessen, M. (2011, August 20). Why Software Is Eating The World. Andreessen Horowitz. https://a16z.com/why-software-is-eating-the-world/

5.     Andreessen, M. (2023, October 16). The Techno-Optimist Manifesto. Andreessen Horowitz. https://a16z.com/the-techno-optimist-manifesto/

7.     BBC News. (2016). India blocks Zuckerberg’s free net app. https://www.bbc.com/news/technology-35522899

9.     Government of Zimbabwe. (2022). Zimbabwe Vulnerability Assessment Committee Report 2022. https://fnc.org.zw/wp-content/uploads/2023/04/Midlands-2022-ZimVAC-Rural-Livelihoods-Assessment-Report.pdf

10.  Global Voices. (2017). Free Basics in real life: Six case studies on Facebook’s Internet “on ramp” initiative from Africa, Asia and Latin America. Amsterdam: Global Voices Foundation.

11.  Haugen, F. (2023). The Power of One: Blowing the Whistle on Facebook. Little, Brown and Company.

12.  Institute for Middle East Understanding. (2021). Fact Sheet: Israeli Surveillance & Restrictions on Palestinian Movement. https://imeu.org/article/fact-sheet-israeli-surveillance-restrictions-on-palestinian-movement

13.  LaFrance, A. (2024, January 30). The Rise of Techno-Authoritarianism. The Atlantic. https://www.theatlantic.com/magazine/archive/2024/03/facebook-meta-silicon-valley-politics/677168/

15.  Meyer, M. (2014). Everything You Need to Know About Facebook’s Controversial Emotion Experiment. WIRED. https://www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-manipulative-experiment/

16.  The New Humanitarian (at the time part of the United Nations Office for the Coordination of Humanitarian Affairs). (2000). New phone-tapping bill. https://www.thenewhumanitarian.org/news/2000/06/01/new-phone-tapping-bill

17.  Thomas, E. (2018). Tagged, tracked, and endangered: how the Rohingya got caught in the UN’s risky biometric database. WIRED. https://www.wired.com/story/united-nations-refugees-biometric-database-rohingya-myanmar-bangladesh/

18.  Varoufakis, Y. (2024). Technofeudalism: What Killed Capitalism. Melville House.

19.  WIRED. (2018). How Humans Get Hacked: Yuval Noah Harari & Tristan Harris Talk with WIRED. https://www.wired.com/video/watch/yuval-harari-tristan-harris-humans-get-hacked

20.  Wu, T. (2019). Blind Spot: The Attention Economy and the Law. 82 Antitrust L.J. 771. https://scholarship.law.columbia.edu/faculty_scholarship/2029
