Ideals aren’t much good if there’s no one to hold you to them. That’s what I hope to do here: to give you a snapshot of the founding ideals and principles behind doFlo. Openness is vital to accountability.

doFlo is an automation company. It’s reasonable for you to wonder what we’re trying to accomplish and what our vision of the future looks like. That’s what this post aims to do: to give you a sense of who we are and what we want. In order to engage thoughtfully and constructively, we have to make a clear statement about what we believe in. 

Fear of AI 

Traditional fears of AI have focused on killer robots from the future or a malevolent machine intelligence. Those fears are understandable but misleading. The idea that an AI might decide that humanity “threatens its existence” runs counter to what we understand about machine learning.

AI is not alive. It has neither the instincts of an animal nor the executive functioning of a complex one. At most, a machine is a collection of reflexes. Nothing more. It’s more comparable to a species of mushroom than to a thinking being.

This is not to say our world is not about to change drastically and irreversibly because of our inventions. Even a fungus can be dangerous, particularly if humans weaponize it. The impact of AI on our daily lives will be profound, in ways seen and unseen.

AI-generated text can and will be used to mislead. AI-generated sound and imagery will be used to lie and deceive. They already are. Who hasn’t begun to ask themselves, as they browse the news or social media, how much of the text they see comes from an algorithm designed to trick them, and how much of the imagery they’re seeing is real?

The Most Dangerous Animal

That sense of uncertainty and unreality will only grow. Yet the dangers we face are really human in nature, even if ChatGPT and DALL-E have given them form. The real dangers arise from what human beings do with the technology we conceive and from the fact that there will always be those who will be willing to hurt others and themselves in order to achieve a political or economic goal. 

The tech industry has failed at every stage to assess the capacity for our inventions to have unintended consequences. That is not because the technology is dangerous. It is because people are.

That is an old story today. Futurism has been an obsession in the developed world for centuries. Science fiction has changed the course of history, too. Jules Verne’s From the Earth to the Moon inspired NASA’s Apollo program. Star Wars inspired, well, Star Wars. Sci-fi has prompted governments to invest in new technologies. It has also taught generations of children to be anti-racist, to use the scientific method, and to open their minds to new ways of thinking and being. In short: it gives people hope. Science fiction is even the inspiration for this company, which I’ll get to shortly.

The Future is Now

But that’s the pleasant side of it. There is an unpleasant side as well. Dystopian visions of the future abound. Worlds ruled by human-killing robots gone rogue (with Austrian accents!). Worlds in which our technology has destroyed us through war or overpopulation, or techno-totalitarianism. Though these stories ostensibly depict the future, more importantly, they hold up a mirror to our present reality. 

Obviously, H.G. Wells was not warning the people of 1895 about subterranean Morlocks that would eat them if the British class system continued. The Time Machine is rather an evocation of the very real fears and anxieties of 1895, a world struggling to reconcile the human costs of industrialization and colonialism. It was, in short, a book about its own time, even though it’s set 800,000 years in the future.

Likewise, The Terminator is not about the dangers of time-traveling robots. It’s about the anxieties of 1984, when millions of people in the developed world were being introduced to computerization, with all its attendant alienation and anxiety. It was a time in which films like WarGames imagined computerized warfare. The cost of automation and convenience was front and center in the collective consciousness.

The widespread modern concern over large language model (LLM) AI systems is reasonable, though crucially not because machines are taking over. As in 1895 or 1984, the anxieties of today stem from our present challenges: mounting ecological emergency, spiraling wealth inequality, resource depletion, and consequent global political instability are facts of our everyday lives. The wars for food and water resources have already begun. We have already seen how propaganda and disinformation can turn people against each other.

An Age of Anxiety

We are living through a new age of anxiety. It is typified by fear of the consequences of globalism and technology innovation. That anxiety also prompts a renewed focus on the self, mental well-being, and the role of the individual in society.

Our mounting problems arrive alongside a flood of information and communication that gives us a growing awareness of our shared challenges. But that awareness is not being met with the progress we desperately need to avoid catastrophe. Taken together, it’s easier to see that this fear of AI is really an embodiment of that anxiety: the realization that there is not enough to go around, and that if we are willing to lie to and deceive each other, others will happily lie to and deceive us. This has driven a desire to look inward, to our closer communities of shared interest, for a sense of safety.

Yet whatever good this renewed focus on wellness may confer, it comes at a cost. Far from building a shared global understanding of our challenges and their solutions, we have weaponized our information economy to deny these problems and to place the blame and the onus of change elsewhere, somewhere we are not responsible. We now understand that our exploitation of the world’s resources is bringing ruin, but we are responding to this unpleasant reality with a war, not on the problem, but on reality itself. Rather than mustering the force of scientific consensus to tackle the problem, our scientific community is too busy defending itself from constant attack, as if questioning the consensus could change the facts.

So here we are, in a world where we can vividly see and prolifically discuss our impending doom, and we can’t seem to do anything about it. We can see that Ai technology is progressing incredibly fast… but we cannot see how that technology will solve our problems, and we’re increasingly concerned that it will only make things worse.

One look at our popular entertainment tells that story. Succession, Chernobyl, Oppenheimer: these are stories about the consequences of irresponsibility and lies, about how we refuse to face the consequences of our own actions. The story of the past five years has been defined by irresponsibility girded with lies: as if a global pandemic could be overcome without acknowledging the conditions that created it or taking the steps necessary to resolve it.

These are the perfect conditions for a panic. Perhaps that panic has already begun.

“A Better Place for Whom?”

This is where we step in, as a company focused on what is often called “Robotic Process Automation” (RPA), but which I prefer to call a “Universal Application Programming Interface” (UAPI): a program that connects to all other programs, allowing a person to leverage the full power of modern computing for their own use. We are building a technology that should rival the computer in Star Trek: The Next Generation: a way for you to use plain language to access any computer system or program you can connect to and make it do what you want it to do, provided the program or system allows it and it is legal.
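To make the idea concrete, here is a minimal sketch of what a “universal API” layer could look like. Every name in it (`UniversalAPI`, `Connector`, `register`, the keyword matching) is hypothetical, invented for illustration, and not doFlo’s actual product or API; a naive keyword match stands in for real plain-language understanding.

```python
# Hypothetical sketch of the "universal API" concept: one layer that maps
# a plain-language request onto whichever connectors the user has permitted.
# None of these names are doFlo's real API; this is illustration only.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Connector:
    """A permitted integration: something the user has allowed to be automated."""
    name: str
    action: Callable[[str], str]


class UniversalAPI:
    def __init__(self) -> None:
        self.connectors: Dict[str, Connector] = {}

    def register(self, keyword: str, connector: Connector) -> None:
        # Access is opt-in: only explicitly registered connectors are reachable,
        # mirroring the "provided the program or system allows it" constraint.
        self.connectors[keyword] = connector

    def request(self, plain_language: str) -> str:
        # A naive keyword match stands in for a real language model here.
        for keyword, connector in self.connectors.items():
            if keyword in plain_language.lower():
                return connector.action(plain_language)
        return "No permitted connector can handle that request."


api = UniversalAPI()
api.register("email", Connector("mail", lambda req: f"mail connector handled: {req}"))
print(api.request("Send an email summary of today's sales"))
```

The point of the sketch is the shape, not the matching logic: the user speaks once, in plain language, and the layer routes the intent only to systems the user has permitted.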

I sincerely believe that giving people this ability can make the world a better place, but critically, I do not believe that this alone will do it, nor am I so naive as to assume that it couldn’t be used to do the opposite of what we intend. We are all flawed people, and therefore, any system we build will reflect our own weaknesses and myopias.

When a company says it is “making the world a better place” with technology, one should always immediately ask: “A better place for whom?” Technologists are prone to techno-solutionism: the idea that there is always a technological solution to a societal, scientific, or historical problem.

Our technology will not save the world. We would love for it to make the world a better place, as so many companies have promised to do. But we cannot even make that guarantee. Instead, we can only talk about what we wish to do. One thing we wish to do is to be honest and to operate with our eyes open. That means practicing humility and understanding that we don’t know everything.

Big Data And Risk

There is a common misunderstanding about the era of Big Data that we would like to correct. For years, people have been warned about invasions of privacy and mass behavioral manipulation. While those are valid concerns, the real driver of the Big Data economy is the understanding of risk. Access to more and better data improves an organization’s or government’s understanding of the risks it faces. In a highly financialized global economy, those with the data can always benefit, regardless of the outcome.

This is why the power to control access to data cannot be overstated. Access to data is essential to assessing risk in our technological world. The risk of losing customers. The risk of climate change. And the risk of liability.

If we view the ever-growing pool of data and automation now available to big companies through the lens of risk, we can see that the big data revolution will, if unchecked, tend to benefit the already powerful rather than the individual members of society who experience the consequences. Ordinary people generate the data, but those who aggregate and use it are the ones benefiting from it. We are already seeing this effect in action.

This realization has fueled the extraordinary boom the tech industry has experienced. It has also led our industry to overestimate its own value to the global economy. The data is valuable, but that doesn’t necessarily mean the products that generate it are a net good. Despite this, the tech industry is full of people who fervently believe that if they were in charge, the world would be a better place. Yet as the tech industry grows, is the world becoming a better place?

Our industry has ignored the most important challenges of our time because those challenges can’t be solved with an app. Perhaps that is because technology, when its benefits accrue only to those who understand and control it, cannot help but increase inequality and division.

More technology may not be enough. Progress must be shared, controlled, and understood by the least empowered. If we want to give ordinary people a chance to compete and to dictate their own futures, then we must put as much data and as many of those advanced tools into their hands as we are able.

People are the Answer

We believe that people are the solution to our shared problems. Technology made to be open and accessible is the kind that can be used to do the most good. But right now, the cutting edge of technology isn’t doing much for ordinary people. The power of big data is not being distributed via our industry. Instead, it’s being concentrated.

AI automation is no different. Of the productivity growth since 1980, little of the economic gain has gone to workers: fast food workers, taxi drivers, logistics workers, and other so-called “low-skill” professions. These professions have become highly dependent on automation, yet ordinary workers have seen their bargaining power shrink and their workloads increase. As our collective ability to do things grows, the economic possibilities of workers shrink.

For whom has AI automation represented economic opportunity? Certainly not the people whose jobs are displaced. Certainly not those who cannot afford a place to live, electric cars, trips into space, the newest AI-discovered medicines, access to the best universities, or food delivered to their doors, day or night. Our industry is failing those people. It is failing to acknowledge that to “disrupt” and transform someone’s work carries with it a large responsibility to be aware of how that disruption alters people’s lives and affects their livelihoods.

For most in the tech industry, technology always holds an answer. That is because it never forces them to ask harder questions: Where does all this cobalt come from? Who is this delivering food to my door at 3 a.m. on a Wednesday? What’s it like to work in an Amazon fulfillment center?

For ordinary people, the 21st century has offered little but debasement. As everyday conveniences have multiplied, a growing subset of the population has been enslaved to the sharing economy, always working in service of ever more convenience and speed. Meanwhile, the tech industry has reinvented a form of wage slavery not unlike the sharecropping of a century ago, which bound nominally free farm laborers to the land they worked. Instead of land, the modern worker has a bike, a car, or a laptop. We push the costs of economic expansion onto workers while absorbing all the profits. In doing so, we ensure that we have no obligation to workers, and that workers have no choice but to keep working more and harder.

Increasingly, a large subset of the population lives precariously on the whims of large, extremely profitable corporations that are openly seeking to eliminate them from the supply chain. Gig workers see 60, 70, or 80% of the fees paid for their services taken from them, often via blatantly deceptive practices. Our industry has the temerity to call this anything but what it is: an exploitative dystopian nightmare.

Technology is continually used to separate people and to dictate their actions en masse for the benefit of a few. That is the great risk today. 

The conveniences we once viewed as life-changing comforts must now be seen as a danger to our livelihoods and communities. I do not believe that is an overstatement. Our prosperity can be destructive. Our futurism can be, in its way, regressive. The fulfillment centers will run out of fresh workers to churn through. The code sweatshops will have no more hours in the day to grind through. Workers will have no more income to garnish for tuition fees, or insurance, or taxes. Somehow, the amount and complexity of our entertainment have grown exponentially while those who code and write and perform in that entertainment can’t make a living and have to strike to get what’s rightfully owed to them. Somehow, McMansions dot the landscape, yet tent cities are so prevalent that they must be banned. The very capital of the tech industry is awash in homeless veterans and children.

So far, AI automation has largely meant the potential for exploitation on a massive scale. But not all problems are technology problems. The techno-solutionism popular in our industry today is as destructive to reason as any other blind faith. Instead, we need to address this question: for whom are we changing this world?

“If I Own The Robot”

I believe that in order to do good for ourselves and our communities, we must state plainly and boldly what we believe in. This must go beyond our technology to address our behavior as a company and as a micro-culture.

I was talking with a member of our small team a few months ago. He had spent some years working in the “sharing economy,” as an Uber driver, an Airbnb host, and an employee of sharing-economy startups, and had seen how the sausage was made, from both the business side and the human side. We talked about the rise of self-driving cars and automated food production, preparation, and delivery. He said something I haven’t forgotten: “I don’t mind if a robot takes my job. I just want to own the robot.”

It stuck with me then and continues to stick with me now because that’s the answer. It’s the answer to all our modern fears, isn’t it? H.G. Wells’s time traveler owns the time machine. He can go back and forth through time. In Terminator 2, John Connor owns the T-800: a dedicated protector who will never betray him or leave him. The technology that was a source of fear and dread had become a salvation.

These points don’t just make good drama: they reflect one answer to our modern anxiety about technology. I don’t mind technology changing, but I do mind not being in control of what technology does. If we believe that people lie at the heart of positive reform, then empowering ordinary people, scientists, teachers, parents, and members of society to use technology for their own benefit is how we can change the world for the better.

We should never for a moment believe that this will be enough on its own. But putting the power of modern technology into the hands of the many could give people a chance to do better. It might give them a toehold in the race to shape the future, a race they’ve been losing.

Instead of harnessing machine learning against people by manipulating the reality they experience, and dictating to them how they should work and what they should do, why don’t we focus on giving people more choices and more degrees of freedom?

Putting People Back in Control

This is all a long way of getting to the point: doFlo is founded on the idea that technology should not just benefit the public but be owned by the public; that the technology to make your life far less stressful and depressing already exists today; and that it is in your interest to have direct control of how that technology is used. Moreover, it is our mission to support the disempowered and the disenfranchised in all that we do.

Let’s make technology that doesn’t replace people, but enables them. Let’s do automation that the workers control, not that they have to fear. And let’s create prosperity for regular people, not out of regular people. Instead of replacing low-skilled workers, let’s give those workers more abilities, and make them each more valuable. It just may be that the workers themselves have some of the best ideas on how to make their own lives better.

In our industry, the watchword is always “disruption”, but disruption usually comes at a price for the disempowered. We want to create a technology platform that puts control of the future into as many hands as possible, with faith in the idea that regular people know what they themselves want better than anyone else. Our role should always be to enable those people and to support a democratic political and economic system that empowers them.

Furthermore, in order to build a team of investors, employees, and partners who share our core ideals, we have to make an ongoing commitment to supporting the social and governmental policy goals that we believe are necessary for a safe, prosperous, healthy, and sustainable future. There is no truly apolitical company: only companies that are open about their politics, and those that aren’t.

This means openly and candidly supporting progressive policies such as Universal Basic Income (UBI), shorter work weeks, Universal Healthcare, guaranteed family and compassionate leave, minimum and maximum salaries, sustainable energy, anti-racism, anti-fascism, rights of LGBTQ+ workers and customers, environmental protection, economic redistribution, progressive taxation, freedom of speech, and collective ownership and collective bargaining.

In order to live in the kind of society we want, we must reflect the change we seek.

To make a concrete commitment to this ideal and to the points I’ve raised in this post, I propose the following:

Our Core Ideals:

  • Serve people over systems
  • Ask: “For whom is this making the world a better place?” 
  • Our employees are people first, and their lives are more important than their jobs (no grindset!)
  • Make our technology available to those who cannot afford to pay for it; at all times, consider the customer’s ability to pay, not just their willingness to do so.
  • The data that our users generate is owned by them. Our users are not our “product.”

Our Core Commitments: 

  • We will strive to develop this platform as a public utility, made for anyone, from the humblest individual to the biggest corporation, to leverage the vast resources of our technological world for the common good, and the good of individuals and families, small businesses, charities, and communities of interest.
  • The future stability and prosperity of the human race depends on ordinary people. We will support worker-first policies including UBI (Universal Basic Income), Universal Free Healthcare, Universal Free Tertiary Education, rights of migrants and stateless persons, and the worker’s right to organize and bargain collectively (including our own workers).
  • Progressive sharing of ownership and control of this company with its future employees
  • Capping management salaries at the median wage of C-suite executives in the US (currently $141,000 a year) or of the territory where they live, whichever is higher.
  • Base salary minimums above the median wage of salaried workers in the territory where the employee lives.
  • Financialization is anathema to collective ownership. Our founding team is committed not to borrow against vested shares in the company.
  • We are committed to promoting and evangelizing this human-centered approach to business, not just for our benefit, but for the world we live in.