Systematic Ethics, Part 2 (<Code-of-Ethics>)
Systematic Ethics: Part 1, Part 2, Part 3
As the saying goes, the devil is always in the details. All ideas are great ideas until you start talking about them in specifics. Help the homeless? Great! Let’s build that shelter for them down the street. Opposition from the local neighborhood council? Why would they do such a thing? The system was designed that way…on purpose? What?
As you dig deeper into the issues and start becoming more specific, more often than not it reveals a part of the system — a part of us — that most of us don’t want to see, much less admit to being a part of. The fact of the matter, though, is that in one way or another, we’re all responsible for the problems we face today: in civilized nations we have chosen to elect leadership from our own ranks, after all.
People have a tendency to avoid talking about ethics as a systematic issue and instead focus on personality battles, since the good vs. evil framing tends to be more entertaining and easier to understand. But it actually runs deeper than that — when we externalize our morality onto others in this way, we’re basically making it “their job” to do the right thing, absolving ourselves of responsibility on our end. This auto-pilot mentality doesn’t tend to work very well in practice, however, since it is the primary mechanism by which good people can be tricked into doing bad things, often without even noticing. Many of us actually have the power to stop a lot of the problems we see in society today, but in many cases we simply choose not to.
The Internet up to now has basically operated on a feudal model, where the administrative classes rule over the users with complete impunity. (.exe might as well stand for Executive Order, btw.) On the Internet you’re not allowed to vote, raise motions, or meaningfully negotiate with the platforms you use every day (or even with other users), since that was never how it was designed. The system tells us exactly what to do and when to do it — and we’ve become very comfortable with this way of operating in basically every aspect of our lives at this point. And the problems of having done things top-down for the last 10–20 years are becoming more and more apparent as time goes on.
In the technology industry, decentralization of power happens off-screen, in the form of lawsuits, government regulations, and police interventions — but in the next few years (partially fueled by the rise of blockchain and cryptocurrency tech, partially by government mandates) there will be a move towards baking these ideas directly into the code itself. This means giving users the power and authority to affect the system directly and meaningfully in ways they couldn’t do before.
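To make that a little more concrete, here is a minimal sketch (in Python, with entirely hypothetical names) of what “baking governance into the code” might look like: a system parameter that no administrator can change by fiat, only a passing user vote.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    parameter: str   # the system setting this proposal would change
    new_value: int
    votes: dict[str, bool] = field(default_factory=dict)  # user_id -> approve

class GovernedSystem:
    def __init__(self) -> None:
        # A setting that users, not administrators, collectively control.
        self.parameters = {"moderation_threshold": 5}
        self.voters: set[str] = set()

    def register_voter(self, user_id: str) -> None:
        self.voters.add(user_id)

    def vote(self, proposal: Proposal, user_id: str, approve: bool) -> None:
        if user_id not in self.voters:
            raise PermissionError("only registered users may vote")
        proposal.votes[user_id] = approve  # one vote per user; revoting overwrites

    def execute(self, proposal: Proposal) -> bool:
        # "Majority wins" lives in the code itself: no administrator can
        # change the parameter without a passing vote.
        approvals = sum(proposal.votes.values())
        if approvals > len(proposal.votes) - approvals:
            self.parameters[proposal.parameter] = proposal.new_value
            return True
        return False
```

The point here is not the specific mechanism (simple majority voting is the most basic choice possible) but that the authority to change the system is encoded in a place where every user can see it and exercise it.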
But with power also comes the potential for corruption and abuse. Absolute power corrupts absolutely, so when we build governance and compliance systems to decentralize concentrations of power, the system itself will have to be designed for symmetry, scale, and balance. The era of uncontrolled, “big”, asymmetrical growth is now over — as elegantly described in Aya Miyaguchi’s idea of the “Beauty of Subtraction” here — as we migrate away from autocracies into a truly representative digital space.
Once we no longer have dictators and autocrats telling us what to do, however, we will need some kind of rule system — a code, if you will — that will help guide people in making better decisions amidst the complex environments they will be forced to navigate. As history shows over and over, removing dictators from power without changing the game just ends up propping up another one eventually, so if the cryptocurrency space really wants to fulfill its utopian dreams, its projects will have to take these types of issues seriously.
And this is, literally, the <Code-of-Ethics> that will be needed going forward. If the devil really is in the details, then we need to dive right into those details and root it out before they take over the system and turn it into an unrecognizable mess.
Codifying Ethics, Literally
I spend a lot of time working with groups affiliated with Code for America, which has an unofficial slogan: “Efficiency and Transparency in Government is a Moral Issue” (paraphrased). This slogan resonates strongly with me not only because I believe it, but because it does a good job of tying those two concepts into an ethical framework, which will be useful when attempting to bake these ideas into actual products.
The idea that “efficiency is moral” comes from the fact that funding for government projects comes from the taxpayer, so it is imperative that government staff do their best to use those resources wisely. The same can be said for transparency, since every citizen (at least in theory) should have easy and reasonable access to information about the things that affect them directly.
Although not all blockchain/crypto projects will be working with government systems (though I suspect many of them will), when governance systems start to come online, technologists will need to think of users not only as people who use their product, but as citizens who have rights, a voice, and agency of their own. The difficulty curve here is by no means small — building effective governance systems (blockchain or otherwise) will require teams with ambitious and audacious motivations on par with nation-building and foreign diplomatic projects.
Actionable Ethics
Being “metrics-driven and actionable” is one of the technology industry’s cultural strengths, and in many ways is the reason why it has been such an economic success in recent times. The industry, however, has been weaker in its understanding of the humanities, which translates into the troubles it faces in the political and cultural spheres. The way tech receives the brunt of the negative press regarding gentrification (even though it’s a society-wide problem and should be talked about as such) is one example of how this neglect can be harmful to the industry and to everyone as a whole.
In recent years there have been attempts by companies to bring ethical frameworks into tech products and platforms: copyright enforcement, community moderation policies, and harassment protections, just to name a few. Since most of these initiatives were forced upon them through lawsuits and litigation, however, many of the resulting designs and practices tend to be reactive rather than proactive, and thus of low quality. A “proactive” ethical model would allow both users and developers to make quick decisions in a given situation based on rule systems derived from the company or platform’s mission statement, guiding the process of good conduct and conflict resolution at every turn.
In order for “actionable ethics” models to work, however, they need to be universalized throughout the institution and be perfectly transparent. As a “product”, I’m imagining a list of scenarios and situations that might pop up during your day-to-day job that all employees — including the CEO — can reference if they get caught in a situation they don’t know how to deal with. This repository can then be turned into a space for debate and discussion (with the option to vote, to make the process democratic), where people can settle on a specific definition of what a “good action” really means.
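As a rough illustration, here is one way such a repository might be modeled — a sketch in Python where every name and field is hypothetical, not a description of any real product:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    title: str            # e.g. "User requests deletion of their data"
    situation: str        # description of the dilemma as encountered
    guidance: str         # the currently agreed-upon "good action"
    mission_clause: str   # the part of the mission statement it derives from
    votes: dict[str, bool] = field(default_factory=dict)  # employee_id -> approve

    def approval_rate(self) -> float:
        """Share of voters who endorse the current guidance."""
        return sum(self.votes.values()) / len(self.votes) if self.votes else 0.0

class EthicsRepository:
    """A transparent, institution-wide lookup anyone, including the CEO, can query."""

    def __init__(self) -> None:
        self.scenarios: dict[str, Scenario] = {}

    def add(self, scenario: Scenario) -> None:
        self.scenarios[scenario.title] = scenario

    def lookup(self, title: str) -> Scenario | None:
        # An employee caught in an unfamiliar situation checks here first,
        # instead of improvising under pressure.
        return self.scenarios.get(title)
```

The key design property is that the guidance is written down, tied back to the mission statement, and revisable by vote, rather than living in any one person’s head.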
Many platforms and companies create mission statements, TOS agreements, and company policies with a lot of good intentions in them, but since most places don’t bother documenting or publicizing how these ideas are used in practice, the act of moral inquiry essentially becomes outsourced to employees on an individual level. In my experience, leaving this to chance tends not to work very well, since many people simply don’t know what to do when things start getting heated and the pressure starts to mount.
In the end, the individual might even decide to go against what the policy says, based on the needs of the situation at the time. But this gives them a framework to work within in case they have to justify their decisions somewhere down the line. “Yes, I know what was agreed upon in the repository, but I thought of the mission statement and believed that this was the right thing to do, and here’s why” sounds a lot better than “I don’t know, I just thought it was a good idea.” Agree or disagree, having a reference point for talking about these issues more specifically both empowers the user and acts as an antidote to the double standards that erode institutional morale.
In ethics, ambiguity is the biggest enemy — nobody expects perfection, but it’s reasonable to expect people to at least try to do the right thing given their circumstances. But what is the “right thing”, and what are the circumstances? At this point, we simply don’t know. And the best way to fix this is to talk about ethics in a very specific, clear, and transparent way.
The “solution” here may or may not come in the form of the blockchain (though it is, imo, the most effective way to establish a “Sacred Space” on technological platforms), but there are many ways of getting there if the will to do so exists. The most important thing is that we show that we are at least trying.
If the industry shows that it is taking these issues seriously, I believe it will improve both the productivity and the reputation of the field as a whole in the long run. Since the “ROI” on these types of projects is indirect and not very obvious, most institutions will likely be either resistant or apprehensive about implementing them at first. But I do believe that the ones that do will ultimately do very well. An institution’s biggest resource is its people, after all, and the ones that nurture not only their skills and talents but their sense of moral direction as well will have a competitive advantage that puts them way ahead of the curve.
Part 3, the final installment of this series, will talk about how ethical models play out in political and cultural arenas. In the US, we are going through a day-of-reckoning-style transformation in the way we think about morality and ethics — the technology industry has the potential to become a thought leader in this area by taking advantage of its “newcomer” status, but it will have to play its cards right and lead by example.