From our Discord: "Useful Concepts for the Governance of DAOs"

This is a cleaned-up version of a Discord thread I posted between 2021/11/06 and 2021/11/09. The intent in posting it here is to make this material easier to find, reference, and share as appropriate.

Wanted to inject some ideas into the conversation in a not horribly distracting way, so I am hoping that using a thread will accomplish that.

First a disclaimer: I’m still learning a lot about DAOs and ignorant of a lot of work already done in this space. Don’t hesitate to point out where I’ve ignored obvious knowledge sources. Suggestions, arguments, and questions are welcome and all input will be considered - some longer than others.

Let’s start with getting clearer about what we mean by “governance of DAOs”. When I talk about governance of DAOs, I’m focused on value and how we maximize it. While this is typically meant in financial terms, I mean value in the broad sense of something people care about. Money is one way to measure value, but it’s not the sole means.

Most references to governance miss this, and it causes a lot of issues. In traditional organizations, governance is equated with committees, bodies, etc. In much (but not all) of what I have learned about DAOs so far, governance is equated with voting. These ideas aren’t wrong per se, but they are limited in how much they actually help increase value.

The best formulation of value I have seen comes from a group called ISACA and their COBIT framework. They break value down into three components - benefits, resource costs, and risks (BCR). Ideally, we get more benefits than we expend in resource costs, with a level of risk we can stand. Again, this is about more than finance. The benefits may be in well-being, the resource I am spending may be time with my family, and risks may be to a relationship. Whatever matters to us.
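The BCR decomposition can be made concrete with a toy sketch. The names and the 0-to-1 risk scale here are my own illustration, not anything COBIT prescribes:

```python
from dataclasses import dataclass

@dataclass
class ValueAssessment:
    """One BCR snapshot: benefits gained, resources spent, risk taken on.

    Units are whatever the group cares about (money, hours, well-being);
    the point is only that all three components get weighed together.
    """
    benefits: float
    resource_costs: float
    risk: float  # estimated chance/severity of loss, 0.0-1.0 in this sketch

def creates_value(v: ValueAssessment, risk_tolerance: float) -> bool:
    # Value is created when benefits exceed resource costs AND the risk
    # stays within what the group can stand.
    return v.benefits > v.resource_costs and v.risk <= risk_tolerance
```

So `creates_value(ValueAssessment(10, 6, 0.2), risk_tolerance=0.3)` holds, while the same tradeoff at risk 0.5 does not — more benefits than costs is necessary but not sufficient.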

COBIT also calls out 3 aspects of governance - evaluate, direct and monitor (EDM). While not as fun as electronic dance music, it is a useful cycle. We evaluate our current state of value, direct actions to increase value, and monitor changes in value that lead to further evaluation. All 3 matter.

This highlights that the core of good governance is effective decision making. This is where concepts like John Boyd’s OODA (observe, orient, decide, act) loop are helpful.

What makes for a good decision? Well, that’s a value question, so we can use BCR to answer that. An ideal decision provides the best benefit/resource tradeoff, without delay or coordination costs, with no risk of failing. In practice that’s impossible, of course, but the closer the better.

When I look at DAOs through this lens, it’s obvious that we often fall far short of an ideal decision. Members may not agree on what benefits they want, what resources they are willing to expend to get those benefits, and what their risk tolerances are. When everyone has to vote on everything, that has a cost and creates delays. These factors alone increase the risk of failure, and that is magnified when you consider that it’s hard to get 100% commitment to a decision that only 51% of people wanted.

This also starts to shed light on why scaling is so hard. The evidence is in - scaling doesn’t work. Anyone in a traditional organization that has grown knows that an 8 person organization can’t function the same way an 800 person organization does. An insect scaled up to human size would collapse under its exoskeleton. The 50 foot woman literally couldn’t be human at that size - her bone density would not support her body. Geoffrey West’s book Scale includes many great examples of these challenges.

We often use the word “scale” for “growth”, but they aren’t the same. Nature doesn’t grow by scaling, it grows by reproduction. The promise of DAOs is that they can harness the principles of reproduction far more effectively than traditional organizations.

This is the main reason that I was attracted to Orca Protocol. Pods have the potential to enable DAOs to coordinate on the big picture effectively while minimizing the costs of doing so. But to do this, we need an approach to organizing that can take advantage of the power of pods.

The governance 2.0 work by yearn that @zkchun shared is a good step in this direction, yet it too is likely to break down as yearn continues to grow and change.

We have to actually design support for growth into DAOs at their start and make governance part of how they work from day 1. But again that doesn’t mean subcommittees or just having everyone vote on everything. It does mean making value clear in BCR terms and promoting good governance hygiene with minimal burden.

To make value clear, a notion from Joseph Campbell’s Hero’s Journey is useful - the call to adventure. The best way to get to a common view of value in a DAO is to have a shared vision that guides everything the DAO does. But if we want this to scale, by definition that vision has to be bigger than what the DAO can ever fulfill. Otherwise, we top out and lose our way.

Vision statements get a bad rap in many circles, and deservedly so. However, the concept is still helpful if we approach it the right way. The best approach I have seen is by a woefully underappreciated thinker named Tom Graves, who talks about the concept of “Vision, Roles, Mission, and Goals”. Building-blocks for a viable business-architecture – Tom Graves / Tetradian

To reduce the risk of lousy vision statements, let’s coin the term “adventure-vision” to apply Graves’ concept of enterprise-vision to DAOs. As he puts it, the adventure-vision describes a desired world, ideally in no more than half a dozen words, about the ‘what’, ‘how’ and ‘why’. I love how he describes it as a way of saying “this is what interests us – and if this interests you too, perhaps you should speak with us”. A structure for enterprise-vision

A key point is that this adventure-vision DOES NOT CHANGE. It serves as an attractor that enables a common orientation as we grow into multiple pods and as the world changes around us. Now the link to decision making is clear via OODA. A common orientation helps us better interpret observations, make decisions, and take action.

Creating a DAO without being clear on the adventure-vision will lead to failure. That’s one of the reasons why financially-oriented DAOs may tend to hold together better. In that case, the adventure-vision is more consistent, since getting rich is pretty interesting!

The DAO then needs to identify the role(s) it will play in the adventure-vision. What will it do, and not do, to bring about that world? This is where strategy is useful. Again, the term is horribly abused, but I like the notion Richard P. Rumelt shared in “Good Strategy Bad Strategy”, which can be roughly summed up as a diagnosis, a guiding policy, and coherent actions.

  • Diagnosis can be thought of as what is preventing the adventure-vision from being real. It’s a simple, understandable story that suggests the role(s) the DAO might play in bringing the adventure-vision to life.

  • Guiding policy in a DAO would primarily focus on setting constraints from a risk and resource cost perspective. Just like in nature, we must survive long enough to make an impact or pass along our genes. This provides the “not do” that complements the “do” from the diagnosis.

  • Coherent actions come from the diagnosis and guiding policy. The more the DAO’s actions support each other, the bigger the DAO’s impact will be on the adventure-vision.

These coherent actions take the form of missions. Yes, mission statements are terrible, but the concept as Graves describes it is helpful. A DAO’s mission(s) are the capabilities and services it intends to create and maintain to fulfill its role(s) in service to the adventure-vision.

This creation is guided by goals. A goal in Graves’ approach is a project - a set of deliverables with a target date for completion. As goals are accomplished, they create services and capabilities that the DAO then operates as part of its mission. Ultimately, accomplishing these goals is how we bring the adventure-vision to life!

So just for fun, let’s try to describe what Orca Protocol is doing using this approach and see how badly I misunderstand things.

The second paragraph of the “Closing Thoughts” section of “The Eightfold Path to DAOism” post fits very well with this structure! The Eightfold Path to DAOism — Mirror

  • Orca Protocol’s adventure-vision: “Help DAOs achieve their full potential”

  • Its role: “make governance accessible by creating tools around a DAO’s most basic primitive: people.”

  • Its mission: provide “a modular and flexible body to manage participation, shared assets, and organizational permissioning”

  • Its goal: create a pod primitive that “allows for dynamic and composable structures to be created around any party of actors within a DAO ecosystem, while introducing mechanisms for accountability, incentive alignment, and checks and balances”

Obviously the goals will change a lot, the missions will expand over time (hopefully!), and even its role will evolve. But the adventure-vision will remain.
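As a sketch, Graves’ four layers and the Orca reading above might be captured like this. The field names, the comments about what changes, and the abbreviated strings are my own framing, not Graves’ or Orca Protocol’s:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceFrame:
    """Graves' vision/roles/missions/goals layers, applied to a DAO."""
    adventure_vision: str                           # never changes
    roles: list = field(default_factory=list)       # evolve slowly
    missions: list = field(default_factory=list)    # expand over time
    goals: list = field(default_factory=list)       # change constantly

# Hypothetical encoding of the Orca reading above (paraphrased strings).
orca = GovernanceFrame(
    adventure_vision="Help DAOs achieve their full potential",
    roles=["make governance accessible via tools around people"],
    missions=["manage participation, shared assets, and permissioning"],
    goals=["create a pod primitive for dynamic, composable structures"],
)
```

The design point is simply that only the top field is immutable; everything below it is expected to churn.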

So the approach seems to check out, which provides at least a bit of credibility. But how many DAOs can actually state their adventure-vision, roles, missions and goals? And how closely would their members match up if you asked each of them separately? I suspect there is a strong correlation between the gap between members and the health of a DAO.

Let’s talk about decision making some more. Specifically, decision capacity. As in, it’s limited.

The corporate world tends to forget this, which is why in large orgs senior leaders spend all of their time in meetings (governance and otherwise). Unless they are deliberate in carving out time for strategy and planning, it’s very easy to fall into reactive mode. It’s frankly remarkable that this works as well as it does, but it leaves a lot of potential on the table.

In a DAO, this can be even worse. If everyone has to vote on everything, I have to choose between the following options: spend a lot of time on research, vote ignorantly, or miss a lot of votes. Of course, if we naively scale, we can do all 3!

So what’s the right amount of decision capacity to invest in a decision? Of course “it depends”, but there are a few things we can say with certainty. The first is that if there is a best answer, any decision capacity we invest past the point of feeling confident it is the best answer is wasted. So we want to be efficient here too. If we have too many decisions to make, we want to invest our decision capacity where it provides the most benefits versus the resource costs.

The next is that not all decision capacity is created equal. I can waste a lot of time trying to figure out which medication you should take for your ailment, but a doctor is much more likely to provide both the right answer and do it more quickly. Of course, if a specific decision maker’s capacity is committed to higher value topics, we may have to settle for someone less efficient, or accept a higher risk of a bad choice.

The decision approach also has a big impact here. This is where the remarkable Forrest Landry’s piece “On the Nature of Human Assembly” is so helpful.

Landry posits that all decision making processes can be described by three types, which I’ll quote in full since this is so important to understand:

  • “In a democracy, a range of possible options is reviewed, discussion and persuasion (rhetoric) is followed by a vote, and the majority decision applies to the whole.”

  • “In a meritocracy, some process (it does not matter which), is used to select a focus of decision making (a single person or smaller group), which then (perhaps after listening to the considered council of others) will make a decision which is applied to the whole. It is assumed that the whole will always and implicitly trust the part to make decisions ‘correctly’ (whatever that notion is taken to mean for the whole group).”

  • “In consensus, all members sit in council together and discuss potential decisions together until a complete and total agreement is reached among all members (however long it takes) which then becomes the decision of the group.”

Democracy, meritocracy, and consensus. They can be mixed and matched, but these are the composable building blocks of decision process.

Each has its strengths and weaknesses. As Landry points out, democracy is faster than consensus, and less prone to abuse than meritocracy. Meritocracy puts more power behind a decision than democracy, and is more adaptive than consensus. Consensus strengthens bonds between members more than democracy, and is more likely to avoid blind spots in decision making than a meritocracy. Yes, there are edge cases, but in general this isn’t telling you anything you don’t already recognize as generally true.

The real genius of Landry’s work is to recognize the differences in each approach and combine them into an overall approach that gets the benefits of each approach while minimizing their weaknesses. I recommend reading his whole work, but my takeaway of how this would apply to a DAO is as follows.

  • Defining the adventure-vision, role(s), and mission(s) of the DAO should be performed by consensus. These define what the DAO is, and there are always good returns from a better answer to these questions. The power of a true believer is hard to ignore, whether it’s of a religion, crypto, or crackpot conspiracy theories (are those 3 things, or just 1?)

  • Executing against goals should be performed by meritocracy. When Vitalik codes and I make dinner, we get Ethereum and a nice meal. If I code and Vitalik makes dinner, we get Hello World (on a good day) and the food might be inedible too (he’s never cooked for me, so maybe he would best me there too?) This is the whole comparative advantage notion of David Ricardo and is pretty well established.

  • Curbing waste and abuses should be performed by democracy. That probably sounds wrong to you if you live in the US, but hear me out. People can much more easily identify what they don’t want than what they want, as anyone who has ever met their friends for a meal can attest to. Thus, if we’re getting nowhere in trying to decide on whether to adopt a new mission, democracy is a great way to get us out of that rut. Once enough people get frustrated and vote to stop talking about it, we can move on to something else. Likewise, if the person in charge of the project turns out to be an incompetent klepto monster, we vote zim out and pick someone else.
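This split can be sketched as a routing table. The decision kinds, the default mechanism, and the whole idea of encoding it this way are my own illustration, not something from Landry’s paper:

```python
from enum import Enum

class Mechanism(Enum):
    DEMOCRACY = "vote of the whole; majority decision applies"
    MERITOCRACY = "delegated focus of decision making"
    CONSENSUS = "discussion until complete agreement"

# Hypothetical routing of decision domains to Landry's three mechanisms,
# following the consensus/meritocracy/democracy split sketched above.
DECISION_ROUTING = {
    "define_adventure_vision": Mechanism.CONSENSUS,
    "define_roles":            Mechanism.CONSENSUS,
    "define_missions":         Mechanism.CONSENSUS,
    "execute_goal":            Mechanism.MERITOCRACY,
    "curb_waste_or_abuse":     Mechanism.DEMOCRACY,
}

def mechanism_for(decision_kind: str) -> Mechanism:
    # Unknown decision kinds default to democracy here: the middle ground
    # on both speed and abuse resistance in Landry's comparison.
    return DECISION_ROUTING.get(decision_kind, Mechanism.DEMOCRACY)
```

A real DAO playbook would need finer-grained routing than this, but the table makes the "composable building blocks" point tangible: the mechanism is a property of the decision, not of the organization.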

Exactly how to put this together into a DAO playbook is something I am playing around with, yet have not yet fully figured out. But the idea is to put something together kinda like Holacracy, except that people can actually follow it.

The main reason I bring this up is that Orca Protocol needs to have the ability to support all three decision methods (and their checks and balances) in order to make this work. If a new DAO starts its life with a primitive that enables this balanced method to work, it will deliver higher value against its adventure-vision than alternative DAO approaches (and conventional organizations, to boot).

Of course, success will likely mean growth, so how do we deal with that once the DAO gets anywhere close to Dunbar’s number? More to come on that.


How do we assess the health of governance within a DAO? The bit of reading and conversations I have had on this topic seems to indicate there aren’t any answers people find satisfying yet. Again, I’m still learning so please don’t hesitate to point to answers I’ve neglected.

The corporate world isn’t much better at this, frankly. The tendency is to talk about this in terms of auditing and controls, and in a limited sense they do a good job. What they miss, though, is what can’t be documented. How likely are we to reach our objectives? Are we making good decisions? How in sync are we? Some of this stuff may help a DAO in some circumstances, but audit committees and compliance statements are obviously DOA as a solution.

It’s remarkable, though, that the best solution I have found to this problem comes from a regional development bank. The Asian Development Bank first published this approach in a 1995 document called “Governance: Sound Development Management”. It’s very long and doesn’t elaborate much for our purposes, so instead I’ll point you to a 2010 document they published that has a much better signal/noise ratio.

Their approach outlines four elements of good governance, which I will summarize with the acronym APPT - accountability, participation, predictability, and transparency. The relationship between this and my pseudonym is no accident.

  • Let’s start with accountability. When I talk about accountability for a DAO, I see three areas of need.

    • First, we need accountability around whether we are getting the intended value. Are we taking the best path towards the adventure-vision that we could be? Are we getting suitable benefits for the resources and risks we are taking on?

    • Next, we need accountability around coordinating the DAO’s activities. Are we gaining consensus on the goals we need to take on? Are we getting the right leaders in place to successfully deliver projects against those goals? Is the “glue work” that tends to be left to marginalized people being properly recognized and rewarded?

    • Finally, we need accountability about task execution. Are we hitting our project target commitments? Are we doing our work in line with our values? It’s not an accident that there is a storytelling element to accountability. After all, the root of the word is “account”, as in to give an account for our actions. Who is telling the story, and how much credibility do they have with the rest of the DAO?

  • The second element is participation. I’ve seen several members of Orca Protocol highlight this as a concern, which was yet another reason I wanted to get more involved with them. Others have talked about it too, of course, and rightly so.

    • When I talk about participation for a DAO, it’s primarily focused on the decisions being made within the DAO. Are the people that should be participating involved? Are we minimizing involvement of people who should not be participating, or don’t need to?

    • This ties back into my earlier point on decision capacity. We don’t have much, so let’s make sure we get as much value from it as we can.

    • Decision process design is crucial here. The best resource I have found for this is the RAPID® tool from Bain & Co.: RAPID®: Bain’s tool to clarify decision accountability

    • RAPID stands for the five roles involved in a decision making process - Recommend, Agree, Perform, Input, Decide. They aren’t applied in that order, but IRADP is much harder to pronounce.

      • We start with identifying who provides input. Who absolutely cannot be ignored? Who would we ideally like to hear from, but can move forward even if we don’t? Who can chime in if they want? And who do we definitely NOT want to hear from on this?

      • We also need to know who is forming the recommendation. Who will assess the situation, gather the input, identify the relevant decision criteria, create options, and select the option that seems best to implement?

      • Next is knowing who needs to agree with the decision. In a DAO, we could either make this a consensus step (everyone needs to agree with the recommendation) or a veto step (no one disagrees enough to kill it). Since this costs decision capacity and slows the decision down, we should only include it when absolutely necessary. In most cases, the number of agree-holders should be zero; when it can’t be, keep it to one person or at most a small group.

      • We then come to the decider. This is pretty straightforward, but we should keep in mind that as Peter Drucker pointed out many years ago, a decision includes both making a choice and committing resources to the actions required to carry it out. That second bit often gets missed and will come back to bite us!

      • Lastly we have the performance of the decision. Who is actually going to lead carrying out the actions? That person also needs to be clear on the reporting needed to assess the performance of the decision. People will whine about having to both do the actual work and report on it. Those people need to get over it. Failing to report is just as bad as failing to do the work if we want to sustain success.

    • So that’s participation. Define the RAPID process for key decisions, assign the roles appropriately, and get to work.
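A hypothetical way a pod might record one RAPID assignment follows. Bain’s tool is a practice, not a schema, so every field name and both of the "smell" checks here are my own invention:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RapidAssignment:
    """Who holds each RAPID role for a single decision."""
    decision: str
    input_from: List[str]   # consulted; we can proceed without some of them
    recommender: str        # assembles input and options, proposes one
    must_agree: List[str]   # keep empty unless truly necessary
    decider: str            # makes the choice AND commits resources to it
    performer: str          # executes the decision and reports back

    def check(self) -> List[str]:
        """Flag common smells in an assignment (illustrative heuristics)."""
        warnings = []
        if len(self.must_agree) > 3:
            warnings.append("too many agree-holders: expect delay")
        if self.decider == self.recommender:
            warnings.append("decider == recommender: no challenge step")
        return warnings
```

For example, `RapidAssignment("adopt new mission", ["treasury pod"], "alice", [], "bob", "carol").check()` comes back clean, while piling five names into `must_agree` trips the first warning.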

  • The third element of predictability is one that DAOs should be well-positioned to address. That’s what a smart contract is intended to provide, after all. When I talk about predictability for a DAO, I mean that there needs to be clarity in three areas.

    • Members need to know what is expected of them, they need to know how results will be measured, and they need to know what will be done based on the results. Ideally, we have thresholds defined that will help us gauge whether expected results are delivered or not, but it’s more art than science at times.

    • This is fairly straightforward as long as the expected results are delivered. Whatever compensation (in the broadest sense of the term) was expected for the results should be given. When we do this, commitment and trust increases. When there is a mismatch, it drops. Higher is better.

    • The tricky part is that we need to know what we will do when results deviate from expectations. Most folks focus on underperformance, but let’s touch quickly on overperformance. When someone exceeds expectations, we need to recognize that and reward it appropriately! We also need to learn from it and make changes elsewhere based on what we learn. This encourages overperformance and innovation, which will help us move closer towards our adventure-vision.

    • Of course, we need to also account for underperformance. When we set our thresholds, we also need to make sure everyone knows what actions will be taken based on underperformance. This matters not just for the directly involved folks, but the wider DAO as a whole. If DAO participants can easily predict the consequences of everyone’s actions, they can act more confidently in their own work.

    • Ben Horowitz in his great book “The Hard Thing About Hard Things” provides a useful approach to the art of assessing failures. Look at three factors - the seniority of the accountable party, the degree of difficulty of the desired result, and how much negligence contributed to the failure. For the most critical decisions, we want to flesh these guidelines out a bit and make sure we have consensus on them before moving forward.

    • So that’s predictability. Create a sense of inevitability about setting expectations, communicating results, and exception handling.

  • Last element is transparency. Again, this should be a strength of DAOs due to the natural consequences of blockchains. When I talk about transparency, I mean that the right entities have visibility into each of the other three elements. Can we easily see who is accountable for what? Are we actually adhering to our decision flows? Are consequences being delivered in line with norms?

    • There are many useful frames for thinking about this, but my favorite at the moment comes from “Transparency and Communication: Kipling’s Six Questions”, by Elisa Baraibar-Diez and María D Odriozola of the Universidad de Cantabria. (PDF) Transparency and communication: Kipling’s six questions

    • They recommend a seven-question checklist for transparency. The link has more details, but here is how I think about them in the context of a DAO.

      • Why and when do we need transparency?

      • To whom do we need to provide that transparency?

      • What do they need to know?

      • How and where should we provide it to them?

      • How much tailoring of the information is required to meet their needs?

      • What channel(s) should we use to notify them of this information?

      • How often do they need to be notified of this information?

    • That’s enough for now on transparency. The specifics need to flex to the situation, of course, but the key is to at least think about this, try something, and then adjust based on feedback.

Accountability, participation, predictability, and transparency. I’ve yet to encounter a failing governance situation where this approach wasn’t useful for diagnosing the problem and finding useful ways to make things better. Of course, DAOs might break the model - we’ll see!

With this in place, we have a way to assess governance health at any scale in a way that can start out light and only get heavier where it makes sense. That’s enough for now - more later.

How do we increase the ability of DAOs (and pods within DAOs) to coordinate effectively?

As @itsdanwu noted in the thread on silos from pods, if DAOs and pods can’t “get over the wall” effectively, they will only reach a fraction of their potential.

Traditional firms address coordination in many ways: “command and control” by a leader in a hierarchy, process design and workflows, service- and product-focused models, and others. All of these, though, eventually rely on being able to put obligations on others, and on delivering rewards or consequences based on perceived fulfillment of those obligations.

The idea is to reduce uncertainty in whether work will be completed. In a DAO, though, these methods are at best of limited use. We may have some automations and services that we can essentially treat as guaranteed, but the nature of participation means we face much greater uncertainty when we work with people.

Of course, between DAOs, the notion of obligation is pretty much impossible. That’s what “autonomous” means, after all! Therefore, we need an alternative to command and control that provides us with a useful way of thinking about our relationships with others.

The issue with that, though, is that people are very used to hierarchy. Approaches like holarchy can work where we have high motivation and can either retain or attract people familiar with the method, but we can’t really apply that here either. We need an approach that is at least somewhat familiar to people, can be used at many scales, can be translated into automation tooling, and can accommodate varying degrees of certainty in results.

Fortunately, such an approach exists in the form of Promise Theory. Promise Theory was created by Mark Burgess, the creator of CFEngine and one of the most insightful thinkers alive today. Here’s a helpful 10 minute introduction by him on this concept (transcript included for the impatient). 1. Promise Theory - Basic Concepts (part 1) - YouTube

Burgess’ key insight is that promises are a far more useful way of thinking about potential and actual interactions than obligations. As anyone who lives with a cat and/or teenager knows, we can’t actually force anyone else to do what we want. We can only provide assurances of our own behavior.

From this starting point, Burgess is able to derive an intuitive yet deep approach to thinking about dependencies. A promise is defined as “a declaration or assurance that one will do a particular thing or that a particular thing will happen”.

When I wrote at the end of my last sequence of posts in this thread “more later”, that is a very simple example of a promise. I intended to come back, but I couldn’t guarantee it. The server could have gone offline forever. I might have accepted an offer of 1 million Bitcoins to never come back. (I turned it down - this is too much fun!) I could have been hit by a bus going to work this morning.

You knew that there was some likelihood I would fail to keep my promise. And of course, we all know that we trust some people’s promises much more than others. We adjust for that pretty well and often don’t even need to think much about it.

More formally, a promise in Burgess’ language has five elements. The first four are explicit.

  • There is an intention that is formed by the voluntary action of an agent.
  • That agent then has to communicate that intention to at least one other agent.
  • The agent making the promise expresses some level of commitment and intensity to keeping the promise.
  • The promise has to have some sort of benefit to the agents receiving the communication.

The fifth element, and the only implicit one, is the amount of uncertainty that the promise will be kept. That will vary by the receiver, of course, based on our experiences, biases, and other factors.
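The five elements might be recorded like this. The field names, the 0-to-1 commitment scale, and the trust-weighting helper are my own mapping for illustration, not canonical Promise Theory notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Promise:
    """Burgess' four explicit elements, as a record one agent could publish."""
    promiser: str       # the agent whose voluntary intention this is
    promisee: str       # at least one agent the intention is communicated to
    body: str           # the intention itself, e.g. "more later"
    commitment: float   # expressed intensity, 0.0 (vague) to 1.0 (firm)

# The fifth element is implicit and lives with each receiver, not in the
# promise itself: the receiver's own estimate that it will be kept.
def expected_value(p: Promise, receiver_trust: float) -> float:
    # A crude way for a receiver to weigh a promise when planning around it.
    return p.commitment * receiver_trust
```

The asymmetry is the point: the promiser publishes commitment, but each receiver supplies their own trust estimate, so the same promise carries different weight for different agents.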

You’re probably wondering at this point why we need all of this. The main reason is that with obligations, we have no simple way to represent uncertainty. If I tell my daughter to clean her room, I cannot actually force the result. All I can enforce is whether I deliver rewards and consequences compared to what I said I would do (which ties in to the Predictability aspect of APPT as discussed earlier).

In addition, obligations are one of the key contributors to the principal-agent problem. If we as the principal want a particular result and try to oblige an agent to deliver it, we can’t guarantee results, only actions. Obligations provide a false sense of security.

With promises, we can be much more realistic in our expectations. We can make better decisions about risks versus our tolerance levels. The “we” can be a person, a group, or a machine. For the programmers out there, it’s a declarative model vs an imperative model, which is much easier to use in uncertain environments. In fact, Promise Theory has been in use for many years in Cisco products because it copes well with the uncertainties of digital networks. OpFlex-ing Your Cisco Application Centric Infrastructure

In the context of a DAO, we can consider each of the adventure-vision, role, mission and goal elements as being promises we make to others. Imagine what could be done if Orca Protocol provided the means to capture these promises.

That data could be aggregated and used to make it easier for others to find the DAO and/or pod that can best help with a specific goal. DAOs, people, or other systems could make more informed decisions about how to interact with the DAO. It would be a boon for transparency, predictability and accountability, which is why Promise Theory can provide huge benefits for governance.

In addition to the video posted, Burgess has written many articles, books and other materials exploring this topic in more depth. Check out this FAQ on his web site if you’d like to learn more. Promise Theory Frequently Asked Questions (FAQ)

There’s a lot about how DAOs can use Promise Theory that goes beyond what I’ve written here. While the specifics will need to be worked out, there’s no doubt in my mind that adoption of Promise Theory would make DAOs more effective, both on their own and in coordination with others. Lots of exciting stuff to explore here in the future!

Of course, the flip side of the question of how DAOs and pods coordinate is the question “what is the right size for a DAO or a pod?” Let’s explore that topic and see what we come up with.

The question of organization size has received a lot of attention over the years. One of the most powerful concepts for examining this was introduced by economist Ronald Coase in the 1930s in his essay “The Nature of the Firm”.

Coase examines the question of why and under what conditions firms hire workers. After all, if markets are efficient, should we just outsource everything? Many companies seemed to try to do this over the past couple of decades, and the results are in - it didn’t work. Why not?

Coase’s insight was that even if markets are efficient, there are transaction costs that occur when we work with external parties. We have to find them, come to an agreement, and make sure we get what we expected. None of this is free. But if I hire someone and we already have an agreement, I can avoid many of these costs. Therefore, up to a point, we should hire people instead of contracting out for services.

Of course, if we keep hiring people, that has its own costs. We have more overhead, and the issues with decision capacity versus demand become overwhelming. These factors push us towards hiring out where we can.

Coase observed that the size of the firm should strike an optimal balance between these external and internal costs. When external costs rose relative to internal costs, firms got bigger; when external costs fell relative to internal costs, firms got smaller.
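Coase’s margin can be reduced to a toy comparison. The function and any cost numbers fed to it are purely illustrative:

```python
def should_internalize(external_cost: float, internal_cost: float) -> bool:
    """Coase's margin, stripped to a toy: bring an activity inside the
    firm/DAO only while doing so is cheaper than transacting for it
    on the open market. Both arguments are per-activity estimates.
    """
    return internal_cost < external_cost

# As external transaction costs fall (cheaper search, agreement, and
# verification), more activities fail this test and the organization
# shrinks; as they rise, more activities pass it and it grows.
```

So `should_internalize(external_cost=10, internal_cost=6)` says hire, while `should_internalize(external_cost=5, internal_cost=6)` says contract out.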

So how does this apply to DAOs? On the external side, the notion of web3 and composability means that external costs are much lower than in traditional markets. Compare an AMM to calling a stock broker, for instance.

On the internal side, the use of smart contracts, voting mechanisms, and the like reduce those costs as well. So it looks like the question of “Will DAOs be bigger or smaller than traditional orgs” deserves a consultant’s favorite answer, “it depends”.

Having said that, if we circle back to the decision making approach suggested by Landry, it’s clear that for most DAOs, smaller is likely to be better. Maintaining consensus is fiercely difficult as the group gets bigger, because the number of relationships grows quickly. A 3-person DAO only has to worry about 3 relationships, 8 people have 28 relationships, and 17 people in the DAO would put us at 136!
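The arithmetic behind those numbers is just “n choose 2”:

```python
def pairwise_relationships(n: int) -> int:
    """Distinct member-to-member relationships in a group of n people.

    This is n choose 2 = n * (n - 1) / 2 -- the combinatorial load that
    makes maintaining consensus harder as a DAO grows.
    """
    return n * (n - 1) // 2
```

Quadratic growth in a nutshell: doubling the membership roughly quadruples the relationship count.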

To maintain consensus, we’ll either have to invest a lot more time or reduce the scope of consensus to something less intensive. Both of these cause issues. We only have so much time to devote to the DAO, and the narrower the consensus, the larger the potential for conflict when we try to make decisions or take action.

Orca Protocol pods that support the adventure-vision/role/mission/goal structure, Landry-style decision management, and Promise Theory coordination provide the key elements for a high degree of cohesion in each pod around a compelling view of the future, the ability to act quickly to make progress on that future, and the ability to coordinate with a wide range of other players seeking compatible objectives.

There’s more needed to make this truly work, but that is a good start to a “DAO governance system” that can leverage the potential for DAOs to outcompete traditional organizations. That’s probably enough for this thread for now. I’d appreciate the thoughts of others with more experience in DAOs, as well as questions, suggestions for improvement, and anything I’ve neglected but shouldn’t have.


I finally got around to reading, “On the Nature of Human Assembly.”

WOW! A phenomenal read, and now I get where some of your insights come from. The abstraction and complexity, although not really high, may be more than many can fully take in and appreciate. However, for anyone wanting to step into leadership consciously, I really think it should be required reading. It is short enough that there is no excuse not to, and practical enough that it would be foolish not to.

Whether someone is formally a leader or just wants to help a group function more smoothly, its basic ideas are simple enough to take in and apply informally to help things work better. I really appreciate the share.

Thank You!

P.S. I will probably buy The Effective Choice soon. I like Landry’s work that much.
