...by Daniel Szego
"On a long enough timeline we will all become Satoshi Nakamoto."
- Daniel Szego

Sunday, December 30, 2018

Solidity Tips and Tricks - struct at local function hack


Solidity has a lot of surprising and, frankly, shitty characteristics. One of them is that a struct, just like an array, can live both in memory and in storage. The problem is that if you declare a struct variable in a function body without explicitly marking it as storage or memory, it is treated as storage by default. Such an uninitialized storage pointer points at storage slot 0, so writing to it overwrites the contract's own state variables. Consider the following example:

contract StructHack {
    uint public myNum = 0;
    
    struct TestStruct {
        uint structNum;
    }
    
    function hackStruct() public {
        // "test" is an uninitialized storage pointer:
        // it points at storage slot 0, which is occupied by myNum
        TestStruct test;
        test.structNum = 22;
    }
}

Surprisingly, if you deploy the contract and call hackStruct, the myNum value will be set to 22. The fix is to declare the variable explicitly as TestStruct memory test; newer compiler versions (0.5.0 and up) refuse to compile the ambiguous declaration for exactly this reason.  

Wednesday, December 26, 2018

Solidity Tips and Tricks - transferring ether at fallback logic


There are basically three ways of sending ether from a Solidity contract:

- contractAddress.call.value(amount)() is the least secure of the three. It forwards all remaining gas to the callee, which opens the door for reentrancy and other attacks, so it is usually not recommended for security reasons. However, if you transfer ether to a contract that has complicated fallback logic, this is actually the only possibility. If you want to fine-tune the amount of gas forwarded to the target logic, it is again the only possibility. 

- contractAddress.send(amount) sends the value to the address in a way that only a 2300 gas stipend is forwarded to a possible fallback function. The call returns true or false depending on the result of the logic, without throwing, so the return value must be checked. It can not really be used with complicated fallback logic. 

- contractAddress.transfer(amount) also forwards only the 2300 gas stipend to a possible fallback function, but it throws an exception if something goes wrong, so it is regarded as the more secure way of transferring value. It can not be used with complicated fallback logic either. 

Sunday, December 23, 2018

Architecting Blockchain and archiving

Realizing an archiving solution with the help of blockchain involves many considerations. First of all, blockchain is not very efficient at storing large amounts of data. For this reason, we usually use a mixed architecture, namely a centralized or decentralized storage for storing the documents and a blockchain platform to store the integrity data of the document versions.
The architecture allows many different variations and combinations:
- Blockchain: can be public or a consortium one. It might work with many different consensus algorithms, providing different kinds and strengths of cryptoeconomical guarantees.
- Storage: can be totally centralized, like a file storage or a cloud storage. It can be decentralized as well, realized for example by IPFS, Swarm or BitTorrent.

Integrity of a document can be ensured by hashing the document data together with a timestamp and some metadata, and writing the result into the blockchain. This saves the integrity information into the chain and proves that the document existed at that point. In real implementations, further care must be taken, since a simple hash value might be vulnerable to a dictionary or rainbow table attack. For this reason, the hash input should be extended with a random salt, or optionally the document might be encrypted first so that only the encrypted version is hashed into the chain.
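The salted integrity record described above can be sketched in Python. This is a minimal illustration, assuming SHA-256 as the hash function and a 16-byte random salt that stays off-chain with the document; the function names are made up for the example.

```python
import hashlib
import os
import time

def make_integrity_record(document: bytes, metadata: bytes):
    """Hash document + timestamp + metadata with a random salt.

    Only the returned record (digest and timestamp) would be written
    to the chain; the salt stays off-chain with the document, which is
    what defeats dictionary and rainbow-table attacks.
    """
    salt = os.urandom(16)
    timestamp = int(time.time())
    digest = hashlib.sha256(
        salt + timestamp.to_bytes(8, "big") + metadata + document
    ).hexdigest()
    record = {"timestamp": timestamp, "digest": digest}
    return record, salt

def verify_integrity(document: bytes, metadata: bytes, record, salt: bytes) -> bool:
    """Recompute the digest from the document, salt and metadata."""
    digest = hashlib.sha256(
        salt + record["timestamp"].to_bytes(8, "big") + metadata + document
    ).hexdigest()
    return digest == record["digest"]
```

Anyone holding the document, the salt and the metadata can later re-prove integrity against the on-chain record, while an observer who only sees the chain learns nothing about the document's content.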

A further architectural possibility is not to save even the hash value into the chain. In this scenario the blockchain is only used to track a certain number of trusted validators, and a document is regarded as valid if a majority of the tracked validators sign the document together with some metadata. In this architecture there is no information about the existence of the document in the chain, but if the document exists, we can prove whether it is valid. 


Last but not least, it is worth considering how the archiving logic itself works. The archiving logic might be somewhat more complicated, for example having different rules for archiving different kinds of documents. In such a scenario we should also evaluate whether the logic itself should run centralized or decentralized, for example with the help of a Byzantine fault tolerant system. 
  

Saturday, December 22, 2018

Designing optimal software frameworks


Programming languages and software platforms represent an interface technology between the Turing machines of computers and the neocortex of the human brain. This means they should be designed as much for computers as for humans. As an example, good software frameworks should be presented to humans as conveyor chains: an explicit number of steps that people can parametrize in a logical order, with only an explicit number of possible choices at each step. This is not because that is how computers best handle things; it is because that is how humans handle things. The design can be further fine-tuned by considering further limitations of human thinking: given the 7±2 limitation of short-term memory, an optimal conveyor chain might contain that many steps, and each step might have 7±2 input elements as well. Considering the fact that the neocortex is largely hierarchical, the chain can be built up in a hierarchical way, containing sub-conveyor chains in each step, again up to 7±2 of them. The whole structure might reflect elements not only of the neocortex but of human collaboration and social interactions as well. As an example, different teams with different competence areas might work on different steps of the software, and the collaboration of the teams might be directly supported by the software framework itself. 

Wednesday, December 19, 2018

Notes on the design of programming languages


A programming language is actually an interface technology. It provides an interface, a communication channel, between two fundamentally different technologies: one is basically a Turing machine or some kind of von Neumann architecture, and the other is some kind of hierarchical pattern recognition system built on deep neuroscience mechanisms, in other words, the brain. A good programming language is designed for both environments, not just for the hardware environment but for the neocortex as well. 


Tuesday, December 18, 2018

Notes on zero knowledge proofs



Can you prove your knowledge of zero knowledge proofs without actually explaining or revealing any details of zero knowledge proofs? 

That would be a zero knowledge meta proof :)

Monday, December 17, 2018

Notes on decentralized business logic platforms


The disruptive technological ideas behind blockchain applications give us the possibility to design better middleware frameworks for business logic as well. Such an architecture might have the following properties:
- Elements of the business logic are separated into transactions, the atomic processing units.
- Transactions are signed by end-users, providing authentication and possibly privacy in the whole system.
- Processed transactions are signed by the processing units as well, providing a defense mechanism against tampering.
- Processing units can be configured in different combinations, like on the same computer or on different machines.
- Processing units can be configured with different scaling strategies, like scaling for performance or scaling for security, for example with different Byzantine fault tolerant algorithms.
- A service level agreement for each service should be defined and measured as well.
- Processing by a processing unit might be guaranteed by a security deposit that is automatically paid out if the service level agreement is not met.
- Special consideration has to be taken for processing units performing serialization, like writing something into a database or ledger. 

Notes on thinking linearly about an exponential technology

When dealing with exponential technologies, we usually do not have technological problems, but rather human problems:
Linear thinking bias: the human brain struggles to understand nonlinear relationships, which are most often how technological revolutions behave. Short-term developments are generally overestimated, while long-term developments are underestimated.

Sunday, December 16, 2018

Secure multiparty protocol on the blockchain


Implementing a secure multiparty protocol on top of a blockchain requires some special considerations. Examples of such protocols arise when semi-trusted actors want to cooperate with the help of a consortium distributed ledger solution: sharing salary data on the blockchain in a way that only the average salary becomes available, or aggregating GHG emission data on a consortium distributed ledger in a way that only the sum is revealed, not the emission data of the individual companies. 

Integrating blockchain with secure multiparty protocols raises two major issues:
- Visibility of the data: by default all data on the blockchain is visible to all participants, which is not optimal for a secure multiparty protocol. As a consequence, either an encryption scheme must be applied, or some of the data and communication should happen off-chain. 
- Trust model: classical secure multiparty protocols assume that the actors are trusted. In the context of distributed ledger solutions, the trust model is weaker, assuming for example Byzantine faults as well. 

A secure multiparty sum might be implemented on the blockchain with the following steps:
1. Each participant {1..k} generates a private and public key pair off-chain.
2. Each participant publishes the public key to the chain.
3. Each participant has a value Vi that should be summed with the help of the secure multiparty protocol.
4. Each participant randomly splits the value Vi into k pieces {v1, v2, ... vk}, one for each node.
5. The values are encrypted with the public keys of the participants, in a way that the first value is encrypted with the public key of the first node, the second value with the public key of the second node and so on, forming {E(v1), E(v2), ... E(vk)} encrypted values for each node.  
6. All of the data is published to the blockchain, which practically acts as a trusted communication channel.
7. Each node selects the data from the blockchain that is encrypted with its public key and decrypts it with its private key. At the end each node knows k pieces of decrypted data, one coming from each participant. 
8. Each node sums the values it received, producing a partial sum; the sum of all partial sums equals the sum of the original values, while no node learns another node's input. 
9. As an optional step, the partial sums might be published to the blockchain as well. We can build in some kind of Byzantine fault tolerance here, for example publishing the sums with a blind voting scheme and choosing the total that most of the participants agree on (supposing that most of the participants are honest).
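The share-splitting steps above can be simulated in a few lines of Python. This is a sketch under simplifying assumptions: the per-share encryption (steps 5-7) and the blockchain transport are omitted, and the helper names `split_value` and `secure_sum` are made up for the illustration.

```python
import random

def split_value(value: int, k: int) -> list[int]:
    """Step 4: split a value into k random additive shares that sum to value."""
    shares = [random.randint(-10**6, 10**6) for _ in range(k - 1)]
    shares.append(value - sum(shares))
    return shares

def secure_sum(values: list[int]) -> int:
    """Simulate the protocol for k participants.

    Node i receives the i-th share from every participant (steps 5-7,
    with encryption omitted), sums them into a partial sum (step 8),
    and the published partial sums add up to the total (step 9).
    """
    k = len(values)
    shares = [split_value(v, k) for v in values]
    received = [[shares[j][i] for j in range(k)] for i in range(k)]
    partial_sums = [sum(r) for r in received]
    return sum(partial_sums)
```

Note that a single share looks like random noise, so a node learns nothing about another participant's input from the one share it receives.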
  

Saturday, December 15, 2018

Notes on the skillset of blockchain architect



It is a common misconception that any standard software architect can become a blockchain architect with the help of a couple of weeks of intensive education. This is actually far from reality. Although designing a decentralized system on the blockchain reuses some components and ideas from the world of classical software architecture, blockchain system engineering relies much more strongly on the skills to design and implement complex cryptoeconomical systems. Among others, the following fields must be covered:
- Designing for a trust model.
- Scaling the architecture to the required trust model by fine-tuning the consensus or implementing off-chain scaling.
- Designing the economics of a one- or multi-token architecture, like stable usage tokens. 
- Designing crypto and privacy models, for example increasing privacy with zero knowledge proofs or secure multiparty protocols.

To sum up, only about one third of a blockchain architect's competence overlaps with that of a software architect. The rest should be cryptography, pure economics and, of course, a lot of blockchain specific knowledge.  

Friday, December 14, 2018

On the sales strategy of consortium distributed ledger technologies



From a strategic sales point of view, selling a consortium blockchain solution is different from classical enterprise sales. The major difference is that classical enterprise sales targets enterprise companies to sell customized solutions. Blockchain is not very efficient if it is implemented inside one company; it is usually an overkill there. It is more pertinent in a multi-actor environment, as in the consortium use-cases spanning several companies. Enterprise companies, however, rarely deal with other companies in a community way; they usually separate themselves from the world with walled bastions and communicate with suppliers or customers via well defined interfaces. Therefore, blockchain consortium sales should target companies that act as intermediaries between several enterprises in a certain segment, for example providing a service, consulting or legal activities for the whole segment. Such mediator companies or foundations have the best use-cases and contacts for the given segment. The sales strategy of consortium blockchain platforms should target such mediator companies and not directly the enterprises.   

Thursday, December 13, 2018

Distributed ledger and trust model


When collecting requirements for distributed systems, one of the most important requirements of the application is the trust model. First, the general trust model must be exactly specified: 

- Untrusted model: in an untrusted model, participants do not know each other and do not trust each other. Nevertheless, the system has to guarantee that the participants can cooperate and exchange value. In an untrusted model almost the only logical choice is the public blockchain model, possibly with high security. 

- Semi-trusted model: in a semi-trusted model, the participants might know each other and might trust each other up to a certain level, but they do not fully trust each other. In such a model, a consortium blockchain might be a good solution. 

- Fully trusted model: in a fully trusted model, the participants know each other and trust each other. In such a model, a blockchain solution is pretty much an overkill. 

The exact model can be further fine-tuned by considering a more complicated architecture, including storage, computation, resource intensive computation, user interface, external oracles and communication channels. For each part we can define how much we trust the given medium, or in other words, how Byzantine fault tolerant we want that part to be.   




Wednesday, December 12, 2018

Bitcoin blockheader and on-chain governance information


The Bitcoin header contains many pieces of information. Most of them are responsible for maintaining the consistency of the Bitcoin blockchain. However, there is one that is a little bit exceptional, and that is the difficulty target. The difficulty target is actually related to on-chain governance rather than strictly to the consistency of the chain. The model can actually be extended so that there is more than one piece of on-chain governance information in the block header apart from the difficulty. As a general extension there can be a specific data area reserved for on-chain governance information, with special rules for how it can be changed, secured with the help of a Merkle tree whose root is written into the block header. 

Pseudorandomly choosing the next leader at delegated proof of stake


Cryptographically secure random number generation is one of the biggest challenges for every blockchain protocol. It is extremely important in Nakamoto consensus and similar algorithms, because the next leader should be chosen with a cryptographically secure pseudorandom generator; otherwise a denial of service attack can easily be carried out against the upcoming leader. In a delegated proof of stake system, however, such an algorithm can be realized easily, and supposing that at least one node acts honestly and produces a real pseudorandom number, the result should be pseudorandom as well. The following algorithm sketch produces the required result:
- The actual leader node creates a private-public key pair and sends the delegate nodes a request to produce a random number. This request also contains the public key related to the request. 
- The delegate nodes individually create a pseudorandom number with some internal or hardware algorithm. These random numbers are encrypted with the public key of the request and sent back to the leader in encrypted form. During this phase the random information is encrypted, so practically nobody can see or manipulate the exact random numbers. 
- The leader decrypts the random numbers with the help of the private key and combines them into a final random number, for example with one of the following algorithms: 

R1 XOR R2 XOR ... XOR Rn

or

sha256(... sha256(sha256(R1) XOR R2) ... XOR Rn)

At the end both the final random number and the private key are revealed on the blockchain, so everybody can verify that the choice algorithm was correct. 

If the random numbers are generated independently from each other and at least one of them is truly random, the result will probably be truly random as well. If the randomness of the numbers generated by the individual nodes can be checked, there can be a cryptoeconomical incentive mechanism as well to reward or slash certain behaviors. 
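The two combination formulas above can be written out in Python. A minimal sketch: for the hash-chain variant the per-node values Ri are assumed to be 32-byte strings so they can be XOR-ed with a SHA-256 digest, and the function names are invented for the example.

```python
import hashlib

def combine_xor(randoms: list[int]) -> int:
    """R1 XOR R2 XOR ... XOR Rn"""
    result = 0
    for r in randoms:
        result ^= r
    return result

def combine_hash_chain(randoms: list[bytes]) -> bytes:
    """sha256(... sha256(sha256(R1) XOR R2) ... XOR Rn),
    assuming each Ri is 32 bytes long."""
    acc = hashlib.sha256(randoms[0]).digest()
    for r in randoms[1:]:
        xored = bytes(a ^ b for a, b in zip(acc, r))
        acc = hashlib.sha256(xored).digest()
    return acc
```

Either way, changing a single contributor's value changes the combined output, so one honest contributor is enough to make the result unpredictable.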

There is one denial of service attack against the scheme though. If the leader does not reveal the information at the end of the round, i.e. the private key or the final random number, a new leader has to be chosen, but without the given pseudorandom number. This case requires further investigation. 


Tuesday, December 11, 2018

Blockchain actually doesn't need blocks


Blocks in a blockchain protocol have several functions. Their major function is clearly to provide an order of the transactions that is the same for all of the distributed, decentralized nodes, in order to avoid double spending. For example, if there are two transactions that are double spends, every node can consider the first one valid and the second one invalid. Just for avoiding double spending, however, we do not need a full order. A partial order of transactions is enough, in the sense that if two transactions want to consume the same account or transaction output, there must be a common order between these two transactions. So, instead of blocks and full orders, it clearly makes sense to investigate more efficient storage structures for transaction processing. 

Deletable blockchain with delegated proof of stake


The previous idea of a deletable blockchain platform can be realized not only in a quorum or consortium pattern, but with the help of a delegated proof of stake mechanism as well. In delegated proof of stake, like in EOS, some delegates are elected by the community of nodes, either by an explicit voting mechanism or indirectly by voting with cryptocurrency stakes. To produce a valid block, a majority of the delegates have to agree on the block and sign it. Similarly, external data that is not part of the chain can be validated and signed by the delegates. Certainly, there should be a cryptographic or cryptoeconomical mechanism that guarantees that the same piece of information is not signed twice in a round. The signature should be created with one-time keys of the delegate nodes, so that we can be sure that the given information was signed in the given round. As the information is not stored in the blockchain itself, the architecture does not guarantee that the given information exists; if the information exists, however, we can fully verify whether it was signed by the blockchain itself. 

The external information to be signed can be a block of an account-state based blockchain system, where the validity of the blocks and transactions must certainly be checked by the delegates during signing. There must be a mechanism making sure that the last block, or the last couple of off-chain blocks, are stored; older blocks, however, need not necessarily be stored. They can be deleted without causing consistency problems in the chain.  

Monday, December 10, 2018

Vending machines and replicated state machines



For distributed state machines, the usual example used in the crypto community is the vending machine, which is also the classic example for state machines in computer science. It is, however, a bad example for a distributed state machine. The main reason is that it is a physical object, which is pretty difficult to imagine in a real decentralized implementation. It would mean something like getting your coke only if all of the vending machines around the world produce the same output, which is actually pretty weird. Instead, a better decentralized example should be used, preferably one not involving any kind of physical object. As a simple example, a conditional money transfer can be executed if some party signs the transaction and these signatures are validated by the majority of the nodes around the world. 

Transaction ordering and double spending


Distributed systems usually suffer from the so-called double spending problem: there should not be two conflicting transactions applied to the same account in the same round. There might be several strategies depending on the exact ledger storage implementation, but from a consensus point of view usually a superset of double spending prevention is used: a common ordering of the transactions such that each node sees exactly the same order. This implies that there is no double spending, because if there are two transactions applied to the same account or unspent transaction output, the first one can be chosen as valid and the second one as a double spend, and the same order is used by every node in the consensus. The other direction does not hold, however; there might be consensus algorithms that avoid double spending without requiring a full ordering of the transactions. Actually, from a theoretical point of view only a partial order is required, creating an order among the transactions that want to change the same account.

Certainly, from a theoretical basis, the situation can be a little more complicated if we consider general smart contracts instead of pure cryptocurrency transfers. A smart contract transaction might consume information from one account and modify another one. What matters here is consistency rather than double spending: a transaction should not depend on the value of an account variable while another transaction tries to write the same variable.
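The conflict rule above can be sketched as a small predicate. This assumes, purely for illustration, that each transaction declares the accounts it reads and writes; two transactions then need a common order exactly when they have a read-write or write-write conflict.

```python
def must_be_ordered(tx_a: dict, tx_b: dict) -> bool:
    """True if the two transactions conflict and therefore need a
    common order; disjoint transactions may be processed in parallel."""
    a_reads, a_writes = set(tx_a["reads"]), set(tx_a["writes"])
    b_reads, b_writes = set(tx_b["reads"]), set(tx_b["writes"])
    return bool(
        a_writes & b_writes    # write-write conflict (double spend)
        or a_writes & b_reads  # tx_b depends on what tx_a writes
        or b_writes & a_reads  # tx_a depends on what tx_b writes
    )
```

A consensus only needs to agree on the relative order of transaction pairs for which this predicate is true, which is exactly the partial order the post argues for.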

Sunday, December 9, 2018

Denial of service attack against final Byzantine fault tolerant systems


Most classical Byzantine fault tolerant systems prefer finality of the consensus, based on a voting algorithm that might not scale as well as stochastic finality. In other words, these systems prefer consistency over availability in case of network separation. It means, however, that the best denial of service attack against such systems is network separation, like separating the nodes 60 - 40 %. In such a case, the consensus mechanism simply stops.   

The difference between delegated proof of stake and proof of stake with proxy staking


Delegated proof of stake and proof of stake with proxy staking represent two similar approaches, but they have differences as well. In both approaches, one financial motivation is that accounts holding cryptocurrency but not wanting to take part directly in the consensus algorithm can take part indirectly, by locking the cryptocurrency at a validating node and gaining revenue for that. In this sense, the approach is pretty similar to financially participating in a company and getting a financial return for it. In delegated proof of stake, the delegates are chosen directly by the accounts locking money in the system. The motivation here is usually similar to EOS or Tendermint: having a finite number of validators to finalize the consensus. This finite set of validators is chosen directly by the stakeholders, for example the best-financed nodes always form the actual set of validators. In proof of stake with proxy staking, however, the explicit validator candidates are chosen by a different algorithm, possibly even by a central authority. The stakeholders can still choose a node to lock down some cryptocurrency at and gain revenue on it, but the exact validator set will not be reorganized based on the locked-in cryptocurrency. It can only be modified by the separate algorithm.  

Can a final consensus algorithm fork ?


That is a difficult question, but simply put, there are systems that provide probabilistic finality and prefer availability over consistency at network partition. These systems can fork. On the other hand, there are Byzantine fault tolerant (BFT) systems that prefer consistency over availability at network partition, and they can not really fork. Certainly, BFT systems do not scale as well as systems with probabilistic or economic finality. 

Saturday, December 8, 2018

Notes on off-chain information and blocks



From a practical point of view, if a piece of information can be found in the blockchain, it gives two kinds of guarantee. On the one hand, the information exists: when verifying the whole chain, the information has to exist and has to be downloaded, otherwise the consistency can not be verified. On the other hand, the information is valid: the whole consistency of the blockchain guarantees that the piece of information is valid. However, there might be a solution that guarantees information validity without guaranteeing availability. If a piece of information is signed by the blockchain itself but not stored directly in the chain, it gives a guarantee, an off-chain proof, that the information was valid, without storing that piece of information directly in the chain. Certainly, the situation becomes a little more complicated if the blockchain can actually fork.    

#InformationAvailability #InformationValidity

Blueprint of a deletable blockchain platform

Based on our previous blog post on creating off-chain proofs, there might be the possibility to create a blockchain algorithm where you can actually delete. For the architecture we define the following considerations:
- The nodes taking part in the consensus should have an identity, a private and public key pair where the private key is kept secret.
- The consensus must be implicitly or explicitly based on a quorum, meaning that two thirds of the nodes should sign a given piece of information as valid.
- The consensus can be an indirect quorum, like in Tendermint, where there are several rounds: one for choosing a leader node and proposing a block, and a second one for validating the given block by a quorum. 
- On-chain governance must contain a transaction that adds new nodes to the system. After successfully adding a node, the public key of the node (i.e. the identity of the node) must be added to the blockchain.
- The blockchain in a classical sense contains only governance information, i.e. the public keys of the different validator nodes.
- There must be a mechanism and a special transaction for deleting validator nodes from the blockchain, either with an explicit transaction or with an automatic mechanism, for example if a validator does not sign a block for some amount of time, it is deleted from the active validators. 
- The architecture prefers consistency over availability at network separation, so it can not be forked and can not be long range attacked, unless the private keys of more than one third of the nodes are leaked. 
- Efficient network separation is a working denial of service attack against the architecture. 
- Non-governance information, like smart contracts or cryptocurrency transactions, must be built up in an account-state based fashion, meaning that for getting a valid state of the system it is enough to read the last valid state. 
- Non-governance information is not stored in the blockchain but in a separate block that is stored off-chain and validated by an off-chain proof by the blockchain.

- Validator nodes have a two-level key mechanism: the master key is the main key of the node, but in each round the node creates a new private-public key pair whose public key is published into the blockchain. This round key is used to sign off-chain blocks. 
- In each round the validity of the off-chain information is checked by all of the validator nodes and signed with the keys that are specific to that round. 
- An external observer can check whether a given block is valid by checking the public keys of the validator nodes and checking if they really signed the given block. As the public keys are individual for each round, both the exact round and whether a given block is the last one can easily be identified. 
- For signing an external block, different strategies can be defined, like checking only the last block or validating the last couple of blocks. 
- Nothing guarantees that the external blocks are stored. On the one hand this is an advantage, because old information can be deleted this way; on the other hand it is a drawback, because nothing guarantees that the old blocks are actually stored. 
- There should be a mechanism that guarantees that the last couple of blocks are stored somewhere. This mechanism must be crash fault tolerant, but not necessarily Byzantine fault tolerant, because based on the block proofs it can be identified whether a block is valid.    
- The storage mechanism, and probably the whole infrastructure as well, must be supported by a cryptoeconomical incentive mechanism. 
- The algorithm should have an economical incentive mechanism that slashes or discourages nodes signing more than one block in a round.

Friday, December 7, 2018

Creating off-chain proofs in blockchain protocols



Considering a blockchain protocol, it can sometimes be useful to have a proof that a certain transaction or state was applied to the blockchain, without actually including this data in the blockchain. Such a structure can be realized in distributed ledger solutions where the identities of the nodes are well-known. One algorithm might be that the miner or validator node creates a private-public key pair as identity, where the public key is written into the blockchain but the private key is kept secret. If a piece of data is signed with the private key, it can be made sure that this piece of information was in fact validated by the protocol. The algorithm might be fine-tuned and further improved so that not only the leader node signs the piece of data, but many others as well, like in a quorum consensus or a two-phase Nakamoto consensus such as Tendermint. The piece of information can be tagged with a timestamp as well, in the sense of identifying the exact block where the information was signed. This can be realized by the validator nodes generating a new identity, a new private-public key pair, in each round, used only for that round. 
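The per-round key idea can be sketched as follows. This is a toy illustration in Python: HMAC-SHA256 stands in for a real public-key signature scheme (in practice each round's public key would be published on-chain so that anyone could verify the signature), and the class and method names are invented for the example.

```python
import hashlib
import hmac
import os

class Validator:
    """Toy validator that derives a fresh signing key for each round,
    so a signature also proves *when* a piece of data was validated."""

    def __init__(self):
        self.round_keys = {}

    def new_round(self, round_no: int) -> None:
        # fresh key per round; in a real system the matching public
        # key would be written into the blockchain here
        self.round_keys[round_no] = os.urandom(32)

    def sign(self, round_no: int, data: bytes) -> bytes:
        return hmac.new(self.round_keys[round_no], data, hashlib.sha256).digest()

    def verify(self, round_no: int, data: bytes, signature: bytes) -> bool:
        expected = self.sign(round_no, data)
        return hmac.compare_digest(expected, signature)
```

Because each round key is unique, a valid signature pins the signed data to one specific round, which is exactly the timestamping property described above.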

Notes on Ethereum WASM



Ethereum WASM is on the horizon, providing a more stable, flexible and faster programming environment. However, introducing the technology might bring a couple of unexpected results. The problem is that with the help of WASM, many other languages will become available for Ethereum programming apart from Solidity. The real difficulty in blockchain programming, however, is not knowledge of the programming language but the mindset. As Ethereum smart contracts store a huge amount of money on a public blockchain, and the deployed code is pretty much immutable, developing such code requires special considerations. Instead of Agile or DevOps methodologies, defensive programming, formal verification and correct-by-construction tools should be used. For those who have never taken part in mission-critical system development, these methodologies are new. The result will be that many software developers using WASM-compatible languages will start smart contract development, which will again produce a huge amount of buggy software and a lot of hacks on the public chain. That in turn will keep the general perception of the chain's security pretty low.

Thursday, December 6, 2018

SLA with cryptoeconomical service guarantee


Service level agreements are an important part of every corporate infrastructure, and of every company providing infrastructure or software as a service. Service level agreements provide at least a theoretical guarantee that if the service is not provided in the promised way, something can be done. However, actually doing something when the SLA is not upheld is difficult: starting a legal process might take years and cost a lot of money, while finding a new provider and migrating existing services to its infrastructure might take months and can be pretty costly as well. 

A good solution can be to introduce cryptoeconomic guarantees as insurance for a service, similarly to what the Ethereum initiative Swap, Swear and Swindle does for off-chain computation and services. SLA parameters have to be formalized in an explicit and measurable way, and if the service is not provided, the consumers of the service should be able to prove technically, in a reliable way, that the SLA was not upheld. The service provider should back the service with an insurance of a certain amount of cryptocurrency, held online in a smart contract. Customers might then choose a provider not just based on its services but on how strongly it is cryptoeconomically secured, in other words how much cryptocurrency is locked as an insurance or security deposit in case there are problems with the service. If a customer experiences an SLA violation, a cryptographic proof of it can be generated; this proof can be validated by decentralized validators, and if the customer's claim is valid, a certain amount of cryptocurrency can be transferred, up to the security deposit of the service provider. The whole process might run fully or almost fully automatically, practically in minutes.
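The deposit-and-claim flow can be modelled as a toy escrow. This is a minimal sketch, not the Swap, Swear and Swindle design: the "decentralized validators" are reduced to a plain quorum check, and all names and numbers are made up.

```python
# Toy model of the SLA insurance idea: the provider locks a deposit,
# and a validated breach proof pays the customer out of it.
class SLAEscrow:
    def __init__(self, provider, deposit, validators):
        self.provider = provider
        self.deposit = deposit             # locked cryptocurrency
        self.validators = set(validators)

    def claim(self, customer, amount, breach_votes):
        """Pay `amount` to `customer` if >2/3 of validators confirm the breach."""
        approvals = self.validators & set(breach_votes)
        if 3 * len(approvals) <= 2 * len(self.validators):
            return 0                        # claim rejected by the quorum
        payout = min(amount, self.deposit)  # capped by the security deposit
        self.deposit -= payout
        return payout

escrow = SLAEscrow("provider-x", deposit=100, validators=["v1", "v2", "v3"])
print(escrow.claim("alice", 40, breach_votes=["v1", "v2", "v3"]))  # 40
print(escrow.deposit)                                              # 60
```

The key property is the cap: the guarantee is only as strong as the locked deposit, which is exactly why customers would compare providers by deposit size.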

Wednesday, December 5, 2018

Most limits of current technologies are actually limits of our neocortex


Different kinds of technologies are said to have limits, problems and drawbacks. However, in some or even most cases these are not limits of the technology itself but limits of our own neocortex. As an example, according to the social scaling idea from Nick Szabo, the emergence of institutional systems for trust-based services is a direct result of the fact that our neocortex can handle at most around 150 social connections. Probably, our love for hierarchical structures is likewise a direct result of our neocortex being organized in a hierarchical way.

Tuesday, December 4, 2018

Notes on efficient education and community building

Most education and community building strategies are based on two simple things:
1. Oxytocin - produced by collaboration, communication, teamwork ...
2. Dopamine - produced by achieving goals, getting positive feedback, reaching milestones or badges ...

Notes on community building


Taking part in community building, the most important question is whether you are helping to build a truly open community or rather a Digital Apartheid. 

Sunday, December 2, 2018

Creating a blockchain algorithm that can not fork


Creating a blockchain algorithm that cannot fork is actually not so complicated. What we need is a mechanism to make sure that we know all of the miners or validators in the system. This might be a special transaction that can be initiated by a node that wants to join. This special transaction is mined or validated in the standard way, and the node is added to the list of active miners or validators, which can be stored in the ledger as part of the on-chain governance. 

After that, the blockchain consensus algorithm has to have two rounds, plus a cleanup mechanism:

1. With the help of a Nakamoto consensus the next block can be found; however, this block is not yet applied to the state. At first it is only considered a proposed block. 

2. The proposed block must be signed by more than 66% of the active nodes to be considered valid and applied to the state. Certainly, such a signing and voting process is not something that can necessarily be realized efficiently on a global scale with many active nodes. 

3. Last but not least, a mechanism for removing voting peers can be built in, garbage collection style. If a node has not voted actively for a long time, it can be considered to have left the set of active miner or validator nodes. 

The algorithm guarantees operation without forks, as in case of a network partition there would not be enough votes for a given block. Certainly, there is no magic: in such a case the network would simply stop working. We simply prefer consistency over availability in case of a network partition, in terms of the CAP theorem.  
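The second round's finality rule can be sketched in a few lines. The validator registry and vote format below are illustrative, not from any concrete implementation.

```python
# A proposed block becomes final only when signed by more than 2/3
# of the known active validators (the post's ">66%" threshold).
def is_final(proposed_block, votes, active_validators):
    """votes: dict node_id -> block hash the node signed."""
    active = set(active_validators)
    in_favour = sum(1 for node, h in votes.items()
                    if node in active and h == proposed_block)
    return 3 * in_favour > 2 * len(active)   # strict > 2/3 threshold

validators = ["n1", "n2", "n3", "n4", "n5", "n6"]
votes = {"n1": "0xabc", "n2": "0xabc", "n3": "0xabc",
         "n4": "0xabc", "n5": "0xdef"}
print(is_final("0xabc", votes, validators))  # False: 4 of 6, 12 > 12 fails

votes["n6"] = "0xabc"
print(is_final("0xabc", votes, validators))  # True: 5 of 6, 15 > 12
```

Note that the threshold is counted against all registered validators, not only those who voted, which is exactly why a partition that silences more than a third of the nodes halts the chain instead of forking it.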

Saturday, December 1, 2018

Notes on the strengths of cryptoeconomic guarantees



Cryptoeconomic guarantees are actually weaker than pure cryptographic ones. A cryptoeconomic guarantee means that hacking the system is not profitable: more money will be lost than gained. As an example, proof of stake is based on such a cryptoeconomic guarantee, which states that if you try to hack the system and the attempt becomes well known, you lose all of your stake. The logical motivation is that most people, as rational actors, behave economically rationally, meaning that they tend to maximize their profit or benefit in some sense. However, the problem is that most of these economic guarantees exist only within the system itself, based on assets and resources that are defined by the system itself. So it is certain that if I try to hack a proof of stake system I will lose my stake, but perhaps my economically rational profit maximization is not based one hundred percent on the system itself. As an example, if the hacking attempt becomes well known, the trading value of the platform against, say, USD might fall, which means that I can make a profit from shorting, and I might well end up with a net profit even though I lose my stake. 
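The shorting argument is easy to make concrete with a back-of-the-envelope calculation. All the numbers below are made up for illustration; the point is only that the attacker's profit function extends beyond the assets the protocol can slash.

```python
# Even if the attacker's stake is fully slashed, a short position on an
# external market profits from the price drop the attack causes.
stake = 1_000            # tokens the attacker loses to slashing
price_before = 10.0      # USD per token before the attack
price_after = 4.0        # USD per token after the hack becomes known
short_size = 5_000       # tokens sold short on an external exchange

loss_from_slashing = stake * price_before                     # 10,000 USD
gain_from_short = short_size * (price_before - price_after)   # 30,000 USD

net = gain_from_short - loss_from_slashing
print(net)   # 20000.0: profitable despite losing the entire stake
```

The protocol can only slash what it controls; the short position lives entirely outside the system, which is precisely the post's point about in-system versus general economic rationality.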

On the contrary, cryptographic proofs use another scarce resource, which is usually computation. Simply put, most practical cryptographic algorithms assume that breaking the algorithm is computationally infeasible, meaning that it would take millions of years even with the most advanced, state-of-the-art computers. This is a kind of practically-impossible guarantee, which is much stronger than a merely not-profitable one. 
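The "millions of years" claim is easy to check with rough arithmetic. The machine counts below are deliberately optimistic assumptions for the attacker, not real benchmarks.

```python
# Brute-forcing a 128-bit key at 10^12 guesses per second per machine,
# with a billion machines running in parallel, still takes on the
# order of 10^10 years for a full keyspace sweep.
keyspace = 2 ** 128
guesses_per_second = 10 ** 12 * 10 ** 9   # 10^12/machine * 10^9 machines
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years:.2e}")   # 1.08e+10 years (expect half that on average)
```

For comparison, the universe is around 1.4e10 years old, which is what "computationally infeasible" means in practice.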

As a consequence, a cryptographic or computational guarantee is always stronger than a simple cryptoeconomic one, mostly because economic rationality can only be interpreted within the system, while in general economic rationality is something more complex. 

Perhaps it is important to note, though, that proof of work is actually an economic guarantee, not a cryptographic one, because the amount of computational power needed to break the consensus depends on the competition among the miners. In other words, it depends on the amount of money invested in mining equipment.